We were using a NetApp 2240 Filer, a spinning-disk system with a mix of SATA and SAS drives. We tried to put a production SQL database load on it, but the IOPS demands were far too high for it, so we ended up buying this AFF box. It solved all of the issues at the time. We haven't needed it for anything else.
All-Flash Storage Arrays IOPS Reviews
Showing reviews of the top ranking products in All-Flash Storage Arrays, containing the term IOPS
NetApp AFF (All Flash FAS): IOPS
Dell XtremIO: IOPS
Dell XtremIO provides great performance, with latency of less than one millisecond. Because of this, many telecom companies are working with this solution. They depend on XtremIO as their main storage source on their side.
Pure Storage FlashArray: IOPS
We use Pure Storage FlashArray because of the increased demand for high IOPS from some of our internal applications that were required to read and write in a faster way.
HPE Nimble Storage: IOPS
reviewer1479924 says in an HPE Nimble Storage review
Senior IT Officer at a financial services firm with 201-500 employees
The replay and IOPS work well. It's user-friendly, and we're satisfied with it.
reviewer1387293 says in an HPE Nimble Storage review
Technical Manager at a tech services company with 11-50 employees
A valuable feature for me is the architecture, which is pretty good. We have good throughput, a lot of IOPS, and low latency, which is also pretty good. We also like the verification features, because we get some good things from the method Nimble uses to deduplicate. If you have hybrid secondary storage, the solution allows you to do remote copy, data recovery, and business continuity. The integration with other solutions is also good; Nimble has a way of integrating with each solution, and it's very good.
HPE 3PAR StoreServ: IOPS
ITmanager10038 says in an HPE 3PAR StoreServ review
Senior IT Infrastructure & Data Center Operation Engineer at Ministry of Communications and Information Technology (MCIT), Egypt
In our organization, the storage workload is not known from day one, so I don't have the workload profile up front. The workload runs in my environment, and 3PAR is the best solution for this. If a new workload only arrived on Thursday, I don't apply adaptive optimization to it right away, because 3PAR first collects statistics across all the storage and handles the tiering itself. If I add a disk of a new type, 3PAR incorporates it into the tiering adaptively. When a VM does more writes to the storage, it can be moved to a higher tier, or even placed in a lower (C) tier. If a VM needs more IOPS, it moves to another tier, and after the weekend I schedule adaptive optimization to check whether that VM still needs its tier. This way all the storage is tiered: if the workload is big and needs more IOPS, VMs move from one tier to another. That is the main advantage.
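The adaptive-tiering behavior described above can be sketched roughly in Python. The tier names, thresholds, and function names are hypothetical illustrations for the general technique, not 3PAR's actual Adaptive Optimization API:

```python
# Hypothetical sketch of IOPS-driven tiering; thresholds are invented.
TIERS = ["ssd", "fc", "nl"]  # fastest to slowest media

def pick_tier(avg_iops, hot=5000, warm=500):
    """Choose a target tier from average IOPS over the sampling window."""
    if avg_iops >= hot:
        return "ssd"
    if avg_iops >= warm:
        return "fc"
    return "nl"

def rebalance(volumes):
    """volumes: {name: (current_tier, avg_iops)} -> list of planned moves."""
    moves = []
    for name, (tier, iops) in volumes.items():
        target = pick_tier(iops)
        if target != tier:
            moves.append((name, tier, target))
    return moves

# A busy volume on slow disks moves up; an idle one on SSD moves down.
print(rebalance({"vm-db": ("nl", 8000), "vm-web": ("ssd", 120)}))
# [('vm-db', 'nl', 'ssd'), ('vm-web', 'ssd', 'nl')]
```

Running such a pass on a schedule (e.g. after the weekend, as the reviewer describes) is what lets the system demote volumes whose demand has dropped.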
reviewer1473348 says in an HPE 3PAR StoreServ review
SAN Consultant at a tech services company with 201-500 employees
The stability is pretty decent. The only thing you have to worry about is not overtaxing the controllers. In other words, don't combine deduplication and compression with data replication while also heavily loading the controllers, to the point where your I/O performance starts to suffer. With IOPS and Truepoint, it's important that users do not over-utilize the environment; otherwise you have performance issues.
reviewer969309 says in an HPE 3PAR StoreServ review
Storage Manager at a financial services firm with 10,001+ employees
HPE 3PAR StoreServ has limited flexibility in building replication solutions, and there are limits to the number of IOPS the system can do. It's not bad, as it does its job. However, if the application needs a full toolbox, where you can build everything around periodic replication modes, synchronous or asynchronous, three-site or four-site, with supported cascading, that requires you to buy an IBM product.
It also takes a few hours to one day to upgrade the system, and sometimes it takes more time because some HPE 3PAR StoreServ 20000 Storage systems have an eight-node configuration. If you do an upgrade, you do it node by node, and every node can take more than an hour.
Hitachi Virtual Storage Platform F Series: IOPS
reviewer1302357 says in a Hitachi Virtual Storage Platform F Series review
Product Manager at a tech services company with 1-10 employees
I am not an end user. I present the solution to customers. I study a customer's infrastructure and suggest a product based on the customer's needs, such as latency or IOPS performance. I usually work with VSP F700 and F900 models.
The deduplication is useful for us because we don't have that much money for our lab infrastructure. Deduplication means we have more storage available. And the IOPS are really fast.
IBM FlashSystem: IOPS
The most valuable features in IBM FlashSystem are IOPS, performance, deduplication, and compression.
Dell Unity XT: IOPS
- One of the most useful features for us was the deduplication. It had been challenging for us to store certain types of data and to use patterns of storage to reduce storage size.
- The IOPS and the speed were also an important part of the solution.
- In addition, the Unity machine offers both block-level and NAS storage, and we used the block-level storage.
- We also use the site-to-site storage replication for the recovery site.
- Finally, the device was flexible and we could change the configuration to meet our needs.
Huawei OceanStor Dorado: IOPS
HPE Primera: IOPS
The most valuable feature is that it is an all-flash system, which means the data access speed is amazing, the latency is almost nothing, and it can deliver up to 16,000 IOPS.
reviewer1400358 says in an HPE Primera review
Senior Systems Security Specialist at a government with 51-200 employees
We previously used Dell EMC PowerMax in a five- to six-year-old setup; Primera is a two-year-old setup. With Dell EMC, disks crashed and needed to be replaced. In the two years we've used Primera, we haven't faced that kind of issue.
PowerMax has very powerful storage. The number of IOPS and the control stability are the things I liked very much, but it's more expensive than other solutions. They are top of the line, but in a higher price range.
reviewer1846275 says in an HPE Primera review
Service Manager at a tech services company with 10,001+ employees
I'm very satisfied with HPE Primera, so I can't think of an area for improvement, but if they could increase the value of the IOPS and the throughput, that would be good.
The IOPS and throughput could be better. That's the only drawback compared to other vendors.
The most important thing for us is the IOPS it produces, because an all-flash system is a requirement for us. Last year we used a 3PAR hybrid solution in our environment and needed more IOPS on our site. HPE Primera met this requirement for us entirely.
It seems sufficient for our environment, but if our requirement increases in two years, then we can plan. So there's no need to make any improvements right now.
Violin System 7000 Series: IOPS
This is an all-flash storage array. It is the best solution for large batch processing. There are some customer databases that have tens of millions of records in one large database. That's where VIOLIN performs the best.
VIOLIN stands out for millions of IOPS with ultra-low latency. VIOLIN can provide about two million IOPS in a 3U unit, and we have a latency of less than one millisecond.
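The two figures quoted here, roughly two million IOPS at sub-millisecond latency, can be related through Little's law (outstanding I/Os = IOPS × latency). A quick sketch, assuming 0.5 ms as the "less than one millisecond" latency:

```python
# Little's law: queue depth (concurrent I/Os) = IOPS x latency.
iops = 2_000_000        # ~2 million IOPS quoted for a 3U unit
latency_ms = 0.5        # assumed value for "less than one millisecond"

queue_depth = iops * latency_ms / 1000  # convert ms to seconds
print(queue_depth)      # 1000.0 concurrent I/Os needed to sustain that rate
```

In other words, hitting the advertised IOPS at that latency requires the hosts to keep about a thousand I/Os in flight at once, which is why such numbers are only reached under heavily parallel workloads.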
Dell PowerMax NVMe: IOPS
Reviewer593747 says in a Dell PowerMax NVMe review
Storage Team Manager at a government with 10,001+ employees
The biggest lesson I've learned using PowerMax is to trust it. For example, with the QoS, don't try and overthink this. It's engineered to take on diverse and disparate workloads. Put it in, watch it for a little bit, and if you don't absolutely need to turn on all the QoS, don't. Let it do its thing.
Don't be shocked by the price per GB. Look at your cost of transactions or IOPS. The days of looking at storage as so much per GB are over. It's how much workload you can pass through that storage device.
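The reviewer's shift from price per GB to price per IOPS can be made concrete with a toy comparison. All prices, capacities, and IOPS figures below are invented purely for illustration:

```python
# Illustrative only: compare arrays by $/GB vs $/IOPS; all figures invented.
def cost_metrics(price, gb, iops):
    """Return (dollars per GB, dollars per IOPS)."""
    return price / gb, price / iops

hybrid = cost_metrics(100_000, gb=500_000, iops=100_000)
all_flash = cost_metrics(300_000, gb=500_000, iops=1_000_000)

print(hybrid)     # (0.2, 1.0)  -> cheap per GB, expensive per IOPS
print(all_flash)  # (0.6, 0.3)  -> 3x the cost per GB, far cheaper per IOPS
```

On these made-up numbers the all-flash array looks three times worse per GB but roughly three times better per IOPS, which is exactly the evaluation the reviewer is recommending.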
Overall, PowerMax is ideal for storage for enterprise-level, mission-critical IT workloads. That is really its strength, as is its ability to handle disparate workloads. I wouldn't use anything else for these high-end, critical workloads.
At this time, scalability is not applicable. I understand it is very easy to scale up. You just add on the drive shelf, then connect it in. That is really it. Now, you have all these drives available to you.
It is being used every single minute of every single day. The throughput is about 525 megabytes per second, so it is actively being used at all times of day.
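IOPS and throughput are two views of the same traffic, related by the average I/O size. A quick conversion using the 525 MB/s figure above and an assumed 64 KiB average block size (the block size is an illustration, not from the review):

```python
# Convert throughput to IOPS via average block size (assumed 64 KiB).
throughput_mb_s = 525   # figure quoted in the review
block_kib = 64          # assumed average I/O size

iops = throughput_mb_s * 1024 / block_kib
print(iops)             # 8400.0 IOPS at a 64 KiB average I/O size
```

The same 525 MB/s would correspond to far more IOPS at small block sizes (e.g. 4 KiB database pages), which is why quoting either number alone can be misleading.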
As time goes on, the usage of it will increase. That is just the nature of it being our primary storage array.
We find the service level option to provision storage very valuable. The ability to define different service levels for storage groups helps us in prioritizing our workload at the infrastructure level.
We also find the compression technology of PowerMax very valuable. In some instances, depending on the kind of data that we have, we can attest to compression ratios of about 9:1, which is very valuable.
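A compression ratio like the 9:1 cited here translates directly into effective capacity; a minimal sketch (the 10 TB physical figure is an invented example):

```python
# Effective capacity from a compression ratio; physical size is illustrative.
physical_tb = 10
ratio = 9.0            # the 9:1 ratio cited in the review

effective_tb = physical_tb * ratio
print(effective_tb)    # 90.0 TB of logical data fits in 10 TB of physical media
```

As the reviewer notes, the achievable ratio depends heavily on the kind of data stored; already-compressed or encrypted data reduces far less.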
The NFS feature is also quite useful for us in our environment. We're able to deploy the NFS capabilities to resolve some of the use cases that we identify.
Its efficiency and performance have been remarkable, perhaps because we've not been able to hit the limits of what we have. If my memory serves me well, the PowerMax 2000 that we have can do about a million IOPS, and our use case at the moment doesn't stretch it that far. Performance has been much better compared to what we used to have. We see responses to application requests, especially database queries, in microseconds, as advertised. That even gave us a bit of a challenge, because the applications couldn't cope with the speed of the storage's responses, so it was new learning for the providers of the application. In terms of IOPS, we've not been able to fully reach the limits, but so far, so good; we are pretty comfortable with that. As we grow organically we will drive more performance, and in terms of compression and deduplication we have already received remarkable value.
In the last year, we haven't had any issues with the availability of the platform, the storage, or the extension of our data. The data-at-rest encryption feature is also there; even though we've not fully utilized it, it's comforting to know that capability is available for us to explore. We've not had any storage-level outage in terms of data not being accessible within the agreed service levels. So far, so good.
With the NVMe technology, performance in terms of IOPS has improved. Things are generally faster, although there are some bottlenecks with the integration of IBM servers.
The biggest way that PowerMax has improved the way our organization functions is through an increase in performance. The business of pharma is complex and the IOPS demand is huge. In the past, we used VMAX storage, and there was a big issue with the performance. Everybody complained about performance, servers, and storage, saying that they didn't have enough space. We tried many different solutions in an attempt to solve the performance issue.
For example, we tried reducing the data that was stored on disk, and we tried removing unused data. We turned to development and asked that some programs have fewer features. Finally, management made the decision to implement the PowerMax solution, and it solved the issue. As soon as we migrated from VMAX to PowerMax NVMe, the performance increased and everybody felt better.
The security is good. We enabled DSE for our encryption.
CloudIQ has made our lives better. It provides notifications, where you receive an email to let you know about your storage and your SAN. It is a powerful tool, although we have had to upgrade it a few times. Overall, it is a good monitoring tool that gives us a powerful and easy way to monitor our servers.
The SRDF site-to-site replication for the volumes is the most important feature for us. That enables us to do site recovery and replication for our VMware infrastructure.
Along with that, the NVMe response time is very good. We used to have a VMAX 20K but we have just upgraded, and moved two or three generations ahead to PowerMax, and the response time is great. Because we are coming from a hybrid storage scenario, the performance of NVMe is a huge upgrade for us. The 0.4 millisecond response time means our application works great and we are seeing huge performance improvements in our VMware and physical environments.
Regarding data security, EMC has introduced the CloudIQ solution for the PowerMax environment, which enables live monitoring of the telemetry and security data of the PowerMax array. CloudIQ also has a feature called Cybersecurity, which monitors for security vulnerabilities or security events occurring on the array itself. That feature is very helpful. We have been able to do some vulnerability assessment tests on the array, which have helped us to resolve issues regarding data security and security vulnerabilities. We are not using the encryption feature of the PowerMax, because we didn't order that PowerMax configuration.
CloudIQ helps the environment and lets us manage the respective connected environments. A good feature in CloudIQ is the health score of each connected infrastructure. It gives you timely alerts and informs you when a health issue is occurring on the arrays and needs to be fixed. Those reports and health notices are also sent to Dell EMC support, which proactively monitors all the infrastructure and they will open service requests themselves.
In terms of efficiency, the compression we are currently receiving is 4.2x, which is very good efficiency. We are storing 435 terabytes of data in just 90 TB. In addition to what I mentioned about the NVMe performance, which is very good, we were achieving 150k IOPS on the VMAX, but on the PowerMax the same workload is hitting 300k-plus IOPS. That is sufficient for the workload and means the application is performing as required, according to the SLAs as defined on the PowerMax.
When it comes to workload congestion protection, we have not faced any congestion yet in our environment. We have some spikes on Friday evenings, but they are being handled by PowerMax dutifully. It can beautifully handle up to 400k IOPS, even though it is only designed for 300k IOPS. That is another illustration of its good performance.
We removed the need to observe whether we ran into issues with the performance of disks or number of IOPS. Previously, our Oracle Database would throw us performance errors. Now, with PowerMax, everything runs smoothly.
I would rate the solution's built-in QoS capabilities for providing workload congestion protection at 10 out of 10, as we are using the highest, platinum-level minimum response time from the system. The NVMe SCM storage tier feature offers crazy speeds. When we were looking for a storage solution, we wanted the most reliable, highest-performance, latest solution to delay end-of-life. Our PowerMax setup everywhere enables the diamond-level setting with monitoring enabled. To this day, we have not experienced any anomalies; we simply don't experience workload congestion. Our primary requirement was the reliability of PowerMax; the rest of the features, like NVMe SCM, were a nice add-on.
It is scalable. We recently did an upgrade. You can keep on adding disks within a shelf or even attach additional shelves.
Also, the NVMe scale-out capabilities are very important. Although we are using SSD, all-flash drives, the backend is NVMe. It is quite fast. The IOPS requirements will never reach the max. It is also future-looking storage because it supports storage class memory (SCM). That is where you can utilize the full benefits of the storage solution. Currently, we are not using SCM because it is quite expensive. At the moment, we don't need it, but the storage backend is already NVMe and the controllers are connected using InfiniBand for very high bandwidth.
It's also very easy to add or expand disks in very few steps. Everything can be done online, even the firmware updates, meaning that you don't need any downtime. It's all seamless.
Dell PowerStore: IOPS
reviewer1526187 says in a Dell PowerStore review
Chief Information Officer at a computer software company with 5,001-10,000 employees
We replaced an older, high-performance storage device that was very expensive. With PowerStore, we were able to achieve the IOPS, and we were also able to get a data compression rate significantly above what we had expected. We were able to retire that older, very expensive piece of storage by bringing in the PowerStore. It's been faster and cheaper than we had expected, per terabyte.
Another reason that we were after this machine was PowerStore's VMware integration. We're a very large VMware customer. Some 98 percent of our workload runs on VMware.
reviewer1648785 says in a Dell PowerStore review
Technical Support Manager at a computer software company with 1,001-5,000 employees
PowerStore helps to simplify IT operations. At the site where it is installed, we have consolidated two tiers, the high-IOPS tier and the lower tier. We have enough capacity with lower power consumption, and enough performance to handle the required workload.
It gives us the capacity and the performance we need. Before, things were on 10K disks, while this is flash. There is a very big difference. Previously, we were connected directly, with a back-to-back connection between servers and storage. Now, we have multiple servers connected to SAN switches and those switches are connected to the storage. For sure, the performance of the system is sky-high. In terms of IOPS we are fully satisfied by the PowerStore.
We use the solution’s built-in VMware hypervisor to run VMs and virtualized applications, directly on the storage appliance. We manage multiple sites and we don't have enough teams to allocate support at all sites. So our support team handles all our sites. It's very important for us to have a consolidated infrastructure that we can manage remotely, without needing someone available locally to do the patching/power-up/creation and life cycle management tasks. Having this box, along with the integration with VMware, and VMware's capabilities, gives us what we need.
Jacques DUVERNOIS says in a Dell PowerStore review
Technical Team Leader for Servers and Storage at Orange
Thanks to deduplication and data savings, we have a lot of capacity available to us in the PowerStore. That lets us use and consume logical capacity very quickly, compared to having to install physical resources inside the PowerStore. The data reduction process is very efficient, resulting in very high data reduction if you compare the PowerStore to legacy frames from Dell EMC. This is a very good benefit for us. We were able to connect new servers very quickly and instantly have capacity on the frame because of the data reduction. Moving forward, we can add more disks inside; we plan to add seven drives in the coming weeks. So we are able to add servers independently, even if we don't have the actual physical capacity on the frame itself.
We have also seen a lot of savings because of the data reduction efficiency, which is currently 4:1 or 5:1.
We will also decommission old frames, and the maintenance contracts on those frames are very expensive. We will save some money as a result and we will also realize some power savings. We also have some environmental-related "green" engagements in Orange, and PowerStore is helping us go in that direction.
There are also space savings because the old frames are using a full rack while the PowerStore is only a 2U unit with almost the same amount of data being stored on it. That is very good.
So it will save us floor space, energy, and money on maintenance contracts.
Our development team is very happy with us, from an admin perspective. When they query us for more capacity, we are very quick to respond and provide them with resources. If they want to deploy new machines, for example, we can quickly assign new data stores that those VMs will rely on. We have saved a lot of time thanks to the PowerStore.
And because the performance of the PowerStore is very high, we can connect many servers on the same frame, instead of having to multiply frames, side-by-side, to get enough power to serve our IOPS. We are working on real-time applications, so we can't afford a response time of more than 10 milliseconds or 15 milliseconds as a maximum. We can't support a greater lag in a call center. The PowerStore now is less than a millisecond, and that is with more load on it. On one VNX we have two or three VMware clusters with four or five ESXis per cluster. On the PowerStore I have, say, 10 clusters and each has about eight ESXis.
Hitachi Virtual Storage Platform 5000 Series: IOPS
TolgaErgul says in a Hitachi Virtual Storage Platform 5000 Series review
Hardware Architect at a comms service provider with 5,001-10,000 employees
Hitachi Virtual Storage Platform 5000 Series exhibits good performance. It has good IOPS (input/output operations per second), particularly 300 IOPS, and this is what I like about it.
Hitachi Virtual Storage Platform E Series: IOPS
Carlos Villegas Martinez says in a Hitachi Virtual Storage Platform E Series review
Consultant at Telcel
It's for our internal network SAN for about 100 virtual machines. It's about the IOPS; we were looking for a model that would have enough IOPS for these types of virtual machines. IOPS and latency are very important for us because we are running a billing application on these virtual machines, and milliseconds are very important.
Zadara: IOPS
The object storage feature is wonderful. With traditional storage, you have a cost per gigabyte that is extremely high or related to the number of disks. With Zadara Storage Cloud, you have a cost per gigabyte that you can cut and tailor to your needs independent from the number or size of the disks.
We have a lot of tenants, so there is a lot of core and memory under pressure in this service. The good thing is that every single tenant is isolated and defined into their computer engine. This means that a customer is not able to create a problem for another customer, even if they get attacked, spoofed, or run malware.
It is absolutely important that the solution provides drive options such as SSD, NL-SAS, and SSD cache because we have a lot of customers. As managed service providers, we have all kinds of solutions. We have a customer that only has five servers, which means very few I/O disks. However, we also have a system with a cluster of databases that requires high IOPS, which means SSD, NVMe, and all the latest, fastest technologies.
reviewer1754853 says in a Zadara review
Chief IT Architect at a tech services company with 10,001+ employees
Zadara is all-flash, so it has very high IOPS. The speed of the box and the IOPS, the I/O operations per second, work very well. The processing is much faster with this product.
Technical support has been great.
The scalability is very good.