The list price of AFF is too high. However, we have a good relationship with NetApp, and we can get a very big rebate, which makes the price similar to competitors' pricing. I would tell NetApp, though, that they need to be careful with the pricing of the new NVMe disks. They are way too expensive.
All-Flash Storage Arrays NVME Reviews
Showing reviews of the top ranking products in All-Flash Storage Arrays, containing the term NVME
NetApp AFF (All Flash FAS): NVME
Subodh Jaiswar says in a NetApp AFF (All Flash FAS) review
AIX and Storage Specialist at a computer software company with 1,001-5,000 employees
It is scalable. I can grow my data. When it comes to NVMe, it is also scalable in terms of capacity and scaling horizontally. For example, we can add multiple nodes in a cluster as well as multiple expansions. I feel the box is very capable in terms of scalability.
Pure Storage FlashArray: NVME
reviewer1177155 says in a Pure Storage FlashArray review
Enterprise Account Executive at a computer software company with 11-50 employees
The pricing of the product is very competitive to others in the market for Flash and NVMe storage. That covers the cost of hardware and support.
TolgaErgul says in a Pure Storage FlashArray review
Hardware Architect at a comms service provider with 5,001-10,000 employees
Pure Storage FlashArray is applicable for virtual environments, e.g. CSS, VR, and YouTube platforms. It's an NVMe data storage platform.
Atwood Cheung says in a Pure Storage FlashArray review
IT Contractor at a financial services firm with 51-200 employees
We're quite happy with the solution overall. I can't recall coming across any features that were lacking.
There was some complexity in the initial setup.
While they've improved a lot, many features have been released recently and they are not that mature just yet. My understanding is that they just released some features for transport services over NVMe, as well as the file service. However, the file service is not so mature; I had some problems with it when we used it.
Other new features, such as active clustering over FC and verification over FC, we didn't use. We would have to trial them first before commenting.
HPE Nimble Storage: NVME
Competitors, such as Dell EMC, make use of NVMe storage. They also use the SCM storage module. Primera only has a maximum of eight. In contrast, HPE Nimble Storage does not use NVMe, and this makes it challenging for us to convince customers, who are sometimes aware of this technology, to go with it.
HPE does not have sufficient storage options. Overall, the customers in need of the solution have sufficient network data storage; there is also SAN storage in one solution, in a single product. I have made three requests that the storage be capable of being sampled in a single bundle. We have the protocol as an example when it comes to the product. Yet, if the customer needs to configure NAS storage, this necessitates the purchase of separate software. As the competitors have already launched solutions with NVMe or SCM storage, we have also done so with Primera. The same holds true for Dell EMC. In Indonesia, we encounter customers who are knowledgeable about storage and infrastructure and enquire about the solution as it concerns HPE storage.
HPE 3PAR StoreServ: NVME
reviewer1471356 says in a HPE 3PAR StoreServ review
Service & Infrastructure Manager at a tech services company with 201-500 employees
The most important things are availability, scalability, reliability, stability, and performance. We are service providers, and the customers want availability. You must focus on these things before buying storage. I advise going for All-Flash Storage to all people because spinning disks take too much space and electricity and provide less performance. That's why NVMe is better.
I would rate HPE 3PAR StoreServ a seven out of ten.
reviewer1585665 says in a HPE 3PAR StoreServ review
Solution Architect at a consultancy with 51-200 employees
Cloud integration could be better. They could also add an NVMe port to it. I would like to see NVMe in the next release; that's the future, or the near future, of storage. That will give us really high throughput and better performance.
IBM FlashSystem: NVME
It is scalable. All projects in my company use the IBM FlashSystem. I am working on high-end storage, not mid-range. I can scale out or scale up. IBM has introduced the FlashSystem 9200 to the market, in which I can scale SAS disks, NVMe disks, and SCM disks. I have three options in one box, which are not available with EMC or Pure Storage.
You can also scale out storage in EMC. In Pure Storage, there are issues in scaling. Pure Storage has different boxes like X70, X90, X50, and if I need to scale or upgrade the box, I need to change our controllers. Every Pure Storage box has limited capacity, whereas, for IBM storage, the capacity of the box is not limited.
C. says in an IBM FlashSystem review
Cloud Engineer at a tech services company with 51-200 employees
We used the solution exclusively for block storage. Over time, it added compression features and now even NVMe.
It's perfectly suited for an on-premise solution or for providing a base for cloud solutions, VMware workloads, IBM i-series, IBM AIX, IBM Power, Linux, and Windows compute. In other words, the complete server stack. It is something others actually can't offer. All of this can be operated from within the same solution.
It definitely has a strong plus in environments where you actually have such different server solutions in place.
Dell Unity XT: NVME
reviewer1318731 says in a Dell Unity XT review
Directeur Commercial at a tech vendor with 51-200 employees
One problem I have is that, between the Unity XT and the PowerMax, we sometimes need another product that sits between the two. They could be better integrated, and the capacity could be larger.
In the future, it would be beneficial if NVMe disks could be used on the Dell EMC Unity XT.
Huawei OceanStor Dorado: NVME
Dell SC Series: NVME
There's always room for improvement in the operating code; minor improvements here and there. We haven't had any real requests for them lately. I can't really think of anything.
Support for Non-Volatile Memory Express (NVME). Most of the newer storage systems support it, but this one doesn't because of its age. Support for that was something we had looked at, but they said, "No, we're not offering that with this. It's going to be in the next product, not in this product because the architecture is just too old."
I'd look for that — support for NVME. That's really the only supportable or new feature that I'd really be looking for.
Lenovo ThinkSystem DM Series: NVME
HPE Primera: NVME
reviewer1375641 says in a HPE Primera review
Associate Vice President - IT at a transportation company with 1,001-5,000 employees
One of the drawbacks of the model we purchased is that it is not running NVMe drives. Even though they say that it is NVMe-ready, it is still on the SSD drives. The model that we purchased has only eight hard drives, and only the ones on the top could work on NVMe. The rest of them are still on the SSD. Its competitors, such as EMC and Pure Storage, are moving or have already moved to NVMe. HPE should improve this solution for NVMe.
HPE should also improve IOs in this solution. IOs in HPE are weaker than in Hitachi and Pure Storage.
Dell PowerMax NVMe: NVME
reviewer1510488 says in a Dell PowerMax NVMe review
Senior BDM at a tech services company with 51-200 employees
We actually use the PowerStore 3000 and 1000 products.
I would definitely recommend this solution to other organizations. We've been very happy with it.
I would advise people to make sure that you introduce the features and benefits of NVMe and the power and speed and articulate that well to management or the customer.
I'd rate the solution at a nine out of ten. It's not perfect. It's evolving. However, it's almost perfect.
Feisal Anooar says in a Dell PowerMax NVMe review
VP Global Markets, Global Head of Storage at a financial services firm with 10,001+ employees
We did previously use a different solution.
We switched to take advantage of certain feature sets. Our previous vendor, whilst they did offer deduplication and compression to some degree, could not match the availability or performance and didn't have the same guaranteed efficiency ratios. They also couldn't perform inline compression without significant performance penalties; it would have to happen at rest and offline. Therefore, we'd need to write the data first, then compress it. The PowerMax solution enabled us to do that inline, without a read or write penalty. Basically, there was no performance impact, and we still saw all the benefits of a reduced physical footprint, such as cost savings, reduced power requirements, and fewer components to fail (the number of drives required being 66 percent lower).
Reviewer593747 says in a Dell PowerMax NVMe review
Storage Team Manager at a government with 10,001+ employees
We have been using Dell EMC PowerMax NVMe ever since it was brought to market, so it's been about three or four years.
We currently use PowerMax NVMe for our file server and all our VMs. It is a SAN, so all of our storage or data sits on it. It is just a great storage appliance.
As a service provider, we have to deliver the best possible service that is backed by SLAs. The NVMe performance is fantastic for our customers and the features of the PowerMax are fantastic. We have seen improvements in performance, which means less customer support tickets. The ease of management frees up resources for our storage teams so they can focus on other problems with other platforms, etc. This is such a self-sufficient beast of a platform that it has really freed up a lot of time so they can focus on other stuff besides storage.
There is no management overhead involved in optimizing performance. It does it so well on its own. We don't have to manage much at all. It really is like a set it and forget it solution. My storage engineers love the system. It is a lot less work than our previous systems, which weren't bad by any means. There is not nearly as much management as before. So, we are saving dozens of hours per month for our storage team, and that is a real cost in our business.
There are different ways to look at security and availability. We take advantage of array-level encryption, but that is a behind-the-scenes thing. We tend to focus on the availability part, because high uptime and performance are important to us. In regard to data security and availability, the data is secure if it is encrypted, and availability means that it is always up. We have a very good opinion of the security features in both single-tenant and multi-tenant deployments.
There is also the security concept regarding access to data. What we are seeing is that the PowerMax is so consistently dependable that it gives us a very solid comfort level in terms of level of trust. There is data security and protection, keeping your data from the bad guys. On the other hand, there is security knowing that your data is always available. PowerMax provides both of those.
PowerMax was deployed as a replacement/tech refresh for our existing VNX.
We were using XtremIO before this. We have all of the features that were available there. Relatively, there is nothing new that we are using.
We had some challenges with our core banking system. There were performance issues, which was the reason we went to XtremIO All-Flash. NVME has really helped us here because anything less than XtremIO would have caused us issues. So, PowerMax is the best replacement or fit right now. In fact, we have seen that it has really improved the performance as well.
We have been using Dell EMC PowerMax NVMe for around one year.
Vipindas K.P says in a Dell PowerMax NVMe review
Product Manager at a tech services company with 10,001+ employees
It is important for our clients that PowerMax provides NVMe scale-out capabilities. They are also getting great performance as compared to the old storage array model.
Provisioning is faster and immediate. We can do immediate allocation and configuration. As compared to the old storage array model where it used to take half an hour, in PowerMax, we can do it in 5 to 10 minutes. It doesn't take that much time, and there isn't much delay in the PowerMax array.
Our workload is reduced because we are not dealing with any issues. We are not facing many issues on the PowerMax side as compared with the previous one.
It is important that the product provides NVMe scale-out capabilities. We support many things with the product and we need to know what the architecture is. It makes things very simple for us.
The data security and availability are pretty good. We have many clients connecting to the box, which means security is very important. This is true when it comes to remote support. The compliance is very good.
The performance is very good on our servers. It's superior. And the QoS capabilities for providing work congestion protection are also important because about 99 percent of our servers are production servers.
We use the NVMe SCM storage tier feature, and that's how we're able to deliver service-level capabilities (SLAs). We have storage class memory as part of our deployment, with about 10% of our storage sizing allocated to it. With that, we are able to create different service levels for the disk groups or LUNs provisioned from this storage.
It most definitely helps in improving storage-related performance in our environment. The way our core banking solution works is that we have what we call ODS blocks. So, for leveraging that SLA, we were able to implement some kind of priority for those ODS blocks. Oracle had said that this is something for which their Exadata has a special way of doing, but based on my own assessment, we are able to achieve relatively similar levels of performance by using PowerMax.
Before we deployed this solution, we used to struggle with processing about 100,000 transactions in 10 minutes. We are now able to process about 350,000 or more transactions. These are conservative figures. We did hit much more than that, but conservatively, we are able to see about 300% performance improvement as compared to the SSD storage that we had previously from IBM. We have metrics to show that. The performance is different, and it is better than what we were used to.
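As a quick sanity check on those figures (illustrative arithmetic only; the function name and window are ours, not from the review), going from 100,000 to 350,000 transactions in the same 10-minute window is 3.5x the old rate, a 250 percent increase, broadly in line with the quoted "about 300%" improvement:

```python
# Illustrative sketch: convert the review's before/after transaction counts
# into per-minute rates and a relative improvement. Names are hypothetical.
def throughput_gain(before_txns: int, after_txns: int, window_min: int = 10) -> dict:
    """Return per-minute rates and the percentage improvement."""
    before_rate = before_txns / window_min
    after_rate = after_txns / window_min
    return {
        "before_per_min": before_rate,   # 100,000 txns / 10 min = 10,000/min
        "after_per_min": after_rate,     # 350,000 txns / 10 min = 35,000/min
        "improvement_pct": (after_rate - before_rate) / before_rate * 100,
    }

stats = throughput_gain(100_000, 350_000)
print(stats["improvement_pct"])  # 250.0 — i.e. 3.5x the old rate
```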
We are in our ideal environment, in which the storage doubles as our UAT and test environment. So, we've seen remarkable deduplication in that environment because we are able to expand the footprint much more than we are able to do in production. The production environment is a bit more controlled, but in our DR UAT environment, we are able to stretch those capabilities. The metrics that we see and the number of environments that we're able to create are quite remarkable.
It provides NVMe scale-out capabilities, which is pretty awesome. We currently have a plan to scale up. We started off with about 100TB. Based on the performance that we've seen, we're consolidating more workloads on the storage. We need to scale up a bit, and we find it very valuable to be able to do that. The ability to scale out and scale up marginally depending on what you want is quite valuable to us.
With the NVMe technology, performance in terms of IOPS has improved. Things are generally faster, although there are some bottlenecks with the integration of IBM servers.
The biggest way that PowerMax has improved the way our organization functions is through an increase in performance. The business of pharma is complex and the IOPS demand is huge. In the past, we used VMAX storage, and there was a big issue with the performance. Everybody complained about performance, servers, and storage, saying that they didn't have enough space. We tried many different solutions in an attempt to solve the performance issue.
For example, we tried reducing the data that was stored on disk, and we tried removing unused data. We turned to development and asked that some programs have fewer features. Finally, management made the decision to implement the PowerMax solution, and it solved the issue. As soon as we migrated from VMAX to PowerMax NVMe, the performance increased and everybody felt better.
The security is good. We enabled DSE for our encryption.
CloudIQ has made our lives better. It provides notifications, where you receive an email to let you know about your storage and your SAN. It is a powerful tool, although we have had to upgrade it a few times. Overall, it is a good monitoring tool that gives us a powerful and easy way to monitor our servers.
The SRDF site-to-site replication for the volumes is the most important feature for us. That enables us to do site recovery and replication for our VMware infrastructure.
Along with that, the NVMe response time is very good. We used to have a VMAX 20K but we have just upgraded, and moved two or three generations ahead to PowerMax, and the response time is great. Because we are coming from a hybrid storage scenario, the performance of NVMe is a huge upgrade for us. The 0.4 millisecond response time means our application works great and we are seeing huge performance improvements in our VMware and physical environments.
Regarding data security, EMC has introduced CloudIQ solution with the PowerMax environment, and that enables live monitoring of the telemetry and security data array of the PowerMax. CloudIQ also has a feature called Cybersecurity. That monitors for security vulnerabilities or security events that are occurring on the array itself. That feature is very helpful. We have been able to do some vulnerability assessment tests on the array, which have helped us to resolve issues regarding data security and security vulnerabilities. We are not using the encryption feature of the PowerMax, because we didn't order the PowerMax configuration for it.
CloudIQ helps the environment and lets us manage the respective connected environments. A good feature in CloudIQ is the health score of each connected infrastructure. It gives you timely alerts and informs you when a health issue is occurring on the arrays and needs to be fixed. Those reports and health notices are also sent to Dell EMC support, which proactively monitors all the infrastructure and they will open service requests themselves.
In terms of efficiency, the compression we are currently receiving is 4.2x, which is very good efficiency. We are storing 435 terabytes of data in just 90 TB. In addition to what I mentioned about the NVMe performance, which is very good, we were achieving 150k IOPS on the VMAX, but on the PowerMax the same workload is hitting 300k-plus IOPS. That is sufficient for the workload and means the application is performing as required, according to the SLAs as defined on the PowerMax.
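The capacity figures above can be turned into an overall data-reduction ratio (a sketch with names of our choosing; note the 435 TB / 90 TB figures imply roughly 4.8:1 overall, slightly above the quoted 4.2x compression figure, presumably because the overall ratio also includes savings beyond compression):

```python
# Illustrative data-reduction arithmetic (not vendor tooling): the review
# quotes 435 TB of logical data stored in 90 TB of physical capacity.
def reduction_ratio(logical_tb: float, physical_tb: float) -> float:
    """Effective data reduction ratio, e.g. 4.8 means '4.8:1'."""
    return logical_tb / physical_tb

ratio = reduction_ratio(435, 90)
print(round(ratio, 1))  # 4.8
```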
When it comes to workload congestion protection, we have not faced any congestion yet in our environment. We have some spikes on Friday evenings, but they are being handled by PowerMax dutifully. It can beautifully handle up to 400k IOPS, even though it is only designed for 300k IOPS. That is another illustration of its good performance.
We removed the need to observe whether we ran into issues with the performance of disks or number of IOPS. Previously, our Oracle Database would throw us performance errors. Now, with PowerMax, everything runs smoothly.
I would assess the solution's built-in QoS capabilities for providing workload congestion protection at 10 (out of 10), as we are using the highest, platinum-level minimum response time from the system. The NVMe SCM storage tier feature offers crazy speeds. When we were looking for a storage solution, we were looking for the most reliable, highest-performance, latest solution to delay end-of-life. Our PowerMax setup everywhere enables the diamond-level setting with monitoring enabled. To this day, we have not experienced any anomalies. We simply don't experience workload congestion. Our primary requirement was the reliability of PowerMax; the rest of the features, like NVMe SCM, were a nice add-on.
I have been using Dell EMC PowerMax NVMe for one and a half years.
Pure FlashArray X NVMe: NVME
reviewer1052424 says in a Pure FlashArray X NVMe review
Chief Infrastructure & Security Office at a financial services firm with 51-200 employees
We needed a flash array to support our core databases for maximum performance. We use SQL. We were using vSAN before, but we were having some problems with it. So, we wanted to isolate the databases with dedicated storage. Rather than using a vSAN solution using servers, we tested a couple of solutions, and we figured out that Pure FlashArray X NVMe was giving us the best performance.
reviewer1530492 says in a Pure FlashArray X NVMe review
Senior Administrator/IT Systems & Cloud Operations at a comms service provider with 10,001+ employees
This is a solution that I would recommend.
I would rate Pure FlashArray X NVMe an eight out of ten.
Being able to have broken files on-site on the same appliance is quite useful.
The newer version of NVME has a really noticeable difference in quality versus the last generation. It's better in terms of latency. It allows for so much more input.
The initial setup was extremely simple and straightforward.
The stability is quite good.
We've found the scalability to be excellent.
The price of the product isn't too high.
reviewer1560501 says in a Pure FlashArray X NVMe review
Cloud Architect at a computer software company with 51-200 employees
We haven't deployed anything too large yet. That said, just based on the design, that two-controller design, we're not going to have any of the scale problems that we had with SolidFire. They do scale it differently as it's a two-controller design. However, you can easily upgrade by upgrading your drive sizes due to the fact that it's all NVMe. The performance is top-notch, as it's all NVMe based.
Flake Sherrill says in a Pure FlashArray X NVMe review
Storage Engineer at a computer software company with 201-500 employees
Its speed is superior to our existing Unity x00 model. There are three different models of Unity. There is x00, which is the original model for Unity. There is x50, and now you have x80s. It has performed substantially better than our x00 model and a little bit better than our x50 model. I cannot rate it against the x80s on the Unity class, but from what we've got, it has beaten those two models performance-wise. This is bearing in mind that those x00 models were there before they had their own X-series with the NVMe flash.
IBM FlashSystem 9100 NVMe: NVME
Dell PowerStore: NVME
- Ease of use
It also has some very good compression capabilities.
We were looking for a solution that was easy to install in our VMware environment and flexible. PowerStore X is a type of VMware cluster that you install inside your environment. If you have a VMware environment, like we have in production, it's easy to install and use.
It enables us to add compute or capacity independently. We have also deployed some apps on PowerStore, even though the PowerStore we have is not the biggest one you can buy. One of the main characteristics of PowerStore is that it is like another piece of VMware, so you can run applications on top, applications that have direct access to the storage. The ability to add compute or capacity independently is great because it adds more flexibility to our environment. You are not adding only storage, but you're adding some not-so-big computing capability. You have the possibility of adding some virtual machines, running NVMe storage, and that is a real plus for this solution.
In addition, PowerStore's built-in intelligence for helping to simplify IT operations is incredible. When we approached PowerStore, we had an idea that it was a normative platform, but we were impressed by the capability of the solution. It's probably one of the best pieces of storage that we have installed here.
I don't know what the names were because I was not 100 percent involved in the selection process, but there were two other vendors we looked at.
One of the main differences was the way they do inline deduplication. In the backend, PowerStore does all kinds of smart things. The result is that with less physical storage being used, we are able to host more data. Also the fact that PowerStore is completely NVMe-based means performance is great, thanks to the technical architecture.
Jacques DUVERNOIS says in a Dell PowerStore review
Technical Team Leader for Servers and Storage at Orange
My advice would be don't hesitate. It's a good frame. It's doing what it is designed for. It serves IOPS very well. The data savings are very important and the response time is very short. There are always tricky situations that come up, but honestly, since our PowerStore went live, I don't have to worry about the storage for this environment. The VMware guys are independent. They don't need me anymore.
We accepted the risk, due to the fact that it was a relatively new platform, when we went with PowerStore. We were totally aware of that fact. That is why we put the first one into our development area, and not production. Even if we have more than 100 developers working on it, any problems would affect developers, not production. We understood there could be costs because having 100 developers not doing anything during a day costs money. But PowerStore didn't disappoint us. We are very happy with it. We now have four in production.
We are a Dell partner, so we also resell PowerStore to our end-users. When we initially built this frame, we wanted, say, 100 terabytes, but they persuaded us to only buy 40 terabytes of SSD or NVMe drives because of the savings that they said we would see from the data reduction efficiency. The program they gave us was that if we didn't achieve that kind of data efficiency, they would provide us some disks for free.
NetApp NVMe AFF A800: NVME
I have been using NVMe AFF A800 for approximately one month.
I don't use NetApp NVMe AFF A800 myself; I deploy it. I am installing and implementing it for customers.
reviewer1347297 says in a NetApp NVMe AFF A800 review
Engineering Manager, R&D at a healthcare company with 10,001+ employees
NetApp NVMe AFF A800 is mainly used for data center storage. It is very good. It can handle applications well, such as Oracle, or any other application. It is good at keeping all types of data secure. It has a great security layer.
Pavilion HyperParallel Flash Array: NVME
JohnLaw says in a Pavilion HyperParallel Flash Array review
Network Manager at a transportation company with 1,001-5,000 employees
It comes down to the performance that they offer as well as the flexibility of bringing your own disk and replacing them on your own cycle. Those are the benefits that we get.
We have been able to consolidate storage into Pavilion. Pavilions are our only SANs because it is a bring your own disk solution. When new drives come out, we are able to take out half of the drives in the system, put in new drives, move our VMs over to the new drives, take the other drives out, and populate those with new drives. Then, we are suddenly twice as dense as we were before. NVMe flash is only going to get denser and cheaper so we can make use of that every couple of years by just throwing newer disks into it at a fraction of the cost of a new SAN.
We have been able to run a tremendous number of VMs on our Pavilion system. We haven't seen a change in staff. I wouldn’t consider any solution that I have to bring on additional staff to support. It is mostly about cost savings in hardware, and a happiness factor for all our users that everything will work so quickly.
reviewer1534224 says in a Pavilion HyperParallel Flash Array review
Manager of Production Systems at a media company with 10,001+ employees
The solution's performance and density are excellent.
Typically, there is a trade-off. You can have incredibly dense storage in a small footprint sometimes, but the trade-off to that is you need a lot of horsepower to access it, which ends up counterbalancing the small footprint. Then, sometimes you can have very fast access to a storage array, but that usually requires a more comprehensive infrastructure.
This kind of balance, to somehow fit it all into one chassis, in a 4U server rack, is unheard of. You have the processing power to access the data and almost a petabyte of flash accessible.
It's a very small footprint, which is important to our type of industry because we don't have massive servers.
We have benefited from this technology because we were able to centralize a lot of workflows. There is normally a trade-off, where you can have very fast local storage on the computer, but in a collaborative environment that's counterproductive because it requires people to share files and then copy them onto their system in order to get the very fast local performance. But with Pavilion, basically, you get that local NVMe performance but over a fabric, which makes it easier to keep things in sync.
We have been able to consolidate storage and as part of a multi-layer storage system, it plays a very important part. For us, it cuts down on costs because we essentially get an NVMe tier that's large enough to hold everyone's data, but the other thing for us is time and collaboration. Flexibility is worth a lot to us, as is creativity, so having the resources to do that is incredibly valuable.
If we wanted to, Pavilion could help us create a separation between storage and compute resources. It's one of those things where, in some environments, such a separation is natural, and in other environments, there's an inclination to minimize the separation between compute and data. But to that point, Pavilion has the flexibility to let you do whatever you want.
In that sense, you have some workloads where compute is very close to the data, such as iterative stuff, whereas we have some things where we simply want bulk data processing. You can do any of that but for us, that type of separation is not necessarily something we are concerned with, just given our type of workflows. That said, we have that flexibility if necessary.
This system has allowed us to ingest a lot of data in parallel at once, and that has been very useful because it's a parallel system. It's really helped eliminate a lot of the traditional bottlenecks we've had.
Pavilion could allow for running additional virtual machines on existing infrastructure, although in our case, the limitation is the core densities in our hardware. That said, it is definitely useful for handling the storage layer in a lot of our VMs. The problem is that the constraints of our VM deployments are really in just how many other boxes we have to handle the cores and the memory.
reviewer1536714 says in a Pavilion HyperParallel Flash Array review
Manager of Platform Software at a healthcare company with 51-200 employees
We've evaluated other solutions but the shortcomings of the others were that they did not scale capacity and performance as easily as the Pavilion solution did. The competitors also used SSDs and NVMe over fabric.
The object storage feature is wonderful. With traditional storage, you have a cost per gigabyte that is extremely high or related to the number of disks. With Zadara Storage Cloud, you have a cost per gigabyte that you can cut and tailor to your needs independent from the number or size of the disks.
We have a lot of tenants, so there is a lot of core and memory pressure in this service. The good thing is that every single tenant is isolated and confined to their own compute engine. This means that one customer is not able to create a problem for another customer, even if they get attacked, spoofed, or run malware.
It is absolutely important that the solution provides drive options such as SSD, NL-SAS, and SSD cache because we have a lot of customers. As managed service providers, we have all kinds of solutions. We have a customer that only has five servers, which means very few I/O disks. However, we also have a system with a cluster of databases that requires high IOPS, which means SSD, NVMe, and all the latest, fastest technologies.