
All-Flash Storage Arrays Cache Reviews

Showing reviews of the top-ranking products in All-Flash Storage Arrays that contain the term "Cache"
NetApp AFF (All Flash FAS): Cache
AWS Solutions Architect at a pharma/biotech company with 10,001+ employees

I have found the following features of NetApp AFF most valuable: Snapshot, snap clone, deduplication, and compaction. 

These features help with data protection. We host an exchange, so protecting our data and workloads is of prime importance.

This solution helps accelerate demanding enterprise applications. VMware workloads, our databases, and Oracle Solaris are hosted on AFF, which means that our primary, high-priority workloads are on AFF and the secondary ones are on FAS. That includes the SAN national cloud.

Initiating a Snapshot is neither time-consuming nor tedious. That is why FlexClone and FlexCache help us with our data protection strategy.

View full review »
Dell XtremIO: Cache
Professional 2: Application Developer at a tech services company with 10,001+ employees

If you are looking at flash storage solutions, XtremIO doesn't offer any unique features, and most of my customers are migrating their workloads from XtremIO to other platforms because of this. If you look at Hitachi or IBM, with the VSP G series or FlashSystem, those products have many features available: we can scale up and scale out, add multiple nodes, and use a global cache. We don't have those kinds of features in XtremIO. Because of the lack of unique, key features, most customers nowadays don't want XtremIO.

XtremIO needs to have a global cache. Internal architecture should also be redefined and existing architecture sectioned off. Additional unique features should be added, rather than just common features like replication. Right now, XtremIO is an all-flash array, which is costly. I would like to see them come up with a hybrid model, one that is more cost-effective and may offer more benefits to customers. 

Since XtremIO is all-flash, it doesn't currently have NAND support. I would like to see broader interface support from XtremIO, with at least NAND or SD card support. If it supported a combination of SSDs and SD cards, that could be beneficial to some small and medium-sized businesses.

Dell should also provide a data analysis tool for use in the case of any issues with internal components like controllers, cache, backup drives, etc. It would be helpful to have a tool to troubleshoot performance issues.

A last feature is that XtremIO should have a cloud mobility option, in addition to flash. XtremIO has no data migration features, so these should be implemented without requiring the purchase of an additional license or application. XtremIO needs some fine-tuning, and this is where I would start.

View full review »
Pure Storage FlashArray: Cache
Jason Devine
Cloud Solutions Architect at a tech services company with 10,001+ employees

We previously used Dell EqualLogic. It was going end-of-life, and it was just a legacy spinning-disk array with an SSD cache. So the main reason for switching was simply a tech refresh and an upgrade.

View full review »
Hitachi Virtual Storage Platform F Series: Cache
Engineer at Secretaria de Educacion del Gobierno del Estado de Mexico

For the support windows to work, they may have to upgrade the firmware of the VSP. They changed the hardware or the disks; I don't know whether it was the port blade they changed or a module for the memory cache. Also, replacing the old target with the processor target would be fine. The old equipment is very easy to manage, and I don't have any negative comments.

View full review »
IBM FlashSystem: Cache
Cloud Engineer at a tech services company with 51-200 employees

We have never had an incident where it brought the infrastructure to its knees, nor have we ever had any data corruption.

All storage solutions have bugs, and all have fixes that might not address an issue on the first occurrence or under all circumstances. When the system is stressed and under some specific (albeit rare) conditions, the code might trigger a reboot of a controller node to avoid data corruption. A system with two controllers is resilient enough on its own, and a reboot of a node to prevent, e.g., cache merge problems is not harmful.

To me, the need to evict a controller node and warm-boot it is actually an intentional safety precaution that avoids data corruption, something we all want to stay away from as much as possible. It's as reliable as any other product in that respect. All solutions that I know of, like Dell EMC Unity, the Fujitsu DX series, or 3PAR StoreServ, respond in the same manner to avoid data corruption. I've seen it the most on the SVC (code 7.1/7.2, around 2012/2013), but not on the Storwize V7000 Gen1 to Gen2+ solutions, though I have to admit they all had about 40 to 50% of the load of the SVC.


View full review »
Dell Unity XT: Cache
Cloud Engineer at a tech services company with 51-200 employees

The UEMCLI is not an object-oriented CLI, and the more object-rich PowerCLI has been discontinued. Possibly only people with bash experience can operate it; even nowadays, feeding objects from one command into another is a burden with such a CLI. When adding a few disks to a cluster, the CLI actually stands in the queue for one disk to be added to all hosts, requiring multiple scans on each member host, before proceeding with the second disk and scanning all hosts once again. One could instead add all disks at once and stand in the queue once for a single rescan of all hosts.
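The cost of that serialization is easy to quantify. Here is a minimal sketch of the arithmetic; the disk and host counts are illustrative assumptions, not figures from the review:

```python
def rescans_per_disk(n_disks, n_hosts):
    """Rescan count when each disk is presented and rescanned on every
    host before the next disk is processed (the behavior described)."""
    return n_disks * n_hosts

def rescans_batched(n_disks, n_hosts):
    """Rescan count when all disks are added first, followed by exactly
    one rescan of every host (the preferred behavior)."""
    return n_hosts

n_disks, n_hosts = 4, 8  # illustrative cluster size
print("per-disk rescans:", rescans_per_disk(n_disks, n_hosts))  # 32
print("batched rescans: ", rescans_batched(n_disks, n_hosts))   # 8
```

The per-disk approach scales with the product of disks and hosts, while batching stays constant in the number of disks, which is why the queueing overhead the reviewer describes grows so quickly.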

There isn't a means to add volume groups, nor host groups, a feature that every solution I have worked with so far has. It's a burden to ensure each host has the same LUN ID in this manner. As of the June 2021 release, code OE 5.1, it finally seems to offer the option to have host groups!

The integration with vCenter comes with a side effect, in that it takes control of the vSphere scan process; moreover, every ESX host is scanned multiple times. It easily takes a few hours to add a few LUNs to a few hosts. Rather painful. Even when adding LUNs using the Unisphere GUI, you can keep up with the pace of your script.

Support responsiveness and time-to-fix for bugs should be improved. Over the past 1.5 years, we had occasional controller reboots, and we went all the way from OE 4.5 through 5.02 to 5.03 and eliminated the most common causes. We still face a stress-triggered cache merge issue, and though we provided the dumps and engineering acknowledged the bug, we have been told that addressing it requires substantial code rewriting and that the problem will be fixed in the next major code release (OE 6.x). We are now a year later with still no fix, but fortunately we only faced the condition once, on one out of five arrays, during that year.

View full review »
Directeur Commercial at a tech vendor with 51-200 employees

The most valuable feature is the dynamic cache of this product. It is very important. We have the physical cache and we can boost this cache using disks. All the products are mainly flash now and this is one of the main characteristics which our customers like.

View full review »
Dragan Knezevic
Senior Presales System Engineer at Oblak Tehnologije doo

The most valuable feature is the fast cache with its rewrite functionality.

View full review »
Lenovo ThinkSystem DE Series: Cache
Solutions Developer at Next Dimension Inc.

You cannot buy a lot of options for these devices, and there are a lot of things that it does not do. One thing it does not do that we would like it to do is easy tiering: if you have spindles and you want to cache a couple of terabytes of storage on SSD, that is something we would like to see, but currently it does not have the capability to do it.

The thing it comes down to is that Lenovo needs to add some more of the software features that would allow the ThinkSystem line to compete with other products that we sell. Other than that, it is what it is.  

View full review »
HPE Primera: Cache
Service Delivery Manager at a tech services company with 11-50 employees

I work with an HPE authorized partner in Malta and we offer storage solutions for customers. HPE Primera is one such product that I have experience with.

We have noticed that these days, most of the customers are implementing a solution that is a hybrid between Nimble and Primera All-Flash. There are both spinning disks and flash, where flash is used as the cache, which makes the price more competitive.

The customers are primarily using it for disaster recovery. They have their cluster and they are replicating one another to provide business continuity and disaster recovery applications.

View full review »
Dell PowerMax NVMe: Cache
Jeff Dao
Infrastructure Lead at Umbra Ltd.

With the SCM memory, it has been "set it and forget it." It is being used as a cache drive. There is very little configuration for us to do; we just know that it is working.

PowerMax NVMe's QoS capabilities give us a lot of visibility into taking a look at what could be a potential performance issue. However, because it is so fast, we haven't really noticed any slowdowns from the date of deployment even until today.

It is a very good storage appliance for enterprise-level, mission-critical IT workloads because of its high redundancy and parity drives. It gives us the ability to not worry about our data. If something were to go wrong, e.g., a drive pops, then we have our mission-critical warranty: we get a drive the same day and have it swapped by the next business day at the latest.

PowerMax NVMe has made it a lot easier to understand how much we are able to provision, and it has made provisioning new things a lot faster; 90% of my provisioning time has been eliminated. It has also made everything very easy to understand and see, versus the older heritage Dell EMC products, which were very convoluted and hard to get working. Things that used to take an hour now probably take five to ten minutes.

View full review »
Solution Architect at Sybyl

We brought up this question to the implementation engineer. We were comparing use cases where a customer is using RecoverPoint and then goes to PowerMax. In our previous setup with XtremIO, we were using RecoverPoint and keeping snapshots for 30 days, taken every few seconds. With PowerMax, I requested one every 15 minutes, kept for a week. The engineer's answer was, "There will be too many snapshots. It might slow down the system." This is specifically for the use cases where there is RecoverPoint. While PowerMax works with RecoverPoint and you can use it, there should be some way to have even more snapshots without worrying about performance and system cache.
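For scale, the two retention policies can be compared with back-of-the-envelope arithmetic, treating each point in time as one retained copy. The 10-second RecoverPoint interval is an assumption, since the review only says "every few seconds":

```python
def snapshots_retained(interval_s, retention_days):
    """Number of point-in-time copies held at steady state."""
    return retention_days * 24 * 3600 // interval_s

# RecoverPoint-style policy: every few seconds (assume 10 s), kept 30 days.
rp = snapshots_retained(interval_s=10, retention_days=30)

# Requested PowerMax policy: every 15 minutes, kept for one week.
pm = snapshots_retained(interval_s=15 * 60, retention_days=7)

print(f"RecoverPoint-style policy: {rp:,} points in time")  # 259,200
print(f"Requested PowerMax policy: {pm:,} snapshots")       # 672
```

Even the much more modest 15-minute policy still means several hundred live snapshots per volume, which is presumably what prompted the engineer's caution about system cache and performance.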

View full review »
Haseeb Sheikh
Assistant Manager IT Infrastructure at ufone

The NVMe scale-out capabilities were a factor we had in mind when we were evaluating the PowerMax against competitors, including IBM and Huawei. The scale-out capabilities are very important. We have 4 TB of cache with four directors right now, and we can add capacity in the future. If that capacity is met and we need to add more engines for our workload, we can do that very easily.

We are not currently using the NVMe SCM storage tier feature, but that is in the pipeline. If there is a high-demand workload in the future, we will consider the SCM storage.

View full review »
Zadara: Cache
Steve Healey
CTO at Pratum

One of the most valuable features is its integration with other cloud solutions. We have a presence within Amazon EC2 and we leverage compute instances in there. Being able to integrate with compute, both locally within Zadara, as well as with other cloud vendors such as Amazon, is very helpful, while also being able to maintain extremely low latency between those connections. We have leveraged 10-Gig direct connections between them to be able to hook up the storage element within Zadara with the cloud platforms such as Amazon EC2. That is one of the primary technical driving factors.

The other large one is the partnership and the managed service offering from Zadara. That means they have a vested interest and are able to understand any issues or problems that we have. They are there to help identify and work through them and come to solutions for us. We have a unique workload, so problems that we may have to identify and work through could be unique to us. Other customers that are just looking to manage a smaller amount of data would not ever identify or have to work through the kinds of things we do. Having a partner that is interested in helping to work through those issues, and make recommendations based on their expertise, is very valuable to us.

Zadara's dedicated cores and memory provide us with a single-tenant experience. We are multi-tenant in that we manage multiple organizations and customers within our environment. We send all of that data to that single-tenant management aspect within Zadara. We have a couple of different virtual, private storage arrays, a couple of them in high-availability. The I/O engine type we're leveraging is the 2400s.

We also have disaster recovery set up on the other side of the U.S. for replication and remote mirroring. Being able to manage that within the platform allows us to add additional storage ourselves, to change the configuration of the VPSA to scale up or scale down, and to make any changes to meet budgetary needs. It truly allows us to manage things from a performance standpoint as well. We can also rely upon Zadara, as a managed-services provider, to manage those requests on our behalf. In the event that we needed to submit a ticket  and say, "Hey, can you add additional storage or volumes?" it's very helpful to have them leverage their time and expertise to perform that on our behalf.

It is also very important that Zadara provides drive options such as SSD, NL-SAS, and SSD cache, for our workload in particular. We require our data to be not only accessible but fast. Typically, stored data that is hotter or more active is pushed onto faster storage, something like flash cache. The flash cache we began with during our first year with Zadara worked pretty well initially. But with our workload being a little unique, after that, the volume of data exceeded the kind of logic that can be used in that type of cache, which just looks at what data is most frequently accessed. Usually the "first in" data sits in that hot flash cache, and our workload was a bit more random than that, so we weren't getting as much of the benefit from the flash cache.

The fact that Zadara provides us with the ability to actually add a hybrid of both SSDs and SATA allows us to specifically designate what volumes and what data should be on those faster drives, while still taking into account budget constraints. That way, we can manage that hybrid and reduce the performance on some of the drives that are housing data that is really being stored long-term and not accessed. Having that hybrid capability has tremendously helped with the flexibility to manage our needs from a performance standpoint as well as a cost perspective.

As far as I know, they also have solid support for the major cloud vendors out there, in addition to some others that I hadn't heard of. But they certainly support Amazon EC2 and Google and Rackspace, among others. Those integrations are very important. Most organizations have some sort of a cloud presence today, whether they're hosting certain servers or compute instances or some other workload out in the cloud. Being able to integrate with the cloud and obtain data and store data, especially with all these next-generation threats and things like ransomware out there, is important. Having backups and storage locations that you can push data to, offsite, or integrate with, is definitely key.

View full review »
Mauro Razzetti
CEO at Momit Srl

The object storage feature is wonderful. With traditional storage, you have a cost per gigabyte that is extremely high or related to the number of disks. With Zadara Storage Cloud, you have a cost per gigabyte that you can cut and tailor to your needs independent from the number or size of the disks. 

We have a lot of tenants, so there are a lot of cores and a lot of memory under pressure in this service. The good thing is that every single tenant is isolated and defined within their own compute engine. This means that one customer is not able to create a problem for another customer, even if they get attacked, spoofed, or run malware.

It is absolutely important that the solution provides drive options such as SSD, NL-SAS, and SSD cache because we have a lot of customers. As managed service providers, we have all kinds of solutions. We have a customer that only has five servers, which means very few I/O disks. However, we also have a system with a cluster of databases that requires high IOPS, which means SSD, NVMe, and all the latest, fastest technologies.

View full review »
Nick Barron
Chief Technology Officer at Harbor Solution

Our initial application was probably the simplest one. We were sunsetting a product, but we needed to do some movement and we needed some additional storage, but we knew that what we needed was going to change within six months as we got rid of one product and brought in another. To handle this, we started deploying Block storage with Zadara, which we then changed to Object storage and effectively sent back the drives related to the Block storage as we did that migration. This meant that we did not have to invest in new technology or different platforms but rather, we could do it all on one platform and we can manage that migration very easily.

We use Zadara for most of our storage and it provides us with a single-tenant experience. We have a lot more customer environments running on it and although we don't use the compute services at the moment, we do use it for multi-tenant deployment for all of our storage.

I appreciate that they also offer compute services. Although we don't use it at the moment, it is something that we're looking at.

The fact that Zadara provides drive options such as SSD, NL-SAS, and SSD Cache is really useful for us. Much like in the way we can offer different deployments to our customers, having different drive sizes and different drive types means that we can mix and match, depending on customer requirements at the time they come in.

With available protocols including NFS, CIFS, and iSCSI, Zadara supports all of the main things that you'd want to support.

In terms of integration, Zadara supports all of the public and private clouds that we need it to. I'm not sure if it supports all of them on the market, but it works for everything that we require. This is something that is important to us because of the flexibility we have in that regardless of whether our customers are on-premises, in AWS, or otherwise, we can use Zadara storage to support that.

I would characterize Zadara's solution as elastic in all directions. There clearly are some limits to what technology can do, but from Zadara's perspective, it's very good.

With respect to performance, it was not a major factor for us so I don't know whether Zadara improved it or not. Flexibility around capacity is really the key aspect for us.

Zadara has not actually helped us to reduce our data center footprint but that's because we're adding a lot more customers. Instead, we are growing. It has helped us to redeploy people to more strategic projects. This is not so true with the budget, since it was factored in, but we do focus on more strategic projects.

View full review »
Platform and Infrastructure Manager at a tech services company with 1,001-5,000 employees

Our use of Zadara is multi-tenanted, and it is key to us that we have dedicated resources for each tenant because that maintains a consistent level of performance, regardless of how it scales.

The fact that Zadara provides drive options such as SSD and NL-SAS, as well as SSD Cache, is very important because we need that kind of performance in our recovery environments. For example, when the system is used in anger by a customer, it's critical that it's able to perform there and then. This is a key point for us.

At the moment, we don't use the NFS or CIFS protocols. We are, however, big users of iSCSI and Object, and the ability to just have one single solution that covers all of those areas was important to us. I expect that we will be using NFS and CIFS in the future, but that wasn't a day-one priority for us.

The importance of multi-protocol support stems from the fact that historically, we've had to buy different products to support specific use cases. This meant purchasing equipment from different vendors to support different storage workloads, such as Object or File or Block protocols. Having everything all in one was very attractive to us and furthermore, as we retired old equipment, it can all go onto one central platform.

Another important point is that having a single vendor means it's a lot easier for us to support. Our engineers only need to have experience on one storage platform, rather than the three or four that we've previously had to have.

It is important to us that Zadara integrates with all of the public cloud providers, as well as private clouds because what we're starting to see now, especially in the DR business, is the adoption of hybrid working from our customers. As they move into the cloud, they want to utilize our services in the same way. Because Zadara works exactly the same way in a public cloud as it does on-premises, it's a seamless move for us. We don't have to do anything clever or look at alternative products to support it.

It is important to us that this solution can be configured for on-premises, co-location, and cloud environments because it provides us with a seamless experience. It is really helpful that we have one solution that stretches across on-premises, hybrid, and public cloud systems that looks and works the same.

An example of how Zadara has benefited our company is that during the lockdown due to the global pandemic, we've had a big surge in demand for our products. The ability of Zadara to ramp up quickly and expand the system seamlessly has been a key selling point for us, and it's somewhat fueled our growth. As our customer take-up has grown, Zadara's been the backbone in helping us to cope with that increased demand and that increased capacity.

It's been really easy to do, as well. They've been really easy to work with, and we've substantially increased our usage of Zadara. Even though we've only been using it for just about five months, in that time, we've deployed four Zadara systems across four different data centers. Their servicing capacity has been available within about four weeks of saying, "Can you do this?" and them saying "Yes, we can."

With respect to our recovery solutions, using Zadara has perhaps doubled the performance of what we had before. A bit of that is because it's a newer technology, and a bit of that is also in the way we can scale the engine workload. When the workload is particularly high, we can upgrade the engine, in-place, to be a higher-performance engine, and then when the workload scales down, we can drop back to a lower-performance one. 

That flexibility in the performance of not only being able to take advantage of the latest flash technology but also being able to scale the power of the storage engines, up and down as needed, has been really good for us.

Using Zadara has not at the moment helped to reduce our data center footprint, although I expect that it will do so in the future. In fact, at this point, we've taken up more data center footprint to install Zadara, but within six months we will have removed a lot of the older systems. It takes time to migrate our data but the expectation is that we will probably save between 25% and 30%, compared to our previous footprint.

This solution has had a significant effect on our budgeting. Previously, we would have had to spend money as a capital expense to buy storage. Now, it's an operational expense and I don't need to go and find hundreds of thousands of pounds to buy a new storage system. That's helped tremendously with our budgeting.

Compared to the previous solution, we are expecting a saving of about 40% over five years. When we buy new equipment, our write-down period is five years. So, once we've bought it, it has to earn its keep in that time. Using Zadara has not only saved us money but it will continue to save us money over the five years.

It has saved us in terms of incurring costs because I haven't had to spend the money all upfront, and I'm effectively spreading the cost over the five years. We do see an advantage in that the upfront capital costs are eliminated and overall, we expect between 30% and 40% savings over the lifetime if we'd had to buy the equipment.
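The budgeting shift described across the last few paragraphs can be sketched in a few lines. The purchase price below is a made-up figure; the review only gives the five-year write-down period and the approximate 40% saving:

```python
capex_price = 500_000     # hypothetical upfront cost of buying a storage system
writedown_years = 5       # the reviewer's write-down (depreciation) period
expected_saving = 0.40    # "about 40% over five years" versus buying outright

# Total spent as an operational expense over the write-down period.
opex_total = capex_price * (1 - expected_saving)

# The capital cost is spread into a predictable monthly payment.
opex_per_month = opex_total / (writedown_years * 12)

print(f"opex total over {writedown_years} years: {opex_total:,.0f}")
print(f"vs upfront capex:                {capex_price:,.0f}")
print(f"monthly operational cost:        {opex_per_month:,.0f}")
```

The point of the model is less the absolute numbers than the shape of the spend: the same capacity is paid for in small, budgetable monthly amounts rather than as one large capital outlay at the start of the five years.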

View full review »
Chief Information Officer at a tech services company with 201-500 employees

The fact that we have offsite storage that is provided to us using iSCSI as a service has allowed me to offload certain storage-related workloads into Zadara. This means that when I have a planned failover, if I need to maintain the local storage that I have in my data center, I simply shift all of the new incoming traffic into Zadara storage. None of my customers even know that it has happened. In this regard, it allows us to scale in an infinite way because we do not have to keep adding more capacity inside our physical data center, which includes power, networking, footprint, and so on. The fact that Zadara handles all of that for me behind the scenes, somewhere in Virginia, is my biggest selling point.

With its dedicated cores and memory, we feel that Zadara provides us with a single-tenant experience. This is important for us because we are aware that in the actual physical environment where Zadara is hosting our data, they have other clients. Yet the fact that we have not had any kind of performance issues, and we don't have the noisy-neighbor problem, makes it feel like we are the only ones on that particular storage area network (SAN). It's really important for us.

Zadara provides drive options such as SSD and NL-SAS, as well as SSD cache, and this has been important for us. These options allow us to decide for different volumes, what kind of services we're going to be running on them. For example, if it happens to be a database that requires fast throughput, then we will choose a certain type of drive. If we require volume, but not necessarily performance, then we can choose another drive.

A good thing about Zadara is you do not buy a solution that is fixed at the time of purchase. For instance, if I buy an off-the-shelf storage area network, then whatever that device can do at the time of purchase, give or take one or two upgrades, is where I am. With Zadara, they always improve and they always add more functionalities and more capacities.

One example is that when we became customers, their largest drives were only nine terabytes in size. A year or so later, they improved the technology and now have 14-terabyte drives available, roughly a 50% increase. It is helpful because we were able to take advantage of those higher densities and capacities. We were able to migrate our volumes from the nine-terabyte drives to the 14-terabyte drives with essentially no downtime and no interruption to service. This type of scalability, and the fact that you are future-proofing your purchase and your operations, is another great advantage that we see with Zadara.

As far as I know, Zadara integrates with all of the public cloud providers. The fact that they are physically located in the vicinity of public cloud regions is a major selling point for them. From my perspective, it is not yet very important because we are not in the public cloud. We have our own private cloud in Miami, and not part of Amazon or Azure. This means that for us, the fact that they happen to be in Virginia next to Amazon does not play a major role. That said, they are in a place where there is a lot of connectivity, so in that regard, there is an advantage. We are not benefiting from the fact that they are playing nice with public clouds, simply because we are not in the public cloud, but I'm sure that's an advantage for many others who are.

Absolutely, we are taking advantage of the fact that they integrate with private clouds.

Zadara saves me money in a couple of ways. One is that my operational costs are very consistent. The second is that the system is consistent and reliable, and this avoids a lot of the headaches that are associated with downtime, reputation, and all of that. So, knowing that we have a reputable, reliable, and consistent vendor on our side, that to me is important.

It is difficult to estimate how much we have saved because it wouldn't be comparing apples to apples. We would be buying a system versus paying for it operationally and I don't really have those kinds of numbers off-hand. Of course, I cannot put a price tag on my reputation.

View full review »