Their backup software could be improved.
In the next release, I would like to see complete S3 protocol support, as well as better compatibility and integration with VMware.
The admin tools and the integration with other products and clouds can be improved.
It should also be easier to identify and troubleshoot problems in this solution. It takes a long time, and it should be improved.
Its integration could be improved.
Get yourself acquainted with the product and see what it can do. Many people may run into the issue of thinking that it can do way less than it can actually do.
We do not use their cloud backup services at the moment, because there hasn't been a strong enough business case. I would not call it priority, but we are definitely highly aware of the cloud backup services if an opportunity or business case arrives.
We don't work that much with SAN. Basically, we mostly use the solution for its NAS functionality. We do not have that many SAN cases.
Since our StorageGRID is really new, we haven't gotten the full effect of it yet. The native integration, where we can seamlessly move onto another media, is great. It is very intuitive and easy to work with.
Biggest lesson learnt: Keep it simple.
I would easily rate it as 10 out of 10, because it works like a charm. When you have a problem, it does exactly what it is supposed to do, with little to no effort.
We host NSS as a part of a cluster. We use AFF to support data analytics, machine learning, cloud integration, and SAP workloads as well.
Before, retrieving data or searching for something on the application would take some time. But since we migrated to NetApp, retrieving of the data happens quickly. It's fast.
In addition, we can easily manage the volumes on the NetApp application. We are getting very good, high performance and it has simplified our data management jobs, such as creating volumes. If our hard drive fails, we can reinitialize the process, and do many other things. It's very helpful.
NetApp has helped to reduce support issues due to performance or troubleshooting as we do not have such issues. We have not faced any performance issues since installing this device.
In addition, the ONTAP data management software has simplified our operations. We use it for high-availability of our file system. If any hard drive goes down, it will automatically be recovered.
We use the NetApp AFF to support cloud integration and SAP Oracle. It has made the Oracle WebLogic site very fast and we can deploy the machines very easily. We can assign storage to the server visually, and use it to manage the storage.
The first use case is having normal CIFS and NFS shares use Active Directory integration with antivirus integration. Another use case is for VMware VCF in a TKG environment using NFS and a SAN protocol.
I am implementing the NetApp product for customers. I deploy CIFS and NFS shares for file access purposes and block access for VMware infrastructures.
We've seen an overall boost in performance, going from a combination of solid-state and spinning disks to all solid-state. That has increased our ability to provide more performance and throughput for the services that we're hosting. That's the biggest deal for us. We do what we did before, but now we can do it on all-flash. It's just faster.
It accelerates virtualization and databases, which goes back to the performance. All-flash gives us the ability to provide the performance as it's needed and makes it easy to do and instantly observable.
The use of AFF with Oracle has made it much faster. It all comes back to how fast it is. And with SnapCenter, the backup piece is much better than it was before. We were using NetBackup, but SnapCenter allows us to back up with snapshots, which is something NetBackup did not allow us to do.
Also, the dedupe and compression reduce how much disk space we require. All of that really makes a big difference for us.
An extra benefit is that NetApp AFF All Flash FAS has really reduced support issues related to performance. When everything is going at solid-state speeds, it's a lot easier to find the problems where there is slowness.
With all of it being in one software package, the ONTAP data management software has simplified our operations. We have the Enterprise licensing and that means we get all the tools that come with it. All of those tools, and their integration, make backup and restore very simple and very efficient.
The deployment itself, compared to other platforms, should be a lot easier. We don't find it all that complicated because we have been doing it for such a long time, but it should be a bit easier. They can improve that.
When it comes to the connectivity on the back end, where the hardware is concerned—the cabling and the like—it could also be simplified to ease the communication between the nodes and between the other components of the infrastructure. I still find that a little bit complicated. I know that SAN, itself, is quite complicated. It's not the same approach as the hyper-converged solutions, but there are always ways to improve. NetApp's engineers should try to tackle that so that integration between devices, including the cabling at the back, is simplified.
Another thing that could be simplified is the Service Processor setup. That is something that requires you to perform a lot of tasks before it is completed.
Also, joining clusters should be a lot easier. With one or two commands you should be able to complete that.
The replication of Dell EMC XtremIO could improve. It has gotten better in the newer versions; however, it can be improved further to include concurrent or cascaded methodologies.
In the next release, the solution could have better integration and the ability to host assets in the cloud. NetApp, for example, has volumes that can be hosted directly in the cloud, called NetApp CVO (Cloud Volumes ONTAP). Dell EMC should come up with something purely on the cloud rather than managed services.
The performance with the QoS is its most valuable aspect.
The integration with VMware is excellent. There are different plugins to manage the SolidFire storage from the vCenter level. That I really appreciate.
SolidFire even as a standalone storage platform is excellent.
I would say in terms of architecture and in terms of functionality, the product is quite good.
It's block-access storage; however, with it we have a guarantee of performance.
We have deduplication and encryption with this solution. We have almost all the standards needed for storage with SolidFire. In terms of protection, the level of protection we can set between the SolidFire nodes is very good.
The integration capabilities could be improved.
The solution is not cheap. It's much more expensive than DataCore. It costs much more.
The improvement I would expect from them is more integration with VMware. We are also using Amazon's cloud to provision snapshots, or to move or copy snapshots to Amazon, so I would expect more integration with Amazon. Amazon has tiered storage with a low-cost last tier, so we have that as an option instead of keeping everything in Pure Storage, which costs a lot of money. If they offered a hybrid cloud, for example, it would be very helpful.
The solution needs to ensure good integration with vVols. vVols are the future of VMware. I have spoken with Pure Storage engineers, and they have an integration with vVols, a kind of plug-in for VMware; however, it's not mature enough. It's my understanding they're working on it to get it done on that side. More integration with Windows Server for snapshots would also be helpful.
A year ago, I found that instead of having the new Pure Storage FlashArray on-prem, you can have it in Tokyo or in Virginia, depending on where you are. You just pay a certain amount per minute and you have a Pure Storage array that you manage from your premises but that runs on Amazon. That may be in production now. It would be a useful attribute.
The integration and migration features have been really good.
We're getting good performance, and the compression ratio is also very good in Pure Storage FlashArray.
It has an Evergreen model and always maintains the controllers, so the controllers never let you down.
Data reduction is an area that needs improvement. There is a garbage collection service that runs but during that time, system utilization increases.
Integration with VMware tools can be improved.
The reporting can be better.
The experience has been very good so far for the company. It's very fast and it's very easy to configure the storage.
It's very efficient.
The snapshot feature is great. It's very easy to apply. If an application has errors, you can go back to a previous state using a snapshot. It's very good for the company.
We haven't had any problems with it in the time we've used it.
I like the user interface.
Support has been helpful.
We have no issues with pricing.
Integration is easy.
The initial setup is simple. It's easy to deploy using the whitepapers on offer.
The compression is quite good compared to other options.
I would like a feature to integrate with external or cloud solutions. For example, if I want to use this storage for a backup from the cloud, I want integration with cloud vendors such as Microsoft, Oracle, or Amazon. It could be available as an API to allow seamless integration. Additionally, the solution could improve by having native integration with a cloud provider, such as VMware or Microsoft; this would reduce the need to use third-party solutions to complete the task.
A valuable feature for me is the architecture, which is pretty good. We have good throughput, a lot of IOPS, and the low latency is also pretty good. We also like the deduplication features, because Nimble's approach to deduplication gives us good results. If you have hybrid secondary storage, it allows you to do remote copy, data recovery, and business continuity with the solution. The integration with other solutions is also good; Nimble has a way of integrating with each solution, and it's very good.
Technical support is good. Their responses are interactive and they work with us. We try to keep open communication with our customers and I know that in internal support the issue has arisen of how long it takes to be escalated from L2 to L3 support.
I feel that it takes too long. When it comes to performing integration with InfoSight, it is helpful that we can then segment the issue or possibly check the connection.
HP has several integration elements that work with other vendor storage products. I'd like to see a greater expansion on that so that a customer can do a more seamless migration from other vendor products. The migration of data to their platform could be better.
Primarily they don't have a lot. They have several EMC elements that they can migrate data from, however, there are many more controllers out there and it'd be good to see a more seamless integration so that that could occur.
I'd like to see 3PAR have some integration with Cloud services.
We had help from a consulting integration company for the deployment.
We also have a production team of 34 admins and engineers to deploy and maintain this solution.
Cloud integration could be better. They can also add NVMe support to it; I would like to see NVMe in the next release. That's the future, or the near future, of storage, and it will give us really high throughput and better performance.
It's a very convenient product. I find it easy to use.
It's very flexible in terms of management.
It's an enterprise tool.
The integration is good. We've integrated it with other UNIX platforms and Windows platforms.
The solution is easy to install.
The pricing is okay.
HPE 3PAR StoreServ could have better integration into the cloud and converged infrastructure.
It's not a very good solution. We found EMC and other storage solutions can help the client a bit more in terms of reaching their target, having the flexibility they need, and offering better backups and integration.
It's not high-end storage. It's not as strong as it needs to be. You need to have two nodes, or two controllers, to be able to effectively protect against data loss.
It's not really a complete solution. It has some limitations with software integration as well as backup and restore integration. It does not protect the data stored on 3PAR. Some products should be protected with another brand if there's a level of security required.
With 3PAR, there is remote copy software which isn't very good. It should be improved also. Many times we have had data issues and we have had to re-initiate the application and reconfigure.
The integration is already included in the license cost of IBM FlashSystem. The integration is very easy. You get the IBM storage core with all software, firmware, and upgrades. EMC provides the features in the box, but they are not free for customers. There is a licensing cost for features.
We have yearly licensing, but IBM has also provided a new option where you pay as you go. They provide a big box, and I pay, for example, for 10 terabytes. If I exceed 10 terabytes, IBM will charge for the new storage after 10 terabytes. It is a good opportunity in the market for using the storage as a cloud and paying as you go.
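To make the pay-as-you-go model described above concrete, here is a minimal sketch of how such a bill could be computed. The base capacity of 10 terabytes matches the example in the review; the flat fee and per-terabyte overage rate are made-up numbers for illustration, not IBM's actual pricing.

```python
def monthly_charge(used_tb, base_tb=10, base_fee=1000.0, overage_per_tb=150.0):
    """Illustrative pay-as-you-go bill: a flat fee covers the first
    base_tb terabytes; anything beyond that is billed per terabyte."""
    overage = max(0.0, used_tb - base_tb)
    return base_fee + overage * overage_per_tb

print(monthly_charge(8))   # usage within the 10 TB base capacity
print(monthly_charge(13))  # 3 TB over the base, billed as overage
```

Staying within the base capacity costs only the flat fee; exceeding it adds a charge only for the excess terabytes, which is the "pay as you go" appeal the reviewer describes.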
The stability of the solution isn't great. We have had a lot of issues with discs over the years.
There should be better integration with utilization platforms.
The pricing needs to be more competitive.
The price is very costly, which could be improved. In the next release, I would like to see some integration with VMware so that storage can be managed directly from a single pane.
It's a really reliable, powerful platform. It's a mature product. It's like a BMW that evolves consistently.
There is no need to change or buy another company's solution. It came with storage virtualization and options to move/migrate volumes around and migrates easily even before you actually have svMotion on VMware.
It can be stretched. There is a Site Recovery Adapter. It has backup integration using flash copies. You can build a disaster recovery solution around it. IBM has its famous Redbooks where you can enter in the best practices. You name it, they've got it!
All-flash means the array is built from solid-state disks; it's not like HDDs or spinning disks.
The price is important, and we would like to have it less expensive.
Better integration with other brands is important so we would like to see it easier to integrate.
Its performance is most valuable. This solution is much faster than other, as well as older, storage solutions. The performance of the system is very good; we are getting a 50-times-better experience than with the older storage. We are using AFF 300. It also has native cloud integration and most of the features.
Overall, I've had a very good experience with the solution so far.
Integration is easy with this product.
The UEMCLI is not an object-oriented CLI, and the more object-rich PowerCLI has been discontinued, so only people with bash experience can realistically operate it. Even today, feeding objects from one command into another is a burden with such a CLI. When adding a few disks to a cluster, the CLI queues each disk to be added individually, requiring multiple rescans on every member host before proceeding with the next disk, and then scanning all hosts once again. It could instead add all the disks at once and queue a single rescan of all hosts.
There isn't a means to add volume groups or host groups, a feature that every solution I have worked with so far has. Without it, it's a burden to ensure each LUN gets the same LUN ID on every host. As of the June 2021 release, code OE 5.1, it finally seems to offer host groups!
The integration with vCenter comes with a side effect, in that it takes control of the vSphere scan process; moreover, every ESX host is scanned multiple times. It easily takes a few hours to add a few LUNs to a few hosts. Rather painful. Even when adding LUNs manually through the Unisphere GUI, you can keep up with the pace of your script.
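A rough way to see why the per-disk queueing hurts: with h hosts and d new LUNs, rescanning every host after each LUN costs h times d rescans, while adding all LUNs first and rescanning once costs only h. The sketch below is a simplified counting model of that complaint, not a verbatim description of UEMCLI behavior.

```python
def rescans_per_disk(hosts, disks):
    """Per-disk queueing: every LUN addition triggers a rescan on
    every member host before the next LUN is processed."""
    return hosts * disks

def rescans_batched(hosts, disks):
    """Batched approach: add all LUNs first, then one rescan per host."""
    return hosts

# Adding 5 LUNs to an 8-host cluster:
print(rescans_per_disk(8, 5))  # 40 rescans
print(rescans_batched(8, 5))   # 8 rescans
```

The gap widens linearly with the number of LUNs, which is why the reviewer found that adding "a few LUNs to a few hosts" could take hours.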
Support responsiveness and time to fix bugs should be improved. Over the past 1.5 years we had occasional controller reboots, and we went all the way from OE 4.5 through 5.02 to 5.03 and eliminated the most common causes. We still face a stress-triggered cache merge issue, and though we provided the dumps and engineering acknowledged the bug, we have been told that addressing it requires substantial code rewriting and that the problem will be fixed in the next major code release (OE 6.x). We are now a year later, still with no fix, but fortunately we faced the condition only once on one out of five arrays during that year.
We used an integrator for the deployment. They had great knowledge of the product. We also did some additional integration with them in other parts of the storage, including backup and data domain devices, because we were very fond of the work they did.
We are a system integration company and we are working with Huawei OceanStor products. It is mainly used for virtualization.
The solution could improve by having better integration.
CIFS integration is one area where OceanStor has some room for improvement. The CIFS service on OceanStor is dedicated, so it's not integrated with the existing Active Directory domain. That's a big issue with this service. Huawei could also improve the power unit on this storage solution. When we conducted some tests, we noticed that the autonomy of the BBU is not really long; within 15 minutes, it totally went down. OceanStor is out of support and off the market, so Huawei is now moving to Dorado.
Also, the interface is in Chinese in some places. It would be helpful if they could release some versions without Chinese.
OceanStor could use improvement with configuration and synchronizing with other vendors. There is some latency in the migration due to the amount of data.
I would like to see more iterations and direct integration with solutions like Docker and Kubernetes. Also, if the customer needs to make a special solution or have an additional process for analyzing data directly on the storage using data technology, they should be able to do that.
Huawei OceanStor Dorado could improve the integration, there have been some compatibility issues with some operating system functions.
It had a lot of integration and supported all the platforms, so I was happy with what they were offering. The biggest selling point was the way the vendor upgraded it: Dell upgraded the software licenses sold with it. Whenever the hardware was end-of-life, we could upgrade the controller or add a new disk or whatever. There was no life cycle of three or five years; we could carry on just by upgrading. It had many features, like snapshots, replication, on-the-fly RAID levels, mix-and-match files, those kinds of things.
What I understand is that this is a 13-year-old architecture, so it has lived its life and they're phasing it out. Honestly, we were initially struggling with the integration with VMware (but that was fixed with VMware 6.5), and then there was the fact that it was built around a 10Gb network. At that time, it had the longevity to go to 100Gb as well. It got us thinking: when we go to a containerized architecture, what do we need to do to fix the infrastructure?
At the moment, I can't think of anything that needs to be improved; however, the feature that we're waiting on is better integration with the cell services. I know Pure has a company that's working on the cell system, but it's still not completely there yet.
There could be improvements in public cloud integration.
In the future, one innovation could be for them to make an embedded backup system.
The integration with S3 needs some improvement.
What FlashBlade can do with S3 buckets needs to be improved. Other things, such as NFS, are simple to implement, but S3 can benefit from improvements.
One of the most valuable features is the ease of deployment.
Integration with the compute capacity is good, as this is just the storage component.
Orchestration and management are good.
The performance and capacity-based costs are also good.
Another advantage is that HPE sells everything. This includes all of the capabilities of the hardware, like replication, snapshot, and other specific features. They are all included from the get-go, as opposed to everything being separate and in another budget. When you buy it, you can do whatever you have to be able to do with it out of the box.
There really isn't any aspect of the solution that needs improvement for the customer other than its price.
It is a very good solution, but the Georgia Republic is a very small country and customers in both the government public sector and in the private sector do not have money to purchase enterprise or high performance solutions. They are looking at mid-range or mid-class solutions.
I can say that they need to simplify the solution. In SimpliVity, they need a lot of integration with virtualization technologies. For example, putting some add-ons or plugins in vCenter. vCenter is a management software of VMware virtualization.
Secondly, it would be better if they could simplify the deployment of Primera. Thirdly, if you have already purchased Primera and need to scale your infrastructure by buying more hard disks, you will need to purchase the Rebalance Service from HP Enterprise. They need to improve that methodology.
The customers need solutions that do not require a lot of administrative tasks.
We really like HPE InfoSight. It is an AI-driven interface for hybrid cloud. It gives more insight into any virtualized infrastructure, such as performance issues and proactive recommendations, for example, alerts and items of that nature. It's been quite helpful so far.
The performance of the solution is excellent. It performs far better than you would expect.
The initial setup is very straightforward. It's not overly complex.
It outperforms on latency. It's very fast; we're talking about 0.4 milliseconds. The faster the data works, the faster you get the response. This is really low latency; it's great performance.
The dashboards are very good. It has a very user-friendly kind of dashboard that is easy to understand; there isn't any complex stuff or too much information. That said, it has deep integration into HPE InfoSight, which gives you so much information, even more than you want.
The customization capabilities are excellent.
Implementation was mainly done by a local resource, because we are not a deployment partner. The resource connected to somebody remotely from a site in Egypt. We managed to deploy it in half a day for each site. The first time that we did the provisioning, it took time, but it was a relatively straightforward process.
We had some requirements, like SRM integration, where we needed some guidance. Dell EMC has suggested that we use CloudIQ, so we want to explore that option. However, we are not using it right now.
With the NVMe technology, performance in terms of IOPS has improved. Things are generally faster, although there are some bottlenecks with the integration of IBM servers.
The biggest way that PowerMax has improved the way our organization functions is through an increase in performance. The business of pharma is complex and the IOPS demand is huge. In the past, we used VMAX storage, and there was a big issue with the performance. Everybody complained about performance, servers, and storage, saying that they didn't have enough space. We tried many different solutions in an attempt to solve the performance issue.
For example, we tried reducing the data that was stored on disk, and we tried removing unused data. We turned to development and asked that some programs have fewer features. Finally, management made the decision to implement the PowerMax solution, and it solved the issue. As soon as we migrated from VMAX to PowerMax NVMe, the performance increased and everybody felt better.
The security is good. We enabled DSE for our encryption.
CloudIQ has made our lives better. It provides notifications, where you receive an email to let you know about your storage and your SAN. It is a powerful tool, although we have had to upgrade it a few times. Overall, it is a good monitoring tool that gives us a powerful and easy way to monitor our servers.
I would absolutely recommend using it. I would also suggest negotiating and testing it. I bought a very small system of 10 terabytes that I put in one of our labs for testing so that my team can learn it, and I could play with it. We tested it, and after we were comfortable with the capabilities of the system and building things in VMware, which is a really critical part of the whole integration, we tested three different solutions from HP, Dell, etc. After the testing, it was clear to us that the Pure FlashArray X NVMe was the easiest to manage and configure and had the best performance that we had seen in all the arrays. We are not testers, but we could tell. We could see the speed at which the databases came up and everything else. After testing, you will be convinced that Pure FlashArray X NVMe is probably the best box or right there in terms of performance. We tested in early 2019. There might be another solution that is doing better today.
I would rate Pure FlashArray X NVMe a nine out of ten. The only reason I won't give it a ten is the price. Its feature set is pretty complete. I'm pushing it right now. It is like you buy a sports car and then you complain that you don't have a big trunk to put a lot of luggage. You are complaining about the wrong thing here. You bought the thing because it is fast. Similarly, we bought it because it is fast. From that perspective, whether they can address NAS or other things like that is just icing on the cake for me. Its price is a little high right now. Otherwise, I would have given it a ten.
The initial setup is very easy and very quick. It's not too complex. We found it to be rather straightforward.
The advantage of FlashSystem is the stellar visualization of data integration. We deployed the solution very quickly due to the fact that, when the system was implemented, the migration was transparent. The tools make everything very clear for the user.
We're not satisfied with the deduplication and compression for our volumes. When we enable those features, we see issues like degraded virtual machine performance, which is quite a complex thing to troubleshoot with FlashSystem. We'd like to have better technical support, because they ask lots of questions when we could just have a remote session and resolve the issue. I'd like to see application-level integration of Microsoft Hyper-V with the IBM storage in the next upgrade.
We replaced an older, high-performance storage device that was very expensive. With PowerStore, we were able to achieve the IOPS, and we were also able to get a data compression rate significantly above what we had expected. We were able to retire that older, very expensive piece of storage by bringing in the PowerStore. It's been faster and cheaper than we had expected, per terabyte.
Another reason that we were after this machine was PowerStore's VMware integration. We're a very large VMware customer. Some 98 percent of our workload runs on VMware.
In the first weeks, we had some problems with the dedupe. According to the warranty, we should have had a dedupe rate of at least two, and we had not reached that value. We got an additional hard disk to match the planned capacity of the system, and this helped a lot. We got to a dedupe rate of 1.9, which was very good.
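For reference, a dedupe rate like the 1.9 quoted above is simply the logical (written) capacity divided by the physical capacity actually consumed. A tiny sketch, with made-up capacity figures chosen only to reproduce that ratio:

```python
def dedupe_ratio(logical_tb, physical_tb):
    """Data-reduction ratio: terabytes written per terabyte
    physically stored after deduplication."""
    return logical_tb / physical_tb

# e.g. 38 TB of logical data stored in 20 TB of physical capacity
print(round(dedupe_ratio(38.0, 20.0), 1))
```

A warranted rate of "at least two" thus means the array must store at least twice as much logical data as the physical space it consumes.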
What we are missing is the monitoring. We cannot implement the health check of the system in our monitoring system. We have to open the PowerStore GUI every day.
Also, we have tried to install a separate virtual machine to integrate PowerStore with vCenter; VMware provides a virtual machine with Photon OS for this. We have done this integration twice, and each time it ran for some weeks and then stopped working, and I don't know why. We have not used it again. It has nice features, saves a lot of time, and creates a good integration, but it needs to be more stable.
Overall, they need to make the system stable. Again and again, we have problems with upgrades. The upgrades themselves are running fine, but after the upgrade is when we have a problem. With the update to 1.4, we had a head crash. They told us, "This is a known issue. Please upgrade to 2." We upgraded to 2 and, one week later they told us, "Yeah, there are some issues in 2.0.0. You can lose data. Please upgrade to 2.0.1." Overall, they need to make the system stable.
I try to avoid updates for such important, central systems. They require downtime for the whole company, as this is our only storage. It's not good to do so many upgrades. I have used other storage systems and, with them, it was never necessary to do so many upgrades in one year. Last year, I did four upgrades for the PowerStore but I have never done four upgrades over the lifetime of other storage systems. They have run four, five, or six years, sometimes more. I have never patched so often as I have with PowerStore.
VMware integration with the product is good. We use vSphere, which is one of the main VMware functions. In addition, we use vSAN for storage, and we have to use the SIEM.
This solution has helped us to simplify our storage operations, although we have a lot of storage devices and too few administrators. Because of this, we're looking for a software solution to assist us with administration. This is something that we will be doing next year.
Overall, we're quite happy with the product because we can move the data that is stored on more than 10 of our current storage devices to a single PowerStore. In terms of efficiency, this solution is the best choice for us.
PowerStore is easy to use. All the drives use soft encryption. To upgrade it, you download the app and it runs by itself. It's very easy to deploy, share, and create volumes. It's active, so you can have two nodes on one appliance; if node A goes down, node B is still running.
I would rate PowerStore's machine learning and AI eight out of 10 because customer automation is very easy; it's just a click of a button. You can also use what they call CloudIQ, which is online storage-monitoring software. If you log on to the internet, you can check on your arrays to see how much space is left. The CloudIQ analytics software is free as long as you have an account with Dell.
Dell's built-in intelligence is the best because it can also calculate how much data is needed for storage beforehand and if you need to add more drives or anything. The built-in intelligence can adapt quickly to changing workload requirements. We were able to migrate from IBM storage by uploading an image. With other devices, it's sometimes hard to migrate from different forms of storage, but PowerStore was very quick. We didn't have any downtime because once we were able to create the image, we just had to do a cut-over on the other side.
Pretty soon it's going to be Meditech certified, so it's going to be able to run Meditech. Right now we are using a different solution to run Meditech, but once it gets certified, we'll be able to move from the other appliance. VMware integration is very easy too. PowerStore gives us leverage, we can tell how much space is allocated to the VM and what's happening on a VM.
PowerStore helps to simplify IT operations. At the site where it is installed, we have consolidated two tiers with the high-IOPS and lower tiers. We have enough capacity with lower power consumption and enough performance to handle the required workload.
It gives us the capacity and the performance we need. Before, things were on 10K disks, while this is flash. There is a very big difference. Previously, we were connected directly, with a back-to-back connection between servers and storage. Now, we have multiple servers connected to SAN switches and those switches are connected to the storage. For sure, the performance of the system is sky-high. In terms of IOPS we are fully satisfied by the PowerStore.
We use the solution’s built-in VMware hypervisor to run VMs and virtualized applications, directly on the storage appliance. We manage multiple sites and we don't have enough teams to allocate support at all sites. So our support team handles all our sites. It's very important for us to have a consolidated infrastructure that we can manage remotely, without needing someone available locally to do the patching, power-up, creation, and lifecycle management tasks. Having this box, along with the integration with VMware, and VMware's capabilities, gives us what we need.
The most valuable feature is that it is easy to use this frame. I am a SAN administrator, but I was able to train my colleague, who had only been a VMware administrator, on the PowerStore in about half a day. Now he's autonomous in assigning volumes and creating data stores, et cetera. I don't have to help him anymore. That is the beauty of this unit and it's due to the effort Dell EMC put into the GUI.
The VMware integration is very good. It integrates all the vSphere interactions when you create your data store, directly from the PowerStore GUI, into your VMware cluster. My colleague who was the VMware administrator is now able, in one shot, to provision his storage and automatically create a data store relying on this storage. That has freed up some of his time.
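The "one shot" provisioning described above — creating a volume on the array and getting a backing datastore in the same step — can be sketched as follows. The StorageClient and VMwareClient classes here are stand-ins invented for illustration; they are not the real PowerStore REST API or the vSphere SDK.

```python
# Hypothetical sketch of one-shot provisioning: create a volume on the
# array, then immediately create a VMware datastore backed by it, so the
# administrator never touches two separate tools.

class StorageClient:
    """Stand-in for an array's management API (assumption, not real API)."""
    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size_gb):
        self.volumes[name] = {"size_gb": size_gb}
        return name

class VMwareClient:
    """Stand-in for a vSphere client (assumption, not the real SDK)."""
    def __init__(self):
        self.datastores = {}

    def create_datastore(self, name, backing_volume):
        self.datastores[name] = {"backing": backing_volume}
        return name

def provision(storage, vmware, name, size_gb):
    """Provision storage and its datastore in a single step."""
    vol = storage.create_volume(name, size_gb)
    return vmware.create_datastore(f"ds-{name}", vol)

storage, vmware = StorageClient(), VMwareClient()
print(provision(storage, vmware, "app01", 500))   # prints "ds-app01"
```

The point of the design is that the storage step and the hypervisor step are composed into one call, which is what frees up the VMware administrator's time.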
Another important feature is the power of this frame. It's very powerful. We get less than a millisecond of response time, all the time, even during backup windows. That's very good compared with the VNX, of which we have two. We also have a Unity connected on this same SAN for the same kind of application. We did a comparison among the three models of frames: the VNX, which is rather old, the all-flash Unity, which is not so old, and the PowerStore. PowerStore is really on top of all of them.
Of course, it enables us to add compute and capacity independently. We have added a lot of VMware clusters in our SAN thanks to the PowerStore. We are going to decommission the old VNXs because it's better to add capacity on the PowerStore than to keep the old models.
We did the integration and installation in conjunction with an Intelliflash support engineer. They're good; they're above average. Originally, when Intelliflash was Tegile, the support engineers were knowledgeable about everything they ran, had the patience to assist, and were very helpful.
One of the most valuable features is its integration with other cloud solutions. We have a presence within Amazon EC2 and we leverage compute instances in there. Being able to integrate with compute, both locally within Zadara, as well as with other cloud vendors such as Amazon, is very helpful, while also being able to maintain extremely low latency between those connections. We have leveraged 10-Gig direct connections between them to be able to hook up the storage element within Zadara with the cloud platforms such as Amazon EC2. That is one of the primary technical driving factors.
The other large one is the partnership and the managed service offering from Zadara. That means they have a vested interest and are able to understand any issues or problems that we have. They are there to help identify and work through them and come to solutions for us. We have a unique workload, so problems that we may have to identify and work through could be unique to us. Other customers that are just looking to manage a smaller amount of data would not ever identify or have to work through the kinds of things we do. Having a partner that is interested in helping to work through those issues, and make recommendations based on their expertise, is very valuable to us.
Zadara's dedicated cores and memory provide us with a single-tenant experience. We are multi-tenant in that we manage multiple organizations and customers within our environment. We send all of that data to that single-tenant management aspect within Zadara. We have a couple of different virtual, private storage arrays, a couple of them in high-availability. The I/O engine type we're leveraging is the 2400s.
We also have disaster recovery set up on the other side of the U.S. for replication and remote mirroring. Being able to manage that within the platform allows us to add additional storage ourselves, to change the configuration of the VPSA to scale up or scale down, and to make any changes to meet budgetary needs. It truly allows us to manage things from a performance standpoint as well. We can also rely upon Zadara, as a managed-services provider, to manage those requests on our behalf. In the event that we needed to submit a ticket and say, "Hey, can you add additional storage or volumes?" it's very helpful to have them leverage their time and expertise to perform that on our behalf.
It is also very important that Zadara provides drive options such as SSD, NL-SAS, and SSD cache, for our workload in particular. We require our data to not only be accessible, but to be fast. Typically, most stored data that is hotter or more active is pushed onto faster storage, something like flash cache. The flash cache we began with during our first year with Zadara worked pretty well initially. But our workload is a little unique, and after that first year the volume of data exceeded what that type of cache's logic could handle. The cache just looks at which data is most frequently accessed; usually the "first in" data sits on that hot flash cache. Our workload was a bit more random than that, so we weren't getting as much of the benefit from the flash cache.
The fact that Zadara provides us with the ability to actually add a hybrid of both SSDs and SATA allows us to specifically designate what volumes and what data should be on those faster drives, while still taking into account budget constraints. That way, we can manage that hybrid and reduce the performance on some of the drives that are housing data that is really being stored long-term and not accessed. Having that hybrid capability has tremendously helped with the flexibility to manage our needs from a performance standpoint as well as a cost perspective.
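The hybrid placement described above — explicitly designating which volumes live on the faster drives, instead of relying only on access-frequency caching — can be sketched like this. The tier names and the access-count threshold are assumptions made for illustration, not Zadara's actual placement engine.

```python
# Hypothetical sketch of hybrid tier placement: a volume is pinned to a
# tier explicitly by the administrator, or assigned by access frequency
# when it is not pinned.

def place_volume(volume, ssd_threshold=100):
    """Return the tier a volume should live on.

    volume: dict with 'name', 'accesses_per_day', and an optional 'pin'.
    ssd_threshold: assumed cut-off for "hot" data (illustrative only).
    """
    if "pin" in volume:                 # admin designates the tier directly
        return volume["pin"]
    if volume["accesses_per_day"] >= ssd_threshold:
        return "SSD"                    # hot data goes on the fast drives
    return "NL-SAS"                     # cold, long-term data on cheap drives

volumes = [
    {"name": "db-logs", "accesses_per_day": 5000},
    {"name": "archive", "accesses_per_day": 2},
    {"name": "reports", "accesses_per_day": 3, "pin": "SSD"},  # explicit override
]
for v in volumes:
    print(v["name"], "->", place_volume(v))
```

The explicit "pin" is what gives the cost/performance control the review describes: rarely accessed data that still must be fast can be forced onto SSD, while bulk archival data stays on the cheaper tier.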
As far as I know, they also have solid support for the major cloud vendors out there, in addition to some others that I hadn't heard of. But they certainly support Amazon EC2 and Google and Rackspace, among others. Those integrations are very important. Most organizations have some sort of a cloud presence today, whether they're hosting certain servers or compute instances or some other workload out in the cloud. Being able to integrate with the cloud and obtain data and store data, especially with all these next-generation threats and things like ransomware out there, is important. Having backups and storage locations that you can push data to, offsite, or integrate with, is definitely key.
Our initial application was probably the simplest one. We were sunsetting a product, but we needed to do some movement and we needed some additional storage, but we knew that what we needed was going to change within six months as we got rid of one product and brought in another. To handle this, we started deploying Block storage with Zadara, which we then changed to Object storage and effectively sent back the drives related to the Block storage as we did that migration. This meant that we did not have to invest in new technology or different platforms but rather, we could do it all on one platform and we can manage that migration very easily.
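The Block-to-Object migration described above can be sketched as chunking a block volume and writing each chunk as a numbered object. The chunk size and the in-memory dict standing in for an S3-compatible object store are assumptions for illustration, not Zadara's migration tooling.

```python
import io

# Hypothetical sketch of migrating block data to object storage: read the
# block volume in fixed-size chunks and store each chunk as a numbered
# object. A plain dict stands in for a real S3-compatible object store.

def migrate_block_to_object(block_device, object_store, prefix,
                            chunk_size=4 * 1024 * 1024):
    """Copy a block device's contents into object storage, chunk by chunk."""
    index = 0
    while True:
        chunk = block_device.read(chunk_size)
        if not chunk:                   # end of device reached
            break
        object_store[f"{prefix}/chunk-{index:06d}"] = chunk
        index += 1
    return index                        # number of objects written

# Usage: a small in-memory "device" and a 1 KiB chunk size for the demo.
device = io.BytesIO(b"x" * 2500)
store = {}
written = migrate_block_to_object(device, store, "vol1", chunk_size=1024)
print(written)   # 3 objects: 1024 + 1024 + 452 bytes
```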
We use Zadara for most of our storage and it provides us with a single-tenant experience. We have a lot more customer environments running on it and although we don't use the compute services at the moment, we do use it for multi-tenant deployment for all of our storage.
I appreciate that they also offer compute services. Although we don't use them at the moment, they are something that we're looking at.
The fact that Zadara provides drive options such as SSD, NL-SAS, and SSD Cache is really useful for us. Much like in the way we can offer different deployments to our customers, having different drive sizes and different drive types means that we can mix and match, depending on customer requirements at the time they come in.
With available protocols including NFS, CIFS, and iSCSI, Zadara supports all of the main things that you'd want to support.
In terms of integration, Zadara supports all of the public and private clouds that we need it to. I'm not sure if it supports all of them on the market, but it works for everything that we require. This is something that is important to us because of the flexibility we have in that regardless of whether our customers are on-premises, in AWS, or otherwise, we can use Zadara storage to support that.
I would characterize Zadara's solution as elastic in all directions. There clearly are some limits to what technology can do, but from Zadara's perspective, it's very good.
With respect to performance, it was not a major factor for us so I don't know whether Zadara improved it or not. Flexibility around capacity is really the key aspect for us.
Zadara has not actually helped us to reduce our data center footprint, but that's because we're adding a lot more customers; we are growing. It has helped us to redeploy people to more strategic projects. The same is not as true of the budget, since that was already factored in, but we are able to focus on more strategic projects.