
NetApp Cloud Volumes Service for Google Cloud vs Zerto comparison


Comparison Buyer's Guide

Executive Summary
Updated on Jan 1, 2025

Review summaries and opinions

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Categories and Ranking

IBM Turbonomic
Sponsored
Ranking in Cloud Migration
5th
Average Rating
8.8
Reviews Sentiment
7.4
Number of Reviews
205
Ranking in other categories
Cloud Management (4th), Virtualization Management Tools (4th), IT Financial Management (1st), IT Operations Analytics (4th), Cloud Analytics (1st), Cloud Cost Management (1st), AIOps (5th)
NetApp Cloud Volumes Service for Google Cloud
Ranking in Cloud Migration
15th
Average Rating
9.4
Reviews Sentiment
8.4
Number of Reviews
3
Ranking in other categories
Cloud Storage (18th), Public Cloud Storage Services (18th)
Zerto
Ranking in Cloud Migration
3rd
Average Rating
9.0
Reviews Sentiment
7.2
Number of Reviews
306
Ranking in other categories
Backup and Recovery (2nd), Cloud Backup (2nd), Disaster Recovery (DR) Software (2nd)
 

Mindshare comparison

As of May 2025, in the Cloud Migration category, the mindshare of IBM Turbonomic is 4.0%, down from 5.0% compared to the previous year. The mindshare of NetApp Cloud Volumes Service for Google Cloud is 1.6%, down from 3.0% compared to the previous year. The mindshare of Zerto is 5.3%, up from 3.3% compared to the previous year. It is calculated based on PeerSpot user engagement data.
Cloud Migration
 

Featured Reviews

Keldric Emery - PeerSpot reviewer
Saves time and costs while reducing performance degradation
It's been a very good solution. The reporting has been very, very valuable as, with a very large environment, it's very hard to get your hands on the environment. Turbonomic does that work for you and really shows you where some of the cost savings can be done. It also helps you with the reporting side. Me being able to see that this machine hasn't been used for a very long time, or seeing that a machine is overused and that it might need more RAM or CPU, et cetera, helps me understand my infrastructure. The cost savings are drastic in the cloud feature in Azure and in AWS. In some of those other areas, I'm able to see what we're using, what we're not using, and how we can change to better fit what we have. It gives us the ability for applications and teams to see the hardware and how it's being used versus how they've been told it's being used. The reporting really helps with that. It shows which application is really using how many resources or the least amount of resources. Some of the gaps between an infrastructure person like myself and an application are filled. It allows us to come to terms by seeing the raw data. This aspect is very important. In the past, it was me saying "I don't think that this application is using that many resources" or "I think this needs more resources." I now have concrete evidence as well as reporting and some different analytics that I can show. It gives me the evidence that I would need to show my application owners proof of what I'm talking about. In terms of the downtime, meantime, and resolution that Turbonomic has been able to show in reports, it has given me an idea of things before things happen. That is important as I would really like to see a machine that needs resources, and get resources to it before we have a problem where we have contention and aspects of that nature. It's been helpful in that regard. Turbonomic has helped us understand where performance risks exist. 
Turbonomic looks at my environment and at the servers and even at the different hosts and how they're handling traffic and the number of machines that are on them. I can analyze it and it can show me which server or which host needs resources, CPU, or RAM. Even in Azure, in the cloud, I'm able to see which resources are not being used to full capacity and understand where I could scale down some in order to save cost. It is very, very helpful in assessing performance risk by navigating underlying causes and actions. The reason why it's helpful is because if there's a machine that's overrunning the CPU, I can run reports every week to get an idea of machines that would need CPU, RAM, or additional resources. Those resources could be added by Turbonomic - not so much by me - on a scheduled basis. I personally don't have to do it. It actually gives me a little bit of my life back. It helps me to get resources added without me physically having to touch each and every resource myself. Turbonomic has helped to reduce performance degradation in the same way as it's able to see the resources and see what it needs and add them before a problem occurs. It follows the trends. It sees the trends of what's happening and it's able to add or take away those resources. For example, we discuss when we need to do certain disaster recovery tests. Over the years, Turbo will be able to see, for example, around this time of year that certain people ramp up certain resources in an environment, and then it will add the resources as required. Another time of year, it will realize these resources are not being used as much, and it takes those resources away. In this way, it saves money and time while letting us know where we are. We've saved a great deal of time using this product when I consider how I'd have to multiply myself and people like me who would have to add resources to devices or take resources away. We've saved hundreds of hours. 
Most of the time those hours would have to be after hours as well, which are more valuable to me as that's my personal time. Those saved hours are across months, not years. Considering the number of resources that Turbonomic adds, removes, and places, doing it all myself would add up to hundreds of hours monthly without Turbonomic's help. It helps us meet SLAs mainly because we're able to keep the servers going and in a state where, if we need to add resources, we can add them at any given time. It will keep our SLAs where they need to be. If we had downtime because we had to add or remove resources in an emergency, that would prevent us from meeting our SLAs. We also use it to monitor Azure and to monitor our machines in terms of the resources that are out there and the cost involved. In a lot of cases, it does a better job of giving us cost information than Azure itself does. We're able to see the cost per machine. We're able to see the unattached volume and storage that we are paying for. It gives us a great level of insight. Turbonomic gives us the time to focus on innovation and ongoing modernization. Some of the tasks that it does are tasks that I would not necessarily have to do. It's very helpful in that I know that the resources are where they need to be, and it gives me an idea of what changes need to be made or what suggestions it's making. Even if I don't take them, I'm able to get a good idea of some best practices through Turbonomic. One way Turbonomic helps bring new resources to market is that we are now able to see (or at least monitor) the resources before they get out to the general public within our environment. We saw immediate value from the product in the test environment. 
We set it up in a small test environment and we started with just placement, and we could tell that the placement was being handled more efficiently than what VMware was doing. There was value for us in placement alone. Then, after placement, we began to look at the resource recommendations as well, and we immediately began to see a change in the environment. It has made application performance better, mainly because we are able to give resources and take resources away based on what the need is. Our expenses, definitely, have been in a better place based on the savings that we've been able to make in the cloud and on-prem. Turbonomic has been very helpful in that regard. We've been able to see the savings easily based on the reports in Turbonomic. That, and just seeing the machines that are not being used to capacity, allows us to set everything up so it runs a bit more efficiently.
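The trend-based rightsizing the reviewer describes can be sketched, in heavily simplified form, as a utilization check over a sampling window. This is an illustrative toy, not Turbonomic's actual analytics; the function name, thresholds, and sample data below are all hypothetical.

```python
# Illustrative sketch only: a simplified rightsizing check in the spirit of
# what the reviewer describes. Real tools weigh many more signals
# (memory, IOPS, seasonality, placement constraints) than this toy does.

def rightsizing_action(cpu_samples, high=0.80, low=0.20):
    """Recommend a resize based on average CPU utilization over a window."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return "scale_up"    # sustained pressure: add resources before contention
    if avg < low:
        return "scale_down"  # sustained idleness: reclaim resources, save cost
    return "no_change"

# Example: a VM averaging ~9% CPU over a week is a scale-down candidate.
week = [0.10, 0.08, 0.12, 0.07, 0.09, 0.08, 0.09]
print(rightsizing_action(week))  # scale_down
```

The point of the sketch is the workflow the reviewer values: the tool watches the trend and proposes (or schedules) the action, so no one has to touch each VM by hand.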
JW
Tools and dashboard enable us to view our peak loads and to tune the system as we go, reducing costs
Confidential Computing is really the key for us because of the security requirements for HIPAA compliance. With HIPAA compliance, there are policies and rules in place on the ability to look at a patient's data. There are rules around security, encryption, and decryption on any part of that data. When you put in the data, it is encrypted when it goes to storage, and when you pull the data back, it has to be decrypted. And you have to have two-phase authentication built into that. The Confidential Computing adds another layer of security to the storage infrastructure, which is pretty slick stuff. The NetApp service's high availability is very important when it comes to upscale and downscale. Our system is a digital system so it requires immediate response for telemedicine. When your patients are going through a telemedicine session, you need the video to work properly and respond in a timely manner, and the doctors are actually taking notes regarding that specific patient session. In terms of its storage snapshot efficiencies, the service is highly efficient. We are only doing things in small batches right now because we have not converted all of the data, but we have tested them in the Google Cloud and they work efficiently.
Sachin Vinay - PeerSpot reviewer
Leverage disaster recovery with reliable support and cost-effective future-proof features
Zerto is straightforward to implement because it only requires the installation of an agent on the VMs designated for migration. A service, typically a VM, must also be deployed at the disaster recovery location. This entire process is simple and can be completed within three days. Zerto's near-synchronous replication occurs every minute, allowing for highly granular recovery points. This means that even if interruptions or malware disruptions occur within that minute, Zerto can restore to the last known good state, effectively recovering the entire setup to the latest backup. This capability ensures high data security and minimizes potential data loss. One of the main benefits of implementing Zerto is its data compression, which significantly reduces the load on our IPsec VPN. Zerto compresses data by 80 percent before transmitting it across the VPN, minimizing the data transferred between geographically dispersed locations. This compression and subsequent decompression at the destination alleviate the strain on the VPN, preventing overload and ensuring efficient data synchronization. Zerto simplifies malware protection by integrating it into its disaster recovery and synchronization features. This comprehensive approach eliminates the need for separate antivirus setups in virtual machines and applications. It streamlines our security measures and removes the need for additional software or solutions, resulting in an excellent return on investment. Zerto's single-click recovery solution offers exceptional recovery speed. Through the user interface, a single click allows for a complete restoration from the most recent backup within two to three minutes, enabling rapid recovery and minimal downtime. Zerto's Recovery Time Objective is excellent. In the past, if a virtual machine crashed, we would recover it from a snapshot, which could take one to two hours. With Zerto, the recovery process takes only five minutes, and users are typically unaware of any disruption. 
This allows us to restore everything quickly and efficiently. Zerto has significantly reduced our downtime. When malware affects our data, Zerto immediately notifies us and helps us protect other applications, even those not yet implemented with Zerto. By monitoring these applications, we can quickly identify and address any potential malware spread, minimizing downtime across our systems. Zerto significantly reduces downtime and associated costs during disruptions. Our services are unified, so in the event of a disruption without Zerto, even a half-day disruption would necessitate offline procedures. This would lead to increased manpower, service delays, and substantial financial losses due to interrupted admissions and other critical processes. By unifying service processes, Zerto minimizes the impact of outages. Zerto streamlines our disaster recovery testing across multiple locations by enabling efficient failover testing without disrupting live services. Traditionally, DR testing required downtime of critical systems, but Zerto's replication and failover capabilities allow us to test in parallel with live operations. This non-disruptive approach ensures continuous service availability while validating our DR plan, even in scenarios like malware attacks, by creating a separate testing environment that mirrors the live setup. This comprehensive testing provides confidence in our ability to handle real-world incidents effectively. This saves us over 60 percent of the time. Zerto streamlines system administration tasks by automating many processes, thereby reducing the workload for multiple administrators. This allows them to focus on other university services that require attention and effectively reallocate support resources from automated tasks to those requiring more dedicated management. Zerto is used exclusively for our critical services, providing up to a 70 percent improvement in our IT resilience.
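The bandwidth effect of the 80 percent compression the reviewer cites can be shown with back-of-the-envelope arithmetic. The change-rate figures below are hypothetical examples, not measured Zerto values.

```python
# Back-of-the-envelope sketch: how much replication traffic actually crosses
# the IPsec VPN if the product compresses it by a given percentage first.
# The 80% ratio is the reviewer's figure; the 500 MB/min change rate is made up.

def vpn_transfer_mb(changed_mb, compression_pct=80):
    """MB sent over the VPN after compressing changed_mb by compression_pct."""
    return changed_mb * (100 - compression_pct) / 100

per_minute = vpn_transfer_mb(500)   # 500 MB of changed blocks per minute
print(per_minute)                   # 100.0 MB on the wire per minute
print(per_minute * 60)              # 6000.0 MB/hour instead of 30000.0
```

At these assumed rates, compression cuts the sustained VPN load five-fold, which is why the reviewer sees it as the key to keeping the inter-site tunnel from overloading.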

Quotes from Members


Pros

"We have a system where our developers automate machine builds, and that is constantly running out of resources. Turbonomic helps us with that, so I don't have to keep buying hardware. The developers always say, "They don't have enough. They don't have enough. They don't have enough," when they just configured it improperly. Therefore, Turbonomic helps us identify configuration issues on their side so it doesn't cost me money on the other end to buy resources that I don't really need."
"With over 2500 ESX VMs, including 1500+ XenDesktop VDI desktops, hosted over two datacentres and 80+ vSphere hosts, firefighting has become something of the past."
"We have seen a 30% performance improvement overall."
"Turbonomic can show us if we're not using some of our storage volumes efficiently in AWS. For example, if we've over-provisioned one of our virtual machines to have dedicated IOPs that it doesn't need, Turbonomic will detect that and tell us."
"The feature for optimizing VMs is the most valuable because a number of the agencies have workloads or VMs that are not really being used. Turbonomic enables us to say, 'If you combine these, or if you decide to go with a reserve instance, you will save this much.'"
"We have VM placement in Automated mode and currently have all other metrics in Recommend mode."
"With Turbonomic, we were able to reduce our ESX cluster size and save money on our maintenance and license renewals. It saved us around $75,000 per year but it's a one-time reduction in VMware licensing. We don't renew the support. The ongoing savings is probably $50,000 to $75,000 a year, but there was a one-time of $200,000 plus."
"It became obvious to us that there was a lot more being offered in the product that we could leverage to ensure our VMware environment was running efficiently."
"In terms of its storage snapshot efficiencies, the service is highly efficient. We are only doing things in small batches right now because we have not converted all of the data, but we have tested them in the Google Cloud and they work efficiently."
"High availability is very important to us because we have a production environment. High availability is the highest priority for us to continue keeping our systems running."
"Storage was taking up maybe 10 to 20% of my life at the startup, and now it takes up zero. I was personally running all the infrastructure for the company. Now that we've moved to NetApp, I don't have to worry about making sure it's up and running. It's made my life personally much better."
"This product is impressively easy to use. It's dummy-proof, once it's set up."
"The instant recovery at DR locations is the most valuable feature. We're required to do periodic DR tests of critical databases, including Oracle and Microsoft SQL. We have recovery point objectives set for specific databases and we need to be able to achieve them. Zerto helps solve that business problem."
"We now have zero downtime using this product."
"Zerto also improves application availability as our business continues to increase our lifespan."
"Zerto's greatest strength is its speed."
"The dashboard was easy and the UI was simple."
"The most valuable feature is how quickly it powers down the original source VMs and the speed at which it powers up the new VMs. The amount of time it takes to put up the operating system is valuable. The speed is what I like the best."
"I found the very easy VPG setup, the easy recovery, and failover testing to be the most valuable features."
 

Cons

"Since the introduction of a HTML 5 based interface, our main - but minor - criticism of a less than intuitive operation managers' GUI would be the area of improvement."
"There is room for improvement [with] upgrades. We have deployed the newer version, version 8 of Turbonomic. The problem is that there is no way to upgrade between major Turbonomic versions. You can upgrade minor versions without a problem, but when you go from version 6 to version 7, or version 7 to version 8, you basically have to deploy it new and let it start gathering data again. That is a problem because all of the data, all of the savings calculations that had been done on the old version, are gone. There's no way to keep track of your lifetime savings across versions."
"They could add a few more reports. They could also be a bit more granular. While they have reports, sometimes it is hard to figure out what you are looking for just by looking at the date."
"Turbonomic doesn't do storage placement how I would prefer. We use multiple shared storage volumes on VMware, so I don't have one big disk. I have lots of disks that I can place VMs on, and that consumes IOPS from the disk subsystem. We were getting recommendations to provision a new volume."
"The automation area could be improved, and the generic reports are poor. We want more details in the analysis report from the application layer. The reports from the infrastructure layer are satisfactory, but Turbonomic won't provide much information if we dig down further than the application layer."
"Before IBM bought it, the support was fantastic. After IBM bought it, the support became very disappointing."
"Enhanced executive reporting standard with the tool beyond the reports that can be created today. Something that can easily be used with upper management on a monthly or quarterly basis to show the impact to our environment."
"We don't use Turbonomic for FinOps and part of the reason is its cost reporting. The reporting could be much more robust and, if that were the case, I could pitch it for FinOps."
"The user interface has room for improvement. We would like this service to be more integrated with Azure, which is very easy to manage and use. It was easy to create volumes and add capacity pools in Azure, but in Google Cloud, we can only create separate volumes. We need more management or configuration options in the user interface."
"It would help if they increased the area in which they employ artificial intelligence, by starting to do assessments on the environments, to project those. They're not using any AI tools, currently, on the administrative side."
"I would like for the sales team to get in contact more often and let me know what I should be doing next, what we should be doing about new features. So it would be nice if I heard a little bit more from him. From a technology perspective, I have no complaints."
"There's one feature that SRM had that Zerto doesn't have, and it's one that we've been asking for. With the orchestration part of the failover, with our DR and our primary sites, the IP addresses are almost identical. The only difference is one octet. With SRM, we could say during a failover change. With Zerto, we keep hearing that it's coming, but we haven't received it yet. It's a feature that would be very beneficial. It would reduce the time a little bit more."
"The pricing could be a little bit lower."
"There can be a bit more logging. It seems a bit harder to find logs for test restores and all that. If they had a way to email the results of a test restore, that would be excellent."
"I had a couple of questions after deployment, but nothing major, about a couple of ways I could tweak it."
"We have had issues with licensing, where the license we've been given by Zerto support doesn't include VSS replication, which was a pain at the time."
"While going in, we were looking at the backup tool so that we had a DR tool and a backup tool, but they stopped developing their backup solution built into it. That was a bummer for us, so now, we have a DR solution, and we have a backup solution."
"Zerto should add the capability to replicate the same VM to multiple sites."
"There is a need to allow the source vCenter Inventory to be imported with a single click."
 

Pricing and Cost Advice

"I'm not involved in any of the billing, but my understanding is that is fairly expensive."
"We felt the pricing was very fair for the product. It is in no way prohibitive for larger deployments, unlike other similar product on the market."
"You should understand the cost of your physical servers and how much time and money you are spending year over year on expanding your virtual farm."
"I consider the pricing to be high."
"When we have expanded our licensing, it has always been easy to make an ROI-based decision. So, it's reasonably priced. We would like to have it cheaper, but we get more benefit from it than we pay for it. At the end of the day, that's all you can hope for."
"I don't know the current prices, but I like how the licensing is based on the number of instances instead of sockets, clusters, or cores. We have some VMs that are so heavy I can only fit four on one server. It's not cost-effective if we have to pay more for those. When I move around a VM SQL box with 30 cores and a half-terabyte of RAM, I'm not paying for an entire socket and cores where people assume you have at least 10 or 20 VMs on that socket for that pricing."
"I have not seen Turbonomic's new pricing since IBM purchased it. When we were looking at it in my previous company before IBM's purchase, it was compatible with other tools."
"Everybody tells me the pricing is high. But the ROIs are great."
"We don't need so much space, and there is no option to pay as we go or use just what we need. Also, the only way to increase performance is by increasing the level of the service."
"I have heard that it is expensive, but that is not my world."
"The licensing costs are not cheap. It is kind of an expensive product. However, I am a get-what-you-pay-for kind of person. After using this product, I can understand why the licensing costs are high."
"Zerto does a per-workload licensing model, per-server. It is simple and straightforward, but it is not super flexible. It is kind of a one size fits all. They charge the same price for those workloads. I feel like they could have some flexible licensing option possibly based on criticality, just so we could protect less important work. I would love to protect every workload in my environment with Zerto, whether I really need it or not, but the cost is such that I really have to justify that protection. So, if we had some more flexibility, e.g., you could protect servers with a two-, three-, or four-hour RPO at a certain price point versus mission-critical every five minutes, then I would be interested in that."
"Zerto is not cheap but is an invaluable asset."
"The pricing is very reasonable."
"It is expensive."
"Having backup and DR is somewhat moderately important to us. The problem with us, and a lot of companies, is the issue with on-prem Zerto. It utilizes whatever you have for a SAN. Or, if you are like us, we have a vSAN and that storage is not cheap. So, it is cheaper to have a self-contained backup system that is on its own storage rather than utilizing your data center storage, like your vSAN. While it is somewhat important to have both backup and DR, it is not incredibly important to have both. I know Zero is trying to heavily dip their toes in the water of backup and recovery. Once you see what Zerto can do, I don't think anyone will not take Zerto because they don't necessarily specialize in backup and recovery 100 percent. They do replication so well."
"It was pretty appropriate. It was not too cheap, not too expensive. It was just about right."
 

Comparison Review

it_user159711 - PeerSpot reviewer
Nov 9, 2014
VMware SRM vs. Veeam vs. Zerto
Disaster recovery planning is something that seems challenging for all businesses. Virtualization in addition to its operational flexibility, and cost reduction benefits, has helped companies improve their DR posture. Virtualization has made it easier to move machines from production to…
 

Top Industries

By visitors reading reviews
IBM Turbonomic
Financial Services Firm 15%
Computer Software Company 14%
Manufacturing Company 9%
Insurance Company 7%

NetApp Cloud Volumes Service for Google Cloud
Educational Organization 56%
Computer Software Company 10%
Manufacturing Company 10%
Financial Services Firm 8%

Zerto
Computer Software Company 23%
Financial Services Firm 11%
Manufacturing Company 8%
Healthcare Company 7%
 

Company Size

By reviewers
Large Enterprise
Midsize Enterprise
Small Business
No data available
 

Questions from the Community

What is your experience regarding pricing and costs for Turbonomic?
It offers different scenarios. It provides more capabilities than many other tools available. Typically, its price is...
What needs improvement with Turbonomic?
The implementation could be enhanced.
What is your primary use case for Turbonomic?
We use IBM Turbonomic to automate our cloud operations, including monitoring, consolidating dashboards, and reporting...
Ask a question
Earn 20 points
What advice do you have for others considering Oracle Data Guard?
I'll whisper: we happened to replace VM Host Oracle and Data Guard with Zerto :-) during the Zerto implementation ...
What do you like most about Zerto?
Its ability to roll back if the VM or the server that you are recovering does not come up right is also valuable. You...
What is your experience regarding pricing and costs for Zerto?
The setup is somewhat expensive. I'd rate the pricing seven out of ten.
 

Also Known As

Turbonomic, VMTurbo Operations Manager
CVS for Google Cloud, NetApp CVS for Google Cloud, Cloud Volumes Service for Google Cloud, Cloud Volumes Service for GCP, NetApp Cloud Volumes Service for GCP
Zerto Virtual Replication
 

 


Sample Customers

IBM, J.B. Hunt, BBC, The Capita Group, SulAmérica, Rabobank, PROS, ThinkON, O.C. Tanner Co.
Atos, Bandwidth, Wuxi NextCode
United Airlines, HCA, XPO Logistics, TaxSlayer, McKesson, Insight Global, American Airlines, Tencate, Aaron’s, Grey’s County, Kingston Technologies
Find out what your peers are saying about NetApp Cloud Volumes Service for Google Cloud vs. Zerto and other solutions. Updated: April 2025.