Our primary use case is disaster recovery.
We have Zerto deployed on-premises at both our primary and DR locations.
In general, Zerto helps us with our DR plan because it makes things so easy.
Zerto does a very good job of providing continuous data protection. I've been very impressed and would rate it an eight out of ten. Especially when it comes to DR testing, it is very easy to work with and we are able to recover our infrastructure at our DR site within a matter of minutes.
When we have to failback or move workloads, Zerto decreases the time it takes and the number of people involved in the process. I've used other solutions and I haven't seen anything that compares to what it can do. It is difficult to estimate the exact time saving because it depends on the workload.
Realistically, you could have a single admin responsible for the restores, whereas with other solutions, depending on how big your environment is, it would take more people. In our environment, it would take upwards of five people to restore our core infrastructure; using Zerto reduces that number by about half.
With respect to DR management, Zerto has reduced the number of people involved in the process. I wasn't at the company when they used the previous product, so I'm not sure by exactly how much.
The most valuable feature is real-time replication, where we have the ability to recover things in near real-time.
It's very helpful for DR testing because we can recover VMs in an isolated bubble and prove our DR methodology.
Zerto is very easy to use, which is one of its big selling points. It takes just a few clicks to restore a VM, which means that it's easy to train somebody to help in a DR situation.
We have had some issues with trying to get certain parts of the backup or restore functionality to work. However, I cannot recall the specific details.
I have been using Zerto since I started with the company two years ago. In total, the company has been using it for approximately four years.
The stability has been very good. The only issues I can think of were ones that we caused ourselves, or maybe something with an update. We're running it 24/7 and we haven't had any major issues with it.
From my perspective, Zerto's scalability is very good. We have 60 VPGs and 122 virtual machines that we're using Zerto to replicate to our disaster recovery site. This will grow with any new infrastructure that we build. Any new servers, depending on their RTO or how soon we need to recover them, would be put into Zerto. Potentially, we will add some, although I'm not aware of any major growth at this time.
Our operations department monitors the dashboard just to confirm that our RTO and the VPG health look good. There are perhaps five people on that team watching the dashboard.
In my team, there are three of us that use it, although we don't look at it daily. It would only be if we get reports of an issue or we need to adjust a setting or something like that.
Overall, on a day-to-day basis, there are probably about five people that use it.
I have not personally dealt with the technical support but I know about the experience that my coworker has had. They are typically very helpful and provide good responses compared to other companies in the industry.
Zerto was recently acquired by HPE and there is some concern in our organization about what might happen to the technical support, seeing as they were bought out by a bigger company. We're hoping that it doesn't negatively impact the support that we receive.
I have used Veeam in the past and it's similar to Zerto in certain aspects. They both have their pros and cons, but I would say from what I've seen, I like Zerto. It just seems to be a little bit more user-friendly in the UI. Functionality-wise, they're similar. I think Veeam would be one of their main competitors.
I have also used the older product by VMware called Site Recovery Manager. It really doesn't compare to Zerto.
I have been involved in some of the Zerto setups and from what I have seen, they go very well. It seems that it is pretty easy to perform the initial setup.
It takes about an hour per site, or per server, to upgrade it.
We deploy and upgrade the solution in-house.
Usually, two people are responsible for maintenance, but it could be up to four people on the team.
I believe that we have seen a return on our investment. The return comes from time saved in manpower, for example. From what I've seen, it's worth the cost.
I've also heard comments from my coworkers that it's an expensive product, but it definitely makes you feel more comfortable in a DR situation.
I have not been directly involved in the pricing and licensing. My understanding is that it's expensive but worth the price.
We don't use Zerto for long-term data retention and I'm not aware that there are any plans to do so. We use Zerto in tandem with our backup solution, just to be safe. That said, we have used Zerto for recovery in scenarios where we couldn't find the particular data that we were looking for in our other backup solution.
We have not experienced any ransomware incidents or other situations where we needed disaster recovery. However, Zerto would definitely save us time for that. Depending on the situation, it could save the number of people involved as well.
Although we have not had to use it in an actual event, it helps us in terms of regulatory and audit compliance. If we had a real event, we would all feel more comfortable that we'd be able to restore or be in a better position to have our infrastructure restored in a small amount of time.
We have not yet looked into using Zerto for DR in the cloud but in the future, we're going to look at the option of doing so.
My advice for anybody who is considering Zerto is to do a proof of concept or a trial. I'm not sure what the vendor has available in this regard but I would advise trying it out with a small number of virtual machines.
I would rate this solution a nine out of ten.
Zerto runs on a Windows Virtual Server and we have it installed at two sites. There is the production site, as well as the failover DR site.
We use this product almost exclusively for disaster recovery. It is responsible for the automated recovery of what we deem to be our mission-critical servers.
In terms of its ability to provide continuous data protection, this is a product that I trust. We test it quarterly to make sure that what the dashboard is telling us is correct. But I've used it long enough to know that when I see the dashboard telling me that the health of the virtual protection groups (VPGs) is all green, then things are working correctly. Our average RPO is usually somewhere between three and 10 seconds.
We used to perform a disaster recovery test once a year, and it was painful because everything was manual. Now that we do it quarterly, we're able to provide management with reports of the tests, which not only makes management happy but also makes various governing bodies happy. We're a financial advisory firm, so it's the SEC that oversees us. That said, I'm sure this holds true in many industries. It allows you to have the reports to prove that you've done the tests. We don't have to ask them to take our word for it.
When we need to failback or move workloads, Zerto has absolutely decreased the time and number of people that are required to do so. For example, if I just want to test and prove that the network is up, it's something that I can do by myself. If I want to have people log in and test applications and stuff like that, I would need additional people. However, it has a built-in test function, so it will create a complete test network that you can run workloads on to show that the tests are successful. Afterward, you can delete the network and you're back just running, waiting for the next time you want to do that. In a situation like this, using Zerto saves eight hours or more and I can set it up and test it on my own unless I want people actually testing applications.
Thankfully, we have not had to use this product to recover from a ransomware attack or other disaster, but it would absolutely work in that case. By replicating the data, if ransomware were to hit the production side, it most likely would not also lock the disaster recovery side. This means that we would certainly be able to bring it up from there. Alternatively, it lets us pick points in time, so we can just go back to the moment in time before the ransomware happened. In a situation like this, I can't say that it would take fewer people but it would take fewer hours.
The most valuable feature is the automated failover, as it allows us to get the essential servers up at our DR site with little intervention.
Zerto is extremely easy to use. You set it and forget it.
It has a nice graphical interface.
The reporting could be improved in terms of the reports that you can show to auditors to prove that you have done the testing. I provide the reports that it generates now, but it would be great if, at the end of a DR test, it would generate a report of everything that Zerto did.
This would include details like what systems were up. Currently, that's not how the report reads. You would have to be an IT person to read the current reports that it produces. I would like for them to be the type of reports that I can put in front of an auditor or the president of our firm that would make sense to them, without me having to interpret and explain the results.
We are in our seventh year of using Zerto.
Stability-wise, this solution is rock-solid. If it fails, it's not going to be Zerto that fails. It's going to be either that your storage has failed or the bandwidth, or connectivity, is not there. I don't see a way where Zerto would be the culprit in a failure-type instance.
Our company is fairly small and the entire firm relies on it. That said, only one person actively uses it. We have three or four IT staff but Zerto has always been my responsibility.
In terms of scalability, I bet it would be no issue whatsoever. It's licensed according to the virtual machines that you want to protect. The only limitation of the scalability would be how deep your pockets are because it's going to be license costs.
We're a registered financial advisory firm, and we are growing. In the past year to 18 months, we have grown from approximately 52 employees to 70 employees. Everybody relies on it because if we have a disaster recovery type of situation, then everybody is going to expect to be able to work.
It is still a very small number of IT staff, so I can see that as we hire more IT staff to support a larger user base, we will certainly have more users. At least, I hope not to be the only one responsible for this solution as we grow.
I would rate the technical support a ten out of ten.
Prior to Zerto, we used VMware Site Recovery Manager (SRM). We switched because it requires a lot of manual upkeep, and there is no automation involved unless you write the scripts. There are lots of freeware sites where you can download scripts, but aside from that, we were spending a lot of time manually writing scripts and maintaining everything. This was really counterproductive for the amount of time we had available in a day.
Essentially, we replaced SRM because of Zerto's better interface, automation, and ease of use.
The setup is very easily done because you tie it into your VMware vCenter. When you put in your credentials, it will recognize everything on your networks. It will recognize storage, whether it be cloud-based or, as in our case, at another data center. Once you have those defined, it's just a matter of creating groups of the servers that you want to recover.
The reason that you would want to do it in groups is that you can set it up in the automation such that it will bring up groups in a certain order. That way, you have a network where the domain controllers come up in the first group, and you can automate stuff from there.
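To give a sense of what those recovery groups look like in practice, here is a minimal sketch of how a VPG with boot ordering could be defined through Zerto's REST API from PowerShell. The endpoint paths, payload fields, and names below are simplified placeholders in the style of the API rather than exact definitions, and the real workflow spreads VPG creation across several calls plus a commit step, so treat this as an outline only.

```powershell
# Minimal sketch only: endpoint paths and payload fields are simplified
# illustrations, not verified API definitions. Adjust to your ZVM and version.
$zvm  = "https://zvm.example.local:9669"     # placeholder ZVM address
$cred = Get-Credential                       # ZVM administrator credentials

# Open an API session; the ZVM returns a session token in a response header.
$login   = Invoke-WebRequest -Uri "$zvm/v1/session/add" -Method Post `
           -Credential $cred -Authentication Basic -SkipCertificateCheck
$headers = @{ "x-zerto-session" = [string]$login.Headers["x-zerto-session"] }

# Describe a VPG whose VMs are grouped so that domain controllers boot first
# and the application servers that depend on them boot a couple of minutes later.
$vpg = @{
    Name       = "Core-Infrastructure"       # placeholder VPG name
    Priority   = "High"
    BootGroups = @(
        @{ Name = "1-DomainControllers"; BootDelayInSeconds = 0;   Vms = @("DC01", "DC02") },
        @{ Name = "2-AppServers";        BootDelayInSeconds = 120; Vms = @("SQL01", "ERP01") }
    )
} | ConvertTo-Json -Depth 5

# Submit the definition (the real API spreads this across create/configure/commit calls).
Invoke-RestMethod -Uri "$zvm/v1/vpgSettings" -Method Post -Headers $headers `
    -Body $vpg -ContentType "application/json" -SkipCertificateCheck
```

We do all of this through the vCenter-integrated UI, but the point is that the grouping and boot order are just data, which is why the setup is quick once the networks and storage are defined.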
Seven years ago, when I first started to use it, I found it more difficult. I wouldn't say that it was complex, but they have certainly made improvements over the years. Where it stands now, if I had to set it up from scratch, I could probably do it in about an hour. Of course, that is partly because of how well I know the application, but in terms of how they have changed the setup, it is certainly more user-friendly than when it started.
I remember running into a couple of issues during the deployment, and I contacted their support. They were fantastic and helped me get through it. They made sure that all of my questions were answered, and that it was up and running how we intended it to be used. A lot of it probably had to do with me being a novice at that point, in terms of using the application.
It was a multi-site deployment, with a production site and a DR site, with dedicated storage for each. We have changed the storage that it uses over the years and if I had to do it again, I would use another vendor for storage. A lot of the issues that we ran into were related to the initial storage that we used, as opposed to Zerto issues, even though it was Zerto support that helped me fix them.
Overall, the deployment was fairly easy. Not because everything went great, but because of the combination of the application being pretty well-written and the support. I would rate the deployment an eight out of ten.
I deployed Zerto with the help of a consultant, contacting support as we needed to. The consultant was NetGain Technologies and they're based out of Lexington, Kentucky. Their service was phenomenal and I would use them again in a heartbeat for this type of deployment. Ultimately, any issues that we ran into boiled down to some issues with the storage we chose to run it on.
I am responsible for the maintenance.
We have absolutely seen a return on investment in terms of the manhours that have to be put into maintaining and testing this type of product. Thankfully, we have never had to use it in a true DR situation. However, I can guarantee that if something were to happen, even beyond the manhours and ease of automation, that it would pay for itself.
Our network infrastructure runs pretty smoothly most of the time. That said, Zerto has helped us to reduce downtime by approximately 20%. It is difficult to equate this with a monetary value because we have to consider what happens when a client misses a trade or cannot get a hold of their portfolio manager.
If it were an outage of a couple of hours then the person might pay a little more or a little less for a stock that they were trying to purchase. Overall, however, it is difficult to estimate. We aren't a day trading-type firm, so ultimately, I'm not sure that a short outage has any effect on our revenue stream whatsoever.
As a small company, we own the smallest license that Zerto offers, which is 15 VMs. I've not had to contact them or my reseller about purchasing additional licenses or to find out how much they cost.
We spoke with VMware to see what their pipeline was for upgrades or changes to Site Recovery Manager and we also looked at both Cohesity and Rubrik.
I like the separation of the software and the storage, whereas some of those other products are all-in-one. You're buying the software and storage together on the same platform. This means that the scalability would be different.
Sometimes, this is a case of adding shelves for storage. In that situation, for example, you have to start taking the data center rack space into account. Whereas with Zerto, it lets us build upon hardware we already had, even though we use dedicated storage.
Version 9 of this product is out. However, we have not yet upgraded. We're not leveraging the cloud the way a lot of companies do these days, and I know from the release notes that I've read that most of the new features are related to the cloud. There's not a lot of research and development being done on physical data centers anymore.
At this point, I'm very happy with where the product sits for my network. We are now just starting to move things to the cloud, which will take place over the next couple of years, so my assessment in this regard may change in perhaps a few years.
At the moment, we don't have plans to use it for long-term retention. We keep about three days' worth of data in Zerto and then it rolls off. We have other systems in place for long-term retention.
My advice for anybody who is looking into implementing Zerto is to do your homework. In the end, this product checks all of the boxes and it's the one that I would go with.
In the way that we use this solution, which I know is not how everybody uses it, we have storage that is specifically used for Zerto and two data centers. The way it works in that scenario, as long as the bandwidth is there, meaning some sort of dedicated circuit between the two sites, it's flawless in my opinion.
The biggest lesson that I have learned from using Zerto is that disaster recovery doesn't have to be a giant pain. I certainly used to look at it that way in the past.
I would rate this solution a ten out of ten.
We use Zerto to enable our hot site configuration. We have two data centers. One of them is in one of our corporate buildings, which is our primary, and then we have a co-location center rack that we rent for our hot site backup. We use Zerto to replicate our servers and our VMs between those two sites. So, primarily, it is there in case of a disaster, malware attack, etc.
We also use it to restore files on the fly for users if they accidentally delete the wrong file or something like that. From a restoration standpoint, it is closer to the frontline of our security posture. We would first look to restore items. For removing the threat and everything like that, it obviously wouldn't be involved, but from a restoration standpoint, it would be frontline.
We have not yet used the cloud with Zerto. We just use on-prem physical servers.
The primary focus of Zerto was to give us the ability to fail over in the event of a disaster. We've gotten pretty close to using it a couple of times, but fortunately, the disaster didn't quite hit us. So, there is the peace of mind of knowing that we can fail over at any time and keep our operations running. We're spread out over a good chunk of the state of Nebraska, and if there is a disaster in one part of the state, our other branches will still be operating in the event of that disaster. So, our primary focus was just to get something that can keep our other branches running in case a disaster happens to a different branch or one of our data centers. That peace of mind is what we wanted out of it, IT-wise and management-wise.
It has improved our ability to restore files rather quickly. Previously, we had to use hard backups that we had to pull from nightly backup jobs, which used to take an hour or two, whereas now, we can restore them in minutes and get people working again. So, that's one clear metric that we've done in terms of improvement from the file restoration standpoint, but its primary focus is just a disaster recovery capability.
It provides continuous data protection very well. I have no complaints. It replicates, and we can easily maintain 10- to 15-second failover and replication times. So, we can fail back rather quickly, and when we've done it, it works flawlessly.
When we need to fail back or move workloads, Zerto decreases the time it takes. In the times that I've restored back servers to previous points in time, usually, I'm doing upgrades on those servers in the evening or the middle of the night when nobody is using them. I basically restore those servers back myself. I get the replication process started again and the reverse protection done on my own without any help. I can fail it over and fail it back before the next business day. It is a very easy and one person's job.
It has helped in reducing downtime in those instances where server upgrades go wrong and we can just fail back the server to a previous state before we did the upgrade. It would save probably a good day's worth of downtime on that particular software. We have a server that runs all of our loan processing software. If the upgrade that went wrong broke that software, fixing it would have taken at least a day. So, by being able to restore back to a previous version, we saved the downtime that probably would have cost us thousands of dollars. There would also have been a lot of unhappy customers who couldn't get their loans. It would also have led to bad public relations and things like that.
The file restoration is very helpful. They've improved it over the years to make it a lot more user-friendly and easy to do, which I appreciate. So, we use that quite a bit. The failover process is quite simple and intuitive. Even the configuration and setup are pretty easy to do. It is pretty easy to use. I've done the restoration of servers several times, not as a disaster. When an upgrade on a server goes wrong and it messes things up, I can just fail back to a previous version and try it again. So, that has been really helpful.
It is very easy to use. By using their training materials and their site, it doesn't take long to get up to speed as to how the software works and how to configure it. Once you get into the process, it probably takes just four or five hours to get your sites up on-prem, at least for more simple configurations, and get the data replicating between different VPGs. So, it is very easy, all things considered.
We did look at the long-term retention backup feature of Zerto a few years ago, and at that time, it was limited. I can't say what it is right now, but at the time, its functionality was limited in terms of basically where we could save it and how we could save it. Offsite air gapping our backups is important to us to help protect against ransomware, and at the time, it couldn't do that. That would be one area that would be important before we consider using the long-term retention again. I haven't looked at it recently, and they may have addressed this in the meantime, but if not, this would be an area of improvement.
I have been using Zerto for about six years.
It seems pretty stable. We really haven't encountered any serious bugs or issues. It is doing its main job of replicating our servers. We can pretty much count on it to be there ready and waiting if something should happen. So, it has been pretty good.
We just use the on-prem version. So, as long as we have the capacity to keep up with it, I feel comfortable scaling it up. We don't have a lot of VMs. We probably have about 20 to 30 VMs. We don't push it too hard, but I feel pretty comfortable in growing our infrastructure. It will be able to grow with us.
We just use it on the 30 servers we have. We actually maintain IT infrastructure for two banks, and we have it at both banks in the same configuration, with two individual VMware hosts that we replicate between. We just do it on-prem at the moment. We will probably maintain that structure for the foreseeable future, for the next couple of years. We may look into the cloud features a little later on to see what those can offer us, and then also look at moving our infrastructure into the cloud and seeing what we can do with that.
In terms of users, we have an IT staff of seven. Probably four to five of them use Zerto in some fashion. Two to three of us maintain it, set it up, and configure it. Others use it to restore files for users in help desk functions. So, the majority of our staff uses it on a regular basis. I'm pretty sure all of our staff have touched it at some point to pull reports, help users, fail servers over, or do things like that.
I probably had to use their tech support three or four times over the past six years. Usually, they're pretty good about getting to the root cause of the issue and getting it fixed once we supply them with the logs and the information that they need. They're good. I would rate them a nine out of 10.
We used traditional backup with a software called vRanger back in the day before we got Zerto. I don't know if that software exists anymore. We basically use Zerto to do replication between sites. It is for quick and instant disaster recovery, but we still do the physical backups with Veeam. So, it has basically augmented our restoration capabilities rather than truly replacing our backup solution. It hasn't saved us any costs in managing our legacy solutions. Adding Veeam and Zerto together, I know we're paying more than what we paid before we added them or before we had either of them. The additional features are what we really wanted out of it, so the extra cost is worth it.
So, we use Zerto to give us a quick replication and file restoration ability, but we also use Veeam to do traditional physical backups that we can store offsite. If something happens to a server, we can quickly fall back on the replicated snapshot and restore the server quickly. We use the physical offsite backups with Veeam to store those on portable hard drives that we store in the vaults. So, there are air gaps so that ransomware can't get to them.
Zerto provides both backup and DR in one platform, but because we don't use Zerto's backup features, this feature is not super important for us at this time. We may look at that again to see how they've evolved that product over the past few years to see if it is more valuable to us, but as of now, it is not critical because we don't use it. In our eyes, Veeam and Zerto do two different things. So, we use both products to accomplish separate goals.
Zerto is easier to configure and set up than Veeam. Veeam can be a little tricky to make sure you have all the settings correct. From a restoration standpoint, they're probably both on par with each other. It is pretty easy to restore things in Veeam. It is just the initial configuration of getting everything lined up that is a little tougher.
It has been pretty straightforward. Initially, when we first got out of the gate with Zerto, we did have a third party to help us set it up, but we rebuilt it about a year later. We did that on our own, and it was surprisingly easy. All it took was a quick, free training course on their site. After that, I was up to speed enough to get it set up for us. It took four or five hours of training, and it was very easy. It took a day when I implemented it.
In terms of the implementation strategy, basically, we just wanted to get two sites set up, one on each data center. So, we set up two sites there with the appliances, and then we set up an individual VPG for each VM server. After that, we got them replicating. We set up our retention time and all that, and we were done.
For its deployment, there are two people at most. Usually, there is one. Zerto is easy enough to use, and one person can usually do whatever task is necessary to do in Zerto, whether it is setting up configuration or servers or restoring files. Usually, it is only a one-person job. If it is a more in-depth configuration, then you might need one more person for another pair of eyes to make sure everything looks right.
We initially had a third party to help us set it up, but now, we do it on our own. They are probably called The Integrators now. Our experience with them was not too bad. Once I learned how to set it up and how much work was involved and stuff like that, we probably overpaid for what it was at the time, but we weren't 100% familiar or comfortable with it at the time. So, it was a good experience. Obviously, they knew what they were doing, and they got it set up correctly. There was nothing wrong from a technical standpoint. Only the pricing standpoint was probably a little off but not too bad.
We have not done a return on investment. We aren't planning on doing one at this point. We know what we've got out of it, but we have not done a formal ROI.
Its licensing is yearly. You can do multi-year contracts, which is what we did. You pay per VM; each VM that you replicate requires a license. So, we bought about 20 licenses. We paid somewhere between $5,000 and $10,000.
There is an initial upfront cost. Basically, you buy the license, and then you have a maintenance cost on top of that. So, the upfront cost is somewhere between $5,000 to $10,000. The maintenance is $5,000 to $10,000 over a three-year period.
We did look at other options. VMware has a replication software capability as well. We did take a look at that. Zerto was an easier and cheaper solution to accomplish what we were looking for, and it has been pretty good.
It is a really good product for creating yourself a DR site that you can basically fire up in an instant. If you're looking to set up a hot site for your company and want something that, in the event of a disaster or ransomware, can quickly restore files for you, Zerto is a good product for that. I don't think it is terribly expensive for what it does, and it is really easy to use. I would definitely recommend Zerto if you're looking for a hot site setup.
We have not had to use it for ransomware yet. We've been fortunate. That was actually one of the reasons we did get it back in 2015. At the time, we were getting hit by ransomware. We've invested heavily into security measures since then and haven't gotten hit with ransomware. So, we haven't had to use it for data recovery in situations due to ransomware, but it is a part of the incident response plan in case we do have to use it that way.
We do not use Zerto for long-term retention. We will probably evaluate the idea, but right now, we're pretty happy with the long-term retention product that we use. At this time, there is no firm commitment to switch over.
Zerto has not particularly reduced the number of staff involved in a data recovery situation. It has probably reduced the manhours required for maintenance, but we're a jack-of-all-trades staff, so everybody has their hands in everything. So, it really hasn't reduced the number of staff, but it has reduced the overall hours of maintenance a little bit. It has also not reduced the number of staff involved in overall backup and DR management. There is still a decent amount of staff involved in the overall process, but the overall hours for maintenance have been reduced.
The biggest lesson that I have learned from using Zerto is that having a good DR configuration setup doesn't have to be a painful process. Zerto is a good software for just giving you that capability without you having to have a deep background and a lot of complicated software. The ability to restore and the ability to have a DR site on the fly is really valuable to our company. So, that's what we've been doing.
I would rate Zerto a nine out of 10.
Our primary use case is our Tier 1 application environment; we're an SQL environment. We have around 25 VMs that are replicated to a hot site or warm site. We're a VMware shop and we use Pure Storage as our SAN, but that doesn't matter because Zerto is agnostic.
We're a small shop. I am the only Zerto user and my official title is Senior Systems Engineer. I handle anything data center-related as far as the infrastructure stack: the blades, networking, the VMware hypervisor, and Pure Storage. We also have a Citrix environment that we have to support. I do all of the data center work.
Zerto is a set-it-and-forget-it kind of thing. It's more of an insurance policy for us. We don't have a good DR plan, but there is value in the peace of mind of knowing that the data is replicated to an offsite repository or environment. We just haven't been able to fully embrace the actual testing of the failover and failback process. The testing has worked, but we haven't done a full production failover yet. We've been planning for around a year to do one, but it keeps getting pushed back.
Being hardware agnostic is nice, and we don't really need a 15-second recovery time. It's easy to use. It's always doing updates behind the scenes. These are the positive things. The setup is pretty easy. Building out the VPGs is pretty easy. And it works like it's supposed to.
Zerto does what it says it will do when it comes to providing continuous data protection. It gives me all my recovery points up to 15 seconds or less. So if need be, we could recover to that point in time that it says it can do.
Zerto is easy to use for the most part. It's pretty simplistic. The UI is pretty simplistic. There are some things that I'm waiting for newer releases to address some functionality that I'm curious to see has been fixed or not in the new version.
There are still some pieces in testing that aren't automated. There are still some scripts or workflows I wish Zerto would provide out-of-the-box, versus having to write them in PowerShell, have a vendor create them, or create them myself. We haven't done a full failback of production yet, so I couldn't really say. The failover process is a lot of manual steps, but Zerto is the mechanism that gets the data there. In that aspect, it does what it's supposed to do. But I wish they would expand on their out-of-the-box functionality for the VM. When you fail it over, there are DNS and SQL changes, and there are reboots. There are some things I wish that Zerto would facilitate with a checkbox, versus having to script it in PowerShell, put the scripts in a certain place, and have support run them. I want it more automated if possible.
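To make those manual steps concrete, here is a minimal sketch of the kind of post-failover script that currently has to live outside the product. The server names, DNS zone, IP address, and service name are placeholders for illustration only, and the exact steps will differ for every environment.

```powershell
# Illustrative sketch only: placeholder names and IPs, adjust for your environment.
param(
    [string]$DnsServer = "dc01.corp.example",   # placeholder domain controller
    [string]$Zone      = "corp.example",        # placeholder AD DNS zone
    [string]$SqlHost   = "sql01",               # placeholder SQL server name
    [string]$DrIp      = "10.20.0.50"           # placeholder IP the VM gets at the DR site
)

Import-Module DnsServer

# Repoint the application's A record at the recovered VM's DR-site address.
Get-DnsServerResourceRecord -ComputerName $DnsServer -ZoneName $Zone `
    -Name $SqlHost -RRType A -ErrorAction SilentlyContinue |
    Remove-DnsServerResourceRecord -ComputerName $DnsServer -ZoneName $Zone -Force
Add-DnsServerResourceRecordA -ComputerName $DnsServer -ZoneName $Zone `
    -Name $SqlHost -IPv4Address $DrIp -TimeToLive (New-TimeSpan -Minutes 5)

# Restart SQL Server on the recovered VM so applications reconnect cleanly.
Invoke-Command -ComputerName "$SqlHost.$Zone" -ScriptBlock {
    Restart-Service -Name "MSSQLSERVER" -Force
}
```

Every line of something like this has to be written, tested, stored in the right place, and handed to support to run, which is exactly the sort of thing a checkbox in the VPG settings could cover for the common cases.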
The issue I have with ransomware is that if I don't know we're infected, the ransomware can end up in all my recovery points. If it goes unnoticed for three months, I wish Zerto, whether by buying a company that does it or otherwise, could tell me that we're infected with ransomware. If I don't know about the ransomware and everything gets encrypted, there's nothing to restore back to, because all my recovery points have been corrupted. So I wish Zerto had a mechanism to alert me to suspicious activity.
We have a Trend product that does that for us. We can get alerts of things that Trend finds, but it's always nice to have layers for your security. We have alternatives, but it would be nice if Zerto had a mechanism to alert me as well.
Alerting has also been a pain point, but it was supposed to be fixed in the newer version. I would like to have more granular alerts.
I have been using Zerto for about four years now.
When it comes to stability, it does what it says it's going to do.
I do some babysitting because the alerts are relentless. My biggest pain point is the endless amount of alerts that are just noise. I have to log in and see what actually is an issue because the alerts are just endless. There's not much maintenance I have to do besides logging in and babysitting from time to time.
We keep wanting to test it. It's our main DR strategy, but we just haven't had a window to vet a full failover and failback. As far as increasing our usage, I think we're pretty stagnant at this point with what we're backing up with it.
I would rate Zerto support a seven out of ten.
The initial setup was straightforward. You could deploy the VRAs pretty simplistically as long as you set an IP via the UI, so that was pretty easy. We were up and running in a day.
Our implementation strategy was rushed. We were doing a data center move and we just wanted an extra copy of the data. So this was a stop-gap solution that we stuck with.
We used a reseller for the deployment. They met our expectations. They provide the product, but outside the product, we have to get a stronger resource. If it goes above and beyond that, like if it's broken, they call Zerto support. If I want some PowerShell scripts and some cool stuff to be done, they need to find a resource. They provide the basic service, which is great. Above and beyond that, they're average or below average.
We pay monthly for the CPU, memory, disk space, the Zerto replication, and then there's a Microsoft charge as well on top of that for the operating system. We pay month to month and we go year to year.
There are additional VM resource costs.
My advice would be to think about the large VMs that you're backing up. Think about the wasted disk space and wasted resources on your production environment, and if you replicate that to a hot or warm site, you have to pay for those resources. The Zerto price is what it is, so you need to work with the business and ensure your Tier 1 or most critical VMs are what you're backing up or want to back up, not just everything. Then scale that to something manageable for replication and find out if you can have minimum resources while replicating and then scale up in a true DR scenario and only pay for the resources as you need them.
It's not really Zerto's fault, but you don't have full visibility on the protected site so you have to rely on your vendor for visibility if an issue arises.
I would advise asking a lot of questions. If you're an SQL environment, make sure you failover all the key components in the correct way. If you want it fully automated, make sure you buy some extra hours to get professional support.
I would rate Zerto an eight out of ten.
We have two primary use cases. One would be to use it in reaction to a cyber-terror event, particularly ransomware, because Zerto has point-in-time backup. If we find an area that needs to be restored, as long as we figure it out within 24 hours, which is approximately the amount of time we have replicated, we can go back to a point in time. Let's say the files got encrypted at 9:30 AM. We can say restore our 9:29 AM copy of what the data looked like at that point. We have not needed to use that, thankfully, because we've been educating our users very well.
The other case that we would use it for is because we're in a hurricane area. Our particular office is actually in an evacuation area, typically, meaning that we're close enough to the coast that should a hurricane event come through, they generally force us out of the area. What we would do if we needed it, and thankfully we haven't yet, would be to shut down our primary on-prem services to make them a little bit more resistant to water damage. Obviously, if they're not running, they're a little bit less likely to get zapped if there is some water damage. Then we can bring up the copies that we have at our data center and run remotely from them if we need to. It doesn't have a full copy of our entire environment, but it does have a copy of our ERP system, as far as sales are concerned. We wouldn't be able to ship anything, but we could look at orders and help our customers. We could even take orders if we needed to, although we wouldn't be able to process them.
Zerto is a replication solution. It copies our setup which is on-prem to our data center, which is also somewhat local, about 15 miles away. It doesn't really do anything in the cloud other than move data across it. We're not replicating to any cloud-based services like Amazon or Azure. Essentially, we're using it at two on-premises locations: Our primary location, which is what is being replicated, and the replicated copy is being stored at another on-premises location, nearby.
Zerto is purely a business continuity and disaster recovery tool. We don't want to have either one of our primary use case events happen, but if they should happen, it gives us an extra layer of protection. I've got Amazon backups with stuff in completely different regions, but Zerto is more for those two specific scenarios I mentioned. In addition, if somebody deletes a file and it's really important that they have the latest copy of it, Zerto gives us that option. But it really comes down to the ransomware reactions and the hurricane support, because hurricanes are fairly common in this area. The last hurricane event here was before I had Zerto and we had to shut everything down. We really couldn't do much while the hurricane came through. The business wanted something that would give us some protection in that scenario. That's the business continuity aspect. At least we can provide some business capabilities this way. With Zerto, they'll also be able to access a limited functionality version of our system. It definitely provides upper management with a little bit of comfort that we won't be completely down in either a ransomware or a hurricane event.
We're a smaller company. We're owned by a portfolio company, and they're the ones who made a lot of these extra layers of protection happen. Zerto provides me with the comfort of one of those layers. It enables me to make a strong case to my board of directors that, "Yeah, we're good." There's a guy in our board of directors who's something of a tech guy. I can look him in the eye and say, "Hey, we're not bulletproof. We can never be bulletproof, but we're about as close to bulletproof as we can be, especially for a company our size." That's important to the board because they have other companies that aren't as well-protected as we are. I've had conversations with a couple of the guys at those other companies, because they're interested in looking at something like Zerto, and I have been highly recommending it to them. It's a reasonable cost, it provides several layers of protection from ransomware, and if necessary, against natural disasters. I'm very happy to say that Zerto is one of those layers and provides us with very good protection for what it specifically does.
We had two ransomware events prior to being owned by the company that owns us now. Both were events where somebody clicked on a link that they shouldn't have and something ran and encrypted some of our stuff. Now we're much more solidly protected from that, and Zerto is definitely one of the big things protecting us.
If we had to deal with a ransomware event, Zerto would be one of the first things I would use, because it is going to be the fastest to restore data to a certain point. If there were a fire in our building, Zerto would be a big thing too, because we would shut down everything that's in our building. In most cases, Zerto is definitely one of the front lines. It's definitely going to be one of our prevalent DR/BC layers of protection.
When you need to fail back or move workloads, Zerto significantly decreases the time it takes. As long as it's one of those scenarios in which we foresee using it, it's great. When we did our two failover tests, it was easy to failover to the other location where we have the replicated copies. The last time we had an actual ransomware event, which was before we had Zerto, it took me 30 hours to restore all the data that I needed to restore. I would imagine Zerto would take 10 to 20 percent of that time.
In terms of saving staff time, I only have three people on my staff, so I'm not going to save human resources by using Zerto. That being said, what it does save me is the trouble of having to use another solution that would take a lot of time. I only engage my guys who work on Zerto for six to 10 hours a year, versus having somebody on staff. That's a significant savings for us, because we don't need somebody on staff who knows how to do things with it. It is pretty easy to use for somebody who's familiar with it and uses it on a regular basis. For example, when I do the upgrade, I'll pay guys to do it because that's what they do.
We have secondary, older equipment where our Zerto backup copy resides. We moved that old hardware to our secondary location and got new stuff in our primary location. Our primary location now copies, via Zerto, to the other location that has our old equipment. It's not quite cloud, but it is in a different location. And it's definitely saving us money in the long-term by not having to pay for cloud storage because we have put it on our older stuff, and it works fine for that scenario.
We did a test turning it back on and re-syncing it, and that took a reasonable amount of time. A lot of that is not actually limited by Zerto. It's more limited by your pipeline between your backup location and your primary location. Zerto was very helpful, and it's very easy to use from that standpoint. Once you reconnect the two sites, it does a snapshot check of where it left off and then it copies back over the changes that happened while it was running on your secondary site. That's not automatic. You have to touch it. But it's certainly not super-technical.
Our two primary use cases are Zerto's biggest features for us. That's what we use it for. There may be other situations where it can come in handy, but those are our two primary scenarios.
In terms of providing continuous data protection, Zerto has been great so far. Thankfully, we haven't had an event where we have really needed to rely on it, but we have done a couple of tests prior to hurricane season where we disconnect our primary facility. We then go over to our secondary facility where the replicated data is and we bring everything up and then we do remote access to that location to make sure that it's all working properly. It's very capable. I've also done a couple of test scenarios on ransomware reaction where I'll go in and restore a folder, after hours when nobody is in the system, to emulate a situation where we might need to go back and restore files that got encrypted. So far, it's been great.
Regarding the ease of use, there's a web portal that I use to verify that everything is working well. It has a lot of notifications and it emails me if there's anything going on that's out of the ordinary. For example, if the connection to one of my other sites goes down, it will let me know that it's not able to reach that site. I'm very happy with the way the interface works for my needs, and my tech has been pretty happy with it. Our installation is rather small. We only have 14 servers on it. I'm sure there are other companies that have hundreds, but I imagine that they would see the same capabilities. The web portal is pretty well organized and easy to navigate.
I have been using Zerto for just about three years. We just re-upped the maintenance on it and, until that point, we had a three-year plan for it.
We're on version 7.5, but I've had some discussions with my partner where we have the replication stored. We're doing an upgrade, but we ran out of time before hurricane season so we decided to hold off until after that. We're going to move up to 8.5 as soon as hurricane season is over, but I didn't want to risk getting into a situation where we didn't have Zerto working at all. And version 7.5 has been working fine. There was no real need to do the upgrade, other than to stay current.
We're about a version behind. I generally stay at least half a version behind to let everybody else do all the sorting out of anything that they might find with the new versions, and then I jump on the second-newest version, once that's a little more mature.
The stability of Zerto itself has been fine. We have some network instability that affects it, but I see the alerts come through and it's not Zerto that is having trouble. Zerto itself is very stable.
Scalability is not really applicable to our situation, but I would imagine that it would be easy to scale if needed. We would just buy additional licenses and strategize a little bit about how we were going to add them.
We only have 15 server licenses, but I expect we'll have some growth as we have a couple of projects upcoming. We're probably going to need another server, so even though we're retiring a few servers, I'll leave the licenses. I won't probably be in any position to make it bigger any time soon, but you never know. Our company always has its eye out for acquisitions. Last year we picked up two companies, although they didn't really result in any major increase in our infrastructure. Maybe we'll pick up somebody similar in size to us and all of a sudden we'll need to protect 10 more servers. I don't have any plans for downsizing Zerto.
If I were to nitpick, I would say that I wish I had a better account manager. Our sales guy has changed a couple of times. I would like a little more responsiveness from our account manager. I've had a couple of issues where getting in touch with him has been a little difficult, and I end up just going around him and dealing with support and support has handled it right away.
I've only had to deal with their tech support a few times, and I would give them a nine out of 10. They've been pretty responsive. They've answered my questions. They've gotten things taken care of.
Before Zerto, I just restored backups. We have Veeam as our primary backup system. Veeam is a traditional backup system. In those ransomware events I mentioned, I literally had to go through and restore a bunch of stuff from different servers from our backup repository, which is onsite. I had to go back and restore this folder and that folder and this folder and that folder. I would sort of have to do that with Zerto, but it would be a lot easier. I could just pick the folder and pick the time, and say, "Go." With Veeam it was definitely a much more complicated process.
We didn't "switch" to Zerto, we added it. We still have our other solution. While there is disaster recovery where you're recovering from a disaster, business continuity is how fast you recover from disaster; how quickly you get the business going again. Zerto reduces our RPOs. It was more a case of added protection and it reduces our recovery times—even though thankfully we've never had to use it—compared to the last time we had to recover.
Zerto came highly recommended from our primary VAR, which is Presidio, the place we bought it from. They said it would do exactly what we needed it to do, and the price was reasonable. I took the recommendation, did some research on them. It's possible I looked at reviews on IT Central Station and someone there said, "Oh yeah, Zerto is great." That's good enough for me. I didn't need to spend a ton of time on it. As long as they've got good reviews from multiple sources, which I did find when I researched Zerto, and it came highly recommended from our VAR, those were two pros and I didn't need to go looking for a con.
I didn't actually set up the software. I had to pay somebody else to do that because it was a little beyond my team's capabilities. Our deployment took about six hours from start to finish. The guy that I worked with said that he was pretty happy with it. He had to send in a couple of help tickets, and they were very responsive and were able to help him get through the issues that he had.
He's from an IT support firm called Creative Network Innovations, and they also have an onsite data center, so they offer data center support services in addition to regular IT support services. In this case, Zerto is data-center related. We use Creative Network Innovations because it's related to what they do for us, and they have people on staff who are comfortable working with it. Even though they don't necessarily do Zerto all the time, they were able to step in, take a look at it, and sort it out for us, so that was good.
I wouldn't necessarily say that they're experts in the software, because they learned it for us. They're not typical Zerto implementers, but that speaks to how easy Zerto is to use. They had never really used it, but they were able to pick it up, plug it in, and get it working for us.
In terms of deployment time, the Zerto piece didn't take long. It was about six hours. What we had to do in terms of setting up networking, that was a little different. The whole project, including Zerto, took about 12 to 16 hours. And when we did our first failover test, that probably took another six hours, because we had to figure out all the nuances of how to make it handle the various servers that we have.
It depends on the size of your installation. Because we're fairly small, it didn't require a lot of involvement. Most of it is Windows-based, so it's not that hard to install and set up. Things like getting access through the VPN, which weren't necessarily Zerto-specific, are what took a little time, but the Zerto piece was pretty fast.
It's a backup piece for us. We didn't really get that fancy. We basically identified the servers we needed to replicate offsite. Then we installed Zerto on our primary location and we installed Zerto on our secondary location. And we created the communication.
In terms of users, I'm the only one in our organization who monitors Zerto, and I do very little of that.
I would estimate that if I had to recover from a scenario like the last one that affected us, it would take me 10 to 20 percent of the time it took me at that time. That reduces the amount of time that our system is down and, therefore, the amount of money we're losing because our system is down. It does provide some cost-benefit, but it's hard to quantify, because nobody has said, "In our company, we lose $X an hour when we're down." But when things are down, people are not happy. If nothing else, it means I hear less griping, and that makes me happier.
They have an enterprise-type of licensing scenario, which we didn't qualify for because we don't have enough. Ours is pretty straightforward. It is site-based, but the payment concepts are based on the number of servers. In our case, we have a quantity of 15. When we bought it, there was an initial purchase amount plus maintenance. When it came up for renewal, we did three more years, and it was under $10,000 for my 15 servers.
It's very reasonably priced. It's a little more than $3,000 annually. That works out to about $20 per server per month.
Our backup recovery software, Veeam, is working on a product that will compete with Zerto. But it's still very new. It has not been out for very long, so I don't anticipate us going away from Zerto any time soon. That being said, when our renewal comes up with Zerto, I might reevaluate and look at Veeam and see if their solution is going to cover what Zerto does, because then I have one vendor to deal with, not that I dislike dealing with Zerto. It's just sometimes it's nicer to put all of your stuff into one package because the interfaces are uniform.
At this point in time, Zerto is safe with us. We've got them for three more years, and it does exactly what we need it to. Is it going to be our daily backup and our long-term retention? At this point in time, no. I'm pretty happy with what Veeam does and how it integrates with VMware, not that Zerto does a poor job. Zerto covers a different area.
It's kind of like if you were wearing armor, as a knight of old, but you were missing a piece on your back. If somebody stabbed you in the back, if you had armor there, you wouldn't worry about it. Zerto covers our "back." It covers stuff that Veeam doesn't. It handles point-in-time backups and it gives us a faster recovery in certain scenarios. It's not going to necessarily protect us from a full on-premises failure, because I don't have it doing that. I bought it specifically to defend us from certain types of attacks. We have Zerto handling 14 servers but we have a total of 20 servers. It's not backing up the other six, Veeam is, but that's because I don't need those to be protected from ransomware. I need them to be protected from system failure or catastrophic disaster where our primary location is under 20 feet of water from a hurricane, or the whole thing burns down. Zerto is not going to protect us from that, although it possibly could. We just don't use it for that.
It provides us some niche protection and we're happy with the niche that it protects.
Because we're a smaller company, I would never need a full-time person to do disaster recovery, whereas a company with several thousand employees and multiple billions of dollars of revenue would probably have a team for that. I would imagine those guys would save people if they had Zerto, but that's just me imagining that, rather than it being fact.
If I had 1,000 servers, it might require more of my time, but we have 14. We have a board of directors that wants things to be bulletproof, and they're willing to pay for it. Do we need Zerto? Probably not. Is it nice to have? For sure. But we certainly don't use it in the typical use environment, which I'm sure is a lot more servers than we have. That being said, we still use it, and I highly recommend it, even for companies of our size, although it's probably not the sweet spot for a lot of companies like ours. It's kind of pricey for smaller companies, but for what it does, I think the value is exceptional.
For companies of our size, if you don't have somebody on staff who can use Zerto, you want to find the right help. Your VMware person should be able to help you with it. Make sure that you're comfortable with what you're trying to accomplish. I thought it was a pretty smooth implementation, as you can tell from the time that it took. That might be in part due to the people we enlisted to help us. I can't say that everybody's installation will go that smoothly, but I would imagine that if you have a pretty solid VMware-type person, you should be pretty good with the Zerto piece. It's really a matter of working on the VMware side of it. There is also a little bit of networking, depending on where you're backing up to.
If you're backing up to the cloud, you obviously need somebody who is cloud-savvy who can get the proper connections to your AWS and secure them.
Overall, make sure you have somebody who is VMware-savvy. You don't necessarily need somebody who is specifically Zerto-savvy. The guys I worked with said it was pretty easy to work with, even though they hadn't worked with it before. But again, ours was a smaller installation. A Fortune 500 company is going to need a little more capability and will want to look for a Zerto-certified implementer, and I presume those exist. We didn't bother with that because we're smaller and didn't have anything particularly difficult in our implementation.
In terms of preventing downtime, Zerto hasn't reduced it in our situation, but that's not because Zerto isn't capable of doing so. We simply haven't had a situation in which it needed to be used; we haven't had any incidents that required failing over with Zerto.
The biggest lesson I've learned from using Zerto is that I wish I had known about it six years ago. I wish that I had known about its capabilities. Given that it's on version 8.5, it's been around for a while. I really wish we would have had it when we actually had a need for it.
If we ever need it, we're confident in it, given the test scenarios we've gone through where it's been great. It's a nice "warm blanket," and it's good to "cuddle" underneath it, because I don't have to worry about it. If I have an event, I'm pretty confident that I can get us back up and running quickly. Is it going to be instantaneous? Of course not. But it's going to take significantly less time than it would take if I had to react via a manual backup.
Zerto is "the bomb." I'm definitely happy we got it. Overall, it's reasonably priced. It's one of the less expensive business continuity and disaster recovery layers that we have. That being said, it doesn't do everything. We're a smaller shop. There are only three people on my IT team, including me. It's definitely been a very helpful tool and comforting to know that we have it in place. It makes it easier to sleep at night. For what we need it for, it does everything we need.
Zerto is a nine out of 10 and maybe even close to a 10. It's solid. It's a good product. It does what we need it to do. Since we haven't actually had a live event, I can't say that it's perfect, but in the tests we have run it through it has been great. The only blemish has been dealing with the account manager, which could be situational. I've only had to deal with him a few times. The last time he didn't even respond to me. That being said, it's been three years, and maybe he's moved on and nobody is monitoring his email box. And when I reached out to support, they took care of me right away. So the account management is a minor blemish. Everything else, as far as the product and support go, has been fantabulous.
We use it for real-time data protection and, when needed, the ability to recover to a point in time within seconds. It is deployed on-premises and multi-cloud on Azure and AWS.
It gives us extra peace of mind. We can back up and recover critical information not only on-premises but also off-premises in multiple places, so we have an additional place to recover from if Azure or AWS is having problems.
When we need to fail back or move workloads, Zerto decreases the time and the number of people involved. It definitely speeds up the recovery process for us. We essentially need only one person for the recovery process, whereas with other solutions we had in the past, we had to involve quite a few of our team members. We haven't had to fail back a lot, so I can't put a hard number on how much time it has saved us, but if we had to do a big failback, I can see where it would.
It has reduced the number of staff involved in a data recovery situation. The number of staff involved is less than what it used to be. We can basically do that with one person. It also reduces the number of staff involved in overall backup and DR management.
It has saved us money by enabling us to do DR in the cloud rather than in a physical data center. We don't have to buy another SAN, so it has saved somewhere in the $150,000 to $200,000 range.
The reliability of the solution and ease of upgrades are most valuable. Support has also been really good on it.
It works very well in terms of it providing continuous data protection. It does what it says it is going to do. We have been using it for several years, and once or twice, we had to recover a machine or files. It didn't have any problems in doing what it is supposed to be doing.
It is easy to use once you have gone through the online training class to learn the basics about it. We have been able to get a couple of our folks in the IT department up to speed on how it works and how to utilize it within basically a day or less. It is relatively easy for us to get staff trained and get going.
We would like some more granular controls. We've submitted a few minor enhancement requests, such as being able to control bandwidth utilization per facility that you replicate to rather than only overall. We just need a little more granularity in some areas, but there isn't a whole lot that needs tweaking.
We've been using this solution for three years.
It has been very stable for us. We haven't had any issues with it. Even upgrades have been relatively seamless. If anything, you might miss something in the upgrade release notes and need to open a port or something like that, but nothing critical.
It seems to be very scalable. We're not that big, but it scales well for an organization of our size and has room to grow beyond what we need. It could be very beneficial for a bigger organization.
We started out protecting roughly 30 terabytes of data, and that's roughly where we are right now. We have 30 terabytes of data and 250 employees, and we are just trying to keep them all functioning 24/7.
At the moment, we don't have any plans to increase the usage. We're utilizing everything we can at the moment. The only thing that we might consider down the road is the backup functionality long-term, but that's something we just keep evaluating versus what we currently have. What we currently have works so well, and we don't really want to change it.
Their support has been really good. They've been very proactive in helping resolve issues, and you get quick callbacks or contact with them. I would rate them a 10 out of 10.
They used Avamar Data Domain before Zerto. It had a very complicated process, and the price was also very high. It did not have a similar granularity of recovery points.
It was pretty straightforward. We had it fully installed and started implementing it within the first couple of hours of the process. We worked with the local rep for about an hour or two, and by then, we had the process down. After that, it was pretty straightforward, and we just replicated that for additional protection groups.
In terms of the implementation strategy, we knew what we needed. We wanted to get out in the cloud. We focused on Azure to start with and then came back and looked at AWS after the fact for a couple of use cases where Azure wasn't the best place for some big data sets.
For its day-to-day maintenance or administration, there is just me. We do have desktop admins who can get into it as well if they need to, but generally, I take care of it all for them. They just holler if they have a problem or a question about something, and I take care of it for them.
We worked with the local rep for about an hour or two. Our experience with him was very good. He was very helpful and knowledgeable about the product and also about the ways other folks were using the product.
There is nothing that we can quantitatively define, but we are able to meet regulatory requirements.
It initially seemed a little pricey, but in the big picture, you're paying for peace of mind. It could always be cheaper and more competitive, which would make it an easier choice for people, but I can see both ways. They can say this cost is for the value they are providing. If anything happens, they can recover your data very quickly. You won't be losing it, so there is a win. It is a win-win.
We evaluated VMware Site Recovery Manager (SRM). I have used it in the past, and it is okay, but upgrades tend to break a lot of stuff, whereas Zerto hasn't had that kind of issue, which is great. It is never a good thing to do a minor update and then have your whole system dead for a day or two until you figure out what caused the breakage. We also looked at Cohesity, but it was more for overall backup, not full DR.
Zerto was very easy to use. We could use it for backup and DR, which was very important for us. That was one of the key driving factors for us.
I would advise others to just get the training before utilizing it so that you have a better understanding of the overall product. You should also have plenty of bandwidth for your providers so that replication works seamlessly.
It has helped us a little bit in reducing downtime in a couple of cases, saving us a few hours here and there. It could also save us time in a data recovery situation due to ransomware or other causes, but we haven't had to use it for that so far. Its overall backup and DR management could also reduce the number of people needed.
We don't use Zerto for long-term retention. We have another solution in place for that. We will evaluate Zerto possibly down the road.
I would rate Zerto a 10 out of 10.
We have typical use cases for it: resilience and disaster recovery. They have some other functionalities that their software can help account for, but we are using its disaster recovery and resilience, which are kind of its core functions.
I have used it in many scenarios, including a temporary data center move in Europe. I had to move all my resources from Belgium to Budapest, and then back, once our data center was physically moved across town in Belgium. I am not sure how this would have been accomplished without Zerto.
With Zerto, the move was incredibly easy to do. It was a click of a button and a 10-minute wait, and everything was up and running at the secondary site, so we could deal with the data center itself. Once the data center was relocated and rebuilt, it was another click of a button and a few minutes' wait, and everything was running back at the original site. It was that easy. The physical data center move was the hard part, as it should have been, not keeping the applications going at a secondary site during that time. That was a pretty big success with Zerto and our largest use case for it: a data center move.
We are currently using Zerto with some more modern databases, application servers, and tertiary systems to provide redundancy and resiliency to our crown jewel application. We have been doing a lot of DR testing scenarios, part of which relies on Zerto and part of which are other mechanisms. In general, when we have done our recent testing using the Zerto portions, once we say, "Okay, we are doing this now," it is taking less than three minutes on average for the systems to be fully back online at the new location once we start. That includes booting all the Windows VMs up. The actual VMs were ready to go and functional within 30 seconds. However, some of them are larger Windows machines and those take their time to boot, getting services online and connected to everything. So, the Zerto part was literally under a minute in these test scenarios to clear a total failure and initiate our disaster recovery function.
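As a rough illustration of how little manual work a test like this can involve, here is a minimal sketch of kicking off a failover test through the Zerto Virtual Manager (ZVM) REST API. It assumes a ZVM reachable at a hypothetical address, v1 endpoints (/v1/session/add, /v1/vpgs, /v1/vpgs/{id}/FailoverTest), assumed field names such as VpgName and VpgIdentifier, and a hypothetical VPG called "CrownJewel-App"; endpoint names, payloads, and defaults vary by Zerto version, so treat this as an assumption-laden sketch rather than the exact procedure used here.

```python
import os

import requests

# Hypothetical ZVM address and credentials -- adjust for your environment.
ZVM = "https://zvm.example.local:9669"
AUTH = (os.environ["ZVM_USER"], os.environ["ZVM_PASS"])

# Authenticate: the ZVM is expected to return a session token in the
# x-zerto-session response header (assumption; check your API version).
resp = requests.post(f"{ZVM}/v1/session/add", auth=AUTH, verify=False)
resp.raise_for_status()
headers = {"x-zerto-session": resp.headers["x-zerto-session"]}

# Find the VPG to test by name (assumed field names: VpgName, VpgIdentifier).
vpgs = requests.get(f"{ZVM}/v1/vpgs", headers=headers, verify=False).json()
vpg = next(v for v in vpgs if v["VpgName"] == "CrownJewel-App")

# Start a failover test; recovered VMs come up in an isolated test network, so
# production is untouched. An empty body is assumed to mean "latest checkpoint".
requests.post(
    f"{ZVM}/v1/vpgs/{vpg['VpgIdentifier']}/FailoverTest",
    headers=headers,
    json={},
    verify=False,
).raise_for_status()
print(f"Failover test started for {vpg['VpgName']}")
```

In practice the same few calls could be wrapped in a scheduled job, which is consistent with a single admin driving the whole test.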
The near real-time replication is probably the biggest value of this solution. There are some other ways to get that done, but this seemed to be the easiest and cheapest way to get near real-time replication. In most instances, our RPO is about five seconds, which is pretty aggressive and not that taxing to achieve with Zerto.
The ease of use is pretty high. It really isn't very complex to use. They did a good job with the UI, and it is fairly obvious where you need to click, what you need to click, and what you are doing. There are good confirmation screens, so you are not going to accidentally take down or move loads that you are not trying to. It is fairly user-friendly, easy to use, and you don't need to read a manual for three weeks to start using it.
Previously, our main need for Zerto was actually database cluster servers running fairly old software, SQL 2008 on Microsoft Windows clusters with none of the advanced SQL clustering functionality. Our environment is all virtualized. The way we had to present the storage to our host machines in VMware was via raw device mapping (RDM). Technically, Zerto can do that, but not very well. We have gone to some different methods for our databases, which don't actually use or rely on Zerto because the solution wasn't that functional with RDMs. This is an old, antiquated technology that we are currently moving off of. I can't really blame them, but it definitely is something they thought they could do better than they could in practice.
They had a bug recently that has come up and caused some issues. They currently have a bug in their production versions that prevents their product from functioning in some scenarios, and we have hit a few of those scenarios. Aside from that, when it is not hitting a bug, and if we're not trying to use it for our old-style, old-school databases, it functions incredibly well.
I had an early Zerto certification from their first ZertoCON conference. I received a certification from them in May 2016, so I have been using it for at least five years. I would have been one of the initial users at my company, so they have been using Zerto for at least five years.
Stability is reasonably good, but I wouldn't say excellent. We have had some odd issues with vRAs, which are little VMs that hang off of every VMware host that we have. Those aren't consistent, but they do occasionally happen. As I referenced earlier, there is a bug in the system right now that can affect my VM recovery. It tries to put too many requests into VMware at once, and VMware will timeout those requests, which causes Zerto to fail. That has not been constant throughout our use of Zerto. It is usually a flawless operation, and that is why I can still say good to very good, even though they currently have a bug. It is very uncommon for them to have anything that affects the platform negatively.
Scalability hasn't seemed to be an issue. We started out with two sites connected in the same city. Now, we are running the connected infrastructure of Zerto on three different continents. Some of those continents have various cities and/or countries involved. That has not given us an issue with scalability at all. It seems to be fairly flexible in adding whatever you need it to do. As long as you have the bandwidth capability and reasonable latency between sites, Zerto seems to work quite well.
Only 10 to 12 people are actively in Zerto, or even know what it is beyond a word an IT guy uses to say, "It is okay." Generally speaking, their titles would be network administrator, network engineer, or senior network engineer.
For all our sites, most of our IT staff wouldn't be allowed to mess with it, because if you hit the wrong buttons in Zerto, you can take down an application. So, there is a fairly small list of folks who can get into it. Only a few sites can actually access the management console; they are located in Louisville, Kentucky; Belgium; Budapest, Hungary; and Melbourne, Australia.
I would rate the technical support as eight out of 10. They know the product very well. I have had a couple misfires at times, but they are pretty good in general.
One of the issues that we had early on was with some of the storage functionality, especially around RDMs. I had calls and conferences with the Zerto development staff, whom I believe are in Israel, about the ability to ignore disks in Zerto for my virtual protection groups (VPGs). What they can currently do is mark them as temporary disks, in which case they do a one-time copy, and that is it. However, some of those temporary disks are extremely large, so it wasn't a great answer for us. I would like the ability to ignore disks entirely instead of replicating every disk on a VM that Zerto protects. That is the biggest improvement they could make right now, and it would have been much more valuable a few years ago than now; at this point, we are finding other ways around it.
We previously had some storage-based replication, which we are currently still using, but nothing that really fits the same mold that Zerto does.
Zerto's database storage replication is not good with RDMs. We are still doing storage-based replication for those.
Our new design is self-replicating. It doesn't require Zerto replication or storage-based replication, so that need went away.
It was quite straightforward. You just install the software, point it at your vCenter instance, and then deploy your vRAs, which is done automatically. Updates have been just as straightforward. The only challenge with updates is if you have multiple Zerto instances linked to each other: to be able to replicate between sites, they can't be more than half a version apart. For instance, I am running version 8.5 on all my sites that currently run Zerto, and I couldn't do that if I were still running 7.5 anywhere; that would be too far out of alignment. It is more of a minor challenge than a problem. I don't consider it a shortcoming; it is well-documented, easy to figure out, and pretty straightforward.
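Before an upgrade, one way to sanity-check that paired sites are on compatible versions is to read them out of the ZVM API. This is a minimal sketch assuming /v1/localsite and /v1/peersites endpoints, a session token already obtained from POST /v1/session/add, and assumed field names (SiteName, PeerSiteName, Version); confirm the exact fields and the compatibility rule against the release notes for your version.

```python
import os

import requests

ZVM = "https://zvm.example.local:9669"  # hypothetical ZVM address
headers = {"x-zerto-session": os.environ["ZVM_SESSION_TOKEN"]}  # from POST /v1/session/add

# The local site object is assumed to expose the ZVM version.
local = requests.get(f"{ZVM}/v1/localsite", headers=headers, verify=False).json()
print(f"Local site {local.get('SiteName')} runs version {local.get('Version')}")

# Each paired site reports its own details; compare versions before upgrading either side.
peers = requests.get(f"{ZVM}/v1/peersites", headers=headers, verify=False).json()
for peer in peers:
    print(f"Peer {peer.get('PeerSiteName')} runs version {peer.get('Version', 'unknown')}")
```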
The first site was also kind of a learning experience. That deployment took less than a day from, "Okay, let's start the download," to, "Look, it's doing something," and you need to stand up two sites to go from site A to site B. That took less than a day to get them up and functional in at least some capacity, protecting some machines and workloads.
We generally try to perform all functions in-house instead of bringing in a third-party or contractor service to help for deployments. That was the model that we followed. We read the documentation, had Zerto's number handy in case we ran into issues, and deployed it ourselves.
There are probably only five of us (out of the 12 who have access) needed for deployment maintenance. Their titles would be network administrator, network engineer, or senior network engineer.
It is fairly simple to deploy and maintain. We do product upgrades every six to 12 months.
We relocated all our virtual machines from Belgium to Budapest, Hungary. I am not sure how we would have done it without Zerto, because we were able to keep the data in sync. We would have needed much more expensive storage products online at the time to keep up that replication. From what I have seen of other methods, it would have required a much higher amount of bandwidth as well, so the cost would have been extreme. The mechanisms available to us with storage-based replication would have been more labor-intensive and prone to error. It was much easier and more successful with Zerto than with the other options at our disposal.
Zerto has reduced the time involved that staff would spend on a data recovery operation. We don't have dedicated resources for disaster recovery. It is a scenario where, "Everybody, stop what you are doing. This is what we are all working on right now." We haven't had a reduction in headcount because of Zerto, but we have reduced the use of existing headcount.
DR management is less time-intensive and resource-intensive. Therefore, there are fewer staff hours involved because of Zerto, but not less headcount.
Zerto has helped to reduce downtime. The easiest example to point to was the data center move: it took minutes to move an application to a different country, and then minutes again to move it back. That would have been hours at best, or days, with the other solutions at our disposal.
Even though we are on-prem, the licensing model was changed to more of a cloud licensing model. We pay for blocks of protected machines. You need to buy a block for use and pay for maintenance annually based on the block size that you have.
When they changed their licensing model, pricing might have gotten a little more expensive for some use cases, but it has been pretty straightforward.
It is a little easier to use than Cohesity or Rubrik, but we haven't really had another DR platform in place.
At the time of the evaluation, we did not have a good snapshot-based backup platform, such as Cohesity or Rubrik, so that was not much of an option. The only other thing we were aware of and investigated was VMware Site Recovery Manager (SRM), VMware's built-in system, and we played around with it. In comparison to Zerto at that time, Site Recovery Manager was a nightmare. Zerto was definitely the easy button when we last investigated solutions. Zerto was better in terms of ease of use, visibility, and cost. Frankly, those are all the metrics we looked at, and Zerto worked better than SRM as well as being easier and cheaper.
Do a PoC. Test it along with other solutions that you are looking at and make a decision. Our decision was easy, and it was Zerto.
We are changing the infrastructure supporting our primary crown jewel application and will be utilizing Zerto more heavily in that. We are expanding the number of application servers as well as adding some database servers that Zerto will be responsible for but currently isn't. We are expanding our use of Zerto because we are expanding the assets for our application. That is happening currently; we have been working on that switchover for the last 12 months and are getting close to deploying all of those changes in production, so it is a fairly recent and ongoing task.
We haven't had to deal with a data recovery situation due to ransomware or other causes. We have a combination of luck and some pretty good security measures in place to where we haven't had an impactful ransomware event, CryptoLocker event, etc. In that event, I don't think Zerto would probably be the first thing that we would try to utilize. We have some pretty good backup mechanisms as well. We would probably look to those first to restore from backups. We have a fairly aggressive backup schedule with many servers backed up once an hour or more, which contain critical data. That is probably where we would go first.
There is some appeal to having both DR and backup in one solution, but it is not important to us. There are better backup methodologies that we use, and they cover more use cases.
We are not utilizing any cloud resources for DR at this point. Our applications are very CPU and memory intensive, which becomes very expensive to run in the cloud.
We have other mechanisms for long-term retention.
Biggest lesson learnt: Disaster recovery doesn't have to be the biggest challenge in your organization.
I would rate Zerto as eight out of 10. The rating may not sound great, but it is pretty high for me.
We use Zerto for real-time replication of our systems, company-wide. The main reason is disaster recovery failover.
We use the long-term retention functionality, although it is not deployed system-wide. We have a lot of critical systems backed up, such as our file servers. We utilize it to hold things for up to a year and we send our long-term retention to ExaGrid appliances.
When we need to failback or move workloads, this solution has decreased the time it takes and the number of people involved. The entire process is, realistically, a one-person job. We usually have an application specialist involved just to validate the health of the server. Whether it's an SQL server or application server, we have somebody that runs integrity checks on it. That said, the entire process is very painless and easily handled by one person.
I estimate that this product saves us hours in comparison to products like Veeam. Veeam would take several hours of time to fail something over.
Our company fell victim to a ransomware attack that affected between 50 and 60 servers. Until we knew for sure that the entire situation was remediated and that we weren't going to spread the infection, we restored the servers in an offline manner, which only took a matter of minutes to complete. Then, we pushed all of that data into Teams and OneDrive directly for people to start accessing it.
From the SQL Server perspective, we failed those servers over and ran health checks, such as anti-virus scans, just to make sure that the failed-over instances didn't contain the same infection. Thankfully, they did not. We probably saved ourselves several days' worth of work in the grand scheme of things. In total, it potentially would have taken weeks to resolve using a different solution.
I wouldn't necessarily say that using Zerto has meant that we can reduce the number of staff in a recovery operation. However, I think it's probably mitigated the need to hire more people. Essentially, as we've continued to grow, we've avoided adding headcount to our team. Using Veeam as my problem child to compare against, if we were using it, it would have required a lot more management from us. It would have cost us more time to recover and manage those jobs, including the management of the ExaGrid appliances, as well as the VRAs, which are basically proxies.
There is definitely a huge time saving with Zerto, and although we didn't reduce any headcount or repurpose anyone, we've definitely avoided at least two hires.
Zerto saved us considerable downtime when we experienced the ransomware attack. It may be hard to substantiate that just on the one situation but we saved at least a couple of million dollars.
The most valuable feature is the continuous recovery with the five-second checkpoint interval. Just having those checkpoints prior to when a situation arises, we're able to get the transactional data that occurred right before the server failed. That has been a blessing for us, as we are able to provide a snapshot with no more than five seconds of data loss. This means that we don't have to recreate minutes or hours worth of data for an industry that includes fulfillment, shipping, warehousing, et cetera.
Zerto is very good at providing continuous data protection. It does a very good job keeping up with the system and it creates five-second interval checkpoints. This has been helpful when it comes to needing to fail something over, getting that last moment in time that was in a usable state.
This product is impressively easy to use. It's dummy-proof, once it's set up.
The long-term recovery is a little bit weak in its granularity. Veeam is definitely superior in that aspect, as it's able to provide a granular view of files and databases, et cetera. However, it just kind of depends on what a business' recovery strategy is.
From our business perspective, it's really not impactful to us because our recovery strategy is not based on individual files. But I could definitely see it being a challenge if there is a very large set of individual files that needs to be recovered. I think that if somebody has terabytes of data, Zerto will recover it faster, but navigating through the file explorer to get to files is not as easy with Zerto.
One thing I don't like about the product, and I know this is where their claim to fame is, but whenever I have a VPG that has multiple virtual machines in it, and one virtual machine falls behind, it'll pause replication on everything else in that job until the one server catches up. The goal is to keep symmetric replication processing going, so the strategy makes sense, but for our business model, that doesn't really work and it has created a challenge where I have to manage each VM individually. It means that instead of having one job that would cover multiple servers, I just have one job to one server, which allows me to manage them individually.
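For anyone adopting the same one-VM-per-VPG workaround, it is easy to audit for stragglers. Here is a minimal sketch, again assuming the ZVM REST API's /v1/vpgs endpoint and its VmsCount and VpgName fields (names may differ by version) and a pre-obtained session token, that flags any VPG still containing more than one VM.

```python
import os

import requests

ZVM = "https://zvm.example.local:9669"  # hypothetical ZVM address
headers = {"x-zerto-session": os.environ["ZVM_SESSION_TOKEN"]}

# List all VPGs and flag any that still group multiple VMs together, since one
# lagging VM would hold back replication for the whole group.
vpgs = requests.get(f"{ZVM}/v1/vpgs", headers=headers, verify=False).json()
for vpg in vpgs:
    if vpg.get("VmsCount", 0) > 1:
        print(f"{vpg.get('VpgName')}: {vpg.get('VmsCount')} VMs -- consider splitting")
```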
We have been using Zerto for approximately five years.
From a company perspective, a few years ago, I would have said that it is very stable. It is a solution that is thriving and growing. At this time, however, HP is in the process of acquiring them. While I had assumed that was their long-term plan, I didn't quite anticipate HP being the one to pick them up. As such, I am a little bit worried about what will change in the long term.
Scalability-wise, it's a very painless product. As we continue to grow out our virtual environment, Zerto is able to, in a very nimble fashion, scale with us with very little effort or overhead involved.
I'm covering approximately 400 VMs currently, which is approximately 360 terabytes worth of data. That is between two separate data centers.
Rating the Zerto technical support is a little bit tough because I've had some experiences that were truly 10 of 10, but then I've had one or two experiences where it was definitely a two or a three out of 10. It really depends on who I've gotten on the phone and their level of, A, comfort with their own system, and B, comfort helping the customer.
Some people have said this isn't within their scope of work, where others have said, "No, let's absolutely do this." In that regard, it's been a little hit and miss, but it's usually been a decent quality in the end.
Overall, I would rate the technical support a seven out of ten.
I have worked with Veeam in the past and although I prefer Zerto, there are some advantages to using Veeam. For example, long-term recovery offers more features.
In-house, we had also used the Unitrends product, as well as a SAN-to-SAN replication using an old HPE LeftHand array.
The main reasons that we switched to Zerto were the management ability, as well as its ability to provide continuous replication. Veeam was a very cumbersome product to manage. There were a lot of instances to monitor and manage from a proxy perspective, whereas Zerto's VRAs are relatively transparent in their configuration and deployment. These are painless and I don't have to continually monitor them. I don't have to update them since they're not like standalone Windows instances. It's very low management for us.
Of course, continuous replication is critical. When we owned Veeam, it claimed that 15-minute intervals were doable, but it never seemed to actually keep up with those 15-minute snapshot intervals.
One final reason that we migrated from Veeam is that they were utilizing VM snapshots at the time. I know that they've moved away from that approach now, but it was very painful for our environment at the time. The VMware snapshots were causing some of our legacy and proprietary applications to fail.
The initial setup is very simple.
Our implementation strategy involved setting it up for our two data centers. We have a primary and secondary data center, and Zerto keeps track of all of the VMs at the primary site and replicates them to the other site.
In the future, we plan on looking into the on-premises to cloud replication. On-premises to Azure direct is on our roadmap.
I completed the setup myself without support or anybody else involved in the deployment.
It took approximately an hour to deploy.
I handle all of the administration and maintenance. As the senior manager of infrastructure, I oversee our network and server group. I have also retained personal ownership of the disaster recovery plan and the failover plan.
We have probably not seen a return on investment from using Zerto. We don't really have lots of situations where we have to use it and can substantiate any kind of financial claim to it.
I do not like the current pricing model because the product has been divided into different components and they are charging for them individually. I understand why they did it, but don't like the model.
Our situation is somewhat peculiar because when we bought into it, we owned everything. Later on down the road, they split the licensing model, so you had to pay extra for the LTR and extra for the multi-site replication. However, since we were using LTR prior to that license model change, they have allowed us to retain the LTR functionality at our existing licensing level, but not have the multi-site replication.
We have not evaluated other options in quite a long time. We very briefly evaluated Rubrik.
When we first decided to implement Zerto, it wasn't very important that it provides both backup and DR in one platform. In fact, realistically, even now, while we have it and we used it on a limited scope, I'm not sure that it's needed.
With respect to our legacy solutions, I'd say that the cost of replacing them with Zerto is net neutral in the end.
My advice to anybody who is considering Zerto is that it's an awesome product and it won't steer them wrong. That said, there are some issues such as the licensing model and the situations where VPGs falling behind suspends the replication. Overall, it is a good product.
I would rate this solution an eight out of ten.
We use Zerto to replicate data between our on-premises data centers, as well as for replicating data to the cloud. It is used primarily for disaster recovery, and we're not using it very much for backups.
Continuous data replication is the most important feature to us, and we use it for disaster recovery. We have very short RPOs in the event of a data center outage.
With respect to ease of use, I would rate Zerto an eight out of ten. It is very easy to set up and utilize. The only reason I wouldn't give it a ten is that I would like to see more export capability. Right now, you can export your VPG to a spreadsheet, but you don't have a lot of control over what data goes there. You just get everything and the formatting isn't the best.
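Until the built-in export is more flexible, one workaround is to pull the VPG list from the ZVM REST API and write only the columns you care about. This is a minimal sketch under the same kinds of assumptions as elsewhere in this document: a /v1/vpgs endpoint, a pre-obtained session token, and field names such as VpgName, ActualRPO, and UsedStorageInMB, any of which may differ by version.

```python
import csv
import os

import requests

ZVM = "https://zvm.example.local:9669"  # hypothetical ZVM address
headers = {"x-zerto-session": os.environ["ZVM_SESSION_TOKEN"]}

# Only the columns we actually care about, instead of the full dump.
FIELDS = ["VpgName", "SourceSite", "TargetSite", "VmsCount", "ActualRPO", "UsedStorageInMB"]

vpgs = requests.get(f"{ZVM}/v1/vpgs", headers=headers, verify=False).json()

with open("vpg_report.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    for vpg in vpgs:
        # .get() keeps this tolerant of fields that differ between API versions.
        writer.writerow({field: vpg.get(field, "") for field in FIELDS})
```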
When we need to failover or move workloads, Zerto significantly decreases both the time it takes and the number of people involved. It only takes a single person to activate a failover and we can pretty much automate everything else. Instead of a week to recover a major application, we can do it in a day.
Mostly, this solution protects us from data center outages. With ransomware, it gets a little more complicated because depending on what they're doing, you could be replicating the encryption that they placed on you. Then, depending on how large your journal is, how far back you can go and how long the malware has been sitting in your network, it might not save you from a ransomware attack.
That said, it's still a major plus, because if you have enough tools in your environment to catch the fact that attackers have been there, and you have, say, 14 days in your journal, then you can go back far enough to before they placed any encryption on your files. But if you don't have other tools to also help protect you from ransomware, Zerto by itself may not be sufficient.
It's very rare that you have a true disaster where you have to fail over a data center. I see Zerto more often being used to deal with some sort of database corruption, where you can restore your primary site back to before the corruption. We need the protection Zerto provides, but a full data center failure happens so rarely that I can't say we have had any staff reductions because of it. We have no staff specifically set aside for data recovery.
Beyond the normal path of daily backups, recovery, and managing all of that, whether you're using Zerto for your backups or another backup utility in addition to Zerto, it hasn't really changed our staffing.
The most valuable feature is the quick RPO for replication, which is our primary use case.
Zerto should add the capability to replicate the same VM to multiple sites.
The export capability should be improved so that it is more customizable in terms of what fields are exported and what the formatting is.
I would like to see the ability for Zerto to handle physical servers, although that is becoming less important to us.
I have been using Zerto with my parent company for the past several months and had been using it at a previous company for two years before that.
Zerto is very stable.
The scalability is generally good.
We're on an older version, so this may have changed, but when it comes to cloud DR, they haven't kept up with Azure's capabilities. For example, Azure used to have an eight-terabyte limit on disk drives. Azure now has a 32-terabyte limit, but Zerto still has a limit of eight.
That said, when it comes to the number of VPGs and the number of instances, that has been sufficient for us. We have 646 VMs and 60 VPGs that are protecting 650 terabytes of data.
We have about four people who are managing it day-to-day. It is a shared role; our server engineering team is responsible for Zerto, and that team has approximately twelve people. They are all capable of utilizing Zerto, depending on their individual responsibilities, but there are probably no more than four people who currently use it on a daily basis.
We don't have one specific person to manage it but instead, we rely on the team. We're in the process of getting them all trained adequately.
I have been in contact with technical support and I would rate them a seven out of ten. They are similar to a lot of companies, where they're very quick to respond to simple issues that might be in a playbook, yet slow sometimes to get a more complex problem resolved.
This was our first true DR tool. Before that, we were just using backup solutions. The one that we were using most recently was IBM Spectrum Protect.
I have a lot of past experience in my previous company with RecoverPoint, as well as with CloudEndure. CloudEndure was used specifically for cloud DR with AWS.
Zerto is much easier to use than RecoverPoint. Both Zerto and CloudEndure are very easy to use.
The initial setup is pretty easy to do. I was not with this company when they implemented it, so I don't know how long it took them to deploy. However, in my previous company, we initially installed and set it up in a day. We didn't have much trouble.
At first, we only had a couple of small test instances. We started adding things that we needed, over time.
Using Zerto has saved us money by enabling us to do DR in the cloud because we did not have to purchase the infrastructure at the alternate site. It's difficult to approximate how much money we have saved because we never built a DR site for the applications that we now have replicated in the cloud. There has never been an on-premises solution for them.
It is relevant to point out that we're not using it so much for day-to-day backups, but rather, we're using it for continuous data protection for DR and we have not had any disaster, so it's difficult to quantify our return on investment from that perspective.
However, from the perspective of being able to do cloud DR and not having to pay for that infrastructure, and even when it comes to the ease of use when we're going from data center to data center, I think we've got a definite return on our investment in comparison to not having a continuous data protection tool.
There is a difference between what we do and what we would have been doing without a tool like Zerto. In this regard, Zerto is a kind of overhead because hopefully, you're not using it day-to-day in a real disaster. It's more like insurance.
We evaluated RecoverPoint, but Zerto's better integration into vCenter was probably the reason that we chose it.
We do not currently use Zerto for long-term retention, although we are looking at the feature.
I highly recommend Zerto. My advice for anybody who is implementing it is to go through all of the best practice guides and be sure to review whatever database they have in there. This way, they keep themselves efficient.
Also, it is important to keep in mind that consistency is only guaranteed at the VPG level. So, if you have multiple servers and applications that need to be consistent with each other, they really should be in the same VPG.
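One way to keep an eye on that rule is to check, per application, that every VM reports the same VPG. This is a minimal sketch, assuming the ZVM REST API's /v1/vms endpoint returns VmName and VpgName fields (which may differ by version), a pre-obtained session token, and a hypothetical application-to-VM mapping.

```python
import os

import requests

ZVM = "https://zvm.example.local:9669"  # hypothetical ZVM address
headers = {"x-zerto-session": os.environ["ZVM_SESSION_TOKEN"]}

# Hypothetical mapping of an application to the VMs that must stay consistent together.
APPS = {"ERP": {"erp-app01", "erp-app02", "erp-db01"}}

vms = requests.get(f"{ZVM}/v1/vms", headers=headers, verify=False).json()
vm_to_vpg = {vm.get("VmName"): vm.get("VpgName") for vm in vms}

for app, members in APPS.items():
    vpg_names = {vm_to_vpg.get(name) for name in members}
    status = "OK" if len(vpg_names) == 1 else "SPLIT ACROSS VPGs"
    print(f"{app}: {status} ({', '.join(str(v) for v in vpg_names)})")
```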
I would rate this solution an eight out of ten.
Originally, I was looking for a solution that allowed us to replicate our critical workloads to a cloud target and then pay a monthly fee to have it stored there. Then, if some kind of disaster happened, we would have the ability to instantiate or spin up those workloads in a cloud environment and provide access to our applications. That was the ask of the platform.
We are a manufacturing company, so our environment wouldn't be drastically affected by a webpage outage. However, depending on the applications that are affected, being a $15 billion company, there could be a significant impact.
Zerto is very good in terms of providing continuous data protection. Now bear in mind the ability to do this in the cloud is newer to them than what they've always done traditionally on-premises. Along the way, there are some challenges when working with a cloud provider and having the connectivity methodology to replicate the VMs from on-premises to Azure, through the Zerto interface, and make sure that there's a healthy copy of Zerto in the cloud. For that mechanism, we spent several months working with Zerto, getting it dialed in to support what we needed to do. Otherwise, all of the other stuff that they've been known to do has worked flawlessly.
The interface is easy to use, although configuring the environment, and the infrastructure around it, wasn't so clear. The interface and its dashboard are very good and very nice to use. The interface is very telling in that it provides a lot of the telemetry that you need to validate that your backup is healthy, that it's current, and that it's recoverable.
A good example of how Zerto has improved the way our organization functions is that it has allowed us to decommission repurposed hardware that we were using to do the same type of DR activity. In the past, we would take old hardware and repurpose it as DR hardware, but along with that you have to have the administration expertise, and you have to worry about third-party support on that old hardware. It inevitably ends up breaking down or having problems, and by taking that out of the equation, with all of the DR going to the cloud, all that responsibility is now that of the cloud provider. It frees up our staff who had to babysit the old hardware. I think that, in and of itself, is enough reason to use Zerto.
We've determined that the ability to spin up workloads in Azure is the fastest that we've ever seen because it sits as a pre-converted VM. The speed to convert it and the speed to bring it back on-premises is compelling. It's faster than the other ways that we've tried or used in the past. On top of that, they employ their own compression and deduplication in terms of replicating to a target. As such, the whole capability is much more efficient than doing it the way we were doing it with Rubrik.
If we lost our data center and had to recover it, Zerto would save us a great deal of time. In our testing, we have found that recovering the entire data center would be completed within a day. In the past, it was going to take us close to a month.
Using Zerto does not mean that we can reduce the number of people involved in a failover. You still need to have expertise with VMware, Zerto, and Azure. It may not need to be as in-depth, and it's not as complicated as some other platforms might be. The person may not have to be such an expert because the platform is intuitive enough that somebody of that level can administer it. Ultimately, you still need a human body to do it.
The most valuable feature is the speed at which it can instantiate VMs. When I was doing the same thing with Rubrik, if I had 30 VMs on Azure and I wanted to bring them up live, it would take perhaps 24 hours. Having 1,000 VMs to do, it would be very time-consuming. With Zerto, I can bring up almost 1,000 VMs in an hour. This is what I really liked about Zerto, although it can do a lot of other things, as well.
The deduplication capabilities are good.
The initial configuration of an environment in the cloud is difficult and could be easier. When it's on-premises, it's a little bit easier because it's a more controlled environment: it's a Windows operating system on a server, and no matter what server you have, it's the same.
However, when you are putting it on AWS, that's a different procedure than installing it on Azure, which is a different procedure than installing it on GCP, if they even support it. I'm not sure that they do. In any event, they could do a better job in how to build that out, in terms of getting the product configured in a cloud environment.
There are some other things they can employ, in terms of the setup of the environment, that would make things a little less challenging. For example, you may need to have an Azure expert on the phone because you require some middleware expertise. This is something that Zerto knew about but maybe could have done a better job of implementing it in their product.
Their long-term retention product has room for improvement, although that is something that they are currently working on.
We have been with Zerto for approximately 10 years. We were probably one of the first adopters on the platform.
With respect to stability, on-premises, it's been so many years of having it there that it's baked in. It is stable, for sure. The cloud-based deployment is getting there. It's strong enough in terms of the uptime or resilience that we feel confident about getting behind a solution like this.
It is important to consider that any issues with instability could be related to other dependencies, like Azure or network connectivity or our on-premises environment. When you have a hybrid environment between on-premises and the cloud, it's never going to be as stable as a purely on-premises or purely cloud-based deployment. There are always going to be complications.
This is a scalable product. We tested scalability starting with 10 VMs and went right up to 100, and there was no difference. We are an SMB, on the larger side, so I wouldn't know what would happen if you tried to run it with 50,000 VMs. However, in an SMB-sized environment, it can definitely handle or scale to what we do, without any problems.
This is a global solution for us and there's a potential that usage will increase. Right now, it is protecting all of our criticals but not everything. What I mean is that some VMs in a DR scenario would not need to be spun up right away. Some could be done a month later and those particular ones would just fall into our normal recovery process from our backup.
The backup side is what we're waiting on, or relying on, in terms of the next ask from Zerto. Barring that, we could literally use any other backup solution along with Zerto. I'm perfectly fine doing that but I think it would be nice to use Zerto's backup solution in conjunction with their DR, just because of the integration between the two.
In general, the support is pretty good. They were just acquired by HP, and I'm not sure if that's going to make things better or worse. I've had experiences on both sides, but I think overall their support's been very good.
Zerto has not yet replaced any of our legacy backup products but it has replaced our DR solution. Prior to Zerto, we were using Rubrik as our DR solution. We switched to Zerto and it was a much better solution to accommodate what we wanted to do. The reason we switched had to do with support for VMware.
When we were using Rubrik, one of the problems we had was that if I instantiated the VM on Azure, it's running as an Azure VM, not as a VMware VM. This meant that if I needed to bring it back on-premises from Azure, I needed to convert it back to a VMware VM. It was running as a Hyper-V VM in Azure, but I needed an ESX version or a VMware version. At the time, Rubrik did not have a method to convert it back, so this left us stuck.
There are not a lot of other DR solutions like this on the market. There is Site Recovery Manager from VMware, and there is Zerto. After so many years of using it, I find that it is a very mature platform and I consider it easy to use.
The initial setup is complex. It may be partly due to our understanding of Azure, which I would not put at an expert level. I would rate our skill at Azure between a neophyte and the mid-range in terms of understanding the connectivity points with it. In addition to that, we had to deal with a cloud service provider.
Essentially, we had to change things around, and I would not say that it was easy. It was difficult and definitely needed a third party to help get the product stood up.
Our deployment was completed within a couple of months of ending the PoC. Our PoC lasted between 30 and 60 days, over which time we were able to validate it. It took another 60 days to get it up and running after we got the green light to purchase it.
We're a multi-site organization, so the implementation strategy started with getting it baked in at our corporate location and validating it. Then we built out an Azure footprint globally and extended the product into those environments.
We used a company called Insight to assist us with implementation. We had a previous history with one of their engineers from earlier work we had done, and we felt he would be a good person to walk us through the implementation of Zerto. That was coupled with Zerto engineers working with us as well, so we had a mix of people supporting the project.
We have an infrastructure architect who heads the project. He validates the environment, builds it out with the business partners and the vendor, helps figure out how it should be operationalized, and configures it. Then it gets passed to our data protection group, whose admins administer the platform, and it largely maintains itself.
Once the deployment is complete, maintaining the solution is a half-person effort. There are admins who have a background in data protection, backup products, as well as virtualization and understanding of VMware. A typical infrastructure administrator is capable of administering the platform.
Zerto has very much saved us money by enabling us to do DR in the cloud rather than in our physical data center. To do what we want to do with the same type of hardware, standing it up and keeping it at the ready with support and maintenance, would be a huge cost compared to what I'm doing now.
By the way, we are doing what is considered a poor man's DR. I'm not saying that I'm poor, but that's the term I place on it because most people have a replica of their hardware in another environment. One needs to pay for those hardware costs, even though it's not doing anything other than sitting there, just in case. Using Zerto, I don't have to pay for that hardware in the cloud.
All I pay for is storage, and that's much less than what the hardware cost would be. Running that environment with everything on there, just sitting, would cost on the order of ten to one.
I use that ratio because the storage it replicates to is not the fastest. There are no VMs and no compute or memory associated with the replication, so all I'm paying for is the storage.
So in one case, I'm paying only for storage, and in the other case, I would have to pay for storage plus hardware, compute, and connectivity. Storage is inexpensive, but once you add up compute, maintenance, networking connectivity between the sites, and the soft costs and man-hours to support that environment just to have it ready, I would say ten to one is probably a fair assessment.
When it comes to DR, there is no real return on investment. The return comes in the form of risk mitigation. If the question is whether I think that I spent the least amount of money to provide a resilient environment then I would answer yes. Without question.
If you are an IT person and you think that DR is too expensive then the cloud option from Zerto is good because anyone can afford to use it, as far as getting one or two of their criticals protected. The real value of the product is that if you didn't have any DR strategy, because you thought you couldn't afford it, you can at least have some form of DR, including your most critical apps up and running to support the business.
A lot of IT people roll the dice and they take chances that that day will never come. This way, they can save money. My advice is to look at the competition out there, such as VMware Site Recovery, and like anything else, try to leverage the best price you can.
There are no costs in addition to the standard licensing fees for the product itself. However, for the environment that it resides in, there certainly are. With Azure, for example, there are several additional costs including connectivity, storage, and the VPN. These ancillary costs are not trivial and you definitely have to spend some time understanding what they are and try to control them.
I looked at several solutions during the evaluation period. When Zerto came to the table, it was very good at doing backup. The other products could arguably instantiate and do the DR but they couldn't do everything that Zerto has been doing. Specifically, Zerto was handling that bubbling of the environment to be able to test it and ensure that there is no cross-contamination. That added feature, on top of the fact that it can do it so much faster than what Rubrik could, was the compelling reason why we looked there.
Along the way, I looked at Cohesity and Veeam and a few other vendors, but they didn't have an elegant solution or an elegant way of doing what I wanted to do, which is sending copies to an inexpensive cloud storage target and then having the mechanism to instantiate them. The mechanism wasn't as elegant with some of those vendors.
We initially started with the on-premises version, where we replicated our global DR from the US to Taiwan. Zerto recently came out with a cloud-based, enterprise variant that gives you the ability to use it on-premises or in the cloud. With this, we've migrated our licenses to a cloud-based strategy for disaster recovery.
We are in the middle of evaluating their long-term retention, or long-term backup solution. It's very new to us. In the same way that Veeam, and Rubrik, and others were trying to get into Zerto's business, Zerto's now trying to get into their business as far as the backup solution.
I think it's much easier to do backup than what Zerto does for DR, so I don't think it will be very difficult for them to deliver table-stakes backup, which is file retention for multiple targets and that kind of thing.
Right now, I would say they're probably at the 70% mark as far as what I consider to be a success, but each version they release gets closer and closer to being a certifiable, good backup solution.
We have not had to recover our data after a ransomware attack but if our whole environment was encrypted, we have several ways to recover it. Zerto is the last resort for us but if we ever have to do that, I know that we can recover our environment in hours instead of days.
If that day ever occurs, which would be a very bad day if we had to recover at that level, then Zerto will be very helpful. We've done recoveries in the past where the on-premises restore was not healthy, and we've been able to recover them very fast. It isn't the onesie twosies that are compelling in terms of recovery because most vendors can provide that. It's the sheer volume of being able to restore so many at once that's the compelling factor for Zerto.
My advice for anybody who is implementing Zerto is to get a good cloud architect. Spend the time to build out your design, including your IP scheme, to support the feature sets and capabilities of the product. That is where the work needs to be done, more so than the Zerto products themselves. Zerto is pretty simple to get up and running but it's all the work ahead in the deployment or delivery that needs to be done. A good architect or cloud person will help with this.
The biggest lesson that I have learned from using Zerto is that it requires good planning but at the end of it, you'll have a reasonable disaster recovery solution. If you don't currently have one then this is certainly something that you should consider.
I would rate Zerto a ten out of ten.
We are an electric utility and we have some pretty critical workloads. We have identified the most critical workloads in our environment and have implemented Zerto as a protective measure for them.
We try to keep our critical workloads protected, which are a subset of our systems. For example, we're not going to protect a print server with Zerto.
The fact that Zerto provides continuous data protection is key for us. We have tested on a regular basis, and in one case, we tested our entire ERP system. It is a pretty big workload that includes Linux servers, databases, and other components. It's about a 45-minute window to get it back up and running. For our test, we moved the entire system to our DR facility on a weekend, ran it for an entire week from the DR site, and then brought it back the following Sunday. It worked flawlessly.
I really like the 24-hour DVR-like rollback. For example, we had an issue a few years ago, when we still had an Exchange server on-premises. One of my staff came in for the morning to do vulnerability management, saw that some updates needed to be applied, applied the updates to the Exchange server, and it totally broke it. Everybody's email was down. To resolve things, we went to Zerto, rolled back to before the updates, and it was all done in less than five or 10 minutes. It was really quick. All of the email functionality was restored and it popped up and said, "Hey, you need an update." I said, "Please do not do that update." It was pretty good.
Zerto is easy to use and the interface is very intuitive. We have never had an issue with using it. We have just a one-man team to perform failbacks or move workloads. It is very simple to do; during our test with the Exchange server, it was only a matter of a few clicks. It's always been an excellent product and they've only improved it over time. We're really pleased with it.
The integration with VMware is really good.
It would be nice if we were able to purchase single licenses for Zerto. As it is now, scaling requires that we purchase a multi-pack. It hasn't been a big deal for us but it would still be helpful to have a little bit more granularity on the license count.
The only timeline or limiting factor, in my opinion, is how long it takes to replicate. That all depends on your infrastructure, and we happen to be pretty fortunate that we have a nice pipe between the two locations, between here and our DR site. If you don't have that kind of bandwidth, it's just a matter of time: you wait long enough for it to replicate over and then you're covered.
I have been using Zerto for approximately seven years.
We do the updates regularly and Zerto has never given us problems. We work with a lot of different technologies and we have a lot of problems, but Zerto has not been one of them.
We haven't had much opportunity to explore scalability at this point. We're responsible for another organization's IT, as well. They're a sister company of ours and they're smaller than us, so we do all of their IT and we have them on Zerto. They're using us as a DR point.
From an expansion perspective, we scaled up from our initial install to include theirs as well, which came pretty close to doubling our license count.
We are 100% deployed at this point. It is possible that we will add another sister company, because we have other sister companies where opportunities may arise. A lot of the time, they're so small that they can't afford IT, so it's easier to have us manage it. In cases like this, we may have an opportunity to deploy Zerto.
We have a very small team of three people, so Zerto does not affect our headcount. There is me, who is the manager of IT or manager of information services. Then, we have our desktop technician, and then we have our network administrator.
We have never had to use Zerto's technical support for anything major. Any time that we have had to contact them, it has been for minor stuff and it's worked out fine.
A long time ago, when we had an EMC SAN, there was a VMware plugin that served as a replication solution. However, it was terrible and it never worked.
Zerto is a major upgrade that is easier to use, and the experience of switching was excellent.
Replacing our legacy solution with Zerto has definitely saved us time and improved the quality of our process. I never felt like I could trust our previous solution, which was a big deal because when you're talking about backups, trust is a major factor. You have to be able to trust your solution and feel like it's going to work in a bad situation.
Zerto is one of those things that you love to have but you hate to have to use because it means that something bad is going on. That said, if there are serious problems then you want to have something that's rock solid. For us, that's Zerto, and we feel strongly about that.
The initial setup was very straightforward. We had some training with some Zerto engineers on how to set up the recovery groups and other things, but once that was set up, we made several changes later on as we played with it. Overall, it was very straightforward to configure and I think that we only had an hour of training.
The deployment took us a couple of weeks to get everything figured out, although it wasn't necessarily Zerto that was the hold-up. We only had a certain number of licenses, perhaps 15 in total. We spent time trying to determine which were our critical workloads, and there was some internal debate about it. From the Zerto perspective, there weren't a lot of issues.
It didn't take a lot of time, just a couple of weeks to get us up and going. We were actually up and technically running within that same day, but to truly boot it and get it where it needed to be, it took a couple of weeks. It was a new technology to us at the time, so it took a while to get up to speed with it.
In terms of our implementation strategy, we just tried to identify the critical workloads, find the ones that really needed to be protected and start to make those recovery groups. Then, we organized them in such a way that things worked properly. For example, the components of our ERP system do have to come up in a certain order. Finding all of that stuff out and fine-tuning the process was part of our strategy. Then, we slowly started moving those workloads across. We broke it down into groups and we did those groups one at a time until the implementation was complete.
Our in-house team was responsible for implementation.
Maintenance-wise, we just keep it updated. Our network administrator applies the updates and checks the health from time to time. We have a dashboard on our big screen if we feel the need to monitor it. If we walk by and it looks like a protection group is in the red or yellow, then we look at what needs to be done to get the problem straightened out.
Price-wise, it's right in line with what we would figure. For what you get for it, it's really a good value, and we've never had any problem renewing it or anything like that.
License-wise, we budgeted $1,000 per VM. The minimum spend on it, in the beginning, can sometimes be a little bit of a headache for people, and they might have to budget creatively to get there, but once you're there, the renewals are worth it.
Licensing requires purchasing packages that consist of several licenses, and they cannot be purchased one at a time.
We paid for an hour of training that we took but otherwise, there have been no costs in addition to the standard licensing fees.
We began looking at Zerto for several reasons including the cost, ease of use, and really, the flexibility of it. When you want to switch it over and do a different workload, it's not that big of a deal.
When we first began to consider using Zerto, we had a discussion with a grocery chain that is close to us. It's a specialty grocery chain and they have exotic foods sold out of two different locations. Christmas is their busiest time of year and they have several cash registers at each location doing transactions constantly.
They had to use Zerto during the middle of that Christmastime rush and failover, from one site to the other, all of their point of sale systems. They never lost a penny in transactions. For us, that was a big testimonial. They have a similar size of environment to ours as far as server infrastructure goes, so we didn't even look at anything else.
At this time, we don't use Zerto for long-term data retention. Instead, we have some other technologies in place for that. We have Veeam, we have some SAN replication, and we have some network-attached storage as well. We use Zerto as our first line of defense. For example, in response to a ransomware attack, we would use Zerto for sure to roll back to before that event happened.
We have not had a ransomware attack, at least not yet. We fully expect that, if it ever does happen, we'll definitely utilize Zerto. It is essentially our insurance policy. If we ever have a ransomware incident, that would be our first line of defense to recover from it. In fact, we really haven't had many opportunities to use Zerto, thankfully. Zerto is one of those things that is great to have, and we're glad we have it, but we hope we never have to use it.
At this time, everything we do is on-premises but having DR in the cloud with Zerto is definitely something that we want to do in the future.
It is not important to us that Zerto offers both backup and DR functionality. For backup, we have it covered in other ways. Being in the utility business, we're very big on redundancy. In fact, we have backups to cover the backups and we have about five different levels of them that we utilize. Zerto covers the front line, and when something bad happens, we can roll back within a 24-hour period using it. Then, we have deeper levels handled by other products like Veeam. Funnily enough, Veeam kept telling us that they would add Zerto-like features, and at the same time, Zerto kept telling us that they would add Veeam-like features. We continue to use both of them.
I've recommended Zerto to several IT professionals that I've talked to because it's such a good product. I give them examples of what we have done.
Overall, it's a fantastic product.
I would rate this solution a ten out of ten.
We mostly use it just for disaster recovery. We also utilize it for our quarterly and annual DR tests.
It is on-prem. We have a primary location and a DR location.
Since we are at a bank, there are certain protocols in place where we need to have RPO and RTO times of two hours or less. Zerto does a great job of setting those times and alerting us if those can't be met. We have our help desk actively monitoring that. It is extremely helpful that Zerto lists what is falling out of compliance in regards to RPO and RTO. It has been great in that regard.
If we need to fail back or move workloads, Zerto decreases the number of people involved by half versus companies of similar size who don't have Zerto.
We have had patches that have broken a server. We then needed to have it right back up and running. We have been able to do that, which has been a huge plus.
The real-time data protection is the most valuable feature. We are able to quickly spin up VMs instantly.
We have also utilized it, from time to time, if our backups didn't catch it at night. If something was deleted midday, this solution is nice because you can use Zerto for that.
I would rate Zerto very high in terms of it providing continuous data protection. We have had multiple instances that took days with our old DR test (before I was at my current company) and DR tests from other companies where I worked that didn't have Zerto. Now, we can realistically do DR tests in less than 30 minutes.
Zerto is extremely easy to use. If 10 is absolutely dummy-proof, I would give the ease of use an eight.
It has a file restore feature, which we have tried to use. We have had some issues with that because the drives are compressed on our main file server, which is Windows-based. It compresses the shares, and Zerto can't restore those by default. However, we have used file restore for other things, and it is pretty handy.
I would like it if they would really ramp up more on their PowerShell scripting and API calls, then I can heavily utilize PowerShell. I am big into scripting stuff and automating things. So, if they could do even more with PowerShell, API calls, and automation, that would be fantastic.
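For what it's worth, a fair amount of this kind of automation is already reachable over the ZVM's REST API, even before any PowerShell-specific improvements land. The sketch below is in Python purely for illustration (the same HTTP calls can be scripted from PowerShell); the port, endpoint paths, and field names are assumptions based on older ZVM API releases and should be checked against the API reference for your version.

```python
# Hypothetical sketch: poll a Zerto Virtual Manager (ZVM) REST API for VPG RPO
# compliance. Port, endpoint paths, and field names are assumptions taken from
# older ZVM API versions -- verify against your release's API reference.
import requests

ZVM = "https://zvm.example.local:9669"   # placeholder ZVM address
RPO_SLA_SECONDS = 2 * 60 * 60            # e.g. a two-hour RPO target

def get_session(username: str, password: str) -> str:
    """Open an API session; the session token comes back in a response header."""
    resp = requests.post(f"{ZVM}/v1/session/add",
                         auth=(username, password), verify=False)
    resp.raise_for_status()
    return resp.headers["x-zerto-session"]

def vpgs_out_of_compliance(session: str) -> list:
    """Return VPGs whose reported RPO exceeds the SLA."""
    headers = {"x-zerto-session": session}
    vpgs = requests.get(f"{ZVM}/v1/vpgs", headers=headers, verify=False).json()
    return [v for v in vpgs if v.get("ActualRPO", 0) > RPO_SLA_SECONDS]

if __name__ == "__main__":
    token = get_session("admin", "password")          # placeholder credentials
    for vpg in vpgs_out_of_compliance(token):
        print(f"VPG {vpg.get('VpgName')} RPO {vpg.get('ActualRPO')}s exceeds SLA")
```

Run on a schedule, something like this could feed the same RPO dashboard the operations or help desk teams described elsewhere in these reviews already watch.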
I have been using it at my company for almost four years. My company has been using it for six years.
I would rate stability as eight and a half out of 10.
I would rate scalability as eight out of 10.
We monitor and use it every day. Our current license count is 150 VMs. I could definitely see us increasing that license because we keep adding more VMs.
As big as our company is, we don't have a very large infrastructure sysadmin group. I wouldn't say that Zerto has reduced our staff in any kind of way, but it definitely has helped the small amount of people that we have.
We have around 20 people using it.
I would rate the technical support as nine and a half out of 10. I thoroughly enjoy the fact that they are located in Boston, and you feel like you are talking to someone just like you. They do an excellent job of following up and escalating anything that is needed. I rarely have to call Zerto support, but I am confident that anytime I need to, then it will be resolved.
We stay in close contact with our main local rep.
My company never used anything quite like Zerto. We still use things for backup and recovery, such as Dell EMC Avamar, which used to be NetWorker. We also use RecoverPoint for applications, but it is not at all the same. There is actual real-time recovery. It is kind of a different animal.
I have had to redeploy it a few times with data center changes and such. We went from your typical data center to Cisco UCS Blades to VxRack, a VMware Dell EMC product. With that, I had to deploy it from scratch.
It was pretty straightforward. There is plenty of very easy to follow documentation when it comes to implementing it. There is also a lot of training provided so you can understand it before you implement it. Those two things make it pretty easy.
Just to stand it up and get everything going, that took an hour or two. The overall implementation was over the course of three days, because our core is heavily utilized.
We had a Zerto Virtual Manager (ZVM) on our production side and another at our DR site. Most of our data is replicated from production to DR. We do have some replicating from DR back, but not a lot. Our main concern was the link between the two sites, because we don't have a very large pipe. Even though Zerto's compression is pretty good, we didn't want to send all of that data back over. Our main priority, when we set it up again, was to retain a lot of the data at our DR location and remap it by using preseeded disks, which was huge.
At least two staff members are required for deployment and maintenance. Whenever an update is released, we try to do that fairly quickly. For quarterly updates or major releases, we try to stay on top of them. Then, whenever we deploy new systems, applications, or servers, depending on the RTO and RPO, we add Zerto to those. That is daily, depending on how much workload we have and how many servers we are deploying. Those two people add those groups and such configuration into Zerto.
From an implementation standpoint, just follow the guide and check their support page for things. Worst case, reach out to support if you have already paid for it. It is pretty straightforward.
Zerto has helped reduce downtime. We have had servers go down and could easily spin them back up at our DR location almost instantly. Instead of taking an hour, it took a minute.
On average, it saves us three to five hours a day.
We pay for 150 VMs per year. It is not cheap.
Having backup and DR together is moderately important to us. The problem for us, and a lot of companies, is the storage cost with on-prem Zerto: it utilizes whatever you have for a SAN. Or, if you are like us, we have a vSAN, and that storage is not cheap. So, it is cheaper to have a self-contained backup system on its own storage rather than utilizing your data center storage, like your vSAN. While it is somewhat important to have both backup and DR, it is not incredibly important to have both. I know Zerto is trying to heavily dip their toes into the water of backup and recovery. Once you see what Zerto can do, I don't think anyone will pass on it just because it doesn't specialize 100 percent in backup and recovery. They do replication so well.
Zerto did really well with presenting their solution to the management here, really getting people involved, and helping them understand what and how it could be used. At the time, their real-time recovery was pretty far above anybody else available, and even still somewhat.
Other solutions would take an entire workday to recover our core infrastructure. With Zerto, we are done within an hour for all our major systems.
As far as the GUI goes, Zerto is more user-friendly than a lot of other products, such as Avamar and Commvault. It is fairly easy to use, but I think the GUI interface of Zerto is pretty far above the rest.
We use Avamar, and I don't see Zerto replacing Avamar, for the simple fact of retention and how expensive the storage is. Using high-RPM storage is pretty pricey, especially to rely on it for a long retention period of seven years, for instance.
When it comes to purchasing, I highly recommend Zerto all the time to friends that I have at other companies.
It is just for DR. We keep an average of three days of retention, i.e., a journal history of three days, although it is not always the same across everything we protect. We don't really keep it for backups; that is more of a convenience thing.
Currently, we don't utilize the cloud. It may be an option in the future. The cloud was a bad word for our bank for a long time, and that is starting to change.
Biggest lesson learnt: DR tests don't have to be so painful.
I would rate Zerto as 10 out of 10.
We're using it for site-to-site replication and failover, or disaster recovery. We're primarily using it to replicate between the data centers that we own and operate.
We've had a few disasters where a site has gone out due to outages or hardware failures. However, with a single click, we can fail everything over, and when the other site comes back up, it automatically re-replicates in the reverse direction, so there is no extra manpower required. Normally, we would be spending hours and hours cleaning up from the failover event.
We enjoy the simplicity of not only configuring replication but failing over with a single click and then having it automatically reverse replication. We've had other products, such as Veeam, and their replication works; however, it's very cumbersome to configure. When you fail over, there's a bunch of work you have to do after the fact to reverse the direction, restore the VM, and sort out how it is named and which environment it shows up in.
In terms of continuous data protection, it's the best product that we've found that does this. It's not snapshot-based. It's continuous, so there are no specific points in time we have to worry about recovering to or from. It's pretty much any time, as long as it's within our replication window.
The solution is very easy to use. It's very straightforward. You don't really have to do a lot of reading through the documentation, or things like that. You can basically scroll through the menu and figure it out.
We have not had ransomware, so we haven't had to deal with that; however, we definitely had a disaster recovery event where a site stopped unexpectedly and we had to fail over. It saved us from significant data loss: normally, we would have lost six hours' worth of customer data, but in this case it was seamless and we lost only seconds' worth.
The solution has reduced downtime. It has done so a couple of times. There could be some cost savings there. It's just not something we calculate.
The backup solution needs to be improved. From our perspective, Veeam and Zerto were competing products. They both do very unique things that they're very good at. For instance, Veeam can do replication well. However, it's really a backup product. Zerto can do backup, and yet it's really a disaster recovery product. It would be great if they could improve upon the backup functionality, or continually improve. We've seen some improvements, however, if they continue improving upon that it may eventually eliminate the need for the other product.
I've been using the solution for about three years.
The solution is very stable. We haven't had any issues. The only issue we had was a DHCP one: we didn't assign static IPs to a couple of the VRAs (the agent for each ESXi host), and we were seeing gaps in replication when the IPs changed. We have since made those static, and that resolved the issue.
We find that it's very easily scalable. The resource overhead is very minimal so it's really easy to scale up the environment and the product kind of automates the process for you. You select where you want it, hit install, and it handles it for you.
About five people use the product in our company. We have some system administrators, we have a couple of programmers and we have a DBA.
We have around a quarter of our environment replicated with Zerto. It's mostly our critical infrastructure.
We may possibly increase usage over time.
Technical support is good. I'd give it an eight out of ten. They're pretty quick to respond. They are almost always able to resolve my issue. I have no complaints. I only had a couple of support tickets, however, the experience was pretty good.
That said, their web portal is a bit clunky to navigate. For example, putting in a request, knowing where to go, or pulling up documentation or upgrading information wasn't quite as intuitive as it could be.
We are still using Veeam mostly for backup tasks. We use Zerto for site recovery.
The initial setup was very straightforward and easy.
The installation was simple. There are lots of guides and information. There are YouTube videos. They had training classes that were free that you can go to and they have a little lab environment. Even without the assistance offered, the way you install it is very straightforward and very simple. Really anybody can run the installer and have an idea of what they're doing right out of the gate without really any training.
Deployment took around a day.
We did have a specific deployment plan and we were able to execute that in about a day. Getting all the sites set up and then the VMs replicated was fast.
We have five people on staff that can handle deployment and maintenance.
We didn't use an integrator or consultant. We just did it ourselves.
There's not a direct ROI as it's being used as an insurance policy. The only time it really benefits us is when something bad happens.
It's reasonably affordable. Obviously, cheaper would always be better, however, it's not out of the expected range. We are just paying by VM. It's my understanding there are no extra fees.
I can't remember the companies off the top of my head as it's been a few years since we've done it, however, we evaluated five or ten different options that were popular at the time. Some of them were integrated with hardware. Some of them were software only.
In the end, it came down to Zerto due to simplicity. It's very simple and straightforward. It removes all the overhead of management and knowing what is active or what's the standby copy. It handles all of those pieces for you.
We're probably on the latest version or one version behind.
We use the product very lightly, for very specific things. We have a couple of workloads that are very high data rate and very high I/O, for which we cannot use traditional snapshot-based technology, and we are using Zerto to do a long-term backup of those.
The solution has not reduced the number of staff involved in data recovery situations. We have maintained exactly what we had. It's simplified it so it's possible to have a reduction, however, we haven't done any reduction from that.
The biggest piece of advice I could give is if you want the best-in-class for failover and replication, as well as ease of management, there is no better product that I've seen so far. Whether hardware or software combinations, this has been the simplest deployment and it just works.
I'd rate the solution at a ten out of ten.
Database replication is our primary use case. We don't use Zerto for backups. We use Zerto as DR at our sites.
It is deployed on-premises at several sites.
It is a good DR solution for a number of databases that we use. It is just a DR solution that works. It does that continuously. I like it. Once it works, it is set and forget.
We have tested it for data recovery situations due to ransomware or other causes, but we haven't had to perform an actual recovery in a real DR scenario. We do DR testing, and the switchover is pretty good.
Recovery is the most valuable feature. It has a good DR solution.
Zerto's ability to provide continuous data protection is good. It works. It has continuous availability.
It is pretty easy to use. The interface is intuitive and easy to use. Once you set it up, it just works. So, it is great.
I like that it's low maintenance; it's set and forget.
As far as replication technology goes, it is a pretty good product.
It has some quirks. We have quirks with appliances. Some things don't really work as expected, but it is minor. It doesn't really affect the overall functionality.
I have been using it for a couple of years.
The stability is good. There have been no major issues.
I have 12 engineers on my team. We maintain and administer Zerto for our organization.
It scales pretty well.
We have a large environment, but this actual deployment is pretty limited. It has a targeted use. We are protecting some production databases.
The support is good. I would rate them as eight out of 10.
We had another version of Zerto that we decommissioned, then we built a new one.
Zerto was already present in the company when I came onboard. We just renewed the support and built several new instances.
I was involved in the initial setup from the start. It is very complex as far as the requirements and what needs to be laid out for Zerto, but once you set it up, it is fine. It works.
Back and forth, our deployment took a couple of weeks. It was deployed across different sites. We needed to set up and provision resources, open networks and firewalls, and allocate dedicated storage. We also needed to install it, build it out, deploy the agents on the hosts, and configure it.
I deployed the solution myself.
I am pretty sure my company has seen ROI.
It is a good solution. It doesn't work for everything, but for certain use cases. You can't restore your entire site with it. However, if you need to restore or replicate a certain number of production and mission-critical applications, it is a good solution for that.
I like Zerto. We have big databases, so it does take a lot of storage to replicate it, but I think it is a good solution. I would recommend it.
I would rate it as eight or nine out of 10.
We have Zerto as an emergency backup if we were to lose electricity or compute.
I purchased Zerto because I wanted to get a return to operations and to minimize the downtime.
The return to operations is the most valuable feature because it decreases the amount of time it takes us to recover.
Zerto is the best of breed when it comes to providing continuous data protection.
It has a number of features rolled together, so when we need to fail over, it does so successfully without a lot of tuning behind the scenes. We use Zerto for the short-term retention of the data.
I would rate its ease of use as an eight out of ten. It has made it a lot easier for us to failover. Usually, in the past, we had to manually go and bring things up and this automates it.
The solution decreases the time it takes and the number of people we need when we fail back or move workloads. It saves around eight hours and reduces the effort by one person; we had started off with two to three people.
It could save us time in a data recovery situation due to ransomware or other causes but we haven't used it for that.
We haven't had something where we had to recover data using this product, but I assume it would reduce the number of staff involved in data recovery situations.
It has helped to reduce downtime in testing but we haven't had a serious issue where we had to switch over and use it.
The documentation needs improvement in terms of the setup: it needs enough detail to get you up to speed.
I have been using Zerto for about a year.
We found Zerto to be pretty stable.
We haven't had problems with scalability.
We don't really have users as such; we just have data that we move over, which covers basically the whole campus.
We need at least one full-time employee to run it.
It's used for all of our failovers so it's in 100% usage.
I have had a little bit of experience with their technical support. I don't have any issues with them.
The ease of use, compared to other products, is much better. Zerto is all-encompassing.
We had to work on it for about a week to get it running the way we wanted. It took that long because of the fine-tuning. We could have set it up within three hours or so just as a test to see it work, but that would not necessarily have done everything we wanted to do.
Syncing the data up took a little bit longer.
We'll probably see ROI in around three years.
The pricing is more expensive, but the functionality is what we wanted.
There are no additional costs to standard licensing.
We also looked at Druva. We liked the flexibility that we get with Zerto.
You'll be happy with Zerto.
The biggest lesson I have learned from Zerto is to be patient.
I would rate Zerto a nine out of ten.
I am a solution provider and Zerto is one of the products that I implement for my clients.
Most of my customers use this product for disaster recovery purposes. Some of them use it in a local, on-premises environment, whereas other customers use it in the cloud.
We have assisted some of our clients with on-premises to cloud migration. These were customers that had an established local environment but wanted to explore the cloud. For these clients, it is a cloud-based DR implementation.
There are four or five customers that did not want a cloud deployment, so we have implemented the DR site on-premises for them.
If the client is given the choice, typically they prefer a cloud-based deployment. CDP technology is becoming the new norm, even for the backup industry. However, there are some instances where it is not an option. For example, in some situations, they cannot use cloud-based storage due to legal and compliance requirements.
Some of our customers that are making a digital transformation cannot afford to lose hours or even minutes of data. As such, I think that cloud-based disaster recovery is the future and the customers understand why it is much more important for them. Together with our reputation, I see this as a game-changing situation.
Most of my customers are interested in DR and do not know much about the long-term retention capability. Our last three deployments already had a backup solution implemented by the integrator, so they didn't need a nightly backup from Zerto to avoid data loss. We discussed this with them and explained that this product offers much more than what they are using it for. We pointed out that it is a two-in-one solution, but they continue to use it primarily for DR.
Our customers find that the interface is really easy to use. It gives you a great deal of flexibility for the administrators, as well as for the end-users to a certain extent. Overall, with respect to ease of use, this product scores the highest points in this area.
The functionality available in the console is not complicated and is easy to use, especially for DR failover. It just works.
It offers a high level of compression, which is very good. My customers and I are interested in this feature primarily because it saves bandwidth.
The most important feature is that the recovery point (RPO) is less than one minute. This is really good for our customers, as they can keep their data loss to a minimum.
I would like to see a separate product offer for performing backups, although I think that this is something that they are expecting to release in the next version.
We have been using Zerto for between three and four years.
Based on the number of support calls that I get from my customers, where we have done the deployment, issues arise very rarely. From time to time, we get calls because the allocated space is running out. Otherwise, it is pretty much stable.
Even the situation where the allocated space runs low is rare and I haven't had this type of call in a long time. The reason for this is that I take precautions during deployment. For example, I check to see whether they have too many workflows. I know what it is that we need to do including how many VRAs we need to deploy and what the configuration should be. Over the past three to four years, I have only had to deal with four or five support tickets. Apart from that, I haven't experienced any problems.
I do not have a great deal of experience with scaling this product because all of my customers have only a few hundred VMs. I know that Zerto has the capability to go beyond 5,000 or 10,000, but that is something that I've never experienced. My understanding is that it is very capable at the data center management level.
In the initial phase, I leveraged technical support, but then I completed the deployment.
During the PoC, there were one or two times where I had to contact them to deal with issues. I am pretty happy with how they respond and how they follow up compared with the other vendors that I work with.
I don't have much of a complaint with respect to support.
I have been working with Zerto since version 6 and the most recent one that we deployed was version 8.5. Approximately six months ago, our customer that was using version 6 was upgraded to version 8, because version 8.5 was not yet released.
I also have experience with Veeam, but Zerto uses a very different technology to perform the backup and change tracking. Veeam leverages VSS (Volume Shadow Copy Service) technology, which will do the job but is not ideal. Zerto has taken a step ahead by utilizing its journal technology, which is the main difference that I can think of between these two products.
Prior to working with Zerto, many of my clients were using the VMware Site Recovery Manager (SRM) feature, which comes built into the product, based on their licensing. I have also had a customer who was using Commvault and others that were using NetBackup. These are typically the enterprise-caliber products that I expect to find.
One of my customers is using Veeam and because of the difference in price, with Zerto being more expensive, they did not switch. My customer felt that Veeam was convenient and the price was more tolerable. This is the only instance where my customer did not transition to Zerto.
The customers who switched have done so because Zerto provides the lowest RPO and RTO. It is one of the main points that I emphasize about this product because it is very important to them. There is also a saving in bandwidth, which is something that my customers are concerned with because they typically don't have fancy high-speed connections. The compression is superb and really helps in this regard. These are the two primary selling points.
For us, this solution is not difficult to deploy. For a complicated environment then you have to do careful planning but otherwise, it is not hard to deploy.
Typically, if everything is well in place, the deployment will take between one and three hours. In cases where the customer's environment is very complex then I might need a little bit more time. I would estimate that it would take six-plus hours, after careful planning and ensuring that all of the resources are in place.
The installation takes less than 30 minutes; however, the customer environment increases the time because we have to do things like open ports on the firewall. We tell them about these preparations in advance but we always end up doing some of the work ourselves. In situations where the firewall has already been properly configured, I can normally complete the installation and configuration in one hour.
I have two customers that use the cloud-based deployment on Azure but the majority of them use it in a local, on-premises environment.
The main challenge that I face with this solution is the price. All of my customers are happy with how this product works and they like it, but unfortunately, in the market that I represent, Zerto is expensive when compared with the competition.
Another issue is that Zerto has expectations with respect to the minimum number of devices that they are protecting at a given price range. I understand that this is an enterprise product, but unfortunately, price-wise, it is really tough when it comes to the TCO for the customers in the one or two countries that I represent. Apart from that, everyone understands the value, but at the end of the day it comes down to the price being slightly higher.
Pricing is something that I have discussed with the regional head of sales in this area. I have explained that you can't have a price of 25 million per year in this region, and in turn, have requested a lower price with different models for corporations. Unfortunately, I have not received a positive response so far.
With the separate backup product expected to be available in the next release, in a way, they have already done what I was expecting to offer to our customers. They have also announced some features that are really interesting. Right now, I'm waiting to get the new products in my hands.
My advice for anybody who is implementing Zerto is that if the system administrator has basic knowledge about networking and storage, then setting it up and deploying it will be easy, and not an issue at all. They just have to be careful and take the appropriate time to plan properly, especially in a complex environment.
In summary, this is a stable, enterprise-grade product.
I would rate this solution a nine out of ten.
We primarily use Zerto for replication and disaster recovery.
Zerto is good in terms of providing continuous data protection. We have databases that require point in time recovery capability and Zerto is very flexible in this regard, compared with some other solutions we use, such as Sybase Replication and Oracle Replication.
We do not yet use Zerto's long-term retention feature but we are planning to do so. Currently, we are exploring AWS Glacier for long-term retention, and we will see how Zerto can help with the process.
Using Zerto has helped to simplify our process. The DBA steps are deeply involved in the case of Sybase replication, which means that it takes a lot of technical skill, time, and effort to manage. Compared with that, Zerto is very user-friendly.
When we need to failback or move workloads, Zerto decreases both the number of highly skilled people involved and the time it takes to complete. For example, to do a command-line restore and recovery of Sybase involves pages of steps and it requires a talented DBA. However, with Zerto, we can take care of that with an intern. Only one person is involved in the process for either case, but with Zerto, fewer skills and experience in recovery are needed.
Fortunately, we have not yet been the victim of a ransomware attack. However, I am confident that Zerto can help, should that situation occur. Similarly, since implementing Zerto, we have not had any downtime. That said, we have simulated different scenarios and our results were good.
The most valuable feature is the point in time recovery. This allows us to recover at any point in time, up to a minute or so.
Zerto is pretty user-friendly. Normally, data recovery involves a lot of DBA skill, but with Zerto, it is point-and-click.
It is very important to us that Zerto provides both backup and disaster recovery in a single platform. Because of problems that people are facing, we needed to have recovery time objectives (RTO) and recovery point objectives (RPO) for the major cloud providers. This is the primary reason that we were looking for an up-to-date and current solution.
I am a little bit worried about how Zerto will work with large volumes of data, such as replication for big data and very large files. I have not tested it yet, so I can't say for sure whether it will choke or not.
The two large clouds that we use are AWS and Azure, and compatibility with these is always important for us.
We have been using Zerto for approximately five years. We are using one version back from the current one.
In terms of stability, so far it looks okay but I am not sure how Zerto will react to volume loads. We haven't had a chance to test that because we don't have such a large environment.
Scalability has been good but I have yet to see how large a file it can handle.
We have two DBAs using the product, and then we have some interns to help out.
Currently, it is running in a small network where it is backing up a couple of replicated environments. We may increase our usage in the future, as we are now just beginning to back up everything to AWS.
Zerto's technical support team is pretty knowledgeable.
Prior to Zerto, we were using Sybase replication. When Sybase was acquired by SAP, we began having trouble when we needed technical support. The reason that we started looking for a replacement product is that we used to contact technical support in California when we needed help. However, we now have to call Germany first, only to have them redirect the call to California. SAP is a mess.
I was involved in setting up the proof of concept, and I found that the initial setup was okay.
Once the PoC was complete, we went into small volume testing and then started using it after that. The deployment only took us a couple of hours.
A couple of people from our organization handled the deployment, and we had some Zerto technical reps available to answer questions. The Zerto staff are pretty knowledgeable and they answered the questions well.
Compared to the licensing fees with Oracle and SAP, we see a return on investment.
Price-wise, Zerto is fairly reasonable and I can't complain about it when we compare it against Oracle and SAP licensing.
We have not tried using any features that are outside of the standard licensing fees.
We looked into Oracle GoldenGate but it is pretty expensive and cumbersome. Sybase is better than Oracle in terms of pricing, but Zerto is cheaper.
We have not yet enabled data recovery in the cloud, but we are planning to use it. As of now, we haven't tested it. We always back things up but in terms of restoring and testing, we are behind.
My advice for anybody who is considering this product is that it is pretty user-friendly compared to Oracle and SAP. This is a good solution to start with. Once it has been implemented, I suggest moving to volume testing to see how well it handles large volumes of data.
We have never had a real situation where we were under the gun for the purpose of RTO and RPO recovery times. As such, I can't say for sure how it will behave in a real situation but we are satisfied with our tests.
I would rate this solution an eight out of ten.
We are using Zerto to facilitate cloud adoption in the organization. Our product teams are migrating their VMware workloads to the cloud, and Zerto is helping with that task.
Zerto provides us with continuous data protection and it is working well for us so far. It has matched our requirements, especially in terms of compliance and security considerations. The solutions in our environment are working together to make everything achievable in terms of different certifications.
We have only been using Zerto for less than a year, and have not had much time to consider long-term data retention. However, it is our intention to use this capability in the future.
Using this product has enabled our leaders to guide the business through our transition to the cloud. It has allowed us to implement a cloud-based disaster recovery solution.
Having a cloud-based disaster recovery solution, rather than a physical one, saves us in terms of resources. I don't have exact numbers in terms of money, but I can say that in the short time that we have been using Zerto, it has saved us between 10% and 20% resource-wise, including time.
We do not have any real use cases so far, but our model shows that we will need fewer people involved when we failback or move workloads. I expect that we will require 10% to 20% fewer resources in these situations.
So far, we have not had a use case where we had less downtime because of Zerto.
The most valuable feature for us is accelerating cloud adoption, as it helps provide greater speed for disaster recovery. Ultimately, this saves us time, as well as resources.
This product is easy to use. Initially, we had some issues and hiccups but we worked with the solution engineers and were able to rectify the problems and move forward.
When we initially set up the product, we didn't know about the exact features. Some of them were described using different terminology. It took some time to get to know the solution in general, and exactly what each of the features is used for. They seemed more like hidden features to us.
I have been using Zerto for close to one year.
The availability of Zerto has been good for us, so far. We have not experienced any issues with it.
Being a cloud-based solution, it can be scaled as per our requirements. I don't see any issues with it. We have three people who work with it, although not on a daily basis. They are technical analysts and product engineers.
They were more hands-on during the PoC and deploying it, and they will be involved if we have any issues.
Technical support from Zerto is good and they help us all the time.
The turnaround time is good, as well as the help that they give us in understanding and resolving problems.
We are still using our previous solution for backups. We are switching away from it because we will be able to take advantage of automation and use fewer resources.
In terms of cost, using Zerto saves us approximately 15% over our previous solution.
The initial setup is straightforward. It was easy to set up the PoC and the results were good. Setting it up took a few hours and when it came time to move to production, it was in terms of days.
My team was responsible for deployment.
We have seen a return on our investment in terms of time and resources.
We evaluated several other solutions prior to selecting Zerto.
We chose Zerto because it is more user-friendly, and better overall.
My advice for anybody who is considering Zerto is that it's user-friendly, easy to use, and easy to deploy. So far, Zerto has been working fine for us and my team has not had any complaints.
I would rate this solution an eight out of ten.
We are using Zerto as our disaster recovery solution for on-premises to Azure, and also from Azure to Azure between different regions.
At this time, we are only using it for DR. However, we will also be using it for data center migration.
I would rate Zerto's ability to provide continuous data protection a ten out of ten. The tool is very easy to use. It's also a very simple and very quick setup. The outcome from our setup showed that we had very low RPO and RTO. The interface is intuitive and as such, anyone can log in and figure out how to use the management utility.
Being able to achieve such a low RPO and RTO has significantly reduced our lengthy recovery times. For example, a recovery that previously took four hours is now completed in 40 minutes. Furthermore, it allowed us to complete the data center migration very quickly, with very little downtime.
Using Zerto has allowed us to reduce the number of people involved from a failover standpoint. There are only a few of us who can perform the failover and it is done with the click of a button. From an overall verification standpoint, the application owners are still required to verify.
We have saved money by performing DR in the cloud rather than in a physical data center for a couple of reasons. First, we saved money by not having to upgrade our hardware and pay for additional facility costs. Second, in Azure, we saved between 10% and 20% compared to Azure site recovery.
The most valuable feature is the disaster recovery capability.
The one-to-many replication functionality is helpful. While we were protecting our VMs in Azure, we were able to use the one-to-many feature to also replicate the same VMs to our new data center, in preparation for data center migration. Importantly, we were able to do this without affecting the DR setup.
When you're configuring the VPGs, they can improve the process by looking at the hardware configuration of the existing VMs and then recommending what they should be, rather than us having to go back and forth. For example, on the VM configuration portion of creating the VPGs, it should already figure out what sort of CPU, memory, and capacity you need, rather than us trying to write that down and then going in afterward to change it.
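Until the product pre-populates those values itself, one workaround is to pull the source VMs' hardware configuration programmatically and keep it on hand while filling in the VPG recovery settings. The sketch below uses VMware's pyVmomi SDK to collect CPU, memory, and disk capacity per VM; the vCenter address and credentials are placeholders, and this is only a convenience script under those assumptions, not a Zerto integration.

```python
# Hypothetical sketch: gather the CPU/memory/disk footprint of source VMs so a
# VPG's recovery settings can be pre-filled instead of copied over by hand.
# Uses the VMware pyVmomi SDK; the vCenter host and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def collect_vm_footprints(host: str, user: str, pwd: str) -> dict:
    ctx = ssl._create_unverified_context()           # lab use only
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.VirtualMachine], True)
        footprints = {}
        for vm in view.view:
            if vm.config is None:                    # skip VMs with no config
                continue
            hw = vm.config.hardware
            disk_gb = sum(d.capacityInKB for d in hw.device
                          if isinstance(d, vim.vm.device.VirtualDisk)) / (1024 * 1024)
            footprints[vm.name] = {
                "cpu": hw.numCPU,
                "memory_mb": hw.memoryMB,
                "disk_gb": round(disk_gb, 1),
            }
        return footprints
    finally:
        Disconnect(si)

if __name__ == "__main__":
    specs = collect_vm_footprints("vcenter.example.local",
                                  "administrator@vsphere.local", "password")
    for name, spec in specs.items():
        print(name, spec)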
The logging could be a lot better from a troubleshooting standpoint. If the log was more detailed and more user-friendly, we wouldn't have to make the calls to the support to try and figure out where the problem lies.
They could improve on how many machines the management server can handle for replication.
We have been using Zerto for approximately two years.
Stability-wise, it's pretty good and we've been happy so far. We've had a couple of issues here and there, but nothing that wasn't easily resolved.
The scalability is pretty good. If you need to scale then you can always add more appliances on the Azure side, which is very easy to set up. For the on-premises side, you only need one management server.
We are not a very large environment; we have approximately 400 servers, and then we are protecting about 125 VMs. In terms of users, we have close to 3,000 full-time employees and then about 25,000 contractors. Being a recruiting company, we have a large base of contractors.
The site reliability engineers are the ones that use Zerto more often, and there are three or four of them.
The technical support is pretty good. The level-one has a lot of knowledge and because we've been using the product for a while now, if we get to the point of calling support, usually we have everything ready to go. We explain the situation to level-one support and we can always escalate easily to the next engineer.
Prior to using Zerto, for our on-premises environment, we did a typical database replication from our production site to a secondary site in another city across the country on the West Coast. We also replicated the storage and application code, and it was a very lengthy process. One of the environments took as long as four hours.
We switched primarily for the time savings, although there was also the cost factor. In order to meet the growing demand of our business in IT, we would have had to upgrade all of our hardware, as well as pay extra for facility costs. As such, it did help out on both sides of things.
Also, the process itself is a lot simpler. The old way would have required coordinating five or six different teams to do the individual parts, whereas this automates everything for you at the server level.
We use a different product as our backup solutions. Zerto is strictly for DR and data center migration.
To set up the initial environment, it took about an hour. This included setting up the appliance, making sure it was added to the domain, and things like that. Creating all of the VPGs took probably another couple of hours.
The strategy was that we already had everything ready to go, which included our server list and all of the VPG names. If you have that, you could probably have everything completed in half a day, or a day, from a setup standpoint. Of course, this is depending on how large of an environment it is, but for us, we set up five or six environments and it took us approximately half a day.
We had assistance from the sales engineer.
When we did the PoC, they showed us everything. Once we purchased the product, we used Zerto analytics to determine how many appliances we would need on the Azure side. Then, using that, we were able to break up the VPGs between the different sites.
We have an enterprise agreement that combines all of the features, and we have approximately 250 licenses. There are two different licensing models. The one we purchased allows us to support Azure, as well as the on-premises jobs. This was a key thing for us and, I think, that is the enterprise license. They have a license for just their backup utility, and there's the migration option as well, but we went with the enterprise because we wanted to be able to do everything going forward.
Zerto needs to improve significantly on the cost factor. I know friends of mine in other businesses would not look at this when it's a smaller shop. At close to $1,000 a license, it makes it very hard to protect all of your environment, especially for a smaller shop.
We're very lucky here that finances weren't an issue, but it definitely plays a factor. If you look at other companies who are considering this product, it would be very expensive for somebody who has more than 500 servers to protect.
The bottom line is that they definitely have to do better in terms of cost and I understand the capabilities, but it's still quite pricey for what it does. It would make a huge difference if they reduced it because as it is now, it deters a lot of people. If you've got somebody who's already using VMware or another product, the cost would have to be dropped significantly to get them on board.
We did evaluate other vendors, but this was the only tool that was able to fully automate the conversion from on-premises VMware to Azure. This was important because our goal, or our DR objective, was to set up DR in Azure. Every other tool required having some sort of intervention from us to convert them to Azure format.
I don't recall all of the tools that we looked at, but I think we looked at VMware SRM and also a product from EMC, from a replication standpoint. Ultimately, from a strategy standpoint, this was the only thing that was really capable of doing what we wanted.
My advice for anybody who is interested in Zerto is definitely to do a PoC. Run it against your environment to do a thorough comparison. That is the best approach: instead of just picking a product, let the PoC work through the different options. For example, whether you are doing on-premises to on-premises, or on-premises to the cloud, this product can do it, but you'll only see the results that matter if you run it against your own environment.
Overall, we are very happy with this product.
I would rate this solution a ten out of ten.
We primarily use Zerto for backing up our databases.
We are heavily invested in database technology. We use SQL databases such as PostgreSQL and MS SQL, and we also work with NoSQL databases. Our use cases rely mainly on databases for financial vendors, and most of the time we perform day-to-day operations related to finance and accounting.
We have been using the data retention functionality for a long time, and whenever there is a failure and the system goes down, we recover the data from that particular point-in-time snapshot.
We also require security, as it is one of our major concerns. Ultimately, we align these two requirements together.
We are deployed in AWS, although we are also deploying in GCP and plan to do so with Azure as well.
Zerto provides us with continuous data protection that is reliable. It is convenient to use because the API allows for seamless integration when performing our day-to-day operations.
Currently, we do not have any long-term data retention activities, and it is not one of our core operations. However, in the past, we did have several such use cases.
Using this solution saves us time because we capture the volumes and snapshots and are able to perform operations on the delta. This is an important benefit to us because we are able to deploy everything into production and then continue to take the backups and snapshots from there.
Another time-effective benefit is that once we are fully backed up, we are able to run Lambda functions for our use cases. This saves us a lot of time.
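As a rough illustration of that pattern (post-backup processing running against snapshots rather than the production volumes), the sketch below uses boto3 to look up the most recent completed EBS snapshot for a volume and hand its ID to a Lambda function. The volume ID and function name are placeholders, not details taken from this review.

```python
# Hypothetical sketch of the pattern described above: find the most recent
# completed EBS snapshot for a volume and pass its ID to a Lambda function,
# so post-backup processing runs against the snapshot instead of production.
# The volume ID and function name below are placeholders.
import json
import boto3

ec2 = boto3.client("ec2")
lam = boto3.client("lambda")

def latest_snapshot_id(volume_id: str) -> str:
    """Return the ID of the newest completed snapshot for the given volume."""
    snaps = ec2.describe_snapshots(
        Filters=[{"Name": "volume-id", "Values": [volume_id]},
                 {"Name": "status", "Values": ["completed"]}])["Snapshots"]
    if not snaps:
        raise RuntimeError(f"no completed snapshots for {volume_id}")
    return max(snaps, key=lambda s: s["StartTime"])["SnapshotId"]

def process_latest_snapshot(volume_id: str, function_name: str) -> None:
    """Invoke the post-processing Lambda asynchronously with the snapshot ID."""
    snapshot_id = latest_snapshot_id(volume_id)
    lam.invoke(FunctionName=function_name,
               InvocationType="Event",
               Payload=json.dumps({"snapshot_id": snapshot_id}).encode())

if __name__ == "__main__":
    process_latest_snapshot("vol-0123456789abcdef0", "post-backup-processing")
```

Keeping the processing on the snapshot side is what preserves the low latency in production that the reviewer describes next.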
In some instances, Zerto has saved us time and on the number of people involved during failback. The number of people that are involved depends upon how critical the failure is. Any time there is a failure, we have to work from the most recent backups. For example, if the incident happens at 9:00 PM and there is a snapshot that was taken at 8:00 PM, there is one hour of work to make up for. This is much easier and quicker than having to look back at the logs for the entire day.
On a day-to-day basis, using Zerto saves us approximately 20% to 30% in terms of time. Overall, considering both our test and production environments, using Zerto benefits us with an approximate time savings of 60%.
We are using Zerto for DR in the cloud, and it has saved us money over using a physical data center. In a cloud-based deployment, the cost is quite a bit less compared to a physical environment. Also, because the cloud is a pay-as-you-go model, and you don't need the service all of the time, the paid resources are not wasted. I estimate that we save thousands of dollars per year in operations costs.
With our backups fully in place, in the cloud, Zerto has helped us reduce downtime.
The most valuable features for me are the fast performance and seamless integration. The performance is one of the main features and the integration has helped me a lot.
When we have a system that is being fully replicated, we also get snapshots. Then, we perform our activities on the snapshots only, which reside on the cloud-based volumes. This means that our production environment is not affected.
We have low latency in production because most of the things we do are on the cloud. When we have the backup, we just start to perform the data operations and with the help of Zerto, we can do this quite efficiently.
Zerto is quite easy to use. With the click of a button, I have been able to use it to do what I need. Furthermore, any end-user that I have worked with has easily been able to make use of its functionality.
Some of the integrations with our internal tools, in particular, company-specific ones, do not work. In cases like this, we have to ask for additional support. This is an area that has room for improvement.
If the API integration worked more efficiently then that would be an improvement.
We have been using Zerto for between two and three years.
Zerto is a stable and reliable product. We have not experienced any anomalies in the tool. For all our use cases and workloads, we rely on it and have found that everything can be done easily.
We have not had problems when we want to redeploy a number of things, so scalability has not been an issue.
We have between 30 and 40 users, including engineers, architects, and management. We are a growing and expanding company, and our workload increases from day to day. I expect that our usage of Zerto and other solutions will increase.
We often reach out to technical support, and it is good. We have a lot of use cases that we need support for because we don't always have a sufficient solution on our own.
The initial setup was straightforward, although we did have some problems. For example, there were instances where we could not integrate with our internal tools and we were not able to solve the problem. We looked at the FAQ and reached out to customer support to ask what the ideal solution would be.
Overall, it took between six and nine months to deploy.
We deployed Zerto using our in-house team.
We have seen ROI in terms of time savings, as well as in other areas.
We subscribe to their annual license package and we have tier one support with them. There are no costs in addition to this.
We have evaluated other tools including Veeam and Veritas. There were several factors, including cost, that led us to proceed with Zerto.
My advice for anybody who is implementing this product is to have things properly architected in advance. Otherwise, the implementation will be a hassle. Once the design is complete, if they need to change it then it will be time-consuming.
I would rate this solution a nine out of ten.
I am a cloud provider and I use Zerto to provide disaster recovery solutions for my clients.
Recently, we had an issue where one of our customers using Oracle Server experienced corruption in a database. The customer didn't know when the issue started, so we used Zerto. We started a live failover for the machine, and we were able to determine the timestamp for the start of the issue. Prior to this, Oracle engineers had tried for four hours to fix the database without any luck. Ultimately, we were able to save the customer's data by using Zerto.
A few of my customers are using file-level restore but the majority of them are using the replication features for disaster recovery.
Zerto offers features for long-term data retention; however, we don't use them. The longest time that we back up data for is 30 days. At this time, I don't have any request for this from my customers, although in the future, if a customer asks for it, we can provide it.
Zerto provides our customers with the ability to continue work, even if something happens to their office or data center.
We have a customer with an on-premises data center that replicates the environment to our cloud. One day, this customer had a water pipe burst in his data center. The entire data center was flooded and everything stopped working. We did a live failover and from that point, he could continue working but it was running from the data center in our cloud, instead. Zerto definitely saved us time in this data recovery situation.
It took the customer between four and five days to return everything back to normal onsite. During that time, he spoke with us at 9:00 AM on the first day, and after an hour, his company resumed work with our help. This reduced his downtime to one hour from approximately five days.
Performing a failback using Zerto is pretty much the same in terms of how long it takes and how many people we require. The customer decides when to do the failback; for example, it can be done during the night. We replicate the data at their chosen time, which avoids issues for them because they don't operate during those hours.
In a situation like a burst water pipe or a database becoming corrupt, Zerto doesn't help to reduce the number of staff involved. The reason is that when something affects the company, management, including the CEO, has to be involved. They do not deal specifically with operating Zerto but rather, they wait for things to develop. The good part is that they know that with Zerto, they have a solution, and they don't need to figure out what to do.
In terms of the number of people it takes to recover data in cases like this, there is typically one person from our company involved, and one person from our customer's company.
My customers save money using Zerto and our facilities, rather than a physical data center because they do not have to do any maintenance on the backup equipment. It is also much easier to pay one company that will do everything for them.
Using Zerto makes it easier for my clients, giving them time to work on other things. The main reason is that they don't have to maintain or upgrade their environment. Not having to implement new recovery solutions as their needs change, saves them time.
The most valuable feature is the ability to do disaster recovery.
Zerto is very user-friendly and engineer-friendly, as well. When we need to create a new Virtual Protection Group (VPG) for replication, then it is done with just a few clicks of the mouse. We can see all of the environments and we don't need to install agents on the customer's VMs.
The live failover feature is very helpful.
With regards to providing continuous data protection, it's great. Most of the time, it's about five seconds for replication.
The monitoring and alerting functionality need to be improved. Ideally, the monitoring would include the option for more filters. For example, it would be helpful if we could filter by company name, as well as other attributes.
I have been using Zerto for almost three years.
Zerto is a pretty stable product. We have had issues from time to time over two years, but usually, it is stable. When we have trouble then we contact their excellent technical staff.
I have quite a lot of customers that are using Zerto for disaster recovery and it is simple to scale. Our intention is to increase our usage by bringing on more customers that will replicate from their on-premises environment to the cloud.
In my company, there are five or six people who work doing the backup and recovery operations. On the client's side, they normally have one or two people that are in charge of maintaining the data center.
The size of the environment you need will depend on how many VMs you replicate. For example, if you are replicating 100 VMs then you can use a small environment. However, if you are replicating 1,000 or more VMs then you will need a stronger and larger environment, with more storage and more memory.
The technical staff is excellent and we contact them whenever we need something.
We had a customer that replicated his VM and for some reason, when we tried to do a failover test, the VM came back with an error saying that the network card was disconnected. We spoke with the Zerto technical staff and they actually implemented an ad-hoc fix for our environment. In the next Zerto version update, they released it for all their customers.
The technical support is definitely responsive and they explain everything.
I began using Zerto version 6.5 and am now using version 8. We did not use a different solution for disaster recovery beforehand.
We use Veeam for backup tasks. We looked at Veeam CDP to compare with Zerto, and Zerto is definitely better. It is more user-friendly, agentless, and the technical support is better.
The initial setup is straightforward and pretty easy to complete. It takes about an hour to deploy. During the process, you set up the Zerto server to see the whole environment. You then install VRAs on all of the hosts. In general, the management server is pretty user-friendly.
The implementation strategy changes depending on the customer. A few customers required a more extensive setup because one had an IPsec connection and a few were using point-to-point connections; otherwise, the approach is the same. With Zerto, the customer decides which VMs they want to replicate, and we create the VPGs based on that. First, we replicate the DCs, the domain controllers, then the infrastructure servers, then the database servers, and last the applications.
During setup, one person from our company normally works with one person from our customer's side. Only a single person is required for maintenance.
My impression is that Zerto is more expensive than other solutions, although I don't have exact numbers.
We evaluated CloudEndure and we also had Double-Take, but neither of these solutions worked well. These solutions were based on agents, which affected the customers' server performance.
In terms of usage, Zerto is a different level of experience when compared to other products. It is easier to set up and use.
With other solutions, we need to install software on the customer's server and then reboot, whereas with Zerto, we don't need to do these things. In fact, there is no downtime on the customer's side. With the other solutions, depending on the customer's environment, post-installation downtime could be as little as one minute or more than an hour.
In situations where downtime is expected, and there is an important application like a database running, these periods need to be scheduled. Normally, downtime will be scheduled at night, after business hours. Although there may not be a disruption in work, it is an extra effort that needs to be put into the other products.
Looking ahead, I have seen that the next version of Zerto will support Salesforce replication. This could be something that is useful for my customers.
The biggest lesson that I have learned from using Zerto is that every organization should have a disaster recovery plan. My advice for anybody who is considering this product is to calculate how much downtime or a disaster would cost, and then compare it to the cost of Zerto. Once this is done, people will opt for a disaster recovery solution.
I would rate this solution a ten out of ten.
Right now, everything is on-prem including LTR. We are looking at adding the Azure features but we're not quite there yet.
We purchased Zerto to replace our legacy backup system that still had disks, an Archiver Appliance, and everything like that. We wanted to do something that was diskless but still gave us multiple copies, so we are utilizing both the instantaneous backup and recovery, as well as the LTR, Long Term Retention, function. We do our short-term backup with normal journaling and then our longer-term retention with the LTR appliance, which goes to dedicated hardware in one of our data centers.
We use Zerto for both backup and disaster recovery. It was fairly important that Zerto offers both of these features because Unitrends did provide the traditional backup piece. They also had another product called ReliableDR, which they later rolled into a different product. Unitrends actually bought the company. That piece provided the same functionality as what Zerto is doing now, but with Unitrends that was separate licensing and a different management interface. It wasn't nice to have to bounce between the two systems. The ability to do it all from a single pane of glass that is web-based is nice.
It's definitely not going to save us money. It's a peace-of-mind thing, that we have another copy of our data somewhere. Our DR site is approximately 22 miles away, and the likelihood of a tornado or something devastating both communities where our facilities are based is pretty slim. It's peace of mind, and it does not require additional storage space on-prem. We know that data at rest is not free in Azure. We get good pricing discounts being in education, but it definitely won't save money.
Zerto was fairly comparable to what Unitrends was offering with multiple products. We didn't gain a ton of extra features. If anything, in the very near future, it will give us the ability for Cloud backup and retention to have some of that sitting out in the Cloud as an offsite backup. We have a primary site, a backup site, and a recovery site. We have multiple copies already, but we want to have one that's not on any of our physical facilities so we will be setting that up shortly. We just need to get our subscriptions and everything coordinated and up to par. That would be the main improvement that it's going to provide us. But we're not quite there yet.
Zerto has reduced downtime. Speaking specifically to file restores, it definitely restores things much quicker. Instead of waiting a half-hour to get a file restore done, it's a matter of five minutes or less, so people can keep rolling much quicker than they did with Unitrends. Other than that, I can't say there are any huge differences.
The difference in downtime would cost my organization very little. We're a small technical college, so we're not making or losing thousands or millions of dollars if something takes five minutes versus an hour and a half. Higher ed is a different breed of its own.
In terms of the most valuable features, having the failover tests where you can see what your actual RTO and RPO would be is really nice, especially for the management level. I also really like how easy it is when I need to do a file or folder restore off the cuff; usually it takes me less than five minutes, including mounting the actual image. With Unitrends it was a similar process, but if that backup had aged off the system, then you had to go to the archive, find the right disks, load them in, and then actually mount the image. Our main data stores are close to two terabytes, so it would take 15 to 20 minutes just to mount the image. With Zerto, I don't think it has taken longer than a minute or a minute and a half to mount any image we've needed to go back to a restore point on.
With Unitrends, some could have taken a half-hour. I'm the only network administrator here, so it usually was a multitasking event where we would wait for it to load. I would take care of a few other things and then come back to it.
Switching to Zerto decreased the time it took but did not decrease the number of people involved. It still requires myself and our network engineer to do any failover, back and forth, because of our networking configuration and everything. I know that Zerto allows us to RE-IP machines as we failover. However, because of the way our public DNS works and some of our firewall rules, we have purposely chosen not to do that in an automated fashion. That would still be a manual operation. It would still involve a couple of people from IT.
Zerto does a pretty decent job of providing continuous data protection. The most important thing that I didn't clearly understand upfront was the concept of journaling and how it differs from traditional backup. With traditional backup, if you set retention for seven days, you kept seven days of data regardless of what was happening. With journaling, coupled with the limits on journal size, if you don't configure it correctly you can actually have less data backed up than you think you do. I also found out that an event such as ransomware, which suddenly throws a lot of IOPS and a high change rate at the journal, can age it out very quickly and leave you unable to restore if it's not set up properly.
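To make the journal-sizing point concrete, a rough back-of-the-envelope estimate is easy to script. This is only an illustration with made-up numbers and a made-up headroom factor, not Zerto's official sizing guidance.

```python
# Rough journal-sizing illustration: required journal space is roughly
# the data change rate multiplied by the retention window, plus headroom.
# The rates and the 25% headroom factor are made-up assumptions.

def journal_size_gb(change_rate_mbps: float, retention_hours: float,
                    headroom: float = 0.25) -> float:
    """Estimate journal size in GB for one protected VM or VPG."""
    seconds = retention_hours * 3600
    gb = (change_rate_mbps * seconds) / 8 / 1024  # Mbit/s -> GB
    return gb * (1 + headroom)

# Normal day: ~5 Mbit/s of changes with 7-day retention.
print(round(journal_size_gb(5, 7 * 24), 1), "GB")

# Ransomware-style burst: ~200 Mbit/s sustained needs ~40x the space,
# which is why a burst can age out an undersized journal so quickly.
print(round(journal_size_gb(200, 7 * 24), 1), "GB")
```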
We have requirements to keep student data and information for seven years. We need long-term retention for those purposes. We don't typically need to go back further than 30 days for file restores and everything. There has been the occasion where six months later, we need to restore a file because we had somebody leaving the organization or something like that and that folder or whatever wasn't copied over at the time they left.
Zerto has not saved us time in a data recovery situation due to ransomware because we did not have it correctly configured. When we had an event like that, we weren't able to successfully restore from a backup. That has been corrected now. Now that it is configured correctly, I anticipate that it will save us weeks of time. It took almost two weeks to get to a somewhat normal state after our event. We're still recovering somewhat from rebuilding some servers and stuff like that. To get our primary data and programs back up and running to a mostly normal function, took around two weeks.
We also expect that it will reduce the number of staff involved in that type of data recovery situation. We ended up having to hire one of our trusted partners to come in and help us rebuild and remediate. There were at least a dozen staff, including our own IT staff, which was another 10 people on top of that. Provided that we now have this set up correctly, it would really drop that down to maybe two or three people.
In terms of improvement, it would be helpful if the implementation team had a better best practices guide and made sure things like the journaling are very clearly understood.
Speaking directly to our incident, we did have professional services guide us with the installation, setup, and configuration. At that time, there was no suggestion to have these appliances not joined to the domain or in a separate VLAN from our normal servers and everything. They are in a completely isolated network. The big thing was being domain-joined. They didn't necessarily give that guidance. In our particular situation, with our incident, had those not been domain-joined, we would have been in a much better place than what we ended up being.
I have been using Zerto for about two years.
It is quite stable. I haven't had system issues with it. The VRAs run, they do their thing. The VPGs run, so as long as we're not experiencing network interruptions between our two campuses, the tasks run as they should. In the event we do have an interruption, they seem to recover fairly quickly catching up on the journaling and stuff like that. It's fairly stable.
Scalability is pretty good. We have 50 seats, so we will just be starting to bump up against that very shortly. My impression is that all we need to do is purchase more licenses as needed, and we're good to expand as long as our infrastructure internal can absorb it.
I just recently learned from Zerto Con that they are coming out or have just come out with a Zerto for SaaS applications, which gives the ability to back up Office 365 tenants or Salesforce tenants. I am very interested in learning about that. We have been researching and budgeting for standalone products for Office 365 and Salesforce backups. From my understanding, those products would be backed up from the cloud to the cloud so that it wouldn't have impacts on our internal, long-term appliance, or any of our storage internal infrastructure. That's very appealing.
It will depend on costs. If it's something that I can't absorb with the funding I have already secured for Office 365, then it would have to be added to next year's budget, because we run from July 1st to June 30th and our capital budgeting timeline has already passed.
For the most part, the technical support is pretty decent. I've only had to open one or two tickets and the response time has been pretty good. Our questions were answered.
We previously used Unitrends. We switched solutions because we were at the end of our lifecycle with the appliances we had. At that time, Unitrends was not quite as mature with the diskless and cloud-type technologies as Zerto was. We were pursuing diskless where we had to rotate out hard drives for archiving. We wanted to get rid of that. That brought us to Zerto and it was recommended by one of our vendors to take a look at it.
Unitrends had replaced Commvault.
The initial setup was fairly straightforward; deploying the VRAs to the VMware infrastructure and so on was point, click, and let it run. It was fairly quick. The VRAs took a couple of minutes each, so that wasn't bad at all. Setting up the VPGs is quite simple. There is a little bit of confusion where you can set your defaults for the journaling and then modify individual VMs after the fact; if you want different journal sizes for different VMs in the same VPG, there are a couple of different spots you can tweak. The setup and requirements of the LTR were a little bit confusing.
We purchased six or eight hours of implementation time but that was over multiple calls. We stood up some of the infrastructures, got some VPGs together, and then they left it to me to set up some other VPGs. Then we did a touch base to see what questions I had and things like that. We had six or eight hours purchased but it was spread over multiple engagements.
For the most part, only I worked on the deployment. Our network engineer was involved briefly just to verify connectivity via the VLANs and firewalls. Once we had established a connection, he was pretty much out of it.
I'm the only one who uses it strictly for our district backups. We're a small college. Our IT programs, HR, or business services, don't have their own separate entities. It's all covered under the primary IT department.
I don't know that we've saved a ton by replacing our legacy solution with Zerto. I think there's a little less overhead with it. Setting up the VPGs, the protection groups, and everything is a little bit easier and the file restores go much quicker. Fortunately, we haven't had to perform full system restores, but I did not need to do that with Unitrends either. It's usually a folder or a file here and there. We're not really intense on restoring. It has saved a little on management, but not a ton.
Pricing wasn't horrible. I can't say that it was super competitive. We definitely could have gone with a cheaper price solution but the ease of use and management was really what won me over. Being the only network administrator, I don't have a ton of time to read through 500-page user manuals to get these things set up on a daily basis. I needed something that was very easy to implement and use on a daily basis. In the event I'm out of the office, it would be nice to have simple documentation so that if somebody needs a file restore while I'm gone, it can be handed off to somebody who is not a network admin as their primary job.
I have not run into any additional costs. Obviously, if you're going to utilize Azure for long-term retention it is an additional cost, but that's coming from Microsoft, not Zerto. To my knowledge, there is no additional licensing needed for that, that's all included in the product.
Commvault was another solution we looked at even though it was against my better judgment. We looked at Veeam and Rubrik as well.
In terms of ease of use, Veeam was pretty similar but at the time we still had some physical servers that we no longer have now. We are all virtual now. Veeam couldn't accommodate that, as I understood. I liked the features of Zerto and the ability to get the RTO and RPO reports and see where we're at. The ease of file restores was really nice.
My advice would be to make sure that you clearly understand what you require. You must have retention and recoverability. Make sure that your journal configurations correspond to accommodate that in an event like ransomware or something like that, that a high change rate can happen. Also, utilize long-term retention for instances like that.
I appreciate the continuing education that they provide. There is Zerto Con and they have different customer support webinars. They do the new product release webinars and stuff like that, where they're very open on what features they're adding, what they've released, and what improvements they're doing. Whereas it seems like most companies, say, "Okay, we have an update available. Here are the release notes." And, it's up to you to go through that.
I like that Zerto takes the time to sometimes do live demos. We're migrating from 8.0 to 8.5. We're going to do it in a live environment and show approximately how long it takes and all the steps to go through it. Make sure you check this box if you're upgrading from this. I find that very helpful. I'm a visual learner, versus learning from reading. Seeing some of those step-by-step upgrades, releases, and feature demonstrations is very helpful.
I would rate Zerto an eight out of ten.
We are protecting 91 terabytes of data consisting of 200 virtual machines across 96 protection groups. We currently have 300 licenses, and Zerto provides protection for our critical production systems with a 24-hour journal. We utilize another platform to back up our entire enterprise as well as to handle retention for a longer period of time.
We limit Zerto access to our platform engineers, so either our Linux administrators or our Windows administrators use the solution. When a virtual machine is tagged as one that should be replicated to a target data center, they have the authority to create a VPG and make sure it is protected via Zerto.
We have an annual DR test requirement. Initially, we used Zerto for testing a subset of our production systems and generated reports that would validate that the tests were successful. We leveraged Zerto to test failover for over 200 VMs by running it in the test scenario. We ran it for a couple of days and tested connectivity to verify that all the virtual machines were up and running and that disk integrity was fine.
Over the years, we have moved from an offline test scenario to an actual real-life failover for subsets of applications. For a couple of years now, we have failed over applications into another data center and have run production from there on a small subset. Our vision going forward is to avoid these offline once a year tests and to periodically move applications from one data center to another in a real-time testing scenario.
We currently have a production data center and a co-location that we are leasing, so we have two locations where we can fail over. We do have a small cloud presence in Azure, and we have started a small cloud presence in AWS as well, but we are not running any IaaS virtual machines in those clouds. There have really been no cost savings at all in the cloud, so we've brought those workloads back on-premises.
Prior to Zerto, we used a third-party offsite facility and a team of 25 individuals to restore over 300 VMs onto our network, to prove annually that we could recover our data. Since adopting Zerto, we've reduced all of that DR testing to about four team members. We've significantly reduced our costs by staying on-premises, and reduced the time involved by needing only four individuals instead of a whole team of 25.
The first benefit, right out of the gate, was being able to duplicate a subset of our production environment and test it in an offline network scenario. That initial test was fantastic, as was all of the reporting to prove that we had done those tests. Another big attraction is the near-zero RPO. A lot of other products have an RPO of minutes, a half-hour, or an hour. We have proof indicating that Zerto is near zero on RPO, and a matter of minutes as far as the RTO is concerned. That's another attractive offering: you can actually fail something over and bring it back up in a target location in a matter of minutes, meaning very little data loss and a very short recovery time. It's fantastic.
The main reason we love Zerto is that we have a VMware environment. We leverage NSX-T, which gives us the ability to have a shared address space across two physical data centers. By using Zerto with NSX-T, we can fail over applications without re-IPing or anything like that. It's a matter of literally shutting down the source side and powering up the other side in minutes. It works fantastically, and that is definitely our future DR strategy as well as our future failover testing approach.
I haven't seen any significant features or improvements in the past few major version releases. The only challenge I have with Zerto today, and over the past few years, is that it seems like a lot of development and effort is going toward the cloud. Since we're utilizing the solution with an on-premises hypervisor, it seems like development for our needs is kind of stuck.
The other thing I wish they would do is develop their PowerShell module to be more robust. Instead of having to rely on the API, it would let you create VPGs, delete VPGs, modify VPGs, and so on. This would ease the automation of deployment and decommissioning, and I'd really appreciate that.
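For comparison, automating VPG creation through the REST API today looks roughly like the sketch below. The endpoint paths, payload fields, and two-step create-then-commit flow are assumptions based on Zerto's published API and need to be checked against your version; this is not our production tooling.

```python
# Hypothetical sketch of creating a VPG through the ZVM REST API's
# vpgSettings workflow (create a settings object, then commit it).
# Endpoint paths and payload fields are assumptions to verify against
# your Zerto version's API documentation. `token` comes from the usual
# /v1/session/add call.
import requests

def create_vpg(zvm: str, token: str, vpg_name: str, recovery_site_id: str) -> str:
    headers = {"x-zerto-session": token, "Content-Type": "application/json"}

    settings = {
        "Basic": {
            "Name": vpg_name,
            "RecoverySiteIdentifier": recovery_site_id,
            "JournalHistoryInHours": 24,
            "RpoInSeconds": 300,
        }
        # VM, network, and datastore settings would also be filled in here.
    }

    # Step 1: create a VPG settings object; the response is its identifier.
    resp = requests.post(f"{zvm}/v1/vpgSettings", json=settings,
                         headers=headers, verify=False)
    resp.raise_for_status()
    settings_id = resp.json()

    # Step 2: commit the settings object, which actually creates the VPG.
    requests.post(f"{zvm}/v1/vpgSettings/{settings_id}/commit",
                  headers=headers, verify=False).raise_for_status()
    return settings_id
```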
I implemented the solution back in the fall of 2016.
Zerto is very stable and requires little maintenance. We probably update Zerto twice a year. There have been no real outage issues that we've encountered. There have been a few times when we've had issues with VMware which, in turn, caused a hiccup in Zerto, though Zerto was a symptom and not the root cause.
Zerto provides continuous data protection and we've had very little disruption. We've gone through multiple versions, starting with version six something, and we have gone through the various upgrade cycles without any major issues.
Zerto seems very scalable. I can't really comment further on that because we've only had two license upgrades from 200 to 300 virtual machines. I haven't really tested this on a very large scale like for over a thousand VMs or anything close to that. From what we've utilized it has scaled, but I'm really not a good example because we manage a smaller subset of virtual machines.
As far as our key-protected systems, we're at the 280 marker so we don't see ourselves growing any more. License increments are 25 or 100 and if we did grow, obviously, we would increase our license count. Although we've had 300 licenses for a few years now so we've kind of found our sweet spot.
There's been a couple of support calls along the way, but support has been very helpful and very responsive in correcting our issues.
Back in 2016, we conducted a 30-day POC with Zerto and that was enough time to fully implement the solution and even utilize it. We were really impressed that we could actually use Zerto from start to test within a 30-day timeframe.
We found the setup and deployment process to be very simple and not complex at all. We installed Zerto on-premises with just regular employees. It was a team of two engineers and a database administrator and that was it. After a little bit of research on the prerequisites we literally ran the installation setup. It was a breeze and there were really no custom tweaks or anything that had to be done post-setup.
The solution is very user intuitive, from the initial setup of the application and installation all the way to actually getting data in there by creating virtual protection groups and populating VMs.
As far as our IT budget is concerned, Zerto is a little bit expensive. But as far as the value that it provides, it is completely justified by all of the savings. Reducing the labor of DR failover exercises or its reporting functionality for our audit teams has saved a lot of soft dollars. Also, failing over our workloads to another data center and proving that it does work is priceless. On the other hand, the price consideration is why we're only protecting a subset of our virtual machines, those that are deemed DR critical, versus protecting everything.
We did evaluate a few different products before selecting Zerto. We looked into Commvault and Veeam. We also looked into VMware's Site Recovery Manager. Having a near zero RPO and a very short RTO was the main difference between Zerto and the products we evaluated.
The biggest advice would be to compare Zerto to another product side-by-side and actually do a demo of both products. And then at that point, post-demo, the decision will be very easy.
On a scale of one to 10, where 10 is best, I would rate Zerto a nine plus. Unfortunately, no product walks on water, so they're never going to get a 10. There's room for improvement everywhere for sure, but I'm extremely happy with the product.
Our primary use case is for disaster recovery and migrations. We have two primary sites that we replicate to. If there are on-prem clients we replicate back and forth between those two and then we replicate our off-prems to them as well. We use on and off-prem as well as Azure.
We actually have rescued a couple of clients that have had disasters on-prem due to weather or data center outages. One of our clients had left us for a cheaper provider and before our disks and retention points expired out, the cheaper provider had a flood in their data center. We were able to restore the client using the old restore points back into our data center, which was a huge win for us because it was a fairly large client. That client has worked with us ever since then.
Zerto saves us time in data recovery situations due to ransomware. We've had a couple of ransomware incidents with clients in the last year and a half. I've worked on ransomware issues before when Zerto wasn't involved and it was much more complicated. Now, with Zerto, it's at least 50 to 75% faster. We're able to get a client up and running in a matter of an hour, as opposed to it taking an entire day to build or locate the ransomware and rebuild from shadow copies or some other archaic method.
It decreases the time it takes when we need to failback or move workloads because we use disaster recovery runbooks that we work with our clients to maintain. Anybody at our company, at any given time, can pick up this runbook and go with it so we can assign one or two techs to the incidents. They work with the client and get them back up and running quickly. We're 50 to 75% faster. It's now a matter of hours as opposed to days. In an old disaster recovery situation, it would be all hands on deck. With Zerto, we can assign out a technician or two, so it's one or two techs as opposed to five to 10.
There has been a reduction in the number of people involved in the overall backup. We have the management fairly minimized. There are only two primary subject matter experts in the company, one handles the back-end infrastructure and one handles the front-end, that's pretty much it. We're a fairly large company, with 500+ clients, so it's been stripped down, so to speak.
From what I've seen, we do save money with Zerto, especially for long-term retention like the Azure Blob Storage. We had a recent incident where a client had to go back to a 2017 version of a server that was around three to four years old, just to find a specific file, and it only took us an hour to locate the proper retention point and mount it for him and get him back what he needed.
The testing features are the most valuable features of this solution. We use the failover test feature not just for testing failovers and disaster recovery, we've also had clients use it for development purposes as well as patching purposes to test patches. We can failover the VM and then we can make any changes we want without affecting production. It's a nice sandbox for that usage.
We also use it for migrations into our data center. We bring in new clients all the time by setting up Zerto in their on-prem and then replicating to wherever their destination will be in our environment.
We've also used Zerto to migrate to the cloud.
Zerto provides continuous data protection. I'd give it a 10 out of 10 as far as that goes. The recovery points are very recent, generally five to 15 seconds of actual production. It's very convenient.
It's also fairly simple to use. Zerto does have some quirks but they have worked those out with recent releases. They're really good about listening to feature requests. We're actually a Zerto partner at our company, so they take our feature requests pretty seriously. Zerto is one of the easiest disaster recovery products I've used. We use Veeam as well which is much more complicated to set up in the back-end.
Zerto seems to keep up with what I think needs to be improved pretty well.
One improvement that could make it easier would be to have an easier way to track journal usage and a little bit more training around journal sizing. I've done all the training and the journal is still a gray area. There is confusion surrounding how it's billed and how we should bill clients. It would be easier if it had billing suggestions or billing best practices for our clients to make sure that we're not leaving money on the table.
I have been using Zerto for three and a half years.
Stability is pretty good. It's gotten better over the years. It's kind of 50/50 between features that have been added and our understanding and usage of the product over the last three years. But it's definitely gotten better.
It's highly scalable. That's one of the things we like about it. We can empower clients. I have one client that's migrating from his on-premise into one of our private clouds, and we have enabled him to do so. We set up the environment and we're enabling him to build VPGs and migrate them as needed without our interaction at all. This is bringing in tons of revenue. It's super scalable and it seems to be not just easy for us to use, but easy for us to enable a client to use it as well.
Technical support is astounding. I've said that to Zerto technicians and I've said that to clients as well. Being in my role, I work with a lot of vendors, a lot of different support, and Zerto is off the charts as far as skill and ease to work with. It's been wonderful as far as that goes. Zerto was some of the best support I've had across vendors.
Before Zerto, there really wasn't anything that was as good as Zerto, so it was a game-changer.
The initial setup is pretty straightforward.
For an off-prem client, I would send them a welcome letter that details what they need to do on their end with the server. I would send the download package, everything like that. If the client is immediately responsive, that could be done within an hour, but then some clients take a little longer. Once they have the infrastructure set up on their end and the VPN is set up, I can have a Zerto off-prem implementation replicating into one of our private clouds within an hour or two hours maximum, even for a large environment.
A client was migrating into one of our usage-based clouds, which automatically bills by the resource pool; the more they put in there, the more we gain. We've probably increased the business going into that environment 10-fold. It's a ten-times multiple of what we invested into it, just for that one use case, because he's growing so rapidly. Every time he brings over a new client, it adds to the billing, which is hands-free for us. We've enabled him to do it.
Pricing is fair. For the license that we have and the way that it's priced, it is pretty simple and it's not over-complicated like some other platforms. It would be very beneficial to have some sort of training or even just documentation around every component of Zerto and how it should be built or there should be suggestions about how it should be built. It would help newer companies that are adopting the platform to have a better opportunity to grab all the revenue upfront.
Journal history was one of the things that we didn't take into consideration when we implemented Zerto initially and we lost a lot of money there. We talked to one of the reps after that and found out that some clients do roll in the cost of this journal and some clients actually charged separately for it. Zerto has made it easier to plan for that lately with Zerto Analytics, but it's still a gray area.
There aren't any additional costs in addition to standard licensing that I'm aware of.
We still use Veeam in the environment, but the recovery points aren't as robust; they're a lot thinner. You can get recovery points maybe an hour apart at best, but you can't get within five seconds of production. Previously, we used Veeam and the old active-passive standard of building a server in each environment and replicating to it.
I've actually pushed us to use Zerto for our backups with the solutions team for quite a while, since version 6.5. I don't think they plan on doing it just because we already have two other backup offerings and they don't want to complicate our Zerto infrastructure. From my understanding, we're not planning on doing it. But with every release, it gets so much better and it's just a matter of time before we revisit it.
My advice would be to follow best practices when it comes to back-end infrastructure. We have made some changes specifically to track certain things like swap files and journal history. Previously, we had everything going to production data stores and now we have dedicated journal data and restore data stores for swap files, which helps us to thin out the noise when it comes to storage. Storage implementation is very important.
Make sure to go through all the training. The training on MyZerto is free, very straightforward and it's very informative. That's one of the things we didn't do initially but it wasn't really as available as it is now.
I would rate Zerto ten out of ten.
We utilize Zerto to backup our on-prem environment to our cloud provider. We've also used it for migrations from on-prem to our cloud provider.
Our deployment model is a hybrid. We're using on-prem and also replicating to Azure.
It is used in our production environment and also our lower environment, on-prem. It's like a DR, as we're backing it all up to our cloud provider. There are a handful of servers involved, replicating and backing up.
It saves us about eight to 10 hours a month in staff time.
Another benefit is just the peace of mind that everything is backed up. We rely on the backups, that they're good backups. It's not like we have to second-guess them.
It has also helped us with our migration to our cloud provider. It's made it easier, sped up the process, and taken a lot of the guesswork out of it.
The solution has reduced the number of staff involved in data recovery situations for our backup and recovery side by at least two people.
The most valuable features are
In addition, the RPOs and RTOs are great on it. It keeps up with things. The protection has been perfect so far when we have done our tests of spinning things up every six months or so. All our backups have come up with no issues at all. They just make great replication copies.
Zerto is also easy to use. That single pane of glass makes it very easy to check on the status of replicated items, and if there are any issues, to dig into them to fix them.
So far, it's been pretty good. I haven't had any issues. If I had to pick anything, it would be the documentation for upgrades. They need to make it easier for users to do upgrades without having to contact support, by providing better documentation for that.
We have been using Zerto since 2018.
So far, it's been very stable. We don't have any issues with the services or the ZVAs. They just keep trucking. There have been no stability issues at all.
It's very easy to scale with it. As our environment has grown over the years, we've been able to add ZVAs to it, configure them, and they just fall right into the mix. Scaling is very easy.
Technical support has been very helpful and quick to get back with responses. Ticket turnaround time has never taken more than an hour for me to receive a response back to a general question.
We used Avamar. The primary reason we switched to Zerto was the integration with cloud providers that it provides.
I was not involved in the implementation, but I do remember that it was a pretty short implementation time. It included setting up the ZVA agents in our on-prem environment and connecting to our provider's cloud storage. The longest part of the implementation was getting the data, the initial seed or the backups, up there. But that's nothing against Zerto. Every environment will be different on that and has to get its initial copy up there. Since then, keeping copies up to date has been good. It meets up with RPOs and RTOs.
The initial implementation and getting everything set up took us about two and a half weeks. After that, to get everything that we are protecting into the cloud took us close to a month. We had to do it in stages, due to our work environment and our connections at the time. We didn't have the biggest connections, but that's more on our side, not Zerto's.
There are three people involved in maintaining Zerto for us. They're systems engineers.
Zerto is a lot easier to use than Avamar: easier management, easier setup, and the single pane of glass to watch over everything makes it better. I wouldn't say there's really a cost savings. They're probably comparable in price, but there were a lot more features and options with Zerto than in Avamar.
If you want something that's easy to set up, with a single pane of glass, and that doesn't take a backup administrator to admin, Zerto is the way to go.
The only lesson we really learned, and this has been resolved now, is that when we initially started using Zerto there were some hiccups when it came to Linux servers, hiccups that we had to work through. Support was very helpful and resolved it for us, but it made it a little bit of a manual process. In the later releases of Zerto, they've resolved those issues. They just had to work out some kinks.
We use this solution for disaster recovery and business continuance.
We are protecting SQL, our file servers, and some other applications that are specific to the healthcare domain.
In terms of providing continuous data protection, Zerto has been great. We've had no real issues and it's pretty easy to work with.
At this time, we do not use Zerto for long-term retention. It's something that we may look into, although we don't protect all of our VMs. We only have 60 licenses, but we have more than 300 VMs. We use Veeam for the actual backups at the moment, and it didn't seem practical to have two separate solutions, where we use Zerto for a few and Veeam for the rest. Licensing-wise, it was too expensive to put replication functionality on every VM, just to get a backup of it. I know that Zerto is changing its licensing so that you can get a backup only. However, when we purchased Veeam, it was for three years and we still have part of a year left. After that expires, we will revisit it.
Prior to implementing Zerto, if there was a disaster at one site, we didn't really have any way to spin things up at the other site. We would have restored from backups, but we didn't have a backup environment at the other site to restore to. This meant that, depending on how bad the outage was, it was going to take weeks or months to get back up and running. Now we're in a situation, at least with our key applications, where we could get those back up in a matter of minutes versus weeks. There is a much better comfort level now.
If we had to failback or move workloads, Zerto would decrease the time it takes to do so. Fortunately, we've never had an event where we've actually had to use Zerto for a live failover. We test the VPGs and get the actual individual teams that run the software involved to test everything out, to make sure it's good. Other than that, fortunately, we haven't really had a need to actually fail anything over at this point.
We have leveraged it at times to move a workload. An example of this is that we've had servers that we were initially told were going to be built at one site, but then a couple of weeks later, it's "Well, no, we want this at the other site." So, instead of having to create a new VM at the other site, decommission the old one, and all that work that's involved with that, we just used Zerto to move it. This is something that saved us a lot of time and it worked perfectly. Between building another one and decommissioning, it is probably a savings of three days' work between all of the people involved.
Fortunately, we haven't had to use Zerto to recover from a ransomware attack. We haven't been hit with anything like that yet. One of the things that made it attractive for us is that we would potentially be able to recover to a point in time just before such an attack happened.
We have also used it in a scenario where we've had a vendor doing an upgrade. We replicated it to the same site instead of the alternate site, just so that if something went wrong we'd have a more instant restore point that we could pick from versus our backups. Since our backups only run once a night, we could have potentially lost a decent amount of data. Again, the upgrade went smoothly, so we didn't have to leverage it, but if there was going to be a problem with that then it would have saved us time and potentially data.
The most valuable feature is the ease of upgrades. We've updated it numerous times since we started, and we can perform upgrades, including with VMware, without impacting anything in conjunction with it.
The reporting on failovers, including the step-by-step and the times, is useful because we can run through a failover and provide reports on it.
I find Zerto extremely easy to use. Setting up VPGs, the upgrade process, failover, and testing are all super easy to do. It is all very straightforward, including the initial setup.
I would like to have an overall orchestration capability that would enable you to do multiple VPGs in some sort of order, with delays in between. For example, at least in our testing scenario, we have our domain controllers. We have to fail that over first, get those up and running before we bring up the application side so that people can log in. If there was an actual failover, there would be certain things that would have to failover first, and get them running. Then, the application would be second, like SQL for example. For our dialysis application, one would have to have SQL up and running first before that. It would be nice to be able to select both and then say, start up this VPG and then wait 10 minutes and then fire up this one.
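In the absence of built-in orchestration, an ordered failover with delays can be approximated by a small script driving the API, along the lines of the hypothetical sketch below. The VPG names, delays, and failover endpoint are assumptions, not a documented recipe.

```python
# Hypothetical sketch: fail over VPGs in a fixed order with a delay
# between tiers (domain controllers first, then SQL, then applications).
# The endpoint path and the VPG names are assumptions.
import time
import requests

FAILOVER_ORDER = [
    ("DomainControllers-VPG", 600),  # wait 10 minutes after this tier
    ("SQL-VPG", 300),                # wait 5 minutes after this tier
    ("Dialysis-App-VPG", 0),
]

def start_failover(zvm: str, token: str, vpg_identifier: str) -> None:
    """Kick off a failover for a single VPG (assumed endpoint)."""
    resp = requests.post(f"{zvm}/v1/vpgs/{vpg_identifier}/Failover",
                         json={}, headers={"x-zerto-session": token},
                         verify=False)
    resp.raise_for_status()

def orchestrate(zvm: str, token: str, vpg_ids_by_name: dict) -> None:
    for name, delay_seconds in FAILOVER_ORDER:
        start_failover(zvm, token, vpg_ids_by_name[name])
        print(f"Failover started for {name}; waiting {delay_seconds}s before next tier")
        time.sleep(delay_seconds)
```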
I have been using Zerto for between three and four years, since 2018.
I find this product super stable and I've had basically zero problems with it. A couple of minor things came up, and support resolved them pretty much instantly. We've never actually been down with it, but one problem was where it didn't recognize our version of the VMware. It was an entry in some INI file but that was quickly resolved.
I would think it scales great and it's just a matter of licensing. Right now, we have just the basic license that enables us to go one-to-one. We do want to go to the one-to-many and then out to the cloud, which is an option that would be better for us. We're just waiting to get the cloud connectivity before we upgrade the license. In this aspect, it should scale well.
At this point, myself and perhaps one other person use the product. We're licensed for 60 VMs and we have just slightly less than that, in the upper 50s. I would think that our usage in the future will increase.
Every time that we have a project come along, as part of that, they're supposed to verify what the DR business continuity needs are in terms of RTO and RPO. The only option for us other than this is backups, which are up to 24 hours. If that doesn't meet the needs of a new project, we are supposed to get a Zerto license for it. It's something that should be increasing over time.
The technical support from Zerto has been great. Anytime that we put a ticket in, they've called back very quickly, and the issues have always been resolved in less than a day. Really, it happens within hours.
It is also nice that you can open a case directly from the management console, instead of having to place a call and wait in a queue. When you open a ticket, it's created, and then they call you back. It seems to be a great process.
We are currently using Veeam for backups only, whereas Zerto is used for our business continuity disaster recovery. We have never used Veeam in terms of DR. When we purchased Zerto, you had to buy a license for replication. You could also leverage it for backup, but it didn't make sense because it was more pricey than using Veeam for that.
For backups, Veeam is pretty easy to use. Backups seem slightly more complex than the DR part, at least in the way Zerto handles them, but ultimately Zerto is easier for me to work with than Veeam's backup. Backups have historically always been a little bit more tricky.
We used to have IBM Spectrum Protect, which was a total beast. So, Veeam is much easier to use than our previous backup solution. I know Veeam does have a DR product and we've never really looked at it. So, I can't really compare Zerto to that. I know Zerto does seem to be a better solution.
Prior to working with Zerto, we didn't have a DR business continuity plan. Essentially, we had no staff working on it.
The initial setup is straightforward. We had it up and running in no time at all, and it wasn't something that took us weeks or months to implement. The install was done in less than a day and we were already starting to create VPGs immediately.
We started off with a trial, running a PoC. We had a trial license mainly because, being in the healthcare industry, we have some unique applications. The other options for disaster recovery for those were going to be pretty pricey, and that would have been a solution for just that one particular application. At that point, we were more interested in having the backups.
We don't like having five different backup utilities, and we were hoping to have just one product that would handle all of our DR and business continuity needs. Zerto seemed to be that product when we looked at it, so we wanted to do a proof of concept on one main application, Meditech, our primary healthcare information system that everybody uses. Zerto wasn't officially a supported DR and business continuity methodology for it, but we put it through the wringer a bit during the PoC phase to make sure it worked before we were really committed.
A lot of the other applications are straightforward, so we weren't as concerned about what we would do with them after the fact. But Meditech was one of the big driving applications that needed to be tested before we committed to purchasing. We also made calls to other hospitals that are Meditech customers and also use Zerto, to get a better comfort level based on their experiences.
Two of us from the company, including a technical analyst and an enterprise architect, were involved in the initial setup. One of the vendor's reps came down to assist us with the first one, and he was great to deal with. Any questions that we had, he was able to answer them right away. He didn't say things like "I'll get back to you on that". He definitely knew what he was doing.
The install was pretty basic and we probably could have done it ourselves regardless, but just to fill in some of the knowledge gaps of how it actually works under the covers, he was able to provide that and some other pointers on things.
In terms of ROI, it is hard to say. Fortunately, we haven't had any issues. Obviously, if we had an issue we would have seen ROI, but it's kind of like insurance. You pay for it and then if nothing ever happens, that's it. But, if something were to happen, then you're pretty glad that you had it in place.
Similarly, if you have an accident with your car, it's good that you had insurance because it's saving you money. But if you never have an accident, then you're spending money. In that way, I look at any disaster recovery business continuity as insurance.
Although we've never had to use it, if we do then we will see ROI the first time.
The pricing doesn't seem too bad for what it does. I know that the license that we have is being deprecated and I think you can only get their enterprise one moving forward. I know that we're supposed to change to that regardless, which is the one that gives us the ability to move out to the cloud and do multiple hypervisors, et cetera.
Overall, it seems fair to me. Plus, the fact that you can do backups and everything with it makes it even better value if you're covering your entire environment. It could cover everything you need to cover, plus the backups, all for one price.
We were looking at VMware Site Recovery Manager as the other option at the time, and Zerto seemed a lot easier to use, with easier upgrade paths. Even when it comes to updating your VMware environment alongside either product, Zerto seems like the easier of the two.
Now that a backup-only license will be available for Zerto, switching away from Veeam is something that we'll look at when the time comes for Veeam renewals. One of the things that we'll do is a cost analysis, to see what it costs comparatively.
We are not using DR in the cloud, although we are looking at using it in the future.
My advice for anybody who is looking into implementing Zerto is to do what we did and implement a proof of concept, so you can feel confident that the solution is going to meet your needs. Feel free to reach out to other people in your industry, as we did with other healthcare organizations. There should be a decent number of people out there doing what you're trying to do.
Zerto seems pretty good at hooking people up with other customers that are doing the same thing, so you have a chance to talk to them directly. I've been on those calls; Zerto basically just connects you with that person and doesn't stay on the call themselves. It's just the two of you talking, so you get pretty unbiased answers from most people. I definitely suggest reaching out to Zerto to get feedback from customers. Basically, just do your due diligence and research.
I would rate this solution a ten out of ten.
We are using it to protect all our on-premise virtual workloads, which includes mission-critical applications, line of business applications, and several unstructured data type repositories for disaster recovery.
It is our sole disaster recovery solution for what it does. It is protecting all the workloads at SmartBank.
Both of our data centers are on-premise and in colocations. Our plan over the next year or two is that we will very likely be shifting to DR in the cloud.
We had a ransomware event on one of our file servers. We detected that event very quickly using other methodologies. However, because we had Zerto in place on that server, within about 30 minutes from seeing the problem, we were able to go back and recover that machine before that ransomware event had happened. This is a great example of the solution's ability to restore so quickly that it really helped us.
Because of its ease of use, it has increased the number of people in IT who can failback or move workloads. This used to be something that was done only by our infrastructure team, because it was manual processes and complex. We now have the virtual protection setup so effectively, and Zerto does it so effectively, that we have now been able to get another three or four people from other groups of our IT company trained on how to do recovery operations. This helps us tremendously when we are doing recovery because there are just a lot more people who might be available to do it. On average, we have saved two hours per workload, and we have hundreds of workloads. We have taken about a two-hour process down to about 10 minutes in terms of recovery. Zerto is really good at what it does. It has been tremendous.
We can have a single person restoring scores of machines as well as doing DR. Backups are still managed separately. In our case, we did not reduce staff. Our staff was already kind of a limiting factor. We put Zerto in to enable our staff to do more, not to reduce our staff. Therefore, we have tremendously reduced the amount of workloads being handled by specialists.
The most valuable features are the ease of use, i.e., the relatively low complexity of the solution, as well as the speed and effectiveness of the solution. This allows us to protect our workloads with extremely small latency, making it very easy for us to monitor and recover. So, we are very happy with it.
In terms of Zerto providing continuous data protection, I would rate it as a nine out of 10. It is incredibly effective at what it does. I really have no complaints.
I would like to see more managed service options. While Zerto isn't doing this a lot themselves, there are a ton of third parties who are doing managed services with Zerto.
For this company, we have only been using it for about six months. However, I have used it at two other companies for a total of about four years.
It is stable.
For our current needs, the scalability seems excellent. The scalability of the solution is really more of a function of your bandwidth and the amount of virtual resources you can point at it. I don't think there is any conceivable scalability limit.
Probably 10 people on my team touch Zerto in a meaningful way.
The heavy lifting is done on the infrastructure side, but the other teams monitor, maintain, and most importantly, test it. This is a big deal because we previously had the infrastructure team do all the testing for us before Zerto. Now, the business unit managers directly in IT can do their own testing, which is a big change for us.
Their technical support is excellent. They have a great support portal, which is easy to use. They are very responsive and generally able to help us with any configuration or performance issues that we run into.
Our previous product was VMware Site Recovery Manager. We switched to get a less complex system that could protect our workloads better and enable faster recovery. Those were kind of the main reasons why we switched.
The initial setup was straightforward.
We deployed Zerto initially with a VAR. They explained the process very well. It was just an initial installation service which included some training. Then, we took over the management of it and have been managing it in-house ever since.
We have seen ROI. The biggest way that we have seen it is in avoided downtime. We have had outages before, and we count downtime in terms of dollars spent. We have cut that down so dramatically, which provides us a very quick ROI. We have drastically reduced the amount of time it takes us to recover workloads, from an average of two hours to an average of 10 minutes.
We measure our downtime in thousands of dollars per minute. While it depends on what is down and who it is impacting, we take in an average of $1,000 a minute at a minimum. So, 120 minutes of downtime at $1,000 is $120,000 per workload that is down, and that can add up very quickly.
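As a rough sketch of that arithmetic, using the $1,000-per-minute figure and the two-hour versus ten-minute recovery times quoted above (everything else here is just illustration):

    # Back-of-the-envelope downtime cost, using the figures quoted above.
    cost_per_minute = 1_000          # minimum revenue impact per minute of downtime ($)
    old_recovery_minutes = 120       # roughly two hours per workload before Zerto
    new_recovery_minutes = 10        # roughly ten minutes per workload with Zerto

    old_cost = cost_per_minute * old_recovery_minutes   # $120,000 per workload
    new_cost = cost_per_minute * new_recovery_minutes   # $10,000 per workload
    print(f"Avoided downtime cost per workload: ${old_cost - new_cost:,}")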
My only business complaint is the cost of the solution. I feel like the cost could be a tad lower, but we are willing to pay extra to get the Premium service.
Zerto does a per-workload licensing model, per-server. It is simple and straightforward, but it is not super flexible. It is kind of a one size fits all. They charge the same price for those workloads. I feel like they could have some flexible licensing option possibly based on criticality, just so we could protect less important work. I would love to protect every workload in my environment with Zerto, whether I really need it or not, but the cost is such that I really have to justify that protection. So, if we had some more flexibility, e.g., you could protect servers with a two-, three-, or four-hour RPO at a certain price point versus mission-critical every five minutes, then I would be interested in that.
The costs are the license and annual maintenance, which is the only other ongoing fee. I would imagine a lot of customers also have an initial project cost to get it implemented, if they choose to go that direction, like we did.
We do not currently use it for long-term retention. We have another solution for long-term backup retention, but we are in the second year of a three-year contract, so we will evaluate Zerto when those contracts are up. We will probably test it out. It is certainly something that we will look at. We will also plan to vet having backup and DR in one platform.
The incumbent was Site Recovery Manager, so we evaluated them as an incumbent. We also evaluated Veeam Disaster Recovery Orchestrator. We use Veeam for data backup, and they have a disaster recovery piece. It would have been an add-on to our Veeam, so we evaluated that while also looking at Zerto.
It would be ideal to integrate your backup and disaster recovery into a single solution, so that is a pro whichever way you go with it. Zerto certainly has an answer for that, but so did Veeam. Zerto's replication is superior to anyone else's out there. It's faster, simpler, and effective. I don't think I could get as low an RTO and RPO with any other solution other than Zerto.
When comparing this solution to Site Recovery Manager, pay special attention to the fact that Zerto is hypervisor-agnostic and hardware-agnostic. It is a true software-based solution, which gives flexible options in terms of the types of equipment that they can recover on and to. Ultimately, it is very flexible. It is the most flexible platform for system replication.
I would definitely advise them to give Zerto a chance and PoC it, if they desire. It is the best solution in the marketplace currently and has maintained that for quite some time.
I would give them a nine (out of 10). I really love the solution. I want more Zerto, but I can't afford more Zerto. I would love to protect everything in our environment, but we do have to make a business decision to do that because there is a requisite cost.
We use the solution for two different data center sites. Inside the data centers we use VMware virtualization, NSX stretched VLANs and Dell servers. There are many servers, storage, virtualization, and a myriad of operating systems such as Red Hat and Windows Servers.
We use Zerto to replicate our VMs from one site to the other, where we don't want to have to pay for two licenses of the same thing. We also do this to have high availability, or to have the disaster recovery version of a piece of software. It is a benefit to be able to use Zerto to replicate that VM to the second site without having to power it on or anything. We know that it's always replicated on the other site. We currently use the solution for disaster recovery only, but we are looking at long-term backup retention in the future.
I think it's perfect for providing continuous data protection for us; it is excellent.
The most valuable feature is how simple it is to implement and how quickly you can get up and running at the second site. The solution is also extremely easy to use. For example, you just log onto the console and you can do a test failover with a few clicks. You can run a failover test for your auditors or your management. Afterwards, you can get a report on how easy it was to fail over a specific application and the VMs associated with that application.
In future releases, when backing up the environment, we need to be able to do hot backups of databases and granular, OS-level backups, versus taking a backup of the entire VMDK. I don't think we are able to do all of that right now. Having an agent-based backup is a benefit because you can back up the OS files and, if you have an agent for the database, you can do a hot backup of the database and restore it, in addition to being able to back up the entire VMDK. I don't think they currently have the ability to do a hot backup of a database itself via an agent or something similar.
I have been using this solution for four years.
We have a couple hundred people using the solution within the organization. The solution is very stable; you set it up and you can forget it. When we have had issues where we lost connectivity to a data center, we were easily able to bring up that data center's VMs at the available site using Zerto.
It's very easy to add new hosts, and the VRAs get rolled out to them automatically. It's very easy to scale to more sites. We are already adding more data centers for Zerto to protect. We are very pleased with it.
The customer service is stellar. They always answer and they are very helpful. I have had very good relationships with the sales executives and sales engineers. If the team at the technical support cannot get an issue solved, then our pre-sales engineers will get on calls with us and help us sort through problems. They have been great.
We were using SRM, VMware's Site Recovery Manager before we switched to Zerto. We did the switch because we were impressed with the demo that was given to us. Additionally, SRM was very complicated and cumbersome.
The setup was easy and the demo replication was simple too. The initial process started with us building out the management VMs as per their requirements. We deployed the manager, gave it the login information for vCenter, selected the datastores, and it installed the VRAs out in our environment. Once that was done, we put together the virtual protection groups and built out the replication site. It is very easy.
Our deployment took about a month to go through everything with three different staff members, and for maintenance we have one technician. Most of that time went into making sure we grouped everything together properly, based on the network, its functions, how it should be brought back up, and so on.
I have saved days and even weeks of working time from using the solution. We are in the process right now of designing a new cloud infrastructure for one of our environments to utilize Zerto to replicate our VMs to our cloud. It is going to be a huge time saver, probably saving us a couple hundred thousand dollars. We've definitely seen some good return on investment with it. Our auditors are impressed by it.
This solution is far less expensive than SRM and NetBackup. Beyond the standard licensing cost there is an annual support contract; nothing that we were shocked about.
We have also used NetBackup but Zerto was much easier to set up.
When trying to think of improvements, I cannot think of anything to critique at this time because it behaves so amazingly well. I've been involved with other SRM implementations and SRM is very complicated to put together and configure, whereas Zerto is just so easy out of the box. Overall, the solution has probably saved us hundreds of thousands of dollars, or maybe millions.
Some of the important lessons we have learned: you need to plan your DR carefully; that is the most important. Also, make sure that your applications are grouped together and be cognizant of the different virtual networks they go into. For example, you might have a web frontend in a DMZ in one place, while the application and the database are in another. You need to be careful about which networks you are sending them to at the replication site; be aware of that.
I highly recommend Zerto. I speak about the product all the time. I think that it is priceless what it does for us.
I rate Zerto a ten out of ten.
We are using it for disaster recovery for our day-one applications that need to be up first, upon failover.
We previously had our Microsoft SQL Servers set up as clustered pairs, with the primary in one data center and the secondary in the other, and they were staying in sync via SQL Server Log shipping. That was not a very efficient way to get SQL servers failed over. There were also some things that weren't replicated through log shipping, such as the SQL Server Agent jobs that are defined on the server, or the custom permissions that are set up for the different roles. Zerto was able to replicate the entire server, including the jobs and the permissions, and eliminate the need for us to have that secondary server. We were able to break all of our SQL clusters and just have standalone SQL Servers. It helped to increase our efficiency with failover and reduced our overall compute and storage footprint around SQL by about 40 percent.
When failing back or moving workloads, the solution saves time and reduces the number of people involved. The time from the initiation of a failback to the completion is about five minutes for us. We've also made some tweaks in the DNS to help that to update and replicate quickly so that we're not waiting for that, even if the resource is available. As for the number of people involved, for SQL especially, it used to require getting the SQL team involved and they would do everything manually. Now, anybody can just click through the recovery wizard and perform the failover.
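The DNS tweak amounts to keeping the TTL on the records you repoint at failover low, so clients pick up the new address quickly. Below is a minimal sketch of a pre-failover sanity check, assuming the dnspython package; the record name is a hypothetical placeholder, not something from this environment:

    # Verify that the A record we plan to repoint during failover has a short TTL,
    # so clients stop caching the old address quickly. Requires: pip install dnspython
    import dns.resolver

    RECORD = "sql01.example.com"   # hypothetical name, for illustration only
    MAX_TTL = 300                  # seconds; pick whatever your failover window allows

    answer = dns.resolver.resolve(RECORD, "A")
    ttl = answer.rrset.ttl
    status = "OK" if ttl <= MAX_TTL else "WARNING: TTL too high for fast failover"
    print(f"{RECORD} TTL={ttl}s -> {status}")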
Our savings from Zerto are around licensing and how we structure our current environment. We were able to save money with our on-prem deployment, but we don't use it for cloud.
And in terms of downtime, every time we test a failover it's non-impactful to operations, because we're able to do testing in an isolated environment. Before, if we wanted to test our failover processes it would create a production outage. That is no longer the case. When we were doing regular DR tests before, I would estimate the cost of the downtime at about one weekend per quarter; that's the time we would have to take to do it. Only if we were to do a live failover as a test, which would probably not be done more than once a year, would we really have to worry about impacting any operations.
The most valuable features would be the granularity, the replication with built-in WAN compression, the ease of failover, and the continuous data protection.
The granularity enables us to failover specific workloads instead of an all-or-nothing type of scenario, where you have to move your entire IP block and your data center, or you have to move large chunks of VMs. Those situations also make it prohibitive to test effectively.
The replication piece with the built-in WAN compression is important because the network circuit that we send our replication traffic across isn't actually behind our normal WAN accelerators. We were able to use Zerto's built-in WAN acceleration to help those workloads compress.
The failover is important because that way I can delegate initiating a failover to other people without their having to be an expert in this particular product. It's easy enough to cross-train people.
Continuous data protection is Zerto's bread and butter. They do all of their protection through your journaling and that continuous protection gives you countless restore-point opportunities. That's extremely important for me because if one restore point doesn't work, because it is a crash-consistent restore point, you have so many others to choose from so that you really don't have to worry about having an app-consistent backup to recover from.
Zerto is also extremely easy to use, extremely easy to deploy, and extremely easy to update and maintain. The everyday utilization with the interface is very easy to navigate, and the way in which you perform testing and failover is very controlled and easy to understand.
The replication appliances tend to have issues when they recover from being powered off when a host is in maintenance mode. Sometimes you have to do a manual task where you go in and detach hard disks that are no longer in use, to get the replication appliances to power back on. There are some improvements to be made around the way those recover.
My other main inconvenience is fixed in version 8.5. That issue was moving virtual protection groups to other hosts, whenever a host goes into maintenance mode. That's actually automated in the newer version and I am looking forward to not having to do that any longer.
I've been using Zerto for coming up on four years.
My impression of its stability is very positive. It doesn't seem to have any issues recovering after you shut down any of the particular components of the application. It seems everything comes back up and comes back online well.
Sometimes the replication appliances will stop functioning, for one reason or another, and most of the time a power cycle will resolve that. But anytime that I do have a sync issue, support will generally be back in touch with me within the first half hour after opening a ticket. They're very responsive.
The solution is able to scale to any size of environment. We don't have a huge environment here; we only use it across 20 hosts, 10 at each site, though they're very large hosts. If you have more than a certain number of virtual disks protected on a single replication appliance, the replication appliance will automatically make a clone of itself on that host to accommodate the additional virtual disks. It seems to be built to scale in any way that you need it to.
While our hosts are very large hosts, we don't have any current plans to extend that deployment because we have capacity to grow within our current infrastructure footprint, without having to add on resources.
I rate their technical support very highly. They're very responsive. Usually within the first 30 minutes of opening the case, someone has tried to reach out to me. I will just get a screen share, or a reply to my call with an answer, or a KB article. I have a very positive impression of their support.
We were using Site Recovery Manager for several years, and I always struggled with keeping that functioning and reliable. Every time something changed within the vCenter environment, Site Recovery Manager would tend to break. I wanted to switch to a DR product that I could rely on.
In addition to Site Recovery Manager, we were also using NetApp SnapMirror. We are still using that for our flat file data which is non VM-based. We have Rubrik as our backup solution because, while we replicate our backups, there's not any automation behind bringing those online in the other sites. So it's a manual process to do disaster recovery.
We were having to utilize those solutions to do the failovers for our day-one application in SQL and they were inefficient and ineffective for that. Zerto was able to come in and target those workloads that we needed better recovery time for, or where we needed a more aggressive replication schedule. Zerto is supplementing those other solutions.
Zerto is easier to use than the other solutions. There's definitely more automation and there are more seamless failover activities.
When I deployed the solution, it took certainly less than a day to get it up and running. The upgrade process has been fairly seamless and painless, in the past, as we have gone from one version to the next. That includes some of the features they've enhanced, where it automatically updates the replication appliances as well as the management pieces.
We have two data centers and they're both Active-Active for one another. Our deployment strategy for Zerto was to stand up a site server at each one, pair them together, and then start identifying the first workloads to add into Zerto protection. We started with our SQL environment.
I was the only one involved in the deployment. If I had questions I would ask my account team. My sales engineer and the account rep are both very knowledgeable. But I actually didn't need to open a support ticket as part of the deployment. It was very easy and straightforward.
About five of us utilize Zerto. I am the infrastructure engineer, focusing on the compute side of the house. We've got a storage engineer. My manager is an applications delivery manager who uses it. We've got another senior network engineer who focuses more on the runbook side of things, and he uses it. And my backup, who is our Citrix guy, is starting to use it.
Zerto doesn't really require any particular care and feeding. Whenever a new version comes out with feature sets we want, I'll decide when I'm going to update it and do that myself. It doesn't really even require a support call. It's pretty straightforward. For each management appliance, updates have taken 10 to 15 minutes in the past. And it's just a couple of minutes for each replication appliance.
Our ROI is quite significant. The SQL cost savings alone would be in the hundreds of thousands of dollars per year. That's due to the fact that we don't need to have our SQL clustering set up as an always-on cluster, which would need to be a higher tier of Microsoft licensing. We're able to use SQL standard for everything, and that wouldn't be possible without a third-party like Zerto to do the replication and failover.
Get the Enterprise Cloud license because it's the most flexible, and the pricing should come in around $1,000 per VM.
Support is an additional cost. We are currently paying for three years of support. It adds roughly 15 to 20 percent of overhead, per license, for each additional year of support.
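To make that cost structure concrete, here is a small illustrative calculation. The roughly $1,000-per-VM price and the 15 to 20 percent annual support overhead come from above; the VM count and term are made-up numbers:

    # Illustrative Zerto licensing estimate (VM count and term are hypothetical).
    vms = 100
    license_per_vm = 1_000        # approx. Enterprise Cloud price per VM ($), as quoted above
    support_rate = 0.175          # midpoint of the quoted 15-20% annual support overhead
    support_years = 3

    license_cost = vms * license_per_vm
    support_cost = license_cost * support_rate * support_years
    print(f"Licenses: ${license_cost:,}  Support ({support_years} yrs): ${support_cost:,.0f}")
    print(f"Estimated total: ${license_cost + support_cost:,.0f}")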
Definitely take the free trial and put it through its paces, because you really can't break anything with it, given the way that you can do the testing. It gives you a good opportunity to play with the tools without having to worry about causing any problems in the environment.
We have plans to evaluate the solution for long-term retention. I'm going to start testing some of their features once we upgrade to version 8.5, and then we'll evaluate if it makes sense to do that or not. We do have other backup products that we're evaluating alongside of that though.
The solution has not reduced the number of staff involved in overall backup and DR management. We already run a very lean engineering team.
I got what I expected. I'd actually been trying to bring the product in since 2014 but I kept not getting budget funding for it. I feel satisfied with what I ended up with and I'm glad that we were able to move forward with the project.
We use it for disaster recovery. We use it for some testing. And we use it for hot backups on databases.
This past summer we had multiple hurricanes down south. We host for our clients, and what we did was proactively move them from their location down south up to our Boise data center in Idaho. We were able to do that with Zerto.
When you need to fail back or move workloads, Zerto decreases both the time it takes and the number of people involved. I was actually part of a project to move a data center, and we used Zerto to move it. We moved 20,000 virtual machines and the downtime was just a reboot of each machine. Before, it probably would have taken at least six people in multiple teams to do it, whereas in this move it was just two engineers from the same team who did it.
In addition, we recently had a corrupt database that we recovered using Zerto. If we didn't have Zerto, we would have had to do a restore and we would have had a loss of data of up to 24 hours, because the backups were done every 24 hours. In this case, we were able to roll the database back to a point in time that the DBAs deemed had good data. There was very little data loss as a result. Using Zerto in that situation saved us at least eight hours and from having to use multiple teams.
In that situation, for the recovery we would have done a restore from backup. The problem is we would have had X amount of hours of data loss. I don't know how long it would have taken the DBAs or our developers or app owners to reproduce the information that would have been lost. That could have ended up taking days. I've seen it take days in the past to recreate data that was lost as part of the recovery process.
Another point is that the solution has reduced the staff involved in overall backup and DR management. The big thing is that it reduces the teams involved. So rather than having the SAN team involved, the backup team involved, and the virtualization engineers, it ends up being just the virtualization engineers who do all the work. It has reduced the number of people involved from six to eight people down to a single engineer.
The most valuable feature of this solution is the ease of use. In the event of a disaster, you don't need a technical person to actually run the software. You can bring anybody in, with the right instructions and credentials, and they can run the solution.
Having been in disaster situations myself, one of the things that a lot of companies miss is the fact that, during a test, it's all hands on deck, but during a disaster not all those hands are there. I don't know what the statistics are, but it's quite infrequent that you have the ability to get the technical people necessary to do technical stuff. I was also part of the post-9/11 disaster recovery review, and one of the key conversations was about situations where an organization had the solution in place but they didn't have the people. Their solutions were quite complex, whereas with Zerto you can do it with a mouse. You can do it with non-technical staff, as long as you have your documentation in proper order.
I've been doing disaster recovery for 20 years and, in my opinion, the solution's continuous protection is the best on the market. The ability to do the split-write, without any interruption to the production server, and the ability to roll back to any point in time you desire, are two really key features. The back-end technology, the split-write and the appliances, they've got that down very well.
There's room for improvement with the GUI. The interface ends up coming down to a personal preference thing and where you like to see things. It's like getting into a new car. You have to relearn where the gauges are.
I'd also like to see them go to an appliance-based solution, rather than our standing up a VM. While the GUI ends up depending on personal preference, the actual platform that the GUI is created on needs to go to an appliance base.
Another area for improvement I'd like to see is the tuning of the VRAs built into the GUI. It's a little cryptic. You really have to be a very technical engineer to get that deep into it. I'd like to see a little better interface that allows you to do that tuning yourself, rather than trying to get their engineer and your engineer together to do it.
I've been using Zerto for five years.
We had a rough start but, in its defense, we were doing a lot of long-distance replication with what we had.
The thing that I liked most about the problems that we had was that Zerto wasn't afraid to admit it. They also weren't afraid to put us in touch with the right staff on their side. It wasn't a big deal for me to talk to their developer. Normally, when you're at that level, the developers are shielded from customers, whereas with Zerto it was a more personal type of service that I got. We had a problem and they put me in touch with the developer who developed that piece of the solution and we brought it to resolution.
It's very scalable. We grew from just a few hundred to a few thousand pretty quickly, and there were very few hiccups during that process.
Out of the gate, when you call their number, they could do better.
The thing is that I've developed such a good relationship with all of them, at all levels at Zerto, that I know who to call. If you're off the street and you call in, you're going to get that level-one support who's going to move you through it. When I call in, they put me right through to the level-two support and I move from there. It's like any support, if you know the right people, you can skip the helpdesk level and go right into the engineering.
The disaster recovery solution for the company I'm currently with was the typical restore from backups. They were using SAN replication as part of it.
Personally, I've used many solutions over the years, starting with spinning tape, boot-from-disk, and then as we virtualized the data center, we started doing SAN-based replication. I've deployed and supported VMware Site Recovery Manager under different replication solutions, and then moved into Zerto. Prior to Zerto I used several different vendors' products.
Having been in disasters, living in Florida and experiencing them, I understand what it takes to recover a data center. I worked for my city in Florida and volunteered in the emergency operation center. Not only did I sit in technical meetings on how to recover computers, but I also sat in meetings on how to recover the city. So I have a different perspective when it comes to disaster recovery. I have a full view of how and what it takes to recover a city, as well as how and what it takes to recover a data center. Using that background, I pull them together.
As a result, I first look for a solution that works. That's key. If it doesn't work, it's out the door. The second factor is its ease of use. It has to be very easy to use, just a few clicks of the mouse and you're able to do a recovery. Zerto meets my requirements.
Not only was the initial setup simple, but upgrades actually work, and backward compatibility during the upgrades works. I've been doing IT for 25 years and it's one of the few solutions I have come across where upgrades work: not only does the actual upgrade succeed, but it stays compatible with what you have in place. Upgrades are very impressive and very seamless.
I started with working with Zerto during the 4.5 version. Right after we deployed that we went to 5.0. The length of time really varies depending upon your engineering platform process. I did the PoC and all the documentation, and then I did the deployment into production. I spent a few days on the PoC because I needed to know what its performance impact was going to be on the host, on the VMs. Then I had to see what the replication impact was going to be as well.
And documentation took me a couple of weeks. Because I've been in disasters, when I do documentation I do it so that I can hand it to anybody, literally, including—and I've done it—to the janitor. I've handed the documentation to the janitor and I've had them sit down and do a recovery. I'm picky on documentation.
The actual sit-down at the keyboard to do the deployment, after everything was in place, including getting a service account, getting the VM deployed, etc., was quick. In one day we had it up and running.
I tend to do it myself because I'm old-school. I want to know how it works right from the ground up so that if I have to do any troubleshooting, I know where not to go to look at things. If you understand how something works, you can troubleshoot a lot faster.
I'm the lead architect, engineer, and troubleshooter. We have about four other people who are involved with it. We have several people because of our locations. We have more here, in the Idaho area, than we do in our other data center. We have one down in the southeast, hurricane area, of the United States. They're not expected to do a whole lot of disaster recovery, whereas we are.
I don't dive too much into the pricing side of things, but I'd like to see better tiering for Zerto's pricing. We do multi-tier VMs. I don't think I should be paying a penalty and price for a tier-three VM where I don't need a really tight SLA like I do for a tier-one.
Also, if we're looking to replace the data center backup solution, I have VMs that I may not need for a week in the event of a disaster. I'd like to see a backup price per VM, rather than the tier-one licensing that I currently pay for, per VM. I'd like to see better tiering in regards to the licensing.
We have Commvault, Cohesity, and Veeam. Veeam is probably the closest to Zerto for ease of use. The problem is that Veeam doesn't have the technical background of the split-write that Zerto has. Veeam can be very painful; it can't protect just any VM in your infrastructure, and its process of doing snapshots is very painful. With Zerto, it doesn't matter how busy the VM is, it can protect it. Veeam does not do it that way, but its GUI is pretty easy to use. Again, though, if it doesn't work, it doesn't matter how easy it is.
Commvault and Cohesity are both complicated solutions. Cohesity is like Veeam; it is snapshot technology. Its GUI is okay but it's a little cryptic, and that's the thing I don't like about it. With 25 years of doing IT, I can tell that the interface Cohesity designed was done by Linux engineers. It's very kludgy, with multiple clicks, and you've got to know where to go. With Zerto, it's plain and simple to use.
Do your homework. Do a PoC. Make sure you have technical people doing your PoC, people who can dive deep into the technology. If you do your due diligence on the PoC, it will win every time. We did the PoC against five other products, and no one could touch Zerto on the technical side of it, at all, and that's besides the ease of use.
What I've learned from using it is to make sure you're able to tune the replication. Like any replication, if you're doing boot from SAN or you're replicating your LUNs from place to place, you have to tune it. I was fortunate; I've been tuning replication for many years. If you're doing long distance, you have very high latency and you need to compensate for that. I worked with Zerto developers and we were able to tune replication to meet our site-to-site requirements. That was a key thing, and it's missed a lot of the time. When people deploy the solution, they're not always keeping up with the SLA, and it has nothing to do with how it was deployed; it has to do with the pipe and the latency between sites. That tends to be missed when deploying replication.
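One concrete piece of that tuning is sizing buffers and expectations around the bandwidth-delay product of the site-to-site link. This is generic WAN math rather than anything Zerto-specific, and the bandwidth and latency figures below are purely illustrative:

    # Bandwidth-delay product: how much data must be "in flight" to keep a
    # high-latency replication link full. Figures are illustrative only.
    link_mbps = 1_000          # 1 Gbps site-to-site circuit
    rtt_ms = 60                # round-trip latency between sites

    bdp_bits = link_mbps * 1_000_000 * (rtt_ms / 1_000)
    bdp_mb = bdp_bits / 8 / 1_000_000
    print(f"Bandwidth-delay product: {bdp_mb:.1f} MB")
    # If send/receive buffers are smaller than this, the link never fills and
    # replication falls behind its SLA regardless of how the product is configured.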
It is on our drawing board to look at Zerto for backups and long-term retention. I would say we're going to end up using it. It makes sense, at least from my standpoint, to keep things simple. It already has the data, so why not use it to move it wherever?
When it comes to the fact that it provides both backup and disaster recovery in one platform, I had never thought about the backup piece. When they announced it, it just made sense to me as an engineer with a logical mind. "Hey, I'm already holding the data, shoveling it across states. Instead of putting it here, why not put it over here at the same time?" So I was very excited about a two-for-one product. My company has backup solutions and they're struggling with them. I'm looking to replace their backup solutions with Zerto, probably in 2021.
We're also still looking at doing DR in the cloud rather than in a physical data center. We've done some testing with it. In my previous company we were using it and deployed it around the globe. Due to border restrictions, we had to go to the cloud with it. It was big because we were able to go to the cloud and we didn't have to stand up another data center. I'll be conservative and say that it saved us a few million dollars.
I give Zerto a nine out of 10. The only reason that I'm not giving it a 10 is that I'd like to see the GUI made into an appliance.
It's on-prem only, and we're replicating part of production data centers to the DR location. We use it 100 percent for DR. Zerto, as a product, has a lot of capabilities, but we're only using it to replicate servers for disaster recovery, on-prem.
Providing DR for the entire organization is a big improvement, compared to the previous way we did DR. With the old DR tool we identified the systems that we wanted to protect and we installed agents and installed a server in the remote location and pretty much treated every physical and virtual server the same way. That tool was agent-based and required installation and maintenance of a server on the remote site. Now, the effort involved is a fraction of what it was before. We just click the VMs that we want to protect and they are protected.
Zerto has reduced the number of staff involved in DR.
It has also helped to reduce downtime. Something that took a 10-to-15-minute outage with our old solution requires just one reboot with Zerto, which takes less than a minute. That amount of downtime would have cost our company a couple of thousand dollars.
Zerto's support for different hypervisors is a valuable feature because we have a mixed bag. We have VMware and we have Hyper-V. For us, that was extremely critical when we made the decision. We wanted a single tool that is able to replicate all our virtual servers. At this point, I think the only tool on the market that can do that on-premise is Zerto.
It does a great job of continuous data protection. That's why we're using it for DR. It has the journal, the recovery points. It's doing its job. It's a good tool.
It's extremely easy to use with a very intuitive interface. You can set up a VPG (virtual protected group) and add VMs to it in a couple of clicks. Everything is in a single dashboard and you can do everything from there. If you need some granular information, you click the Analytics and get your RPO or RTO and how much data you would lose if you do a DR at this point in time.
They definitely have room for improvement in a couple of areas. One is role-based access control. Right now, they don't have an identity source so they use the identity of the vCenter or the VMM. If they connected to an identity source like Active Directory and allowed for granular roles and permissions, that would be an improvement.
Another area of improvement is support for clusters. They have very limited support for Microsoft clustering.
Also, integration with VMware could be improved. For example, when a VM is created in vCenter, it would be helpful to be able to identify the VM, by tags or any other means, as needing DR protection. And then Zerto should be able to automatically add the VM to a VPG.
There is definitely room for improvement. But what they have implemented so far, works pretty well.
I have been using Zerto for about five years.
It's pretty stable.
We're always one version behind. The current version is 8.5 and we're running 8. We always wait until at least Update 1 before we upgrade. So when v9 is out, we'll probably upgrade to 8.5, Update 1, or whatever the current update is. Because we are a little bit behind and we're running on a very stable, mature version, we rarely experience issues.
We're running thousands of hosts. Scalability is not a problem.
We plan to keep the product. It's doing a good job.
Our experience with their technical support has been good. But keep in mind that we have a pretty high-level, Premium Support agreement with Zerto. We have a dedicated technical account manager from Zerto, and he has direct access to the developers.
We used Double-Take DR which treated all the physical and virtual servers exactly the same way with agents. Zerto replaced it.
We switched because it is a little bit inefficient to treat all the virtual machines as separate physical servers, because on the DR site you need to install them, you need to configure them. You need to put the agents on both sites and configure the replication relationship. It's very complex. And whenever you need to patch or do some maintenance on the target site, it's double the work because you patch the source and you patch the target—you have a live server at the remote site. With Zerto, as soon as I patch the VM at the source, the updates are replicated to the target immediately.
Zerto's ease of use is very good compared with other similar solutions for replication.
The initial setup of Zerto is quite simple. You build a SQL instance. You build a Windows VM and install the ZVM on it. You integrate it with vCenter and then, from the ZVM, you make sure your firewall ports are open and you push the VRAs down.
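As a minimal pre-installation sketch, you can verify connectivity between the components before pushing the VRAs. The host names are placeholders and the port list is an assumption based on a typical Zerto deployment, so check the vendor's documentation for the ports your version actually requires:

    # Quick reachability check from the ZVM host toward vCenter, the remote ZVM,
    # and the ESXi hosts before pushing VRAs. Host names and ports are assumptions.
    import socket

    CHECKS = [
        ("vcenter.example.com", 443),    # vCenter API
        ("zvm-dr.example.com", 9081),    # ZVM-to-ZVM traffic (typical default)
        ("esxi01.example.com", 4007),    # VRA replication traffic (typical default)
        ("esxi01.example.com", 4008),
    ]

    for host, port in CHECKS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} reachable")
        except OSError as err:
            print(f"{host}:{port} NOT reachable ({err})")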
Deployment takes a couple of hours for a relatively big environment. It would typically require 30 minutes of DBA time, an hour or two of Windows engineering time, and an hour from a VMware engineer.
It doesn't require any staff for day-to-day maintenance. It's used by our operations team, which is close to 100 people; those are people who have access to it.
It's quite easy and straightforward. We do it with internal labor.
The way we use it there is no return on investment. You can think of Zerto as an insurance policy. We use it to protect our business, but we actually hope that we'll never put it into action.
It's not the cheapest tool, it's expensive. But it's doing a good job.
We pay the standard license, maintenance every year, and we pay for our technical account manager, which is pretty much Professional Services, with our Premium Support.
We looked at other solutions. We own another solution called VMware Site Recovery Manager, SRM. We have licenses for our entire environment and we still decided not to use it. That's how big the difference was in the experience that Zerto provides.
We also compared Zerto with our previous disaster recovery solution, which was called Double-Take DR.
Zerto is much better. It is not a cheap solution. The fact that we decided to buy it when we already had all the licenses for VMware, bundled in our ELA with VMware, should tell you how big of a difference there was.
My advice would be that when you need a tool to bet your business on, as a last resort, make sure you evaluate all the options, test them, and don't be cheap.
The biggest lesson I've learned from using Zerto is that a third-party company can do a better job of protecting the workloads than the vendor. It does a better job than VMware and Microsoft together.
In terms of using the solution for long-term retention, we're evaluating Zerto's offering. It's a new feature. We already have an established backup system, using Symantec. In a couple of years, when we need to refresh Symantec, we might consider it. But at this point we don't use it and we aren't considering it.
We use the Veritas NetBackup solution. They split from Symantec so Veritas is separate, but it was a Symantec solution for backup. We don't use Veeam, we don't use Cohesity, we don't use Rubrik. The only potential is to replace our Veritas/Symantec backup product, in the future, with Zerto Long Term Retention.
If we have a DR situation, we are not planning to fail back. It's not part of our DR strategy. If we need to fail-over a production data center, it means that this data center has been destroyed, it's a smoking hole in the grass. We will be running continuously from the DR data center, which is a full-scale data center.
I would rate Zerto at nine out of 10. There are new features that they're working on, which will be nice to have. That's why I won't rate it a 10, but overall it's a really good, stable, easy-to-use product.
Our use case is 100 percent disaster recovery between two different geographies. We have a very large private cloud offering. We've got about 1,200 customers and almost 10,000 VMs that are under Zerto protection. Every one of those virtual machines needs to be replicated from Waltham to Chicago, from the East Coast of the U.S. to the central U.S. Likewise, we have a European business with the exact same flow, although it's much smaller as far as number of VMs; it might be a couple of hundred. That implementation is going from Berlin to Amsterdam. We've got one-way protection in two different geographies and all of those machines are under Zerto protection.
The number-one benefit is that, for the first time, we could show, at a customer level of granularity, how a customer was protected and what their RPO was, in real time. Each one of our 1,200 or so customers has a portion of those 10,000 VMs. For the first time we were able to tell a product leader or product manager what the RPO was on Thursday at 2:00 PM for that customer. We could say, "Hey, it was 67 seconds." Our company is very customer-centric and customer-focused. There's less interest in what the overall health is, and a lot of times there's specific interest in, "Hey, how is that customer doing?" either for performance or for RPO time.
Zerto also allowed us to easily pick groups of virtual machines, group them as a whole, and have that be segregated from the storage layer. That is the storage-agnostic benefit from their product. That agnostic feature with respect to the storage layer allowed us to group VMs by customer and not only report on RPO by customer, but also to more easily sell different RPO plans. We were able to prioritize and say, "Okay, these 10 customers have platinum and these 500 have silver."
Four years ago when we did a PoC between two other vendors and Zerto, there were two features of Zerto that sold it, hands-down. One was the ease of creating protection groups, the ease with which our engineers could create protection, add virtual machines into the Zerto product, and get them under DR protection. The other products we were looking at required work from two different teams. The storage team had to get involved. With this product, the whole thing could be done by just our virtualization team, and that was a big sell for us.
The second feature that sold us was the sub-second RPO. One of the things that made Zerto's product stand out from some of the more traditional solutions four years ago was its ability to maintain sub-second RPO over a group of machines, and that group of machines could be spread over multiple storage hardware. It was the storage-agnostic features of the product.
The number-one area in which they need to improve their product is what I would call "automatic self-healing." This is related to running them at scale. If you're a small company with 50 VMs, this doesn't really become a problem for you. You don't have 1,000 blades and 1,000 of their VRAs running that you need to keep healthy. But once you get over a certain scale, it becomes a full-time job for someone to keep their products humming. We have 1,000 VRAs and if any one of their VRAs has a problem, goes offline, all of the customer protection groups and all of the customers that are tied to that VRA are not replicating at all. That means the RPO is slipping until somebody makes a manual effort to fix the issue. It has become a full-time job at my company for somebody to keep Zerto running all the time, everywhere, and to keep all the customers up and going.
They desperately need to work self-healing into the core product. If a VRA has a problem, the product needs to be able to take some sort of measure to self-heal from that; to reassign protection. Right now it doesn't do anything in that self-healing area.
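Until that self-healing exists, the monitoring burden can at least be scripted. The ZVM does expose a REST API; the sketch below assumes a session token has already been obtained, and the /v1/vras path, port, and field names are assumptions for illustration, so verify them against your version's API documentation before relying on anything like this:

    # Poll the ZVM REST API for VRAs and flag any that look unhealthy, so an
    # operator can intervene before customer RPOs slip. Endpoint path, port, and
    # field names are assumptions for illustration; adjust to your API version.
    import requests

    ZVM = "https://zvm.example.com:9669"
    HEADERS = {"x-zerto-session": "<session-token-obtained-earlier>"}

    resp = requests.get(f"{ZVM}/v1/vras", headers=HEADERS, verify=False, timeout=30)
    resp.raise_for_status()

    for vra in resp.json():
        # Treat anything not explicitly healthy as needing attention.
        if vra.get("Status") not in (0, "Healthy"):
            print(f"VRA {vra.get('VraName', 'unknown')} on host "
                  f"{vra.get('HostIp', '?')} reports status {vra.get('Status')}")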
My company implemented Zerto in 2016, so we've been live with their product for a little over four years.
The stability comes back to size and scale. It depends. If you are not replicating heavy workloads—meaning you don't have a SQL server that's doing thousands and thousands of IOPS, and you don't have multiple SQL Servers on the same very large hardware blade—Zerto is incredibly stable, based on my experience with the product.
However, we are doing that. There's a one-to-one relationship between the Zerto VRA, which is essentially their chunk of code that does the replication, and a physical server. The physical server is running anywhere from one to as many virtual servers as someone can fit onto it. And that one VRA has to manage and push all the change blocks for all the workloads running on it. So if you've got five or six really heavy workloads running, that one VRA that has to handle all of that and push it to your destination can, and does, crash. VRAs in that situation crash or become unstable. We've worked a lot with Zerto over the last two years on tweaking the VRAs with advanced settings. We've directly been involved with identifying a couple of bugs with the VRAs. When the VRAs are pushed, they can only be pushed so far and then they crash.
It does perform. However, we have VRAs that crash all the time. When we go back and we look at why they crashed it's because we're pushing them too hard. We're doing things that they would say we shouldn't be doing. They would say, "Don't set six SQL Servers on the same blade. That's too much. Don't do that."
Zerto has worked with us very effectively on identifying advanced settings that we can make to the VRAs to make them perform better, and to be more stable in the "abusive" environment that we throw at their code base.
It could be more stable for really heavy use cases like that. But Zerto would come back and say, "Well, our best practices would have you put some sort of anti-affinity rule in place so that you don't end up with that many heavy I/O machines on a single blade." They would say that doing so is not best practice; don't do it. You could say that we abused their product, in that sense.
But it works. If you align with best practices, it's pretty stable.
We have no concerns about the scalability, although I should qualify that statement. Zerto can scale horizontally extremely well. They've got one VRA per blade and that one VRA is their data plane. You can scale out your environment horizontally with as many blades or servers as you want, which is how people do virtualization, and Zerto will scale with you. We've never hit a limit as far as its ability to scale horizontally.
The caveat would be, as I mentioned elsewhere, the size of the pipe in your infrastructure to handle all of that replication. But that doesn't tie to the Zerto product itself.
In terms of the issue of VRAs crashing, you want to scale horizontally rather than scale vertically, because if you scale vertically you're packing more and more virtual machines into the same number of physical servers. You're stacking them up high rather than across. If you stack them up high you have concerns about the scalability of the single VRA. The VRAs do get overloaded. Don't pack them too high. Scale out, not up.
Zerto has spread out as a company. They've mushroomed out into other areas. They've started to develop a presence in backup and they've started to develop a larger presence in reporting. Their core product, however, is known as ZVR—Zerto Virtual Replication. We've implemented that core product 100 percent. There's no other way we could be consuming it differently or more effectively.
The newer stuff they've come out with—certainly the backup—we don't touch at all. The backup product is not ready for prime time. It might be good for a small customer that has 50 machines to back up. But for our use case, with SOC compliance and having to report on the success of backups for recovery, it's not ready for us, although we looked very closely at their backup product and where they were going with it.
They're starting to go into Docker containers. None of our product right now is containerized.
A third area is analytics and reporting. The analytics and reporting would be the one new area that they've put focus on that we could be using more and getting more value out of. They've got a SaaS solution now for reporting called Zerto Analytics. We do use it. You turn on their core product and you tell it to send your reports to their SaaS offering. We've done that and we can consume the analytics product, but we just haven't really operationalized it yet. That, for us, has been a tool looking for a problem.
It took us about two months to deploy the solution, but that was because we're a very conservative company. We purposely went extremely slowly. If we had wanted to go faster, it could have been done probably in a week or two, to get all 6,000 VMs under protection.
When we deployed it, there were two dedicated people at our company who were involved, paired up with three people from a Professional Services team from Zerto. As a tertiary, we had a full-time person from our VAR, the reseller that sold us the licensing for Zerto. With that help from Zerto and the value added reseller, it only took two of us to install it to about 600 blades and probably 5,000 virtual machines.
Our experience was excellent. Both teams were great. It was a very painless rollout.
I'm less involved with the pricing and licensing area now. The last time I was involved was a couple of years ago. In my opinion, their model is somewhat inflexible, especially for their backup product.
One of the reasons why we didn't pursue looking further at their backup product was, simply, licensing. Today we have to buy a Zerto license for every virtual machine that we want protected by their product. We have a lot of virtual machines that aren't production and that don't need to be protected by their product. They don't need sub-second RPOs. They do, however, need to be backed up. But Zerto's licensing model two years ago was, "Well, we don't care that you just need to back up those VMs, and you don't really need to replicate them. It's the same price."
We would have had to double our licensing costs for Zerto to adopt it as a backup solution. It was just not even within the realm of possibility financially. It made no financial sense for us to move off our current backup vendor, and Zerto was rigid about it; they wouldn't diverge from that model in any way.
Their licensing could be less rigid and more open to specific companies' use cases.
The other two vendors we evaluated back then were Site Recovery Manager by VMware, and whatever Veeam's product was at the time. We also looked at CommVault lightly, but they were never considered seriously.
Zerto can do what it says it can do. It can absolutely provide sub-second recovery point objectives, but with a couple of caveats. The caveats tend to apply to large companies like mine, and by "large" I mean if you have over 2,000 to 3,000 virtual machines, versus a small to medium-sized company that maybe has 50 to 500. Once you cross that barrier, you're getting into a larger environment that you're trying to replicate with Zerto.
A couple things can break down. Zerto's product doesn't control the path between your source production data and the destination you're trying to send it to. There can be tons of bottlenecks on that path; you can be going around the world. If the bottleneck doesn't exist there, their product can absolutely do what it says it does. It's up to the customer. The people using Zerto have to understand that they own the bottlenecks in their environment. If there is a bottleneck between production and the targeted DR, the RPOs are going to slip. You're going to go from sub-seconds to minutes or hours. That's not necessarily a fault of Zerto's product. It's the fault of the design of the customer's environment and what they brought it into.
That doesn't just exist for the pipe between the two sites. On the destination side, the side that's receiving this data, the storage layer underneath needs to be more performant than the production side. That's somewhat of a strange concept for a lot of customers and people coming into the Zerto solution. They see the marketing side of, "10 seconds to RPO" and say, "Yeah, I want that." But what it means is that you've really got to look at your hardware and you've got to have class-A hardware the whole way through that Zerto pipeline, for their product to do what it says it does. Zerto makes that very clear. They don't recommend hardware; they're not in the business of supporting other vendors. But they have a published list of best practices. The best practices clearly say everything that I just said. They also have best practices around managing your workload I/O on the source side, so that you don't overwhelm their product.
But not everyone follows best practices. Certainly, when we implemented it we said, "Yeah, we get that. We understand what you're telling us. We understand that's a best practice, but we're not going to do it anyway, because it's too expensive," or we didn't have it in budget for that year. So we knew it and we went in without following them. A couple of years later, when we got to a tipping point, we realized, "Okay, we need to go back and align with some of those best practices," things we didn't think that we had the time to align with back in 2016. We've made that journey painfully with their product, but they were very upfront with us on what the requirements for their product would be.
Overall, I would rate Zerto as a solid eight out of 10 for the core disaster recovery offering.
We needed Zerto in order to provide a disaster recovery solution for the entire organization. We use it to replicate some resources on-prem and for quick recovery. We also use Azure to replicate for disaster. If we ever have a catastrophic failure or attack at our main headquarters, we could failover and run our resources in Azure.
We don't use Zerto for backup, we use Veeam. Once the new long-term retention features are added to Zerto, then we will investigate using it for that and possibly dropping Veeam.
There wasn't anything in place that compares to what we're getting from Zerto. Before Zerto, we didn't have a proper disaster recovery program or application in place. We had a simple backup solution where we could back up our data every 24 hours. So we went from that to being able to recover full systems within a matter of minutes. With Zerto, if we do have an event or disaster, we know that we can recover from that much quicker than we were able to before.
We use Veeam Backup for data and not for replication so this is purely just for disaster recovery and replication. We don't use it for data backup, we're still using Veeam for that.
Zerto definitely decreases the time and people it takes when we need to failback or move workloads. The benefit of using it with the Cloud is that we don't have to maintain extra hardware or an extra infrastructure for disaster recovery. With Zerto and Azure, it can all be done essentially by one person. If we're restoring data and systems from the cloud, it can all be controlled from the Zerto interface, whether it's on-premise or in the Cloud. To move the data back, depending on the size of the disaster, if we were to have to rebuild our hardware on-premise, that would obviously require more people. But if it's just a matter of restoring data from the Cloud, it would only need one person. Whereas before, you could probably still do it with one person, but the amount of time that would take would be a lot longer. We would have had to rebuild servers to restore the data. With Zerto, we can restore entire servers from our Cloud repository and have them up and running; it would just be dependent on the speed of the internet. Zerto could easily save us days of time.
It saves us time in data recovery situations due to ransomware. If we had a ransomware attack, we could have our systems available for investigation and run our environment entirely in Azure, separate from our on-prem network. With Zerto, we could also recover our systems to the point in time before the ransomware attack happened, ensuring that it doesn't happen again. With our resources in the Cloud, we can scan them for infections and pull out anything that's been lying dormant. The big benefit against ransomware is that we can easily just go back in time to the point before the attack.
The ability to do DR in the Cloud rather than in a physical data center has enabled us to save money. It has saved us quite a bit of money by utilizing Cloud resources instead of buying a whole new recovery site on-premise. We did an analysis of the purchase, and one of the reasons why we went with Zerto on Azure is the amount of money that we would save over a five-year period. Based on our analysis, it saved us roughly $25,000 a year.
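As a rough illustration of that kind of cost comparison, here is a minimal sketch in Python. Only the roughly $25,000-a-year savings and the five-year horizon come from the analysis described above; every individual line item in the sketch is a made-up placeholder, not a real figure from the environment.

```python
# Rough cost comparison sketch: DR in the cloud vs. a second physical recovery site.
# Only the ~$25,000/year savings and the five-year horizon come from the analysis above;
# the individual line items below are illustrative placeholders.

YEARS = 5

# Hypothetical annual costs for a second on-premise recovery site
on_prem_dr = {
    "hardware_amortization": 30000,
    "colocation_and_power": 12000,
    "maintenance_contracts": 8000,
}

# Hypothetical annual costs for Zerto replication into Azure
cloud_dr = {
    "zerto_licensing": 15000,
    "azure_storage_and_compute": 10000,
}

annual_on_prem = sum(on_prem_dr.values())   # 50,000 with these placeholders
annual_cloud = sum(cloud_dr.values())       # 25,000 with these placeholders
annual_savings = annual_on_prem - annual_cloud

print(f"Annual savings:    ${annual_savings:,}")          # ~$25,000/year
print(f"Five-year savings: ${annual_savings * YEARS:,}")  # ~$125,000 over 5 years
```

The point of the exercise is simply that the recurring cloud cost stays below the amortized cost of owning and maintaining a second site over the same planning horizon.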
The one-click failover feature is very valuable because of its ease of use, as well as the little to no data loss that comes from its constant replication and journaling technology.
The one-click failover feature is really valuable to us because we need a solution that's easy to use. There's the potential that I or other staff may not be available at the point of the disaster, and somebody who may not know the technology could initiate a failover on our behalf simply by being asked to click a button.
Having little to no loss of data is extra valuable to us. If we were to have a disaster where we needed to initiate a failover for recovery, any data loss would mean lost staff time, and it's also really hard to tell what data is lost and what has to be made up. We have certain resources here that can't afford any sort of downtime or loss of data.
Its journaling technologies are always sending replicated data up so that we can view what the recovery point objectives would be in real-time. We can see it could be a matter of six seconds to a couple of minutes, and that gives us peace of mind that things are moving constantly so that when we do have a failure, we can go back to pretty much any point in time that we want and have our systems available again.
Zerto is very easy to use; the interface makes it really easy. The wizards that are available, the how-to guides, and the support from Zerto have made it really easy to use. With little to no training, we were able to get it up and running in the test environment in under a day. The interface makes it really easy to use day to day, whether setting up new replication jobs or restoring data.
Long-term retention of files is a function that isn't available yet that I'm looking forward to them providing. The long-term retention is the only other thing that I think needs improvement.
We have been using Zerto for around nine months.
Zerto is very stable, we have not had any issues with it so far.
Scalability is fantastic. It can go from a very small number of machines up to a very large number of machines without any issue. We started small, added more and more to it, and I haven't had any issues. We have not had any problems scaling across other sites within the organization and integrating it all together. It's as advertised: it can be in any environment of any size. It scales very well.
Only one or two people are required for the maintenance of this solution. As the manager of technology and infrastructure, I and the system administrators do the maintenance. I mostly work with it. One of my other staff works with it from time to time, but for the most part, it just does its thing and we don't really need to do a whole lot with it.
Zerto is used extensively in my company in the sense that it is our primary disaster recovery solution. It is used for servers throughout the County for all departments. Every system that we have in place relies on Zerto for DR. As servers increase, we will add those servers to Zerto, for disaster recovery purposes. It's completely integrated into our system.
Zerto hasn't reduced the number of staff involved in overall backup and DR management only because we have a small team to begin with. Our infrastructure team that I'm in charge of is only six staff. So DR and backup is one job amongst many, for all the staff here. The amount of time dedicated hasn't changed a whole lot for us.
Their support is fantastic. Anytime we've had an issue, which hasn't been often, they've been very good about resolving it.
We also use Veeam and it is really easy to use too. They're both easy programs to use. If anyone can use Veeam, they can use Zerto. I wouldn't say Zerto's any easier or Veeam's any harder. They do different things; Veeam does backup really well. If you need a backup solution, Veeam is far cheaper, whereas Zerto is fantastic at disaster recovery and replication. When it comes to backup, that's not really what Zerto is made for, although moving forward that may change. Zerto is definitely a much costlier program compared to Veeam, but it does a lot more.
Zerto itself was straightforward to set up. There was good documentation available and we utilized some of their engineering services to help with setup as well. For the size of the product and the complexity of what it can do, the actual setup and operation are quite easy. It took a couple of days, which included getting everything in Azure set up properly.
Our implementation strategy was to build out the on-premise environment with a dedicated network, the virtual machines, and the installation. Azure would then become our disaster recovery site; in the event of a disaster on-premise, we could failover all of the services and servers that we needed to Azure, and our client computers would connect to them in the Cloud while we prepared for recovery on-premise.
We utilized a third-party consultant to assist with setting up our Azure environments and Zerto technicians helped us set up Zerto on Azure. Our experience was really good. There were some challenges and there was lots of learning to do, but overall, the experience was good. The staff from Zerto were exceptionally good. They really know the product well, helped quite a bit, and provided instructions and training on how to use it outside of that.
I think that return on investment will come in the event that we do have a disaster that we need to recover from. We have seen some ROI from Zerto by moving virtual machines between data centers, where it has saved us a lot of time. The technology is useful not only for disaster recovery, but also for server maintenance and moving resources between hosts and environments. Before, it could take hours, even days, to copy virtual machines. We use Zerto to move resources around with little to no downtime in a lot less time, so we were able to save staff time and resources by using Zerto.
Being government, downtime wouldn't have cost us too much in dollar terms. It's hard to equate a lot of downtime to dollars and cents for us because it's more about staff time and convenience. We have long-term care homes that need to be up all the time, and any maintenance windows we usually schedule after hours. So it's more of an inconvenience for IT staff to work overnight instead of during regular business hours.
Zerto is not cheap; however, it is worth the cost. The licensing model is easy. You buy based on the number of virtual machines you want to protect and go from there. Even though it is not a cheap program, you do get what you pay for, and overall it became cheaper than maintaining a separate data center.
We looked at Cohesity, Rubrik, and Commvault. Veeam does replication as well, but it doesn't do it nearly as well. We looked at a few other solutions from Dell. We went with Zerto because it had all the disaster recovery functions that we needed, the ability to recover within minutes with minimal to no data loss, and is integrated well with Azure.
I would recommend doing the free proof of concept exercise with Zerto pre-sales engineers, working with them to discuss your environment, and then reviewing their recommendations on implementation. They also offer free training from time to time, which I highly recommend doing. Get your hands on this software and try it out first before doing the production implementation.
The biggest lesson I have learned is that disaster recovery doesn't have to be hard.
I would rate Zerto a ten out of ten. I don't rate many things a ten, but Zerto was upfront about exactly what it would do, and it's doing exactly what they said it would.
We use Zerto for DR purposes; we replicate what's critical to continue business. We replicate it from our headquarters to a DR site in another state. If something happened to the headquarters where we are located, we could continue running the business from the DR site with Zerto.
It improved the way we function because we know that in the event of a disaster, we can easily recover to our previous state in a rather short amount of time. If there's malware, for example, we can go back to a point in time up to five seconds before the malware started and resume production from that point on, undoing all the damage that the malware did. The ability to do that is a very good feature. It has replication and DR, but at the same time, if malware compromised your backups and compromised your offsite remote copies, you have that third option of saying, "We'll go to Zerto and see if we can reverse back from that point." That's pretty comforting.
If we need to fail back workloads, we would absolutely use Zerto for that. We'd probably do that first before we tried the backups. It's easier to do that than to go looking for the backups.
Zerto decreases the amount of time spent trying to get everything back online. That's the most important part. If something happened, I could just go back. It would take me around 15 minutes to locate something and launch it fairly quickly. If you're doing it from a backup, you're probably going to look for a good backup, launch it, then it's got to set up. So you're looking at an hour, maybe, if you're lucky. It's a big difference.
We once had a server with some software on it, and one of the controllers in the software wouldn't boot up and we did not have a copy. It was a brand new server, so we didn't have a backup copy of it just yet. It was in Zerto, but it wasn't on backups yet; we were replicating it just in case, and we used that to restore the server. I called support, who walked me through it fairly quickly. The whole thing took around 15 to 20 minutes. It was easy. If we had used a different solution, it would have taken us an hour or maybe two. We would have needed to find the backup, mount it, and then walk through a series of wizards. Zerto is always running, so if we need to get something, we just highlight it, go to more, and whatever we want to do is right there. We don't have to turn it on. We don't have to get it going. It's always running. Backups are in a stopped state, so we would have to get them going first, then look for something, then mount it, and then do whatever we're going to do. There are four steps there, versus with Zerto where it's, "Oh, it's in this VPG," go to more, it gives you all the actions you could do (clone, delete, copy) and you just do it.
Zerto has reduced the number of staff involved in data recovery because it's only two people who manage the Zerto platform. It's mostly me. I do about 80% to 90% and then 10% is my supervisor. He's more into the meetings anyway. We require two people.
The replication is the most valuable feature. It's almost like a tape recorder: you can rewind if something bad happens, and your production picks up where the tape left off. You can replay it from whatever point you want for that purpose.
Zerto is very good at providing continuous data protection. For replication purposes, it's definitely better than Veeam. Veeam doesn't do as good a job as Zerto does when it comes to replication. The other alternative would be to just have backups somewhere. But even with backups, you lose a lot of time because you have to set it all up. With Zerto, you failover, you just click a couple of buttons and you run from the other location.
It's very easy to use. Every morning I go into the dashboard and I can tell the health of the VPG groups. If there's a problem or anything, I'll see it in the active alerts, so the dashboard is pretty simple. There's a status that tells you if the RPOs are falling behind what you set them to, and there's a reminder that tells you when to do the failover test. I like that. If I'm going to add a server to a VPG, I just go to the VPG section of the menu, find our group, select it, edit it, and go to VMs, and there it is. It gives it to me side by side: it shows what is unprotected on the left and what is protected on the right. I can move something from an unprotected state to a protected state, or remove something from a protected state to an unprotected state. The bottom of the dashboard always tells me how many licenses I have and how many I'm using, so I don't go over my license count and end up unprotected. It's pretty easy.
We've never had a situation where we had to go to Zerto for downtime. It's just protection, but we haven't had the situation where we've had to failover. Hopefully, we never do. It's like car insurance. You want car insurance, but you don't want to get into an accident.
The improvement that I would like to see is more readily available product knowledge. It's getting a lot better than it was before. It's not as old a product as Cisco's; if you look for something like Cisco routing and networking, you'll find millions of articles out there and it's everywhere. It's prolific. With Zerto, you have to find that knowledge within the Zerto application. Hopefully, as they grow, there will be more of it out on the net. It's the same thing with Microsoft: if you look for a problem with Microsoft, you're going to find millions of articles on it, maybe just because they've been around for so long. I'm hoping that one day Zerto is just as prolific and can be found everywhere.
I have been using Zerto for two years.
Zerto is very stable. There are very few errors; I just don't see any errors. The only time we had an issue with it was with the journals. They were filling up too much, so we called support and they walked us through it. We found out that we were replicating a temporary drive, and it's not good practice to back up temporary drives because they constantly change when it's not even necessary. So we removed the temporary drives and we never had that problem again.
I'm a system administrator.
If you were doing recovery from a backup, you'd probably want at least two people looking through backups to speed up locating them, and you might need somebody to set up a mount server to mount the backups on. You can get away with two people, but you might need three depending on how urgently you want to get going. It depends.
For DR management, if I were with another solution I would require more people. If you have Dell EMC, they have storage administrators whose entire job is storage, so that's a dedicated position. I'm a system administrator, which means that I do the storage and I deal with the servers. Zerto allows me to do a lot of things, not just one thing.
Scalability is excellent. We have one ZVM here and one there, and I know we could add another ZVM, another location, or another DR site if we wanted to. That's been in the talks for a long time. It's on pause now because of COVID-19. But if at some point we decide that we want to add a secondary DR site that is geographically distant from where we are now, then we could replicate to one, two, and three sites at the same time. It's got good potential to be increased.
We have about 20 to 50 servers in Maryland and we replicate all the critical, essential ones that would be required to continue running the business to North Carolina. Everything is virtual, which helps us out. We have one Zerto virtual manager here and a Zerto virtual manager in North Carolina, and we do a failover test every six months just to make sure that it works. As a matter of fact, we have a failover test coming up to make sure that it's continuing to work.
The support is very good. I've used them before. When we had the journal issues, it was easy to resolve the issue. We've done upgrades on the versions and they've always been very responsive. Whether I open a P1, which is critical, or a P4, which is just informational, they respond fairly quickly.
We've used Veeam for backup. Before Veeam, we used Unitrends, which is even worse. It didn't work.
Zerto's good for DR replication. I don't think anybody does it better. Veeam is good for short-term backups. It doesn't do well with the replication part at all - even they say it. I've spoken to reps who agree and say that Zerto's better at it than they are.
The initial setup was pretty straightforward. You set up a ZVM at each site and tell it which direction you want it to replicate in. You create a VPG and the journal is attached to it.
Because we were setting it up in the middle of things, it took around one month.
We did have a strategy for which servers to keep here, which servers to build there, and which IP addresses we were going to assign. We did have some sense of where we were going to put things.
We used the reseller who helped us with the deployment. They were great. It was easy. No problems with it.
We bought it through CTI.
I think we do see ROI. We need a defensive posture to protect ourselves.
Pricing is okay. You don't put all of your servers in Zerto. The purpose of it is that you take what is absolutely critical to continue running your business, whatever servers are in your business continuity plan, and those are the ones that you put in Zerto. Then you'll be fine on licensing. If you just buy 200 or 300 licenses and you're protecting a utility server or any server that's not essential, your bosses are going to think you're spending too much money. But if you zero in on what's critical and license just that, you'll be fine.
There are no additional costs that I'm aware of. We have the licensing fees that come up and then that's it, as far as I know.
We had a couple of proposals. We had one from Veeam, but we realized really quickly that it doesn't work for replication. The other alternative would have been to save the backups to the offsite location, have servers there, and load the backups at that location. That takes a lot of manual labor, so we decided Zerto would be the best option.
We don't know what we're going to do for long-term retention. We use it for DR purposes only. But we are still looking at the long-term retention and what to do with it.
I would say that if you're looking for true DR protection with minimal recovery time, then Zerto is probably going to be the one. If the objective is minimum time to recover, then this is the product you need to buy. If you're willing to spend time setting things up again after a disaster, then there are a lot of other options out there, and for ransomware too. We have about a five-minute window: once data is compromised beyond five minutes, it's useless, so we need to keep the recovery window to about five minutes. Because of that, Zerto is really the product that can do that for us in a cost-effective way at this point.
I like Zerto. You learn different things as you use it more and more, so you become more competent with it as you use it. I know that if you do have an issue, as with most other vendors, the easiest solution is to provide the logs as soon as you can, and then they're better prepared to respond if you do it that way.
I would rate Zerto a nine out of ten. Nothing is perfect.
We use it for DR as well as migration. We have four data centers and migrate workloads between them.
We don't use it for backup.
We had some ransomware that got on and infected the corporate shared drives. It was just one system and one user type of thing. It didn't spread because we had it locked down pretty well. So, I just bumped the server back entirely so we did not have to worry about it.
We have only had one instance of ransomware, and it wasn't widespread. The RPO was approximately 20 minutes. We had an active snapshot from when the incident happened, because we couldn't really pin it down. Therefore, Zerto saved us time in this data recovery situation because I didn't have to rebuild the thing or do a SnapMirror.
If we had used a different solution, it might have taken a week for our data recovery situation instead of 20 minutes, with four or five technical folks (not including management) instead of just me. This is because we didn't have anything documented and just counted on Zerto to do it. I don't know what the company had set up previously, since I'm new, but at the previous place where I experienced malware, you would have to stand everything up from scratch and scrape through all your backups and differentials.
We also use it in the data center in case there is a live event that could cost the company millions of dollars, which I haven't experienced, e.g., if our data center were to explode or get hit by a meteor and cease to exist. We have the option to go in and flip a switch. That has never happened. However, our tests went from a day using SRM to minutes when we switched to Zerto.
The most valuable feature is DR. In my opinion, there is nothing better at what it does.
The solution provides fantastic continuous data protection. We spin up a lot of test environments depending on what happened, then make changes and tear them down. Or, if we get hit with malware, we use it to do a point-in-time recovery. We custom create software in-house, so we will spin up a test environment to test code deployments, or do a copy to do the same thing if we want it to be around longer than a test recovery. For example, somebody got hit with something and infected the server, and we were able to restore it back to a point in time before the infection.
It is super easy to use. A non-technical user can get it up in a day. I can get it up in 15 minutes. I've brought it to help desk guys and network operations center guys, and it's easily grasped.
While I am open to transitioning over to using Zerto for long-term retention, the problem is the alerting function in Zerto is very poor. That makes it a difficult use case to transition over.
The alerting has room for improvement as it is the biggest pain point with the software. It is so bad. It is just general alerting on or off. There are so many emails all the time. You have no control over it, which is terrible. It is the worst part of the entire application. I have voiced this to Zerto hundreds of times for things like feature changes. Apparently, it's coming, but there is nothing concrete as to when you can do it.
Four years.
The stability is fantastic. It has gotten a lot better as far as the maintenance. Initially, it required a lot of prodding and poking. As it sits today, it is really stable, though you sometimes need to mirror the changes in the application to what you have changed in your own infrastructure.
The management, once it is already deployed, is easy. Things can get a little goofy with DRS if you're shuffling things around. If your infrastructure is pretty static, you're not going to have any problems with Zerto. But if you move things around or do any updates, you have to come in and make sure everything is good to go. It is not difficult, but sometimes you are required to go in and maintain it. Because we turn off the alerting in most places, you don't know its status without going in and manually looking.
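Since the built-in alerting is coarse and often kept off, one possible workaround is a small scheduled check against the ZVM REST API. The sketch below is illustrative only: the ZVM address, port, endpoint paths, session header name, and response field names are assumptions and should be verified against the REST API documentation for your Zerto version before relying on anything like this.

```python
# Minimal sketch of a scheduled VPG health check, for environments where the
# built-in email alerting is turned off. The ZVM address, port, endpoint paths,
# and response field names below are assumptions -- verify them against the
# REST API documentation for your Zerto version.
import requests

ZVM = "https://zvm.example.local:9669"   # hypothetical ZVM address and port
USER, PASSWORD = "svc_zerto", "secret"   # use a read-only service account

# Authenticate and grab a session token (header name is an assumption)
resp = requests.post(f"{ZVM}/v1/session/add", auth=(USER, PASSWORD), verify=False)
token = resp.headers["x-zerto-session"]

# Pull all VPGs and flag anything with a slipping RPO
vpgs = requests.get(f"{ZVM}/v1/vpgs",
                    headers={"x-zerto-session": token}, verify=False).json()

RPO_LIMIT_SECONDS = 300  # alert threshold: an assumption, tune to your SLA
for vpg in vpgs:
    name = vpg.get("VpgName", "unknown")
    rpo = vpg.get("ActualRPO", -1)       # seconds, per the assumed schema
    if rpo < 0 or rpo > RPO_LIMIT_SECONDS:
        print(f"WARNING: VPG '{name}' RPO is {rpo}s (limit {RPO_LIMIT_SECONDS}s)")
```

A check like this could be run from cron or a monitoring tool and pointed at whatever notification channel you already use, so status isn't dependent on someone opening the dashboard.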
I am the primary Zerto administrator. Therefore, I own the product for my company and use it every day.
Scalability is great. It will scale essentially one-to-one with your virtual infrastructure. However, if you have more hosts and VMs, then you have to go in and manage that many more hosts and VMs.
Four people know it and use it to do things. I'm the primary, then there is another guy who is the direct backup on my team. Then I have trained a couple other people who know how to utilize it in the event of an emergency, e.g., "This is how you would failover X environment." Because it won't automatically do failovers, somebody has to pull the trigger. Therefore, we have documentation in order to do that. It is very simple.
We don't use it for everything, in either instance where I've implemented it or been in charge of running it. However, it has definitely freed people up to do other things in that space. It only takes me to administer Zerto entirely, instead of a backup and recovery operations team of two or three people.
We are at about 60 percent of use. I would like to see more. We don't do persistent long-term backups or use any of the cloud functionality, though I think we will, as we're in the midst of looking at AWS to potentially migrate workloads there. I am also very interested in using it as cold storage.
Initially, years ago, the technical support was very poor. We were promised one thing that was physically impossible with the software. I spent a lot of time fighting everybody in support. Since then, the support has been really good. In my experience, they are all mostly stateside. They understand the product inside and out to help you with your needs or come up with some type of creative solution.
At my previous company, we were using SRM and our DR tests would take one to two days. For our primary customer, we switched to Zerto, then it took 15 to 20 minutes instead of days. It was a huge difference. That was from Boise to North Carolina, then back. It was approximately 30 terabytes of data with 19 virtual machines. It was a pretty large orchestration.
SRM was replaced by Zerto due to simplicity. SRM is very complicated. It is also not easy to use and set up. Zerto is better for implementation and ease of use. So, it was a no-brainer.
The initial setup was straightforward, though it could be more straightforward. Now, you just install the software on a Windows system. It would be nice if they had an appliance that autodeployed in VMware. That would make it simple. But if you can install Office or any kind of application on Windows, you can do this. It is super easy to set up with minimal front-end learning required.
The deployment takes about an hour for an experienced person. If it is your first time, then it will take a couple hours.
You need to know your use case: the instance where you need something to be protected. Once that need is identified, you need to know where it is and where you want it to go. Once you have those questions answered, the implementation is simple. Through the installation process, you just plug in those values of where it is, what it is, and where you want it to go, and then you're done.
At the company I'm with now and at my previous company, I was the architect and implementer. Zerto generally requires one person for the setup.
The RTO and RPO are unparalleled. In the event you do have an issue, you can be back up and running (depending on the size of your infrastructure) within minutes. Your RTO can be 15 minutes and data loss be five minutes. I don't think that's matched by anybody else in the field.
It has helped decrease the number of people involved in data center moves. For the infrastructure pieces, which are my primary responsibility, I am the sole person, whereas we used to have an OS guy and a network person to manually configure the pieces. We also had application teams, and they are still involved. Previously, it took four people because we were touching each environment and machine, and since we wanted it done fast, we would stack a bunch of people on it. Now, it's just me and it's done faster.
When migrating data centers, we have saved a lot of time on my team. Something that takes an hour or two used to take a week or two.
There is big ROI for ease of use, management, and labor overhead versus other solutions.
Zerto is more expensive than competitors, making the price difference pretty high. While it is very expensive, it's very powerful and good at what it does. The cost is why we are not leveraging it for everything in the organization. If it was dirt cheap, we would have LTR and DR on everything because it would just make sense to use it.
We currently use Veeam and Commvault.
In general, moving VMs site-to-site through VMware is not as easy as with Zerto, because the data has to move in flight and Zerto just sends it over. I like that aspect of it. During our data center moves, we moved from one location to another (San Jose) with a two-hour total downtime from start to finish: from powering the systems down, getting them over, getting the live feed changed, to being back up and running to the world. This would be way slower with a different product.
For long-term retention, we do Veeam to spinning disk. While the LTR is something I am interested in, I think Veeam has the upper hand with alerting and job management. Both Veeam and Zerto are easy to use, but Zerto is easier to use.
I am not a big Commvault fan.
It could replace Veeam and Commvault, but not at its current price point.
Most people assume catastrophic failures have a long-term data impact. However, with Zerto, it doesn't have to be that way. If you spend the money to protect everything, you are going to get that low data recovery time. Whereas, if you are cheap and don't buy Zerto, it's going to be hours to days of data loss. With Zerto, it is in the minutes. Thus, how valuable is your data? That is where the cost justification comes in.
If you are thinking about implementing this type of solution, it comes down to the value of time, money, and data. I can implement Zerto and use it in an emergency situation anywhere. If you're talking to somebody like me who understands data protection and disaster recovery, the question is how much your data is worth to you and how fast you need it back.
Currently, we are doing our own storage as the target for protection, but there is interest in enabling DR in the cloud, e.g., to do Glacier or something cheap in Azure.
I would rate this solution as an eight (out of 10).
We use Zerto for data migrations. We use it to move our virtual machines from one location or data center to another and eventually, we then switch that over to DR from our facility in one state to another. It's for the migration of existing VMs.
Zerto has improved my organization by allowing us to do several VM moves. It simply allows us to bring a server back up on the new side, which looks like a reboot of a server. It's a virtual move to the new site, so it goes from the existing VM host to the new VM host on the other side.
It has reduced downtime for the servers that we migrate over. By how much is hard to put a number on, because we move them in big groups, so we're able to group the move as opposed to doing more one-offs.
How much the downtime would cost my company would strictly depend upon which servers we were moving, because some don't really cost the business anything, while others would cost the business because they have to be up as much as possible, 24/7.
The Move feature is the most valuable feature because it allows us to move the VM from our old environment to our new environment with minimal disruption.
It's extremely easy to use. It's pretty self-explanatory as you run through setting up your VPGs for your protection groups and then to do a migration or a test failover.
Some of the features need improvement. One would be that as you're creating a Move group, or a VPG as they call it, it should either autosave or give you the ability to save it and come back to it later, because if the setup times out, you lose all your work. That would be a nice improvement to have.
We have been using Zerto for three months.
So far, the stability has been great. We have not had any issues with that.
Only two of us work on it and we're both system engineers.
We do not need dedicated staff for deployment and maintenance of the solution.
It's being used to move a total of around a couple of thousand VMs, so I don't have any issues with scalability.
Currently, we aren't planning to expand capacity because we have a total of around 500 agents to protect, so until we get the true DR, we will have to evaluate if we need to expand that. We will primarily only be using it for DR and any server migrations we may need to do from one system to another.
Their technical support has been very good and prompt to get back to us with answers.
We would either use a Veeam or VMware solution, but we haven't had a real DR product outside of Veeam.
We find Zerto to be the most beneficial right now in helping us migrate from one data center to another data center for the testing environment. And for future capabilities, for a true DR scenario.
I would say it's a lot more simple to set up and maintain than VMware and Veeam.
Replacing these legacy solutions has saved us on the costs needed to manage them.
The implementation was really straightforward and easy. We worked with one of their support engineers and we got it up and running really quickly. The deployment took around one hour.
We didn't really have an implementation strategy. It was about getting the server manager and server up and then walking through the installation steps. We followed their guidelines.
I'm not sure if I can put a dollar amount on ROI but the biggest return is time to actually get things set up and then begin to migrate virtual machines over to the new environment.
I'm not 100% sure about the pricing because I wasn't as much part of the pricing part of it, but it fell within our budget. Its features and price are good compared to the options we were looking at.
We also evaluated Rubrik and a solution from Dell. The main advantages that we found were that Zerto fit our current need for migrating from one environment to another better than the others, and that it has good standing in the community among the few products in this space.
My advice would be to plan out your Move groups and work with your business to get everything validated so you can bring everything back up on the other side.
I would rate Zerto a nine out of ten.
We use it for disaster recovery and to migrate machines from one location to another.
The big thing for us was our disaster recovery. At that point, we were only able to do a disaster recovery test once a year. Now, we officially do a disaster recovery test once a quarter and we do a subsequent test once a month to verify that it's doing what it's doing and the IP address is changed. Instead of one mass disaster recovery exercise, we're easily able to perform up to 12 in the year.
It allows us to verify in a much more granular aspect whether our data is being migrated or not. Once a year, if we find some issues, we're at least 11, 12 months behind at that point. Every 30 days, if we do a test and we find an issue, we're able to correct that. The time between tests is shorter, which means that if there is a problem we're able to resolve it in a much shorter amount of time versus an entire year, and then waiting another year to see if everything is working again.
When we need to failback or move workloads Zerto decreases the time it takes and the number of people involved. We are able to put a machine into Zerto, let it do its magic in migrating the data from one side to the other. We've had instances where we've got machines that are four or five terabytes that we can move from one side to another after it's done synchronizing in 15 minutes or less. Sometimes it takes DNS longer to update than it does for us to move the machine.
Instead of me having a server person, a network person, and a storage person, I can put it into Zerto, let Zerto do its job, fail it over, and then just have the application owner verify that the server is up and running, and away we go. So on a weekend, I don't have to engage a team of people, it can just be myself and one other person to verify that the machine is up and running. It really cuts down on overhead for personnel.
In situations of failback or moving workloads, it saves us hours. If I were to move a four or five terabyte machine using something like VMware's virtual copy, it has to install on the machine and copy the data over. Then it has to shut the machine down and do a final copy, which means there's a lot of downtime during that final copy. As far as downtime from an application standpoint goes, with Zerto we're down from hours to minutes, which is great when you have applications that are supposed to have five-nines availability.
We have not had any ransomware issues. But we have had an instance where somebody installed something that messed something up. It was a new version of Java and we were able to roll back. Thankfully they realized it fairly quickly because we only keep a 12-hour window. We were able to roll back to almost a per minute instance prior to that installation and recover the server in minutes. Our backup was as of midnight, but they did it at 8:00 in the morning. So we didn't lose eight hours' worth of processing.
If we were going to use our backup solution, it would have taken minutes to restore the actual server, but then from an SQL perspective, we would have had to roll the transaction logs forward from backups. I couldn't even tell how long that would have taken because we would have had to replay all of the transaction logs, which are taken in five-minute increments, from midnight all the way to 8:00 AM. It would have taken considerably longer using traditional methods versus Zerto.
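For perspective, here is a back-of-the-envelope count of what that traditional restore would involve. The five-minute log interval and the midnight-to-8:00 AM window come from the scenario above; the per-log replay time is a made-up placeholder that would vary with log size and hardware.

```python
# Back-of-the-envelope estimate of the traditional SQL restore described above.
# The log interval and incident window come from the scenario (5-minute logs,
# midnight to 8:00 AM); the per-log replay time is an illustrative placeholder.
LOG_INTERVAL_MIN = 5
WINDOW_HOURS = 8                      # midnight to 8:00 AM
MINUTES_PER_LOG_REPLAY = 2            # assumption, varies with log size and hardware

logs_to_replay = WINDOW_HOURS * 60 // LOG_INTERVAL_MIN    # 96 logs
replay_minutes = logs_to_replay * MINUTES_PER_LOG_REPLAY  # ~192 minutes

print(f"Transaction logs to replay: {logs_to_replay}")
print(f"Estimated replay time: ~{replay_minutes} minutes, plus the base restore")
```

Even with generous assumptions, roughly a hundred log replays on top of the base restore is hours of work, versus rolling back to a journal checkpoint in minutes.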
Although it hasn't reduced the number of staff involved in overall backup and disaster recovery, what it has allowed us to do is actually focus on other things. Since Zerto is doing what it's doing, we're able to not have to stare at it all day every day and make sure that it's working. We have the screen up to make sure there are no errors, but we're able to focus on learning how the APIs work, working on the other products that we own for backup and storage. That's mainly what my group does, we do disaster recovery and storage backups. We have six pieces of our enterprise and before it was just the main piece that we were working on. Now, we're able to actually work with the other five or six entities and start doing their backups and disaster recovery because we have a lot more time.
The failover capabilities are definitely the high spot for us. Previously, when we did disaster recovery it would take us easily a day or two to restore all of our servers. We can do the same thing with Zerto in about an hour and a half.
We're about six or seven seconds behind our production site and it does a really good job of keeping up, making sure that we're up to date. That's one of the other things that we think is just phenomenal about the product, we're able to get in there and put a server in and within usually a few minutes we're protected. Six or seven seconds behind is a pretty good RPO.
Currently, we are using another product for longterm retention, so I don't think we really have any plans on switching over at this point.
Zerto is very easy to use. We did a proof of concept and it took longer to build the Windows servers that had to be installed than it did to actually install it and roll off the product. Our proof of concept became production in minutes.
The interface is the only thing that we've ever really had an issue with. It's gone through some revisions. The UI, it's not clunky, but it's not as streamlined as it could be. Some of the workflow things are not as nice as they could be.
I like the fact that Zerto does what it does and it does it very well. I have had Zerto since version four, so the longterm retention and things like that were never a part of it at that point. I just like the fact that I can install it, I can protect my virtual machines, and I'm comfortable and confident that it's doing things correctly because of the amount of testing that we've done with it.
We have been using Zerto for a little over three years.
It's very stable. Once a month we verify that the internal mechanisms of Zerto are working. When I do a test failover we check if VMware tools come up, if the IP addresses change, and the things that Zerto is configured to do automatically. Usually, if there's an issue, it's either I did something wrong when I configured it, or I put in the wrong IP address or the VM itself has an issue, the tools aren't loading correctly or at all, or it was trying to do an upgrade and failed. We've actually been able to identify other issues inside the environment that we would not have realized were an issue by doing these tests.
Our next step is not so much increasing the capacity but protecting things to the cloud. We'd like to be able to take those same 350 or so machines that we have, or at least the important 50 if not all of them, and have them not only go to our disaster recovery site but also split to AWS. We would then have both sites, one in one location and one in a vastly different location, and if for some reason one were to go offline, we would have those objects in AWS to spin up and do what we need to do.
We ramped up from that 50 to 350 within a year and Zerto just took it and kept on running. We still have about the same RPO as we had before; we're protecting 60-plus terabytes of data at this point with those 350. It did what it had to do to create new virtual machines, depending on how many disks there are. It was able to scale with our needs really easily. 73 terabytes are what we're protecting right now across 357 VMs, and we have a seven-second RPO. It went from a small number to a very large number. The issues that we've had around Zerto protection have either been that the networking wasn't sufficient or that the storage itself had to be increased.
There are three of us who work with Zerto, that's it. We do contact other teams, often our networking team to get an IP address for something. But when it comes to doing the testing, when it comes to doing the implementation, and when it comes to doing verification processes, it's all my team of three people.
I am the data management supervisor and then I have a lead storage administrator and a senior storage administrator.
Prior to Zerto becoming our disaster recovery product, we were using Dell EMC's Avamar for backup recovery and for disaster recovery, which we quickly realized was not going to work out very well. We used it for about four or five years. When your disaster recovery test is five days and you take one and a half to two days to do restores only, that doesn't leave a lot of time for testing. Now, we're able to do the restore in an hour and a half. Then we actually can start testing the exact same day that we did the restores. In most instances, we're able to actually finish everything within 24 hours.
When we first purchased it, the backup portion did not exist. So having backup and DR in one platform really wasn't that important to us. We use Rubrik for backups and longterm retention at this point. We really don't have any intent on using Zerto for longterm retention, as we're extremely happy with Rubrik. But time will tell if we decide to switch over to the LTR portion of the product.
Compared to Avamar, Zerto is extremely easy to use. I can bring Zerto up and start recovering, failing over, or testing machines before I can even log into Avamar. Avamar was very clunky from an interface standpoint. It's very easy in Zerto to go in and recover a machine to a certain point in time, whereas moving around in Avamar, since it was Java based, would take quite a long time to get from screen to screen, and the workflow was not user friendly at all.
We have different use cases for Zerto and Rubrik but I think that the interface and functionality, as far as what I get out of that particular product, what its purpose is, they're both about on par. Honestly, we've told both companies before, we would love for one to buy the other so that we can get the best of the disaster recovery with Zerto and the best of backup and recovery, longterm retention type things with Rubrik. Because they definitely are probably the two best products for their market segment.
Replacing Avamar has saved us on the cost needed to manage it. As far as management goes, we still use the same three people. But as far as renewal maintenance costs go, definitely. Dell EMC is very proud of their products, and their renewal maintenance costs were rather large compared to what we pay with Zerto.
Initially, we saved about $1,000,000 three years ago by switching to Zerto. Zerto and Rubrik replaced Avamar. But buying both products together, versus what the renewal/upgrade costs would have cost us for Avamar, with all the hardware, was a savings of $1,000,000.
The initial setup is very straightforward. I built a couple of virtual machines to run the manager on, deployed some VRAs, and then attached it to VMware and checked what I wanted to protect. We probably had it up and running in about an hour total. Then we tested protecting some machines, and we had some test boxes that we tested back and forth. It was a very easy setup. People are definitely sold about how easy it is to install and configure.
Initially, our deployment strategy was to protect a small subset of very important machines for an enterprise. And then once we saw how easy it was to implement, how easy it was to put things in there, and how easy it was to protect them, it went from a handful of machines to 350 or so. The initial intent was to protect a very small number. That went from that to a very large number very quickly. Zerto was able to handle it no problem. We actually had to end up buying more storage on the target side because we had not planned on doing that many machines from the initial implementation.
We worked with our account team. We were able to get the proof of concept software, a link to download it. They gave us a key, they gave us a little Excel sheet stating how many machines and IP addresses we needed. Then they basically sat on the phone with us for the hour with WebEx. And we set it up just that moment. That's really the only implementation help that we've ever gotten from them. Everything else has just been pretty much us on our own.
Their support has been very, very good. We've had some technical issues that we've been able to work through with them. Nothing major, but if I have a question or if we run into an issue, we're able to either open up a support ticket and they respond fairly quickly, or we are able to do some searching in their knowledge base. We've had an instance where we did the upgrade to a new version and it caused some problems. But within, I'd say a few hours, we were able to correct it because they had already experienced that. And they had that logged in their internal database of issues. So, they were able to log in, and give us the fix that we needed and get us back on track.
It definitely is a very robust product. The feature set from 4.0 and 4.5 to now has increased greatly. Even though we're not using them, we like the fact that as long as I pay my maintenance, the new features that come out, like longterm retention, analytics, monitoring, and reporting, the things that were not there when we first purchased it but are there now, are all part of maintenance. It's not a bolt-on price; they don't charge extra. That was one of the things with Dell EMC that was always a pain: they had additional costs. With Zerto it's like, "You paid your maintenance, here's a new feature, enjoy!"
They have licensing breaks at tiers of 50 VMs, 100 VMs, and 250 VMs. We ended up with a bunch of 50-packs at first, and all of our maintenance renewal dates were different. It ended up costing us more because we didn't just make the investment up front to say that we wanted 250. We had to go back and reset all of our maintenance dates to the same date, which was just a nightmare for our maintenance renewal person. If you did a proof of concept and you like it, definitely make the license investment upfront. That way, you're not trying to piecemeal it afterwards.
Licensing is all-inclusive, there are no hidden fees.
We looked at RecoverPoint for VMs. A long time ago, one of the companies inside this enterprise had used RecoverPoint and it worked really well when it was the physical RecoverPoint. But as things became more virtual, it no longer was as good as it had been, so they had discontinued it. RecoverPoint for VMs was definitely not as easy to set up. It was not as easy to use. It took a lot more resources. This is three-year-old information, but I feel like we would have had to have had more people on our team than we do now with just the three of us. We didn't feel like it was as stable. It certainly wasn't as easy to use, test, or get to work as Zerto was.
My advice would be to do the proof of concept. They're very willing to help you with the installation. Do a proof of concept. If you're not amazed by it, I would be surprised. Everybody that we've ever talked to about this and have done a test of it says, "I can't believe it's just that easy."
I would rate Zerto a ten out of ten.
We have servers in Houston and we have servers at a DR site, we need to be able to make sure that they're replicated in some form or fashion. That's what we use Zerto for, to replicate between our primary site and our DR site.
The biggest improvement for us was going from a possible 24-hour lag on our backups to real-time lag. With the hurricanes here in Houston, buildings losing power, and so on, it was nice having the ability to just go flip a switch and we're live with current data as opposed to we're live with what happened yesterday.
Zerto has helped to decrease the number of people involved when we need to failback workloads. It's a much smaller number. It's time-consuming because of the way it works, but it's not overbearing. Instead of taking the better part of a day or two to get everything up and running, it really only takes us three or four hours. It has also decreased the number of people we need. It used to take three or four of us to bring up servers, make sure they're all running, test them, and all that. Now, it takes one person to bring them all up and then a couple of us to test it, so we need half or less of what it used to take.
We've never had a ransomware issue. The reason for our failovers has typically been natural disasters.
Pretty much all of the features are valuable. The biggest thing we use it for is replication, so the ability to set up our virtual server, set it to replicate, and Zerto handling everything else is the biggest feature that we like.
The continuous data protection is great. We love it because we can see exactly how many seconds behind real-time we are, which is usually under 10 seconds. It keeps things up to date. We love the product.
We currently don't use it for long-term retention. It's something we may look at in the future, but that's not the product we're using for that.
Zerto is very easy to use once everything's set up, which isn't difficult. It takes a little bit of time to make sure all the network stuff is all set up properly, but once everything's set up, using it day to day is very simple.
Zerto has not saved us money by enabling us to do DR in the cloud rather than in a physical data center, because our DR is to a physical data center. We don't put our data in the cloud.
For what we got it for, it does it great. I use a different solution for my disk-to-disk local backups, where I can have a local backup of files. I don't think Zerto does that well; it doesn't keep a history of the files that are there. Basically, when something is deleted on the source, it gets deleted on the replicated version. So, some sort of snapshotting where I could have backups of files at different points in time would be a really helpful tool.
We have been using Zerto for five years.
Stability is great. The only downtime we have is during upgrades and patches. I really haven't had any problems with the platform or stability.
The time it takes to update or patch depends on the size of the patch. Major upgrades take a little bit longer, but I mean, it's typically a couple of hours at the most. It's not a huge thing.
Scalability has been great. It continues to grow as we grow. I haven't had any problems with it.
Zerto is being used 100% across our environment.
We've got about 11 servers doing backups in the 20 to 25 terabyte range most of the time.
Only I work with Zerto in my company.
The two times that we've contacted technical support, we didn't have any problems. They've been helpful. They made sure we got the issue resolved and did very well with it.
We previously used Veeam. We switched because of real-time backups. Veeam was a point-in-time backup. We said, "You're going to back up at this time." It took a snapshot and backed it up. Zerto just continually backs it up and makes sure that we're currently up to date and matching the server at your primary.
We use Zerto primarily for disaster recovery to the DR site. We still use Veeam for our local disk-to-disk file backups.
Once Zerto is set up and running, it is much more hands-off. You don't really have to do anything; you just log in to check that everything's going well and you're pretty much done. With Veeam, I feel like I have to check in a little more often to make sure the backups are running properly, all the files are there, and everything like that. There is a little more checking to do on a regular basis.
I don't know if we would have failed over with Veeam, because of the amount of time it would have taken to fail over and come back online at the primary site. If we hadn't failed over, that would have been probably five or six days of downtime. If we had failed over, we'd probably have lost two or three days in one direction, and another two or three days coming back to the primary.
The initial setup was pretty straightforward. It took a little while to make sure we had everything connected right, and that it was going to the right place, but it's no more difficult than any other setup for something like this. I didn't find it difficult at all.
If you don't include seeding and you only include the setup and deployment, it only took us a day or two of planning and then another day of actually implementing it. The seeding took a while, but that's to be expected.
In terms of our implementation strategy, we were using a different product back then, which wasn't as up to date and live. We were just backing up at night, so we had a nightly snapshot that was being transferred to our DR site. Our strategy with Zerto was to get us to more of a real-time backup solution at the DR site and make sure everything was good. That was the entire purpose of going with Zerto.
We used a third-party integrator, Centre Technologies, for the deployment, and they were great. We've used them for other work and have never had any problems with them.
The one time we had to fail over and run at the DR site, instead of having two or three days of downtime, we really had less than one day of downtime. If you measure that in how much money we were able to make that day, it's around $200,000 to $300,000.
We are on the lowest license because we don't exceed the number of servers for the base license, so I don't have a lot of information about licensing. The price of it was comparable, if not better than what we were paying for Veeam. I have no problem with the pricing at all.
There are no additional costs to the standard licensing.
Make sure you know how long it's going to take to do your initial seeding. If you've got a lot of data, and you're doing it over a pretty good distance, just make sure your pipe is big enough for the initial seeding. Once the seeding's done, pipe size matters much less, but the initial seeding can take a good amount of time over a small-ish pipe if you're replicating a lot of data.
Seeding our largest servers can take a full week to 10 days per server. Our large file server is about seven terabytes, but we don't have a huge pipe at our DR site. We negotiated to increase the pipe size temporarily while we were doing the seeding, and that drastically reduced how long it took to seed. I can't really give a single number to look for. I would just have that conversation with Zerto about how long seeding is going to take with whatever pipe size you have, based on the amount of data you have.
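As a rough way to frame that conversation, seed time is essentially data volume divided by the sustained throughput of the link. The sketch below is purely illustrative; the data size, link speed, and utilization figure are assumptions for the example, not numbers from this environment.

```python
# Back-of-the-envelope seeding-time estimate (all figures are hypothetical).
data_tb = 7                   # data to seed, in terabytes
link_mbps = 200               # rated link speed, in megabits per second
efficiency = 0.7              # assume ~70% sustained utilization of the pipe

data_bits = data_tb * 8 * 10**12               # terabytes -> bits
effective_bps = link_mbps * 10**6 * efficiency
seconds = data_bits / effective_bps
print(f"~{seconds / 86400:.1f} days to seed {data_tb} TB at {link_mbps} Mbps")
# ~4.6 days with these assumptions; halve the pipe and it roughly doubles.
```

Swapping in your own data size and pipe speed gives a ballpark to sanity-check against what Zerto quotes you, and it shows why temporarily increasing the pipe during seeding pays off.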
Make sure all of your notifications are set up well for when something fails. It takes a little tweaking to get everything configured right, but if you want to be notified when you get outside your SLA on how long the backups are trailing, making sure all of that is set up properly is key.
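To illustrate the kind of SLA check that advice implies, here is a minimal sketch of an out-of-band notification script. The SLA threshold, SMTP relay, email addresses, and per-VPG RPO figures are all hypothetical, and feeding it live RPO data is left to whatever monitoring you already have; this is not Zerto's built-in alerting.

```python
# Minimal out-of-band RPO/SLA check (hypothetical values throughout).
import smtplib
from email.message import EmailMessage

RPO_SLA_SECONDS = 300           # alert if a VPG trails production by > 5 minutes
SMTP_HOST = "smtp.example.com"  # hypothetical mail relay

current_rpo = {                 # seconds behind production, per VPG (sample data)
    "vpg-erp": 8,
    "vpg-fileserver": 742,
}

breaches = {name: rpo for name, rpo in current_rpo.items() if rpo > RPO_SLA_SECONDS}

if breaches:
    msg = EmailMessage()
    msg["Subject"] = f"RPO SLA breach: {', '.join(breaches)}"
    msg["From"] = "dr-monitor@example.com"
    msg["To"] = "ops@example.com"
    msg.set_content("\n".join(f"{name}: {rpo}s behind (SLA {RPO_SLA_SECONDS}s)"
                              for name, rpo in breaches.items()))
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)
```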
I would rate Zerto a nine out of ten.
We do a semiannual disaster recovery test, usually one in January and another in September, where we fail our entire company over to our Arizona DR facility. We run the business out of the Arizona location for the day. In order to be able to do that, the Zerto application allows us to migrate 58 machines over to that location and allows us to run our business from that location for the course of the day.
We are able to have a successful disaster recovery solution through using Zerto for our Disaster Recovery drills. We are able to fail over anytime, day or night, to run our applications out of our Arizona facility. Within a 15 or 20 minute time frame, we can have those application servers up and running in Arizona. It is just a huge help to have a successful, reliable disaster recovery solution that we know at any point in time, within 15 or 20 minutes, can be running out of a different location.
Most of the time, this is at least a two-person job. Previously, a disaster recovery drill would take two of us working three or four hours just to get applications up and running in Arizona. Now, I'm able to finish my work in about 30 minutes and be available onsite to help and assist anybody else as needed during the drill. Its ease of use and the ability to have a reliable disaster recovery solution have become invaluable to us.
There is built-in activity logging, if needed, for a longer retention period. If we fail a machine over just to test it, we can fail it right back at the end of the failover without much issue. We couldn't do that with SRM. We can keep track, within the activity log, of what is going on with the VM and then fail it back prior to the one-hour time frame that we have set up, without having to worry about losing data during our tests or production failover drills.
The product is very easy to use. On a scale of one to 10, I'd say it's a nine as far as ease of use goes. In order to do an update in our old product (SRM), we basically had to take down almost our entire vCenter to be able to do the updates. Whereas, I can do updates to our Zerto product within 30 minutes to both our ZVMs in Massachusetts and Arizona. We haven't had problems troubleshooting after doing upgrades. Within five minutes, we can configure a whole new cluster solution and work on getting it synced out to Arizona.
It transfers files up to the minute. Therefore, if something were to happen and the business were to go down in Massachusetts due to a server failure, we could simply fire up those VMs in Arizona within approximately five minutes. The data protection level is top-notch. We haven't lost any machines, data, or VMs in the course of using this product.
The alerting doesn't quite give you the information about what exactly is going on when an issue comes up. We do get alerts inside of our vCenter, but the error messages aren't detailed enough to tell us what's going on without actually having to log in to Zerto to determine what's causing the issue.
Another issue with the alerting is that it will pause a job. For example, if we have something running from Massachusetts to Arizona but a VM has been removed, updated, or moved to a new location in vCenter, it pauses the VPG the VM resides in but never gives us a notification that it's been paused. Therefore, if we had an issue during the course of the day, such as a power event, and we needed access to those VMs in some sort of catastrophe, we wouldn't be able to get to them because that job was paused and we were never notified about it. It would be a big problem if the VM needed to be recovered and we didn't have those resources available.
It would be great to get more precise alerting to be able to allow us to troubleshoot a bit better. Or have the application at least give us a heads up, "A VPG job has been paused." Right now, it's sort of a manual process that we have to monitor ourselves, which is not a great way to do things if you have a superior disaster recovery solution.
I have been using Zerto for almost two years.
The stability is rock-solid. Nothing has gone down since we installed it; there has been no downtime.
Typically, once a quarter, we have an update. Last year we were at version 7.5, and we recently updated to 8.0. On top of that, they release security patches and other fixes for bugs they find in the program. Right now, there is a U4 version out, which we will be updating to this quarter.
In the U4 version, there are security enhancements because of the zero-day issues being found in a lot of applications. Zerto is making more security modifications and enhancements to the encryption between one location and another, so somebody can't hack your data and access it while it's in transit.
Scalability is very easy. We are going through a POC right now because we want to branch out to the cloud. Just getting that set up and going through the process was about 60 minutes.
It's very scalable and extendable. We can do one to many solutions, as far as where our disaster recovery is going. This is what we wanted. We would never have been able to do that with our SRM product.
There are two engineers trained to use the product. I'm the primary contact for the application and do most of the work on the product. One of the storage guys handles a lot of the storage set up on the back-end with me. We have at least two people trained on each application that we have in-house. Both of us are in charge of making sure the application is up-to-date and doing what it's supposed to be doing.
Zerto's technical support is very good. They are very reliable and always very pleasant to deal with. We've never had an issue working with them. They usually come back with a precise solution to whatever we are troubleshooting.
Our issues are usually self-inflicted. For example, we remove a host from the cluster to upgrade it or do something else with it and don't follow the correct procedure needed to shut down the Zerto appliance correctly. If somebody doesn't follow that procedure, because they either don't know how, weren't aware of it, or just skipped that step, it causes problems inside of Zerto. Jobs get paused and the VPG is no longer accessible on that host. Sometimes it's easy to get it back up and running again. Usually, when you put a new piece of hardware in the cluster that has a different set of parameters, the appliance will be missing because it was taken out with the old hardware. In that case, you need to get their technical support involved to troubleshoot the issue with them and get the VPG back online on the new hardware. As I said, it's self-inflicted most of the time because steps are missed in our processes.
The documentation that we got from them is in-depth and works well when needed; if you follow it correctly, you will have success. If you don't follow the steps, that's when problems develop. Therefore, it's not a fault in their documentation, it's a fault of the user who's not following the proper steps for success. It doesn't happen often; I think we have contacted technical support only three times in the two years that we've had the product.
For eight years prior to using Zerto, we used a product called SRM, which is part of VMware. We finally switched over to Zerto after having them come in and do a presentation for us. That was after trying for about a year to convince our vice president to allow us to migrate to a different platform.
The reason we used SRM was that it was built into our VMware vCenter licensing. We never had a successful DR test during the previous couple of years with SRM. By switching over to Zerto a year and a half ago, we were able to run a successful disaster recovery test within three months of switching. We had our first successful disaster recovery test in two and a half years because Zerto made our lives so much easier and helped us get servers over to a new location almost seamlessly.
In order to be able to have a successful disaster recovery, we need to be able to successfully migrate 58 servers from our Massachusetts location to Arizona. On previous attempts, we got about half the stuff over there, then we'd fail. In other scenarios we would get everything over there but some of the machines wouldn't come up because of the way they were configured. One time, the business was down for about half the morning because it took us that long to get the stuff back up and running using SRM. This was a real pain point for us, getting this product in place and working successfully. It took Zerto to be able to finally get us to do that. It's been a lifesaver. All we had with SRM was nothing but headaches.
The initial setup was very straightforward. We had everything running in half an hour. It got deployed with two virtual machines (ZVMs): One got deployed in Massachusetts and another in our Arizona location. From there, we deploy appliances to each one of the hosts that's inside of the clusters that we are managing for our disaster recovery solution.
Within 30 minutes, we had it deployed to our entire production cluster and the hosts in it. After that, we just started creating jobs, which took quite a while because we have a lot of large servers. However, that's not down to the Zerto application, but to the size of the VMs we have in production.
For our implementation strategy, we just mimicked what we had in place for our SRM environment. Our 58 machines are spread across different clusters: some in our DMZ, some in our prod and some in our WebSphere clusters. After that, we ran two tests to ensure that we were able to fail over to our Arizona location then fail back without any changes or modifications to the VMs. Once we did that, we started rolling out to each of the clusters, one Virtual Protection Group (VPG) at a time. I think we now have 23 VPGs total.
We worked with an outside vendor, Daymark; we do a lot of our work through outside vendors, and they work with Zerto directly. When we set it up originally, we had a Zerto technician on the call as well as a Daymark technician on-site working with us.
Our experience with Daymark has been very good. We love working with them and try to use them for our integration and infrastructure work. They are a very good company that are easy to deal with. We try to use them as much as we can. Thanks to Rick and Matt for a great working relationship.
We have seen huge ROI.
It used to be a three-person job, and now it only takes one person to manage and run the process. The fall back is the same thing. We've never had any issues with stuff coming back out of Arizona to our Massachusetts location. Within 15 to 20 minutes, we can have our servers successfully migrated back, then up and running just as they were originally without having too many conflicts or configuration issues.
The solution has helped us reduce downtime, in every disaster recovery situation we have come across thus far, at about a 4:1 ratio.
We are an insurance company; therefore, if we're down for an hour, thousands of dollars are lost. For example, people can't pay their insurance bills, open new policies, or get the support they need for an accident.
These things have been invaluable to us: the reduced staffing, the faster failover and failback, and the reduced downtime.
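To put the 4:1 downtime reduction and the cost of an hour of outage together, here is a back-of-the-envelope illustration; both figures in the sketch are hypothetical placeholders, not this reviewer's actual numbers.

```python
# Rough value of a 4:1 downtime reduction (hypothetical figures).
cost_per_hour = 5_000          # assumed revenue impact per hour of outage
old_downtime_hours = 4         # assumed downtime per incident before Zerto

new_downtime_hours = old_downtime_hours / 4              # the 4:1 reduction
savings = (old_downtime_hours - new_downtime_hours) * cost_per_hour
print(f"~${savings:,.0f} avoided per incident")           # ~$15,000 here
```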
The pricing is very equitable; otherwise, we wouldn't do it. We are licensed per host used, so it's very cost-efficient as far as the licensing goes. For the amount of stuff that we have configured and what we're utilizing it for, the licensing is not very expensive at all.
There is an upfront cost for maintenance and support. We have a three-year contract that we will have to renew when those three years are up. There is also licensing on top of that for whatever product you are using, depending on the host configurations.
Right now, we use Veritas. We will be evaluating Veeam and Rubrik as a new solution for our backups in the next quarter or so, on top of the fact that we may decide to use Zerto. The three of them are in the mix right now for when we decide to switch over vendors for a better backup solution.
Zerto gives you the ability to utilize it as a backup solution, but it's not a true backup solution because it can't do file level backups. If you want a particular file off of a server, it can't do that for you. What it can do is give you the whole server, then you need to go back and pull that file off it. Mainly for that reason, we haven't chosen to use Zerto and may never use Zerto as our backup solution. The other solutions allow us to get a file level backup.
Don't hesitate. Go out and do it now. Don't wait two years like we did. Push harder in order to be able to get the solution in place, especially since we know it will work better for you. Don't just take, "No," for an answer from senior management.
The application is phenomenal. They continually add new things, more plugins, and modifications to the way things work. It just gets better as they go.
We don't plan to use the solution for long-term retention at this time, but we are looking at going into a hybrid cloud solution in the near future, and we may use long-term retention to make a duplicate copy of everything we have in our Massachusetts data center in a cloud solution, whether that's an Azure or Amazon location.
While I can't really speak to whether it would allow us to do it, the application is set up to create a duplicate of the actual servers in Arizona. That's how it works so quickly. If we ever had a problem, I could always revert back from the duplicates that we have out in Arizona using the application, if necessary. Luckily, we haven't had a need for that, and hopefully never do.
I would rate this solution as a nine (out of 10).
We use Zerto for disaster recovery data replication from our headquarters to an offsite data center at another location.
It has replaced all of my legacy backup solutions.
The real-time replication of data is the most valuable feature. It is a vast improvement over scheduled daily backups. Real-time data is streamed to the offsite data center, which allows us to restore our mission-critical applications to within 10 seconds of when the last changes were made in our system. If we enter a sales order or any kind of information in our ERP application, it is replicated within 10 seconds to the offsite location; right now, if I look at it, the lag is about five seconds. If we were to have a disaster, we would not only have current data, but we'd also be up and running within hours at our offsite data center, rather than days with a tape backup solution.
We have begun using it for long-term retention. We also replicate our file server, which has archive or historical data that we have to restore occasionally, and restoring from long-term retention is applicable to those types of scenarios, versus the streaming of real-time data. Long-term retention allows us to restore from further back in time; real-time is more for recent changes to the data.
It provides continuous data protection. It has been extremely effective. I've done failover testing, and the data is accurate and current. It works.
In terms of ease of use, Zerto is very intuitive. The graphical user interface, whether for monitoring VPG replication and long-term retention success, configuring VPGs for long-term retention, or using the analytics feature, is intuitive and allows you to analyze essentially any changes to your environment. All of that requires some training but is not incredibly complex. It's presented in a very easy-to-use format.
Zerto dramatically decreases the amount of time it takes to do a failover. I can essentially do it all by myself and I'm one person, I don't really need help. It allows me to restore our environment fully in a matter of seconds, literally. I can do that on my own from my desk very easily and with no outside help.
Compared to other products, I would praise the intuitiveness of the product. While the graphical user interface is very solid and I don't have issues navigating it, I would say that it can always be improved.
I have been using Zerto for around three years.
The stability is very solid. It just runs. It has not crashed or had issues. So long as you stay on top of the versions of the application and you have it installed on reliable hardware, you're going to be just fine.
It can scale into the cloud. I know it has that capability, but I have not done that yet.
It's essentially just me. I have one junior person who uses the application, but it's mostly myself.
It's used for all of our mission-critical servers. Not every single one of our servers, but probably about a third of our total servers.
I do not have plans to increase usage.
The tech support is top-notch. I have an engineer who I work with on a regular basis that communicates with me anytime there is an issue. He has worked side by side with me on any issues, questions, and implementations that I have wanted to accomplish. They by far go above and beyond more than any of my other vendors and I have quite a few so that says a lot about them.
We previously used Asigra. We switched because of the cost, limitations, and complexity.
When we decided to go with Zerto, it was imperative that it provided both backup and DR in one platform. Granted, we didn't take advantage of it for a while but that's entirely my own fault. It was very important to have that functionality.
It was initially set up by a third party. But since then, I've had to re-set it up and it was pretty easy. It wasn't very complicated. It was quick. There were instructions that we followed pretty closely and there were no issues, so it was straightforward. There were a handful of steps, but nothing overly complex. The deployment took around 30 to 45 minutes.
We haven't had a need to use it in an actual live disaster scenario, but we have that capability, which we did not before. But if we had to use it, it would save us a tremendous amount of money. Tremendous.
There are no costs in addition to the standard licensing fees.
We also evaluated Veeam.
It has not saved us time in a data recovery situation due to ransomware, simply because we thankfully haven't had any such issues. I've done some testing and, in those types of situations, it would be greatly beneficial. But I have not had any of those situations so far.
At this time it has not helped to reduce downtime in any situation.
We don't have it replicated in the cloud at this time, so it has not saved us money by enabling us to do DR in the cloud rather than in a physical data center.
I would recommend Zerto to anybody considering it.
My advice would be to make sure that after implementing the product, go through and accomplish the training labs so you know how to use a product really well, develop a disaster recovery plan in the event that you should need to use the product, and work closely with your Zerto engineer to ensure that the implementation fits your business needs.
The biggest lesson I have learned is how valuable real-time replication of data is in the event of a disaster. It has the potential to save the company many days' worth of lost business.
If I could rate it an 11 (out of 10), I would. But we'll go with 10.
We're backing up VMs with it. Our company has about 200 VMs and we're using Zerto on 30 of them in the main line of business applications. We're using it to replicate all that data over to our DR site so we can do our testing and reporting against that.
Within those 30 servers we've broken out into three different SLAs on which ones get spun up first. We have it all scripted with monthly plans to fail over, spin it up, actually use it over there, spin it down, bring it back into production, etc.
The business that we're in means we have to run our network 365 days, 24/7, with no downtime. If there's any kind of interruption to business processes — power outage, tornado, fire, etc. — we need to be able to get certain systems up and going in almost real-time. That's how we're leveraging Zerto, to guarantee that uptime and for the ability to spin these things up near-instantaneously.
I know my networking team loves the tool and the interface and being able to roll back and do the failover stuff very easily. But for me, personally, it's how it has impacted our business. The reporting functionality showing that our DR plan is rock-solid and stable, and my ability to generate summaries for our customers, have really improved business processes for us. It gives peace of mind to our customers that our systems are stable and the services that we're providing are stable.
Also, when we need to failback or move workloads, Zerto decreases the time it takes and the number of people involved. The failback feature, from a technical standpoint, is what sold us on Zerto. One of the challenges we had with Site Recovery Manager was spinning up and being in production at DR. If everything is equal, everything is patched and everything's working, both solutions offer a very similar experience: the ability to move a workload from production to disaster recovery works with both of them, no problem. Coming back the other way was just a bear of a move with Site Recovery Manager. With Zerto, it's almost seamless. With Zerto, it takes about four or five mouse clicks and stuff fails back over, and our end-users are none the wiser. And it's just one guy doing it. When failing back from Site Recovery Manager, we'd have to get one of our sys admins involved and we'd have to let our end-users know that they all had to log out.
While it hasn't reduced staff, we have become more efficient and it has allowed me to reprioritize some projects. It's freed up some capacity, for sure. We haven't reduced headcount, but it has definitely taken a big wedge out of the daily grind of our backup and recovery; the stuff they always had to check.
Personally, what I find valuable is the executive summary that says our DR plan is operational. I can then pass that out to our customers.
Per Mar has about 75,000 customers and, more and more these days, especially given all this [COVID] pandemic, we're asked: Do you have a business continuity plan? Is it tested regularly? Do you have documentation for it? Two years ago, a simple email from me saying, "Yes, we have this," sufficed. We're finding now that people want true documentation from an independent system that generates a report. The reporting that comes out of Zerto is a lifesaver for me. I'm able to generate that up, send it out to the customers that need it, and say, "Yes. Here are our SLAs. Here is our monthly test routine. Here is where it shows us being successful," and so forth.
We are doing continuous data protection. It works flawlessly. Our recovery points are measured in seconds. We have all these "baby snapshots" throughout the course of the day, so we can roll a VM back to any point in time, spin it up, and away we go. We're actively using that. It works great.
It's easy to use and there isn't a huge learning curve. Even some of the advanced features are very intuitive to folks who have been in this space before. If you have any kind of skill sets around any kind of backup and recovery tool, the user interface for Zerto is very natural.
One thing I would like to see, and I know that this is on their roadmap, is the ability to use long-term storage in the cloud, like in Azure or AWS, making that even more seamless. Whether it's stored in glacier or on-prem, being able to retrieve that data in a quick manner would be helpful. They're just not there yet.
I've been using Zerto for about a year.
It just works. We architected it pretty nicely. One of our licensed servers is a complete test solution for us to show that it is truly working. We're able to take a small test server, a Dev server is really what it is, and we can move from production, move it over to DR, have it run over there for a day, and then we move it back with no data loss.
It's never not worked and when you come from the SRM world, that's just unheard of. Now we're a year into this product and have gone through an upgrade, and our June test went off without a hitch. It's very rock-solid.
Their tech support has been fantastic to work with. We ran into a glitch when we did our update in mid-May and our primary data center stopped talking to our secondary data center. We couldn't figure it out. We got their tech support involved right away. They identified a bug right away. They were able to roll us back and then stayed engaged with us as they figured out how to fix the bug. And once the bug was isolated and fixed, they got right back a hold of us to say, "We're ready to go," and then they walked us through upgrading both sides. There was a lot of hand-holding in that upgrade scenario. It was a fantastic experience.
It took them four or five days to fix the bug and they stayed engaged with us just about every single day, letting us know the status of it and when it went to QA. We didn't fall into a black hole. It was a very customer-centric experience.
We were using VMware Site Recovery Manager. We're still a VMware shop; Zerto replaced SRM. It was probably cost-neutral, but what it really came down to was that SRM breaks all the time; you apply some patches or a Windows update and it breaks. Uptime and reliability for us are super-critical. We don't have a ton of time to spend on making sure it's always working. We were really looking for a solution that we could architect, deploy, and just let run, knowing that we're protected without always having to go back and mess around with it.
What we kept finding with Site Recovery Manager was that every time we wanted to do a full-scale, failover DR test, we would have to spend a week ahead of time prepping for it, to make sure everything would work flawlessly during our test. It always worked, we knew how to patch it and get around it. But disaster doesn't give you a two-week notice. You don't know you're going to have a tornado in two weeks. You get about a 10-minute notice and then you've got cows flying through the air. We wanted a tool that we know would just run and work and be reliable.
It was cost-neutral to the budget, the timing was right, and the solution was rock-solid so we made the change.
Ease of use and deployment are fantastic. This is a solution that we started with a proof of concept. We threw it in a lab and said, "Hey, let's just see what it looks like." Next thing you know, we never even had to tear down the proof of concept. Once we started seeing it working we said, "This is definitely something that we want." All we really ended up doing was negotiating licenses, applying the license key, and we were off to the races.
Soup to nuts, it took us five hours to spin the whole solution up and to create our protection groups. It was very fast. That includes downloading the software, spinning the VM up, and protecting and backing up data.
We worked with one of their engineers through the proof of concept. Once we said, "Hey, this is going to work," we tested it on a few servers and then we became a paying customer. They worked with us to help us define what made sense for the 30 licenses that we bought and what machines to deploy it to. But it's really not a complicated tool to deploy. There wasn't a ton of architecting and solution-building around it. There was some, but it was a very simple solution to install.
We have seen ROI. And even when you cost-compare against Site Recovery Manager, none of these solutions is cheap. But we are folks who need to have uptime and these things have to work. When you start comparing it against Site Recovery Manager, Zerto blows it out of the water, in my opinion.
If it were easier to license, and to scale it out a little bit more economically, that'd be a godsend. At the end of the day, my druthers would be to have all 200 of our servers protected by this platform. But for a company of our size, that stretches our IT budget and it just doesn't make economic sense. I would really love to be able to just apply Zerto to every virtual machine that we spin up, drop it into the right SLA bucket, and just be done with it, knowing that it's protected, soup to nuts. Unfortunately, that's just cost prohibitive.
My advice would definitely be to leverage the number of VMs. It's not a cheap solution by any stretch, but it delivers on its promise. There's definitely value in the investment. With hindsight, I would have gotten a better cost per VM if I was able to buy, say, 100 licenses. It would have been easier for me to put other servers under the protection of Zerto. I wish I would have had that flexibility at the time. Eventually, budgets will open up and I'll be able to go get another 50 or so licenses, but I'll still be paying a higher price, more than if I would have negotiated a higher quantity to begin with.
We took a look at a couple of other solutions. The other ones fell off the table pretty quickly. We're based in Iowa. We have a good account team here in Iowa from Zerto that knew our account from previous relationships. They came around and said, "This is a tool that you guys really need to take a hard look at."
The sales process took about six months. They came in about six months before my renewal with VMware. We had a few conversations and, about two to three months before the renewal, designed a proof of concept to see if it was actually going to work. They came in and did that. My guys were raving about it and I saw some of the reporting out of it. At that point I said, "Okay, done deal." It was cost neutral. When Site Recovery Manager came up, we canceled that portion of the renewal. There wasn't really a need for us to go out to market. I just trusted the account guys. They knew who we were. The tool worked the way they called it. I don't get too picky. If it works, it's good enough for me.
Take a hard look at it. Don't pass it by, don't be scared off by the price. Definitely take them up on the proof of concept. Have the team come in and do that. You'll be pleasantly surprised.
They talk about technology that can just actually do what it promises. I've been doing this for over 20 years and sometimes you get jaded by the fact that people over-promise and under-deliver. Zerto was definitely on the opposite end of that spectrum. The solution went in so easily that I had to do a double-take when my guys were telling me, "Hey, it's already up and running." I said, "It can't be done already." I'm used to complicated deployments. They promised and it does exactly what they said it would do. Don't be so skeptical. Keep an open mind to it and explore the possibilities.
I just sat through ZertoCON. They put a lot of emphasis on long-term retention. It really started putting a question out there as to whether you need a different backup and recovery solution. We use a different partner called Rubrik for backup and recovery. The challenge that we have with Zerto is that we're only protecting 30 VMs, whereas with Rubrik, we're protecting all 200. There's a little bit of a dance between value and return. So we're not using Zerto for long-term storage right now. We're evaluating it. I don't know if it makes economic sense to do so, but we are taking a look at it. And we're not protecting all 200 servers because of cost.
In terms of using the solution for a data recovery situation due to ransomware or other causes, knock on wood, we have not had to use it in that capacity just yet. We have a very mature cyber security posture and we haven't been popped by ransomware in the last year. But it does give me peace of mind that we also have that ability. That's just another layer of our cyber security posture and we know that we're protected against those threats. So there's definitely a peace of mind around that.
The only folks using it are on our IT team, about five or six of us. Five of my guys use it on a regular basis and know how to manage it. I'm the sixth guy. If I ever have to get in there, we're in trouble.
We protect about 15 virtual machines. We use Zerto to replicate them from our home office in Pennsylvania to our co-lo facility in Arizona. Our main data center is in our Pennsylvania office, but if that office were to go down, we would use this as a DR solution so we could run our company out of Arizona.
When I started with the company, we didn't have a disaster recovery option. If our office were to have gone down, our company would pretty much have ceased to work. Having implemented Zerto, now we know that if there's a power issue or some kind of facility issue at our home office data center, we can run everything that's protected by Zerto out of Arizona.
The most valuable feature is the ability to spin up a copy of a virtual machine which is a complete copy, within minutes.
I also enjoy the Analytics, which is something they added recently. They tell me all about my virtual machines and what kind of data we're pushing back and forth. I've been very impressed with Zerto Analytics.
The only time I ever have an issue is because there's a Zerto virtual appliance on each host in our environment. If I have to reboot a virtual machine host, I have issues with Zerto catching up afterward. That's about the only thing I would say needs improvement. Sometimes, when I have to do maintenance, Zerto takes a little bit to catch up. That's understandable.
I've been using Zerto for between a year-and-a-half and two years.
It's extremely stable. I've never had any real issues with it. When there are issues, it seems to recover eventually, so I don't really have any problems with it.
It's very scalable. As long as you have the licensing, you can add more virtual machines or more VPGs, which are virtual protection groups, to the license. As long as you have the licenses, you can protect the whole environment and add and remove virtual machines from Zerto as you want.
We have 15 virtual protection groups which protect 15 virtual machines at this time. Because of the licensing costs we couldn't go crazy. We have a total of about 60 or 70 virtual machines, but we only needed to protect the critical ones. We're using 12 of those 15 licenses.
We don't have plans to increase usage of Zerto at this point because these are the critical servers. If we add more critical servers that need to be up in case of an outage at our home office, we may add more. But this 15 has covered us.
Their technical support is good. Just like any technical support, it's all based on the severity of the case. I've never had any outage cases, so I have never had to sit on the phone or wait for them to get back to me.
I opened two cases with them and they got back within a reasonable amount of time. Both times, they knew exactly what the problem was and how to fix it, just from the details I left them in the case notes.
They also have a nice option where you can submit a case, or enable remote support, right from the interface. The support's pretty nice because they can actually look at logs, once you give them remote access right into your environment. That's very useful. And they're very knowledgeable.
We didn't have a previous solution. We selected Zerto because the RPO is extremely low, so you can get that server back up almost immediately. That was a huge thing.
Also, the ability to do failover tests, where you can test your environment, but not have it impact your production environment, was huge.
Those two features were the main selling points for us to pick up Zerto.
The initial setup was very straightforward, very easy. We set up a virtual machine at both locations, which are both Windows, and then installed the Zerto software and gave it credentials to connect into our environments. It did the rest for us. Once it was initially set up, we just had to figure out which virtual machines we wanted to protect and which way: did we want it to copy from our data center over to the co-lo, or back to our data center from the co-lo. They walk you through step-by-step with wizards. It's incredibly easy to set up.
Because there's a lot of data initially to sync over, the deployment took about a week in total. The initial setup only took a couple of hours, but then you have to wait for all that replication to sync.
We didn't have an implementation strategy for Zerto. Because we didn't have a previous solution, we didn't have any migration to do. We just paid for the license, got it installed, and rolled with it.
I did it myself.
Technically there are four users who have access to it in our company. I'm the main administrator. The other ones are guest administrators and they have a little less access than I do. But nobody else really logs into it except me, unless there's an issue and I'm not there. But as the main administrator it's really all on me.
We have seen return on our investment with Zerto, absolutely. Just to have an option for disaster recovery in case our main data center goes down — which can happen, because we don't have a generator or anything in our home office — is a type of return. Not just IT, but everybody in the company from the C-suite, was happy that we have a disaster recovery option now.
First of all, you should figure out which virtual machines are critical and how many licenses you may need before you start getting prices. You don't need to go crazy if you only have a handful of servers that need licensing.
Zerto sells licensing in bundles or packages, so I wouldn't go crazy and buy 100 licenses when you only need 30. Figure out what you need before you get your licensing, because it can get expensive.
We have Veeam which we use for backup and I know they have replication, so we looked into that, but it just wasn't as feature-rich or as quick to restore or bring up a VM as this was. We hadn't heard about Zerto really until we went to a conference in Philadelphia. They told us about it so we looked into it and it seemed like the best option at the time. We did look at maybe one or two other options, but this was the one that looked like the best option for us.
The biggest lesson from using Zerto is the failover capability and the testing capability. Those are two very useful things. If somebody calls me and they need to test something in a test environment, I can use the test failover copy of Zerto to bring up that virtual machine, or machines, and test things without affecting production. The other thing that is impressive is that you really can bring up a virtual machine almost immediately.
I would definitely give it a 10. I have no problems with it. I'm very happy with it.
One additional point about features: Zerto is not hardware dependent, so you can use it on any brand or model of hardware you have.
For all of our most important applications, we are using Zerto as a hot site in case something were to go wrong with our on-prem, data center-based applications. We can immediately resort to Zerto as a failover.
It's deployed for replication from our data center into the public cloud.
The most important thing is the mean time to restoration. When anything goes wrong, we should be able to rely on the failover data that is available, and we should be able to restore it as quickly as possible. We have been able to reduce that mean time to restore the data pretty significantly with Zerto. It's gone from a few hours to a few minutes.
There are two things that are keeping us with the solution: the replication that keeps our application data in sync, and the speed with which we can retrieve that data and get back up and running.
Both of these points are valuable to us because we have application data and it means we keep the data in sync. It is very important for us to know exactly where we left off in the event of any disaster or contingency. We can always rely on, or resort to, the data that we have as a backup or a failover. Also, in the event of a contingency, or even for doing a mock contingency exercise, the speed of retrieval of data and the speed of getting back up and running — minimizing the downtime — is important. That's where the second feature comes into play.
There are two areas I would recommend for improvement. One is that when we are trying to upgrade any virtual machines, we have to stop the virtual machines that have been replicated in Zerto and then upgrade or update the virtual machines onsite. Instead of having to do it manually, there should be some way of automating that particular function.
And when it comes to AWS failover, the documentation has a lot of scope for improvement. It's come a long way since we implemented it, from the scantiness of documentation that was available to do a failover into AWS or recover from AWS, but they could still do a much better job of providing more details, how-to's, tutorials, etc.
In terms of additional features that I would like to see included in the next releases, if they could provide us some kind of long-term storage option, that would be the best thing. Then it could be a storage and a failover solution combined into one.
I have been using Zerto for two-and-a-half years.
It's a very stable solution.
It scales very well, in terms of the data size and the number of sites that we want to add on. It has scaled very well, at least in the last two releases.
We have plans to increase usage, but as it is we are using it for about 75 percent of the data at this point. The balance of the data will come onboard by early next year.
We have about 25 people using Zerto, and they're mostly database and storage administrators, infrastructure people, and security people.
We have not used the technical support. One thing I can say is that they have a very friendly team of engineers. If you have a problem, they are at your beck and call. You can call them and get it resolved.
We were using another solution, but I don't want to name it. The primary reason we switched was the ability to restore the data quickly. Our main goal was not only to have good replication of data, but to be able to restore the data as quickly as possible in the event of any contingency, whether planned or unplanned.
From that standpoint, when we put Zerto against the existing product, what took us a few hours in that product took us a few minutes with Zerto. That was primarily the goal. Even though this product was a little more expensive than what we had prior to going with Zerto, we still went ahead with Zerto.
The initial setup is very straightforward compared to a lot of others. The user interface is very simple and very intuitive. It goes one step at a time so you can logically follow through the steps to set it up. Whether it's a small site or a big site, it doesn't really matter.
Overall our deployment took about two weeks. We had a detailed project plan, as we always do with any new products or projects that we come up with.
It doesn't require any full-time staff to deploy and maintain the solution. Once you turn on the process, all that somebody needs to do is just monitor the schedule and see whether it's doing things the way it has been programmed.
We have absolutely seen return on our investment with Zerto. We do mock disaster recovery exercises and, in every such exercise since we've gone ahead with Zerto, we've been able to restore the data within a few minutes, very easily, without any business loss. That gives us the confidence to say that, even in the case of a real disaster, we should be able to restore the data.
We didn't evaluate any other options.
Know your use case and then do a thorough proof of concept with your use case to see whether the solution works for your environment and your specific use case. Have a well-defined project plan and negotiate your way with the vendor.
The biggest lesson our organization has learned in using Zerto is that you should understand the product very well. You should understand what the product is capable of doing and leverage the options and features that are available in the product to the optimal extent.
We use Zerto for replication of Windows and Unix machines to a DR site. We like having a testable solution which does not interfere with performance on our production machines. It has an included feature allowing assignment of a specific LAN or IP address to segregate a machine while testing. We are replicating 56 machines, totaling more than 30 TB, but compressing at 70 percent for space savings. We use the email alerts as a way to monitor replication status, which helps with off-hours alerting for potential problems.
Testing and auditing are required at our organization. Zerto has saved a tremendous amount of time in performing these tasks. I am alerted every six months to retest each protection group. This setting is customizable. All past testing reports are retained and available upon demand. It has also added assurance in recovering servers and/or files. Being able to run tests on a working machine is beneficial. Being able to group virtual machines in order to recover all of them to an exact point in time is a definite benefit.
The mobile application is very useful as a real-time monitoring and reporting tool. When management asks the status of our VM backup and recovery, an easy way to answer is to display the status on the real-time Zerto application on a mobile phone or on a local computer browser.
The Zerto Analytics tool helps predict future storage needs by tracking trends in space, journal size, and I/O rate. These are reportable statistics, making quantifiable tracking easy and accurate. It is nice to see developing trends.
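As an illustration of the kind of trend projection described here, the sketch below fits a simple linear trend to storage-usage samples and extrapolates it forward. The sample figures are made up for the example; Zerto Analytics presents these trends in its own reports rather than requiring a script like this.

```python
# Toy linear projection of storage growth (sample figures are invented).
from statistics import linear_regression  # Python 3.10+

days = [0, 7, 14, 21, 28]
used_tb = [18.2, 18.9, 19.5, 20.3, 21.0]   # weekly samples of journal + replica usage

slope, intercept = linear_regression(days, used_tb)   # growth in TB per day
horizon = 180                                          # project six months out
print(f"Growing ~{slope * 7:.2f} TB/week; "
      f"projected ~{intercept + slope * horizon:.1f} TB in {horizon} days")
# With these samples: roughly 0.70 TB/week and ~36 TB at the 180-day mark.
```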
Having a web interface simplifies access by other system administrators.
Certain areas were designed and work fine for VMware but are under development for Hyper-V. Eventually, all features will work for both platforms. Zerto support is very responsive when those questions arise.
There is a comprehensive online training program which is a good start to using the application. But nothing can take the place of actually using the product in your own environment.
The online knowledge base of solutions is very large. This is good, but also bad: the answers are there, but you have to be diligent to find the one you need.
I have been using this solution for more than three years. My organization utilizes Hyper-V instead of VMware. One big advantage of Zerto is that it's hardware agnostic; I have used various models of arrays and servers from Dell EMC and HPE with no issues from Zerto.
It picks up nicely where it leaves off (in the case of a reboot).
It easily grows at whatever pace is needed.
In the few cases that I have had, every one was dealt with quickly and by support staff who knew what they were doing.
We still use our previous solution because it creates a backup of both physical and virtual machines. However, there was an impact on performance running a backup on a running machine.
There is a slight learning curve when setting up, but nothing overwhelming for a good administrator.
Work with your local representative on running a live test to see if the solution fulfills your needs.
Zerto is not the least expensive alternative to software replication, but it is reliable and easy to use.
We use Zerto as a robust failover and replication solution.
Currently we replicate about 50-55 VMs to our DR site. We have run multiple test failovers, and have even done a full-scale, full company REAL failover. Zerto worked flawlessly.
We use Zerto to make sure that our primary Server farm is replicated and protected in case of a failover.
Zerto gives us peace of mind. It is also extremely easy to use and very intuitive. We don't have to worry about what would happen if our server room were to be damaged or our building destroyed. We always have that big red button available to failover to our DR site which Zerto does flawlessly and easily. We also have peace of mind with their Technical Support, which has always been nothing but stellar for us!
I love all the features really.
The fact that the interface is so intuitive is wonderful. The setup and customization of VPGs are great too. They allow you to customize all of the IP information, and even the MAC address if you wish, for any and all VMs, so you can change the IP, gateway, or whatever else for any VM you might fail over.
It is incredibly granular and I really appreciate that. Zerto has also caused us to organize our datastores in a better fashion that makes sense so that they are by priority and not just random.
I have brought this up to support before, but it would be really nice to have the option to "roll back" a particular VM to a previous time in the past if it were to become damaged, compromised, or infected. Zerto does not allow this. It's all or nothing, so you must roll back the entire VPG. You cannot roll back a single VM unless that VM is ALONE in a VPG all by itself.
It would also be nice if they could find a way to make it where one VM does not impact the entire journal history of the VPG. I do not understand why a single VM with mass amounts of changes should impact the journal history of the entire VPG. Although this has never caused me problems, it is an annoyance for sure.
We have been using this solution for a little over two and a half years.
Zerto is literally one of the most stable solutions we have. If we have any kind of bounce, drop, or failure on our fiber line to our DR site, Zerto quickly recovers and catches everything up as soon as the failure is remedied.
It is incredibly scalable and easy to use. I think that's what makes it so valuable and attractive, especially to those who do not have a DR solution.
Support is VERY important, and Zerto knocks it out of the park!
Every time I have called Zerto support, the person I have spoken with has been well informed and knew their stuff. Even if they couldn't readily solve my issue, they quickly escalated to someone that could. They are probably the best vendor we deal with in our IT Dept.
We had tried other solutions in the past including SRM and RecoverPoint for VMs. I can tell you they all pale in comparison, not only in functionality and user-friendliness but in support as well.
The initial setup is very straightforward.
They assigned us an engineer and we were set up and replicating within two hours.
We used a Zerto engineer, who was assigned by our Zerto sales rep.
The ROI with Zerto can't even be measured. The peace of mind we have gotten from knowing that all of our protected VMs are safely replicated with almost live RPOs is something that you can't even quantify.
I would suggest getting a dedicated, well-informed rep. I'm sure they all have great training but always hold your rep accountable. Ask lots of questions because there are no stupid questions.
We have evaluated SRM, RecoverPoint for VMs, and other "built-in" Hyperconverged replication solutions.
Zerto is literally the best vendor that we deal with overall as an IT Department. They have always delivered, and have always been top-notch. I highly recommend them to ANYONE, regardless of whether or not they already have a DR solution. Zerto is better.
We use the solution for DR failover/testing on our DR site. We're a Windows/VMware environment and replicate 25 virtual machines from our primary data center to our disaster recovery site. The solution allows us to perform live failovers without shutting down our production systems.
Zerto has made the testing and the actual failover process of our replicated virtual machines seamless. The product has relieved the administrative burden for the IT staff responsible for the disaster recovery implementation of the organization. Adding new virtual machines is quick and easy, and managing the environment is straightforward.
Failover testing is the most valuable feature. The fact that we are able to test the failover of live systems during regular hours is invaluable to our organization. No longer do we have to schedule failovers of our systems, which brings down our production environment.
There are a couple of minor areas that could use improvement.
The GUI could be streamlined a bit more to enhance the administrative tasks. I would also like to be able to throttle the email alerts, as sometimes they become a bit noisy, and get tough to keep on top of.
Our company has been using the product for four years.
This is a very stable solution.
It is easy to add new licenses for growth.
Support has always been readily available if there were issues.
We never used a different solution.
The initial setup is pretty straightforward and seems to be in line with similar products.
We did not evaluate other products before choosing this solution.
We use Zerto to keep a replica copy of the core servers we have running at our backup site. In the case of an outage, we are able to flip over to our backup location. Zerto keeps these servers up to date within seconds and in the case of an outage at our core data center, we can flip over services with little to no data loss.
Mostly what Zerto gives us is peace of mind.
We do the normal backups and that data gets stored offsite, but unlike backups, Zerto gives us the ability to be back up and running within minutes on an identical copy of the servers that are no longer accessible.
The feature we found most valuable is site-to-site replication. This is what we purchased the product for and what we use it for primarily. We are in the process of switching over our production data center and Zerto has been a true time-saver that has cost us zero downtime.
Some features are not up to what we need, although we have found alternatives and aren't really looking for Zerto to handle those items today.
The setup process is time-consuming.
We have been using Zerto for around four years.
With its built-in notifications and reporting, Zerto will alert you if there is anything wrong before it can become a problem.
Zerto works for one or one thousand machines and scaling out is an easy process. Zerto also seems to better support the major cloud vendors, with updates as well.
Zerto technical support has been very responsive and has always been able to help. They are available 24x7 and always have someone to contact you right away.
Prior to using Zerto, we were using VMware's SRS. It was not keeping a close enough copy of our servers.
The initial setup was very straightforward, but also a lot of information was needed. A fair bit of time was spent setting things up, but it was really just time-consuming. There was nothing that needed to be done by the vendor.
As far as setup and maintenance are concerned, you need to be sure to set it up properly, test it, and occasionally perform updates. For the most part, once it is in place it is pretty hands-off.
We implemented with the help of Zerto, who was very helpful in explaining the process and how everything works.
Zerto is not cheap but is an invaluable asset.
If you have the need for what Zerto can do for you then the cost really isn't a factor.
We only had experience with VMware's product and didn't know of anything other than Zerto. Once we tried the product we were hooked and never had a reason to look at anything else.
When we implemented Zerto, we only utilized some of the features. This was mostly because of our needs at the time and partially because the other parts were not up to what we needed. They have since greatly improved on these parts, like the backup features, but we aren't really looking for Zerto to handle those items today.
We have never regretted implementing Zerto and I would not trade it for any other product.
We use Zerto for disaster recovery of our tier 1 applications from our primary data center to our secondary data center. We have also used Zerto to successfully perform server migrations from one site to another for data center moves and company acquisitions.
Our administrators love the product and it has been proven to be easier to use than VMware SRM which we were using before going with Zerto.
Zerto has given us the peace of mind to know that we have full DR protection for our critical applications.
Zerto is relatively easy to set up and administer.
We were able to create runbooks within Zerto to help with DR failovers, and testing DR failovers is pretty easy as well.
We used to use VMware SRM and it was very cumbersome to add in new virtual machines or storage volumes because they would basically "break" the SRM protection groups that were already out there. Zerto takes on new additions to protection groups much more easily, and it saves our admins a lot of time in care and feeding.
The most valuable feature of Zerto is its overall flexibility, where it can be used for standard DR or you can also use it for server migrations, data center consolidations, etc. You can also use it for data protection and physical to virtual migrations as well.
It is kind of a Swiss Army knife.
I can't think of any major areas of improvement with Zerto. Make sure that they are building in cloud-friendly features in future releases because a lot of enterprises are starting to move workloads to the cloud and are seriously considering doing DR to the cloud as well. Our company may be moving in that direction also.
I wouldn't mind seeing Zerto sold at a cheaper price point, although the cost is comparable to VMware SRM.
We have been using Zerto for four years.
Zerto has been rock solid for us in terms of stability.
The scalability of Zerto seems to be ok. This will depend on the size of your environment and how often you need your data replicated for BCP and SLAs.
Zerto customer service has been great so far. No complaints!
We used to use VMware SRM and we switched to Zerto because it is less expensive and easier to administer.
The initial setup of Zerto was very straightforward. The rest of the configuration will be as complex as your environment's DR needs and application stacks are.
We had an engineer from Zerto help us with the installation and initial configuration for thirty days.
It is good to do a full Disaster Recovery plan for your organization and a BCP plan as well. You need to figure out how many critical servers and applications you have in your environment so you will know how many Zerto licenses to buy.
We only did a bake-off between VMware SRM and Zerto.
It is good to implement a proof of concept of Zerto to test it out. I highly recommend it for data center moves.
We use Zerto at our remote locations as a backup solution in environments where we don't have the infrastructure for redundancy. It allows us to use two HPE DL380 servers as stand-alone VMware hosts and replicate the VMs without needing shared storage.
Using Zerto, we are able to replicate and back up our VMs between servers and locations without the need for shared storage, which provides redundancy in case of hardware failure. We are able to fail the VMs over to the secondary host, which also allows us to patch or repair hardware without extended downtime.
We used to use VMware replication appliances to attempt to replicate our VMs to remote locations and servers, but Zerto's one-to-many replication options with deduplication have made the process much simpler without having to constantly worry about the versions of each driver.
The number one thing we have found we would like changed so far is the cost per VM. It would be great to get that pricing reduced.
The need for a VM to be spun up on every host is challenging. In our remote locations, it's not a big issue, but as we look to use that in our main data center where we have hundreds of hosts, it becomes more daunting.
I have been using this solution for six months.
In the six months that we have used it, we have not had any issues.
So far, it looks like it should scale to our entire environment of over three thousand VMs.
Prior to this solution, we used SRM. We were looking to switch because SRM continued to be troublesome and required a select combination of drivers and versions across the environment to work correctly.
The initial setup was pretty straightforward, with the exception of needing a VM on every single host.
The cost per VM is a bit high.
We also looked at RapidDR from HPE but it only works on our HPE SimpliVity servers and not across all of our hardware.
We primarily use Zerto for our critical applications and infrastructure to allow immediate failover at our DR site. We licensed our critical applications and database servers and use standard backup for the rest. In order to increase uptime, we replicate our entire Active Directory infrastructure as well.
We are able to pass many financial and IT audits because we have a solid system in place with near-zero RPO/RTO. Furthermore, we can train almost any tech or engineer on the process of flipping to the offsite primary: the push of a button and some minor DNS changes, and we are up and running.
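For the "minor DNS changes" step, a small runbook script can take the guesswork out for a less-experienced tech. The sketch below uses the dnspython library to repoint a couple of internal A records at DR-site addresses via dynamic DNS updates (RFC 2136); the zone, TSIG key, server, and record names are made-up assumptions, and your DNS platform and change-control process may differ.

# Hypothetical runbook helper for the DNS portion of a failover: repoint a few
# internal A records at the DR-site addresses using dynamic DNS updates.
# Every value below (server, zone, key, records) is an example, not real data.
import dns.query
import dns.tsigkeyring
import dns.update

DNS_SERVER = "10.20.30.5"            # hypothetical DR-site DNS server
ZONE = "corp.example."               # hypothetical internal zone
KEYRING = dns.tsigkeyring.from_text({"failover-key.": "c2VjcmV0c2VjcmV0c2VjcmV0"})

# record name -> address it should point at after failover
DR_RECORDS = {
    "sql01": "10.20.30.11",
    "app01": "10.20.30.12",
}

def repoint_records() -> None:
    update = dns.update.Update(ZONE, keyring=KEYRING)
    for name, ip in DR_RECORDS.items():
        update.replace(name, 300, "A", ip)   # short TTL keeps failback quick
    response = dns.query.tcp(update, DNS_SERVER, timeout=10)
    print("DNS update response code:", response.rcode())

if __name__ == "__main__":
    repoint_records()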
The ease of failover and test environments has proven invaluable. It is literally as easy as pushing a button to flip to a contained test environment for staging roll-outs or verifying backup integrity. The upgrade process initially was tedious, making sure every VM host got updated separately, but now it is streamlined and a breeze.
I would like to see better notifications when the sync is off for an extended length of time. There is nothing worse than going to do an upgrade or test a restore and realizing some of the VPGs need to be fixed because their journal is too small, causing bitmap syncing to be off.
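Until something like that exists natively, one workaround is to poll the ZVM REST API yourself and raise an alert when a VPG's RPO drifts past a threshold. In the sketch below, the endpoint paths (/v1/session/add, /v1/vpgs) and field names (VpgName, ActualRPO) are assumptions based on the v1 API; verify them against the REST API documentation for your Zerto release before relying on this.

# Hedged monitoring sketch: flag VPGs whose reported RPO exceeds a limit.
# Endpoints and field names are assumptions -- check your ZVM API docs.
import requests

ZVM = "https://zvm.example.local:9669"      # hypothetical ZVM address
USER, PASSWORD = "svc-zerto-ro", "secret"   # hypothetical read-only account
RPO_LIMIT_SECONDS = 300                     # alert if RPO exceeds 5 minutes

def zvm_session() -> dict:
    # Basic-auth login; the session token comes back in a response header.
    r = requests.post(f"{ZVM}/v1/session/add", auth=(USER, PASSWORD), verify=False)
    r.raise_for_status()
    return {"x-zerto-session": r.headers["x-zerto-session"]}

def vpgs_over_rpo(headers: dict) -> list:
    r = requests.get(f"{ZVM}/v1/vpgs", headers=headers, verify=False)
    r.raise_for_status()
    return [v for v in r.json() if v.get("ActualRPO", 0) > RPO_LIMIT_SECONDS]

if __name__ == "__main__":
    for vpg in vpgs_over_rpo(zvm_session()):
        # Wire this into email, Slack, or your monitoring tool of choice.
        print(f"ALERT: {vpg.get('VpgName')} RPO is {vpg.get('ActualRPO')}s")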
Stability is tied to the latency of your offsite DR.
The scalability is directly correlated to your storage and compute. More licensing as you grow is all you need.
The technical support for this solution is great. Every time I have had an issue, I get a real person, quickly, who remotely takes over and repairs the issue.
We were using RecoverPoint by Dell EMC prior to this solution. We switched because it was extremely cumbersome and far from streamlined during failover.
The initial setup of this solution is straightforward. It's literally an install button and then next, next, next...
Zerto assisted us with the deployment.
We have not had to fail over often, but the ability to test product upgrades has been invaluable.
The cost is not dirt cheap but also is not terrible.
We thought about VMware Orchestration.
If you are looking for a solution that is extremely easy to implement and highly effective, then this is your baby.
We use Zerto to protect our staff information against ransomware, and it is outlined in our disaster recovery plan. We have a DR site that we fail over to if anything happens at our primary data center. Only our core services, the ones we could not live without, are being protected.
It is very easy to use. Almost anyone in our IT team can manage it after not using it for months at a time. As the DR strategist here, I like that. I enjoy having a fast way to bring a server back up. It will take me longer to get to my desk and log into everything than it will to actually complete the failover.
I really like how you can test the failover as often as you need.
The reports it generates are very good at showing our protection state.
It is self-healing in case I mess up on something and need to re-sync. When you are protecting Terabytes of data, this comes in handy.
I think Zerto could do better with size planning; it would be nice if it could analyze a server for a week and give an estimate for sizing the journal. I find myself estimating too high.
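As a rough rule of thumb (my own back-of-the-envelope math, not a Zerto formula), journal space can be estimated from a measured change rate multiplied by the journal history window, plus headroom. The Python sketch below shows the arithmetic with made-up numbers; substitute a week's worth of real change-rate data for the protected VM.

# Back-of-the-envelope journal sizing: write rate x history window, adjusted
# for compression, plus headroom. All figures below are illustrative only.
def estimate_journal_gb(avg_change_rate_mbps: float,
                        journal_history_hours: float,
                        compression_ratio: float = 0.6,
                        headroom: float = 1.25) -> float:
    """avg_change_rate_mbps: average write rate of the VM in megabits/second."""
    seconds = journal_history_hours * 3600
    raw_gb = (avg_change_rate_mbps / 8) * seconds / 1024  # Mb/s -> MB/s -> GB
    return raw_gb * compression_ratio * headroom

# Example: a VM averaging 4 Mbps of changes with a 24-hour journal window.
print(f"Estimated journal size: {estimate_journal_gb(4, 24):.1f} GB")  # ~31.6 GB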
It would be nice if I had an option to dynamically restore to any host in a cluster. Right now, if we have multiple things happen and the main host is down it will not work.
We are only using a fraction of what it can do. If you add the backup function, it scales very well. I could see a hospital really finding this product useful.
My first experience with technical support was not good at all. In the last few years, it has improved quite a bit.
Prior to this solution, we used storage mirroring and DFS syncing. Our old way used far too much storage. Zerto compresses the data well.
The initial setup of this solution is very straightforward. We were making initial syncs in forty-five minutes.
We did both, with most over the phone. Their expertise was fine. I didn't in any way feel like I was not getting my questions answered.
Our ROI happened in nine seconds.
I don't remember it being cheap. We started out slow, which was a good call. We found that in an event that was massive enough to cause an entire cluster to go offline we would be happy with our core services up and running.
At the time, Zerto was the only product doing this so easily. It might still be.
Don't underestimate how good it feels to roll back data instantly. It makes me look like a wizard at my desk.
We primarily use this solution for disaster recovery. The initial sync was from Pure to Compellent, and DR with disparate storage was great. Once we identified our critical servers, vetted the Live and Test Failover, and got the necessary configuration at our DR site, we are now able to perform tests in a safe bubble.
This solution is an integral part of our DR plan. The data and servers we needed are protected in a group as I need them. Changing or adding servers is easy and fast. The ability to protect resources that are in hurricane zones has been fantastic. Being able to safely perform DR testing remotely has enabled us to meet DR goals that were put in place. The first DR test with everyone in a room was fun, as everyone who hadn't seen it was amazed at how fast a server can come online.
The replication has been outstanding, it was three times as fast as initial Compellent replication. The ability to copy new data and protect new servers without a significant delay in getting the data is very valuable.
The ability to perform DR testing to ensure data integrity is critical.
The mobile app is great.
Increased granularity in how long to keep the journal would be nice; currently you can only do hourly up to one day, and after that it is only daily. The ability to test failover for a single VM in a VPG would be beneficial for testing purposes. Currently, alerts sometimes come from both replication managers, creating a lot of noise; reducing those would be good.
We did not use another solution prior to this one.
Setup is really easy and quick. Make sure that you have your desired networking for replication and testing in place.
You are getting what you pay for, as this is a solution that requires minimal management after it is configured.
We did not evaluate other options that are worth noting.
EMC was too expensive and everything else was tied to the storage vendor.
It has been a great purchase, and we have no regrets.
Not much can be improved in this solution as it has performed to what we had hoped and has the features we currently desire.
Our primary use for this solution is DR Replication to a separate data centre. We use VMware at both sites. We're currently replicating around eighty VMs from our primary data centre in London to our secondary data centre in Beccles. Most of these are SQL servers with the VSS agent installed.
We now have the ability to activate replicated VMs at our DR site within minutes of something happening to our primary data centre. The ability to have an RPO of seconds has enabled us to restore data to just before an incident occurred, which certainly saves a lot of time and money.
The most valuable features of this solution are its ease of use and simple setup.
The ability to do a test failover to an isolated environment has been very useful, as this allows us to test servers without any implications to our live environment.
The dashboard is very clear and concise, showing any problems in different colours.
The VSS agent setup and configuration does seem to be a bit clunky compared to the rest of the software. We have had issues with licensing, where the license we've been given by Zerto support doesn't include VSS replication, which was a pain at the time.
We primarily use this solution for Replication and Disaster Recovery.
We now have the ability to replicate critical data to a secure, off-site location that can be brought back in seconds if needed.
The dashboard is very user-friendly and easy to navigate.
The email alerts can be excessive, so better control over frequency or resolution may be a worthwhile improvement.
We use this solution for disaster recovery.
Implementing this solution has given us sandboxing and the ability to move VMs. These are nice to have tools.
The most valuable feature is the Restore file, where you can go back in time on a file-level. This is very helpful.
The backup functions are in need of improvement.
Real-time replication for our Disaster Recovery site, which is currently hosted at another on-premises site but will shortly be moving to a data center in the cloud.
I have also used this solution to do point-in-time restores of Exchange mailbox items and to check updates from Microsoft.
This solution has given us the confidence to know that our complex systems are being backed up off-site in real-time and are testable on demand. This gives me a real sense of ease when speaking to management about our resilience, and being able to demonstrate it gets everyone involved in the process of confirming it.
The ability to fully test your entire environment without actually performing a failover is invaluable.
I really appreciate the Mobile app that allows me to monitor, at any time of the day or night, whether the replications are up and running.
Creating Virtual Protection Groups allows us to treat business services as one.
There needs to be more flexibility in the licensing. I've mentioned to Zerto Management that I find the licensing at twenty-five VMs to be very restricting to an SME business, and could there be some flexibility here? Businesses like ours constantly change their IT due to the flexibility of Virtualisation and it would be great to get Zerto on board with that same flexibility.
It has been very stable and gives me minimal problems, especially when compared to other parts of the business network.
I would imagine that the product would not have an upper limit due to its architecture. It seems to cope easily with our own environment of approximately forty VMs.
The person I worked with was very friendly and had excellent knowledge of the product and understood the wider implications of the installation.
We did not use another solution prior to this one.
The initial setup of this solution is very straightforward. The installation and configuration are incredibly easy for someone who is reasonably familiar with IT Management.
We implemented this solution in-house.
We do not consider an ROI analysis to be relevant in this area.
While we find the twenty-five VM license somewhat inflexible, the actual setup costs are minimal as the product is so easy to install.
Before choosing this solution we extensively tested the built-in functionality of our EMC VNX SANs but they didn't function to an acceptable standard so we looked to third parties.
After researching the market thoroughly we decided that this was the one to go for.
I'd strongly suggest carrying out a proof of concept if you're looking at this part of your IT solution.
Our primary use case for Zerto was to enable replication at our DR site for virtual appliances and automation of the failover - failback process. This also gets utilized for recovery at our DR site at different timestamps using the journal history.
This solution is very light, with zero-touch deployment and very enhanced dashboards.
It has enabled DR protection for virtual appliances with minimal administrator time. This solution also provides a backup option at the DR site without any additional cost of licenses. The Dashboards are very intuitive and can be published to the CIO and CTO.
We loved the orchestrator, which allows us to specify IPs for our DR site in advance. It also allowed us to pre-configure the boot sequence for a failover test or actual recovery. Backup at the DR site is the icing on the cake. The concept of a journal history and keeping snapshots at intervals of seconds are quite good.
Mobile features are there only for visibility and not to take action. We would love to see the ability to perform actions through mobile apps.
It would be helpful if the reports can be generated periodically, on a schedule.
This solution is very stable.
This is a scalable solution that also supports multiple clouds.
Our earlier solution didn't have a detailed orchestrator and didn't support appliances.
It is very easy to set-up this solution.
We implemented this solution in-house, without the need for any partner to assist with the set-up.
There is no need to think of ROI as this is a DC-DR solution.
The solution is very cost-effective and very easy to set-up but does not compromise on features. The features are much enhanced compared to any other DC-DR solution.
Before choosing this solution we evaluated VMware Replication & Sanovi.
Overall, this solution is quite enhanced compared to other, similar solutions in the market.
I recommend trying this solution.
Cloud-based disaster recovery. However, do your homework on your provider. There are several options besides Azure and AWS that don't have their surprise charges. Be sure to check them out.
I would not do a virtual-based disaster recovery solution without it, nor would I do a virtual-to-virtual migration without it.
It just works. This sounds simple, but it is so true. So much of what we are sold in IT doesn't work as advertised. Zerto does.
It's coming, but I want to do my backups from my DR side without impacting my production side. This is supposed to come out in v7.0.
Rock solid, it just works. Make sure your Windows boxes are not previous in-place upgrades. Bugs between the Windows components create issues with assigning IP addresses. This is a Microsoft issue and not a Zerto issue.
It's being used for hundreds of machines. It just works.
Tech support is great. They help troubleshoot things that are not their issues. See Microsoft upgrade note above.
Yes, VMware Site Recovery Manager. SRM is not as intuitive and is VMware version dependent. Zerto does not have those issues.
Very easy setup. Like all DR solutions, it requires planning. Specifically the network side. Don't skimp.
Use a Zerto cloud service provider. They generally know their stuff.
Amazing ROI considering I don't have to buy a second set of hardware for my DR site. I can use a cloud provider and only pay when I need it.
Check your cloud providers. You don't have to host the DR side yourself. Also, look at folks other than Azure and AWS. The hidden/surprise costs will knock your socks off.
Veeam (no CDP), SRM, RecoverPoint for VMs, Double-Take.
No.
Disaster Recovery and quick file recovery. We have used Zerto to recover from ransomware three times. Across those attacks we have recovered over 2 million files and have never paid a ransom. Our users were only affected for just under two hours per attack; the majority of the time was actually spent figuring out which PC in the office had the CryptoLocker program running.
Zerto was able to use an EMC production SAN and an HP DR SAN with no issues or compatibility problems, which allowed us to avoid buying expensive hardware for a DR site.
Zerto has allowed us to feel comfortable that our data is being replicated and is not corrupted.
Zerto is packed full of useful features. The main one is that it is easy to manage; you will not need to be in Zerto day to day, as it runs flawlessly.
Zerto has a plugin that integrates seamlessly into VMware vSphere as a tab.
I would also highlight the fact that Zerto was used to migrate all of our VMs from one data center to another (two different physical locations) when we upgraded to new hardware. The migration was a breeze.
My second most valuable feature is being able to click through just a few steps in the program to initiate a test VM recovery at our DR location. After testing, all test VMs are deleted along with their data. This is automated; you do not need to delete the VMs yourself.
With Zerto, depending on your hardware and the internet connection between your production site and DR site, you can expect an RPO of around 15 seconds.
As of summer 2016, Zerto has an extremely fast way to recover files from the journal. Instead of going to our long-term backup program and taking half an hour or longer to find a file, Zerto gets me the file in four minutes. This makes my life easier and keeps users happy.
I would like to see Zerto come up with an emergency line for support. Support does get back to you quickly, but when your heart is racing because something has happened, the calmness of a pro on the other end would be helpful. The reports also need TLC, as I do not really find them helpful.
Even with these two minor negatives, the product and support are great and they get the job done. I am an extremely happy Zerto supporter.
The only stability issue was one I caused myself by changing VM settings at the DR site. This was a great test for support; they took care of the issue quickly and got me back up and running.
I consistently see an RPO of 8-18 seconds. We test DR by adding a file to a server, then one minute later launching the test failover to a site that is six hours (drive time) away. The file will be there in the DR bubble.
We added new VMs with no issues; the only thing to keep in mind is that the more VMs you add, the more storage your DR site needs.
Customer Service:
Outstanding! Pre-sale, we had someone from Zerto actually come to our office and make sure this was the right choice for us. After we went live, we had a few calls (initiated by Zerto) making sure we were happy and everything was running smoothly. We still get periodic emails checking on us. I have not seen that with other companies.
On the Zerto team, Jennifer and Ciana have helped us in our times of need.
Technical Support:
AWESOME!
Tech support is all initiated by email, but Zerto will then call you if needed. This avoids phone queues. The people I talked to the two times I called in were awesome and very knowledgeable. Even though I caused one of the issues that support needed to fix, they did not treat me with any disrespect. The tech explained how to do the tasks correctly.
We had a very experienced consultant set us up.
My advice for someone thinking about Zerto is to do the trial run that Zerto offers; you will be impressed. If you are looking to get RPOs of 30 seconds or less, Zerto can do it. I typically see an RPO of 8-18 seconds. Needless to say, this depends greatly on your WAN link.
I highly recommend going to Boston in 2017 and meeting the staff at Zertocon!
Virtual server replication, as well as a level of backup, to our disaster recovery site.
Zerto is the key to our DR strategy. With Zerto, we were able to replicate our virtual servers to a remote DR site across a WAN connection. Zerto has made it possible to have different hardware (processor and storage) at each site.
I would like them to add a VM host replication option. Being able to replicate host configuration between sites would be a huge benefit.
Zerto is very stable.
No scalability issues. We have added additional licensing over the years and haven't had any issues.
We have not had to use technical support too much, but we have always found the technicians helpful and knowledgeable.
We did not previously use another solution.
The initial setup was very straightforward and easy. We were able to start replication within minutes of the initial setup.
Zerto support helped us install and implement it initially. They were very knowledgeable.
We believe the pricing, setup costs, and licensing are easy to understand. The pricing seems very reasonable.
We did not evaluate other solutions.
For the most part, we are very satisfied with Zerto and its features.
We were able to replace most of VMware SRM with this solution. It allows us to failover individual machines or application clusters with ease. The one thing that it does not do nicely is a full site failover. We have never needed that aspect though (only for testing).
We have leveraged the individual server failovers a number of times, and it has saved us a lot of man hours (doing things such as rebuilding, fighting viruses, or forcing more servers to failover than we wanted). It has been a phenomenal addition, and proved its worth in the pilot phase, when it saved us from having to rebuild a machine that was included in our pilot trial.
Journaling allows us to leverage Zerto's journal for sub-minute recoveries, instead of having to wait for the storage array to replicate. The solution is well worth the money invested.
The full site recovery is not up to SRM standards. Within a VPG, you can do great failover timing as well as ordering and scripting, but if your site contains many VPGs (as mine does), then it is difficult to manage failing over between sites, especially if you are at the site that was impacted.
None. Even the upgrades are speedy and easy.
None. As long as you have the licenses, it goes smoothly.
I have contacted their vendor support in regards to backup performance of SQL databases. They provided me with adequate instruction and background information to be able to adjust my environment to better suit Zerto's processes. It's been smooth sailing since.
VMware Site Recovery Manager. We changed from this vendor because we hit the 75 license threshold and were forced to consider the conversion to Enterprise. We searched the marketplace and Zerto was a great fit for our needs.
It was straightforward and easy. I was able to install it myself without any help from Zerto.
In-house was all that was necessary. It only required one engineer to work for about two hours to install everything, and then a week to configure and protect the entire environment. This will vary depending on your link to your DR site.
The cost is steep, but once you experience recovering a single server along with its granular restore times, you will see that the cost is justified.
We evaluated Unitrends.
Make sure that you understand the limitations of any software before you dive in. Make sure you document your use cases and have the vendor show you how it can perform those tasks.
Virtualization. Zerto improves business continuity and disaster recovery tremendously.
Adding or changing VPGs (Virtual Protection Groups) may require restarting replication.
The product is very stable; no issues with upgrading to new releases.
Adding additional VMs is fairly easy. Adding or changing VPGs (Virtual Protection Groups) may require restarting replication.
Very good.
Previously we used SRM (Site Recovery Manager). Zerto is much easier to set up and configure. Failover using Zerto is simply a one-button click, and it does everything else in restoring the VMs at a different datacenter (recovery site).
Initial setup is fairly easy and the environment can be protected in just a few hours.
You can find providers of a DRaaS solution with Zerto license fees for each VM. Zerto only sells to partners and they have a robust partner organization.
The product works and does what it says. Zerto provides enterprise-class, virtual replication and BC/DR solutions for private, hybrid, and public clouds. Future releases will provide multiple destinations/locations to store the replicated data.
Most companies have used backup software for their protection, or disk array replication. Zerto leapfrogs those data protection methods and provides a much more affordable BC/DR solution, with improved RPO and RTO.
Zerto is an excellent solution for cloud-based environments, but for DIY clients who have another site to recover their systems it also works well.
The setup is easier than most products, and for us as a cloud partner, once a customer is trained to create VPGs, they are good to go.
An integrated encryption would allow for faster initial install and connection to the remote cloud site.
Their offsite backup is a bit clunky, but it will probably improve.
No issues with stability. The delivered upgrades and major updates are stable.
No issues with scalability, it pretty much takes care of itself. One does have to watch where all the recovery site systems are located, to avoid running out of space on the datastores. We can control/move recovery VMs as necessary.
Awesome. Their helpdesk people are among the best.
The product needs a VPN tunnel from the customer site to ours. VPNs can be tricky depending on the compatibility of the hardware. The programs themselves are a snap, and surprisingly small.
Cloud providers get good pricing to encourage quick adoption. A new feature is the One-To-Many VPG allowing a VM to be replicated at up to three different locations, including local.
As a cloud service provider, we have many tools to satisfy the needs of the customer. We have used Asigra, Veeam, StorageCraft, as well as Zerto. Each has its strengths. The market is heating up because of CryptoLocker and other viruses.
There are many products on the market that perform Virtual Machine replication. The other products use Snapshot technology which can have issues with Hypervisors or large disk volumes. The datastore or shared disk (depending on Hypervisor) must have enough free space to allow the Snapshot to be open for as long as the backup runs. This can lead to crashes and consolidation issues, which are usually painful. Zerto is a log-based replication product, for that I give it a 10 out of 10.
I've been using it over a year now, and the product has kept improving. It is easy to upgrade to the next minor or major release.
In terms of advice, I would say become savvy with VPNs, as well as with VMware or Hyper-V.
Improved the DR RPO and BCP.
VMware VM replication over narrow WAN bandwidth.
It needs to support more public cloud, especially in China.
No stability issues so far.
No scalability issues so far.
Excellent.
No previous solution.
Very simple.
It's a little bit expensive.
We tested several other products such as NAKIVO, Veeam, and VM Replication. Zerto proved to have better replication efficiency over WAN bandwidth.
It works only for a virtual platform, it does not support bare metal. If you're looking for a comprehensive solution for both virtual and physical platforms, then Veeam is a preferable solution.
In our case, we used Zerto Replicator mainly for DRP (Disaster Recovery Plan), but also for testing.
For example, journaling capability allows you to recover from a ransomware attack. Thus, it is not only used in DRP scenarios.
In addition, there are increasingly more environments (such as IBM BlueMix) that support Zerto replication for public cloud contingency scenarios.
Zerto allows RPO of seconds, without need of snapshots. It is agnostic to storage and allows journaling of up to 30 days.
For me, limiting the minimum licensing package to 15 virtual machines (VMs) is an issue. Not all environments (especially in LATAM) start with 15 VMs.
No, not really.
No, not really.
The support is in English only, and I rate it 4/5.
I know Veeam B&R and VMware SRM (along with vSphere Replication), and in environments with aggressive RPOs and no reliance on snapshots, Zerto is a superior solution.
It is not really complicated if you do a good design beforehand. Installation is non-invasive and does not require agents in the virtual environment.
The licensing is per virtual machine, starting at 15 and growing in packs of 10. There is annual support that must be contracted.
Yes. Veeam B & R and VMware SRM (along with vSphere Replication and storage-level replication) were evaluated.
It is important to have clear:
Zerto is used to provide real-time replication for the important virtual machines that my organisation uses to our Disaster Recovery (DR) site to ensure business continuity.
The ability to test which virtual machines can be failed over to our DR site without interruption of our production environment. Being able to do file level recovery in case you delete a file accidentally or want to recover from a ransomware attack.
It allows for my organization to quickly recover from any disaster with very little downtime utilizing a user interface that requires minimal knowledge or experience.
I cannot think of any new features that should be added at the moment. With time, I should be able to make suggestions.
None whatsoever.
The only issue that I observed was that depending on the number of virtual machines that are being replicated, you will have to provision the appropriate bandwidth for the link that the replicated systems will traverse. Zerto gives you a bandwidth calculation estimate, but in my case that still was not enough to handle the volume of traffic being generated by our virtual machines.
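As a sanity check alongside Zerto's own bandwidth estimate, it can help to size the replication link for the peak combined change rate of the protected VMs rather than the average. The short sketch below shows that arithmetic; the compression ratio and headroom factor are assumptions you would tune from observed traffic, not Zerto-published figures.

# Rough WAN sizing sketch: plan the link around the peak combined change rate
# of all protected VMs, with compression and headroom as tunable assumptions.
def required_link_mbps(peak_change_rate_mbps: float,
                       compression_ratio: float = 0.6,
                       headroom: float = 1.5) -> float:
    """peak_change_rate_mbps: combined peak write rate of the protected VMs."""
    return peak_change_rate_mbps * compression_ratio * headroom

# Example: protected VMs that together peak at 120 Mbps of changes.
needed = required_link_mbps(120)
print(f"Plan for roughly {needed:.0f} Mbps")  # ~108 Mbps -- a 100 Mbps link is marginal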
10 out of 10.
We used to use Veeam Backup and Replication. We encountered some loss of connectivity with Veeam when we replicated some of our larger virtual machines that we hosted on our older virtual machine hosts.
It was straightforward and easy to set up. Once the software was executed, all that was needed were the basic environment details as well as the hypervisor information.
We implemented through a vendor team. They were very experienced and were able to provide detailed answers to all of our questions.
We expect to achieve a ROI within four years of the purchase. However, the ability to almost instantly failover and the fast file level ransomware recovering times give you that peace of mind that allows for low stress levels.
The setup will require that you have a domain controller and DNS at your DR site as well as a second hypervisor product (VMware vCenter Server/Microsoft Hyper-V) there as well. So, the additional software licensing will have to be factored into your operational budget.
No, we did not. When we did our research, Zerto was the name that always came out as the market leader.
We use the Zerto implementation to protect critical VMs and groups of VMs (application consistency) from failure. Zerto helps us test and fail over without pain. The installation spans a local primary site and a remote disaster site a few hundred kilometers away, with bandwidth of up to 30 Mbit.
Managing the system is easy and reliable; you can choose any VM you want to replicate to your DR site in combination with other VMs. Testing DR is easy and well reported.
Any business unit can define its SLA needs, and the IT department is able to meet those needs with less management and overhead. If a problem occurs (like ransomware or database errors), the IT department is able to roll back to the right point without losing productivity on other, unaffected VMs. So for both the business and IT, it is much easier to use Zerto and profit from the best functionality and performance in this area of replication tools.
Migration of complex VMware and Hyper-V solutions. We use Zerto to replicate to Azure and S3.
DR solutions with less management and less space. Licensing of the DR site is not necessary until a VM is activated. That is very good news for DB users.
As described above, only the WAN traffic regulation needs to be monitored; once it is running, it works fine and is absolutely stable.
More VMs mean more bandwidth over the WAN, but this is normal. Compared with other replication tools, Zerto works well and its compression is fast and stable. If you want to scale, order licenses for it and carry on.
Customer Service:
Really fast and helpful. The documentation is good material to read before calling; most of the events are well described and can be resolved easily by yourself.
Technical Support:
Very fast and very good.
We previously used VMware Site Recovery. It was too complex and expensive overall.
In parallel with Zerto as the primary replication tool, we are using Veeam Always On Replication version 9.5. It works, but we can't replicate in the same manner as with Zerto, because that tool works with events that are queued, so you will not be able to replicate in the same way Zerto does. Also, the number of VMs that can replicate at the same time is limited by the Veeam proxy environment: more proxies allow more VMs, but also mean more overhead and bandwidth usage.
It works fine for replicating a few times a day, but not every few seconds.
If you follow the documentation, you need about 20 minutes to get to the first run of replication. This is fast, and you can try it yourself with the trial license from Zerto.
No, we went by the documentation and did it without an external team.
Hopefully 50% less than with the other solutions; we will take a look at it after a year in production.
Licensing is VM-based, so you can buy packages or single VMs. The price is not low, but the power of the application is high, so you will get your money back in the case of a disaster: you will be back in production quickly, which pays off for the business units you save from outages.
Yes, VMware Site Recovery and the Veeam Always On solution.
With the next generation, Zerto 5.5, they allow replication and production in Azure, so cloud-based DR becomes a reality.
Everybody who is looking for an alternative to physical synchronous mirroring of data (metro cluster) should think about business needs and ABC (Application Business Continuity). Zerto can do it and helps you keep the business online at a lower cost than other solutions.
Including application license, support and maintenance, cost reductions and project non-app development labor costs, we see Zerto reducing overall project implementation costs by 20-25% and reducing project implementation time by 2-6 weeks. Farther along, DR test planning and execution is reduced from hundreds of hours to just a few hours. These are huge numbers, but with over 100 applications using Zerto, we have the track record to prove it.
Further savings will accrue over application lifecycles as we begin to use Zerto as an operational support tool for application and data migration, escalation of new releases into production, refreshing and cloning new dev/test environments. These are all tasks that previously took hundreds of planning and execution man-hours now can be reduced to 10 or 20 hours total. For example, one app team refreshes their dev environments 4X annually. By using Zerto, the reduced downtime, planning and manpower requirements for refreshes effectively will add another 4 to 6 weeks annually for work on new application enhancements.
Replication of business critical VMware VMs over WAN to the remote disaster recovery datacenter.
The benefits are obvious: The simplicity of the setup and the speed of replication.
The speed of WAN replication is great. It is faster than RecoverPoint; 20 VMs replicating over a 20 Mbps VPN have an RPO of less than one minute. Of course, your mileage will vary.
Compared to the previous DR replication solution, Zerto has decreased both RPO and RTO significantly.
In Zerto for vSphere, it is a pain to remove the virtual appliance if for some reason you lose the host.
Make sure to understand how Zerto supports Microsoft/SQL Clusters - more of an advice to the companies thinking of implementing Zerto.
An additional comment: Zerto has a long-term recovery option built in, so you could eliminate Veeam. Basically, we set up a storage array, assigned it a protected share, and created a Zerto repository on it. Now our backups, both short term and long term, are covered. Zerto also has the ability to restore individual files. A nice software solution for whatever hardware you want to use.