Buyer's Guide
Backup and Recovery Software
September 2022
Get our free report covering Rubrik, Veeam Software, Commvault, and other competitors of Cohesity DataProtect. Updated: September 2022.
632,539 professionals have used our research since 2012.

Read reviews of Cohesity DataProtect alternatives and competitors

John Leitgeb - PeerSpot reviewer
IT Director at Kingston Technology
Real User
Top 20
Easy-to-use interface, good telemetry data, and the support is good
Pros and Cons
  • "If we lost our data center and had to recover it, Zerto would save us a great deal of time. In our testing, we have found that recovering the entire data center would be completed within a day."
  • "The onset of configuring an environment in the cloud is difficult and could be easier to do."

What is our primary use case?

Originally, I was looking for a solution that allowed us to replicate our critical workloads to a cloud target and then pay a monthly fee to have it stored there. Then, if some kind of disaster happened, we would have the ability to instantiate or spin up those workloads in a cloud environment and provide access to our applications. That was the ask of the platform.

We are a manufacturing company, so our environment wouldn't be drastically affected by a webpage outage. However, depending on the applications that are affected, being a $15 billion company, there could be a significant impact.

How has it helped my organization?

Zerto is very good in terms of providing continuous data protection. Bear in mind that the ability to do this in the cloud is newer to them than what they've traditionally done on-premises. Along the way, there are some challenges in working with a cloud provider: having the connectivity methodology to replicate the VMs from on-premises to Azure through the Zerto interface, and making sure that there's a healthy copy of Zerto in the cloud. For that mechanism, we spent several months working with Zerto, getting it dialed in to support what we needed to do. Otherwise, everything else they've been known to do has worked flawlessly.

The interface is easy to use, although configuring the environment, and the infrastructure around it, wasn't so clear. The interface and its dashboard are very good and very nice to use. The interface is very telling in that it provides a lot of the telemetry that you need to validate that your backup is healthy, that it's current, and that it's recoverable.

A good example of how Zerto has improved the way our organization functions is that it has allowed us to decommission repurposed hardware that we were using to do the same type of DR activity. In the past, we would take old hardware and repurpose it as DR hardware, but along with that you have to have the administration expertise, and you have to worry about third-party support on that old hardware. It inevitably ends up breaking down or having problems, and by taking that out of the equation, with all of the DR going to the cloud, all that responsibility is now that of the cloud provider. It frees up our staff who had to babysit the old hardware. I think that, in and of itself, is enough reason to use Zerto.

We've determined that the ability to spin up workloads in Azure is the fastest that we've ever seen because it sits as a pre-converted VM. The speed to convert it and the speed to bring it back on-premises is compelling. It's faster than the other ways that we've tried or used in the past. On top of that, they employ their own compression and deduplication in terms of replicating to a target. As such, the whole capability is much more efficient than doing it the way we were doing it with Rubrik.

If we lost our data center and had to recover it, Zerto would save us a great deal of time. In our testing, we have found that recovering the entire data center would be completed within a day. In the past, it was going to take us close to a month. 

Using Zerto does not mean that we can reduce the number of people involved in a failover. You still need expertise with VMware, Zerto, and Azure. It may not need to be as in-depth, and it's not as complicated as some other platforms might be. The person may not have to be such an expert, because the platform is intuitive enough that somebody at that level can administer it. Ultimately, you still need a human being to do it.

What is most valuable?

The most valuable feature is the speed at which it can instantiate VMs. When I was doing the same thing with Rubrik, if I had 30 VMs on Azure and I wanted to bring them up live, it would take perhaps 24 hours. Having 1,000 VMs to do, it would be very time-consuming. With Zerto, I can bring up almost 1,000 VMs in an hour. This is what I really liked about Zerto, although it can do a lot of other things, as well.

The deduplication capabilities are good.

What needs improvement?

The onset of configuring an environment in the cloud is difficult and could be easier to do. When it's on-premises, it's a little bit easier because it's more of a controlled environment. It's a Windows operating system on a server and no matter what server you have, it's the same.

However, when you are putting it on AWS, that's a different procedure than installing it on Azure, which is a different procedure than installing it on GCP, if they even support it. I'm not sure that they do. In any event, they could do a better job in how to build that out, in terms of getting the product configured in a cloud environment.

There are some other things they can employ, in terms of the setup of the environment, that would make things a little less challenging. For example, you may need to have an Azure expert on the phone because you require some middleware expertise. This is something that Zerto knew about but maybe could have done a better job of implementing it in their product.

Their long-term retention product has room for improvement, although that is something that they are currently working on.

For how long have I used the solution?

We have been with Zerto for approximately 10 years. We were probably one of the first adopters on the platform.

What do I think about the stability of the solution?

With respect to stability, on-premises, it's been so many years of having it there that it's baked in. It is stable, for sure. The cloud-based deployment is getting there. It's strong enough in terms of the uptime or resilience that we feel confident about getting behind a solution like this.

It is important to consider that any issues with instability could be related to other dependencies, like Azure or network connectivity or our on-premises environment. When you have a hybrid environment between on-premises and the cloud, it's never going to be as stable as a purely on-premises or purely cloud-based deployment. There are always going to be complications.

What do I think about the scalability of the solution?

This is a scalable product. We tested scalability starting with 10 VMs and went right up to 100, and there was no difference. We are an SMB, on the larger side, so I wouldn't know what would happen if you tried to run it with 50,000 VMs. However, in an SMB-sized environment, it can definitely handle or scale to what we do, without any problems.

This is a global solution for us and there's a potential that usage will increase. Right now, it is protecting all of our critical workloads, but not everything. What I mean is that some VMs in a DR scenario would not need to be spun up right away. Some could be done a month later, and those particular ones would just fall into our normal recovery process from our backup.

The backup side is what we're waiting on, or relying on, in terms of the next ask from Zerto. Until then, we could literally use any other backup solution along with Zerto. I'm perfectly fine doing that, but I think it would be nice to use Zerto's backup solution in conjunction with their DR, just because of the integration between the two.

How are customer service and technical support?

In general, the support is pretty good. They were just acquired by HPE, and I'm not sure if that's going to make things better or worse. I've had experiences on both sides, but I think overall their support has been very good.

Which solution did I use previously and why did I switch?

Zerto has not yet replaced any of our legacy backup products but it has replaced our DR solution. Prior to Zerto, we were using Rubrik as our DR solution. We switched to Zerto and it was a much better solution to accommodate what we wanted to do. The reason we switched had to do with support for VMware.

When we were using Rubrik, one of the problems we had was that if I instantiated the VM on Azure, it's running as an Azure VM, not as a VMware VM. This meant that if I needed to bring it back on-premises from Azure, I needed to convert it back to a VMware VM. It was running as a Hyper-V VM in Azure, but I needed an ESX version or a VMware version. At the time, Rubrik did not have a method to convert it back, so this left us stuck.

There are not a lot of other DR solutions like this on the market. There is Site Recovery Manager from VMware, and there is Zerto. After so many years of using it, I find that it is a very mature platform and I consider it easy to use. 

How was the initial setup?

The initial setup is complex. It may be partly due to our understanding of Azure, which I would not put at an expert level. I would rate our skill at Azure between a neophyte and the mid-range in terms of understanding the connectivity points with it. In addition to that, we had to deal with a cloud service provider.

Essentially, we had to change things around, and I would not say that it was easy. It was difficult and definitely needed a third party to help get the product stood up.

Our deployment was completed within a couple of months of ending the PoC. Our PoC lasted between 30 and 60 days, over which time we were able to validate it. It took another 60 days to get it up and running after we got the green light to purchase it.

We're a multisite organization, so the implementation strategy started with getting it baked in at our corporate location and validating it. Then we built out an Azure footprint globally and extended the product into those environments.

What about the implementation team?

We used a company called Insight to assist us with implementation. We had a history with one of their engineers from previous work that we had done, and we felt that he would be a good person to walk us through the implementation of Zerto. Zerto engineers were working with us as well, so we had a mix of people supporting the project.

We have an infrastructure architect who is heading the project. He validates the environment, builds it out with the business partners and the vendor, helps figure out how it should be operationalized, and configures it. Then it gets passed to our data protection group, whose admins administer the platform, and from there it largely maintains itself.

Once the deployment is complete, maintaining the solution is a half-person effort. There are admins who have a background in data protection, backup products, as well as virtualization and understanding of VMware. A typical infrastructure administrator is capable of administering the platform.

What was our ROI?

Zerto has very much saved us money by enabling us to do DR in the cloud rather than in our physical data center. To do what we want to do with physical hardware, keeping that hardware at the ready with support and maintenance just to be able to stand up on it, would cost far more than what we're doing now.

By the way, we are doing what is considered a poor man's DR. I'm not saying that I'm poor, but that's the term I place on it because most people have a replica of their hardware in another environment. One needs to pay for those hardware costs, even though it's not doing anything other than sitting there, just in case. Using Zerto, I don't have to pay for that hardware in the cloud.

All I pay for is storage, and that's much less than what the hardware would cost. Running that full environment with everything on it, just sitting idle, would cost on the order of ten to one.

That ratio holds because the storage it replicates to is not the fastest tier. There are no VMs and no compute or memory associated with replicating, so all I'm paying for is the storage.

So in one case I'm paying only for storage, and in the other I would have to pay for storage plus hardware, compute, and connectivity. Storage is inexpensive, but once you add up compute, maintenance, networking connectivity, and the soft costs and man-hours to support that environment just to have it ready, I would say ten to one is probably a fair assessment.
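The back-of-the-envelope math behind that ten-to-one claim can be sketched as follows. All figures here are illustrative assumptions for the sketch, not the reviewer's actual costs.

```python
# Illustrative monthly-cost comparison: a replication-target-storage-only
# cloud DR model vs. a warm-standby data center. All numbers are assumed.

def storage_only_cost(tb_replicated: float, usd_per_tb: float) -> float:
    """Cloud DR model: pay only for the (slower, cheaper) replica storage."""
    return tb_replicated * usd_per_tb

def warm_standby_cost(tb_replicated: float, usd_per_tb: float,
                      compute: float, networking: float,
                      maintenance: float, admin_hours: float,
                      usd_per_hour: float) -> float:
    """Traditional DR: the same storage plus idle hardware, connectivity,
    support contracts, and the man-hours to keep it all at the ready."""
    return (tb_replicated * usd_per_tb + compute + networking
            + maintenance + admin_hours * usd_per_hour)

cheap = storage_only_cost(100, 20)  # 100 TB at an assumed $20/TB/month
full = warm_standby_cost(100, 20, compute=11000, networking=2000,
                         maintenance=3000, admin_hours=40, usd_per_hour=50)
print(f"storage-only: ${cheap:,.0f}/mo, warm standby: ${full:,.0f}/mo, "
      f"ratio {full / cheap:.0f}:1")
```

With these assumed inputs the ratio comes out to 10:1; the point is that every line item besides storage disappears in the storage-only model.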

When it comes to DR, there is no real return on investment. The return comes in the form of risk mitigation. If the question is whether I think that I spent the least amount of money to provide a resilient environment then I would answer yes. Without question.

What's my experience with pricing, setup cost, and licensing?

If you are an IT person who thinks DR is too expensive, the cloud option from Zerto is good because anyone can afford to use it, at least to get one or two critical workloads protected. The real value of the product is that if you didn't have any DR strategy, because you thought you couldn't afford it, you can at least have some form of DR, with your most critical apps up and running to support the business.

A lot of IT people roll the dice, betting that day will never come, as a way to save money. My advice is to look at the competition out there, such as VMware Site Recovery, and, like anything else, try to leverage the best price you can.

There are no costs in addition to the standard licensing fees for the product itself. However, for the environment that it resides in, there certainly are. With Azure, for example, there are several additional costs including connectivity, storage, and the VPN. These ancillary costs are not trivial and you definitely have to spend some time understanding what they are and try to control them.

Which other solutions did I evaluate?

I looked at several solutions during the evaluation period. When Zerto came to the table, it was very good at what it does. The other products could arguably instantiate VMs and do DR, but they couldn't do everything that Zerto has been doing. Specifically, Zerto could bring the environment up in an isolated test bubble, so that you can test it and ensure there is no cross-contamination. That added feature, on top of the fact that it can do it so much faster than Rubrik could, was the compelling reason we went that way.

Along the way, I looked at Cohesity and Veeam and a few other vendors, but they didn't have an elegant way of doing what I wanted to do, which is sending copies to an inexpensive cloud storage target and then having the mechanism to instantiate them. The mechanism just wasn't as polished with some of those vendors.

What other advice do I have?

We initially started with the on-premises version, where we replicated our global DR from the US to Taiwan. Zerto recently came out with a cloud-based, enterprise variant that gives you the ability to use it on-premises or in the cloud. With this, we've migrated our licenses to a cloud-based strategy for disaster recovery.

We are in the middle of evaluating their long-term retention, or long-term backup solution. It's very new to us. In the same way that Veeam, and Rubrik, and others were trying to get into Zerto's business, Zerto's now trying to get into their business as far as the backup solution.

I think it's much easier to do backup than what Zerto does for DR, so I don't think it will be very difficult for them to deliver a table-stakes backup product: file retention, multiple targets, and that kind of thing.

Right now, I would say they're probably at the 70% mark as far as what I consider to be a success, but each version they release gets closer and closer to being a certifiable, good backup solution.

We have not had to recover our data after a ransomware attack but if our whole environment was encrypted, we have several ways to recover it. Zerto is the last resort for us but if we ever have to do that, I know that we can recover our environment in hours instead of days.

If that day ever occurs, which would be a very bad day if we had to recover at that level, then Zerto will be very helpful. We've done recoveries in the past where the on-premises restore was not healthy, and we've been able to recover very fast. It isn't the one-off, small restores that are compelling, because most vendors can provide that. It's the sheer volume of being able to restore so many VMs at once that's the compelling factor for Zerto.

My advice for anybody who is implementing Zerto is to get a good cloud architect. Spend the time to build out your design, including your IP scheme, to support the feature sets and capabilities of the product. That is where the work needs to be done, more so than the Zerto products themselves. Zerto is pretty simple to get up and running but it's all the work ahead in the deployment or delivery that needs to be done. A good architect or cloud person will help with this.

The biggest lesson that I have learned from using Zerto is that it requires good planning but at the end of it, you'll have a reasonable disaster recovery solution. If you don't currently have one then this is certainly something that you should consider.

I would rate Zerto a ten out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Storage Administrator at a healthcare company with 5,001-10,000 employees
Real User
Top 20
Cut our backup management time significantly, and near-instant recovery reduces our downtime
Pros and Cons
  • "We do like the instant recovery... Now, we say, "Okay, give me 15 seconds and I can get this back up for you." And within that 15 seconds it's on and the only thing that we have to do afterwards is vMotion it off of the Rubrik storage back to where it should rest."
  • "The interface is still slightly clunky and has room for improvement. They do work with us whenever we mention anything that needs to be done or anything that we want. We find that bringing up the management interface is a little slow and not as intuitive as we would like, but it's been getting better as it evolves."

What is our primary use case?

We came from two different systems. We had one product that was for our campus side and a different product for the hospital side. We wanted to bring those together and not have too many products in one environment. Rubrik covers everything in our VMware, for both campus and hospital. It does all of our backups. Anything that gets backed up for either side now goes through it.

We were siloed out into many different teams on both sides and we had a backup team on campus and a backup team on the hospital side. When those were brought together, the backup teams were dissolved and they were put into the VMware side where they're now managing hardware and server hardware refreshes.

My team is now the storage and backup team and we've taken on that task. Backups are offered as part of pretty much any ticket requesting a new server, for campus or hospital. We spin up the backup at server creation.

Our Rubrik is all on-prem. We back up our VMware environment and we also do a few physicals. We do some SQL and we do some Oracle.

How has it helped my organization?

It depends on what we're recovering, but some recoveries, before Rubrik, would take 30 minutes-plus. Now, similar recoveries that we've done have taken only seconds.

Also, when we first put this into place, we were actually moving to a hybrid cloud approach as well. We were trying to offer server creation as a simple ticket. We were doing this through offering the products, the catalog, and the automation behind everything to spin up the servers and deal out the storage. The two products that we actually have in our environment weren't very friendly with that automation piece but Rubrik, with its SLA policies, makes it very easy for us to say, "Hey, if this is a tier-zero application, we want this SLA applied globally," although there aren't very many of those in our environment. And if it's a tier-one application we can say, "Oh, we want this SLA applied." It does a very good job of keeping things clean in our environment. We also went through making sure we have everything tagged in VMware so that Rubrik can just pull that tag and apply that SLA. So things work pretty smoothly with all of that together.

We use the archival functionality. We tend to keep things on a Brik for a certain amount of time and, of course, it's a larger amount of time for tier-zero applications. And then we archive off to a private cloud that we have here at the university. That definitely keeps costs down because we have a deep and cheap storage solution for that cloud, Hitachi Content Platform. That was one of the main reasons that we went with Rubrik, as well, as it is compatible with HCP. We have quite a few petabytes of that and we wanted to make sure that we could leverage that and use it to our advantage.

Another benefit has been that management time has gone down significantly. Before, we had those two teams, one for NetBackup and one for Commvault, each with two people on them. Now, we have one person on the storage team who is dedicated pretty much to backups, and the rest of us jump in as needed. We've really been able to consolidate that effort, and since it's an easy-to-use interface, we were able to pick it up and run with it as a storage team. With NetBackup before, we had to build out quite a few servers and other infrastructure to get it into HCP. That whole model, with lots of media servers, was very costly when you add in all of the hardware costs, licensing, et cetera. With this, it's quite a bit cheaper.

And Rubrik has definitely reduced downtime, because if we can spin up a recovery faster on Rubrik's local compute and storage and have it up instantly, we can definitely get back to work sooner.

What is most valuable?

We do like the instant recovery because, beforehand, we would tell people, "Hey, it's going to take anywhere from 30 minutes to an hour to spin this up and, in that time, we're going to need your help with certain questions." We would sit there and work with them, but it always took quite a while. Now, we say, "Okay, give me 15 seconds and I can get this back up for you." And within that 15 seconds it's on and the only thing that we have to do afterwards is vMotion it off of the Rubrik storage back to where it should rest.

We also like the web interface. We mainly log in to the node and work from that, but occasionally we will log in and look at things when offsite. It's very intuitive and it works really well.

In addition, the solution's APIs play in with our automation piece for hybrid cloud. We wanted everything to work without manual interaction. We wanted everything to just play through when a ticket is submitted and automatically spin up the backup that we wanted, based on the tag in the VMware object. Our VMware team was the one that mainly looked at those APIs and built all of that out, but they haven't had any issues with it. It's worked exactly as designed.
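As a rough sketch of how such tag-driven automation might look: the tag names, SLA names, and endpoint path below are hypothetical placeholders for illustration, not this team's actual configuration or Rubrik's documented API.

```python
# Hypothetical sketch: resolve a VMware tier tag to a backup SLA and build
# the request a ticket-driven provisioning workflow might send to a backup
# platform's REST API. Tags, SLA names, and the path are assumptions.

TAG_TO_SLA = {
    "tier-0": "Gold-Global",      # most critical apps, applied globally
    "tier-1": "Silver-Standard",
    "tier-2": "Bronze-Weekly",
}

def sla_for_tag(vm_tag: str) -> str:
    """Resolve the SLA domain name for a VMware tag, with a safe default."""
    return TAG_TO_SLA.get(vm_tag, "Bronze-Weekly")

def build_assignment(vm_id: str, vm_tag: str) -> dict:
    """Describe the (hypothetical) API call the workflow would make."""
    return {
        "method": "PATCH",
        "path": f"/api/v1/vmware/vm/{vm_id}",  # placeholder endpoint
        "body": {"configuredSlaDomainName": sla_for_tag(vm_tag)},
    }

req = build_assignment("vm-1234", "tier-0")
print(req["body"])  # {'configuredSlaDomainName': 'Gold-Global'}
```

The design point is that the SLA decision lives in one mapping keyed off the VMware tag, so server-creation tickets never need manual backup configuration.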

What needs improvement?

The interface is still slightly clunky and has room for improvement. They do work with us whenever we mention anything that needs to be done or anything that we want. We find that bringing up the management interface is a little slow and not as intuitive as we would like, but it's been getting better as it evolves.

Rubrik is a somewhat new company, so it needs to become a little more established, and that just comes with time. It's not really too much of a concern or a weakness. It's just something that hasn't happened yet.

For how long have I used the solution?

We've been using Rubrik for about a year and a half to two years now.

What do I think about the stability of the solution?

The stability has been good. We don't run into a ton of issues on it.

What do I think about the scalability of the solution?

The scalability is wonderful. That is one of our biggest advantages with this. We can scale out as big or as small as we need to. We went with 20 nodes or so at the start and we've got over 40 now. We continue to expand as needed. We're still not all the way done with rolling this out to replace everything, but every year we're getting more and more nodes in there and replacing more and more.

We've covered about 85 percent of our environment. With the other 15 percent, it wasn't that Rubrik couldn't handle it, it's that the budget only allows for so many nodes to be purchased at a time. On top of that, we need to make sure that we do it in a way that's non-disruptive for work, and there are some teams that would be affected by disruption. We need to go a little bit at a time, which is what we've done. 

For the future, I do see us using it more. We have been doing a soft launch on Oracle, because we needed the tool that Rubrik has that allows for that integration. It was still at an early stage of development, and we weren't comfortable putting it into production until it was more mature. So we have used Rubrik to back up Oracle, but we've used fewer of the automation pieces that Rubrik offers, treating it more as just a landing spot until that tool is fully developed. That's about the only piece that we're going to use more in the future.

How are customer service and technical support?

When we have run into issues, we've reached out to our support team at Rubrik and they've been very quick to respond. Whether they're in the office or not, they do take our calls and help us out. It's always a quick response.

They're a newer company, so I'm sure they're still establishing their place, but the escalation teams and everybody that we've worked with have been capable and they've been able to fix our problems without having to bring in too many people.

Which solution did I use previously and why did I switch?

We had Commvault and NetBackup before. Both of those were based on costly consumption-based licenses, and our CIO really disliked that model. The licenses that we had had been increasing in cost year after year and it just wasn't feasible to keep two separate products that weren't a good fit for the automation piece, for hybrid cloud. And they were on a slightly more pricey model. So rather than going to one or the other, we went out to see if there was anything that made more sense at the time. And that's when we found Rubrik.

With Rubrik, we have an agreement where it isn't license-based, and we are able to add more Briks as needed and more clusters as needed. It makes it extremely easy to expand our backup environment as the need arises.

With the other models out there, you would buy one quota and then you would hit it and prices would change and other things would happen. They have you locked in, no matter what. It was basically a situation where you had to pay whatever price they said you had to pay. With Rubrik, it's been very nice to have all of the equipment in our own data center and to have a little bit more control. For example, if we think we're going to need this much next year, this is what the hardware cost is going to be, and we can pay for any additional capacity that we need. That's been really nice with Rubrik.

How was the initial setup?

Setting up Rubrik was a little bit straightforward and a little bit complex. The team that sold us the product was with us during setup, and we initially tried to add in all of the nodes at the same time. Even that team thought we could do that, and then they remembered, in the middle of it, that we needed to do it in groups. That does take time. We were putting in something like 16 or 20 nodes, and we had to do it four at a time. We had already done the physical installation, all the cabling, and all of that portion. But when we started to add in the nodes, we had to add four, wait for that to finish, then add another four and wait again.

I think that, with time, they may implement a system that queues them up and continues to add nodes as it can. That seems to be a common problem among products in this category. We also have Cohesity in our environment, which we use strictly as a NAS rather than as a backup product, and it suffers from the same issue.

Our Rubrik setup took a few days, between our getting network issues figured out on our side, getting all of the cable management figured out with our data center team, the physical installs, the configuration with the Rubrik partners, and then adding in those nodes four-at-a-time until we had them all in.

We could have done it with less staff but we did want to make sure that all of us were aware of how the implementation worked, so we brought in all five of our team, two Rubrik partners, and two of our reseller partners, as well.

For maintenance of Rubrik we require two to three people. One works on Rubrik pretty much all the time, and the rest of us jump in as needed on little things here and there.

In terms of Rubrik users, in addition to the five of us who do administration, we've given out access to a few of our database groups, so far, where there are 10 to 15 people.

What about the implementation team?

Our reseller was ASG at that time, now it's Sirius. Everything was fine with them. On the Rubrik side, we had an engineer and a sales engineer, and that worked really well.

What was our ROI?

With Rubrik, we have been able to allocate FTEs to the other areas. We could have eliminated them but we chose to reallocate them. As we've had people either retire or move on to something different, we've either not replaced some, or we've been able to replace some of them with lower-level staff, simply because of the ease of use of this product.

On the hospital side, the ROI is from the lower cost, less work to manage it, and the smaller footprint in the data center, which means less power and cooling.

What's my experience with pricing, setup cost, and licensing?

The pricing and licensing of Rubrik is better than products that we've had in the past. It was quite a bit cheaper than Commvault and NetBackup.

Which other solutions did I evaluate?

We worked with our VAR and evaluated anybody that could use the HCP that we have for archive storage. There weren't too many on the market that could do that. Rubrik was really the only solid option we had at the time, other than Commvault and NetBackup, and we weren't too happy with the latter two because of how much they were costing at that time.

What other advice do I have?

We did physical PoCs in our environment and we did have Cohesity and Rubrik side-by-side, as well as NetBackup and Commvault. We did PoCs for moving to public cloud as well, for some of these services. The PoC with Rubrik stood out. 

Make sure that you work with your support team that's going to support you after your purchase and make sure that you're able to work with them well, before you pull the trigger on it. We like to build partnerships. When we have those partnerships, we're able to really rely on them for a long time.

I am a fairly new entry into the backup field. Before, we had Commvault and NetBackup, and when people were showing us how to use those and trying to teach us some of the terms of the backup world, it felt like backup was a very niche piece of IT, with a lingo and a language behind it. It seemed there were definite things that people had experienced before that were common among all backup products, and things they were left wanting or hating. With this new product, Rubrik, we walked into it blind, not being backup admins, and it made a lot of sense to us. And when we did bring in a backup admin, they said it was quite different from anything they had worked on previously, that it made more sense, and that it was quite a bit easier to manage.

Rubrik is something that everybody can understand fairly easily, and when we have given others access to it, such as the database teams, and we've let them run with it and see what they can do, they've been able to implement it really well. They've been able to figure out how to implement the tool in exactly the way that they wanted, whereas before there may have been limitations.

We haven't used the ransomware recovery at this point. We've got some protection behind that, where the backups are locked down and require additional effort to delete or to change. We follow guidelines from our IT security team and Rubrik together. We just haven't seen a scenario yet where we've actually needed to use that.

We have used Rubrik's predictive search, although we don't use it too much right now. Mainly, the way that we've used it so far has been the traditional backup and restore, where we get tickets stating that a backup needs to be spun up and it's done automatically. Then, when somebody comes back later on and says, "Hey, we need this item restored," we're able to call them up and restore it with them on the phone, within a matter of minutes. We haven't really had to use the file search too much or a lot of the tools that they have available for us, just because the need hasn't been there yet.

When it comes to recovery, we usually spin it up and turn it over to the team that asked us to recover that data. The identity and access management team had to spin one up recently. They said that they had a bad patch and wanted us to roll back to that morning. We did that, but it had lost some of the network settings and some of the other things they were used to. We spent about 15 to 30 minutes with them and everything was back exactly the way it should be. That was pretty much the same experience as with other products we had, so it wasn't something new for us.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
David Nahtigal - PeerSpot reviewer
IT System Engineer at a real estate/law firm with 10,001+ employees
Real User
Perfect match for complex environments, as it supports all types of infrastructure
Pros and Cons
  • "We have VMware, Hyper-V, Oracle, and Microsoft SQL. We have a lot of different systems, and all of them are supported under one licensing agreement. That's one of the benefits."
  • "We had some small issues with the reporting, but that was just a matter of fine-tuning the kinds of messages we receive by email. It was a little overwhelming in the initial configuration. So we reviewed our configuration with our partner and customized the reports so that we only get the important reports. I haven't seen any big issues or things that the solution is missing."

What is our primary use case?

The primary use case is as a backup and recovery solution. We have two data centers and we have a Commvault server for replication in both. We back up all our infrastructure with this solution, from Active Directory to SQL, web servers, file servers, databases, et cetera.

How has it helped my organization?

Commvault helps to ensure broad coverage with the discovery of unprotected workloads. The Discovery feature lists all the resources that we have, all the virtual servers and all the physical servers. You can also automatically deploy agents or set up schedules. At first, we did some manual tuning to customize it before deployment. Now, the virtual infrastructure administrator just has to add the VM tag on the virtual machine and that machine will automatically be backed up in the next schedule. It's a good automation feature.
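The tag-driven automation described above boils down to a simple selection rule. Here is a minimal sketch of that flow; the class and function names are illustrative, not Commvault's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    """Illustrative stand-in for a VM discovered in the virtual infrastructure."""
    name: str
    tags: set = field(default_factory=set)

def select_for_backup(vms, backup_tag="backup"):
    """Return the names of VMs the next scheduled job should pick up,
    based solely on the tag the VM administrator applied."""
    return [vm.name for vm in vms if backup_tag in vm.tags]

vms = [
    VirtualMachine("web01", {"backup"}),
    VirtualMachine("test01"),                  # untagged: skipped
    VirtualMachine("db01", {"backup", "sql"}),
]
print(select_for_backup(vms))  # ['web01', 'db01']
```

The point, as the review describes, is that protection becomes opt-in by tag: the VM administrator tags the machine, and the next schedule picks it up without any change to the backup configuration.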

It also helps by minimizing the time our admins spend on backup tasks so that they can spend time on other projects. Before Commvault, we had two backup administrators using a backup and restore application for every restore and test that we had to do. It was a full-time job just monitoring the backups and doing the restores. With our new Commvault solution, we have successfully implemented web-based backup and restore management for our different teams, including our file server, database, and Exchange teams. We split operations among those teams and each one has access to the backup Web Console. This console from Commvault is very useful for segmenting the restore options. That way, the database backup administrator only has access to the database servers, can only do backups and restores of databases, and does not have access to Active Directory or file servers. The web-based backup and restore is a really great option.

Whereas before, we had one full-time engineer doing backups and restores, now that engineer is only working on it for two to four hours per week. Across our four teams, it's saving us about 10 to 12 hours a week.

The solution has helped to reduce storage costs as well. Commvault has an option to move data from primary storage. When you do a backup, it scans all the files from the file server and you can set a policy to remove all files that are more than, say, three years old from the primary storage. And on the primary storage, there is only a link that connects to the backup source. When a user needs a file on secondary storage, there is no problem because it only reads the file. When the user opens that old file, it's automatically restored and the user can access it. For our IT team, it has saved us between 5 and 10 percent of storage. It depends on how widely you implement the solution and the policies you set. You could save 50 percent if you have a broader policy.
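The tiering policy described here — move anything older than the cutoff to secondary storage and leave a link behind — can be sketched as follows. This is a simplified illustration assuming a three-year cutoff, not Commvault's real interface:

```python
import time

THREE_YEARS = 3 * 365 * 24 * 3600  # policy cutoff, in seconds

def partition_by_age(files, now=None):
    """Split (path, last_modified_epoch) pairs into files that stay on
    primary storage and files to archive. In the real product, archived
    files are replaced on primary storage by a stub that transparently
    restores the file when a user opens it."""
    now = time.time() if now is None else now
    keep, archive = [], []
    for path, mtime in files:
        (archive if now - mtime > THREE_YEARS else keep).append(path)
    return keep, archive

now = 1_700_000_000
files = [
    ("q3_report.xlsx", now - 30 * 24 * 3600),       # ~30 days old: stays
    ("2015_audit.pdf", now - 8 * 365 * 24 * 3600),  # ~8 years old: archived
]
print(partition_by_age(files, now))  # (['q3_report.xlsx'], ['2015_audit.pdf'])
```

How much primary storage this frees depends entirely on how aggressive the cutoff is, which matches the reviewer's point that a broader policy could save far more.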

We have also saved on infrastructure costs because Commvault takes less time to do the backup jobs, due to the deduplication. Also, the background tasks that are used to copy the backup jobs to tape are deduplicated. The full backup of our infrastructure can now be done in a couple of hours during the night. Before, some backup tasks would take more than a day, on the weekend. There has been a reduction of 80 or 90 percent in the backup window.
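The deduplication behind that shrinking backup window works roughly like this: data is chunked, each chunk is fingerprinted, and only chunks whose fingerprint hasn't been seen before are transferred. A toy sketch using fixed-size chunks (real products typically use more sophisticated variable-size chunking):

```python
import hashlib

def dedup_backup(data: bytes, seen: set, chunk_size: int = 4096):
    """Return only the chunks of 'data' whose SHA-256 digest is new;
    'seen' persists across backup runs, so repeated data costs nothing."""
    new_chunks = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen.add(digest)
            new_chunks.append(chunk)
    return new_chunks

seen = set()
first = dedup_backup(b"A" * 4096 + b"B" * 4096, seen)   # both chunks are new
second = dedup_backup(b"A" * 4096 + b"B" * 4096, seen)  # identical data: nothing to send
print(len(first), len(second))  # 2 0
```

Because unchanged data never travels again, each nightly full behaves more like an incremental on the wire, which is where the 80 to 90 percent backup-window reduction comes from.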

What is most valuable?

Commvault's most valuable features are its 

  • deduplication
  • encryption
  • support for many OSs
  • support for different infrastructures. 

We have VMware, Hyper-V, Oracle, and Microsoft SQL. We have a lot of different systems, and all of them are supported under one licensing agreement. That's one of the benefits.

We use two user interfaces on a regular basis. One is the Web Console, which is simple and has all the necessary functionality. You can add servers, back up servers, and restore. We also have a replication solution implemented and we use the Web Console for that as well. But for the initial configuration and for some deeper configurations, we also use the Commvault application. It's big and has all the fine-tuning options.

The solution's Command Center is very straightforward. It has an intuitive user interface with graphs and tables, as well as many options for alerting and messaging. Of course, you have to get used to the environment, but it's easy to use.

It is also important that Commvault provides a single platform to move, manage, and recover data across on-premises locations. That's because we have different storage and virtualization platforms. We have no problem if the file resides, say, on NetApp storage and we have to restore data to a workstation or some kind of Windows Server. Also, when we did some migrations from our old Hyper-V cluster to the new VMware cluster, those integrations between different infrastructures were successfully accomplished with the Commvault solution. We have no issues with different types of resources we need to back up.

In addition, the recovery options are pretty straightforward. For example, if you choose a virtual machine, you can restore the full virtual machine, you can restore the virtual machine on a different platform, you can restore just a virtual disk, or you can restore just a file within the virtual machine. You have all the options. In the web-based user interface, you can also restore using download options. You can browse through the files or virtual machines and download the file from the backup. They have a great range of restore options.

What needs improvement?

We had some small issues with the reporting, but that was just a matter of fine-tuning the kinds of messages we receive by email. It was a little overwhelming in the initial configuration. So we reviewed our configuration with our partner and customized the reports so that we only get the important reports. I haven't seen any big issues or things that the solution is missing.

For how long have I used the solution?

We implemented Commvault at the start of the year, so we have been using it for almost a year now.

What do I think about the stability of the solution?

We had one issue. The Commvault server is an Active-Passive cluster and the Active node had some hiccups. It wasn't something serious, but the Commvault server was unable to connect to one of the agents. I believe our partner discovered it because they also receive messages from our Commvault solution. They just informed us that the Commvault server had to be restarted. We did so during working hours because backups are done at night, and there were no issues. It was a standard procedure and we have had no other big issues.

What do I think about the scalability of the solution?

At the start of the Commvault project, we put together a list of all the resources that we have. They counted our resources and gave us the exact number of clients we needed to buy to cover all of our infrastructure and we had no issue there. Of course, we also have some plans for the growth of our infrastructure. If we have any big upgrades, we will also upgrade the Commvault infrastructure.

We have a lot of Commvault's features implemented. We're also in the process of testing the backup of endpoints, such as laptops and devices from end-users. There are just a few features from Commvault that we don't use.

How are customer service and support?

We use technical support through our partner because our partner has a lot of inside knowledge. For the majority of issues our partner gives us the solution, but they have had to report some small issues to Commvault support. They spoke directly with Commvault support and the solution was available in a few days. It was a very good troubleshooting experience.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We used NetWorker and Veeam. The NetWorker solution was the older solution and, in some very old clusters, we also used TSM (Tivoli Storage Manager) from IBM. The TSM solution was no longer supported and the Dell EMC NetWorker solution, which we used for our physical servers, was difficult to maintain. Veeam was a good solution for our VMware infrastructure, but we needed a solution with support for a wider variety of infrastructure types. One of our major goals was to eliminate our multiple backup solutions by going with Commvault.

How was the initial setup?

If we had to do the initial setup ourselves, it would be complex, of course, because we have a big infrastructure with different types of targets. But our partners helped and they managed to cover all the tests that we implemented at the start of the project. So, overall, the setup went really well. It took just a few days, maybe a week, to add our agents. After the initial configuration, it was really easy to roll out the solution to our entire infrastructure.

What about the implementation team?

Our partner, Our Space Appliances, is a system integrator specializing in backup and storage solutions. They know our infrastructure.

Which other solutions did I evaluate?

We had a process for choosing a vendor. We called a number of vendors and had proposals from Veeam, NetWorker, Cohesity, and Commvault.

The big pro for Commvault was that it was a single solution for our entire infrastructure. The licensing model was also an advantage and the experience of the partner was also a big plus. Some of the other solutions we evaluated did not make it to the second round because they did not support all the infrastructure we have in our environment. In the last round, the battle came down to pricing, as well as some small features, and Commvault was the best in all the criteria.

What other advice do I have?

Commvault is a pretty comprehensive but perhaps complex solution when you first start with it. That's also why it is a perfect match for complex infrastructure: it supports all types of infrastructure. Commvault is not appropriate for small businesses with just one type of virtual environment; there are different vendors that may be better for that use case. But when looking at enterprise backup and recovery options, Commvault is the easiest to use, and it has the widest range of features.

We are currently moving to Exchange Online. We have between 1,500 and 2,000 users. We have already deployed Teams on the cloud, and now we are migrating user mailboxes to the cloud. Our next step, in the following month, will be backup of Microsoft cloud solutions through Commvault.

In terms of the coverage of Commvault, we have a big Oracle Database and the Oracle administrators are a separate team. They do their own backups using RMAN and then move the backup to separate Sun ZFS storage. We also tried that backup with Commvault, using the Commvault agent to run RMAN. The test went well and the backup was good, but the database team was used to their old solution, so we agreed to implement a backup of the ZFS file server instead.

Ours is an all-on-prem solution, so we don't have any other networks being backed up. We do have a DMZ with different VLANs, so there were some hurdles: we had to install an agent in the DMZ zone, one that has access to resources in the demilitarized network. But it's a no-brainer. We just have to open a specific port so that the backup agent can communicate with the CommCell server, and the resources are backed up successfully.
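A quick way to verify from the DMZ agent host that the firewall opening is actually in place is a plain TCP reachability check. The hostname and port below are assumptions for illustration only; check your own CommServe address and the ports your Commvault deployment is configured to use:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, i.e. the
    firewall rule allowing the backup agent through is in place."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example: check the CommServe from the DMZ agent host.
# print(can_reach("commserve.example.local", 8400))
```

Running a check like this from the agent before the first backup window catches a missing firewall rule early, instead of discovering it as a failed job the next morning.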

In addition, to protect against ransomware we use Commvault's alert options because Commvault can predict big changes in the network with its AI solution. This is the first line of defense. The second line of defense is that we are now in the process of implementing secondary, offline storage to ensure an air gap between the primary backup, the replicated backup, and the offline backup storage. In case of a ransomware attack we will have off-site backup storage.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Principal at a venture capital & private equity firm with 51-200 employees
Real User
Top 10
Is especially flexible for tape environments
Pros and Cons
  • "If you are running on a legacy tape environment NetBackup is best."
  • "The flip side about NetBackup is that it is not policy-based."

What is most valuable?

In terms of most valuable features, I like the fact that if you have a bunch of backups, NetBackup gives you the ability to have one master server and multiple media servers. That means you can have a bunch of sites that all have tape libraries, with one master server that controls all the jobs across them. You don't have to deploy a standalone NetBackup solution at each site; you can just deploy a media server for each tape library and have one master server that controls all the jobs.

What I also like about NetBackup, as opposed to most solutions like Rubrik and Cohesity, which don't really support backing up to tape environments, is that NetBackup does. If you are running on a legacy tape environment NetBackup is best. Most of the guys I've seen that use NetBackup have a tape environment.

What needs improvement?

The flip side is that NetBackup is not policy-based. For example, Rubrik is a policy-based app: if you have 30 servers to back up, you can create one policy and apply it to them all. NetBackup doesn't do that. With NetBackup, you need to create a backup job for each server you want to back up. That is the only thing I don't really like about it. With Rubrik or Cohesity, you can create one policy and apply it to many servers at one time; with NetBackup, you create a backup job per server, and that takes more time.

If they can improve on policy-based backups, that would be great.
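The difference described above is easy to see in miniature: in a policy-based model one policy object covers many servers, while in a per-job model each server carries its own job definition. The names below are purely illustrative, not either product's API:

```python
servers = [f"srv{i:02d}" for i in range(1, 31)]  # 30 servers to protect

# Policy-based model (Rubrik/Cohesity style): one policy, applied to all.
policy = {"name": "daily-30d", "schedule": "02:00", "retention_days": 30}
policy_jobs = [{"server": s, "policy": policy["name"]} for s in servers]

# Per-server model (NetBackup, per the review): one full job definition
# per server, each carrying its own schedule and retention.
per_server_jobs = [
    {"server": s, "schedule": "02:00", "retention_days": 30} for s in servers
]

# Either way all 30 servers are protected, but changing the schedule is
# one edit in the policy model and 30 edits in the per-server model.
print(len(policy_jobs), len(per_server_jobs))  # 30 30
```

That one-edit-versus-thirty-edits difference is exactly the extra time the reviewer is describing.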

For how long have I used the solution?

I have been using Veritas NetBackup for about 10 or 11 years.

I think the last version I used was version six. I believe they're up to version 10 now. But really, not much has changed, maybe some additional features since the last time I saw it.

The last time I went online I didn't really see much difference from a feature perspective since I began using it. I think the GUI interface looks a little different, a little cleaner, but functionality-wise, I didn't really see much change.

What do I think about the stability of the solution?

In terms of stability, no problem. Like I said, if you have multiple tape libraries, you can have one master server with multiple media servers, so the tape libraries can be scattered across different sites. The one master server you set up controls all the job functions. When you log into it, it kicks off the jobs, and you can pause jobs or keep jobs turned off for particular sites. It controls all the functions and all the backup jobs for all the sites. That's all the master server does; it doesn't actually do any backups itself. It's responsible for kicking off the jobs that perform the backups.

How are customer service and support?

Their customer support is not bad. I don't have any issues with technical support. Technical support is okay.

How was the initial setup?

The initial setup is very easy. NetBackup is really easy to set up, whereas Commvault has a much more convoluted setup. I've never run Commvault myself, but from colleagues I know who use it, you need professional services because it's so convoluted to set up. NetBackup is not that convoluted. Commvault is a very nice application, don't get me wrong; I'm not going to put it down. Once it's running, it's a good product. But from being exposed to Commvault a little, I like NetBackup better. The only downside to NetBackup is that it's not policy-driven.

What's my experience with pricing, setup cost, and licensing?

Pricing depends on the number of licenses and on the number of servers you have. It varies based on the number of servers that you're trying to back up.

What other advice do I have?

My advice to anyone considering Veritas NetBackup is to validate your topology first: look at how many sites you are going to back up. If you have multiple sites, each running a tape library, set them up as media servers and then set up one master server that controls all the job functions for all of those sites. If you only have one site, set up a single server that acts as both master and media server. You want to set this up properly the first time; that is really the biggest thing, to be honest with you.

You might want to confirm whether it supports backing up to Azure or AWS. Some people want to do long-term archiving, so you'd want to confirm whether NetBackup supports backup to Azure, Google Cloud, or AWS from a long-term archiving perspective.

Some people back up to tape and some back up to disk. I just don't know whether it supports backing up to cloud providers.

On a scale of one to ten, I'd say NetBackup is an eight. It's pretty strong. I don't have other problems. I would say it's definitely a strong eight. It's a pretty good product.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Senior Engineer, Disaster Recovery at a financial services firm with 1,001-5,000 employees
Real User
Rock solid, does its job, but needs better UI, deduplication, and ease of doing certain things
Pros and Cons
  • "Scheduling is valuable. It does a good job of backing up, and it does a good job of restoring. Nobody has got a problem with that. The agents are well supported."
  • "When you get down to doing certain things, such as somebody wanting a particular file restored, the process by which you do that is stupid. You have to know exactly where to look in order to find it. Even on older backup products that I've used, I didn't have that kind of problem. If we were looking for a file with a particular kind of name, the solution would find that file anywhere, irrespective of where it resided within the backup system. We didn't have to know the name of the specific server, the specific timeframe, almost all the characters of the file name, and all kinds of other data in order to find a file. In Avamar, we have to know all of these details. We've gone around and around with them on that, and their attitude seems to be that it is working just fine and there is nothing for them to improve. The organizational system of other products that I'm working with, such as Zerto and Cohesity, seems to be centered around the tasks that you would most commonly want to do, as opposed to a really neat technical hierarchy."

What is our primary use case?

It is our main backup system while we're in the middle of switching over to Cohesity.

What is most valuable?

Scheduling is valuable. It does a good job of backing up, and it does a good job of restoring. Nobody has got a problem with that. The agents are well supported. 

In terms of functionality, it is rock solid. It does its job.

What needs improvement?

The UI is a complete mess. It is graphic, but it might as well be a CLI considering how difficult it is to work with. It takes an entire person and a significant amount of time to manage backups within the company. It really shouldn't be that hard.

When you get down to doing certain things, such as somebody wanting a particular file restored, the process by which you do that is stupid. You have to know exactly where to look in order to find it. Even on older backup products that I've used, I didn't have that kind of problem. If we were looking for a file with a particular kind of name, the solution would find that file anywhere, irrespective of where it resided within the backup system. We didn't have to know the name of the specific server, the specific timeframe, almost all the characters of the file name, and all kinds of other data in order to find a file. In Avamar, we have to know all of these details. We've gone around and around with them on that, and their attitude seems to be that it is working just fine and there is nothing for them to improve. The organizational system of other products that I'm working with, such as Zerto and Cohesity, seems to be centered around the tasks that you would most commonly want to do, as opposed to a really neat technical hierarchy.

There should be some kind of greater granularity in the way it stores backups. The reason we're using things like Zerto and going to Cohesity, at least in the DR environment (and this applies to backups as well), is that we need a recovery point objective with some granularity, such as every 15 minutes, every half hour, or every hour, in case of a disaster recovery scenario, ransomware scenario, etc. With Avamar, we're pretty much limited to our once-a-day backup every 24 hours, or however we schedule it. In most cases, we don't do anything different for basic backups, but it seems very difficult within Avamar to take an image of a system every so often, or at least an incremental point of reference or an RPO point.

The other thing is that the way it locks files seems to make systems unavailable while it is running the backup. So we have to very carefully schedule our backups after hours, or over periods when transaction volume is low. With the other products we have, we don't have this problem. I certainly don't have it with Zerto; I've got a recovery point every few seconds, and it doesn't seem to take a lot of storage to do that. Storage is a big thing for us. It is very expensive, and that's always an issue, so things like deduplication would be really nice to have.

For how long have I used the solution?

I have been using this solution for at least six years.

What do I think about the stability of the solution?

It is rock solid. We don't ever have any problems with backups being lost or anything like that.

What do I think about the scalability of the solution?

All of the data in the company is used by one person or another, so there are a couple of thousand users.

How are customer service and support?

Their technical support is excellent; we've almost never had any problem dealing with Avamar in terms of technical support. That said, we've had some nasty instances where they've not been able to drill down on things and support their own product.

How was the initial setup?

I've only been with the company for about five years, and it was present when I came on board.

What other advice do I have?

I would rate Dell EMC Avamar a six out of 10. It is a pretty basic backup system in terms of features. It does its job. However, its UI is just ridiculous.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.