Buyer's Guide
Backup and Recovery Software
July 2022

Read reviews of Veeam Backup & Replication alternatives and competitors

Stuart-Smith - PeerSpot reviewer
Technical Director at Computer Driven Solutions
Real User
If the physical hardware has a problem, then we can utilize this appliance, turn on the virtual machine, and carry on running the business while correcting the issue
Pros and Cons
  • "The recovery of data is the most valuable feature. The software backup, which is just a program that gets installed on a server, can back up to the cloud. You can install it on a server or PC, and it will simply back up a user's files and folders. If it is installed on the server, we just back up the relevant data. Recovering that data after a malicious attack on a business, or anything like that, has been invaluable in the past. The other key benefit is that if the physical hardware has a problem, we can utilize the appliance, turn on the virtual machine, and carry on running the business while we repair the hardware and correct the issue."
  • "When you order hardware appliances, they have to be delivered from America. In the past, the hard drives in the appliances have been simple SSD drives. However, they don't keep a local supply of the SSD drives in the UK; they have to be shipped from the US, and once they arrive, I go and install them, and Infrascale rebuilds the appliance from their side. Yet I could order the exact same SSD drive online in the UK and get it the next day. So, I have to wait days for parts that I could otherwise have the next day, which causes a bit of a problem. I don't know how many customers they have in the UK, but having to import stock from the US is a time-consuming problem. If they held stock in the UK, we wouldn't have that delay."

What is our primary use case?

We are an IT support company, so we resell IT and support services. It is our customers who have installations of disaster recovery appliances and cloud backup solutions.

We are our customers' IT support company; we are the ones who implement and support the solution. The customer pays the bill, but we do all the support and look after it. I do not back up my own data on Infrascale, but I definitely back up my customers' data.

We sell and support two products for our customers. We have four disaster recovery appliances onsite that back up from the appliance to the cloud, and we also have quite a few people on just the cloud backup. So, we use cloud backup and DRaaS, which is disaster recovery as a service.

When there is an appliance-type installation, which is a physical hardware installation, we go to the site and install a piece of hardware. That piece of hardware communicates with their servers onsite. Their servers are hosts with virtual servers built onto them. The complete virtual machine is backed up maybe twice or three times daily to the hardware appliance provided by Infrascale. That could then become a replacement for the server, if the server had a physical problem and needed to be shut down. We can turn the physical server off onsite, go to the appliance provided by Infrascale, boot up the virtual machine on the appliance, and then it would run the business as if the server were still running. So, it is hardware redundancy for the server.

It backs up the virtual machines and all of their files, so it can be a data recovery tool as well. Also, if the entire building burnt down, we could jump onto the version in the cloud, boot it up, and people from all around the world could log into that server and carry on working.

Imagine an appliance, similar to installing a second server, that backs up a virtual machine to the appliance. It has disk space on the appliance, then it backs up that virtual machine from the appliance to the cloud. Our cloud is based in the UK, which is also provisioned by Infrascale. So, we implement that sort of system, which is a little bit like SaaS, but it is a disaster recovery solution. We also have cloud backup, which is a software installation to a server, that then backs up certain files and folders through a cloud provision somewhere in the world.

We are the actual customer because we sign for these products, and our customer doesn't. We are the actual people who lease these things from them.

How has it helped my organization?

Now, if one of our businesses has an issue, I am confident that we would be able to get them booted and running within a couple of hours. It would affect their business, but it is a disaster recovery scenario, so there has obviously been a disaster. 

What is most valuable?

The recovery of data is the most valuable feature. The software backup, which is just a program that gets installed on a server, can back up to the cloud. You can install it on a server or PC, and it will simply back up a user's files and folders. If it is installed on the server, we just back up the relevant data. Recovering that data after a malicious attack on a business, or anything like that, has been invaluable in the past. The other key benefit is that if the physical hardware has a problem, we can utilize the appliance, turn on the virtual machine, and carry on running the business while we repair the hardware and correct the issue.

The Boot Verification feature gives you a snapshot showing what would happen if a virtual machine were booted. From that, you can tell whether the backup was successful.

I find the dashboard fairly straightforward. It is fairly in-depth from day one. The more you use it, the more you get used to it. I find it fairly straightforward now for making some limited changes that won't really cause any problems. It has a good user interface.

The speed of the solution’s restore functionality is very quick. It works a treat and does the job perfectly. I don't think we have ever come to the point of thinking the product isn't quick.

What needs improvement?

We did have a major problem last year in March. Somebody attacked some servers supported by Infrascale and managed to wipe the servers as well as the appliance. However, they didn't manage to wipe the cloud, so what was in the cloud had to be downloaded to the appliance again. The customer was down for about three days. This was a very difficult situation for us to be in.

I think somebody accessed the server and was able to get onto the appliance because we had saved the password. Now, I know better than to save the password. Also, the password was still the default password. The new implementation makes you change the password, so you can't keep the default. However, four years ago, when we implemented it, the default password was still on that appliance when it got wiped.

While this was very much a worst-case scenario, I don't blame Infrascale for the amount of time it took. However, it was difficult for us because we were trying to placate our customer, which has a 25-million-pound turnover. They were not happy, but we are still working with them.

I'd be more confident now dealing with the problem. Plus, we monitor the systems more closely. Previously, I presumed that everything was going well without really checking. Now, I have learned that I need to stay on top of any issues, so I check the appliances and cloud backups daily. We are a bit more switched on about supporting it.

When you order hardware appliances, they have to be delivered from America. In the past, the hard drives in the appliances have been simple SSD drives. However, they don't keep a local supply of the SSD drives in the UK; they have to be shipped from the US, and once they arrive, I go and install them, and Infrascale rebuilds the appliance from their side. Yet I could order the exact same SSD drive online in the UK and get it the next day. So, I have to wait days for parts that I could otherwise have the next day, which causes a bit of a problem. I don't know how many customers they have in the UK, but having to import stock from the US is a time-consuming problem. If they held stock in the UK, we wouldn't have that delay.

When they ship things to me from the US, invariably the delivery company, DHL, is looking for EORI numbers that we don't have. So, they try to involve us in the export process, and it has nothing at all to do with us; we are simply the customer. If I had to moan about anything, that would be it.

Where the dashboard is concerned, I am okay with it. I am looking at one now and understand what I am looking at. When you first get in, it is difficult, but I believe they now offer training for it. Given that we are trying to support our customers in the UK, it is good to have the knowledge, know what you are looking at, see the size of the protected data, and understand it a little better. I have been doing IT support and implementations for the best part of 30 years, and it took me a bit of time to get my head around some of the ways things are done.

For how long have I used the solution?

I have been using Infrascale for over four years, since April 2017.

What do I think about the stability of the solution?

It is very stable. I have never had any problems with the software. If we install the software on anything, it does what it says on the tin, which is that it will run a backup at a certain time. 

There are some issues with the software backup: when installed on a server, it cannot back up users' redirected folders. When you install the software onto the server, you are installing it as an administrator, and an administrator account on a server does not have access to a user's redirected folders.

The user is the owner of that folder on the server; even the administrator can't get into it. While you could force your way in, that breaks the policy, so we don't do that. Out-of-the-box, software installed onto a server does not back up people's redirected folders. If a user saves a lot of files and folders to their desktop, those get redirected back to the server, so quite a bit of their data is missed by the backup. We get a lot of errors based on that. We have found a bit of a workaround, but it doesn't always work either, which causes problems.

I check daily that the backups have gone through. If the backups aren't working correctly, I log onto the appliances and check why the appliances haven't backed up correctly. If it is something that I don't quite understand, then I will pass that down to the support at Infrascale. Where the appliance and cloud backup are concerned, there is very little maintenance to do. 

What do I think about the scalability of the solution?

We have had problems with this. Again, this is down to us not really understanding the product in the first place. In the case of the company where we had problems last year, when it came to bringing the VMs back onto the appliance, we had to download them from the cloud. Once we got the VMs back onto the appliance, we found the hardware was insufficient to run them: the workload ran like a dog when booted on the appliance, so it was unusable for the customer. That was down to us. When we sized the product, there was a misunderstanding of what the minimum would be to allow the server to run. We know a server will run on a minimum of 8 GB; however, with the customer's workload on there, 8 GB was insufficient. So, it caused some problems.

One of our companies, who has had the product for three years, is at the point where I don't think the appliance has enough space left to back up what they have. The only way we can do anything is to keep deleting old backups, which we probably shouldn't do, but that is what we are going through at the moment. We have to really micromanage the backups so the appliance doesn't run out of space. The customer could upgrade, but they aren't realistically going to put their hand in their pocket and pay more at the moment.

For all the people that we have on it, we have six terabytes of space in the cloud for the cloud backup solution, as well as four appliances. I am the one who looks after all of it. I monitor it, and if there is a problem, I deal with it. The guys who work for us run the IT support side of things, and I look after the backup side.

We have a couple of thousand endpoints. We are quite a small IT support company.

How are customer service and technical support?

I am perfectly happy with the support that I receive from Infrascale. The technical support is very good. I deal mostly with one guy there called Maxim, who is fantastic. He understands what the issues are and is very helpful.

They work in Ukraine, so there is a little bit of a language barrier. When we first started working with them, I found it very slow to get my message across, but things have improved. They have now given me a specific person to always deal with, which suits me, and I'm happy with that. You develop a bond with the person and know that they understand your systems. Previously, we could get anybody, and we would have to go through the same process of explaining things all over again. To start with, it was like pulling teeth, but now it's improved.

They are proactive to the point that Maxim will check my system. If he sees something he will log a call with his own support desk. He phoned me up the other day, and I said, "Hello Max, how are you doing?" He said, "Yeah, good." I said, "I've seen a ticket was logged, but I haven't logged it." He said, "No, I logged it for you. Because I noticed something was getting high, and I wanted to have a chat with you about it." That is great, because that is proactive monitoring.

When we first started to do the appliances, I really didn't understand what the service was. I thought it was a managed service for backups. I didn't realize that it would be me who would be managing it. But, the more I have been involved in it, the more I have become accustomed to managing all of its appliances and installations. I take the responsibility for making sure that it works. Support-wise, it has improved. I have seen the business improve over the last 12 months. I think the business got sold or bought out. However, there have definitely been recent improvements with the support.

If I am ever going to do anything that I think is outside of my remit, I will contact support and go through one of their support guys.

Nobody from Infrascale has ever phoned me up and said that they want to test anything.

Which solution did I use previously and why did I switch?

We used Veeam Backup. Mostly, we would use Windows Backup to USB drives.

I switched to Infrascale because I wanted a solution that gave me what I was looking for. Backing up to the cloud directly from your server causes slowdown. I wanted something that would let us back up quickly to an onsite device, then trickle that backup up to the cloud. I checked online, had a look at what customers were using, and read through some material on Infrascale. It just clicked that it seemed to have what I was looking for. So, I contacted them and have never had a problem. I did the due diligence myself, so I am happy with it.

How was the initial setup?

Now, the initial setup is simple. Years ago, when we first started, it wasn't simple.

I installed an appliance yesterday and had it installed within an hour. From the box to the customer's site, it was installed and ready for the next stage. Then, one of the implementation guys from Infrascale gave me a call, and we liaised with each other because they like us to set it up. 

They tell me what they would like me to do. The setup is much easier than when I very first had an appliance; back then, it seemed to take forever. Yesterday, it took no longer than an hour.

Because I have implemented these a few times now, I have a bit of knowledge about the things you come up against when making it all work exactly the way you want. There were things yesterday that I knew we should check, because a couple of the backups failed to start. I told the guy at the other end, "Let's try this." We tried something and made it work because of the knowledge I have built up. It is a lot easier than it used to be, because it used to be difficult. Now, out-of-the-box, it is fairly easy.

When we install a hardware server, we build that server with virtual machines on it. The physical server acts as the host, with Hyper-V installed on it. Then we build the virtual machines, and those virtual machines are essentially the servers that are onsite. For smaller customers, we wouldn't suggest this because it is quite expensive per month; it is a fairly expensive product. However, for customers of a certain turnover, we would suggest it, and we explain to them in full what the disaster recovery solution offers. What we say to our customer is, "Our backup strategy is to back up two to three times a day: before business starts, as business ends, and during business hours." Any one of those backups is bootable, and files are recoverable from three points in each day. That is our backup strategy, which is really based on the appliance.

What about the implementation team?

I help with the implementation. In fact, I implemented one yesterday with some of the guys; Sergei is the implementation guy there. I do a lot of the support for my customers, utilizing the support services of Infrascale, and I deal with a guy called Maxim on a lot of cases. I deal directly with the guys at Infrascale.

Infrascale has never physically tested any of our systems. I have never had a phone call from them to say, "We want to run some tests on something." We have installed the equipment, but they have never tested it.

What was our ROI?

I make sure that this all works. While it does cost a lot of money, it is a good service. I am well bought into what it can do, because as much as it protects the customer, it protects me as well. If we had not had this solution 12 months ago, then the company that we support would no longer want us to work with them, because they would have nothing at all. So, it saved us.

What's my experience with pricing, setup cost, and licensing?

We pay 600 pounds per month for six terabytes of cloud storage and backup. This is a fixed cost of 100 pounds per terabyte.

We pay 752.50 pounds per month for two appliances: one costs 301 pounds per month and the other costs 451.50 pounds per month.

Another appliance costs us 327.60 pounds per month.

The newest appliance that we installed yesterday is costing us 511 pounds per month because it has better speed and memory. 

The appliances have different prices because of storage, size, and memory. For example, the older machines support more virtual machines, whereas the new one only supports one virtual machine. As we have purchased the later appliances, they have probably been a little bit more expensive because they have to be good enough to keep the business running if the physical server goes down. We learned our lesson from the one that went down when we tried to run products and it wasn't quick enough.

What other advice do I have?

I push it in my own business. I wouldn't do that if I didn't think it was any good. I would definitely advise others that it is a good product.

If you want to back up redirected profiles on a server, you have to go into the scheduler and change the backup task to run as a system event, not as an administrator. That is the best thing I learned, because it enables you to back up redirected folders. However, even Infrascale doesn't know that when you sign up, so you can get it to work, but it takes a bit of messing about. If I had to sum it up for somebody: "Get Infrascale, put it on your server, and you can back up the data and the user profiles. You will need to go into the scheduler and change the task to run as a system event, not with administrator rights. It has to run as SYSTEM, and it will then work."
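The scheduler change described above can be sketched from an elevated PowerShell prompt along these lines. This is a hedged example: the task name "InfrascaleBackup" is a placeholder, as the review does not give the actual name of the task the Infrascale installer creates.

```powershell
# "InfrascaleBackup" is a placeholder; find the real task name with
# Get-ScheduledTask before running this.
# Re-register the backup task to run as the SYSTEM account, which (unlike an
# administrator account) can read users' redirected folders on the server.
$principal = New-ScheduledTaskPrincipal -UserId "SYSTEM" -LogonType ServiceAccount
Set-ScheduledTask -TaskName "InfrascaleBackup" -Principal $principal

# Equivalent change with the classic command-line tool:
# schtasks /Change /TN "InfrascaleBackup" /RU "SYSTEM"
```

Either route switches the task's security context from a named administrator to the local SYSTEM account, which is what allows redirected folders to be included in the backup.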

The speed of Infrascale’s backup functionality is good. I have had no complaints. The speed of backing up to the cloud using the software solution is based on the speed of the server at one end, how quick it can run the program, and the upload speed of the site. Realistically, that has nothing to do with the solution provided by Infrascale. Where the appliance is concerned, because the virtual machine is backed up to the appliance, there is no lag on the servers from the appliance, which then backs up to the cloud. That is based on the speed of the customer's bandwidth, because that is how it gets into the cloud. Solution-wise, I think the speed of it is just fine.

The speed of recovering documents depends more on the customer's broadband.

After a few weeks, anyone working with Infrascale should really understand the product, and it should be fairly straightforward. 

We offer it to all our customers. It depends on whether they are prepared to spend any extra money on the solution. So, any new customer who comes onto us, we suggest that they have an offsite cloud data backup that will protect their data only in the cloud. Then, should anything happen, it's recoverable to a drive and we would be able to give it back to them. Backup is a service, and it's also something that they can do themselves locally. We do try and get as many customers onto it as possible because it helps us.

There is a slight language barrier, which is a bit hard to get used to initially, because support is based in Ukraine. I think the language barrier marks it down one point, so it is a nine out of 10 for me.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Server Engineering Services Lead at a mining and metals company with 10,001+ employees
Real User
Top 20
Leaderboard
Good OR and DR capabilities, performs well, offers data security, and continuous file versioning helps recover from hardware failures
Pros and Cons
  • "The biggest and most impressive thing for us is the operational recovery (OR) and disaster recovery (DR) capabilities that Nasuni has. If a filer goes down, or an ESX server goes down, then we can quickly recover."
  • "When we have to rebuild a filer or put a new one at a site, one of the things that I would like to be able to do is just repoint the data from Azure to it. As it is now, you need to copy it using a method like Robocopy."

What is our primary use case?

We use Nasuni to provide storage at various locations. It is for office-type files that they would use for day-to-day office work, such as spreadsheets. None of it is critical data.

Each group at each site has its own data store. For example, HR has its own, and finance has its own. All of these different groups at different locations use this data, and they use these filers to store it.

The Nasuni filers are on-site, and we have virtual edge appliances on ESX servers at about 35 sites globally. The data stored at these sites is then fed up into Azure and we have all of our data stored there.

How has it helped my organization?

The OR and DR capabilities have been a very big help for us. Previously, with the solutions we had, it would have taken weeks sometimes to get things fixed and back up and running for people. Now, it only takes a matter of minutes.

It used to be a lot of trouble to bring data back up and a lot of the time, it was read-only, so the people couldn't use it very well. Now, with Nasuni, we're able to pretty much keep their experience seamless, no matter how much trouble the hardware is in at the site.

The Nasuni filers are easy to manage, although the process is similar to what we had before. We have a report that comes out three times a day that gives us the amount of data that's in the queue to be uploaded to Azure on each individual filer. We keep track of that to make sure nothing is getting out of hand. It also tells us if the filer has been restarted and how long ago that happened. It gives us a quick view of everything and how much total we're using within Nasuni. This report is something we created on our own to keep track of things.

If a user deletes a file or a file becomes corrupted, it's easy for them to get it restored. There is very little chance that the data is going to be lost. We've had a few people delete things, or files have become corrupted, and we were able to get the file back to them in the state it was in about five minutes before they had the problem, without any issues. Overall, the continuous file versioning is really helpful.

What is most valuable?

The biggest and most impressive thing for us is the operational recovery (OR) and disaster recovery (DR) capabilities that Nasuni has. If a filer goes down, or an ESX server goes down, then we can quickly recover. For example, we lost a controller the other day and all of the drives were corrupted. We were able to quickly repoint all of the users to a backup filer that we have at our data center, they were back up and running within minutes, and they still have read-write capabilities. Once that ESX server was fixed, we were able to repoint everything back to it in a matter of minutes. People were then again using their local filer to connect.

Nasuni provides continuous file versioning, and we take snapshots on a regular basis. Right now, we store them forever, but we're trying to rein that in a little and keep them only for a set period of time. Certainly, at this point, we have a lot of file versions.

We have not had a problem with ransomware but if we did, we would be able to restore the data pretty quickly by going back to an older version of the file before the ransomware took over. It is a similar process to the DR, although a little bit different. For us, OR and DR are pretty much the same thing. We haven't had any disasters that we've had to recover from but we've had three or four hardware failures a year that we've had to deal with. The continuous file versioning has helped to fix these problems pretty quickly.

Continuous file versioning also makes it easier for our operations group. The support team is able to restore files quickly, 24/7, and it is less work for them. They have more time to focus on other problems. The end-user also has access to shadow copies through Windows, and they've used that extensively at the sites.

Nasuni has helped to eliminate our on-premises infrastructure. When we moved to Nasuni, we moved to Azure. Before that, we had a large SAN storage that we were using, and we were able to get rid of it. That was a big difference for us.

We were definitely able to save some money because we've eliminated those expensive SAN disks completely. There were some servers at our old data center that we were able to get rid of, as well. There are some new expenses with Azure because we have to pay for the space taken by the snapshots, which is why we're going to put a retention limit in place. Overall, I don't have an exact number but we were able to save money.

Nasuni is transparent to our end-users. We have it all set up as a file server through Microsoft DFS. If you were to ask one of our end-users how they like Nasuni, they would have no idea what you're talking about.
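The review doesn't detail the DFS configuration, but fronting a filer share with a stable DFS namespace path can be sketched with the DFSN PowerShell cmdlets along these lines. All names here (the "corp\files" namespace, the "nasuni-fil01" and "dc-backup-fil" hosts, and the "Finance" share) are hypothetical.

```powershell
# Hypothetical names throughout; substitute your own domain, namespace,
# filer hosts, and share names.
# Publish the local Nasuni filer's share behind a DFS path so users never
# reference the filer host directly:
New-DfsnFolder -Path "\\corp\files\Finance" -TargetPath "\\nasuni-fil01\Finance"

# Add the data-center backup filer as a second, normally disabled target;
# if the local filer's hardware fails, users can be repointed by bringing
# this target online instead:
New-DfsnFolderTarget -Path "\\corp\files\Finance" -TargetPath "\\dc-backup-fil\Finance" -State Offline
```

Because clients resolve the DFS path rather than the filer name, the quick repointing after a hardware failure described above amounts to flipping target states rather than reconfiguring every user.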

What needs improvement?

One issue that we have is related to copying data out of Nasuni. We just sold a site and it was split into two pieces. One part of it was sold to another company and we kept the other part. At the site, they have a Nasuni filer with about eight terabytes of data. Now, we have to split that data and the problem stems from the fact that the other company doesn't have Nasuni.

This means that we have to copy all of that data back to the site and into a format that they can use, which is probably just a Windows file server, and then we have to split it somehow. I'm not really sure that there's an easy way to do that. It's going to take us a little bit longer to separate this other location, and we're having to invent things as we go along.  

In these areas, it's not as simple as it could be, but it doesn't happen very often. As such, we haven't had to worry about it too often. Although it's not affecting us too much at this point, if there's a problem such that we have trouble getting data out of Nasuni, then that could be an issue. However, for the time being, it seems fine.

When we have to rebuild a filer or put a new one at a site, one of the things that I would like to be able to do is just repoint the data from Azure to it. As it is now, you need to copy it using a method like Robocopy. To me, this seems counterintuitive or like we're going backward a little bit. I would like to see a way to be able to switch them around without any problem. That said, I'm not sure if it would then cause other issues because of how Nasuni works, so it may not be possible.

For how long have I used the solution?

We started using Nasuni in 2018 and it's been running ever since.

What do I think about the stability of the solution?

Up until about a week ago, stability had been rock solid. We've had a few issues after upgrading to version 9.3 that we're trying to deal with. At a couple of sites, we're still not sure whether Nasuni is the problem or VMware ESX, and we're working on that. At this point, we're not thinking about rolling back because, of all our sites, only two have problems, so we think something else may be going on.

For the most part, it's been extremely stable, with no issues whatsoever. With Nasuni, there has been very little downtime, if any. Most of the sites have never gone down and with the sites that have, there's usually some other external problem.

Overall, it's been very stable for us.

What do I think about the scalability of the solution?

We are limited to the amount of space that we have purchased from Nasuni. If we get close to running out then we just buy more. We still have to pay for the storage within Azure, so we're trying to make sure that it doesn't get out of control. In general, we don't need to add any on demand.

Scalability is not a problem and we can add as many servers and as many filers as we need to, which is really nice. For example, instead of buying tape drives and using that type of backup system, we decided to take a few sites where we have some smaller servers and we use Nasuni to back them up. We use a separate filer to back up all of that data. It's been nice in that way, where we've been able to do things with it that we hadn't originally thought of.

If it should happen that we make a large acquisition, and we bought 10 sites, we could easily put in 10 more filers. It wouldn't be a problem.

Amongst our 35 sites, we have between 10,000 and 12,000 users. A lot of them are office-type people such as those from HR and finance. All of us, including administrators and developers, use it for this kind of thing. The developers wouldn't store code on these because that's not what it's used for. Our Nasuni environment is specifically for data to help the business run, which isn't critical to producing goods or shipping them or anything like that. That is a completely different system. Anybody who works for the company that needs to access simple office data is going to be going through Nasuni.

We have approximately 210 terabytes stored in Nasuni right now. That continues to grow at perhaps a terabyte or two per month. I don't think we'll be moving it anywhere else at this point. Down the road, we do have a very large file system at our data center that we're considering moving, but it's going to take a lot of time to do that one because it's 400 terabytes and it's a lot of old data that we have to clean up first. But that's pretty much the only area that I would see us doing something.

Later this year, we're going to start refreshing some of the hardware because we're approaching five years on some of the older stuff. As we replace it, we'll do another rollout, but it's not going to be like before. We're just going to put a new server in and put a new filer and connect to the data.

How are customer service and technical support?

Up until recently, I would have rated the technical support a seven out of ten. We had to open a case in Australia for a problem with one of the Nasuni filers, and I still haven't received a response. One of the support people answered a question at about three in the morning, US East Coast time, and said he would send an email with an update. After that, we didn't hear back from him for about 25 hours, which was a little concerning for me.

Part of the problem seems to be that Nasuni is not currently set up to do 24/7 support. They said that they were going to offer it, so that was a little disappointing. Typically, when we call in a problem, they jump all over it and get it fixed in no time.

Which solution did I use previously and why did I switch?

From the perspective of our end-users, the servers function the same way when they're working. We had Windows filers before and now they're Nasuni, so it's basically the same thing to them.

Although we mostly used Microsoft, we did use a backup solution called Double-Take, which is now owned by Carbonite. It did the job but it had a lot of idiosyncrasies that were very difficult to deal with at times. That was the only non-Microsoft thing that we used for the data before Nasuni, and we have since stopped using it.

How was the initial setup?

In the beginning, the setup was kind of complex. We did have help from Nasuni, which was great. They were with us the whole time. We had some growing pains at the beginning, but once we figured out the first three or four sites, we were able to get everything done very quickly and efficiently, with very few problems moving to Nasuni.

When we first started with Nasuni, we had never used it before, and we had never used anything like that. We were used to using Windows servers, and there was a learning curve there to figure out the best way to set up the Nasuni filers. We really had to rely a lot on Nasuni for that. Some of it was trial and error, seeing what worked best as we started rolling it out.

We were replacing a single server that was responsible for doing everything. It was a file server, a domain controller, a print server, and an SCCM distribution point. It was all of these different things and we replaced that with one ESX server, which had multiple guest servers on it, doing all those functions separately. It is much better security-wise and much better operationally.

We started with a very slow implementation. We implemented one site, and then we waited two months before moving to the second site. We tried to start with some of the smaller sites first, with the least amount of data, to get our feet wet. Also, the first site we did was the one that I sit at. The team was all there and it was our site, so we figured we should do our site first. We staggered deployment, so it was not very quick. Then, once we had three or four completed, we did three a week for three months and we were done.

After completing the first site, choosing the next sites had to do with the hardware. We had some old hardware that we repurposed, so we did those sites next. After that, we moved to the sites that necessitated purchasing new hardware. 

From beginning to end, our implementation took a little more than a year. It began in August of 2018 and finished at the end of Q3 in 2019. The time it took was not because of Nasuni. Rather, it revolved around different ordering cycles in our company. Buying the new hardware was what stretched out the deployment time.

What about the implementation team?

I was in charge of the team that did the implementation.

For purchasing and the initial negotiations with Nasuni, we used CDW. We still interact with them when it's time to do renewals, and they are great to deal with. They really help out quite a bit. They were the ones that brought us Nasuni in the first place and suggested that we take a look at it.

We're very happy with CDW. We use them for all of our hardware orders, and a couple of different infrastructure tools. We use them quite extensively.

We had four people responsible for the deployments, with one guy who was in charge of the group as the lead architect. Once it was deployed, we turned it over to our operations group, which is outsourced to TCS. Although they have supported us since then, they come to us if there's anything that's still an issue. We have a couple of guys that still work with Nasuni a little bit, but that's basically how the maintenance is done.

For the most part, there is little maintenance to do. There are situations such as when a controller card goes down, or like the issues we have been having since the upgrade. Otherwise, it's very hands-off and you really don't have to do a lot.

What was our ROI?

We don't plan on calculating a return on investment with this solution. In the grand scheme of things, it's really not very much money for what we're doing. We spend more money on the hardware, for example.

What's my experience with pricing, setup cost, and licensing?

Our agreement is set up such that we pay annually per terabyte, and we buy a chunk of it at a time. Then if we run out of space, we go back to them and buy another chunk.

We thought about an agreement with a three-year plan, where we would get a small increase every year, but we decided not to take that approach at this time. We go through CDW for these agreements and they help us get all of the quotes together.

In addition to what we pay Nasuni, there is the cost of storage in Azure or whatever cloud service you're using. It can get pretty pricey if you have a lot of snapshots, which is something we've found and we're now trying to scale back on. That's the biggest thing that is extra and you may not think of right at the beginning.

Which other solutions did I evaluate?

We looked at a few different products that year, and we decided that Nasuni was the best way to go. It has really worked well for us.

One of the products that we looked at was Veeam, the backup software, but it would have been used a little bit differently. We also looked at Backup Exec and a tool from Microsoft. We didn't look at anything that was exactly like Nasuni. We looked at these other things that would create backups of the primary data, which would have stayed at the site. Nasuni was a completely different way of looking at it.

The difference with Nasuni is that rather than having a backup in the cloud, the primary copy of the data is what's in the cloud. For us, it's stored in Azure, whereas with the other tools, the primary copy stays at the site. If you had a major problem, for instance this issue with the controller card, then with those other solutions, or the way it was before, you're down and out at least until you can get the controller card replaced.

Then, once you're back up, you're going to have to copy all of the data back. For that, it would probably need at least a week. Some of these sites have very poor connections. For example, we have a site that's in the Amazon jungle in Brazil and they are notorious for being very slow, yet we've used Nasuni there and it works fine. Some of these other solutions probably wouldn't have worked. In fact, we probably would have had to buy a tape drive and back up the servers that way.

What other advice do I have?

We have a hosted data center where we don't pay for individual items, such as servers. Instead, we pay for a service. The service might include a server or storage, and Nasuni has not eliminated that because we still need our physical servers at the locations. We debated on whether or not to put the filer in Azure for each site, but we decided that it was better to have something local at this point.

For our company, we were a little ahead of the curve. We didn't have internet connections directly from each site, and they all routed through a central internet connection. Because of that, it was difficult to eliminate any hardware at the site. We needed something there physically. But, having the virtual appliance for Nasuni really helps out quite a bit, because then we only have to have one piece of hardware and we can put all of the other servers that we need for infrastructure on the same ESX server. We have five or six different servers that are doing different functions that at one point, would maybe have been three or four different physical servers. Now we've reduced it to one.

We use Microsoft SCOM as a monitoring tool to keep track of all of the filers and make sure that they are running. 

We don't use the Nasuni dashboard because we don't have to; everything is working as it should. We do have a management console set up and we go into it occasionally, but it's not something our support people use regularly.

If I had a colleague at another company with concerns about migration to the cloud and Nasuni's performance, I would talk about the fact that the DR capabilities are so different from anything else that I've seen. The performance has actually not been too bad. You would think that there would be an issue with the cloud stores, but we set up a local cache on each filer that allows it to store up to a terabyte or two of regularly used data. That covers probably 80% of what people use, which means they're accessing a local copy that's synced with what's in the cloud. They don't really have to go to the cloud to get a lot of it, but when they do, it's pretty quick. It may not be as fast as a local copy, but it's not too bad.
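The cache arithmetic above can be sketched with hypothetical numbers (the monthly read volume is an assumption for illustration, not a measured figure):

```python
# Back-of-envelope: how much of the working set an 80% cache hit rate
# keeps off the WAN. All figures are hypothetical, for illustration only.
cache_size_tb = 2          # local cache per filer
monthly_reads_tb = 10      # assumed total data read by users in a month
hit_rate = 0.80            # share of reads served from the local cache

served_locally_tb = monthly_reads_tb * hit_rate
fetched_from_cloud_tb = monthly_reads_tb - served_locally_tb

print(f"served locally: {served_locally_tb:.1f} TB")
print(f"fetched from Azure: {fetched_from_cloud_tb:.1f} TB")
```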

My advice for anybody who is considering Nasuni is that they definitely want to look at all of the options, but that Nasuni does have the best setup at this point. It offers the ability to recover things and provides data security. Especially with ransomware and all of these other new things that are causing lots of problems out there, it really helps mitigate some of that.

The biggest thing that I have learned from using Nasuni is that you shouldn't be afraid of the cloud.

I would rate this solution an eight out of ten.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
John Leitgeb - PeerSpot reviewer
IT Director at Kingston Technology
Real User
Top 20
Easy-to-use interface, good telemetry data, and the support is good
Pros and Cons
  • "If we lost our data center and had to recover it, Zerto would save us a great deal of time. In our testing, we have found that recovering the entire data center would be completed within a day."
  • "The onset of configuring an environment in the cloud is difficult and could be easier to do."

What is our primary use case?

Originally, I was looking for a solution that allowed us to replicate our critical workloads to a cloud target and then pay a monthly fee to have it stored there. Then, if some kind of disaster happened, we would have the ability to instantiate or spin up those workloads in a cloud environment and provide access to our applications. That was the ask of the platform.

We are a manufacturing company, so our environment wouldn't be drastically affected by a webpage outage. However, depending on the applications that are affected, being a $15 billion company, there could be a significant impact.

How has it helped my organization?

Zerto is very good in terms of providing continuous data protection. Bear in mind that doing this in the cloud is newer to them than what they've traditionally done on-premises. Along the way, there are some challenges in working with a cloud provider and having the connectivity methodology to replicate the VMs from on-premises to Azure, through the Zerto interface, and make sure that there's a healthy copy of Zerto in the cloud. For that mechanism, we spent several months working with Zerto, getting it dialed in to support what we needed to do. Otherwise, all of the other stuff that they've been known to do has worked flawlessly.

The interface is easy to use, although configuring the environment, and the infrastructure around it, wasn't so clear. The interface and its dashboard are very good and very nice to use. The interface is very telling in that it provides a lot of the telemetry that you need to validate that your backup is healthy, that it's current, and that it's recoverable.

A good example of how Zerto has improved the way our organization functions is that it has allowed us to decommission repurposed hardware that we were using to do the same type of DR activity. In the past, we would take old hardware and repurpose it as DR hardware, but along with that you have to have the administration expertise, and you have to worry about third-party support on that old hardware. It inevitably ends up breaking down or having problems, and by taking that out of the equation, with all of the DR going to the cloud, all that responsibility is now that of the cloud provider. It frees up our staff who had to babysit the old hardware. I think that, in and of itself, is enough reason to use Zerto.

We've determined that the ability to spin up workloads in Azure is the fastest that we've ever seen because it sits as a pre-converted VM. The speed to convert it and the speed to bring it back on-premises is compelling. It's faster than the other ways that we've tried or used in the past. On top of that, they employ their own compression and deduplication in terms of replicating to a target. As such, the whole capability is much more efficient than doing it the way we were doing it with Rubrik.

If we lost our data center and had to recover it, Zerto would save us a great deal of time. In our testing, we have found that recovering the entire data center would be completed within a day. In the past, it was going to take us close to a month. 

Using Zerto does not mean that we can reduce the number of people involved in a failover.  You still need to have expertise with VMware, Zerto, and Azure. It may not need to be as in-depth, and it's not as complicated as some other platforms might be. The person may not have to be such an expert because the platform is intuitive enough that somebody of that level can administer it. Ultimately, you still need a human body to do it.

What is most valuable?

The most valuable feature is the speed at which it can instantiate VMs. When I was doing the same thing with Rubrik, if I had 30 VMs on Azure and I wanted to bring them up live, it would take perhaps 24 hours. With 1,000 VMs to do, it would be very time-consuming. With Zerto, I can bring up almost 1,000 VMs in an hour. This is what I really liked about Zerto, although it can do a lot of other things as well.

The deduplication capabilities are good.

What needs improvement?

The onset of configuring an environment in the cloud is difficult and could be easier to do. When it's on-premises, it's a little bit easier because it's more of a controlled environment. It's a Windows operating system on a server and no matter what server you have, it's the same.

However, when you are putting it on AWS, that's a different procedure than installing it on Azure, which is a different procedure than installing it on GCP, if they even support it. I'm not sure that they do. In any event, they could do a better job in how to build that out, in terms of getting the product configured in a cloud environment.

There are some other things they can employ, in terms of the setup of the environment, that would make things a little less challenging. For example, you may need to have an Azure expert on the phone because you require some middleware expertise. This is something that Zerto knew about but maybe could have done a better job of implementing it in their product.

Their long-term retention product has room for improvement, although that is something that they are currently working on.

For how long have I used the solution?

We have been with Zerto for approximately 10 years. We were probably one of the first adopters on the platform.

What do I think about the stability of the solution?

With respect to stability, on-premises, it's been so many years of having it there that it's baked in. It is stable, for sure. The cloud-based deployment is getting there. It's strong enough in terms of the uptime or resilience that we feel confident about getting behind a solution like this.

It is important to consider that any issues with instability could be related to other dependencies, like Azure or network connectivity or our on-premises environment. When you have a hybrid environment between on-premises and the cloud, it's never going to be as stable as a purely on-premises or purely cloud-based deployment. There are always going to be complications.

What do I think about the scalability of the solution?

This is a scalable product. We tested scalability starting with 10 VMs and went right up to 100, and there was no difference. We are an SMB, on the larger side, so I wouldn't know what would happen if you tried to run it with 50,000 VMs. However, in an SMB-sized environment, it can definitely handle or scale to what we do, without any problems.

This is a global solution for us and there's a potential that usage will increase. Right now, it is protecting all of our criticals but not everything. What I mean is that some VMs in a DR scenario would not need to be spun up right away. Some could be done a month later and those particular ones would just fall into our normal recovery process from our backup. 

The backup side is what we're waiting on, or relying on, in terms of the next ask from Zerto. Barring that, we could literally use any other backup solution along with Zerto. I'm perfectly fine doing that but I think it would be nice to use Zerto's backup solution in conjunction with their DR, just because of the integration between the two.  

How are customer service and technical support?

In general, the support is pretty good. They were just acquired by HPE, and I'm not sure if that's going to make things better or worse. I've had experiences on both sides, but I think overall their support has been very good.

Which solution did I use previously and why did I switch?

Zerto has not yet replaced any of our legacy backup products but it has replaced our DR solution. Prior to Zerto, we were using Rubrik as our DR solution. We switched to Zerto and it was a much better solution to accommodate what we wanted to do. The reason we switched had to do with support for VMware.

When we were using Rubrik, one of the problems we had was that if I instantiated the VM on Azure, it's running as an Azure VM, not as a VMware VM. This meant that if I needed to bring it back on-premises from Azure, I needed to convert it back to a VMware VM. It was running as a Hyper-V VM in Azure, but I needed an ESX version or a VMware version. At the time, Rubrik did not have a method to convert it back, so this left us stuck.

There are not a lot of other DR solutions like this on the market. There is Site Recovery Manager from VMware, and there is Zerto. After so many years of using it, I find that it is a very mature platform and I consider it easy to use. 

How was the initial setup?

The initial setup is complex. It may be partly due to our understanding of Azure, which I would not put at an expert level. I would rate our skill at Azure between a neophyte and the mid-range in terms of understanding the connectivity points with it. In addition to that, we had to deal with a cloud service provider.

Essentially, we had to change things around, and I would not say that it was easy. It was difficult and definitely needed a third party to help get the product stood up.

Our deployment was completed within a couple of months of ending the PoC. Our PoC lasted between 30 and 60 days, over which time we were able to validate it. It took another 60 days to get it up and running after we got the green light to purchase it.

We're a multisite company, so the implementation strategy started with getting it baked in at our corporate location and validating it. Then we built out an Azure footprint globally and extended the product into those environments.

What about the implementation team?

We used a company called Insight to assist us with the implementation. We had a previous history with one of their engineers, from earlier work that we had done, and we felt that he would be a good person to walk us through the implementation of Zerto. That, coupled with the fact that Zerto engineers were working with us as well, meant we had a mix of people supporting the project.

We have an infrastructure architect who's heading the project. He validates the environment, builds it out with the business partners and the vendor, helps figure out how it should be operationalized, and configures it. Then it gets passed to our data protection group, whose admins administer the platform, and after that it largely maintains itself.

Once the deployment is complete, maintaining the solution is a half-person effort. There are admins who have a background in data protection, backup products, as well as virtualization and understanding of VMware. A typical infrastructure administrator is capable of administering the platform.

What was our ROI?

Zerto has very much saved us money by enabling us to do DR in the cloud rather than in our physical data center. To do what we want to do with duplicate hardware standing at the ready, with support and maintenance on it, the cost would be huge compared to what I'm doing now.

By the way, we are doing what is considered a poor man's DR. I'm not saying that I'm poor, but that's the term I place on it because most people have a replica of their hardware in another environment. One needs to pay for those hardware costs, even though it's not doing anything other than sitting there, just in case. Using Zerto, I don't have to pay for that hardware in the cloud.

All I pay for is storage, and that's much less than what the hardware cost would be. To run that environment with everything on there, just sitting idle, would cost about ten times as much.

That ratio holds because the storage it replicates to is not the fastest tier. There are no VMs and no compute or memory associated with replicating this, so all I'm paying for is the storage.

So in one case, I'm paying only for storage, and in the other case, I'd have to pay for storage, hardware, compute, and connectivity. If you add up the compute, the maintenance, the networking connectivity, and the soft costs and man-hours to support that environment just to have it ready, I'd say ten to one is probably a fair assessment; the storage itself is inexpensive, but everything else is not.
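To make the ten-to-one claim concrete, here is a back-of-envelope sketch; the line items and dollar figures are entirely hypothetical illustrations, not actual spend:

```python
# Illustrative only: hypothetical monthly figures comparing a storage-only
# cloud DR target against a standby hardware stack kept "just in case".
storage_only = 1_000                      # replicated cloud storage

standby_stack = {
    "storage": 1_000,                     # same data, now on owned arrays
    "compute_reserved": 4_000,            # idle servers at the ready
    "hardware_maintenance": 2_500,        # support contracts, parts
    "network_connectivity": 1_000,        # links to the standby site
    "admin_soft_costs": 1_500,            # man-hours to babysit it
}
standby_total = sum(standby_stack.values())

ratio = standby_total / storage_only
print(f"standby stack: ${standby_total:,}/mo, ratio {ratio:.0f}:1")
```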

When it comes to DR, there is no real return on investment. The return comes in the form of risk mitigation. If the question is whether I think that I spent the least amount of money to provide a resilient environment then I would answer yes. Without question.

What's my experience with pricing, setup cost, and licensing?

If you are an IT person and you think that DR is too expensive then the cloud option from Zerto is good because anyone can afford to use it, as far as getting one or two of their criticals protected. The real value of the product is that if you didn't have any DR strategy, because you thought you couldn't afford it, you can at least have some form of DR, including your most critical apps up and running to support the business.

A lot of IT people roll the dice and they take chances that that day will never come. This way, they can save money. My advice is to look at the competition out there, such as VMware Site Recovery, and like anything else, try to leverage the best price you can.

There are no costs in addition to the standard licensing fees for the product itself. However, for the environment that it resides in, there certainly are. With Azure, for example, there are several additional costs including connectivity, storage, and the VPN. These ancillary costs are not trivial and you definitely have to spend some time understanding what they are and try to control them.

Which other solutions did I evaluate?

I looked at several solutions during the evaluation period. When Zerto came to the table, it was very good at doing backup. The other products could arguably instantiate and do the DR but they couldn't do everything that Zerto has been doing. Specifically, Zerto was handling that bubbling of the environment to be able to test it and ensure that there is no cross-contamination. That added feature, on top of the fact that it can do it so much faster than what Rubrik could, was the compelling reason why we looked there.

Along the way, I looked at Cohesity and Veeam and a few other vendors, but they didn't have an elegant way of doing what I wanted to do, which is sending copies to an inexpensive cloud storage target and then having the mechanism to instantiate them. The mechanism wasn't as elegant with some of those vendors.

What other advice do I have?

We initially started with the on-premises version, where we replicated our global DR from the US to Taiwan. Zerto recently came out with a cloud-based, enterprise variant that gives you the ability to use it on-premises or in the cloud. With this, we've migrated our licenses to a cloud-based strategy for disaster recovery.

We are in the middle of evaluating their long-term retention, or long-term backup, solution. It's very new to us. In the same way that Veeam, Rubrik, and others were trying to get into Zerto's business, Zerto is now trying to get into their business with a backup solution.

I think it's much easier to do backup than what Zerto does for DR, so I don't think it will be very difficult for them to deliver table-stakes backup, which is file retention for multiple targets, and that kind of thing.

Right now, I would say they're probably at the 70% mark of what I consider to be a success, but each version they release gets closer and closer to being a certifiably good backup solution.

We have not had to recover our data after a ransomware attack but if our whole environment was encrypted, we have several ways to recover it. Zerto is the last resort for us but if we ever have to do that, I know that we can recover our environment in hours instead of days.

If that day ever occurs, which would be a very bad day if we had to recover at that level, then Zerto will be very helpful. We've done recoveries in the past where the on-premises restore was not healthy, and we've been able to recover very fast. It isn't the one-off restores that are compelling, because most vendors can provide that. It's the sheer volume of being able to restore so many VMs at once that's the compelling factor for Zerto.

My advice for anybody who is implementing Zerto is to get a good cloud architect. Spend the time to build out your design, including your IP scheme, to support the feature sets and capabilities of the product. That is where the work needs to be done, more so than the Zerto products themselves. Zerto is pretty simple to get up and running but it's all the work ahead in the deployment or delivery that needs to be done. A good architect or cloud person will help with this.

The biggest lesson that I have learned from using Zerto is that it requires good planning but at the end of it, you'll have a reasonable disaster recovery solution. If you don't currently have one then this is certainly something that you should consider.

I would rate Zerto a ten out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
BurhanShakil - PeerSpot reviewer
Systems Engineer at Harvard University
Real User
Top 5
Automates most of our backup workflow, automatically adding VMs and assigning the SLA, and provides instant recovery
Pros and Cons
  • "There is a live-restore feature, their Live Mount, and the way it works we can instantly recover a VM, a past backup, to be directly attached to our VMware environment. Rubrik will act as a disk for it. It's like an instant restore. Within a few minutes our VM is up and running. And then, if we want to restore it, we can just migrate it to our actual storage."
  • "Capacity reports could definitely be improved. It's hard to determine what is using the space and why. For instance, you can see that some host is using 2 TB on the Rubrik node and the disk space on that host is 400 GBs. It's hard to explain how there can be 2 TBs of data on local storage when nothing has changed on the host for the past three days."

What is our primary use case?

This is our main backup system. All of our VMs, our hardware hosts, everything is backed up using Rubrik.

Disaster recovery is one of the options we have explored, so that in case of a big disaster we could utilize their image conversion to run our VMs on AWS, but that is just a proof of concept at this stage. We have tested it. It works. But we don't have a proper plan in place for that.

We have only one physical server that we are protecting with it and the rest are all virtual servers. We have around 400 server VMs and all of them are protected using Rubrik. Most of our environment, around 90 percent, is VMware, while 10 percent of our environment is Hyper-V. 

With the VMs we are also taking backups of our CIFS shares. We have our file clusters running Windows Servers so we are taking backups using the SMB mount. We have NFS clusters as well, for the Linux side, which we're backing up using the built-in NFS connectors. We explored SQL backups, but right now we are using our SQL Server to dump the data and then the files are being backed up. We're not directly backing up SQL using Rubrik.
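The dump-then-file-backup pattern described above could be scripted along these lines; the server name, database, and path are hypothetical, and this is a sketch of the idea rather than an actual job:

```python
import subprocess

def build_sql_dump_cmd(server, database, dump_path):
    """Build a sqlcmd invocation that dumps a database to a flat .bak file.

    The .bak lands in a folder that the file-level backup already covers,
    so the backup product only ever sees ordinary files rather than a live
    SQL Server. Server name, database, and path are hypothetical.
    """
    tsql = (
        f"BACKUP DATABASE [{database}] "
        f"TO DISK = N'{dump_path}' WITH INIT"
    )
    return ["sqlcmd", "-S", server, "-Q", tsql]

cmd = build_sql_dump_cmd("SQL01", "AppDB", r"D:\dumps\AppDB.bak")
# A scheduled task on the SQL host would then run:
#     subprocess.run(cmd, check=True)
print(cmd[-1])
```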

How has it helped my organization?

The SLA-based policy automation has had a very good effect on our data protection operations. We came from Commvault and we used to have tape backups. It was a full-time job for one of our sys admins to update the tape library, replace the tape cartridges, recycle them, scratch them, and then bring them back. It was a huge process. We were using offsite storage to store our tape backups which were continuously going back and forth from our campus. Now it's all automated. We barely have to manage anything. We are now consumers instead of actually setting this up. It was one set up and we just maintain it now.

It saves us time when it comes to managing backups because we barely do anything, other than just verify. We get a daily report to see if any of the VMs are out of our SLA. The only action item we have, if something is out of SLA, is to verify what happened, why the backup failed or missed its window. Given that it was tape before, it has gone from hours to minutes. It used to be more proactive, where we were continuously checking everything and replacing the tapes and making sure that everything went through. Now, it's more of a reactive situation, where we only look at a backup when there is an issue.

It has also definitely reduced the time we spend on recovery testing, because it can do Live Mounts and that does not require an actual recovery. So our VMs are instantly available. And the file restore feature allows us to explore the file system of every VM, instead of restoring it, and then just restore the files that we need, which has been amazing so far as well. Within a few minutes, we have either the VM or the files available. I don't even know how to compare it to Commvault and the tape backups. When I joined Harvard, they were already on Rubrik and we were decommissioning Commvault, so I know a little bit about the process. We do classroom recordings at Harvard Law School and those were still going to Commvault. That was the last project that I was involved in, and I saw the crazy amount of work involved, where we had to bring all the tape libraries back from the safe.

And when it comes to recovery time itself, it's an instant recovery in most circumstances, even if we have to recover something that's more than three days old. In our environment, after something is more than three days old, it goes to an archival location on S3. When we restore data that is between three and 42 days old, it is downloaded from S3 and then made available. For us, that situation is a little bit slower compared to the Live Mount. Depending on the size of the VM, it could range from a few minutes to a few hours. But if the data is still on-premises, it is available instantly.

We don't have to worry about the solution too much, which definitely has helped our productivity. Most of our workflow is automated, where VMs are automatically added. The SLA is automatically assigned. Things are automatically archived. Anyone can take action. We have on-call people who look at the reports and take action as needed.

What is most valuable?

There is a live-restore feature, their Live Mount, and the way it works, we can instantly recover a VM from a past backup, directly attached to our VMware environment. Rubrik will act as a disk for it. It's like an instant restore. Within a few minutes, our VM is up and running. And then, if we want to restore it, we can just migrate it to our actual storage.

Rubrik's web interface is very simple to use. We have a very simple SLA configured so that everything is backed up every day. Any new VMs we configure in our environment automatically get added, the SLA is automatically assigned to them. All the VMs, after three days, are archived to AWS S3, and then there's a life cycle on the AWS side to work with that.
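The three-day local window plus the 42-day SLA retention described above maps naturally onto an S3 lifecycle configuration. As a minimal sketch (the prefix, bucket name, and tiering choices here are illustrative assumptions, not the reviewer's actual setup):

```python
import json

# Illustrative lifecycle policy; the prefix and day counts mirror the
# retention described in the review, but are not the real configuration.
lifecycle = {
    "Rules": [
        {
            "ID": "rubrik-archive-retention",
            "Filter": {"Prefix": "rubrik-archive/"},
            "Status": "Enabled",
            # Move objects to a colder tier once they age past the local
            # three-day window, and expire them at the 42-day SLA boundary.
            "Transitions": [{"Days": 3, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 42},
        }
    ]
}

# Applying it would be a single boto3 call (not executed here):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-backup-archive", LifecycleConfiguration=lifecycle)
print(json.dumps(lifecycle, indent=2))
```

In practice Rubrik manages its own archival objects; a rule like this only illustrates the shape of the "lifecycle on the AWS side" the reviewer mentions.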

The archival functionality is one of the main features because the Rubrik that we have has about 60 or 70 TB of total local storage, which is definitely not enough for our data. We have around 140 TB of data stored on AWS and, without the archival feature, we would have to buy at least three times the number of nodes that we currently have to keep all the data secure for 42 days, based on our SLA. It's definitely saving us on costs. It also gets us away from having to keep redundancy on the data, because if we were storing it on-premises we would have to make sure that we have redundancy and offsite storage. Now, all of that is AWS. We no longer have to worry about that.

What needs improvement?

Capacity reports could definitely be improved. It's hard to determine what is using the space and why. For instance, you can see that some host is using 2 TB on the Rubrik node while the disk space on that host is 400 GB. It's hard to explain how there can be 2 TB of data on local storage when nothing has changed on the host for the past three days.

They have improved a lot on the SLA reports. We used to get a lot of false alerts before, because a snapshot was missed. In the reports, it would remain "non-compliant" with the SLA for 42 days, until the 42 cycles were done. They've removed that. Now, if a snapshot misses its SLA window and a later manual or automatic backup succeeds, the SLA report is automatically corrected to show the VM as protected.

Most of their documentation for the cloud features could be improved. This could be old information, as we did the PoC last year and their documentation may have been updated since, but we literally had to contact support every day, and at every step, for things like, "What do we do with the AMIs? How do we get Rubrik configured? How do we convert the image?" None of that was available in a single document; it was spread across different pieces of documentation.

For how long have I used the solution?

I've been using Rubrik for the past three years, since I joined Harvard, but I think it was deployed on-premises four or five years back.

What do I think about the stability of the solution?

It's very stable. We have had an instance where one of the nodes was offline for no reason, but working with their support it was determined that there was a cache issue and they fixed it.

We don't have to worry about backups. We have been using it for more than four years and, so far, there hasn't been a single incident where we have had any issues recovering any of the files or VMs. It is very robust and continuously updated.

What do I think about the scalability of the solution?

They have everything available by API, which is a good thing because this is the way things are going, with an API-first infrastructure. In terms of their physical nodes, you can also scale them, but you always have to expand in sets of three nodes. We have one Brik with four nodes currently, and to increase our storage we would have to buy three more nodes, which is a bit of a limitation. It would have been nice if we could add just one node and grow gradually, instead of buying three large nodes. But I can't complain about it; that's probably down to their architecture.
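Because everything is exposed via the API, the daily out-of-SLA check described earlier could be scripted against the report data. A minimal sketch, assuming a parsed report payload (the field names are illustrative, not Rubrik's actual schema):

```python
# Hypothetical report shape: one record per VM with the age of its most
# recent successful snapshot. Field names are assumptions for illustration.
def out_of_sla(report, max_age_hours=24):
    """Return the VMs whose latest snapshot is older than the SLA window."""
    return [
        vm["name"]
        for vm in report
        if vm["hours_since_last_snapshot"] > max_age_hours
    ]

report = [
    {"name": "web-01", "hours_since_last_snapshot": 6},
    {"name": "db-01", "hours_since_last_snapshot": 30},   # missed its window
    {"name": "files-01", "hours_since_last_snapshot": 12},
]
stale = out_of_sla(report)
print(stale)
```

This is the "reactive" workflow the reviewer describes: the only action item is investigating whatever lands in the `stale` list.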

We are using it for everything except our media storage. Our classroom recordings are directly archived to Glacier and everything else goes through Rubrik. The reason for that is that we don't want on-premises storage of the media. These are large video recordings and it would be very expensive to store them locally. Rubrik keeps a local copy for three days for regular backups. We are actually testing a new feature where you can connect to NAS storage and no local data, only metadata, is stored locally. Everything else is archived. We have tested this feature with their support. They showed it to us, but we haven't acquired the license to start using it yet.

Only sys admins have access to Rubrik in our organization. Currently, 10 of our sys admins have access to the system. 

How are customer service and technical support?

Rubrik support is amazing. Whenever we do upgrades, we always open a ticket, and a tech person joins through a tunnel and watches the upgrade while it's being done. It takes everything off our shoulders in terms of managing it. If something goes wrong, they're always available to support us.

Every time we've opened a ticket with them, even just to explore new features or obtain trial licenses, we have always gotten an instant response. The whole proof-of-concept project we did on AWS for DR was supported by their team, and it has been amazing. The experience has been really good.

Most of the time, their turnaround time for tickets is less than 24 hours, especially with high-priority tickets. Recently, we have had some issues with our VM storage sizes not being reflected properly. We were looking at a capacity report and seeing some of the VMs using way more storage on Rubrik than they should. This has been a difficult problem and they have continued to escalate it to different engineers. That is the longest interaction we have had, and the issue is still pending.

We are not running the bleeding edge, so there is a possibility that if we do switch to 5.2 we might see an improvement already on that deduplication; that might be the reason that this is happening. They are looking into it. They have suggested a couple of actions from our end to actually delete those backups, archive them, and restart the backups, but they're still looking into it.

Which solution did I use previously and why did I switch?

We wanted to get away from tapes. We tried Veeam but it did not work very well for us. There were a couple of shortcomings which we couldn't maintain, plus it wasn't cloud-ready at that moment, at least not to the extent that Rubrik was.

Rubrik was very fresh in the market at that time, but it was bringing features that we were looking for. We were already set on using either Azure or AWS and it had the needed support for them.

How was the initial setup?

I've been involved with upgrades but not an install, because we just have the one on-premises device. I've been involved in multiple proofs of concept. For example, they launched a couple of features along the way, where we were testing cloud workloads and converting our images to native AWS images so that we could use it as a disaster recovery site in the future, if needed. All of our backups are going to AWS.

Upgrades are very straightforward. Their support is always with us, so we haven't had any hiccups during the upgrades. They go very smoothly. I've been involved in multiple upgrades, and we were at some point running the bleeding edge software, when we were looking for some features that were available, without any issues. So we did upgrade to the latest and greatest version. Our general policy is to stay one version behind to iron out all the bugs. But with Rubrik we have attempted to run the latest version, to use the features, and it has been stable enough for us and the upgrades have gone smoothly.

We usually block out a two-hour maintenance window for upgrades. There have been major upgrades which required some database work, and they have taken more time. In the move from version 4 to version 5 their whole database infrastructure was changed.

What's my experience with pricing, setup cost, and licensing?

We got grandfathered in the licensing terms. Their licensing is much more narrow now and you have to buy licenses for every cloud feature, but we got most of those things as a package.

We got really good pricing because we're in the education sector and we were one of the first big organizations to start using Rubrik.

Which other solutions did I evaluate?

Recently, when we were looking for direct backup to Glacier, we started using CloudBerry, which is a very basic product. It's a standalone install on our media servers and it's directly backing up to Glacier. It's a single-unit license on a single server; there's no hardware involved with it.

The only advantage of CloudBerry is that we're not keeping an on-premises copy. When we take a backup with Rubrik it creates an on-premises copy of all of our media files and then uploads them, and that requires more storage on Briks that we don't want to spend money on. The Rubrik feature we tested, where you connect to NAS storage, wasn't available when we acquired the license from CloudBerry.

What other advice do I have?

Rubrik is an amazing product. There are some features still missing. For example, you cannot do a granular backup or restore of Active Directory. That has been on my wish list. I have posted it on their tech forum, where people discuss new features and things they are launching. I know that it will come, because they have been adding other granular backup support with VSS. AD-level granular backup, so we can restore a single account or a single computer, is one of the last features we are requesting. They usually do bring out whatever features we request in their next update.

We have not used the solution's ransomware recovery. I have attended a couple of seminars where they have recently been talking about that, but we haven't tested it. We haven't had any incidents which would require us to use that feature.

We have also not used its pre-built integrations or API support for integrations with other solutions. We played with a couple of features, such as the organization features to segregate some of our VMs, but we found that it was not possible the way we handle the system. We wanted to make our domain controller backups inaccessible to our backup administrators, because we wanted that to be part of the DCA job. So we explored the organizations, but the way it works we would have had to move everything into an organization and our backup administrators were taking care of everything except domain controllers. So we dropped the idea of using organizations.

In terms of downtime, I don't think Rubrik has reduced that in a meaningful way. We have a pretty redundant environment anyway. If something happens to our VMware hosts, the VMs automatically fail over to other hosts so there is rarely any downtime. We have been off physical servers for quite some time. If there were physical servers, Rubrik could help reduce downtime, but since we don't have physical servers we don't even know what the recovery would look like with Rubrik. With tapes it was crazy when something happened. If someone did not look at RAID and we had a two-drive failure or a three-drive failure, then it would be a full recovery from tape. But now, because everything is running on VMs, we have no downtime, most of the time.

Overall the product is really good. Rubrik is very competitive. Even if you now look at their positioning on the industry review sites, they are doing really well. It's a very good product. We recommended the product to our Central IT department. We are Harvard Law School, but Harvard has a Central IT which manages other schools, and they are doing a PoC right now. It's a good product to recommend.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
David Nahtigal - PeerSpot reviewer
IT System Engineer at a real estate/law firm with 10,001+ employees
Real User
Perfect match for complex environments, as it supports all types of infrastructure
Pros and Cons
  • "We have VMware, Hyper-V, Oracle, and Microsoft SQL. We have a lot of different systems, and all of them are supported under one licensing agreement. That's one of the benefits."
  • "We had some small issues with the reporting, but that was just a matter of fine-tuning the kinds of messages we receive by email. It was a little overwhelming in the initial configuration. So we reviewed our configuration with our partner and customized the reports so that we only get the important reports. I haven't seen any big issues or things that the solution is missing."

What is our primary use case?

The primary use case is as a backup and recovery solution. We have two data centers and we have a Commvault server for replication in both. We back up all our infrastructure with this solution, from Active Directory to SQL, web servers, file servers, databases, et cetera.

How has it helped my organization?

Commvault helps to ensure broad coverage with the discovery of unprotected workloads. The Discovery feature lists all the resources that we have, all the virtual servers and all the physical servers. You can also automatically deploy agents or set up schedules. At first, we did some manual tuning to customize it before deployment. Now, the virtual infrastructure administrator just has to add the VM tag on the virtual machine and that machine will automatically be backed up in the next schedule. It's a good automation feature.
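The tag-driven automation above boils down to a simple selection pass over the VM inventory. A hedged sketch of the idea (the record shape and tag name are assumptions, not Commvault's actual schema):

```python
# Illustrative inventory records; field names are assumptions made for
# this sketch, not the product's real data model.
def vms_to_protect(vms, tag="backup"):
    """Select the VMs the next scheduled job should pick up, based on a
    tag the virtual-infrastructure administrator sets on the machine."""
    return [vm["name"] for vm in vms if tag in vm["tags"]]

inventory = [
    {"name": "app-01", "tags": ["backup", "prod"]},
    {"name": "scratch-01", "tags": ["test"]},
    {"name": "db-02", "tags": ["backup"]},
]
selected = vms_to_protect(inventory)
print(selected)
```

The point of the design is that protection becomes declarative: the admin tags the machine once, and discovery plus the schedule do the rest.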

It also helps by minimizing the time our admins spend on backup tasks so that they can spend time on other projects. Before Commvault, we had two backup administrators who were using a backup and restore application for every restore and test that we had to do. It was a full-time job just monitoring the backups and doing the restores. With our new solution from Commvault, we have successfully implemented web-based backup and restore management for our different teams, including our file server, database, and Exchange teams. We split operations among those teams and each one has access to the backup Web Console. This console from Commvault is very useful for segmenting the restore options. That way, the database backup administrator only has access to the database servers, can only do backups and restores of databases, and does not have access to Active Directory or file servers. The web-based backup and restore is a really great option.

Whereas before, we had one full-time engineer doing backups and restores, now that engineer is only working on it for two to four hours per week. Across our four teams, it's saving us about 10 to 12 hours a week.

The solution has helped to reduce storage costs as well. Commvault has an option to move data off primary storage. When you do a backup, it scans all the files on the file server, and you can set a policy to remove all files that are more than, say, three years old from primary storage. On the primary storage, only a stub remains that links to the backup copy. When a user opens one of those old files, it is automatically restored from secondary storage and the user can access it without any problem. For our IT team, it has saved us between 5 and 10 percent of storage. It depends on how widely you implement the solution and the policies you set; you could save 50 percent with a broader policy.
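The age-based move off primary storage can be pictured as a selection pass over file metadata. A sketch under stated assumptions (the record shape and the three-year cutoff mirror the policy described; the stub-and-recall mechanics are the product's, not shown here):

```python
from datetime import datetime, timedelta

def select_for_archive(files, now, max_age_days=3 * 365):
    """Pick files not modified within the retention window. In the real
    product, each selected file is replaced on primary storage by a stub
    that recalls it from backup storage when opened."""
    cutoff = now - timedelta(days=max_age_days)
    return [f["path"] for f in files if f["mtime"] < cutoff]

# Hypothetical file records for illustration.
now = datetime(2022, 7, 1)
files = [
    {"path": "/share/report-2015.xlsx", "mtime": datetime(2015, 3, 1)},
    {"path": "/share/notes-2022.txt", "mtime": datetime(2022, 6, 1)},
]
to_archive = select_for_archive(files, now)
print(to_archive)
```

How much this saves depends entirely on how much cold data the policy catches, which is why the reviewer's figure ranges from 5 to 50 percent.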

We have also saved on infrastructure costs because Commvault takes less time to do the backup jobs, due to the deduplication. Also, the background tasks that are used to copy the backup jobs to tape are deduplicated. The full backup of our infrastructure can now be done in a couple of hours during the night. Before, some backup tasks would take more than a day, on the weekend. There has been a reduction of 80 or 90 percent in the backup window.
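Deduplication shortens the backup window because repeated blocks are stored, and shipped to tape, only once. A toy illustration of the principle, not Commvault's engine:

```python
import hashlib

BLOCK = 4096  # fixed-size chunks, standing in for a dedup engine's blocks

def dedup_store(data: bytes):
    """Split data into blocks and keep only one copy of each unique block,
    plus an ordered recipe of hashes to reassemble the original stream."""
    store, recipe = {}, []
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # store each unique block once
        recipe.append(digest)
    return store, recipe

def reassemble(store, recipe):
    return b"".join(store[d] for d in recipe)

# A "backup" with heavy repetition dedupes to a fraction of its size.
data = b"A" * (8 * BLOCK) + b"B" * (8 * BLOCK)  # 16 blocks, only 2 unique
store, recipe = dedup_store(data)
ratio = len(data) / sum(len(c) for c in store.values())
print(f"dedup ratio: {ratio:.1f}x")  # prints "dedup ratio: 8.0x"
```

Real engines use variable-size chunking and cross-job dedup, but the mechanism is the same: less unique data means less to write, which is what collapsed the weekend-long backup window into a few hours.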

What is most valuable?

Commvault's most valuable features are its 

  • deduplication
  • encryption
  • support for many OSs
  • support for different infrastructures. 

We have VMware, Hyper-V, Oracle, and Microsoft SQL. We have a lot of different systems, and all of them are supported under one licensing agreement. That's one of the benefits.

We use two user interfaces on a regular basis. One is the Web Console, which is simple and has all the necessary functionality. You can add servers, back up servers, and restore. We also have a replication solution implemented and we use the Web Console for that as well. But for the initial configuration and for some deeper configurations, we also use the Commvault application. It's big and has all the fine-tuning options.

The solution's Command Center is very straightforward. It has an intuitive user interface with graphs, tables, alerts, as well as many options for alerting and messaging. Of course, you have to get used to the environment, but it's easy to use.

It is also important that Commvault provides a single platform to move, manage, and recover data across on-premises locations. That's because we have different storage and virtualization platforms. We have no problem if the file resides, say, on NetApp storage and we have to restore data to a workstation or some kind of Windows Server. Also, when we did some migrations from our old Hyper-V cluster to the new VMware cluster, those integrations between different infrastructures were successfully accomplished with the Commvault solution. We have no issues with different types of resources we need to back up.

In addition, the recovery options are pretty straightforward. For example, if you choose a virtual machine, you can restore the full virtual machine, you can restore the virtual machine on a different platform, you can restore just a virtual disk, or you can restore just a file within the virtual machine. You have all the options. In the web-based user interface, you can also restore using download options. You can browse through the files or virtual machines and download the file from the backup. They have a great range of restore options.

What needs improvement?

We had some small issues with the reporting, but that was just a matter of fine-tuning the kinds of messages we receive by email. It was a little overwhelming in the initial configuration. So we reviewed our configuration with our partner and customized the reports so that we only get the important reports. I haven't seen any big issues or things that the solution is missing.

For how long have I used the solution?

We implemented Commvault at the start of the year, so we have been using it for almost a year now.

What do I think about the stability of the solution?

We had one issue. The Commvault server is an Active-Passive cluster and the Active node had some hiccups. It wasn't something serious, but the Commvault server was unable to connect to one of the agents. I believe our partner discovered it because they also receive messages from our Commvault solution. They just informed us that the Commvault server had to be restarted. We did so during working hours because backups are done at night, and there were no issues. It was a standard procedure and we have had no other big issues.

What do I think about the scalability of the solution?

At the start of the Commvault project, we put together a list of all the resources that we have. They counted our resources and gave us the exact number of clients we needed to buy to cover all of our infrastructure and we had no issue there. Of course, we also have some plans for the growth of our infrastructure. If we have any big upgrades, we will also upgrade the Commvault infrastructure.

We have a lot of Commvault's features implemented. We're also in the process of testing the backup of endpoints, such as laptops and devices from end-users. There are just a few features from Commvault that we don't use.

How are customer service and support?

We use technical support through our partner because our partner has a lot of inside knowledge. For the majority of issues our partner gives us the solution, but they have had to report some small issues to Commvault support. They spoke directly with Commvault support and the solution was available in a few days. It was a very good troubleshooting experience.

Which solution did I use previously and why did I switch?

We used NetWorker and Veeam. The NetWorker solution was the older solution and, in some very old clusters, we also used TSM (Tivoli Storage Manager) from IBM. The TSM solution was no longer supported and the Dell EMC NetWorker solution, which we used for our physical servers, was difficult to maintain. Veeam was a good solution for our VMware infrastructure, but we needed a solution with support for a wider variety of infrastructure types. One of our major goals was to eliminate our multiple backup solutions by going with Commvault.

How was the initial setup?

If we had to do the initial setup ourselves, it would be complex, of course, because we have a big infrastructure with different types of targets. But our partners helped and they managed to cover all the tests that we implemented at the start of the project. So, overall, the setup went really well. It took just a few days, maybe a week, to add our agents. After the initial configuration, it was really easy to roll out the solution to our entire infrastructure.

What about the implementation team?

Our partners, called Our Space Appliances, are system integrators in backup and storage solutions. They know our infrastructure.

Which other solutions did I evaluate?

We had a process for choosing a vendor. We called a number of vendors and had proposals from Veeam, NetWorker, Cohesity, and Commvault.

The big pro for Commvault was that it was a single solution for our entire infrastructure. The licensing model was also an advantage and the experience of the partner was also a big plus. Some of the other solutions we evaluated did not make it to the second round because they did not support all the infrastructure we have in our environment. In the last round, the battle came down to pricing, as well as some small features, and Commvault was the best in all the criteria.

What other advice do I have?

Commvault is a comprehensive, but perhaps complex, solution when you first start with it. But that's why it is a perfect match for complex infrastructure, as it supports all types of infrastructure. Commvault is not appropriate for small businesses with just one type of virtual environment; there are different vendors that may be better for that use case. But when looking at enterprise backup and recovery options, Commvault is the easiest to use, and it has the widest range of features.

We are currently moving to Exchange Online. We have between 1,500 and 2,000 users. We have already deployed Teams on the cloud, and now we are migrating user mailboxes to cloud. Our next step, in the following month, will be a backup of Microsoft cloud solutions through Commvault.

In terms of the coverage of Commvault, we have a big Oracle Database and the Oracle administrators are a separate team. They do their own backups using RMAN. They then move the backup to separate Sun ZFS storage. We also tried that backup with Commvault, using the Commvault agent to run RMAN. The test went well and the backup was good, but the database team was used to their old solution. So we agreed to implement a backup of the ZFS file server instead.

Ours is an all-on-prem solution so we don't have any other networks being backed up. We do have a DMZ with different VLANs and so there were some problems. We had to install an agent on the DMZ zone, an agent that has access to resources in the demilitarized network. But it's a no-brainer. We just have to open a specific port so that the backup agent can communicate with the CommCell server, and the resources are backed up successfully.
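That "open a specific port" step is easy to verify from the DMZ side with a plain TCP connectivity check. A minimal sketch (the demo uses a throwaway localhost listener; the real host and port depend on the CommCell deployment):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway listener on localhost, standing in for the
# CommCell server port (a placeholder, not the product's actual port).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]
reachable = can_reach("127.0.0.1", open_port)
listener.close()
unreachable = can_reach("127.0.0.1", open_port)  # listener is gone now
print(reachable, unreachable)
```

Running a check like this from the DMZ agent's host quickly distinguishes a firewall rule problem from an agent configuration problem.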

In addition, to protect against ransomware, we use Commvault's alert options, because Commvault can detect unusually large changes in the data with its AI features. This is the first line of defense. The second line of defense is that we are now in the process of implementing secondary, offline storage to ensure an air gap between the primary backup, the replicated backup, and the offline backup storage. In case of a ransomware attack, we will have off-site backup storage.


Disclosure: I am a real user, and this review is based on my own experience and opinions.