Buyer's Guide
Backup and Recovery Software
May 2023

Read reviews of Veeam Backup & Replication alternatives and competitors

Technical Director at Computer Driven Solutions
Real User
If the physical hardware has a problem, then we can utilize this appliance, turn on the virtual machine, and carry on running the business while correcting the issue
Pros and Cons
  • "The recovery of data is the most valuable feature. The software backup, which is just a program that gets installed on a server, can back up to the cloud. You can install that on a server or PC, and that will simply back up a user's files and folders. If it is installed on the server, we just back up the relevant data. Recovering that, if there has been a malicious attack on a business or anything like that, has been invaluable in the past. The good features about that are obviously, if the physical hardware has a problem, then we can utilize the appliance, turn on the virtual machine, and carry on running the business while we put the hardware back and correct the issue."
  • "When you are ordering hardware appliances, they have to be delivered from America. In the past, hard drives on the appliance have been simple SSD drives that are installed. However, they don't have a local supply for the SSD drives in the UK. They have to be exported from the US, arrive, and then I have to go and install them. Then, they will rebuild it from their side of things. However, I could order that same SSD drive online and get it the next day. So, I have to wait days for things to come when I could get the exact same drive the next day in the UK, if I wanted to. That causes a bit of a problem. I don't know how many businesses they have in the UK, but I do think that having to import stuff from the US is a time-consuming problem. If there was a holding in the UK, then we wouldn't have that delay in time."

What is our primary use case?

We are an IT support company, so we resell IT and support services. It is our customers who have installations of disaster recovery appliances and cloud backup solutions.

We are our customers' IT support company. We are the ones who implement and support it. Our customer pays the bill, but we do all the support and look after it. I do not back my own data up on Infrascale, but I definitely back up my customers' data.

We sell and support two products for our customers. We have four disaster recovery appliances onsite that then back up to the cloud from the appliance. We also have quite a few people on just the cloud backup. So, we use cloud backup and DRaaS, which is disaster recovery as a service.

When there is an appliance-type installation, which is a physical hardware installation, we go to the site and install a piece of hardware. That piece of hardware communicates with their servers onsite. Their servers are hosts with virtual servers built onto them. The complete virtual machine is backed up maybe twice or three times daily to the hardware appliance provided by Infrascale. That could then become a replacement for the server, if the server had a physical problem and needed to be shut down. We can turn the physical server off onsite, go to the appliance provided by Infrascale, boot up the virtual machine on the appliance, and then it would run the business as if the server were still running. So, it is hardware redundancy for the server.

It backs up the virtual machines and all their files, so it can serve as a data recovery tool as well. Also, if the entire building burnt down, we could jump onto the version in the cloud, boot it up, and people from all around the world could log into that server and carry on working.

Imagine an appliance, similar to installing a second server, that backs up a virtual machine to the appliance. It has disk space on the appliance, then it backs up that virtual machine from the appliance to the cloud. Our cloud is based in the UK, which is also provisioned by Infrascale. So, we implement that sort of system, which is a little bit like SaaS, but it is a disaster recovery solution. We also have cloud backup, which is a software installation to a server, that then backs up certain files and folders through a cloud provision somewhere in the world.

We are the actual customer because we sign for these products, and our customer doesn't. We are the actual people who lease these things from them.

How has it helped my organization?

Now, if one of our businesses has an issue, I am confident that we would be able to get them booted and running within a couple of hours. It would affect their business, but it is a disaster recovery scenario, so there has obviously been a disaster. 

What is most valuable?

The recovery of data is the most valuable feature. The software backup, which is just a program that gets installed on a server, can back up to the cloud. You can install it on a server or PC, and it will simply back up a user's files and folders. If it is installed on the server, we just back up the relevant data. Recovering that data after a malicious attack on a business has been invaluable in the past. The other valuable aspect is that if the physical hardware has a problem, we can utilize the appliance, turn on the virtual machine, and carry on running the business while we repair the hardware and correct the issue.

The Boot Verification feature gives you a snapshot to show what would happen if a virtual machine were booted. From that, you can tell whether the backup was successful. 

I find the dashboard fairly straightforward. It is fairly in-depth from day one. The more you use it, the more you get used to it. I find it fairly straightforward now for making some limited changes that won't really cause any problems. It has a good user interface.

The speed of the solution’s restore functionality is very quick. It works a treat and does the job perfectly. I don't think we have ever come to the point of thinking the product isn't quick.

What needs improvement?

We did have a major problem last year in March. Somebody attacked some servers being supported by Infrascale and managed to wipe the servers as well as wiping the appliance. However, they didn't manage to wipe the cloud. So, what was on the cloud had to be downloaded to the appliance again. The customer was probably down for about three days. This was a very difficult situation for us to be in.

I think somebody accessed the server and was able to get onto the appliance because we had saved the password. Now, I know better than to save the password. Also, the password was still the default password. The new implementation makes you change the password, so you can't keep the default. However, four years ago, when we implemented it, the default password was still on that appliance when it got wiped.

While this was a worst-case scenario, I don't blame Infrascale for the amount of time it took. However, it was difficult for us because we were trying to placate our customer, which was hard, because they have a 25-million-pound turnover business. They were not happy, but we are still working with them. 

I think I'd be more confident now dealing with the problem. Plus, we monitor the systems more closely. Whereas, previously, I presumed that everything was going well without really checking. Now, I have learned that I need to be on top of any issues. So, I am checking the appliances and cloud solution backups daily. So, we are a bit better switched on with supporting it.

When you are ordering hardware appliances, they have to be delivered from America. In the past, the hard drives on the appliance have been simple SSD drives. However, Infrascale doesn't have a local supply of those SSD drives in the UK. They have to be shipped from the US, arrive, and then I have to go and install them, after which Infrascale rebuilds the appliance from their side. Meanwhile, I could order the exact same SSD drive online in the UK and get it the next day. So, I have to wait days for parts that I could get locally the next day, which causes a bit of a problem. I don't know how many businesses they have in the UK, but having to import stock from the US is a time-consuming problem. If there were stock held in the UK, we wouldn't have that delay.

When they ship stuff to me from the US, invariably the delivery company, DHL, is looking for EORI numbers that we don't have. So, they try to involve us in the export process, which has nothing at all to do with us; we are simply the customer. If I had to moan about anything, that would be it.

Where the dashboard is concerned, I am okay with it. I am looking at one now and understand what I am looking at. When you first get in, it is difficult, but I believe that they now offer training for it. Given that we are trying to support our customers in the UK, it is good to have the knowledge, know what you are looking at, see the size of the protected data, and understand it a little better. I have been doing IT support and implementations for the best part of 30 years, and it still took me a bit of time to get my head around some of the ways things are done.

For how long have I used the solution?

I have been using Infrascale for over four years, since April 2017.

What do I think about the stability of the solution?

It is very stable. I have never had any problems with the software. If we install the software on anything, it does what it says on the tin, which is that it will run a backup at a certain time. 

There are some issues with the software backup: out-of-the-box, it cannot back up redirected folders on a server. When you install the software onto the server, you are installing it as an administrator, and an administrator account on a server does not have access to a user's redirected folders. 

The user is actually the owner of that folder on the server; even the administrator can't break into it. While you could force your way in, that breaks the policy, so we don't try to do that. Out-of-the-box, then, software installed onto a server does not back up people's redirected folders. If a user saves a lot of files and folders onto their desktop, those folders are redirected back to the server, so quite a bit of their data is missed by the backup. We do get a lot of errors based on that. We have found a bit of a workaround, but that workaround doesn't always work either, which causes problems.

I check daily that the backups have gone through. If the backups aren't working correctly, I log onto the appliances and check why the appliances haven't backed up correctly. If it is something that I don't quite understand, then I will pass that down to the support at Infrascale. Where the appliance and cloud backup are concerned, there is very little maintenance to do. 

What do I think about the scalability of the solution?

We have had problems with this. Again, this is down to us not really understanding the product in the first place. In the case of the company where we had problems last year, when it came to bringing the VMs back onto the appliance, we had to download them from the cloud. We got the VMs back onto the appliance, then found that when we tried to boot them, the hardware was insufficient to support their workload. The product ran like a dog when it was booted up on the appliance, so it was unusable for the customer. That was down to us. When we were sizing the product, there was a misunderstanding about the minimum that would allow the server to run. We knew the server itself would run on 8 GB. However, with the customer's workload on there, 8 GB was insufficient. So, it caused some problems.

One of our companies, who has had the product for three years, is at the point where I don't think the appliance has now got enough space to back up what they have. The only way that we can do anything with it is to keep getting rid of backups, which we probably shouldn't do, but that is what we are going through at the moment. So, we have to really micromanage the backup that is happening so it doesn't go over. The customer could upgrade it, but our customer isn't realistically going to put his hand in his pocket and pay any more at the moment.

For all the people that we have on it, we have six terabytes of space in the cloud for the cloud backup solution as well as four appliances. I am the one who looks after all of it. I monitor it, and if there is a problem, then I deal with it. The guys who work for us run the IT support side of things, and I look after the backup side of things.

We have a couple of thousand endpoints. We are quite a small IT support company.

How are customer service and technical support?

I am perfectly happy with the support that I receive from Infrascale. The technical support is very good. I deal mostly with one guy there, Maxim, who is fantastic. He understands what the issues are and is very helpful. 

They work in Ukraine, so there is a little bit of a language barrier. When we first started working with them, I found it very slow to get my message across, but things have improved. They have now given me a specific person to always deal with, which suits me, and I'm happy with that. You develop a bond with the person and know that they understand your systems. Previously, we could get anybody, and we would have to go through the same process of explaining things all over again. To start with, it was like pulling teeth, but now it's improved.

They are proactive to the point that Maxim will check my system. If he sees something he will log a call with his own support desk. He phoned me up the other day, and I said, "Hello Max, how are you doing?" He said, "Yeah, good." I said, "I've seen a ticket was logged, but I haven't logged it." He said, "No, I logged it for you. Because I noticed something was getting high, and I wanted to have a chat with you about it." That is great, because that is proactive monitoring.

When we first started to do the appliances, I really didn't understand what the service was. I thought it was a managed service for backups. I didn't realize that it would be me who would be managing it. But, the more I have been involved in it, the more I have become accustomed to managing all of its appliances and installations. I take the responsibility for making sure that it works. Support-wise, it has improved. I have seen the business improve over the last 12 months. I think the business got sold or bought out. However, there have definitely been recent improvements with the support.

If I am ever going to do anything that I think is outside of my remit, I will contact support and go through one of their support guys.

Nobody from Infrascale has ever phoned me up and said that they want to test anything.

Which solution did I use previously and why did I switch?

We used Veeam Backup. Mostly, we would use Windows Backup to USB drives.

I switched to Infrascale because I wanted a solution that gave me what I was looking for. Backing up to the cloud directly from your server causes slowdown. I wanted something that would let us back up quickly to a device onsite, and then trickle that backup up to the cloud, as long as it did the job. I checked online, had a look at what customers were using, and read through some material on Infrascale. It just clicked that it seemed to have what I was looking for. So, I contacted them and have never had a problem. I did the due diligence myself, so I am happy with it.

How was the initial setup?

Now, the initial setup is simple. Years ago, when we first started, it wasn't simple.

I installed an appliance yesterday and had it installed within an hour. From the box to the customer's site, it was installed and ready for the next stage. Then, one of the implementation guys from Infrascale gave me a call, and we liaised with each other because they like us to set it up. 

They tell me what they would like me to do. Now, the setup is much easier than it was when I very first had an appliance. It seemed to take forever when I first had an appliance. However, yesterday when I did it, it took no longer than an hour.

Because I have implemented these a few times now, I have a bit of knowledge based on some of the things that you come up against when it comes to making it all work exactly the way you want. There were things yesterday that I knew we should check to make sure it was working, because a couple of the backups failed to start. I actually told the guy at the other end, "Let's try this." So, we tried something and made it work because of the knowledge that I have built up. It is a lot easier than what it used to be, because it used to be difficult. Now, out-of-the-box, it is fairly easy.

When we install a hardware server, we build that hardware server with virtual machines on it. The physical server is set up as a host with Hyper-V installed on it. Then, we build the virtual machines on it, and those virtual machines are essentially the servers that are onsite. For smaller customers, we wouldn't suggest this because it is quite expensive per month; it is a fairly expensive product. However, for customers of a certain turnover, we would suggest it and explain in full what the disaster recovery solution offers. What we say to our customer is, "Our backup strategy is to back up between two and three times a day: before business starts, as business ends, and during business hours." Any one of those backups is bootable, and the files are recoverable from three points in each day. So, that is our backup strategy, which is really based on the appliance. 

What about the implementation team?

I help with the implementation. In fact, I implemented one yesterday with some of the guys; Sergei is the implementation guy, so we did an implementation of an appliance together. I do a lot of the support for my customers, utilizing the support services of Infrascale. I deal with a guy called Maxim on a lot of cases, directly with the guys at Infrascale. 

They have never physically tested any of our systems. I have never had a phone call from them to say, "We want to run some tests on something." We have installed the stuff, but they have never tested it.

What was our ROI?

I make sure that this all works. While it does cost a lot of money, it is a good service. I am well bought into what it can do, because as much as it protects the customer, it protects me as well. If we had not had this solution 12 months ago, then the company that we support would no longer want us to work with them, because they would have nothing at all. So, it saved us.

What's my experience with pricing, setup cost, and licensing?

We pay 600 pounds per month for six terabytes of cloud storage and backup. This is a fixed cost of 100 pounds per terabyte.

We pay 752.50 pounds per month for two appliances: one costs 301 pounds per month and the other costs 451.50 pounds per month. 

Another appliance costs us 327.60 pounds. 

The newest appliance that we installed yesterday is costing us 511 pounds per month because it has better speed and memory. 

The appliances have different prices because of storage, size, and memory. For example, the older machines support more virtual machines, whereas the new one only supports one virtual machine. As we have purchased the later appliances, they have probably been a little bit more expensive because they have to be good enough to keep the business running if the physical server goes down. We learned our lesson from the one that went down when we tried to run products and it wasn't quick enough.

What other advice do I have?

I push it in my own business. I wouldn't do that if I didn't think it was any good. I would definitely advise others that it is a good product.

If you want to back up redirected profiles on a server, you have to go into the scheduler and change the backup task to run as a system event, not as an administrator. That is the best thing that I learned, because it enables you to back up redirected folders. However, if you sign up for Infrascale, even they don't know that. So, you can get it to work, but it takes a bit of messing about. If I had to advise somebody, I would say: "Get Infrascale, put it on your server, and you can back the data up, including the user profiles. You will need to go into the scheduler and change the task to run as a system event, not with administrator rights. It has to run as SYSTEM, and it will then work."
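The scheduler change described above can be scripted with the Windows `schtasks /Change` command, which switches an existing task's run-as account. This is a minimal sketch, wrapped in Python; the task name "Infrascale Backup" is a hypothetical example, so check the actual task name in Task Scheduler on the server.

```python
import subprocess

def build_system_task_cmd(task_name: str) -> list[str]:
    """Build a schtasks command that repoints an existing scheduled task
    to run under the local SYSTEM account instead of an administrator."""
    # /TN names the task; /RU SYSTEM switches its run-as user to SYSTEM,
    # which (per the review) is what allows redirected folders to be read.
    return ["schtasks", "/Change", "/TN", task_name, "/RU", "SYSTEM"]

# "Infrascale Backup" is a placeholder task name, not the product's real one.
cmd = build_system_task_cmd("Infrascale Backup")
print(" ".join(cmd))
# To actually apply it on the server (requires an elevated prompt):
# subprocess.run(cmd, check=True)
```

The same change can be made by hand in Task Scheduler by editing the task's "Run as" account, which may be simpler for a one-off fix.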

The speed of Infrascale’s backup functionality is good. I have had no complaints. The speed of backing up to the cloud using the software solution is based on the speed of the server at one end, how quick it can run the program, and the upload speed of the site. Realistically, that has nothing to do with the solution provided by Infrascale. Where the appliance is concerned, because the virtual machine is backed up to the appliance, there is no lag on the servers from the appliance, which then backs up to the cloud. That is based on the speed of the customer's bandwidth, because that is how it gets into the cloud. Solution-wise, I think the speed of it is just fine.

The speed of recovered documents is more based on a customer's broadband.

After a few weeks, anyone working with Infrascale should really understand the product, and it should be fairly straightforward. 

We offer it to all our customers. It depends on whether they are prepared to spend any extra money on the solution. So, any new customer who comes onto us, we suggest that they have an offsite cloud data backup that will protect their data only in the cloud. Then, should anything happen, it's recoverable to a drive and we would be able to give it back to them. Backup is a service, and it's also something that they can do themselves locally. We do try and get as many customers onto it as possible because it helps us.

There is a slight language problem. It is a bit hard to get used to initially, because support is in Ukraine. The language barrier would mark it down one point, so it is a nine out of 10 for me.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Server Engineering Services Lead at a mining and metals company with 10,001+ employees
Real User
Top 20
Good OR and DR capabilities, performs well, offers data security, and continuous file versioning helps recover from hardware failures
Pros and Cons
  • "The biggest and most impressive thing for us is the operational recovery (OR) and disaster recovery (DR) capabilities that Nasuni has. If a filer goes down, or an ESX server goes down, then we can quickly recover."
  • "When we have to rebuild a filer or put a new one at a site, one of the things that I would like to be able to do is just repoint the data from Azure to it. As it is now, you need to copy it using a method like Robocopy."

What is our primary use case?

We use Nasuni to provide storage at various locations. It is for office-type files that they would use for day-to-day office work, such as spreadsheets. None of it is critical data.

Each group at each site has its own data store. For example, HR has its own, and finance has its own. All of these different groups at different locations use this data, and they use these filers to store it.

The Nasuni filers are on-site, and we have virtual edge appliances on ESX servers at about 35 sites globally. The data stored at these sites is then fed up into Azure and we have all of our data stored there.

How has it helped my organization?

The OR and DR capabilities have been a very big help for us. Previously, with the solutions we had, it would have taken weeks sometimes to get things fixed and back up and running for people. Now, it only takes a matter of minutes.

It used to be a lot of trouble to bring data back up and a lot of the time, it was read-only, so the people couldn't use it very well. Now, with Nasuni, we're able to pretty much keep their experience seamless, no matter how much trouble the hardware is in at the site.

The Nasuni filers are easy to manage, although the process is similar to what we had before. We have a report that comes out three times a day that gives us the amount of data that's in the queue to be uploaded to Azure on each individual filer. We keep track of that to make sure nothing is getting out of hand. It also tells us if the filer has been restarted and how long ago that happened. It gives us a quick view of everything and how much total we're using within Nasuni. This report is something we created on our own to keep track of things.
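A check like the one this custom report supports could be scripted along these lines. This is only a sketch: the row format (filer name plus queued gigabytes) and the 50 GB alert threshold are assumptions, since the real report is one the team built themselves.

```python
# Threshold above which a filer's Azure upload queue is considered
# "getting out of hand" -- an assumed value for illustration.
QUEUE_ALERT_GB = 50

def flag_backlogged_filers(report_rows):
    """Return the names of filers whose upload queue exceeds the threshold.

    report_rows is an iterable of (filer_name, queued_gb) pairs, as might
    be parsed out of a periodic status report.
    """
    return [name for name, queued_gb in report_rows if queued_gb > QUEUE_ALERT_GB]

# Example rows with hypothetical site names and queue sizes.
rows = [("site-london", 3.2), ("site-leeds", 71.5), ("site-bristol", 0.0)]
print(flag_backlogged_filers(rows))  # -> ['site-leeds']
```

Running such a check three times a day, as the team does, catches a filer whose queue keeps growing before it becomes a recovery-point problem.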

If a user deletes a file or a file becomes corrupted, it's easy for them to get it restored. There is very little chance that the data is going to be gone. We've had a few people delete things, or files have become corrupted, and we were able to get the file back to them in the state it was in about five minutes before they had the problem, without any issues. Overall, the continuous file versioning is really helpful.

What is most valuable?

The biggest and most impressive thing for us is the operational recovery (OR) and disaster recovery (DR) capabilities that Nasuni has. If a filer goes down, or an ESX server goes down, then we can quickly recover. For example, we lost a controller the other day and all of the drives were corrupted. We were able to quickly repoint all of the users to a backup filer that we have at our data center, they were back up and running within minutes, and they still have read-write capabilities. Once that ESX server was fixed, we were able to repoint everything back to it in a matter of minutes. People were then again using their local filer to connect.

Nasuni provides continuous file versioning, and we take snapshots on a regular basis. Right now, we have them stored forever, but we're trying to rein that in a little and keep them only for a period of time. Certainly, at this point, we have a lot of file versions.

We have not had a problem with ransomware but if we did, we would be able to restore the data pretty quickly by going back to an older version of the file before the ransomware took over. It is a similar process to the DR, although a little bit different. For us, OR and DR are pretty much the same thing. We haven't had any disasters that we've had to recover from but we've had three or four hardware failures a year that we've had to deal with. The continuous file versioning has helped to fix these problems pretty quickly.

Continuous file versioning also makes it easier for our operations group. The support team is able to restore files quickly, 24/7, and it is less work for them. They have more time to focus on other problems. The end-user also has access to shadow copies through Windows, and they've used that extensively at the sites.

Nasuni has helped to eliminate our on-premises infrastructure. When we moved to Nasuni, we moved to Azure. Before that, we had a large SAN storage that we were using, and we were able to get rid of it. That was a big difference for us.

We were definitely able to save some money because we've eliminated those expensive SAN disks completely. There were some servers at our old data center that we were able to get rid of, as well. There are some new expenses with Azure because we have to pay for the space taken by the snapshots, which is why we're going to put a retention limit in place. Overall, I don't have an exact number but we were able to save money.

Nasuni is transparent to our end-users. We have it all set up as a file server through Microsoft DFS. If you were to ask one of our end-users how they like Nasuni, they would have no idea what you're talking about.

What needs improvement?

One issue that we have is related to copying data out of Nasuni. We just sold a site and it was split into two pieces. One part of it was sold to another company and we kept the other part. At the site, they have a Nasuni filer with about eight terabytes of data. Now, we have to split that data and the problem stems from the fact that the other company doesn't have Nasuni.

This means that we have to copy all of that data back to the site and into a format that they can use, which is probably just a Windows file server, and then we have to split it somehow. I'm not really sure that there's an easy way to do that. It's going to take us a little bit longer to separate this other location, and we're having to invent things as we go along.  

In these areas, it's not as simple as it could be, but it doesn't happen very often. As such, we haven't had to worry about it too often. Although it's not affecting us too much at this point, if there's a problem such that we have trouble getting data out of Nasuni, then that could be an issue. However, for the time being, it seems fine.

When we have to rebuild a filer or put a new one at a site, one of the things that I would like to be able to do is just repoint the data from Azure to it. As it is now, you need to copy it using a method like Robocopy. To me, this seems counterintuitive or like we're going backward a little bit. I would like to see a way to be able to switch them around without any problem. That said, I'm not sure if it would then cause other issues because of how Nasuni works, so it may not be possible.
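The Robocopy-based reseeding described above might be scripted roughly like this. It is a sketch only: the share paths are hypothetical, and whether these particular options suit a given rebuild is a judgement call.

```python
def robocopy_seed_cmd(source: str, dest: str) -> list[str]:
    """Build a Robocopy command to mirror a share onto a rebuilt filer."""
    # /MIR mirrors the whole tree (including deleting extras at the dest);
    # /COPY:DATSOU preserves data, attributes, timestamps, NTFS security,
    # owner, and auditing info; /R:2 /W:5 keep retries short so one locked
    # file doesn't stall a multi-terabyte copy.
    return ["robocopy", source, dest, "/MIR", "/COPY:DATSOU", "/R:2", "/W:5"]

# Placeholder UNC paths for illustration.
cmd = robocopy_seed_cmd(r"\\dr-filer\share", r"\\site-filer\share")
print(" ".join(cmd))
```

Being able to skip this copy entirely and simply repoint the rebuilt filer at the data already in Azure is exactly the improvement the reviewer is asking for.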

For how long have I used the solution?

We started using Nasuni in 2018 and it's been running ever since.

What do I think about the stability of the solution?

Up until about a week ago, the stability has been rock solid. We've actually had a few issues after upgrading to version 9.3 that we're trying to deal with. We have a couple of sites that we're still not sure if Nasuni is the problem, or if it's VMware ESX, and we're working on that. At this point, we're not thinking about rolling back because of all of our sites, only two of them have problems. As such, we think that something else may be going on.

For the most part, it's been extremely stable, with no issues whatsoever. With Nasuni, there has been very little downtime, if any. Most of the sites have never gone down and with the sites that have, there's usually some other external problem.

Overall, it's been very stable for us.

What do I think about the scalability of the solution?

We are limited to the amount of space that we have purchased from Nasuni. If we get close to running out then we just buy more. We still have to pay for the storage within Azure, so we're trying to make sure that it doesn't get out of control. In general, we don't need to add any on demand.

Scalability is not a problem and we can add as many servers and as many filers as we need to, which is really nice. For example, instead of buying tape drives and using that type of backup system, we decided to take a few sites where we have some smaller servers and we use Nasuni to back them up. We use a separate filer to back up all of that data. It's been nice in that way, where we've been able to do things with it that we hadn't originally thought of.

If it should happen that we make a large acquisition, and we bought 10 sites, we could easily put in 10 more filers. It wouldn't be a problem.

Among our 35 sites, we have between 10,000 and 12,000 users. A lot of them are office-type people, such as those from HR and finance. All of us, including administrators and developers, use it for everyday files. The developers wouldn't store code on these filers because that's not what they're for. Our Nasuni environment is specifically for data that helps the business run but isn't critical to producing goods or shipping them; that is a completely different system. Anybody in the company who needs to access ordinary office data goes through Nasuni.

We have approximately 210 terabytes stored in Nasuni right now. That continues to grow at perhaps a terabyte or two per month. I don't think we'll be moving it anywhere else at this point. Down the road, we do have a very large file system at our data center that we're considering moving, but it's going to take a lot of time to do that one because it's 400 terabytes and it's a lot of old data that we have to clean up first. But that's pretty much the only area that I would see us doing something.
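As a rough illustration of that growth, the figures above can be turned into a small capacity-planning sketch. Only the 210 TB current usage and the 1-2 TB/month growth rate come from this review; the licensed-capacity figure is a hypothetical example.

```python
# Rough capacity-planning sketch using the figures mentioned in the review.
# The purchased-capacity figure (250 TB) is hypothetical; only the current
# usage (210 TB) and the growth range (1-2 TB/month) come from the text.

current_tb = 210                  # current data stored in Nasuni
growth_low, growth_high = 1, 2    # TB added per month (reported range)
purchased_tb = 250                # hypothetical licensed capacity

# Months of headroom before another chunk of capacity must be purchased
months_best = (purchased_tb - current_tb) / growth_low
months_worst = (purchased_tb - current_tb) / growth_high

print(f"Capacity headroom lasts {months_worst:.0f}-{months_best:.0f} months")
```

At 1-2 TB/month, a 40 TB cushion lasts somewhere between two and three-plus years, which matches the reviewer's "buy a chunk at a time" approach.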

Later this year, we're going to start refreshing some of the hardware because we're approaching five years on some of the older stuff. As we replace it, we'll do another rollout, but it's not going to be like before. We're just going to put a new server in and put a new filer and connect to the data.

How are customer service and technical support?

Up until recently, I would have rated the technical support a seven out of ten. We had to open a case in Australia for a problem with one of the Nasuni filers, and we haven't received a response for it yet. One of the support people answered a question at about three in the morning, US East Coast time, and said something to the effect that he would send an email with an update. After that, we didn't hear back from him until about 25 hours later, which was a little concerning for me.

Part of the problem seems to be that Nasuni currently is not set up to do 24/7 support. They said that they were going to do that, so that was a little disappointing. Typically when we call in a problem, they jump all over it and they get it fixed in no time.

Which solution did I use previously and why did I switch?

From the perspective of our end-users, the servers function the same way when they're working. We had Windows filers before and now they're Nasuni, so it's basically the same thing to them.

Although we mostly used Microsoft, we did use a backup solution called Double-Take, which is now owned by Carbonite. It did the job but it had a lot of idiosyncrasies that were very difficult to deal with at times. That was the only non-Microsoft thing that we used for the data before Nasuni, and we have since stopped using it.

How was the initial setup?

In the beginning, the setup was kind of complex. We did have help from Nasuni, which was great. They were with us the whole time. We had some growing pains at the beginning, but once we figured out the first three or four sites, we were able to get everything done very quickly and efficiently, with very few problems moving to Nasuni.

When we first started with Nasuni, we had never used it before, and we had never used anything like that. We were used to using Windows servers, and there was a learning curve there to figure out the best way to set up the Nasuni filers. We really had to rely a lot on Nasuni for that. Some of it was trial and error, seeing what worked best as we started rolling it out.

We were replacing a single server that was responsible for doing everything. It was a file server, a domain controller, a print server, and an SCCM distribution point. It was all of these different things and we replaced that with one ESX server, which had multiple guest servers on it, doing all those functions separately. It is much better security-wise and much better operationally.

We started with a very slow implementation. We implemented one site, and then we waited two months before moving to the second site. We tried to start with some of the smaller sites first, with the least amount of data, to get our feet wet. Also, the first site we did was the one that I sit at. The team was all there and it was our site, so we figured we should do our site first. We staggered deployment, so it was not very quick. Then, once we had three or four completed, we did three a week for three months and we were done.

After completing the first site, choosing the next sites had to do with the hardware. We had some old hardware that we repurposed, so we did those sites next. After that, we moved to the sites that necessitated purchasing new hardware. 

From beginning to end, our implementation took a little more than a year. It began in August of 2018 and finished at the end of Q3 in 2019. The time it took was not because of Nasuni. Rather, it revolved around different ordering cycles in our company. Buying the new hardware was what stretched out the deployment time.

What about the implementation team?

I was in charge of the team that did the implementation.

For purchasing and the initial negotiations with Nasuni, we used CDW. We still interact with them when it's time to do renewals, and they are great to deal with. They really help out quite a bit. They were the ones that brought us Nasuni in the first place and suggested that we take a look at it.

We're very happy with CDW. We use them for all of our hardware orders, and a couple of different infrastructure tools. We use them quite extensively.

We had four people responsible for the deployments, with one guy in charge of the group as the lead architect. Once it was deployed, we turned it over to our operations group, which is outsourced to TCS. They have supported it since then, although they come to us if anything is still an issue. We have a couple of guys who still work with Nasuni a little bit, but that's basically how the maintenance is done.

For the most part, there is little maintenance to do. There are situations such as when a controller card goes down, or like the issues we have been having since the upgrade. Otherwise, it's very hands-off and you really don't have to do a lot.

What was our ROI?

We don't plan on calculating a return on investment with this solution. In the grand scheme of things, it's really not very much money for what we're doing. We spend more money on the hardware, for example.

What's my experience with pricing, setup cost, and licensing?

Our agreement is set up such that we pay annually per terabyte, and we buy a chunk of it at a time. Then if we run out of space, we go back to them and buy another chunk.

We thought about an agreement with a three-year plan, where we would get a small increase every year, but we decided not to take that approach at this time. We go through CDW for these agreements and they help us get all of the quotes together.

In addition to what we pay Nasuni, there is the cost of storage in Azure or whatever cloud service you're using. It can get pretty pricey if you have a lot of snapshots, which is something we've found and we're now trying to scale back on. That's the biggest thing that is extra and you may not think of right at the beginning.
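The snapshot-cost surprise described above can be sketched with a back-of-the-envelope model. All of the inputs below (per-GB price, daily change rate, retention windows) are hypothetical assumptions for illustration, not Nasuni or Azure figures; only the 210 TB primary size comes from this review.

```python
# Back-of-the-envelope estimate of how snapshot retention inflates cloud
# storage costs. The per-GB price and change rate are hypothetical; the
# point is that retained snapshot deltas accumulate on top of the primary.

def monthly_storage_cost(primary_tb, daily_change_pct, retention_days,
                         price_per_gb_month=0.02):
    """Primary data plus retained snapshot deltas, priced per GB-month."""
    primary_gb = primary_tb * 1024
    snapshot_gb = primary_gb * (daily_change_pct / 100) * retention_days
    return (primary_gb + snapshot_gb) * price_per_gb_month

# 210 TB primary, 0.5% daily change, 90-day vs. 30-day snapshot retention
print(f"90-day retention: ${monthly_storage_cost(210, 0.5, 90):,.0f}/month")
print(f"30-day retention: ${monthly_storage_cost(210, 0.5, 30):,.0f}/month")
```

Even with these toy numbers, trimming retention from 90 to 30 days cuts the bill noticeably, which is the kind of scaling back the reviewer describes.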

Which other solutions did I evaluate?

We looked at a few different products that year, and we decided that Nasuni was the best way to go. It has really worked well for us.

One of the products that we looked at was Veeam, the backup software, but it would have been used a little bit differently. We also looked at Backup Exec and a tool from Microsoft. We didn't look at anything that was exactly like Nasuni. We looked at these other things that would create backups of the primary data, which would have stayed at the site. Nasuni was a completely different way of looking at it.

The difference with Nasuni is that rather than having a backup in the cloud, the primary copy of the data is in the cloud; for us, it's stored in Azure, whereas with the other tools, the primary copy stays at the site. With a major problem, such as this issue with the controller card, those other solutions would leave you down and out at least until the controller card could be replaced.

Then, once you're back up, you have to copy all of the data back, which would probably take at least a week. Some of these sites have very poor connections. For example, we have a site in the Amazon jungle in Brazil that is notorious for being very slow, yet we've used Nasuni there and it works fine. Some of these other solutions probably wouldn't have worked; in fact, we probably would have had to buy a tape drive and back up the servers that way.

What other advice do I have?

We have a hosted data center where we don't pay for individual items, such as servers. Instead, we pay for a service. The service might include a server or storage, and Nasuni has not eliminated that because we still need our physical servers at the locations. We debated on whether or not to put the filer in Azure for each site, but we decided that it was better to have something local at this point.

For our company, we were a little ahead of the curve. We didn't have internet connections directly from each site, and they all routed through a central internet connection. Because of that, it was difficult to eliminate any hardware at the site. We needed something there physically. But, having the virtual appliance for Nasuni really helps out quite a bit, because then we only have to have one piece of hardware and we can put all of the other servers that we need for infrastructure on the same ESX server. We have five or six different servers that are doing different functions that at one point, would maybe have been three or four different physical servers. Now we've reduced it to one.

We use Microsoft SCOM as a monitoring tool to keep track of all of the filers and make sure that they are running. 

We don't use the Nasuni dashboard because we don't have to; everything is working as it should. We do have a management console set up and we go into it occasionally, but it's not something our support people use regularly.

If I had a colleague at another company with concerns about migration to the cloud and Nasuni's performance, I would talk about the fact that the DR capabilities are so different from anything else that I've seen. The performance has actually not been bad. You would think there would be an issue with the cloud stores, but we set up a local cache on each filer that stores up to a terabyte or two of regularly used data. That covers probably 80% of what people use, which means they're accessing a local copy that's synced with what's in the cloud, so they rarely have to go to the cloud to get it. When they do, it's pretty quick. It may not be as fast as a local copy, but it's not bad.
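The effect of that local cache on read performance can be sketched as a weighted average. The latency numbers below are assumptions for illustration; only the roughly 80% cache hit rate comes from the paragraph above.

```python
# Effective read latency for the cache setup described: roughly 80% of
# reads are served from the local filer cache, the rest from the cloud.
# The per-read latency figures are hypothetical; only the 80% hit rate
# comes from the review.

def effective_latency_ms(hit_rate, local_ms, cloud_ms):
    """Weighted average of local-cache hits and cloud fetches."""
    return hit_rate * local_ms + (1 - hit_rate) * cloud_ms

local, cloud = 2.0, 60.0   # assumed per-read latencies in milliseconds
print(f"Average read latency: {effective_latency_ms(0.80, local, cloud):.1f} ms")
```

With these assumed latencies, the average read sits far closer to local-disk speed than to cloud round-trip time, which matches the reviewer's "not as fast as local, but not bad" experience.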

My advice for anybody who is considering Nasuni is that they definitely want to look at all of the options, but that Nasuni does have the best setup at this point. It offers the ability to recover things and provides data security. Especially with ransomware and all of these other new things that are causing lots of problems out there, it really helps mitigate some of that.

The biggest thing that I have learned from using Nasuni is that you shouldn't be afraid of the cloud.

I would rate this solution an eight out of ten.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
IT Director at Kingston Technology
Real User
Top 20
Easy-to-use interface, good telemetry data, and the support is good
Pros and Cons
  • "If we lost our data center and had to recover it, Zerto would save us a great deal of time. In our testing, we have found that recovering the entire data center would be completed within a day."
  • "The onset of configuring an environment in the cloud is difficult and could be easier to do."

What is our primary use case?

Originally, I was looking for a solution that allowed us to replicate our critical workloads to a cloud target and then pay a monthly fee to have it stored there. Then, if some kind of disaster happened, we would have the ability to instantiate or spin up those workloads in a cloud environment and provide access to our applications. That was the ask of the platform.

We are a manufacturing company, so our environment wouldn't be drastically affected by a webpage outage. However, depending on the applications that are affected, being a $15 billion company, there could be a significant impact.

How has it helped my organization?

Zerto is very good in terms of providing continuous data protection. Now bear in mind the ability to do this in the cloud is newer to them than what they've always done traditionally on-premises. Along the way, there are some challenges when working with a cloud provider and having the connectivity methodology to replicate the VMs from on-premises to Azure, through the Zerto interface, and make sure that there's a healthy copy of Zerto in the cloud. For that mechanism, we spent several months working with Zerto, getting it dialed in to support what we needed to do. Otherwise, all of the other stuff that they've been known to do has worked flawlessly.

The interface is easy to use, although configuring the environment, and the infrastructure around it, wasn't so clear. The interface and its dashboard are very good and very nice to use. The interface is very telling in that it provides a lot of the telemetry that you need to validate that your backup is healthy, that it's current, and that it's recoverable.

A good example of how Zerto has improved the way our organization functions is that it has allowed us to decommission repurposed hardware that we were using to do the same type of DR activity. In the past, we would take old hardware and repurpose it as DR hardware, but along with that you have to have the administration expertise, and you have to worry about third-party support on that old hardware. It inevitably ends up breaking down or having problems, and by taking that out of the equation, with all of the DR going to the cloud, all that responsibility is now that of the cloud provider. It frees up our staff who had to babysit the old hardware. I think that, in and of itself, is enough reason to use Zerto.

We've determined that the ability to spin up workloads in Azure is the fastest that we've ever seen because it sits as a pre-converted VM. The speed to convert it and the speed to bring it back on-premises is compelling. It's faster than the other ways that we've tried or used in the past. On top of that, they employ their own compression and deduplication in terms of replicating to a target. As such, the whole capability is much more efficient than doing it the way we were doing it with Rubrik.

If we lost our data center and had to recover it, Zerto would save us a great deal of time. In our testing, we have found that recovering the entire data center would be completed within a day. In the past, it was going to take us close to a month. 

Using Zerto does not mean that we can reduce the number of people involved in a failover. You still need expertise with VMware, Zerto, and Azure. It may not need to be as in-depth, and it's not as complicated as some other platforms might be; the person may not have to be such an expert because the platform is intuitive enough for somebody at that level to administer it. Ultimately, you still need a human body to do it.

What is most valuable?

The most valuable feature is the speed at which it can instantiate VMs. When I was doing the same thing with Rubrik, if I had 30 VMs on Azure and I wanted to bring them up live, it would take perhaps 24 hours. Having 1,000 VMs to do, it would be very time-consuming. With Zerto, I can bring up almost 1,000 VMs in an hour. This is what I really liked about Zerto, although it can do a lot of other things, as well.
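The throughput figures quoted above work out to a very large difference. This small sketch just does the arithmetic, treating the reported numbers (30 VMs in 24 hours with the previous tool versus roughly 1,000 VMs in an hour with Zerto) as given.

```python
# Recovery-throughput comparison using the figures quoted in the review:
# ~30 VMs in 24 hours with the previous tool vs. ~1,000 VMs in an hour.

rubrik_rate = 30 / 24     # VMs recovered per hour with the previous tool
zerto_rate = 1000 / 1     # VMs recovered per hour reported with Zerto

vms = 1000
print(f"Previous tool: ~{vms / rubrik_rate:.0f} hours for {vms} VMs")
print(f"Zerto:         ~{vms / zerto_rate:.0f} hour(s) for {vms} VMs")
print(f"Speedup:       ~{zerto_rate / rubrik_rate:.0f}x")
```

At the quoted rates, recovering 1,000 VMs the old way would take roughly 800 hours, which is why the reviewer calls the speed the most valuable feature.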

The deduplication capabilities are good.

What needs improvement?

The onset of configuring an environment in the cloud is difficult and could be easier to do. When it's on-premises, it's a little bit easier because it's more of a controlled environment. It's a Windows operating system on a server and no matter what server you have, it's the same.

However, when you are putting it on AWS, that's a different procedure than installing it on Azure, which is a different procedure than installing it on GCP, if they even support it. I'm not sure that they do. In any event, they could do a better job in how to build that out, in terms of getting the product configured in a cloud environment.

There are some other things they can employ, in terms of the setup of the environment, that would make things a little less challenging. For example, you may need to have an Azure expert on the phone because you require some middleware expertise. This is something that Zerto knew about but maybe could have done a better job of implementing it in their product.

Their long-term retention product has room for improvement, although that is something that they are currently working on.

For how long have I used the solution?

We have been with Zerto for approximately 10 years. We were probably one of the first adopters on the platform.

What do I think about the stability of the solution?

With respect to stability, on-premises, it's been so many years of having it there that it's baked in. It is stable, for sure. The cloud-based deployment is getting there. It's strong enough in terms of the uptime or resilience that we feel confident about getting behind a solution like this.

It is important to consider that any issues with instability could be related to other dependencies, like Azure or network connectivity or our on-premises environment. When you have a hybrid environment between on-premises and the cloud, it's never going to be as stable as a purely on-premises or purely cloud-based deployment. There are always going to be complications.

What do I think about the scalability of the solution?

This is a scalable product. We tested scalability starting with 10 VMs and went right up to 100, and there was no difference. We are an SMB, on the larger side, so I wouldn't know what would happen if you tried to run it with 50,000 VMs. However, in an SMB-sized environment, it can definitely handle or scale to what we do, without any problems.

This is a global solution for us and there's a potential that usage will increase. Right now, it is protecting all of our criticals but not everything. What I mean is that some VMs in a DR scenario would not need to be spun up right away. Some could be done a month later and those particular ones would just fall into our normal recovery process from our backup. 

The backup side is what we're waiting on, or relying on, in terms of the next ask from Zerto. Barring that, we could literally use any other backup solution along with Zerto. I'm perfectly fine doing that but I think it would be nice to use Zerto's backup solution in conjunction with their DR, just because of the integration between the two.  

How are customer service and technical support?

In general, the support is pretty good. They were just acquired by HPE, and I'm not sure whether that's going to make things better or worse. I've had experiences on both sides, but I think overall their support has been very good.

Which solution did I use previously and why did I switch?

Zerto has not yet replaced any of our legacy backup products but it has replaced our DR solution. Prior to Zerto, we were using Rubrik as our DR solution. We switched to Zerto and it was a much better solution to accommodate what we wanted to do. The reason we switched had to do with support for VMware.

When we were using Rubrik, one of the problems we had was that if I instantiated the VM on Azure, it's running as an Azure VM, not as a VMware VM. This meant that if I needed to bring it back on-premises from Azure, I needed to convert it back to a VMware VM. It was running as a Hyper-V VM in Azure, but I needed an ESX version or a VMware version. At the time, Rubrik did not have a method to convert it back, so this left us stuck.

There are not a lot of other DR solutions like this on the market. There is Site Recovery Manager from VMware, and there is Zerto. After so many years of using it, I find that it is a very mature platform and I consider it easy to use. 

How was the initial setup?

The initial setup is complex. It may be partly due to our understanding of Azure, which I would not put at an expert level. I would rate our skill at Azure between a neophyte and the mid-range in terms of understanding the connectivity points with it. In addition to that, we had to deal with a cloud service provider.

Essentially, we had to change things around, and I would not say that it was easy. It was difficult and definitely needed a third party to help get the product stood up.

Our deployment was completed within a couple of months of ending the PoC. Our PoC lasted between 30 and 60 days, over which time we were able to validate it. It took another 60 days to get it up and running after we got the green light to purchase it.

We're a multisite organization, so the implementation strategy started with getting it baked in at our corporate location and validating it. Then we built out an Azure footprint globally and extended the product into those environments.

What about the implementation team?

We used a company called Insight to assist us with implementation. We had a previous history with one of their engineers, from previous work that we had done. We felt that he would be a good person to walk us through the implementation of Zerto. That, coupled with the fact that Zerto engineers were working with us as well. We had a mix of people supporting the project.

We have an infrastructure architect who heads the project. He validates the environment, builds it out with the business partners and the vendor, helps figure out how it should be operationalized, and configures it. Then it gets passed to our data protection group, whose admins administer the platform; beyond that, it largely maintains itself.

Once the deployment is complete, maintaining the solution is a half-person effort. There are admins who have a background in data protection, backup products, as well as virtualization and understanding of VMware. A typical infrastructure administrator is capable of administering the platform.

What was our ROI?

Zerto has very much saved us money by enabling us to do DR in the cloud rather than in our physical data center. To do what we want to do with our own hardware, keeping it standing by and at the ready with support and maintenance, would cost far more than what we're doing now.

By the way, we are doing what is considered a poor man's DR. I'm not saying that I'm poor, but that's the term I place on it because most people have a replica of their hardware in another environment. One needs to pay for those hardware costs, even though it's not doing anything other than sitting there, just in case. Using Zerto, I don't have to pay for that hardware in the cloud.

All I pay for is storage, and that's much less than what the hardware would cost. Running a full replica environment with everything on it, just sitting there, would cost on the order of ten to one.

That ratio holds because the storage it replicates to is not the fastest, and there are no VMs and no compute or memory associated with the replication, so all I'm paying for is the storage.

In one case, I'm paying only for storage; in the other, I would have to pay for storage, hardware, compute, and connectivity. Storage itself is inexpensive, but once you add compute, maintenance, networking connectivity, and the soft costs and man-hours to keep that environment ready, ten to one is probably a fair assessment.
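The ten-to-one comparison can be sketched with hypothetical monthly figures. None of the dollar amounts below come from the review; they simply illustrate the storage-only versus warm-replica cost structure the reviewer describes.

```python
# Sketch of the "poor man's DR" cost comparison: replicating to cloud
# storage only, versus keeping a warm hardware replica at the ready.
# All dollar figures are hypothetical illustrations of the ~10:1 ratio.

storage_only = 2_000                    # $/month: replication storage target
warm_replica = {
    "hardware_amortized": 10_000,       # $/month: idle replica hardware
    "storage": 2_000,
    "compute_and_memory": 4_000,
    "network_connectivity": 1_500,
    "support_and_man_hours": 2_500,     # soft costs to keep it ready
}

total_replica = sum(warm_replica.values())
print(f"Storage-only DR: ${storage_only:,}/month")
print(f"Warm replica:    ${total_replica:,}/month "
      f"(~{total_replica / storage_only:.0f}:1)")
```

The exact numbers matter less than the structure: every line item besides storage disappears when the replica only exists as data in the cloud.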

When it comes to DR, there is no real return on investment. The return comes in the form of risk mitigation. If the question is whether I think that I spent the least amount of money to provide a resilient environment then I would answer yes. Without question.

What's my experience with pricing, setup cost, and licensing?

If you are an IT person who thinks DR is too expensive, then the cloud option from Zerto is good, because almost anyone can afford to get one or two critical workloads protected. The real value of the product is that if you had no DR strategy because you thought you couldn't afford one, you can at least have some form of DR, with your most critical apps up and running to support the business.

A lot of IT people roll the dice and they take chances that that day will never come. This way, they can save money. My advice is to look at the competition out there, such as VMware Site Recovery, and like anything else, try to leverage the best price you can.

There are no costs in addition to the standard licensing fees for the product itself. However, for the environment that it resides in, there certainly are. With Azure, for example, there are several additional costs including connectivity, storage, and the VPN. These ancillary costs are not trivial and you definitely have to spend some time understanding what they are and try to control them.

Which other solutions did I evaluate?

I looked at several solutions during the evaluation period. When Zerto came to the table, it was very good at doing backup. The other products could arguably instantiate VMs and do DR, but they couldn't do everything that Zerto was doing. Specifically, Zerto handled "bubbling" the environment, isolating it for testing, to ensure that there is no cross-contamination. That added feature, on top of the fact that it could do it so much faster than Rubrik, was the compelling reason we looked there.

Along the way, I looked at Cohesity and Veeam and a few other vendors, but they didn't have an elegant way of doing what I wanted to do, which is sending copies to an inexpensive cloud storage target and then having a mechanism to instantiate them. The mechanism wasn't as elegant with some of those vendors.

What other advice do I have?

We initially started with the on-premises version, where we replicated our global DR from the US to Taiwan. Zerto recently came out with a cloud-based, enterprise variant that gives you the ability to use it on-premises or in the cloud. With this, we've migrated our licenses to a cloud-based strategy for disaster recovery.

We are in the middle of evaluating their long-term retention, or long-term backup solution. It's very new to us. In the same way that Veeam, and Rubrik, and others were trying to get into Zerto's business, Zerto's now trying to get into their business as far as the backup solution.

I think it's much easier to do backup than what Zerto does for DR, so I don't think it will be very difficult for them to deliver table-stakes backup: file retention for multiple targets, and that kind of thing.

Right now, I would say they're probably at the 70% mark in terms of what I consider a success, but each version they release gets closer to being a certifiably good backup solution.

We have not had to recover our data after a ransomware attack but if our whole environment was encrypted, we have several ways to recover it. Zerto is the last resort for us but if we ever have to do that, I know that we can recover our environment in hours instead of days.

If that day ever occurs, which would be a very bad day if we had to recover at that level, then Zerto will be very helpful. We've done recoveries in the past where the on-premises restore was not healthy, and we've been able to recover very fast. It isn't the one-off restores that are compelling in terms of recovery, because most vendors can provide that; it's the sheer volume of machines you can restore at once that's the compelling factor for Zerto.

My advice for anybody who is implementing Zerto is to get a good cloud architect. Spend the time to build out your design, including your IP scheme, to support the feature sets and capabilities of the product. That is where the work needs to be done, more so than the Zerto products themselves. Zerto is pretty simple to get up and running but it's all the work ahead in the deployment or delivery that needs to be done. A good architect or cloud person will help with this.

The biggest lesson that I have learned from using Zerto is that it requires good planning but at the end of it, you'll have a reasonable disaster recovery solution. If you don't currently have one then this is certainly something that you should consider.

I would rate Zerto a ten out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
System Engineer at Netwitz Sdn Bhd
Real User
Top 20
Reliable and fast, easy to use, good alerting, and backups can be mounted for use as drives
Pros and Cons
  • "The compression and deduplication features have helped to save on storage costs."
  • "It is quite surprising to me that the configuration cannot be backed up automatically, and I think that Rapid Recovery should have an option for scheduled configuration backup."

What is our primary use case?

Our primary use is backup and restore, which we use to protect our customers' data centers.

How has it helped my organization?

From an operations standpoint, the email notifications have helped me a lot.

The mount function has helped us because it is a straightforward process that we can explain to a customer over the phone. When they need to restore a file or a folder, there are only a few steps involved.

Many times, we have been able to recover data with little disruption to our customer's work environment. In a few cases, it helped me to recover all of the data for a customer, which was a big help.

In cases where we have had to restore a failed server, the process has been quite fast. The timeframe depends on the size of the data but it is faster than other products we have worked with.

There are multiple choices for restoring data. For example, first, you can create a virtual standby, and then restore data while it is being used. Alternatively, you can bring up another server and then restore data to it. I have not yet used virtual standby in production.

What is most valuable?

The most valuable feature is the ability to mount a backup as a drive, where you can access the data.

Quest Rapid Recovery has good speed and reliability.

Rapid Recovery has a feature that will archive backups.

This product is straightforward and easy to use.

I'm quite impressed with the usability, and I am comparing this with other backup and data protection software that I have used, such as NetVault. Rapid Recovery is easier to use because of its user-friendly interface.

An example of another product that I use currently, that is different from Quest products, is Veeam. I prefer to use Rapid Recovery because the number of steps required in the process is minimal. With fewer steps required to complete my tasks, compared to other products, Rapid Recovery is the easiest one to use.

This product has absolutely reduced the admin time involved in my backup and recovery operations. The amount of time saved depends on each customer's environment. At the most basic level, before using this product, I had to log in every day to check on the stability of the backups and sometimes act on what I found. For example, if a backup had failed, I had to start a new one.

With Rapid Recovery, after creating alerts, I no longer need to check on each client. I used to have to go to each client, one by one, and check several pages to see whether a backup had failed. Now, I simply wait for an email notification to inform me of the status. Basically, half of my day is saved because of the email notification and alerts. When I don't receive an alert then I don't even need to check.
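The alert-driven workflow described above boils down to surfacing only failures, so a quiet inbox means every client backed up. A minimal sketch in Python (the client names and the status source are hypothetical, not Rapid Recovery's actual API):

```python
def failed_jobs(job_results):
    """Return the names of backup jobs that failed, so the daily check
    reduces to: no alerts means every client backed up successfully."""
    return [name for name, ok in job_results.items() if not ok]

# Hypothetical nightly results collected from each client's backup status.
results = {"client-a": True, "client-b": False, "client-c": True}
alerts = failed_jobs(results)
if alerts:
    print("ALERT:", ", ".join(alerts))  # prints "ALERT: client-b"
```

Only the failing clients appear in the notification, which mirrors the "no email means nothing to check" routine.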

The compression and deduplication features have helped to save on storage costs. We have quite a number of clients, and there is a lot of data. I am quite impressed because we started with an 18 TB capacity license, and we managed to back up almost 126 clients. This was possible because of the deduplication and compression features. There are other solutions that only support compression, and they require that customers allocate more storage. For example, my customers that are using NetVault require more capacity.
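Deduplication of this kind can be sketched in a few lines of Python. This is a generic illustration of block-level dedup, assuming fixed-size chunks and SHA-256 hashes; it is not Rapid Recovery's actual implementation:

```python
import hashlib

def dedup_store(data: bytes, store: dict, chunk_size: int = 4096) -> list:
    """Split data into fixed-size chunks, keep each unique chunk once,
    and return the list of chunk hashes that reconstructs the data."""
    refs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # store only the first copy seen
        refs.append(digest)
    return refs

store = {}
backup1 = dedup_store(b"A" * 8192 + b"B" * 4096, store)
backup2 = dedup_store(b"A" * 8192 + b"C" * 4096, store)  # shares the "A" chunks
# Two 12 KB backups, but only 3 unique 4 KB chunks are actually stored.
print(len(store))  # 3
```

Because overlapping chunks are stored once, many clients can share one repository, which is how a modest capacity license stretches across a large client count.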

Incremental Backup is another feature that saves storage space because this type of backup only records the changes since the last one. If there are few changes then it does not affect the storage very much.
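The incremental principle described here, recording only what changed since the last run, can be illustrated with a small hash-comparison sketch (a generic illustration, not the product's mechanism):

```python
import hashlib

def incremental_backup(files: dict, last_state: dict) -> dict:
    """Return only the files whose content changed since last_state.
    files maps path -> bytes; last_state maps path -> sha256 hex digest."""
    changed = {}
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        if last_state.get(path) != digest:
            changed[path] = data        # record the changed file
            last_state[path] = digest   # remember its new fingerprint
    return changed

state = {}
full = incremental_backup({"a.txt": b"v1", "b.txt": b"v1"}, state)   # first run: everything
delta = incremental_backup({"a.txt": b"v2", "b.txt": b"v1"}, state)  # only a.txt changed
print(len(full), len(delta))  # 2 1
```

The first run is effectively a full backup; every later run stores only the delta, which is why low-churn environments consume so little extra space.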

I really like the replication features. Rapid Recovery has different mounting options and you can do a fast restoration. For example, you can mount a virtual hard disk and it doesn't impact your environment. My customers have been quite impressed with the speed of replication.

What needs improvement?

One of the features that I like is the Rapid Recovery Core portal. Basically, you can access the customer's site using a website URL. However, what I notice is that the information sometimes differs from what is in the Rapid Recovery Core. I think that more should be done to ensure that this is synchronized.

Backing up the configuration has to be done manually, which is something that should be improved. It is quite surprising to me that the configuration cannot be backed up automatically, and I think that Rapid Recovery should have an option for scheduled configuration backup.

I had an experience with one customer where the backup storage was corrupted, and as a result, the repository was corrupt. In that situation, with the repo gone, we were unable to retrieve the backup. To handle situations like this, it would be great if Rapid Recovery offered a second tier of backup. What I am doing now is archiving the repository, which gives me a secondary backup for my clients.

For how long have I used the solution?

I have been working with Quest Rapid Recovery for almost three years, since 2019. I began working with it as soon as I joined my current company.

What do I think about the stability of the solution?

I would rate the stability at 95%.

What do I think about the scalability of the solution?

This product is quite scalable. I'm quite impressed with the way Rapid Recovery handles scale and the ability to expand it. As our customers migrate from NetVault to Rapid Recovery, we increase our own total storage space, and it's easy to do.

In the first two years, we subscribed to 13 TB of data. Now in our third year, it has been increased to 18 TB. Because the product is profitable and working well, the company is planning to increase usage. Eventually, all of the servers will be put into Rapid Recovery and additional licensing will be purchased.

In our environment, there are two administrators for this solution. One handles the customers and the other is internal. Between them, we have full visibility.

How are customer service and technical support?

Technical support is quite fast. My interactions with them are quick because I have memorized the steps, which start with sending them the logs. Once I send the log to support, they can begin.

Overall, they are quite fast and quite helpful.

Which solution did I use previously and why did I switch?

I use a variety of Quest products, including NetVault. Based on my observations, when a customer allocates 50 TB with NetVault, you can do the same with less storage using Rapid Recovery. It only requires about 20 TB to restore 120 clients, which results in a lower overall storage cost.

Many of my customers began with NetVault, and we proposed Rapid Recovery to them. In general, they have been quite happy with the switch. They like the way it connects with the core and that they do not have to install agents. One of the problems with installing an agent is that you often have to reboot that machine, and they no longer have to do this.

Most of the servers are migrating to Rapid Recovery because they trust it. From a maintenance perspective, the majority of the issues that I had found previously were related to agents. After migrating, these problems are no longer there.

Performing maintenance on Rapid Recovery involves more steps than it does with NetVault, although not very many. I just want to ensure that everything with Rapid Recovery is stable.

I also use products from other vendors including Veeam.

How was the initial setup?

Both installing and upgrading are simple and straightforward to do. It is not a complex process to set it up. The complete deployment takes less than 15 minutes.

Based on the customers that I have now, my implementation strategy focuses on VMware. VMware connects to Rapid Recovery using vCenter. It is set up so that customers retain their data for one month.

Because Rapid Recovery doesn't have a secondary backup, I also have the archiving solution as part of this. 
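The one-month retention plus archiving arrangement can be expressed as a simple policy function. This is a hypothetical sketch (snapshot names and dates are invented), not the product's configuration syntax:

```python
from datetime import date, timedelta

def apply_retention(snapshots, today, keep_days=30):
    """Split snapshots (date -> id) into those kept in the repository
    and those moved to the archive, per a 30-day retention policy."""
    cutoff = today - timedelta(days=keep_days)
    keep = {d: s for d, s in snapshots.items() if d >= cutoff}
    archive = {d: s for d, s in snapshots.items() if d < cutoff}
    return keep, archive

snaps = {date(2022, 1, 1): "snap-01",
         date(2022, 1, 20): "snap-20",
         date(2022, 2, 5): "snap-36"}
keep, archive = apply_retention(snaps, today=date(2022, 2, 10))
print(sorted(keep.values()), sorted(archive.values()))
```

Archiving rather than deleting expired snapshots is what provides the secondary copy when the live repository is lost.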

What about the implementation team?

We have an in-house team for deployment.

Minimal staff is needed for deployment and maintenance.

What's my experience with pricing, setup cost, and licensing?

Licensing fees are based on the amount of data that you want to store, which is related to how many customers you want to cover. I recommend that before purchasing a license, you identify how many clients will be protected. You then need to estimate the total amount of storage based on each client's size.
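That sizing exercise can be captured in a short helper. The 20% growth headroom is an assumed figure, and deduplication savings are deliberately not modeled, so treat this as a conservative estimate:

```python
import math

def estimate_license_tb(client_sizes_gb, headroom=0.2):
    """Sum per-client data sizes, add headroom for growth, and return
    the capacity to license, rounded up to whole TB."""
    total_gb = sum(client_sizes_gb) * (1 + headroom)
    return math.ceil(total_gb / 1024)

# Hypothetical fleet: 120 clients averaging 100 GB each.
print(estimate_license_tb([100] * 120))  # 15 (12 TB of data + 20% headroom)
```

In practice, deduplication and compression will shrink the real footprint, so a figure like this is an upper bound rather than what the repository will actually consume.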

Which other solutions did I evaluate?

Evaluation of other options is the responsibility of the customer. My company handles multiple data products but this is the only option we offer for data recovery.

What other advice do I have?

The Synthetic Incremental Backup feature is a new one that I haven't set up yet. Instead, I use the normal incremental backup.

When replication first starts, it will be slow. The reason is that you have to begin with a base. Once you have the base, the replication is very fast.
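The cost of the initial base versus the later increments is easy to quantify with a back-of-the-envelope calculation (the link speed and data sizes below are hypothetical):

```python
def transfer_hours(size_gb, link_mbps):
    """Hours to push a given amount of data over a WAN link
    (ignoring protocol overhead)."""
    return size_gb * 8 * 1024 / link_mbps / 3600

# Hypothetical numbers: a 500 GB base image vs a 5 GB daily delta on a 100 Mbps link.
print(round(transfer_hours(500, 100), 1))  # 11.4 hours for the initial base
print(round(transfer_hours(5, 100), 2))    # 0.11 hours (~7 minutes) per daily delta
```

The asymmetry is the whole point: the base is paid for once, and every replication cycle after that moves only a small delta.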

It is important to my clients that features such as deduplication and replication are included at no extra charge. They understand these features, as well as compression, and understand the costs involved. As they switch from other products, they know that implementing Rapid Recovery and adding storage will not cost very much.

The biggest lesson that I have learned from using this product is to not trust the storage hardware. Similarly, don't trust the connection between your customer and the backup storage site. When corruption occurs then it is quite troublesome and requires a lot of troubleshooting. Moreover, some data may be lost permanently. To deal with this, we have started creating multiple repositories and back up accordingly. This gives us insurance that data is not lost in the event of a disaster.

My advice for anybody who is looking into this product is to first know what they have in their environment. For example, if they are using a tape backup system then this product is not applicable. However, if they have a supported storage system then this is a good choice. Similarly, if replication is being used at branch offices then this product is very good because of the speed. I really like how the replication capability works.

I would rate this solution a nine out of ten.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Project Manager at TMI DUBAI
Real User
Its ease of use and price set it apart from other solutions
Pros and Cons
  • "Its ease of use and price are most valuable. It is simple and straightforward. Someone who has never used any backup software will easily understand it from the first installation. It is that simple. Price-wise, it is much cheaper than its counterparts."
  • "It would be a great improvement if they can give a console to control the systems. All other vendors let you simply log in to the cloud console and control everything from there, but for Vembu, whether you choose Vembu Cloud Backup or Vembu Disaster Recovery, you still need to install the Vembu software on your on-premise system and configure it from there. It would be great if I can get a cloud console to manage the systems."

What is our primary use case?

We are a managed service provider (MSP). We have also started to sell it recently. We have been using this solution at our company, and after testing the product for close to six months, we realized this is a good option moving forward.

How has it helped my organization?

We are a managed service provider. For us, this solution is even more beneficial because we can control the licenses, renewals, and other things for customers from a managed service provider's panel, and we get an option to upsell and cross-sell to clients.

What is most valuable?

Its ease of use and price are most valuable. It is simple and straightforward. Someone who has never used any backup software will easily understand it from the first installation. It is that simple. Price-wise, it is much cheaper than its counterparts. 

It also puts very little overhead on IT in terms of the product, service, and outcomes. Another good feature is that you don't really have to install any agent on the server side, especially when you are taking Hyper-V backups. We are using Microsoft Hyper-V, and we are taking backups. No agents or other components need to be installed on your machines.

I have done a couple of restores on a trial basis to check the integrity, and I did not find any issue in terms of the reliability of the restores. It was smooth.

What needs improvement?

It would be a great improvement if they could provide a console to control the systems. All other vendors let you simply log in to the cloud console and control everything from there, but for Vembu, whether you choose Vembu Cloud Backup or Vembu Disaster Recovery, you still need to install the Vembu software on your on-premise system and configure it from there. It would be great if I could get a cloud console to manage the systems. 

They could also give an option where, if you don't want to install an agent, you use your own server and manage it yourself, but if you want to manage it from the cloud console, you install the agent. It should be my choice. I should have control when I am sitting at home. I should also have control over the cloud so that I can monitor everything and do whatever I like. If my organization's policy does not allow me to do that, then obviously I won't do it, but Vembu should provide such an option.

For how long have I used the solution?

I have been using this solution for close to six months.

What do I think about the stability of the solution?

It looks good to me. So far, I haven't found any glitches. It is always there, so it is stable. You run it through the browser, which makes it simple.

What do I think about the scalability of the solution?

It is scalable. We started with a 30-day trial, and after that, we started using its free version. Today, we upgraded from the free version to the enterprise version. We are also working on putting a 10 TB backup on the Vembu cloud, as well as going ahead with almost eight servers for offsite disaster recovery. 

To upgrade from the free version to the enterprise version, I simply had to synchronize the license, and everything was set. You can scale it very easily. You just need to activate the license under your account, and then you can log in to the Vembu console and just synchronize the license, and you're done. If you want to go from an on-premise backup to the cloud, you should have a cloud license. You can then synchronize and configure it. That's it. 

In terms of the size of the environment, one of the implementations is done for a government organization, and there are around 20 or 25 users with close to 5 terabytes of data and two virtual machines. We don't have plans to increase its usage in the same organization because this is a small subsidiary of a big government office. With the same client, there is nothing more we can do. They have a limited number of users, but we are working on implementing it for other clients.

How are customer service and technical support?

I have used their support, and I had a very good experience. We were basically installing a demo for another client, and they were using 2008. We were getting a particular error while doing the installation, and for that, they needed to reboot the server, but you just cannot reboot the server in a live environment. You need to fix up a time for that. For example, if we have to reboot your servers, we need to schedule it with you, and you will schedule a time for it with your management.

Vembu's support is available 24/7. They said that whenever we were ready, we should just send them an email, and they would do a remote session with us, which is precisely what happened. When we were ready, I sent them notice that we would be ready in about 15 or 20 minutes, and their support guy connected and helped us.

Which solution did I use previously and why did I switch?

Its ease of use and lower price set it apart from other solutions. I have used many solutions, such as Acronis, Veeam, Symantec, Veritas, etc., and all of them are a bit complicated. I found Vembu to be the simplest one. In terms of features, it is similar to the others. It has encryption and retention features and the multiple backup options that every backup software provides.

Currently, we are also using Acronis, and slowly, we will be migrating from Acronis to Vembu. It is cheaper than Acronis. Of course, Acronis gives other benefits such as patch updates, cybersecurity, ransomware protection, and so on, but people already have their firewall, endpoint protection, and antivirus. They don't really need to invest again in something that they don't need. The only thing they particularly need is a backup solution that is encrypted, so there is no point in protecting them from all these things when they are already protected. If you go for Acronis Cyber Cloud, a client is not going to stop using their firewall or endpoint protection, so there is no point in doubling up on that. It is a good addition for those people who are very specific and know what they want; if you don't know what you want, you can end up on a shopping spree.

With Acronis, all you have to do is to install the agent, and then you can control everything from the cloud. Wherever you are, you simply log in to the console, and you have your servers over there, and you can do whatever you feel like. With Vembu, you have to install the Vembu BDR software onto the server, and from there, you can basically dump the DR or a backup onto the cloud.

How was the initial setup?

It was very straightforward. You simply install the software, plug in the storage or wherever you want to put the backups, and create a profile. That's it. These were the three steps, and of course, the fourth one was to activate the free software. You start with the full-fledged trial version, and after 30 days, you can convert it into the free version, which is available online. It is easy. 

Its initial installation took less than an hour. This includes downloading and setting it up.

What's my experience with pricing, setup cost, and licensing?

Price-wise, it is much cheaper than its counterparts. I like its pricing, and its price is okay. The less they charge, the more profit we can make, but we are happy with its price.

It is very affordable. We were working with a client, and they were looking for backup software and had a very tight budget. When I told them that Vembu is only going to cost around $400 to $500, they were shocked. They didn't believe me, so I showed them the website so that they can check the price themselves. Of course, if they agree to that price, we get a 15% rebate as a managed service provider.

You choose the type of license you want. There are two types of licenses. One is a subscription license, and the other one is a perpetual license. If you go for a perpetual license, next year, if you want, you can renew the support. It is up to our clients whether they want to renew the support or not. They have an option. They also have an option to go for a subscription.

What other advice do I have?

Every IT scenario differs from others. It is a good product, and just give it a shot. If it fits your organization, you will save a lot.

I have been in IT for over 25 years, and I had never heard about this software. I came across it through a consultant who was also working for a government organization. They asked us to install the free Vembu backup software, and I wondered what this solution was. I checked their website, downloaded the software, and installed it for the first time. I was amazed that there is no marketing for it. I get so many marketing emails and other things, but I never got any email related to Vembu. I also didn't come across it while doing research on the internet.

We have been using this solution for only six months. There are many features that we haven't used, but whatever we have rolled out and tested was okay. We haven't yet used Vembu to back up Microsoft 365, Google Workspace, or AWS, and we also haven't used Vembu's Download VM feature to help in migrating physical machines to a VM environment. Similarly, we haven't used its Instant Boot VM feature for instant access to VMs or physical machines after a crash. 

It provides multiple options to recover data during hardware failures or accidental deletion of files, but I haven't tested this option. Having such a feature is a good addition because if some resources are not there, you can restore your data to different ones. We will definitely be using Vembu's data integrity check feature after the enterprise installation.

I would rate Vembu BDR Suite an eight out of 10.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Reseller