reviewer1380825 - PeerSpot reviewer
Lead Engineer Architecture & Engineering Services at a tech services company with 10,001+ employees
Real User
Jul 14, 2020
Provides a single point solution that is easy to maintain and provision
Pros and Cons
  • "If you have a larger amount of data than normal in cloud, it is easy to provision and maintain. Waiting for the delivery of the controller, the configuration of enclosures, etc., all this stuff is eliminated compared to using on-premise."
  • "I would like NetApp to come up with an easier setup for the solution."

What is our primary use case?

The main use case of ONTAP is for users working with SharePoint. From there, they need to access data for specific applications as well as individual shared folders.

It is being used for application purposes as well as for individual user purposes.

We are using the latest version.

How has it helped my organization?

This isn't an isolated solution. We need NetApp to support fast access over file protocols. We found the solution on Azure to be just as helpful as the on-premises version.

The solution provides us unified storage, no matter what kind of data we have. A normal storage account in the public cloud may not offer identity-level control. Using NetApp, however, we can leverage identity management by integrating with our AD. From there, we can grant each user access and track who is accessing what.

What is most valuable?

On-premises, we are using the same NetApp product. We find the solution in Azure to be more reliable and more tailorable, with the same NetApp features, because it gives us the most up-to-date NetApp release.

If you have a larger amount of data than normal in the cloud, it is easy to provision and maintain. Waiting for the delivery of controllers, the configuration of enclosures, and so on is eliminated compared to on-premises.

For how long have I used the solution?

Eight months.


What do I think about the stability of the solution?

In my eight months of experience, I haven't seen a single point of failure within ONTAP, except for Azure maintenance.

What do I think about the scalability of the solution?

Scalability is a very good feature. If our data reaches 90 percent (or some threshold level), it automatically increases the storage within ONTAP without our intervention.

The solution helps us control storage costs. It is scalable. If we need more storage, then we can opt for a monthly or yearly option.
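
The automatic growth described above maps to ONTAP's volume autosize capability. As a minimal sketch, assuming the ONTAP 9 REST API is reachable and using placeholder host, credentials, and volume UUID (field names should be verified against your ONTAP release):

```python
import requests

# All values are illustrative placeholders.
ONTAP_HOST = "cvo.example.com"
VOLUME_UUID = "00000000-0000-0000-0000-000000000000"
AUTH = ("admin", "password")

# Let the volume grow automatically once it crosses the 90 percent
# threshold mentioned above. The "autosize" object follows the ONTAP 9
# REST API's /storage/volumes schema.
resp = requests.patch(
    f"https://{ONTAP_HOST}/api/storage/volumes/{VOLUME_UUID}",
    json={
        "autosize": {
            "mode": "grow",
            "grow_threshold": 90,    # percent used that triggers growth
            "maximum": 2 * 1024**4,  # cap growth at 2 TiB (bytes)
        }
    },
    auth=AUTH,
    verify=False,  # lab only; enable TLS verification in production
)
resp.raise_for_status()
```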

How are customer service and support?

The technical support is good.

Once you register with NetApp Cloud Central, people who can assist you with deploying the solution will get in touch with you.

Which solution did I use previously and why did I switch?

This is the first time that we are using this type of solution in the cloud.

How was the initial setup?

The initial setup is straightforward, but I would like NetApp to come up with an easier setup for the solution.

Deployment time depends on the client. On average, deploying the entire solution can take about a day (eight hours), if there are no issues.

For a standard storage implementation project, we need shared storage for the client's applications as well as for the user groups and shared files they have been using. We've been using this solution to provide that.

You need to go through the NetApp website and go through the documents regarding deploying ONTAP. If you experience any difficulties, there is a technical team to help you.

What about the implementation team?

Some of the sales managers and other team members helped me set up the environment. They explained to me how the pay-as-you-go and BYOL models work. If you want to try the BYOL model, they will issue temporary licenses for a 30-day evaluation. They are there for you from beginning to end if you need assistance.

What was our ROI?

Because we went with the BYOL instead of pay as you go, we haven't seen ROI.

Using this solution, the more data that we store, the more money we can save. If you use a traditional cloud provider, you cannot manage storage in a unified way; you have to follow a separate set of rules and procedures, and you need more people to manage the entire environment. NetApp, by contrast, provides a single point solution.

What's my experience with pricing, setup cost, and licensing?

They have a very good price which keeps our customers happy. 

Once we deploy the pay-as-you-go model, we cannot convert the product to a BYOL model. This is a concern that we have, and we would like NetApp to come up with a solution for it. For example, a customer may think, "Let's use this solution," and later realize, "This is our solution and I have this budget for the year. If we can pay upfront for one year, then we can reduce the amount we pay." This is currently not possible if we select the pay-as-you-go model.

Your OCCM should always stay paired with your ONTAP. For example, suppose you deploy one ONTAP and then, for some reason, delete it along with the OCCM. The next time you want to deploy another OCCM and ONTAP, the same license won't work, because the license is based on the OCCM serial ID.

Which other solutions did I evaluate?

We did not evaluate other solutions. We only evaluated ONTAP.

NetApp is an industry leader, and we have experience with NetApp on-premises. That is the reason we chose NetApp as a reliable partner.

What other advice do I have?

We don't use the solution’s cloud resource performance monitoring.

I would rate this solution as a nine (out of 10).

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner.
PeerSpot user
it_user879162 - PeerSpot reviewer
Vice President at a financial services firm with 1,001-5,000 employees
Real User
Jul 14, 2020
Helps us to save on the costs of backup products
Pros and Cons
  • "Its features help us to have a backup of our volumes using the native technology of NetApp ONTAP. That way, we don't have to invest in other solutions for our backup requirement. Also, it helps us to replicate the data to another geographic location so that helps us to save on the costs of backup products."
  • "They have very good support team who is very helpful. They will help you with every aspect of getting the deployment done."
  • "The automated deployment was a bit complex using the public APIs. When we had to deploy Cloud Volumes ONTAP on a regular basis using automation, It could be a bit of a challenge."
  • "We want to be able to add more than six disks in aggregate, but there is a limit of the number of disks in aggregate. In GCP, they provide less by limiting the sixth disk in aggregate. In Azure, the same solution provides 12 disks in an aggregate versus GCP where it is just half that amount. They should bump up the disk in aggregate requirement so we don't have to migrate the aggregate from one to another when the capacities are full."

What is our primary use case?

Our use case is a multitenant deployment of shared storage, specifically network-attached storage (NAS). This file share is used by applications that are very heavy, with very high throughput. The applications need sustained read/write throughput and persistent volumes. Cloud Volumes ONTAP helps us get the required performance for our applications.

We just finished our PoC. We are now engaging with NetApp to roll out CVO and host our customers on top of it.

How has it helped my organization?

Using this solution, the more data that we store, the more money we can save.

What is most valuable?

  • CIFS volume.
  • The overall performance that we are getting from CVO.
  • The features around things like Snapshots. 
  • The performance and capacity monitoring of the storage.

These features help us to have a backup of our volumes using the native technology of NetApp ONTAP. That way, we don't have to invest in other solutions for our backup requirement. Also, it helps us to replicate the data to another geographic location so that helps us to save on the costs of backup products.

Cloud Volumes ONTAP gives us flexible storage.

What needs improvement?

There are a few bugs in the system that they need to fix on the UI side, specifically in the integration of NetApp Cloud Manager with CVO, which is something they are already working on. They will probably provide a SaaS offering for Cloud Manager.

We want to be able to add more than six disks in an aggregate, but there is a limit on the number of disks in an aggregate. In GCP, an aggregate is limited to six disks, whereas in Azure the same solution provides 12 disks in an aggregate, double that amount. They should raise the disks-per-aggregate limit so we don't have to migrate from one aggregate to another when capacities are full.
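
To make the impact of those limits concrete, here is a rough back-of-the-envelope calculation; the disk counts come from the review, while the per-disk size is purely an assumed example:

```python
# Rough per-aggregate capacity ceiling using the disk-count limits cited
# in the review. The 8 TiB disk size is only an assumed example.
DISK_SIZE_TIB = 8
limits = {"GCP": 6, "Azure": 12}

for cloud, max_disks in limits.items():
    print(f"{cloud}: {max_disks} disks x {DISK_SIZE_TIB} TiB = "
          f"{max_disks * DISK_SIZE_TIB} TiB per aggregate (raw)")
```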

For how long have I used the solution?

Six months.

What do I think about the stability of the solution?

I cannot comment on stability right now because we are not using it in production yet.

What do I think about the scalability of the solution?

We still have CVO running on a single VM instance. As an improvement, a scale-out capability would help, so we would not be limited to a single VM in GCP. Behind one instance, we are adding a number of GCP disks. In some cases, we would like the option to scale out by adding more nodes in a cluster environment, like Dell EMC Isilon.

How are customer service and technical support?

Get NetApp involved from day one if you are thinking of deploying Cloud Volumes ONTAP. They have a very good support team who is very helpful. They will help you with every aspect of getting the deployment done.

Which solution did I use previously and why did I switch?

We previously used OpenZFS cloud storage. We switched because we were not getting the performance we needed, and performance tuning was a headache. There were a lot of issues, such as the stability and updates of OpenZFS. We had used it because it was a free, open-source solution.

We switched to NetApp because I trust their performance tool and file system.

How was the initial setup?

We did the PoC. Now, we are going to set up a production environment. 

The initial setup was a bit challenging for someone who has no idea about NetApp. Since I have some background with it, I found the setup straightforward, but for a few folks it was challenging. For novices, it is best to get NetApp support involved, as they can advise on the best options to select during deployment.

The automated deployment was a bit complex using the public APIs. When we had to deploy Cloud Volumes ONTAP on a regular basis using automation, it could be a bit of a challenge.
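
For readers hitting the same automation hurdle, the following is only a rough sketch of the kind of call involved; the endpoint path, payload fields, and values are illustrative assumptions and must be checked against the Cloud Manager API documentation for your release:

```python
import requests

# Placeholders throughout; the payload is a simplified illustration of a
# Cloud Manager working-environment request, not a verbatim API schema.
CLOUD_MANAGER = "https://cloudmanager.example.com"
TOKEN = "bearer-token-from-login"  # obtained out of band

payload = {
    "name": "cvo-gcp-01",
    "region": "us-east1",
    "vpcId": "projects/my-project/global/networks/my-vpc",
    "svmPassword": "********",
    "licenseType": "gcp-cot-standard-paygo",
}

resp = requests.post(
    f"{CLOUD_MANAGER}/occm/api/gcp/vsa/working-environments",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print("Deployment request accepted:", resp.json())
```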

What about the implementation team?

My team of engineers works on deploying this solution. There are five people on my team.

What was our ROI?

We have not realized any money or savings yet because we are still in our deployment process.

What's my experience with pricing, setup cost, and licensing?

They give us a good price for CVO licenses. It is one of the reasons that we went with the product.

Which other solutions did I evaluate?

We did consider several options. 

In GCP, we also considered NetApp's Cloud Volumes Service, but it did not have good performance.

Another solution that we tried was Qumulo, which was a good solution, but not good enough. From a scale-out perspective, it can scale out a file system, whereas NetApp is not like that; NetApp still works with a single VM. That is the difference.

We also evaluated the native GCP file offering. However, it did not give us the performance for the application that we wanted.

We do use cloud performance monitoring, but not with a NetApp product; we use Stackdriver. NetApp provides a separate tool for monitoring NetApp CVO, which is NetApp Cloud Manager.

What other advice do I have?

I would rate this solution as an eight (out of 10).

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Google
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
reviewer1380831 - PeerSpot reviewer
Storage Engineer at a media company with 5,001-10,000 employees
Real User
Jul 14, 2020
Helps us keep control of storage costs because it's an OpEx-based model
Pros and Cons
  • "One of the most valuable features is its similarity to the physical app, which makes it familiar. It's almost identical to a real NetApp, which means you can run all of the associated NetApp processes and services with it. Otherwise, we would definitely have to deploy some hardware on a site somewhere, which could be a challenge in terms of CapEx."
  • "There is room for improvement with the capacity. There's a very hard limit to how many disks you can have and how much space you can have. That is something they should work to fix, because it's limiting. Right now, the limit is about 360 terabytes or 36 disks."

What is our primary use case?

We are predominantly using it as a backup target for our products. We are also doing some CIFS shares to remote sites that don't have their own file server infrastructures.

How has it helped my organization?

It gives us flexibility. In a disaster situation, or even in an office relocation, there can be a gap. NetApp CVO allows us to continue providing customers with access to their data, even if a physical site is going to be down for a long period of time. It's only really viable if you know a site is going to be down for a long period. We've had office relocations where there were gaps between when the old office closed and the new office opened, during the period of moving and setting things up. For a couple of weeks we were serving the data out of the cloud rather than out of the physical site. NetApp CVO may have improved our uptime by 1 or 2 percent, because we don't have that much downtime to start with.

It has all the advantages of the real NetApp product. You can provide storage in most of the formats you'd want. 

It helps us to keep control of storage costs because it's an OpEx-based model rather than a CapEx-based model. It depends on how you license it: you can bring it up and down almost on an hourly basis. Obviously, we don't do that; we've got it up long-term. But it does have the flexibility to bring up an instance of a cloud filer for just a short period of time.

It has saved us from having to buy and host another filer somewhere. That would be the only option to achieve the same goal. If we were to buy another filer to provision the capacity we've got in the cloud, the CapEx would probably be at least $200,000, whereas the running costs are not that much. It depends on how you deal with AWS, but we don't pay that kind of money. It probably saves us 75 percent of the cost of buying a filer for real.

What is most valuable?

One of the most valuable features is its similarity to the physical appliance, which makes it familiar. It's almost identical to a real NetApp, which means you can run all of the associated NetApp processes and services with it. Otherwise, we would definitely have to deploy some hardware on a site somewhere, which could be a challenge in terms of CapEx. Also, in our case, in Europe, in terms of physical real estate, we are trying to reduce the size of our data centers.

What needs improvement?

There is room for improvement with the capacity. There's a very hard limit to how many disks you can have and how much space you can have. That is something they should work to fix, because it's limiting. Right now, the limit is about 360 terabytes or 36 disks.

For how long have I used the solution?

I've been using NetApp Cloud Volumes ONTAP for about two years.

What do I think about the stability of the solution?

The stability has been really good. I don't think we've ever had any major outages. AWS, obviously, doesn't guarantee 100 percent uptime, so I can see that it's not been up since I last restarted it. Rather, it's been up since some AWS event resulted in it migrating to another one of their pieces of hardware. But we've never had it actually crash.

What do I think about the scalability of the solution?

The scalability is good to a point, but there is a hard limit on the capacity. We could, obviously, create another associated instance of it, but it wouldn't be a single name space, and we couldn't do some of the things you can do if you have a lot of multiple, real NetApps. So there are some hard limits to how big a solution you can create.

Day-to-day, it's probably only being used by about a dozen people in our organization, because it is mainly a backup target. There is a small collection of people whose shares live on it, but the majority of the business' files are on the real NetApps on their sites.

It's probably at a size where we're not likely to implement any more. You never know; it's very hard to tell what will go on with our company. But at the moment, it's probably not going to get any larger. We may actually shrink the capacity, because we are temporarily storing some data for a part of the business that should only be on there for a few months at most, because of COVID.

As an organization, we went ahead wholeheartedly with the idea that anything and everything should be in the cloud, cloud first, but that got tempered a little bit once the costs became apparent. We also hit limitations with some of the software vendors, because they're quite small, niche companies that don't want to support anything in the cloud, so there are limits to what you can put in the cloud.

How are customer service and technical support?

Their technical support is very good. In the early stages, we would get almost instant online support, because we would go into the Cloud Manager and there would be a chat and we could have a chat session with the engineers who were implementing it on the NetApp side.

As things have progressed, we now need to follow a more formal support model, but we usually get a pretty good response, for general, routine questions, within five or six hours. If it were a major incident, you would get much faster support. We've never had a major incident with it.

Which solution did I use previously and why did I switch?

It replaced some physical NetApps that were going to be refreshed. One of the reasons we switched was to limit capital expenditure. Another reason was that it was very much a "Let's go and put as much as we possibly can into the cloud" approach. It fell in with that initiative quite well.

How was the initial setup?

The initial setup was pretty straightforward. The challenges we had were only around the security we put on top of AWS. For me, as an engineer, to be able to do things requires another team to do stuff on the network side or to do stuff on my rights within AWS so that I could deploy it and manage it afterwards. But it is relatively straightforward if you're not fighting other complications.

It took us a couple of days to get it up and working the first time. My colleague did one in the US and it took him about half a day. We did one for another part of the business and that took about three or four hours to get up and running.

Initially, we were just doing an evaluation to see what it was like and if we could actually use it. It went from a trial implementation to going live within a month or two, once we realized it was going to do what we wanted to do.

We had four people involved in the implementation. I was involved, as a storage engineer, and we also had one of our client specialists, a network person, and an info-sec person to validate that the network configuration was within their rules. In terms of maintenance, it's just me, but it doesn't really require a lot of attention because it's cloud-based and it's a NetApp. Generally, once you set them up properly, unless you're changing something, they look after themselves.

What about the implementation team?

It was done by just us. Because it was one of the very early implementations of Cloud Volumes ONTAP, we were working with NetApp and their staff were playing the role that a third-party integrator might have played.

What was our ROI?

We're probably burning about $10,000 a month on it but it's saving us the CapEx and the power and cooling of a real filer. We're likely seeing at least a 50 percent saving.

What's my experience with pricing, setup cost, and licensing?

Choose your disk type properly. Go with the slowest, cheapest disk you can. If you need bigger, faster ones then go for them. 

They've got a variety of license schemes. The one we've gone for is where we pay NetApp once a year; they call it the Bring Your Own License (BYOL) scheme. AWS also offers it on a by-the-hour or by-the-month basis, and you can get it that way and be billed through AWS, but you may not get the same level of discounts as when dealing with NetApp directly. If you are committed to having a cloud filer for an extended period, go with the NetApp licensing model rather than the AWS-provisioned one.

Ultimately, the more data you save, the more it costs you, because you're paying AWS for the capacity. NetApp is licensed per filer, but there are additional running costs that are paid to AWS. You pay AWS' hosting fee for an EC2 instance, and each one of the disks within the NetApp is EBS storage and you pay AWS for those.
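
As a toy illustration of how those components add up, here is a small cost model; every rate below is a made-up placeholder, since actual EC2, EBS, and license pricing varies by region, instance type, and contract:

```python
# Toy monthly cost model for a single-node CVO on AWS.
# Every rate here is a placeholder; substitute real pricing.
ec2_hourly = 1.50          # assumed EC2 instance rate, $/hour
ebs_gib_month = 0.10       # assumed EBS rate, $/GiB-month
provisioned_gib = 10_000   # EBS capacity backing the filer
license_month = 3_000.00   # assumed NetApp per-filer license, $/month

HOURS_PER_MONTH = 730      # average hours in a month
total = (ec2_hourly * HOURS_PER_MONTH
         + ebs_gib_month * provisioned_gib
         + license_month)
print(f"Estimated monthly run rate: ${total:,.2f}")
```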

There is potential to save money by moving things off to object storage. The only cost savings we see on it is against having to buy physical hardware.

Which other solutions did I evaluate?

We looked at third-party hosting with either our own, dedicated hardware or shared NetApp hardware. I wasn't that involved in that evaluation process, but I figure that the costs for the work-around were too high or the solution was too complex for us to go with.

CVO enables us to manage our native cloud storage better than if we used the management options provided by the native cloud service. With the native solutions, you don't get any of the advantages of the NetApp in terms of deduplication and clear management of snapshotting. Also, at the time, there wasn't an easy way to back up to a cloud NetApp; there was nothing. Now they have a slightly different solution where they'll mount it for you, but at that time you created your own cloud instance and your own cloud filer and you managed it. Now, you can access a solution that is managed by AWS or by NetApp.

What other advice do I have?

It is almost identical to having a real NetApp, and it's just that it's remote and it's in the cloud. Almost anything you can do with NetApp locally you can do with a cloud filer.

Go with the cheapest disks to start with, and if you need the performance you can easily transition to using faster disks.

There are limitations, but in general it's robust and easily managed.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Senior Systems Engineer at a healthcare company with 5,001-10,000 employees
Real User
Dec 2, 2019
Helped reduce our data footprint in the cloud and is easy to scale
Pros and Cons
  • "We are definitely in the process of reducing our footprint on our secondary data center and all those snapshots technically reduce tape backup. That's from the protection perspective, but as far as files, it's much easier to use and manage and it's faster, too."
  • "I think the challenge now is more in terms of keeping an air gap. The notion that it is in the cloud, easy to break, etc. The challenge now is mostly about the air gap and how we can protect that in the cloud."

What is our primary use case?

We use the solution on-premises for file services and in AWS as the target.

How has it helped my organization?

We are definitely in the process of reducing our footprint on our secondary data center and all those snapshots technically reduce tape backup. That's from the protection perspective, but as far as files, it's much easier to use and manage and it's faster, too.

The solution has definitely helped reduce our organization's data footprint in the cloud. The data tiering helps a lot: I would say tiering data to S3 reduces our footprint by about 90-95 percent, which is huge, compared to having everything sit on EBS, which is expensive storage.
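
The tiering behavior described above corresponds to a volume tiering policy. As a minimal sketch against the ONTAP REST API, with placeholder host, credentials, and UUID (confirm the field names and policy values for your ONTAP release):

```python
import requests

# Placeholders throughout. The "tiering" object follows the ONTAP 9 REST
# API's /storage/volumes schema; "auto" tiers cold blocks to the object
# store (S3 in this case), which drives the footprint reduction above.
ONTAP_HOST = "cvo.example.com"
VOLUME_UUID = "00000000-0000-0000-0000-000000000000"

resp = requests.patch(
    f"https://{ONTAP_HOST}/api/storage/volumes/{VOLUME_UUID}",
    json={"tiering": {"policy": "auto"}},
    auth=("admin", "password"),
    verify=False,  # lab only
)
resp.raise_for_status()
```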

What is most valuable?

The solution's Snapshot copies and thin clones are a really fast and easy method of recovery.

What needs improvement?

I think the challenge now is more in terms of keeping an air gap: the notion that because it is in the cloud, it is easy to break into. The challenge now is mostly about the air gap and how we can protect that in the cloud.

What do I think about the stability of the solution?

So far, it has been very stable. We haven't had any downtime or other stability issues.

What do I think about the scalability of the solution?

This product is very easy to scale.

How are customer service and technical support?

Most of the time they're very timely. Sometimes you just need to wait, which is okay because those times are not critical issues. When we do have to wait, the response time is usually a day or two, but that's fine with that level of criticality.

Which solution did I use previously and why did I switch?

I've used NetApp for many years. It's something that I know is very stable and reliable, so recommending it to my current company was easy. When I joined the company, we were using a different vendor, an EMC solution for file, but we moved to NetApp. NetApp has more storage efficiency, the Snapshot feature, and better performance when you have multiple snapshots.

How was the initial setup?

It's very straightforward to set up. It was very easy and fast.

We used NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. It was very easy and there was almost nothing to do. It's just a click of a button.

What about the implementation team?

We used a NetApp build engineer to deploy. We had a good experience with them.

What other advice do I have?

Definitely check out this file solution. We are using that and the cloud solution. It's something you need to see in your environment if you are not using it yet.

I would rate NetApp nine out of ten. If the air gap concern were addressed, it would be a ten.

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Storage Architect at a consultancy with 10,001+ employees
Real User
Oct 29, 2019
Critical data is snapshotted more frequently making it easier to restore
Pros and Cons
  • "The solution’s Snapshot copies and thin clones in terms of operational recovery are good. Snapshot copies are pretty much the write-in time data backups. Obviously, critical data is snapshotted a lot more frequently, and even clients and end users find it easier to restore whatever they need if it's file-based, statical, etc."
  • "How it handles erasure coding. I feel it the improvement should be there. Basically, it should be seamless. You don't want to have an underlying hardware issue or something, then suddenly there's no reads or writes. Luckily, it's at a replication site, so our main production site is still working and writing to it. But, the replication site has stopped right now while we try to bring that node back. Since we implemented in bare-metal, not in appliance, we had to go back to the original vendor. They didn't send it in time, and we had a hardware memory issue. Then, we had a hard disk issue, which brought the node down physically."

What is our primary use case?

The primary use case is to move aged data to the cloud.

It is deployed on the cloud.

How has it helped my organization?

The tool saves us time and money. Now it's easy to retrieve data, and you can go back and look at the statistics to study them. Because my company is focused on healthcare, there's no time limit on the retention of information; it's infinite. Instead of having all our data on tapes, where retrieving information takes many hours, this is a good solution.

What is most valuable?

The migration is seamless. Basically, we shouldn't be spending a whole lot budget-wise; we would like something reasonable. What's happening right now is that when we adopt a cloud solution, we don't see the fine print, and at the end of the day we get a long bill itemizing this and that. We don't want those unanticipated costs.

We use the solution's inline encryption using SnapMirror. We did get geo-audits and things like that. In other words, everything put together makes up security. It's not just storage talking to the cloud; it's everything else too: network, PCs, clients, etc. Securing it is a cumulative effort. That's where we try to make sure there are no vulnerabilities; any vulnerabilities are addressed and fixed right away.

The solution's Snapshot copies and thin clones in terms of operational recovery are good. Snapshot copies are pretty much point-in-time data backups. Obviously, critical data is snapshotted more frequently, and even clients and end users find it easier to restore whatever they need if it's file-based, static data, etc.

The solution's Snapshot copies and thin clones have affected our application development speed in a very positive way. From snapshots, copies, and clones, our teams were able to develop applications, doing pretty much all development in-house. They were able to roll applications out first in the R&D department's test environment. The R&D department uses it a lot. It's easy for them because they can simulate production issues while the systems are still in production. So, they love it. We create snapshots and clones for them all the time.
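
For readers curious what that snapshot-then-clone workflow looks like programmatically, here is a minimal sketch against the ONTAP 9 REST API; all names and UUIDs are placeholders, and the request shapes should be verified for your release:

```python
import requests

# Placeholders throughout; verify request shapes against your release.
HOST = "cvo.example.com"
AUTH = ("admin", "password")
BASE = f"https://{HOST}/api/storage/volumes"
PROD_UUID = "00000000-0000-0000-0000-000000000000"

# 1. Take a point-in-time snapshot of the production volume.
snap = requests.post(f"{BASE}/{PROD_UUID}/snapshots",
                     json={"name": "rnd_test_snap"},
                     auth=AUTH, verify=False)
snap.raise_for_status()

# 2. Create a writable FlexClone from that snapshot for the R&D team.
clone = requests.post(BASE, json={
    "name": "app_vol_rnd_clone",
    "svm": {"name": "svm1"},
    "clone": {
        "is_flexclone": True,
        "parent_volume": {"name": "app_vol"},
        "parent_snapshot": {"name": "rnd_test_snap"},
    },
}, auth=AUTH, verify=False)
clone.raise_for_status()
```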

The solution helped reduce our company's data footprint in the cloud by about two petabytes. All of the tape data is now written to the cloud. We have almost reached the capacity we bought, even before we expected to, which is good. It also reduces labor: with fewer tapes, you don't have to go around buying and maintaining tapes, sending them offsite, etc. All of that has been eliminated.

What needs improvement?

Right now, we're using StorageGRID. Obviously, it is a challenge: anything that you write to the cloud or get back from the cloud is a challenge. When we implemented StorageGRID, including the nodes, we implemented it on our own bare metal. So the issue is that they're trying to implement features like erasure coding, and it is a huge challenge. It's still a challenge because we have a five-node bare-metal Docker implementation, so if you lose a node for some reason, reads and writes to it stop. This is because of limitations within the infrastructure and within ONTAP.

How it handles erasure coding is where I feel the improvement should be. Basically, it should be seamless. You don't want an underlying hardware issue to suddenly stop reads or writes. Luckily, it happened at a replication site, so our main production site is still working and writing to it, but the replication site is stopped right now while we try to bring that node back. Since we implemented on bare metal, not an appliance, we had to go back to the original vendor. They didn't send parts in time, and we had a hardware memory issue. Then, we had a hard disk issue, which brought the node down physically.

It needs better reporting. Right now, we have to piece everything together just to figure out what the issue could be. We get a random error saying, "This is an error," and we have to literally dig into it, ask people, look at log files, look through our logs and the Docker log files, and then verify, "Okay, this is the issue." We just want better alerting and error-handling reports. Once you get an error, you don't want to spend the first two hours trying to figure out what that error means; it should be actionable right away so you can start working on it. That's where we see the drawbacks. Overall, the product is good and serves a purpose, but as an administrator and architect, nothing is perfect.

What do I think about the stability of the solution?

There's always room for improvement. Overall, it's still stable.

What do I think about the scalability of the solution?

60 percent of our tape data is sitting in the cloud now.

There's a limitation to scalability. Right now, when we want to expand the initial architecture, we have to add additional nodes just so it can handle the data without hurting performance. Then, we have to go back and request more licensing, which adds to the cost. In terms of scalability, unless you have a five-to-six-year plan ahead, you can't just say, "Great, we have run out of space; let's increase it." It's not like growing a volume.

How are customer service and technical support?

Unless a much more experienced person comes on, the frontline tech is only reading what he sees on the website. He pulls up a script or whatever, because when we open a case, an automatic case is already created. We see generic questionnaires, but nothing pertaining to the case. For example, if you run out of space or have a node issue, technical support sits there asking us something else, nothing to do with the nodes or the volume being down or offline. It's not relevant; it is generalized. You have to sit down and explain to them, "This has nothing to do with the questions you're asking. It's out of context, so you might want to look again and come back with the proper input." That's a pain.

However, the minute we say, "It's very critical," we see a good, solid SME on the line who is helping us.

I'm not as experienced as many of my colleagues, and they're really frustrated. We conveyed this concern to our account person and have since seen a lot of change.

Which solution did I use previously and why did I switch?

The company has always been a NetApp shop even before I entered the company. We continue to use it because of the good products. We do market research, obviously. We do see good products, and every year there is improvement. When we want to do hardware upgrades, it's still very good. The way we are trying to develop, it's very seamless for us and not a pain. 

We have never felt, "We are done with NetApp. Let's move onto something else." I love to introduce other vendors into the mix, just so it's not a monopoly. We still love NetApp as our primary.

How was the initial setup?

It is a little complex. It's completely different from regular, standard ONTAP in how you manage it, and there is a learning curve. Half the time you get confused and try to compare it with the standard product, saying, "Oh, this feature was here. How come it's not there? That was very good there. How come it's not here?"

We used NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. The configuration wizards and the ability to automate the process were good. We liked it. It's all in one place, so you don't have to use multiple tools just to get things worked out. You see what you have on the other side plus what you have on your end, and you're able to access it.

What about the implementation team?

Mostly, we did it ourselves. When we went to MetroCluster, we used their Professional Services. For the rest of ONTAP, we deployed it ourselves. It is pretty much self-explanatory and has good training.

What's my experience with pricing, setup cost, and licensing?

Cloud is cloud. It's still expensive. Any good solution comes with a price tag. That's where we are looking to see how well we can manage our data in the cloud by trying to optimize the costs.

I know our licensing cost to some extent, but not fully. For example, I don't know overall how much we have gone over budget or where we cut costs just to maintain licensing. That part I don't know.

I know the licensing is a bit on the high-end. That's when we had to downsize our MetroCluster disks and just migrate to disks that were half used. We migrated into those just to reduce maintenance costs.

Which other solutions did I evaluate?

We use Caringo, object storage for migrating aged data. It is a cheap solution for us, and that's why we use it. When we compared prices, Caringo was much cheaper.

Once we migrated everything to Caringo, there were challenges because it's another vendor, and then you're working with two different vendors. We started having issues, so now we use StorageGRID.

We chose NetApp because we already had the infrastructure. Adding additional resources and features into the mix is much easier because it's one vendor, and they understand the product. If we needed to add something and improve on the solution, it's much easier.

What other advice do I have?

I would recommend NetApp any day, at any time, because there's so much hard work in it. It's more open and transparent. Nobody from NetApp comes and says, "We're going to sell this gimmick," where you see all the good stuff and then begin to realize, "This is not what they promised." For this reason, I would recommend NetApp.

They make sure the solution fits our needs. It's not, "Okay, we'll go to the customer site and pitch whatever we feel like regarding our products," whether it fits or not, just to get through the door. A lot of vendors do that. NetApp makes an assessment and then makes sure, "Okay, it does fit."

The product: I would give it an eight (out of 10). The company: It's a six (out of 10).

We have not yet implemented the solution to move data between hyperscalers and our on-premises environment. It's just from our NetApps to the cloud, not hybrid movement. The RVM team is planning on that, so they can have everything put on the cloud untouched rather than hosted on our data stores.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Lead Storage Engineer at a insurance company with 5,001-10,000 employees
Real User
Jul 29, 2019
Enables us to manage multiple petabytes of storage with a small team, including single node and HA instances
Pros and Cons
  • "Unified Manager, System Manager, and Cloud Manager are all GUI-based. It's easy for somebody who has not been exposed to this for years to pick it up and work with it."
  • "We use the mirroring to mirror our volumes to our DR location. We also create snapshots for backups. Snapshots will create a specified snapshot to be able to do a DR test without disrupting our standard mirrors. That means we can create a point-in-time snapshot, then use the ability of FlexClones to make a writeable volume to test with, and then blow it away after the DR test."
  • "Some of the licensing is a little kludgy. We just created an HA environment in Azure and their licensing for SVMs per node is a little kludgy. They're working on it right now."

What is our primary use case?

For the most part, we're using it to move data off-prem. We have the ability to do mirrors from on-prem to Cloud Volumes ONTAP and we also have both single-node instances and HA instances. We are running it in both AWS and Azure.

We're using all of the management tools that go along with it. We're using both OnCommand Cloud Manager and OnCommand Unified Manager, which means we can launch System Manager as well.

Unified Manager is what monitors the environment. OnCommand Cloud Manager allows you to deploy and it does have some monitoring capabilities, but it's not like Unified Manager. And from OnCommand Cloud Manager you can launch System Manager, which gives you the lower-level details of the environment.

Cloud Manager will allow you to create volumes, do CIFS shares, NFS mounts, and create aggregates. But the rest of the networking components and other work for the SVMs and doing other configurations are normally done at that lower level. System Manager is where you would do that, whereas Unified Manager allows you to monitor the entire environment.

Say I have 30 instances running out there. Unified Manager allows me to monitor all 30 instances for things like volume-full alerts, near-volume-full alerts, inodes full, network components being offline, paths, back-end storage paths, and aggregates full. All the items you would want to monitor for a healthy environment are handled through Unified Manager.
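
As a rough stand-in for the kind of volume-full check Unified Manager performs natively, the sketch below polls the ONTAP REST API directly and flags volumes over a threshold; host, credentials, and the threshold are placeholders:

```python
import requests

# Placeholders; a simplified stand-in for Unified Manager's built-in
# volume-full monitoring, shown only to illustrate the kind of check.
HOST = "cvo.example.com"
AUTH = ("admin", "password")

resp = requests.get(
    f"https://{HOST}/api/storage/volumes",
    params={"fields": "name,space.size,space.used"},
    auth=AUTH, verify=False,
)
resp.raise_for_status()

for vol in resp.json().get("records", []):
    space = vol.get("space", {})
    size, used = space.get("size"), space.get("used")
    if size and used and used / size >= 0.90:
        print(f"ALERT: volume {vol['name']} is {used / size:.0%} full")
```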

How has it helped my organization?

We're sitting at multiple petabytes of storage on our NetApp infrastructure. We're talking hundreds of thousands of shares across thousands of volumes. Even with that size of infrastructure, it's being supported by three people. And it's not like we're working 24/7. It gives us the ability to do a lot, to do more with less. Those three people manage our entire NAS environment. I've got two intermediate and one senior storage engineer in our environment who handle things. They're handling those multiple petabytes of on-prem and I'm just starting to get them involved in the cloud version, Cloud Volumes ONTAP. So, for the most part, it's just me on the Cloud Volume side.

In terms of the storage efficiency reducing our storage footprint, the answer I'd like to give is "yes." The problem I have is that nobody ever wants to delete anything. We have terabytes of data on-prem in multiple locations, both primary and DR, backed up. And now, we're migrating it to the cloud. But eventually, the answer will be yes.

What is most valuable?

I'm very familiar with working from the command line, but Unified Manager, System Manager, and Cloud Manager are all GUI-based. It's easy for somebody who has not been exposed to this for years to pick it up and work with it. Personally, for the most part, I like to get in with SecureCRT and do everything from the command line.

We do a lot of DR testing of our environment, so we're using a couple of components. We use Unified Manager to link with WFA, Workflow Automation, and we do scripted cut-overs to build out. We use the mirroring to mirror our volumes to our DR location. We also create snapshots for backups, including a specific snapshot for DR tests, so we can test without disrupting our standard mirrors. That means we can create a point-in-time snapshot, then use the ability of FlexClones to make a writeable volume to test with, and then blow it away after the DR test.

We could also do that in an actual disaster. All we would do is quiesce and break our mirrors, our volumes would become writeable, and then we would deploy our CIFS shares and our NFS mounts. We would have a full working environment in a different geographic location. Whether you're doing it on-prem or in the cloud, those capabilities are there. But that's all done at a lower level.

The data protection provided by the Snapshot feature is a crucial part of being able to maintain our environment. We stopped doing tape-based backups of our NAS systems. We do 35 days of snapshots: we keep four hourlies, two dailies, and 35 nightly snapshots. This gives us the ability to recover any data that's been accidentally deleted or corrupted, from an application perspective, and to pull it out of a snapshot. And then there are the point-in-time snapshots, being able to create one at a given moment. If I want to use a FlexClone to get at data, which is just pointers to the back-end data, right now, and use that as a writeable volume without interrupting my backup and DR capabilities, those point-in-time snapshots are crucial.
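
The quiesce-and-break sequence described above for an actual disaster can also be driven through the ONTAP REST API. A minimal sketch with placeholder values (state names should be checked against your ONTAP release):

```python
import requests

# Placeholders; state values follow the ONTAP 9 REST API's
# /snapmirror/relationships resource and should be verified.
HOST = "dr-cluster.example.com"
AUTH = ("admin", "password")
REL_UUID = "00000000-0000-0000-0000-000000000000"
URL = f"https://{HOST}/api/snapmirror/relationships/{REL_UUID}"

# Quiesce (pause) the relationship, then break it so the DR copy
# becomes writable, mirroring the failover sequence described above.
for state in ("paused", "broken_off"):
    resp = requests.patch(URL, json={"state": state},
                          auth=AUTH, verify=False)
    resp.raise_for_status()
```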

The user can go and recover the file himself, so we don't have to have a huge number of people working on recovering things. The user has the ability to get to that snapshot location to recover the file and go back however many days are kept. Being that it's read-only to the user community, users can get at that data as long as they have proper rights to the file. Somebody else could not get to a file for which they don't have rights, so there's no security breach or vulnerability. It just provides the ability for a user who owns data to get to a backup copy of that data and recover it, in case they've deleted it or had a file corruption.

We also use their File Services Solutions in the cloud, CIFS and NFS. It works just as well as on-prem. The way we configure an environment, we have the ability to talk back to our domain controllers, and then it uses the standard AD credentials and DNS from our on-prem environments.

Cloud Volumes ONTAP in the cloud, versus Data ONTAP on-prem, are the exact same products. If you have systems on-prem that you're migrating to the cloud, you won't have to retrain your workforce because they'll be used to everything that they'll be doing in the cloud as a result of what they've been doing on-prem. In that sense, Cloud Volumes ONTAP is the exact same product, unless you're using a really old version of Data ONTAP on-prem. Then there's the standard change between Data ONTAP versions.

What needs improvement?

Some of the licensing is a little kludgy. We just created an HA environment in Azure and their licensing for SVMs per node is a little kludgy. They're working on it right now. We're working with them on straightening it out.

We're moving a grid environment to Azure, and the way it was set up, we have eight SVMs, which are virtual environments. Each of those has its own CIFS server and all its CIFS and NFS mounts. The reason they're independent of one another is that different groups of the business got pulled together, so they had specific CIFS share names, and you can't have the same share name on the same server more than once on the network; you can't have a CIFS share called "Data" twice in the same SVM. We have eight SVMs because of the way the data was labeled in the paths. God forbid you change a path, because that breaks everything in every application all down the line. Multiple SVMs give you the ability to port existing applications from on-prem into the cloud and/or from on-prem into a fibre infrastructure.

But that ability wasn't there in Cloud Volumes ONTAP, because they assumed it was going to be a new market, and they licensed it for a single SVM per instance built out in the cloud. They were figuring on a new market and new people coming to this, not people porting massive, old-volume infrastructures. In our DR infrastructure we have 60 SVMs. That's not how they build out the new environments.

We're working with them to improve that and they're making strides. The licensing is the only thing that I can see they can improve on and they're working on it, so I wouldn't even knock them on that.

For how long have I used the solution?

I've been using it since its inception. Prior to being called Cloud Volumes ONTAP, it was named a couple of different things along the way. I've been working with the on-prem Data ONTAP for about 16 years now. When they first offered Cloud Volumes ONTAP, I started testing it out in a beta program. It's been a few years now with Cloud Volumes ONTAP. I'm our lead storage engineer, but I'm also on a couple of our cloud teams and I'm a cloud administrator for our organization. We started looking at it when AWS first started coming on the scene, at what we could do in the cloud. And as a company direction, we're implementing cloud-first, where available.

What do I think about the stability of the solution?

We've had no issues.

What do I think about the scalability of the solution?

In an HA environment, it will scale up to 358 terabytes. That's not bad per-system. We've had no difficulties.

We will be moving more stuff off-prem into the cloud. Right now it's at about 15 percent of our entire environment, and we plan on at least 10 percent, or more, per quarter, over the next few years.

We'll be doing the tiering and using the Cloud Sync as well. We're a financial and insurance company, so some things have to remain on-prem, and some things, from a PCI perspective, have a lot of different requirements around them. And because we're across multiple countries worldwide, there are all sorts of HIPAA and other types of legal and financial ramifications from a security perspective. In the UK and in Europe there are the privacy components. There are different things in Hong Kong and Singapore, in Spain, etc. Each country unit requires different types of policies to be adhered to. Everything we have is encrypted at rest, as well as encrypted in-flight.

Cloud Volumes ONTAP also supports data encryption at a volume level, a software encryption. But from a PCI perspective, we use the NSE drives, which give us hardware encryption. So the data is double encrypted: it is hardware encrypted, and the volumes are encrypted as well. We use a management appliance to keep and maintain the encryption keys, and we do quarterly encryption-key replacement. We also use TLS for transporting the data, doing encryption in flight. There are all sorts of supported capabilities that allow you to be compliant.
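
As a minimal sketch of enabling the volume-level (software) encryption mentioned above via the ONTAP REST API; all names are placeholders, and NetApp Volume Encryption licensing plus a configured key manager are assumed:

```python
import requests

# Placeholders; the "encryption" flag follows the ONTAP 9 REST API's
# /storage/volumes schema. NVE licensing and a key manager are assumed.
HOST = "cvo.example.com"
AUTH = ("admin", "password")

resp = requests.post(
    f"https://{HOST}/api/storage/volumes",
    json={
        "name": "secure_vol",
        "svm": {"name": "svm1"},
        "aggregates": [{"name": "aggr1"}],
        "size": 100 * 1024**3,            # 100 GiB in bytes
        "encryption": {"enabled": True},  # volume-level (software) encryption
    },
    auth=AUTH, verify=False,
)
resp.raise_for_status()
```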

Another feature it has is disk sanitize, a destruction component which allows you to do a DoD wipe of the data. Once you've decommissioned an environment, it is completely wiped so nobody can get access to the data that was there previously. That's all built into Data ONTAP, including Cloud Volumes.

NSE drives are a little different because you are not getting physical drives in the cloud environment, so you couldn't do that. But you can do the volume encryption, from Cloud Volumes. In terms of a DoD wipe, you wouldn't be doing that on Azure's or AWS's environments because it's a virtual disk.

How are customer service and technical support?

I've rarely used tech support. I've got so much experience deploying these environments that it's like breathing. It's second nature. And when they first came out with OnCommand Cloud Manager, I was doing beta testing and debugging with the group out of Israel to build the product.

How was the initial setup?

The initial setup was very straightforward. If you use OnCommand Cloud Manager to deploy it into AWS or Azure, it's point-and-click stupid-simple. It takes less than 15 minutes, depending upon your connectivity and bandwidth. That 15 minutes is to build out a brand-new filer and create CIFS shares on it. It automatically deploys everything for you: the back-end storage and the EC2 instances if you're in AWS. In Azure, it creates the Blob space and the VMs.

It's all done for you with just a couple of screens. You tell it what you want to call it, and you tell it what account or subscription you're using, depending upon whether it's AWS or Azure. You tell it how big you want the device to be, how much storage you want it to have, and what volumes you want it to create, CIFS shares, etc. You click next, next, next. As long as you have the ability to provision in the account you've gone into, whether it's AWS or Azure, and have turned on programmatic deployment, it has the access it needs. Turning that on is the only thing you have to do outside OnCommand Cloud Manager. It picks up everything else: it'll pick up what VPC you have and what subnet you have. You just tell it what security group you want it to use. It's fairly simple.

If somebody hasn't utilized or isn't familiar with how to deploy anything in either AWS or Azure, it might be a tad more complicated because they'd need to get that information to begin with. You have to have at least moderate experience with your infrastructure to know which VPC and subnet and security group to specify.

What was our ROI?

In my opinion, we're getting a good return on investment.

Which other solutions did I evaluate?

I always try new products. I've used the SoftNAS product, and a couple of other generic NAS products. They don't even compare. They're not on the same page. They're not even in the same universe. I might be a little biased but they're not even close. 

I have looked at Azure NetApp Files, which is another product that NetApp is putting out. Instead of Cloud Volumes it's cloud files. You don't have to deploy an entire NetApp infrastructure. It gives you the ability to do CIFS at file level without having to manage any of the overhead. That's pre-managed for you.

What other advice do I have?

For somebody who's never used it before, the biggest thing is ease of use. In terms of advice, as long as you design your implementation correctly, it should be fine. I would do the due diligence on the front-end to determine how you want to utilize it before you deploy.

We have over 3,000 users of the solution who have access to snapshots, etc. but only to their own data. We have multiple SVMs per business unit and a locked-down security on that. Only individuals who own data have access to it. We are officially like a utility. We give them storage space. We give them the ability to use it and then they maintain their data. From an IT perspective, we can't really discern what is business-critical and what isn't to a specific business unit. We're global, we're not just U.S., we're all over the world.

We've gone into doing HA. It's the same as what's on-prem, and HA on-prem is something we've always done. When we would buy a filer for on-premises, we'd always buy a two-node HA filer with a switched back-end to be able to maintain the environment. The other nice thing, from an on-prem perspective with a switched environment, is that we can inject and eject nodes. We can do a zero-downtime lifecycle: we can inject new nodes and mirror the data to them, and once everything is on the new nodes, eject the old nodes, having effectively lifecycled the environment without taking any downtime. Data ONTAP works really well for that. The only thing to be aware of is that to inject new nodes into an existing cluster, they have to be at the same version of Data ONTAP.

In terms of provisioning, we keep that locked down because we don't want them running us out of space. We have a ticketing system where users request storage allocation, and the NAS team, which supports the NetApp infrastructure, will allocate the space with the shares to start out. After that, our second-level support teams, our DSC (distributed service center), maintain the volumes from a size perspective. If something starts to get near-full, they will automatically allocate additional space. The reason we have that in place is that if something tries to grow rapidly, like an application that's out of control and keeps eating more and more of the utilization, it gives us the ability to stop that and get with the user before they go from using a couple of hundred gigs to multiple terabytes, which would cost them X amount. There is the ability to auto-grow; we just don't use it in our environment.

In terms of the data protection provided by the solution's disaster recovery technology, we use that a lot. Prior to clustered ONTAP - this is going back to 7-Mode - there was the ability to auto-DR with a single command. That gave us the ability to cut over to another environment and automatically fail over. We're currently using WFA to do that because, when clustered ONTAP first came out, it didn't have the ability to auto-DR. I have not looked into whether they've made auto-DR a feature in later versions of Data ONTAP.
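The reviewer drives failover through WFA, but the underlying operation in clustered ONTAP is breaking a SnapMirror relationship so the destination copy becomes writable. Here is a hedged sketch of that step via the ONTAP REST API; addresses and credentials are placeholders, and the exact behavior should be confirmed against your ONTAP version's documentation.

```python
import requests

DEST = "https://dr-cluster.example.com"  # placeholder DR cluster address
AUTH = ("admin", "password")             # placeholder credentials

# List SnapMirror relationships as seen from the destination cluster.
rels = requests.get(
    f"{DEST}/api/snapmirror/relationships",
    params={"fields": "source.path,destination.path,state"},
    auth=AUTH,
    verify=False,  # lab convenience only
).json()["records"]

for rel in rels:
    print(rel["source"]["path"], "->", rel["destination"]["path"], rel["state"])

# Failover: break one relationship so the DR volume becomes writable.
# (PATCHing state to "broken_off" is the REST equivalent of
# "snapmirror break"; verify against your version's docs.)
uuid = rels[0]["uuid"]
requests.patch(
    f"{DEST}/api/snapmirror/relationships/{uuid}",
    json={"state": "broken_off"},
    auth=AUTH,
    verify=False,
).raise_for_status()
```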

OnCommand Cloud Manager doesn't allow you to do DR-type stuff. There are other things within the suite of the cloud environment that you can do: there's Cloud Sync, which allows you to create a data broker and sync CIFS shares or NFS mounts into an S3 bucket back-end. There's a lot you can do there, but that's getting into the other product lines.

As for using it to deploy Kubernetes, we are working through that right now and the process is going well. We've really just started getting into it and it hasn't been overly complicated. Cloud Volumes ONTAP's capabilities for deploying Kubernetes mean it's been fairly easy.
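For context, the usual way Cloud Volumes ONTAP serves Kubernetes is through NetApp's Trident dynamic provisioner: Trident exposes CVO-backed storage classes, and workloads simply request PersistentVolumeClaims against them. A minimal sketch with the official Kubernetes Python client follows; the storage class name "ontap-nas" and the claim details are hypothetical.

```python
from kubernetes import client, config

# Assumes Trident is installed and a StorageClass named "ontap-nas",
# backed by Cloud Volumes ONTAP, already exists (both are assumptions).
config.load_kube_config()

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-claim"},
    "spec": {
        "accessModes": ["ReadWriteMany"],       # NFS-style shared access
        "storageClassName": "ontap-nas",        # hypothetical class name
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# Trident provisions the backing volume on CVO when the claim is created.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
```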

In terms of the cloud, one thing that has made things a little easier is that previously, within the AWS environment, we had to create a virtual filer in each of our subscriptions or accounts, because we wanted the filer to be close to the database instances or servers within that same account, without traversing VPCs. Now that they have given us the ability to do VPC peering, we can create an overarching primary account and have it talk to all the instances within that storage account, or subscription in Azure, without having to spin one up in every single subscription or account. We have a lot of accounts, so it has allowed us to reel that back by creating larger HA components in a single account and then giving access to the other accounts through VPC peering. All that traffic stays within Azure or AWS. That saves money because we don't have to pay for multiple subscriptions of Cloud Volumes ONTAP and/or additional virtual filers.
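On the AWS side, the hub-and-spoke pattern described above comes down to creating and accepting VPC peering connections and adding routes. A minimal boto3 sketch, with all IDs, CIDRs, and the region as placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is illustrative

# Request a peering connection from the hub VPC (where the CVO HA pair
# lives) to a spoke VPC in another account. All IDs are placeholders.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0hub00000000000000",
    PeerVpcId="vpc-0spoke000000000000",
    PeerOwnerId="111111111111",  # spoke account ID (placeholder)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The spoke account then accepts the request (run with that account's
# credentials).
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side still needs route-table entries pointing the other VPC's
# CIDR at the peering connection, for example:
ec2.create_route(
    RouteTableId="rtb-00000000000000000",   # placeholder
    DestinationCidrBlock="10.1.0.0/16",     # spoke CIDR (placeholder)
    VpcPeeringConnectionId=pcx_id,
)
```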

For my use, Cloud Volumes ONTAP is a ten out of ten.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Service Architecture at a computer software company with 1,001-5,000 employees
Real User
Jul 23, 2019
High availability enables us to run two instances so there is no downtime when we do maintenance
Pros and Cons
  • "NetApp's Cloud Manager automation capabilities are very good because it's REST-API-driven, so we can completely automate everything. It has a good overview if you want to just have a look into your environment as well."
  • "Another feature which gets a lot of attention in our environment is the File Services Solutions in the cloud, because it's a completely, fully-managed service. We don't have to take care of any updates, upgrades, or configurations."
  • "Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs on one cluster, for example, or to increase the single instances more, from a performance perspective. It would be good to get more performance out of a single HA pair."
  • "One difficulty is that it has no SAP HANA certification. The asset performance restrictions create challenges with the infrastructure underneath: The disks and stuff like that often have lower latencies than SAP HANA itself has to have."

What is our primary use case?

The primary use case is for SAP production environments. We are running the shared file systems for our SAP systems on it.

How has it helped my organization?

It's helped us to dive into the cloud very fast. We didn't have to change any automations which we already had. We didn't have to change any processes we already had. We were able to adopt it very fast. It was a huge benefit for us to use the same concepts in the cloud as we do on-premise. We're running our environment very efficiently, and it was very helpful that our staff, our operators, didn't have to learn new systems. They have the same processes, all the same knowledge they had before. It was very easy and fast.

We did a comparison, of course, and it was cheaper to have Cloud Volumes ONTAP running with deduplication and compression, compared to storing everything on, for example, Azure disks and having a server running all the time as well. And that was not even for the biggest environment.

The data tiering saves us money because it offloads all the cold data to Blob Storage. However, we use the HA version, and data tiering only came to HA with version 9.6, which we are not running in our production environment; it's still an RC (pre-release), not a GA release. In our testing we have seen that it saves a lot of money, but our production systems are not there yet.
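Once on a release where tiering is supported for HA pairs, enabling it is a per-volume policy change. A minimal sketch via the ONTAP REST API follows; the addresses, credentials, and volume name are placeholders, and "auto" (tier cold blocks to the capacity tier) is just one of the available policies.

```python
import requests

CLUSTER = "https://cvo.example.com"  # placeholder CVO management address
AUTH = ("admin", "password")         # placeholder credentials

# Look up a volume by name, then set its tiering policy to "auto" so
# cold blocks are tiered to the object store (Blob/S3 capacity tier).
vol = requests.get(
    f"{CLUSTER}/api/storage/volumes",
    params={"name": "svm1_data", "fields": "uuid"},  # volume name is a placeholder
    auth=AUTH,
    verify=False,  # lab convenience only
).json()["records"][0]

requests.patch(
    f"{CLUSTER}/api/storage/volumes/{vol['uuid']}",
    json={"tiering": {"policy": "auto"}},
    auth=AUTH,
    verify=False,
).raise_for_status()
```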

What is most valuable?

The high availability of the service is a valuable feature. We use the HA version to run two instances. That way there is no downtime for our services when we do any maintenance on the system itself.

For normal upgrades or updates of the system - updates for security fixes, for example - it helps that the systems and the service itself stay online. For one of our customers, we have 20 systems attached, and if we had to go to that customer all the time and say, "Oh, sorry, we have to take your 20 systems down just because we have to do maintenance on your shared file systems," he would not be amused. So that's really a huge benefit.

And there are the usual NetApp benefits we have had over the last ten years or so, like snapshotting, cloning, and deduplication and compression, which make it space-efficient in the cloud as well. We've been taking advantage of the data protection provided by the snapshot feature for many years in our on-prem storage systems. We find it very good. And we offload those snapshots as well to other instances, or to other storage systems.

The provisioning capability was challenging the first time we used it. You have to find the right way to deploy, but after the first and second tries it was very easy for us to automate. We are highly automated in our environment, so we use the REST API for deployment. We completely deploy the Cloud Volumes ONTAP instance itself automatically when we have a new customer. Similarly, deploying volumes on the Cloud Volumes ONTAP instance, and access to the instance, are automated as well.

But for that, we still use our on-premise automations with WFA (Workflow Automation), a NetApp tool which simplifies the automation of NetApp storage systems. We use the same automation for the Cloud Volumes ONTAP instances as we do for our on-premise storage systems. There's no difference, at the end of the day, from the operating system standpoint.
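As an illustration of what such volume provisioning can look like against the same REST interface (this is a generic ONTAP REST sketch, not the reviewer's WFA workflows; the SVM, aggregate, volume name, and size are placeholders):

```python
import requests

CLUSTER = "https://cvo.example.com"  # placeholder CVO management address
AUTH = ("admin", "password")         # placeholder credentials

# Create an NFS volume on an existing SVM and aggregate; names and
# sizes are illustrative only.
payload = {
    "name": "sap_shared",
    "svm": {"name": "svm_sap"},
    "aggregates": [{"name": "aggr1"}],
    "size": 500 * 1024**3,           # 500 GiB, expressed in bytes
    "nas": {"path": "/sap_shared"},  # junction path for NFS clients
}
resp = requests.post(
    f"{CLUSTER}/api/storage/volumes",
    json=payload,
    auth=AUTH,
    verify=False,  # lab convenience only
)
resp.raise_for_status()  # the POST returns a job you can poll for completion
```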

In addition, NetApp's Cloud Manager automation capabilities are very good because, again, it's REST-API-driven, so we can completely automate everything. It also gives a good overview if you just want to look into your environment. It's pretty good.
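For the Cloud Manager layer itself, the same REST-driven pattern applies. The sketch below lists working environments; note that the endpoint path and the bearer-token flow are assumptions based on NetApp's Cloud Manager (OCCM) API and should be verified against the current documentation.

```python
import requests

API = "https://cloudmanager.cloud.netapp.com"
TOKEN = "..."  # OAuth bearer token from NetApp Cloud Central (placeholder)

# Endpoint path is an assumption to verify against NetApp's API docs.
resp = requests.get(
    f"{API}/occm/api/working-environments",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# The response groups environments by type (AWS, Azure, on-prem, ...).
for group, envs in resp.json().items():
    if isinstance(envs, list):
        for env in envs:
            print(group, env.get("name"))
```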

Another feature which gets a lot of attention in our environment is the File Services Solutions in the cloud, because it's a fully managed service. We don't have to take care of any updates, upgrades, or configurations. We're just using it, deploying volumes and using them. We see that, in some way, as being the future of storage services, for us at least: completely managed.

What needs improvement?

Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs on one cluster, for example, or to increase the single instances more, from a performance perspective. It would be good to get more performance out of a single HA pair. My guess is that those will be the next challenges they have to face.

One difficulty is that it has no SAP HANA certification. The performance restrictions create challenges with the infrastructure underneath: the disks often have higher latencies than SAP HANA allows. That was something of a challenge for us: determining where to use Azure disks and where to use Cloud Volumes ONTAP in that environment, instead of just using Cloud Volumes ONTAP everywhere.

For how long have I used the solution?

We've been using Cloud Volumes for over a year now.

What do I think about the stability of the solution?

The stability is very good. We haven't had any outages.

What do I think about the scalability of the solution?

Right now, the scalability is sufficient for what it provides us, but we can see that our customer environments are growing, and that it will reach its performance limit in around a year or so. They will have to evolve, create some performance improvements, or build scale-up/scale-out capabilities into it.

In terms of increasing our usage, the tiering will definitely be used in production as soon as it's GA for Azure. They're already working with the Ultra SSDs for performance improvements on the storage system itself. As soon as those become generally available from Microsoft, that will probably be a feature we'll adopt.

As for end-users, for us they are our customers, and each customer has several hundred or a thousand users on the system. I don't really know how many end-users are ultimately using it, but we have about ten customers.

How are customer service and technical support?

Technical support has been very good. The technical people responsible for us at NetApp are very good. If we contact them, we get direct feedback. We often have direct contact, in our case at least, with the engineers as well. We have direct contacts with NetApp in Tel Aviv.

It's worth mentioning that when we started with Cloud Volumes ONTAP, we did an architecture workshop with them in Tel Aviv, to explain what our deployments look like in our on-premise environment and to figure out what possibilities Cloud Volumes ONTAP could provide to us as a service provider. What else could we do with it, other than just running several services? For example: disaster recovery or doing our backups. We did that at a very early stage in the process.

Which solution did I use previously and why did I switch?

We only used native Azure services. We went with Cloud Volumes ONTAP because it was a natural extension of our NetApp products. We have a huge on-premise storage environment from NetApp and have been familiar with all the benefits of these storage systems for several years. We wanted to have all the same benefits in the cloud as we have on-premise. That's why we evaluated it, and we got in at a very early stage.

How was the initial setup?

To say the initial setup was complex is too strong. We had to look into it and find the right way to do it. It wasn't that complex; it was just a matter of understanding what was supported and what was not from the SAP side. As soon as we figured that out, it was very straightforward to build our environment.

We had an implementation strategy: determining which SAP systems and services we would like to deploy in the cloud. Our strategy was that if Cloud Volumes ONTAP made sense for a use case, we would want to use it because, again, it's highly automated and we could use it with our existing scripting. Then we had to look at what is supported by SAP itself. We put those together in the end and that gave us our concept.

Our initial deployment took one to two weeks, maximum. It required two people, in total, but it was a mixture of SAP and storage colleagues. In terms of maintenance, it doesn't take any additional people than we already have for our on-premise environment. There was no additional headcount for the cloud environment. It's the same operating team and the same people managing Cloud Volumes ONTAP as well as our on-premise storage systems. It requires almost no maintenance. It just runs and we don't have to take care of updating it every two months or so for security reasons.

What about the implementation team?

We didn't use a third party.

What was our ROI?

We have seen return on investment but I don't have the numbers. 

What's my experience with pricing, setup cost, and licensing?

The standard pricing is online, but it depends on the model. If you're using the PayGo model, it's just the normal costs on the Microsoft page. If you're using Bring Your Own License, which is what we're doing, then you work with your sales contact at NetApp to figure out the best price for your company. We have an Enterprise Agreement, or something similar to that, so we get a different price.

In terms of additional costs beyond the standard licensing fees, you have to run instances in Azure: virtual machines and disks. You still have to pay for the Azure disks, and for Blob Storage if you're using tiering. What's also important to know is the network bandwidth. That was the most complicated part of our project: figuring out how much data would be streamed out of our data center into the cloud and how much data would have to be sent back into our data center. It's more challenging than if you have a customer who is running only in Azure. It can be expensive if you don't keep an eye on it.

Which other solutions did I evaluate?

We have a single-vendor strategy.

What other advice do I have?

Don't be afraid of granting permissions, because that's one of the most complex parts - but that's Azure. As soon as you've done that, it's easy and straightforward. When you do it the first time you'll think, "Oh, why is it so complicated?" That's native Azure.

The biggest lesson I've learned from using Cloud Volumes ONTAP is that, from an optimization standpoint, our on-premise instance was a lot more complex than it had to be. That was a big lesson, because Cloud Volumes ONTAP is a very easy, lightweight service. You just use it and it doesn't require that much configuring. You can just use the standards that come from NetApp, and that was something we didn't do with our on-premise environment.

In terms of disaster recovery, we have not used Cloud Volumes ONTAP in production yet. We've tested it to see if we could adopt Cloud Volumes ONTAP for that scenario: migrating all our workloads, or all of the storage footprint we have on-premise, to Cloud Volumes ONTAP. We're still evaluating it. We've done a lot of cost comparison, which looks pretty good. But we are still facing a little technical problem because we're a CSP (cloud service provider). We're on the way to having Microsoft fix that. It's a Microsoft issue, not a NetApp Cloud Volumes ONTAP issue.

I would rate the solution at eight out of ten. There are improvements they need to make for scale-up and scale-out.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Sr Systems Engineer at a healthcare company with 501-1,000 employees
Real User
Nov 23, 2019
Simple to get up and running, and our data is readily available when we need it
Pros and Cons
  • "The most valuable feature of this solution is that it makes our data readily available and we don't have to go through a lot of trouble to access it."
  • "We would like to have support for high availability in multi-regions."

What is our primary use case?

Our primary use case is data replication to the cloud.

How has it helped my organization?

Using Snapshot copies and thin clones for operational recovery is convenient. This technology makes things very easy.
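To show what that looks like in practice, here is a hedged sketch of the two operations via the ONTAP REST API: take a Snapshot copy of a volume, then spin up a writable thin clone (FlexClone) from it. The cluster address, credentials, UUID, and names are all placeholders.

```python
import requests

CLUSTER = "https://cluster.example.com"  # placeholder
AUTH = ("admin", "password")             # placeholder
VOL_UUID = "00000000-0000-0000-0000-000000000000"  # parent volume UUID (placeholder)

# 1. Take a Snapshot copy of the parent volume.
requests.post(
    f"{CLUSTER}/api/storage/volumes/{VOL_UUID}/snapshots",
    json={"name": "pre_upgrade"},
    auth=AUTH,
    verify=False,  # lab convenience only
).raise_for_status()

# 2. Create a writable thin clone (FlexClone) from that snapshot.
requests.post(
    f"{CLUSTER}/api/storage/volumes",
    json={
        "name": "db_clone",
        "svm": {"name": "svm1"},  # placeholder SVM
        "clone": {
            "is_flexclone": True,
            "parent_volume": {"uuid": VOL_UUID},
            "parent_snapshot": {"name": "pre_upgrade"},
        },
    },
    auth=AUTH,
    verify=False,
).raise_for_status()
```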

The unified file and block-storage access across clouds and on-premises infrastructure has made things easier for us. It means that we do not face significant roadblocks.

What is most valuable?

The most valuable feature of this solution is that it makes our data readily available and we don't have to go through a lot of trouble to access it.

What needs improvement?

We would like to have support for high availability in multi-regions.

There is no support for Microsoft Azure.

For how long have I used the solution?

I have been using this solution for three years.

What do I think about the stability of the solution?

The stability is very impressive and we have had no issues with it.

What do I think about the scalability of the solution?

Scalability is not an issue because it is really expandable. Even if you don't know the structure of the business, you can scale up, scale down, and do everything graphically.

How are customer service and technical support?

We have not used NetApp technical support directly. We have been speaking with partners who are in our region.

How was the initial setup?

We used the NetApp Cloud Manager to get up and running, and we found it very simple. It was very easy, and you don't have to be an engineer to get it working.

What about the implementation team?

Partners from our region assisted us with the deployment. CW did a good job, starting from scratch and getting everything up and running. Whenever I gave a requirement, they would come back with all of the options that were available.

Which other solutions did I evaluate?

I have tried Pure Storage and EMC RecoverPoint, but ONTAP is easier to use.

What other advice do I have?

I love this solution. They have a lot of features and they explore the market really well, whereas other vendors fail to do those things. ONTAP keeps evolving with the needs of the market and follows the trends.

I would rate this solution a ten out of ten.

Which deployment model are you using for this solution?

Private Cloud
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Buyer's Guide
Download our free NetApp Cloud Volumes ONTAP Report and get advice and tips from experienced pros sharing their opinions.
Updated: December 2025