reviewer1380825 - PeerSpot reviewer
Lead Engineer Architecture & Engineering Services at a tech services company with 10,001+ employees
Real User
Provides a single point solution that is easy to maintain and provision
Pros and Cons
  • "If you have a larger amount of data than normal in cloud, it is easy to provision and maintain. Waiting for the delivery of the controller, the configuration of enclosures, etc., all this stuff is eliminated compared to using on-premise."
  • "I would like NetApp to come up with an easier setup for the solution."

What is our primary use case?

The main use case of ONTAP is for users working in SharePoint. From there, they need to access data for specific applications as well as individual shared folders.

It is being used for application purposes as well as for individual user purposes.

We are using the latest version.

How has it helped my organization?

This isn't an isolated solution. We need NetApp to support fast access over file protocols, and we found the solution on Azure to be just as helpful as the on-premises version.

The solution provides us unified storage, no matter what kind of data we have. A normal storage account in the public cloud may not give us control at the identity level. With NetApp, however, we can leverage identity management by integrating with our AD. From there, we can manage users' access and keep track of who is accessing what.

What is most valuable?

On-premises, we are using the same NetApp. We find the solution in Azure to be more reliable and more tailorable, with the same NetApp features, because it gives us the most up-to-date NetApp release.

Even with a larger-than-normal amount of data in the cloud, it is easy to provision and maintain. All the waiting for delivery of a controller, configuration of enclosures, and so on is eliminated compared to on-premises.

For how long have I used the solution?

Eight months.


What do I think about the stability of the solution?

In my eight months of experience, I haven't seen a single point of failure within ONTAP, except for Azure maintenance.

What do I think about the scalability of the solution?

Scalability is a very good feature. If our data reaches 90 percent (or some threshold level), it automatically increases the storage within ONTAP without our intervention.

The solution helps us control storage costs. It is scalable. If we need more storage, then we can opt for a monthly or yearly option.
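
As a rough illustration of the automatic capacity growth described above, the sketch below enables volume autosize through the ONTAP REST API (available in ONTAP 9.6 and later) so a volume grows once it passes a usage threshold. The host name, credentials, volume name, and the exact autosize field names are assumptions for illustration only; verify them against the API reference for your ONTAP release.

```python
# Hypothetical sketch: enable "grow when ~90% full" on a Cloud Volumes ONTAP
# volume via the ONTAP REST API. Host, credentials, volume name, and the
# autosize field names are assumptions -- check your version's documentation.
import requests

CVO_HOST = "cvo.example.com"   # hypothetical cluster management address
AUTH = ("admin", "password")   # use a dedicated, least-privileged account in practice
VOLUME = "shared_data"         # hypothetical volume name

base = f"https://{CVO_HOST}/api/storage/volumes"

# Look up the volume's UUID by name.
resp = requests.get(base, params={"name": VOLUME}, auth=AUTH, verify=False)
resp.raise_for_status()
uuid = resp.json()["records"][0]["uuid"]

# Ask ONTAP to grow the volume automatically once it crosses the threshold.
autosize = {
    "autosize": {
        "mode": "grow",          # grow only, never shrink
        "grow_threshold": 90,    # percent used that triggers a grow
        "maximum": 2 * 1024**4,  # cap growth at ~2 TiB (bytes)
    }
}
resp = requests.patch(f"{base}/{uuid}", json=autosize, auth=AUTH, verify=False)
resp.raise_for_status()
print(f"Autosize enabled on volume {VOLUME}")
```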

How are customer service and support?

The technical support is good.

Once you register with NetApp Cloud Central, people who can assist you with deploying the solution will get in touch with you.

Which solution did I use previously and why did I switch?

This is the first time that we are using this type of solution in the cloud.

How was the initial setup?

The initial setup is straightforward, but I would like NetApp to come up with an easier setup for the solution.

Deployment time depends on the client. On average, deploying the entire solution can take about a day (eight hours), if there are no issues.

For a standard storage implementation project, we need shared storage for the client's applications as well as for the user groups and shared files they have been using. We've been using this solution to provide that.

You need to go to the NetApp website and read the documentation on deploying ONTAP. If you experience any difficulties, there is a technical team to help you.

What about the implementation team?

Some of the sales managers and other team members helped me set up the environment. They explained to me how the pay-as-you-go and BYOL models work. If you want to try the BYOL model, they will provide temporary licenses for a 30-day evaluation. They are there for you from beginning to end if you need assistance.

What was our ROI?

Because we went with the BYOL instead of pay as you go, we haven't seen ROI.

Using this solution, the more data we store, the more money we can save. With traditional cloud providers, you cannot manage everything in a unified way; you have to follow a set of rules and other processes, and you need more people managing the entire environment. NetApp, by contrast, provides a single point solution.

What's my experience with pricing, setup cost, and licensing?

They have a very good price which keeps our customers happy. 

Once we deploy the pay-as-you-go model, we cannot convert the product to a BYOL model. This is a concern that we have, and we would like NetApp to come up with a solution for it. For example, a customer may think, "Let's use this solution," and later realize, "This is our solution and I have this budget for the year. If we can pay upfront for one year, then we can reduce the amount we pay." This is currently not possible if we select the pay-as-you-go model.

Your OCCM should always stay paired with your ONTAP. For example, suppose you have deployed one ONTAP instance and then, for some reason, you delete it along with the OCCM. The next time you want to deploy another OCCM and ONTAP, the same license won't work, because the license is tied to the OCCM serial ID.

Which other solutions did I evaluate?

We did not evaluate other solutions. We only evaluated ONTAP.

NetApp is an industry leader, and we have experience with NetApp on-premises. That is why we chose NetApp as a reliable partner.

What other advice do I have?

We don't use the solution’s cloud resource performance monitoring.

I would rate this solution as a nine (out of 10).

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner.
PeerSpot user
reviewer1223481 - PeerSpot reviewer
Storage Admin at a comms service provider with 5,001-10,000 employees
Real User
Snapshot copies and thin clones have made our recovery time a lot faster
Pros and Cons
  • "ONTAP's snapshot copies and thin clones in terms of operational recovery are pretty useful in recovering your data from a time in a snapshot. That's pretty useful for when you have an event where a disaster struck and then you need to recover all your data. It's pretty helpful and pretty fast in those terms."
  • "In terms of improvement, I would like to see the Azure NetApp Files have the capability of doing SnapMirrors. Azure NetApp Files is, as we know, is an AFF system and it's not used in any of the Microsoft resources. It's basically NetApp hardware, so the best performance you can achieve, but the only reason we can't use that right now is because of the region that it's available in. The second was the SnapMirror capability that we didn't have that we heavily rely on right now."

What is our primary use case?

Our primary use case for ONTAP is for DR. 

How has it helped my organization?

ONTAP has improved my organization because we no longer need to purchase all that hardware and have that all come up as a big expense. It worked out better for our budgeting purposes.

We use it to move data between hyperscalers and our on-premises environment. We're able to do that with SnapMirror, and it's pretty simple to set up and move data around.
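
As a hedged sketch of what that SnapMirror setup can look like when scripted, the example below creates and initializes a relationship from an on-prem volume to a Cloud Volumes ONTAP volume via the ONTAP REST API. The host, credentials, SVM and volume paths, and the state transition used to start the baseline transfer are assumptions; confirm them against the SnapMirror section of the REST API documentation for your release.

```python
# Minimal, illustrative SnapMirror setup via the ONTAP REST API. SnapMirror is
# usually driven from the destination cluster; all names below are hypothetical.
import requests

DEST_HOST = "cvo-azure.example.com"   # hypothetical CVO (destination) cluster
AUTH = ("admin", "password")
SOURCE_PATH = "svm_onprem:app_data"   # hypothetical source SVM:volume
DEST_PATH = "svm_cvo:app_data_dr"     # hypothetical destination SVM:volume (type DP)

api = f"https://{DEST_HOST}/api/snapmirror/relationships"

# Create the relationship (the destination volume must already exist).
create = requests.post(
    api,
    json={"source": {"path": SOURCE_PATH}, "destination": {"path": DEST_PATH}},
    auth=AUTH,
    verify=False,
)
create.raise_for_status()
# In practice, wait for the asynchronous create job to finish before continuing.

# Find the new relationship and trigger the baseline transfer by requesting
# the "snapmirrored" state.
rel = requests.get(api, params={"destination.path": DEST_PATH}, auth=AUTH,
                   verify=False).json()["records"][0]
init = requests.patch(f"{api}/{rel['uuid']}", json={"state": "snapmirrored"},
                      auth=AUTH, verify=False)
init.raise_for_status()
print(f"SnapMirror {SOURCE_PATH} -> {DEST_PATH} initializing")
```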

What is most valuable?

The most valuable feature is DR backups. 

ONTAP's snapshot copies and thin clones are very useful for operational recovery, letting you recover your data from a point-in-time snapshot. That is valuable when a disaster strikes and you need to recover all your data. It is helpful and fast in those situations.

We use SnapMirror inline encryption for security in the cloud. A lot of people, especially legal, want their data to be protected. That's what we use it for.

Snapshot copies and thin clones have made our recovery time a lot faster. Doing a restore from a snapshot is a lot better than trying to do a restore from a backup.

In terms of time management and managing our infrastructure, we are a lot better because of the consistency of storage management across clouds.

I wouldn't say it has reduced our data footprint in the cloud because whatever we were using was basically a lift and shift as of right now. We are hoping as we go we'll be able to take advantage of all the storage efficiencies like compression and all that. Hopefully, that'll save us quite a lot of space and time.

What needs improvement?

In terms of improvement, I would like to see Azure NetApp Files gain the capability of doing SnapMirror. Azure NetApp Files is an AFF system that doesn't run on standard Microsoft resources; it's basically NetApp hardware, so you get the best performance you can achieve. The only reason we can't use it right now is the regions it's available in, and the second reason is the missing SnapMirror capability, which we rely on heavily.

What do I think about the stability of the solution?

We haven't had issues with stability so far. 

What do I think about the scalability of the solution?

Scalability comes down to what service or what NetApp Cloud solution you're using. There are different solutions for what you're trying to achieve. Based on your requirements, you just need to pick the right solution that works for you.

How are customer service and technical support?

I haven't had any issues, so technical support is pretty good.

Which solution did I use previously and why did I switch?

We knew we needed to invest in this solution because management told us we were closing our data centers and migrating everything to the cloud. That's what kicked us off.

How was the initial setup?

We used NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. It can be a little challenging if you don't know how network security groups and roles in Azure work. That's where we had challenges with deployment: we had Cloud Managers in different regions, one in Azure West and one in Azure East, and we were trying to do replication between the two. Cloud Manager wasn't able to make a connection, and that was because of some of the roles we had to provide. Even the documentation on that was scattered; there wasn't a single page with all the information. It was challenging and took me a lot of time to figure out. I think it should all be on one page, not scattered across different pages.

Once I reached out to support, they helped me out, but before that I was trying to figure it out on my own by reading the documentation, which didn't get me anywhere.

The first one I deployed in Azure was very simple. The second one, where I was trying to make the connection between the two, was complex because of how the roles worked.

What about the implementation team?

We used consultants for the implementation. We had a pretty good experience with them.

What was our ROI?

We have seen ROI. Some of our SQL databases have SLAs of around five minutes, and SnapMirror works great for that. Without it, if we had a disaster, we could be in big trouble with SLA breaches.

What's my experience with pricing, setup cost, and licensing?

It has not reduced our cloud costs. We're still pretty new, and we're still trying to figure out how the cost modeling works and which options give the best performance and cost for our workloads. That takes a lot of tuning. Once you get there, you just need to monitor your workloads and go from there.

For NetApp it's about $20,000 for a single node and $30,000 for the HA.

Which other solutions did I evaluate?

For DR we are using NetApp, but for production a lot of the cloud architects in our company want to go native to Azure or native to AWS. We have been a NetApp shop for a while, and even our R&D on-prem runs mostly on NetApp. We want to keep that going in the cloud because it's a lot simpler to manage our infrastructure and storage and take advantage of all the efficiencies that NetApp provides. If you don't use it, you lose those savings; with petabytes of data like we have, Microsoft and AWS would benefit from those efficiencies while we wouldn't, because we wouldn't have that capability. With the NetApp integration, we can take advantage of all those efficiencies and the performance.

What other advice do I have?

I would rate it a nine out of ten because the simplicity of the DR is amazing. You just set it up. If there are any issues, bringing it back online in the DR site takes just a few minutes and then you're back up again.

The advice that I would give to anybody considering ONTAP is to give it a try. That's how I learned. I didn't know anything about the cloud. Then our company just started telling us that we were moving everything to the cloud and we had to learn about it. That's how we learned and moved everything to the cloud.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
CTO at Poria
Real User
Reliable, easy to manage, and has an easy setup
Pros and Cons
  • "The initial setup was straightforward. We started with a small pilot and we then moved to production with no downtime at all."
  • "In the next release, I would like to see more options on the dashboard."

What is our primary use case?

My primary use case of ONTAP is for all of my data.

How has it helped my organization?

We have DR, and we once had a problem with electricity; the data moved over to the other side of the DR and neither the users nor I knew about it. ONTAP keeps that kind of event from becoming a problem.

What is most valuable?

The most valuable features are that it's easy to manage and it's reliable. 

I haven't had to restore the Snapshot copies and thin clones. Every time I check, it's working.

I don't use the inline encryption.

What needs improvement?

In the next release, I would like to see more options on the dashboard. 

Local support needs improvement. 

What do I think about the stability of the solution?

It's very stable.

What do I think about the scalability of the solution?

Scalability is easy.

How are customer service and technical support?

Their technical support is very good. 

Which solution did I use previously and why did I switch?

We previously used HPE 3PAR and we switched because of the complexity we had with HPE. It was easier with NetApp.

How was the initial setup?

The initial setup was straightforward. We started with a small pilot and we then moved to production with no downtime at all.

What about the implementation team?

We used an integrator for the setup. They were good. 

Which other solutions did I evaluate?

We chose NetApp because, after we did the pilot, we saw the difference between the two companies.

What other advice do I have?

I would rate it a nine out of ten because of my experience with it and the ease of implementation. To make it a ten, it would have to cost nothing.

My advice to someone considering this solution would be to go for it. 

Which deployment model are you using for this solution?

On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
reviewer1223403 - PeerSpot reviewer
Solutions Architect at a tech services company with 201-500 employees
Real User
Provides deduplication, compression, and compaction that should result in cost savings
Pros and Cons
  • "It gives a solution for storage one place to go across everything. So, the customer is very familiar with NetApp on-prem. It allows them to gain access to the file piece. It helps them with the training aspect of it, so they don't have to relearn something new. They already know this product. They just have to learn some widgets or what it's like in the cloud to operate and deploy it in different ways."
  • "I would like some more performance matrices to know what it is doing. It has some matrices inherent to the Cloud Volumes ONTAP. But inside Cloud Manager, it would also be nice to see. You can have a little Snapshot, then drill down if you go a little deeper."

What is our primary use case?

Desktop-as-a-service is a PoC that I'm doing for our customers to allow them to use NetApp for their personal, departmental, and profile shares. It connects to the desktop-as-a-service environment that we're building for them.

This is for training. The customer has classrooms set up, with about 150,000 users coming through. They want a secure, efficient solution that can be reset after one class finishes, before the next class comes in, using NetApp CVO as well as some desktop services on AWS.

It is hosted on AWS, with CVO providing the filers along with Cloud Manager. We were looking at Azure as well, because the cloud doesn't matter; we want to do multicloud with it.

How has it helped my organization?

We haven't put it into production yet. However, in the proof of concept we showed how daily Snapshot coverage can be used, because we're doing it for a training area. This allows them to return to where they were. The bigger thing is that if they need to set up again for a class, we can restore a golden copy or flip back to where they need to be.

It gives storage one place to go across everything. The customer is very familiar with NetApp on-prem, and it allows them to gain access to the file piece. It helps them with the training aspect, so they don't have to relearn something new. They already know this product; they just have to learn some widgets and what it's like to operate and deploy it in different ways in the cloud.

The customer knows the product. They don't have to train their administrators on how to do things. They are very familiar with that piece of it. Then, the deduplication, compression, and compaction are all things that you would get from moving to a CVO and the cloud itself. That is something that they really enjoy because now they're getting a lot of cost savings off of it. We anticipate cloud cost savings, but it is not in production yet. It should be about a 30 percent savings. If it is a 30 percent or better savings, then it is a big win for the customer and for us.

What is most valuable?

  • Dedupe
  • Compression
  • Compaction
  • Taking 30 gig of data and reducing it down to five to 10 gig on the AWS blocks.

What needs improvement?

I would like some wizards or best practices for how to secure CVO, built into Cloud Manager. I thought that would be a good place to put that kind of thing.

I would like some more performance metrics to know what it is doing. There are some metrics inherent to Cloud Volumes ONTAP, but it would be nice to see them inside Cloud Manager as well: a quick snapshot view, with the ability to drill down a little deeper.

This is where I would like to see changes, primarily around security and performance metrics.

For how long have I used the solution?

We are still in the proof of concept stage.

What do I think about the stability of the solution?

It is a good system. It is very stable as far as what I've been using with it. I find that support from it is really good as well. It is something that I would offer to all of my customers.

What do I think about the scalability of the solution?

It is easy to scale; that is inherent to the product. It will move to another cloud solution, or it can be managed from another cloud solution. So it takes down barriers that are sometimes put up by vendors in different ways.

How was the initial setup?

We use NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. Its configuration wizards and ability to automate the process are easy, simple, and straightforward. If you have any knowledge of storage, even to a very small amount, the wizards will click through and help to guide you through the right things. They make sure you put the right things in. They give some good examples to make sure you follow those examples, which makes it a bit more manageable in the long run.

Which other solutions did I evaluate?

They use some native things that are inherent to the AWS. They have looked at those things. 

NetApp has been one of the first ones that they looked at, and it is the one that they are very happy with today.

What other advice do I have?

Work with your resources in different ways, as far as in NetApp in the partner community. But bigger than that, just ask questions. Everybody seems willing to help move the solution forward. The biggest advice is just ask when you don't know, because there is so much to know.

I would rate the solution as a nine (out of 10).

We're not using inline encryption right now.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: My company has a business relationship with this vendor other than being a customer. Partner.
PeerSpot user
reviewer1223382 - PeerSpot reviewer
Sr Systems Engineer at a healthcare company with 1,001-5,000 employees
Real User
The native filer capabilities are baked right there on the system
Pros and Cons
  • "The solution’s Snapshot copies and thin clones in terms of operational recovery are the best thing since sliced bread. Rollback is super easy. It's just simple, and it works. It's very efficient."

    What is our primary use case?

    The primary use is virtualization as well as filer storage, pretty much all the features of the ONTAP suite.

    We don't have any cloud footprint for contractual obligations. So, it's all pretty much on-prem, but it's in a co-location.

    How has it helped my organization?

    We use it to replicate between data centers. It is for our DR site as well. We use it to create redundancy.

    We do on-prem S3 with StorageGRID. The on-prem infrastructure is cheap, and it works just the same. It's S3, so it integrates very well with the things in our environment that use S3.

    What is most valuable?

    The most valuable features are the native filer capabilities because a lot of SAN providers don't do that. When they do it, they do it with an appliance or a secondary. With this, it is just baked in right there on the system that you require. You don't have to have anything extra.

    The solution’s Snapshot copies and thin clones in terms of operational recovery are the best thing since sliced bread. Rollback is super easy. It's just simple, and it works. It's very efficient.

    What do I think about the stability of the solution?

    The stability is good. I've been with NetApps for a long time, so I've seen them fall and come back. However, with cDOT and all this new stuff, it is great. It just works.

    What do I think about the scalability of the solution?

    We're not that big, storage footprint-wise. However, it's simple. You just add nodes. So, it works.

    How are customer service and technical support?

    We have not really used the technical support.

    Which solution did I use previously and why did I switch?

    We had previous experiences with deploying ONTAP at other companies successfully.

    ONTAP makes our storage solutions more flexible. Traditionally, that's hard to do. ONTAP gives you those features which you typically have to build yourself.

    How was the initial setup?

    It's straightforward. But you do have to know what you're doing. Things do what you expect them to do. There is quite a bit of initial setup, but with things like Ansible and all this new stuff that they're doing, it makes it much easier and automated. So, it's simple.

    What about the implementation team?

    I did the deployment myself with a little help from our vendor's professional services.

    What was our ROI?

    We have had less downtime.

    What's my experience with pricing, setup cost, and licensing?

    Cost is a big factor, because a lot of companies can't afford enterprise grade equipment all the time. They skimp where they can. I would recommend that they improve the cost.

    What other advice do I have?

    This company that I work for now is just acquiring quite a bit of NetApp equipment. We will be doing SnapMirror. I have done it in the past at another company.

    It does exactly what it does, and it does it well. It works, and that's what really matters at the end of the day: uptime, functionality, and scalability.

    I would rate it a nine out of 10. There is always room for improvement. No one is ever going to be a 10.

    Which deployment model are you using for this solution?

    Hybrid Cloud
    Disclosure: My company does not have a business relationship with this vendor other than being a customer.
    PeerSpot user
    Storage Architect at NIH
    Real User
    Critical data is snapshotted more frequently making it easier to restore
    Pros and Cons
    • "The solution’s Snapshot copies and thin clones in terms of operational recovery are good. Snapshot copies are pretty much the write-in time data backups. Obviously, critical data is snapshotted a lot more frequently, and even clients and end users find it easier to restore whatever they need if it's file-based, statical, etc."
    • "How it handles erasure coding. I feel it the improvement should be there. Basically, it should be seamless. You don't want to have an underlying hardware issue or something, then suddenly there's no reads or writes. Luckily, it's at a replication site, so our main production site is still working and writing to it. But, the replication site has stopped right now while we try to bring that node back. Since we implemented in bare-metal, not in appliance, we had to go back to the original vendor. They didn't send it in time, and we had a hardware memory issue. Then, we had a hard disk issue, which brought the node down physically."

    What is our primary use case?

    The primary use case is to move age old data to the cloud.

    It is deployed on the cloud.

    How has it helped my organization?

    The tool saves us time and money. Now it's easy to retrieve data, and you can go back and look at the statistics to study them. Because my company is focused on healthcare, there's no time limit on the retention of information; it's infinite. Instead of having all our data on tapes, where it takes many hours to retrieve information, this is a good solution.

    What is most valuable?

    The migration is seamless. Basically, we shouldn't be spending a whole lot budget-wise; we would like something reasonable. What's happening right now is that when we try to develop a cloud solution, we don't see the fine print, and then at the end of the day we get a long bill itemizing this and that. We don't want those unanticipated costs.

    We use the solution’s inline encryption using SnapMirror. We did get geo-audits and things like that. In other words, everything put together makes up the security. It's not just storage talking to the cloud; it's everything else too: network, PCs, clients, etc. Securing it is a cumulative effort. That's where we try to make sure there are no vulnerabilities, and any vulnerabilities are addressed and fixed right away.

    The solution’s Snapshot copies and thin clones in terms of operational recovery are good. Snapshot copies are pretty much point-in-time data backups. Obviously, critical data is snapshotted more frequently, and even clients and end users find it easier to restore whatever they need if it's file-based, statistical, etc.

    The solution’s Snapshot copies and thin clones have affected our application development speed very positively. Using Snapshot copies and clones, the teams were able to develop applications, doing pretty much all in-house development, and roll them out first in the R&D department's test environment. The R&D department uses it a lot. It's easy for them because they can simulate production issues while still in production, so they love it. We create clones for them all the time.

    The solution helped reduce our company's data footprint; they're moving two petabytes of data to the cloud. All of the tape data is now being written to the cloud. We have almost reached the capacity we bought, even before we expected to, so it's good. It also reduces labor, because with fewer tapes you don't have to go around buying tapes, maintaining them, and sending them offsite. All of that has been eliminated.

    What needs improvement?

    Right now, we're using StorageGRID. Obviously, it is a challenge: anything you write to the cloud or get back from the cloud is a challenge. When we implemented StorageGRID, the nodes and so on, we implemented it on our own bare metal. The issue is that they're trying to implement features like erasure coding, and it is a huge challenge. It's still a challenge because we have a five-node bare-metal Docker implementation, so if you lose a node for some reason, it stops reading from it or writing to it. This is because of limitations within the infrastructure and within ONTAP.

    How it handles erasure coding is where I feel the improvement should be. Basically, it should be seamless. You don't want an underlying hardware issue to suddenly stop all reads and writes. Luckily, it's at a replication site, so our main production site is still working and writing to it, but the replication site has stopped right now while we try to bring that node back. Since we implemented on bare metal, not on an appliance, we had to go back to the original vendor. They didn't send the part in time, and we had a hardware memory issue. Then we had a hard disk issue, which brought the node down physically.

    It needs better reporting. Right now, we have to piece everything together just to figure out what the issue could be. We get a generic error saying, "This is an error," and we have to literally dig into it: talk to people, look at the log files, look through our logs and the Docker log files, and then verify, "Okay, this is the issue." We just want better alerting and error-handling reports. Once you get an error, you don't want to spend the first two hours trying to figure out what that error means; it should be clear right away so you can start working on it immediately. That's where we see the drawbacks. Overall, the product is good and serves its purpose, but as an administrator and architect, nothing is perfect.

    What do I think about the stability of the solution?

    There's always room for improvement. Overall, it's still stable.

    What do I think about the scalability of the solution?

    60 percent of our tape data is sitting in the cloud now.

    There's a limitation to scalability. Right now, when we want to expand the initial architecture, we have to add additional nodes just so it can handle the data without hurting performance. Then we have to go back and request more licensing, which adds to the cost. In terms of scalability, unless you have a five- to six-year plan ahead, you can't just say, "Great, we have run out of space; let's increase it." It's not like simply growing a volume.

    How are customer service and technical support?

    Unless a much more experienced person comes on, the front-line tech is only reading what he sees on the website. He pulls up their script or whatever, because when we open a case, an automatic case is already there with typical questionnaires, but nothing pertaining to the actual issue. For example, if you run out of space or inodes, technical support sits there asking us about something else, nothing to do with inodes or the volume being down or offline. It's not relevant; it's a generalized response. You have to sit down and explain, "This has nothing to do with the questions you're asking. It's out of context, so you might want to look again and get back with the proper input." That's a pain.

    However, the minute we say, "It's very critical," we see a good, solid SME on the line who is helping us.

    I'm not as experienced as many of my colleagues, and they're really frustrated. We did convey this concern to our account person and have seen a lot of change.

    Which solution did I use previously and why did I switch?

    The company has always been a NetApp shop even before I entered the company. We continue to use it because of the good products. We do market research, obviously. We do see good products, and every year there is improvement. When we want to do hardware upgrades, it's still very good. The way we are trying to develop, it's very seamless for us and not a pain. 

    We have never felt, "We are done with NetApp. Let's move onto something else." I love to introduce other vendors into the mix, just so it's not a monopoly. We still love NetApp as our primary.

    How was the initial setup?

    It is a little complex. It's completely different from regular, standard ONTAP in how you manage it, and there's a learning curve. Half the time you get confused and try to compare it with a standard cloud. You start to say, "Oh, this feature was here. How come it's not there? That was very good there. How come it's not here?"

    We used NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. The configuration wizards and its ability to automate the process were good. We liked it. It's all in one place, so you don't have to go around trying to use multiple tools just to get things worked out. You see what you have on the other side plus what you have on your end, and you're able to access it.

    What about the implementation team?

    Mostly, we did it ourselves. When we went to MetroCluster, we used their Professional Services. For the rest of ONTAP, we deployed it ourselves. It is pretty much self-explanatory and has good training.

    What's my experience with pricing, setup cost, and licensing?

    Cloud is cloud. It's still expensive. Any good solution comes with a price tag. That's where we are looking to see how well we can manage our data in the cloud by trying to optimize the costs.

    I know our licensing cost to some extent, but not fully. For example, I don't know overall how much we have gone over budget or where we cut costs just to maintain the licensing. That part I don't know.

    I know the licensing is a bit on the high-end. That's when we had to downsize our MetroCluster disks and just migrate to disks that were half used. We migrated into those just to reduce maintenance costs.

    Which other solutions did I evaluate?

    We use Caringo. It's object storage migration for age old data. It is a cheap solution for us, so that's why we use that. When we compared prices, Caringo was much cheaper.

    Once we migrated everything to Caringo, there were challenges because it's another vendor, and then you're working with two different vendors. We started having issues, so now we use StorageGRID.

    We chose NetApp because we already had the infrastructure. Adding additional resources and features into the mix is much easier because it's one vendor, and they understand the product. If we needed to add something and improve on the solution, it's much easier.

    What other advice do I have?

    I would recommend NetApp any day, at any time, because there's so much hard work in it. It's more open and transparent. Nobody from NetApp comes in saying, "We're going to sell this gimmick," where you see all the good stuff and then begin to realize, "This is not what they promised." For this reason, I would recommend NetApp.

    They make sure the solution fits our needs. It's not, "Okay, we'll go to the customer site and say whatever we feel like about our products," where it doesn't matter whether it fits or not once they're through the door. A lot of people do that. NetApp makes an assessment and then makes sure, "Okay, it does fit in."

    The product: I would give it an eight (out of 10). The company: It's a six (out of 10).

    We have not yet implemented the solution to move data between hyperscalers and our on-premises environment. It's just from our NetApp systems to the cloud, not hybrid. The RVM team is planning on that, so they can have the whole thing placed in the cloud untouched rather than hosted on our data stores.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Lead Storage Engineer at a insurance company with 5,001-10,000 employees
    Real User
    Enables us to manage multiple petabytes of storage with a small team, including single node and HA instances
    Pros and Cons
    • "Unified Manager, System Manager, and Cloud Manager are all GUI-based. It's easy for somebody who has not been exposed to this for years to pick it up and work with it."
    • "We use the mirroring to mirror our volumes to our DR location. We also create snapshots for backups. Snapshots will create a specified snapshot to be able to do a DR test without disrupting our standard mirrors. That means we can create a point-in-time snapshot, then use the ability of FlexClones to make a writeable volume to test with, and then blow it away after the DR test."
    • "Some of the licensing is a little kludgy. We just created an HA environment in Azure and their licensing for SVMs per node is a little kludgy. They're working on it right now."

    What is our primary use case?

    For the most part, we're using it to move data off-prem. We have the ability to do mirrors from on-prem to Cloud Volumes ONTAP and we also have both single-node instances and HA instances. We are running it in both AWS and Azure.

    We're using all of the management tools that go along with it. We're using both OnCommand Cloud Manager and OnCommand Unified Manager, which means we can launch System Manager as well.

    Unified Manager is what monitors the environment. OnCommand Cloud Manager allows you to deploy and it does have some monitoring capabilities, but it's not like Unified Manager. And from OnCommand Cloud Manager you can launch System Manager, which gives you the lower-level details of the environment.

    Cloud Manager will allow you to create volumes, do CIFS shares, NFS mounts, and create aggregates. But the rest of the networking components and other work for the SVMs and doing other configurations are normally done at that lower level. System Manager is where you would do that, whereas Unified Manager allows you to monitor the entire environment.

    Say I have 30 instances running out there. Unified Manager allows me to monitor all 30 instances for things like volume-full alerts, near-volume-full alerts, inodes full, network components being offline, paths, back-end storage paths, and aggregates full. All the items you would want to monitor for a healthy environment are handled through Unified Manager.

    How has it helped my organization?

    We're sitting at multiple petabytes of storage on our NetApp infrastructure. We're talking hundreds of thousands of shares across thousands of volumes. Even with that size of infrastructure, it's being supported by three people. And it's not like we're working 24/7. It gives us the ability to do a lot, to do more with less. Those three people manage our entire NAS environment. I've got two intermediate and one senior storage engineer in our environment who handle things. They're handling those multiple petabytes of on-prem and I'm just starting to get them involved in the cloud version, Cloud Volumes ONTAP. So, for the most part, it's just me on the Cloud Volume side.

    In terms of the storage efficiency reducing our storage footprint, the answer I'd like to say is "yes." The problem I have is that nobody ever wants to delete anything. We have terabytes of data on-prem in multiple locations, in both primary and DR backed-up. And now, we're migrating it to the cloud. But eventually, the answer will be yes.

    What is most valuable?

    I'm very familiar with working from the command line, but Unified Manager, System Manager, and Cloud Manager are all GUI-based. It's easy for somebody who has not been exposed to this for years to pick it up and work with it. Personally, for the most part, I like to get in with my secure CRT and do everything from the command line.

    We do a lot of DR testing of our environment, so we're using a couple of components. We use Unified Manager to link with WFA, Workflow Automation, and we do scripted cut-overs to build out. We use the mirroring to mirror our volumes to our DR location. We also create snapshots for backups. Snapshots will create a specified snapshot to be able to do a DR test without disrupting our standard mirrors. That means we can create a point-in-time snapshot, then use the ability of FlexClones to make a writeable volume to test with, and then blow it away after the DR test.

    We could also do that in an actual disaster. All we would do is quiesce and break our mirrors, our volumes would become writeable, and then we would deploy our CIFS shares and our NFS mounts. We would have a full working environment in a different geographic location. Whether you're doing it on-prem or in the cloud, those capabilities are there. But that's all done at a lower level.
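
    The snapshot-plus-FlexClone flow described above can also be scripted. The sketch below is a minimal, illustrative version using the ONTAP REST API: it takes a point-in-time snapshot of the mirrored volume and then builds a writable FlexClone from it for a DR test. The cluster address, credentials, SVM and volume names, and the clone payload fields are assumptions, not a definitive implementation; check the REST API reference before relying on anything like this.

```python
# Rough sketch of a DR-test flow: snapshot a mirrored volume, then create a
# writable FlexClone backed by that snapshot. All names are hypothetical.
import requests

HOST = "dr-cluster.example.com"        # hypothetical DR cluster
AUTH = ("admin", "password")
SVM, VOLUME = "svm_dr", "app_data_dr"  # hypothetical SVM and mirrored volume

volumes_api = f"https://{HOST}/api/storage/volumes"

# Find the mirrored volume's UUID.
vol = requests.get(volumes_api, params={"name": VOLUME, "svm.name": SVM},
                   auth=AUTH, verify=False).json()["records"][0]

# 1. Point-in-time snapshot for the DR test.
snap_name = "dr_test_snap"
requests.post(f"{volumes_api}/{vol['uuid']}/snapshots",
              json={"name": snap_name}, auth=AUTH,
              verify=False).raise_for_status()

# 2. Writable FlexClone backed by that snapshot (pointer-based, space-efficient).
clone_payload = {
    "name": f"{VOLUME}_drtest",
    "svm": {"name": SVM},
    "clone": {
        "is_flexclone": True,
        "parent_volume": {"name": VOLUME},
        "parent_snapshot": {"name": snap_name},
    },
}
requests.post(volumes_api, json=clone_payload, auth=AUTH,
              verify=False).raise_for_status()
print(f"FlexClone {VOLUME}_drtest created; delete it when the DR test is done")
```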

    The data protection provided by the Snapshot feature is a crucial part of being able to maintain our environment. We stopped doing tape-based backups of our NAS systems. We do 35 days of snapshots: we keep four hourlies, two dailies, and 35 nightly snapshots. This gives us the ability to recover any data that's been accidentally deleted or corrupted, from an application perspective, and pull it out of a snapshot. And then there are the point-in-time snapshots, being able to create one at a given point in time. If I want to use a FlexClone to get at data, which is just pointers to the back-end data, right now, and use that as a writeable volume without interrupting my backup and DR capabilities, those point-in-time snapshots are crucial.

    The user can go and recover the file himself, so we don't have to have a huge number of people working on recovering things. The user has the ability to get to that snapshot location to recover the file and go back however many days. Because it's read-only to the user community, users can get at that data as long as they have the proper rights to the file. Somebody else could not get to a file for which they don't have rights, so there's no security breach or vulnerability. It just gives a user who owns data the ability to get to a backup copy of that data and recover it, in case they've deleted it or had a file corruption.

    We also use their File Services Solutions in the cloud, CIFS and NFS. It works just as well as on-prem. The way we configure an environment, we have the ability to talk back to our domain controllers, and then it uses the standard AD credentials and DNS from our on-prem environments.

    Cloud Volumes ONTAP in the cloud, versus Data ONTAP on-prem, are the exact same products. If you have systems on-prem that you're migrating to the cloud, you won't have to retrain your workforce because they'll be used to everything that they'll be doing in the cloud as a result of what they've been doing on-prem. In that sense, Cloud Volumes ONTAP is the exact same product, unless you're using a really old version of Data ONTAP on-prem. Then there's the standard change between Data ONTAP versions.

    What needs improvement?

    Some of the licensing is a little kludgy. We just created an HA environment in Azure and their licensing for SVMs per node is a little kludgy. They're working on it right now. We're working with them on straightening it out.

    We're moving a grid environment to Azure, and the way it was set up is that we have eight SVMs, which are virtual environments. Each of those has its own CIFS servers and all their CIFS and NFS mounts. The reason they're independent of one another is that different business groups got pulled together, so they had specific CIFS share names, and you can't have the same name on the same server more than once on the network. You can't have a CIFS share called "Data" twice in the same SVM. We have eight SVMs because of the way the data was labeled in the paths; God forbid you change a path, because that breaks everything in every application all down the line. It gives you the ability to port existing applications from on-prem into the cloud and/or from on-prem into fibre infrastructure.

    But that ability wasn't there in Cloud Volumes ONTAP because they assume that it was going to be a new market and they licensed it for a single SVM per instance built out in the cloud. They were figuring: New market and new people coming to this, not people porting these massive old-volume infrastructures. In our DR infrastructure we have 60 SVMs. That's not how they build out the new environments. 

    We're working with them to improve that and they're making strides. The licensing is the only thing that I can see they can improve on and they're working on it, so I wouldn't even knock them on that.

    For how long have I used the solution?

    I've been using it since its inception. Prior to it being called Cloud Volumes ONTAP, it was named a couple of different things as it went along. I've been working with the on-prem Data ONTAP for about 16 years now. When they first offered the Cloud Volumes ONTAP, I started testing that out in a Beta program. It's been a few years now with Cloud Volumes ONTAP. I'm our lead storage engineer, but I'm also on a couple of our cloud teams and I'm a cloud administrator for our organization. We started looking at it when AWS first started coming on the scene, at what we could do in the cloud. And as a company direction, we're implementing cloud-first, where available.

    What do I think about the stability of the solution?

    We've had no issues.

    What do I think about the scalability of the solution?

    In an HA environment, it will scale up to 358 terabytes. That's not bad per-system. We've had no difficulties.

    We will be moving more stuff off-prem into the cloud. Right now it's at about 15 percent of our entire environment, and we plan on at least 10 percent, or more, per quarter, over the next few years.

    We'll be doing the tiering and using the Cloud Sync as well. We're a financial and insurance company, so some things have to remain on-prem, and some things, from a PCI perspective, have a lot of different requirements around them. And because we're across multiple countries worldwide, there are all sorts of HIPAA and other types of legal and financial ramifications from a security perspective. In the UK and in Europe there are the privacy components. There are different things in Hong Kong and Singapore, in Spain, etc. Each country unit requires different types of policies to be adhered to. Everything we have is encrypted at rest, as well as encrypted in-flight.

    Cloud Volumes ONTAP will also support doing data encryption at a volume level, a software encryption. But from a PCI perspective, we use the NSE drives, which give us hardware encryption. So they're double encrypted. They are hardware encrypted. We're having to use a management appliance to keep and maintain the encryption keys, and we do quarterly encryption-key replacement. But there are also the volumes that are encrypted as well. We also use TLS for transporting the data, doing encryption in-flight. There are all sorts of things that it supports which allow you to be compliant.

    Another feature it has is disk sanitize, a destruction component which allows you to do a DoD wipe of the data. Once you've decommissioned an environment, it is completely wiped so nobody can get access to the data that was there previously. That's all built into Data ONTAP, including Cloud Volumes.

    NSE drives are a little different because you are not getting physical drives in the cloud environment, so you couldn't do that. But you can do the volume encryption, from Cloud Volumes. In terms of a DoD wipe, you wouldn't be doing that on Azure's or AWS's environments because it's a virtual disk.

    How are customer service and technical support?

    I've rarely used tech support. I've got so much experience deploying these environments that it's like breathing. It's second nature. And when they first came out with OnCommand Cloud Manager, I was doing beta testing and debugging with the group out of Israel to build the product.

    How was the initial setup?

    The initial setup was very straightforward. If you use OnCommand Cloud Manager to deploy it into AWS or Azure, it's point-and-click stupid-simple. It takes less than 15 minutes, depending upon your connectivity and bandwidth. That 15 minutes is to build out a brand-new filer and create CIFS shares on it. It automatically deploys everything for you: the back-end storage and the EC2 instances if you're in AWS; in Azure, it creates the Blob space and the VMs.

    It's all done for you with just a couple of screens. You tell it what you want to call it, you tell it what account or subscription you're using, depending upon whether it's AWS or Azure. You tell it how big you want the device to be, how much storage you want it to have, and what volumes you want it to create; CIFS shares, etc. You click next, next, next. As long as you have the ability to provision what you've gone into, whether it's AWS or Azure, and turned on programmatic deployment, it gives you the access. The only thing you have to do outside Cloud Volumes ONTAP under OnCommand Cloud Manager is turn it on to allow it to run. It picks up everything else. It'll pick up what VPC you have, what subnet you have. You just tell it what security group you want it to use. It's fairly simple.

    If somebody hasn't utilized or isn't familiar with how to deploy anything in either AWS or Azure, it might be a tad more complicated because they'd need to get that information to begin with. You have to have at least moderate experience with your infrastructure to know which VPC and subnet and security group to specify.

    What was our ROI?

    In my opinion, we're getting a good return on investment.

    Which other solutions did I evaluate?

    I always try new products. I've used the SoftNAS product, and a couple of other generic NAS products. They don't even compare. They're not on the same page. They're not even in the same universe. I might be a little biased but they're not even close. 

    I have looked at Azure NetApp Files, which is another product that NetApp is putting out. Instead of Cloud Volumes it's cloud files. You don't have to deploy an entire NetApp infrastructure. It gives you the ability to do CIFS at file level without having to manage any of the overhead. That's pre-managed for you.

    What other advice do I have?

    For somebody who's never used it before, the biggest thing is ease of use. In terms of advice, as long as you design your implementation correctly, it should be fine. I would do the due diligence on the front-end to determine how you want to utilize it before you deploy.

    We have over 3,000 users of the solution who have access to snapshots, etc. but only to their own data. We have multiple SVMs per business unit and a locked-down security on that. Only individuals who own data have access to it. We are officially like a utility. We give them storage space. We give them the ability to use it and then they maintain their data. From an IT perspective, we can't really discern what is business-critical and what isn't to a specific business unit. We're global, we're not just U.S., we're all over the world.

    We've gone into doing HA. It's the same as what's on-prem, and HA on-prem is something we've always done. When we would buy a filer for on-premise, we'd always buy a two-node HA filer with a switch back-end to be able to maintain the environment. The other nice thing, from an on-prem perspective with a switched environment, is that we can inject and eject nodes. We can do a zero-downtime lifecycle. We can inject new nodes and mirror the data to the new nodes. Once everything's on those new nodes, eject the old nodes and we will have effectively lifecycled the environment, without having to take any downtime. Data ONTAP works really well for that. The only thing to be aware of is that to inject new nodes into an existing cluster, they have to be at the same version of Data ONTAP.

    In terms of provisioning, we keep that locked down because we don't want them running us out of space. We have a ticketing system where users request storage allocation and the NAS team, which supports the NetApp infrastructure, will allocate the space with the shares, to start out. After that, our second-level support teams, our DSC (distributed service center) will maintain the volumes from a size perspective. If something starts to get near-full, they will automatically allocate additional space. The reason we have that in place is that if it tries to grow rapidly, like if there's an application that's out of control and just keeps spinning up and eating more and more of the utilization, it gives us the ability to stop that and get with the user before they go from using a couple a hundred gigs to multiple terabytes, which would cost them X amount. There is the ability to auto-grow. We just don't use it in our environment.

    In terms of the data protection provided by the solution's disaster recovery technology, we use that a lot. Prior to clustered ONTAP - this is going back to 7-Mode - there was the ability to auto-DR with a single command. That gave us the ability to do a cut-over to another environment and automatically fail. We're currently using WFA to do that because, when they first came out with cluster mode, they didn't have the ability to auto-DR. I have not looked into whether they've made auto-DR a feature in these later versions of Data ONTAP.

    OnCommand Cloud Manager doesn't allow you to do DR-type stuff. There are other things within the suite of the cloud environment that you can do: There's Cloud Sync which allows you to create a data broker and sync between CIFS shares or NFS mounts into an S3 bucket back-end. There's a lot of stuff that you can do there, but that's getting into the other product lines.

    As for using it to deploy Kubernetes, we are working through that right now. That process is going well. We've really just started getting through it and it hasn't been overly complicated. Cloud Volumes ONTAP's capabilities for deploying Kubernetes mean it's been fairly easy.

    In terms of the cloud, one thing that has made things a little easier is that previously, within the AWS environment, we used to have to create a virtual filer in each of our subscriptions or accounts because we really wanted the filer to be close to the database instances or the servers within that same account, without traversing VPCs. Now, since they have given us the ability to do VPC peering, we can create an overarching primary account and then have it talk to all the instances within that storage account, or subscription in Azure, without having to have one spun up in every single subscription or account. We have a lot of accounts so it has allowed us to reel that back by creating larger HA components in a single account and then give access through VPCs to the other accounts. All that traffic stays within Azure or AWS. That saves money because we don't have to pay them for multiple subscriptions of Cloud Volumes ONTAP and/or additional virtual filers.

    For my use, Cloud Volumes ONTAP is a ten out of ten.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Service Architecture at All for One Group AG
    Real User
    High availability enables us to run two instances so there is no downtime when we do maintenance
    Pros and Cons
    • "NetApp's Cloud Manager automation capabilities are very good because it's REST-API-driven, so we can completely automate everything. It has a good overview if you want to just have a look into your environment as well."
    • "Another feature which gets a lot of attention in our environment is the File Services Solutions in the cloud, because it's a completely, fully-managed service. We don't have to take care of any updates, upgrades, or configurations."
    • "Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs on one cluster, for example, or to increase the single instances more, from a performance perspective. It would be good to get more performance out of a single HA pair."
    • "One difficulty is that it has no SAP HANA certification. The asset performance restrictions create challenges with the infrastructure underneath: The disks and stuff like that often have lower latencies than SAP HANA itself has to have."

    What is our primary use case?

    The primary use case is for SAP production environments. We are running the shared file systems for our SAP systems on it.

    How has it helped my organization?

    It's helped us to dive into the cloud very fast. We didn't have to change any automations which we already had. We didn't have to change any processes we already had. We were able to adopt it very fast. It was a huge benefit for us to use the same concepts in the cloud as we do on-premise. We're running our environment very efficiently, and it was very helpful that our staff, our operators, didn't have to learn new systems. They have the same processes, all the same knowledge they had before. It was very easy and fast.

We did a comparison, of course, and it was cheaper to have Cloud Volumes ONTAP running with the deduplication and compression, compared to storing everything, for example, on HA disks and having a server running all the time as well. And that was not even for the biggest environment.

The data tiering saves us money because it offloads all the cold data to Blob Storage. However, we use the HA version, and data tiering only came to HA with version 9.6, which we are not running in our production environment because it's still an RC, the pre-release, and not a GA release. In our testing we have seen that it saves a lot of money, but our production systems are not there yet.
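For context on how the tiering itself is switched on, it is set per volume as a tiering policy; the sketch below shows what that can look like through the ONTAP 9 REST API. The "tiering.policy" field name is assumed from the public docs, and the host, credentials, and volume name are placeholders.

```python
# Minimal sketch: set a volume's tiering policy via the ONTAP 9 REST API so
# cold blocks are offloaded to the object tier (Blob Storage in Azure).
# Field name, host, credentials, and volume name are assumptions/placeholders.
import requests

ONTAP_HOST = "cvo-ha.example.com"   # placeholder
AUTH = ("admin", "password")        # placeholder credentials

def set_tiering_policy(volume_name: str, policy: str = "auto") -> None:
    base = f"https://{ONTAP_HOST}/api/storage/volumes"

    vol = requests.get(base, params={"name": volume_name},
                       auth=AUTH, verify=False).json()["records"][0]

    # "auto" tiers cold data blocks; "snapshot-only" tiers only snapshot blocks.
    requests.patch(f"{base}/{vol['uuid']}", json={"tiering": {"policy": policy}},
                   auth=AUTH, verify=False).raise_for_status()

if __name__ == "__main__":
    set_tiering_policy("sap_shared_vol", "auto")
```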

    What is most valuable?

    The high availability of the service is a valuable feature. We use the HA version to run two instances. That way there is no downtime for our services when we do any maintenance on the system itself.

For normal upgrades or updates of the system - updates for security fixes, for example - it helps that the systems and the service itself stay online. For one of our customers, we have 20 systems attached, and if we had to go to that customer every time and say, "Oh, sorry, we have to take your 20 systems down just because we have to do maintenance on your shared file systems," he would not be amused. So that's really a huge benefit.

    And there are the usual NetApp benefits we have had over the last ten years or so, like snapshotting, cloning, and deduplication and compression which make it space-efficient on the cloud as well. We've been taking advantage of the data protection provided by the snapshot feature for many years in our on-prem storage systems. We find it very good. And we offload those snapshots as well to other instances, or to other storage systems.

The provisioning capability was challenging the first time we used it. You have to find the right way to deploy but, after the first and second try, it was very easy to automate for us. We are highly automated in our environment so we use the REST API for deployment. We completely deploy the Cloud Volumes ONTAP instance itself automatically when we have a new customer. Similarly, deploying volumes on the Cloud Volumes ONTAP instance and setting up access to it are automated as well.

    But for that, we still use our on-premise automations with WFA (Workflow Automation). NetApp has a tool which simplifies the automation of NetApp storage systems. We use the same automation for the Cloud Volumes ONTAP instances as we do for our on-premise storage systems. There's no difference, at the end of the day, from the operating system standpoint.

    In addition, NetApp's Cloud Manager automation capabilities are very good because, again, it's REST-API-driven, so we can completely automate everything. It has a good overview if you want to just have a look into your environment as well. It's pretty good.
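To give a flavor of what that automation looks like, the sketch below drives Cloud Manager over REST to list working environments and provision a volume. The endpoint paths, payload fields, and token shown are hypothetical placeholders rather than the documented Cloud Manager API, so treat this purely as an outline of the pattern.

```python
# Hypothetical sketch of the automation pattern: call Cloud Manager's REST
# interface to list Cloud Volumes ONTAP working environments and create a
# volume for a new customer. Endpoints, payloads, and the token are
# placeholders for illustration, not the documented API contract.
import requests

CLOUD_MANAGER = "https://cloudmanager.example.com"   # placeholder
HEADERS = {"Authorization": "Bearer <token>",        # placeholder token
           "Content-Type": "application/json"}

def list_working_environments() -> list:
    resp = requests.get(f"{CLOUD_MANAGER}/occm/api/working-environments",
                        headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def create_volume(working_env_id: str, name: str, size_gb: int) -> None:
    payload = {"workingEnvironmentId": working_env_id,
               "name": name,
               "size": {"size": size_gb, "unit": "GB"}}
    resp = requests.post(f"{CLOUD_MANAGER}/occm/api/volumes",
                         headers=HEADERS, json=payload)
    resp.raise_for_status()

if __name__ == "__main__":
    envs = list_working_environments()
    print(f"Managing {len(envs)} working environments")
```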

    Another feature which gets a lot of attention in our environment is the File Services Solutions in the cloud, because it's a completely, fully-managed service. We don't have to take care of any updates, upgrades, or configurations. We're just using it, deploying volumes and using them. We see that, in some way, as being the future of storage services, for us at least: completely managed.

    What needs improvement?

    Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs on one cluster, for example, or to increase the single instances more, from a performance perspective. It would be good to get more performance out of a single HA pair. My guess is that those will be the next challenges they have to face.

One difficulty is that it has no SAP HANA certification. The asset performance restrictions create challenges with the infrastructure underneath: the disks and similar components often can't deliver the low latencies that SAP HANA requires. That was something of a challenge for us: where to use HA disks and where to use Cloud Volumes ONTAP in that environment, instead of just using Cloud Volumes ONTAP.

    For how long have I used the solution?

    We've been using Cloud Volumes for over a year now.

    What do I think about the stability of the solution?

    The stability is very good. We haven't had any outages.

    What do I think about the scalability of the solution?

Right now, what the solution provides is sufficient for us in terms of scalability, but we can see that our customer environments are growing. We can see that it will reach its performance limit in around a year or so. They will have to make performance improvements or build some scale-up/scale-out capabilities into it.

In terms of increasing our usage, the tiering will definitely be used in production as soon as it's GA for Azure. They're already playing with the Ultra SSDs, for performance improvements on the storage system itself. As soon as those become generally available from Microsoft, that will probably be a feature we'll go to.

    As for end-users, for us they are our customers. But the customers have several hundred or 1,000 users on the system. I don't really know how many end-users are ultimately using it, but we have about ten customers.

    How are customer service and technical support?

    Technical support has been very good. The technical people who are responsible for us at NetApp are very good. If we contact them we get direct feedback. We often have direct contact, in our case at least, to the engineers as well. We have direct contacts with NetApp in Tel Aviv.

    It's worth mentioning that when we started with Cloud Volumes ONTAP in the past, we did an architecture workshop with them in Tel Aviv, to tell them what our deployments look like in our on-premise environment, and to figure out what possibilities Cloud Volumes ONTAP could provide to us as a service provider. What else could we do on it, other than just running several services? For example: disaster recovery or doing our backups. We did that at a very early stage in the process.

    Which solution did I use previously and why did I switch?

    We only used native Azure services. We went with Cloud Volumes ONTAP because it was a natural extension of our NetApp products. We have a huge on-premise storage environment from NetApp and we have been familiar with all the benefits from these storage systems for several years. We wanted to have all the benefits in the cloud, the same as we have on-premise. That's why we evaluated it, and we're in a very early stage with it.

    How was the initial setup?

To say the initial setup was complex would be too strong. We had to look into it and find the right way to do it. It wasn't that complex; it was just a matter of understanding what was and wasn't supported from the SAP side. As soon as we figured that out, it was very straightforward to work out how to build our environment.

We had an implementation strategy: determining which SAP systems and which services we would like to deploy in the cloud. Our strategy was that if Cloud Volumes ONTAP made sense for a use case, we would want to use it because, again, it's highly automated and we could use it with our existing scripting. Then we had to look at what is supported by SAP itself. We mixed that together in the end and that gave us our concept.

    Our initial deployment took one to two weeks, maximum. It required two people, in total, but it was a mixture of SAP and storage colleagues. In terms of maintenance, it doesn't take any additional people than we already have for our on-premise environment. There was no additional headcount for the cloud environment. It's the same operating team and the same people managing Cloud Volumes ONTAP as well as our on-premise storage systems. It requires almost no maintenance. It just runs and we don't have to take care of updating it every two months or so for security reasons.

    What about the implementation team?

We didn't use a third party.

    What was our ROI?

    We have seen return on investment but I don't have the numbers. 

    What's my experience with pricing, setup cost, and licensing?

The standard pricing is online. Pricing depends. If you're using the PayGo model, then it's just the normal costs on the Microsoft page. If you're using Bring Your Own License, which is what we're doing, then you work with your sales contact at NetApp to figure out the best price, in the end, for your company. We have an Enterprise Agreement, or something similar to that, so we get a different price for it.

In terms of additional costs beyond the standard licensing fees, you have to run instances in Azure: virtual machines and disks. You still have to pay for the Azure disks, and for Blob Storage if you're using tiering. What's also important to know is the network bandwidth. That was the most complicated part of our project: figuring out how much data would be streamed out of our data center into the cloud and how much data would have to be sent back into our data center. It's more challenging than if you have a customer who is running only in Azure. It can be expensive if you don't have an eye on it.
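As a back-of-the-envelope illustration of why the bandwidth question matters, the sketch below estimates the monthly egress charge for a given daily data flow out of the cloud. The per-GB rate is a made-up assumption for the example, not a quoted price; use your provider's current price list.

```python
# Back-of-the-envelope sketch: estimate monthly egress cost for data streamed
# from the cloud back to the on-premise data center. The per-GB rate is an
# illustrative assumption, not a real price.
EGRESS_RATE_PER_GB = 0.08   # assumed $/GB for data leaving the cloud provider

def monthly_egress_cost(gb_per_day_out: float, days: int = 30) -> float:
    """Rough monthly cost of outbound traffic at a flat assumed rate."""
    return gb_per_day_out * days * EGRESS_RATE_PER_GB

if __name__ == "__main__":
    # e.g. ~200 GB/day flowing back on-premise would be roughly $480/month
    print(f"${monthly_egress_cost(200):,.2f} per month")
```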

    Which other solutions did I evaluate?

    We have a single-vendor strategy.

    What other advice do I have?

    Don't be afraid of granting permissions because that's one of the most complex parts, but that's Azure. As soon as you've done that, it's easy and straightforward. When you do it the first time you'll think, "Oh, why is it so complicated?" That's native Azure.

The biggest lesson I've learned from using Cloud Volumes ONTAP is that, from an optimization standpoint, our on-premise instance was a lot more complex than it had to be. That was a big lesson, because Cloud Volumes ONTAP is a very easy, lightweight service. You just use it and it doesn't require that much configuring. You can just use the standards which come from NetApp, and that was something we didn't do with our on-premise environment.

In terms of disaster recovery, we have not used Cloud Volumes ONTAP in production yet. We've tested it to see if we could adopt Cloud Volumes ONTAP for that scenario: to migrate all our workloads, or all of the storage footprint we have on-premise, to Cloud Volumes ONTAP. We're still evaluating it. We've done a lot of cost comparison, which looks pretty good. But we are still facing a little technical problem because we're a CSP (cloud service provider). We're working with Microsoft to get that fixed. It's a Microsoft issue, not a NetApp Cloud Volumes ONTAP issue.

    I would rate the solution at eight out of ten. There are improvements they need to make for scale-up and scale-out.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.