Updated: November 2022
NetApp All Flash FAS (AFF) is a storage infrastructure that contains only flash memory drives instead of spinning-disk drives. NetApp All Flash FAS (AFF) addresses enterprise storage requirements with best-in-class data management, higher performance, and flexibility. All Flash FAS systems accelerate your operations without sacrificing the effectiveness, dependability, or adaptability of your IT infrastructure, since they are built on ONTAP data management software. As an enterprise-grade all-flash array, the NetApp All Flash FAS (AFF) manages and safeguards your critical data, making your datacenter’s transition to flash simple and painless.
NetApp All Flash FAS (AFF) is a robust, scale-out platform designed for virtualized environments and can be deployed as an independent system or a high-performance tier in a NetApp ONTAP configuration.
NetApp All Flash FAS Features
NetApp AFF has many valuable key features. Some of the most useful ones highlighted by reviewers below include SnapMirror and SnapVault replication, FlexClone copies, thin provisioning, and inline deduplication and compression.
NetApp All Flash FAS Benefits
There are many benefits to implementing NetApp AFF. Some of the biggest advantages reviewers cite include lower latency, a smaller footprint for space, power, and cooling, and nondisruptive failovers and upgrades.
Reviews from Real Users
NetApp AFF (All Flash FAS) stands out among its competitors for a number of reasons. Two major ones are its handling of read and write operations and its high-capacity drives. PeerSpot users take note of the advantages of these features in their reviews:
Harish M., Senior Consultant at a tech services company, writes of the solution, “The most valuable feature is its ability to handle high-intensity read and write operations. It works very well in terms of this.” He adds, “We recently started using the volume encryption feature, which is helpful because there are some federal projects that require data at rest to be encrypted.”
Another PeerSpot reviewer, a director at an IT Infrastructure Services department at a university, notes, “In terms of the footprint, it is far more efficient. It has smaller, higher-capacity drives than our older unit. In terms of space, power, and cooling, it has simplified things.”
NetApp AFF (All Flash FAS) was previously known as NetApp All Flash FAS, NetApp AFF, NetApp Flash FAS.
Sample customers: Acibadem Healthcare Group, AmTrust Financial Services, Citrix Systems, DWD, Mantra Group
We primarily use it for storage for VMs and backup units.
We use this solution on a daily basis. In Sweden, typically small to medium-sized companies use this solution.
MetroCluster, SnapMirror, and ease of use are the most valuable features for us.
Their backup software could be improved.
In the next release, I would like to see complete S3 protocol support. I would also like better compatibility and integration with VMware.
I have been using AFF since its release.
Nowadays, AFF is very scalable, ever since they implemented Cluster Mode. I think it's very easy to scale, both up and out. It's also very stable.
They provide different types of support. When an incident happens that impacts your business, they respond very fast and provide very good help. Sometimes, when you have problems with their software, it can take a long time; that should be improved. Overall, their core components, the operating system and the storage controller, are very strong.
The initial setup is very simple. How much time it takes depends on the size and what the initial setup should be. It can be a long process.
We do everything from the initial setup to the integration with system backups, the whole chain, including the hardware, the software, the daily work, and the daily administration.
It depends on how you look at things, but they are in a higher price range.
They have different license models. You can get a license model where everything is included, but you can also purchase more licensing and buy what you need. It really depends on what you buy.
I would absolutely recommend this solution to other companies.
Our primary use case for AFF is to host our internal file shares for all of our company's "F" drives, which is what we call them. All of our CIFS and NFS are hosted on our AFF system right now.
We've been using AFF for file shares for about 14 years now. So it's hard for me to remember how things were before we had it. For the Windows drives, they switched over before I started with the company, so it's hard for me to remember before that. But for the NFS, I do remember that things were going down all the time and clusters had to be managed like they were very fragile children ready to fall over and break. All of that disappeared the moment we moved to ONTAP. Later on, when we got into the AFF realm, all of a sudden performance problems just vanished because everything was on flash at that point.
Since we've been growing up with AFF, through the 7-Mode to Cluster Mode transition, and the AFF transition, it feels like a very organic growth that has been keeping up with our needs. So it's not like a change. It's been more, "Hey, this is moving in the direction we need to move." And it's always there for us, or close to being always there for us.
One of the ways that we leverage data now, that we wouldn't have been able to do before — and we're talking simple file shares. One of the things we couldn't do before AFF was really search those things in a reasonable timeframe. We had all this unstructured data out there. We had all these things to search for and see: Do we already have this? Do we have things sitting out there that we should have or that we shouldn't have? And we can do those searches in a reasonable timeframe now, whereas before, it was just so long that it wasn't even worth bothering.
AFF thin provisioning allows us to survive. Every volume we have is over-provisioned and we use thin provisioning for everything. Things need to see they have a lot of space, sometimes, to function well, from the file servers to VMware shares to our database applications spitting stuff out to NFS. They need to see that they have space even if they're not going to use it. Especially with AFF, because there's a lot of deduplication and compression behind the scenes, that saves us a lot of space and lets us "lie" to our consumers and say, "Hey, you've got all this space. Trust us. It's all there for you." We don't have to actually buy it until later, and that makes it function at all. We wouldn't even be able to do what we do without thin provisioning.
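In ONTAP terms, this kind of over-provisioning comes down to creating volumes with no space guarantee. As a rough sketch (the SVM, aggregate, and volume names here are hypothetical, and exact options can vary by ONTAP release):

```
::> volume create -vserver svm1 -volume app_vol -aggregate aggr1 -size 20TB -space-guarantee none -junction-path /app_vol
::> volume show -vserver svm1 -volume app_vol -fields size,space-guarantee,percent-used
```

Clients see the full 20 TB, but physical blocks are only consumed as data is actually written, and only after deduplication and compression have done their work.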
AFF has definitely improved our response time. I don't have data for you — nothing that would be a good quote — but I do know that before AFF, we had complaints about response time on our file shares. After AFF, we don't. So it's mostly anecdotal, but it's pretty clear that going all-flash made a big difference in our organization.
AFF has probably reduced our data center costs. It's been so long since we considered anything other than it, so it's hard to say. I do know that doing some of the things that we do without AFF would certainly cost more, because we'd have to buy more storage to pull them off. So with AFF dedupe and compression, and the fact that it works so well on our files, I think it has probably saved us at least ten to twenty percent versus other solutions, if not far more.
The most valuable feature on AFF, for me as a user, is one of the most basic NetApp features:
A user comes to you and says, "I need more space."
"Okay, here, you have more space."
I don't have to move things around. I don't have to deal with other systems. It's just so nice.
Other things that have been really useful, of course, are the clustering features and being able to stay online during failovers and code upgrades; and just being able to seamlessly do all sorts of movement of data without having to disrupt end-users' ability to get to those files. And we can take advantage of new shelves, new hardware, upgrade in place. It's kind of magic when it comes to doing those sorts of things.
The simplicity of AFF with regards to data management and data protection — I actually split those two up. It's really easy to protect your data with AFF. You can set up SnapMirror in a matter of seconds and have all your data just shoot over to another data center super quickly.
But I find some issues with other administrators on my team when it comes to management of the data because they have to either learn a CLI, which some of them really don't like to do — to really get into managing how volumes should be moved or to edit permissions and stuff like that. Or they go into a user interface, which is fine, it's web-based, but it's not the most intuitive interface as far as finding the things you need to do, especially when they get complicated. Some things just hide in there and you have to click a few levels deep before you can actually do what you need to do.
I think they're working on improving that in the latest versions of ONTAP, so we're excited to see where that's going to go. But we haven't really tried it out yet.
One of the areas that the product can improve is definitely in the user interface. We don't use it for SAN, but we've looked at using it for SAN and the SAN workflows are really problematic for my admins, and they just don't like doing SAN provisioning on that app. That really needs to change if we're going to adopt it and actually consider it to be a strong competitor versus some of the other options out there.
As far as other areas, they're doing really great in the API realm. They're doing really great in the availability realm. They just announced the all-SAN product, so maybe we'll look at that for SAN.
But a lot of the improvements that I'd like to see around AFF go with the ancillary support side of things, like the support website. They're in the middle of rolling this out right now, so it's hard to criticize because next month they're going to have new stuff for me to look at. But tracking bugs on there and staying in touch with support and those sorts of things need a little bit of cleanup and improvement. Getting to your downloads and your support articles, that's always a challenge with any vendor.
I would like to see ONTAP improve their interfaces; like I said, the web one, but also the CLI. That could be a much more powerful interface for users to do a lot of scripting right in the CLI without needing third-party tools, without necessarily needing Ansible or any of those configuration management options. If they pumped up the CLI by default, users could see that NetApp has got us covered all right here in one interface.
That said, they're doing a lot of work on integrations with other tools like Ansible and I think that might be an okay way to go. We're just not really there yet.
We've been using AFF for file shares for about 14 years now.
The stability of AFF has actually been great. This is one of the areas where it has improved over time. During the Cluster Mode transition, there were some rocky periods here and there. Nothing serious, but you'd do a code upgrade and: "Oh, this node is being a little cranky." As they've moved to their newer, more frequent, deployment model of every six months, and focused more on delivering a focused release during that six months — instead of throwing in a bunch of features and some of them causing instability — the stability of upgrades and staying up has just improved dramatically. It's to the point where I'm actually taking new releases within a month of them coming out, whereas on other platforms that we have, we're scared to go within three months of them coming out.
Scalability on AFF is an interesting thing. We use CIFS and that doesn't scale well as a protocol. AFF does its darndest to get us up there. We've found that once we got into the right lineup of array, like the AFF A700 series, or thereabouts, that was when we had what we needed for our workloads at our site. But I would say that the mid-range stuff was not really doing it for us, and our partners were hesitant to push us to the enterprise tier when they should have. So for a while, we thought NetApp just couldn't do it, but it was really just that our partners were scared of sticker-shock with us. Right now we've been finding AFF for CIFS is doing everything we need. If we start leveraging it for SAN I could have something to say on that, but we don't.
Don't be scared. They're a great partner. They've got a lot of options for you. They've got a lot of tools for you. Just don't be scared to look for them. You might need to do a little bit of digging; you might need to learn how the CLI works. But once you do, it's an extremely powerful thing and you can do a lot of stuff with it. It is amazing how much easier it is to manage things like file shares with a NetApp versus a traditional Windows system. It is life-changing if you are an admin who has to do it the old-fashioned way and then you come over here and see the new way. It frees you up from most of that so you can focus on doing all the other work with the boring tools that don't work as well. NetApp is just taking care of its stuff. So spend the time, learn the CLI, learn the interfaces, learn where the tools are. Don't be afraid to ask for support. They're going to stand with you. They're going to be giving you a product that you can build on top of.
And come out to NetApp Insight because it's a good conference and they got lots of stuff [for you] to learn here.
NetApp certainly has options to unify data services across SAN, NAS, and the cloud, but we are not taking advantage of them currently.
I'm going to give it a nine out of ten. Obviously you've heard my story. It's meeting all our needs everywhere, but the one last piece that's missing for me is some of those interface things and some of the SAN challenges for us that would let us use it as a true hybrid platform in our infrastructure. Because right now, we see it as CIFS-only and NAS-only. I would really like to see the dream of true hybrid storage on this platform come home to roost for us. We're kind of a special snowflake in that area. The things we want to do all on one array, you're not meant to. But if we ever got there, it would be a ten.
The primary use case of this solution is for our production storage array.
We have not used this solution for artificial intelligence or machine learning applications as of yet. This product has reduced our total latency, going from spinning disks to flash disks. We rarely see any latency, and when we do, it is the network, not the disks. The overall latency right now is about two milliseconds or less.
AFF hasn't enabled us to relocate resources, or employees that we were previously using for storage operations.
It has improved application response time. With latency, we had applications that had thirty to forty milliseconds latency, now they have dropped to approximately one to three, a maximum of five milliseconds. It's a huge improvement.
We use both technologies and we have simplified it. We are trying to shift away from the SAN because it is not as easy to failover to an opposite data center.
We are trying to switch over to have everything one hundred percent NFS. Once the switch to NFS is complete our cutover time will be one hour versus six.
The most valuable features are the FlexClone and SnapMirror. The ease of use, the SnapMirror capabilities, the cloning, and the efficiencies are all good features.
The simplicity of this solution around data protection and data management is extremely easy.
With data protection, there is nothing easier than setting up SnapMirror, getting the data across, and protecting it. Currently, we have a five-minute RPO, so every five minutes we're snapping across to the other side without any issues.
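A five-minute RPO like this maps to a SnapMirror relationship on a five-minute schedule. A minimal sketch, assuming hypothetical SVM and volume names (policy and schedule names can differ by ONTAP version):

```
::> snapmirror create -source-path svm1:data_vol -destination-path svm_dr:data_vol_dst -policy MirrorAllSnapshots -schedule 5min
::> snapmirror initialize -destination-path svm_dr:data_vol_dst
::> snapmirror show -destination-path svm_dr:data_vol_dst -fields state,lag-time
```

The lag-time field is the quick way to confirm you are actually holding the RPO.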
This solution simplifies IT operations by unifying data services across SAN and NAS environments.
There are little things that need improvement. For example, if you are setting up a SnapMirror through the GUI, you are forced to change the destination name of the volume, and we like to keep the volume names the same.
When you have SVM DR with multiple aggregates that you're writing the data to on the source array, it will place the replicated data on whatever destination aggregate it wants, instead of keeping the aggregate layout synced on both sides.
This solution hasn't helped us leverage our data in ways that weren't possible before.
We are not using it any differently than we were using it from many years ago. We were getting the benefits. What we are seeing right now is the speed, lower latency, and performance, all of the great things that we haven't had in years.
This solution hasn't freed us from worrying about usage; we are already reaching the eighty percent mark, so we are worried about it. That is why we are looking toward the cloud, to move to FabricPool with cloud volumes and tier off our snapshots into the cloud.
I wish that being forced to change the volume name would change or not exist, then I wouldn't have to go to the command line to do it at all.
This solution is stable, it's the best. I can't complain.
We move large amounts of data from one data center to another every day without any interruptions. In terms of IT operations, it has cut our ticket count down significantly, approximately a seventy percent reduction in tickets submitted to us.
This solution is scalable, it's phenomenal.
This solution's thin provisioning has allowed us to add new applications without having to purchase additional storage. The thin provisioning has helped us with deduplication and with maintaining compaction and efficiency levels. Without thin provisioning, we wouldn't be able to take advantage of all of the great features.
We are running approximately a petabyte of storage physically, and logically approximately ten petabytes.
The technical support is one of the best.
Previously, we had not used another solution. We have been using NetApp for years; we went through a refresh approximately two years ago, moving from the 6040 to the A300 all-flash.
The initial setup was straightforward.
We filled out a spreadsheet ahead of time that contained everything necessary to get us going. When it came time for the deployment we went with the information on the spreadsheet and deployed it successfully.
We used an integrator to help us with this solution, we used Sigma Solutions, and our experience was excellent. We worked hand in hand with them.
It's expensive. It's in the hundreds of thousands.
It's beneficial, but at times, I feel compared to other vendors, we are paying a premium for the licensing that other vendors include.
You're locked in with NetApp, and you already have everything set up.
We have not evaluated other solutions, it's not worth it.
We are not at the point where we can automatically tier data to the cloud, but we are looking forward to it.
I can't see that this solution needs any other features other than what it already has. Everything that I need is already there, except for the cloud and it's there but we haven't taken advantage of it yet.
I would advise that you compare everything and put money aside, really take a look at the features and how they will or can benefit you.
It's a total win for your firm.
I would rate this solution a ten out of ten.
NetApp AFF is used to store all of our data.
We're a full Epic shop, and we're running Epic on all of our AFFs. We also run Caché and Clarity Business Objects, and we love the SnapMirror technologies.
Prior to bringing in NetApp, we would do a lot of Commvault backups. We utilize Commvault, so we were just backing up the data that way, and recovering that way. Utilizing Snapshots and SnapMirror allows us to recover a lot faster. We use it on a daily basis to recover end-users' files that have been deleted. It's a great tool for that.
We use Workflow Automation. Latency is great on our writes, although we do find with AFF systems (and it may just be what we're doing with them) that the read latency is a little higher than we would expect from SSDs.
With regard to the simplicity of data protection and data management, it's great. SnapMirror is a breeze to set up and to utilize SnapVault is the same way.
NetApp absolutely simplifies our IT operations by unifying data services.
The thin provisioning is great, and we have used it in lieu of purchasing additional storage. Talking about the storage efficiencies that we're getting, on VMware for instance, we are getting seven to one on some volumes, which is great.
NetApp has allowed us to move large amounts of data between data centers. We are migrating our data center from on-premises to a hosted data center, so we're utilizing this functionality all the time to move loads of data from one center to another. It has been a great tool for that.
Our application response time has absolutely improved. In terms of latency, before when we were running Epic Caché, the latency on our FAS was ten to fifteen milliseconds. Now, running off of the AFFs, we have perhaps one or two milliseconds, so it has greatly improved.
Whether our data center costs are reduced remains to be seen. We've always been told that solid-state is supposed to be cheaper and go down in price, but we haven't been able to see that at all. It's disappointing.
The most valuable features of this solution are SnapMirror and SnapVault. We are using SnapMirror in both of our data centers, and we're protecting our data with that. It is very easy to do. We are just beginning to utilize SnapVault.
We are using the ONTAP operating system, which allows us to get a lot more out of our AFF systems. It allows us to do storage tiering, which we love. You can also use the storage efficiencies to get a lot more data on the same platform.
The read latency is higher than we would expect from SSDs.
The quality of technical support has dwindled over time and needs to be improved.
This is a stable solution. We are running an eight-node cluster and the high availability, knowing that a node can go down and still be able to run the business, is great.
We do not worry about data loss. With Clustered Data ONTAP, we're able to have a NetApp Filer fail, and there is no concern with data loss. We're also using SnapMirror and SnapVault technology to protect our data, so we really don't have to worry.
Scalability is pretty easy. We've done multiple head swaps in our environment to swap out the old with the new. It's awesome for that purpose.
My experience with technical support, as of late, is that the amount of expertise and what we're getting out of support has dwindled a little. You can tell that the engineers we talk to aren't as prepared or don't have the knowledge that they used to. We have a lot of difficulty with support.
The fact that NetApp is trying to automate support with Elio is pretty bad, to be honest. In my experience, it just makes getting hold of NetApp support that much more difficult. Going through the Elio questions never helps, so we end up wasting minutes clicking next and next before we can just open a support case. It's been going downhill.
Prior to this solution, we were running a NetApp 7-Mode implementation with twenty-four filers.
We went from twenty-four 7-Mode filers to an eight-node cluster, so we've done a huge migration to cDOT. With the 7-Mode transition tool, it was a breeze.
We use consultants to assist us with this solution. We do hire Professional Services with NetApp to do some implementations. The technicians that we have been getting on-site for those engagements have been dwindling in quality, just like the technical support. A lot of the techs that we used to get really knew a lot about the product and were able to answer a lot of our technical questions for deployment. One of the techs that we had recently does not know anything about the product. He knows how to deploy it but doesn't know enough to be able to answer some of the technical questions that we'd like to have answered before we deploy a product.
We are looking at implementing SnapCenter, which gives us one pane of glass to utilize snapshots in different ways, especially to protect our databases.
I used to work on EMC, and particularly, the VNX product. They had storage tiering then, and when I came onboard to my new company, they ran 7-Mode and didn't have a lot of storage tiering. It was kind of interesting to see NetApp's transition to storage tiering, with cDOT, and I really liked that transition. So, my experience overall with NetApp has been great and the product is really great.
I think the marketing for some of the products that could really help us is poor. We recently looked at HCI, and we didn't have a lot of information on it prior to deployment. It was just given to us, and a lot of the information about what it was and how it was going to help wasn't really there. I had to take a couple of Element OS classes in order to learn about the product and get that additional information; better marketing of that product would have helped a lot.
My advice to anybody who is researching this type of solution is to do your research. Do bake-offs, as we do between products, just to make sure that you are getting the best product for what you are trying to do.
I would rate this solution a nine out of ten.
We use NetApp AFF to host all of our on-premises applications and data.
We use NetApp for artificial intelligence and machine learning applications, and we find the latency to be pretty decent.
Data protection and management is one of the best features of NetApp. We like the SnapVault, SnapShot, and SnapMirror, and we use those features extensively.
Our IT operations have been simplified by unifying data services. We have fiber channel, block data, NFS, and CIFS, and we can deploy multi-tenancy boxes from each one. Sometimes, we have all of the different data types in one box. You can add more clusters or more nodes to your cluster. It is easy for us to modularly grow if the need arises.
NetApp has allowed us to leverage our data in new ways, including our test scenarios. A lot of the time it is really hard to test production data because we do not have multiple copies of the same thing that we can use for testing. The solution is flexible enough to allow us to create multiple copies, then try out seven or eight scenarios, then pick which one will be the best going forward. We can do that all within minutes.
We have utilized thin provisioning so that we haven't had to purchase additional storage for our applications. The snapshot technology, unlike other ones, doesn't take up extra space when you're making multiple copies. This means that we don't need extra storage for our temporary tests. Once we are finished we delete the extra copies.
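The multiple test copies described above are typically FlexClone volumes taken from a single snapshot, which is why they consume almost no extra space up front. A hedged sketch, with hypothetical SVM, volume, and snapshot names:

```
::> volume snapshot create -vserver svm1 -volume prod_db -snapshot test_base
::> volume clone create -vserver svm1 -flexclone prod_db_test1 -parent-volume prod_db -parent-snapshot test_base
::> volume clone create -vserver svm1 -flexclone prod_db_test2 -parent-volume prod_db -parent-snapshot test_base
```

Each clone shares blocks with the parent snapshot and only consumes space as a test scenario diverges; when testing is done, the clones can simply be taken offline and destroyed.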
We have used this solution for moving large amounts of data between data centers. We are currently migrating data from a cloud in Atlanta to a cloud in Chicago, and we are using the SnapMirror technology extensively for this.
Using the all-flash solution improves our application response time, and it also has a smaller footprint. You can also tier it, depending on the needs of the application.
NetApp AFF has definitely reduced our data center costs. We have been increasing our storage but not increasing our footprint. I would estimate the savings to be thirty percent.
We have not tested tiering cold data to the cloud, but we are currently working on finding appropriate use cases.
Overall, this solution has really reduced our downtime and has made our lives a lot easier.
The most valuable features are the ease of administration and configuration, as well as the speed of deployment.
Using snapshots at each stage of the configuration for applications means that administration is easier because you don't have to worry about messing it up. It makes things a lot smoother.
On the fibre channel side, there is a limit of sixteen terabytes on each LUN, and we would like to see this raised because it forces us to use some other products.
I have been using NetApp since 1998.
This is a stable solution. The dependability and reliability of the product have improved significantly over time, and there is redundancy built into the boxes. We don't worry about stability anymore.
Scaling this solution is easy. You can start small with one HA pair and add them as you go. You can make new clusters and add new nodes to clusters.
The technical support for NetApp is decent. I mean, it's improving. I understand that it is hard to get people up to date with all of the new technologies but NetApp has done a pretty good job.
Using the online documentation, we are able to find answers most of the time. If not, we can find an expert who will come online and help us to get through. The combination of technical support, Professional Services, and online documentation has really helped.
Service is one of NetApp's strengths.
We were using a bunch of other products prior to using this solution, and we are still using some of those deployments because of the sixteen-terabyte limit on each fibre channel LUN.
The initial setup is not complex at all. It has been made easier compared to other vendors.
We're a big corporation and we have the expertise in-house. Once in a while, we use Professional Services to get through some situations. Our experience with them has been very positive and we have a very good relationship with them.
It is very hard to measure ROI, but we know that it is very good compared to other products.
The price to performance ratio with NetApp is unmatched by any other vendor right now.
We have products from HPE, Dell, and NetApp in our environment right now. They each have their share, and each one is equally working.
I am a long-time user and I love this product. Over the years we have asked for improvements and they are doing a great job. I will be happy to see them continue to make improvements, overall.
My advice to anybody researching this type of solution is to look at NetApp. If they don't then they are missing out on great technology and a feature-rich product.
I would rate this solution a ten out of ten.
We use this solution for in-house data.
The simplicity around data protection and data management is good, with the snapshots and being able to lock them down. We can conserve space while protecting the data, and set the tiers we need through administration. It's very feasible.
Our data staff is smaller than it was because it's easier to manage in one portal. We have moved several employees into different departments.
The IT operations have been simplified through the unification of data services because we have just one window where we can manage it all.
With regard to application response time, I can say that the speed increase is substantially noticeable, but I do not have any numbers. It is probably twice as fast as it was.
I know that the data center costs have been reduced because we have fewer people managing the data, but I do not know by how much.
This solution has lessened our concern about storage as a limiting factor. It comes down to the easy manageability, the deduplication, and the compaction. Our volumes aren't growing as fast as they were.
The most important features are the IOPS and the ease of the ONTAP manageability.
The deduplication process is performed in the cache before the data goes to storage, which means that we don't use as much storage.
The versatility of NetApp is what makes it really nice.
The certification classes are good, but they don't cover enough of the material, and the exams only test on what is covered in class. When I leave those classes, I only feel half-full. I have to do so much research and I'm trying to get the data for my tasks, and it's a little complicated at times.
The NetApp AFF is very stable and we haven't had any issues.
From what I can tell, this solution is very scalable.
The NetApp technical support is very good. They have the website and they have the forums where you can get questions answered. You can get a lot of things answered without even talking to anybody.
Prior to NetApp AFF, we were using an HPE Storage solution. It was a little more difficult to swap out the drives on the XP series. You had to shut down the drive and then wait for a prompt to remove it. It was a long process, and if somebody pulled a drive out hot and put another one in, you would have to do a complete rebuild. It is not as robust or stable when you are swapping parts.
NetApp is very easy to set up.
All of the solutions by different vendors have setup wizards, but NetApp walks you through the steps and it is easy. It has NAS, CIFS, NFS, and block, all at once, and building everything out is done step-by-step. With other vendors like EMC, you have to get a separate filer. There are a lot more questions that have to be asked on the front end.
NetApp also talks seamlessly with VMware, and most people are on VMware.
We performed the implementation.
Our shortlist of vendors included EMC, NetApp, and HPE, because we have relationships with all of them. Ultimately, NetApp gives us more versatility.
This is my favorite storage platform.
I would rate this solution a nine out of ten.
We use AFF to serve Oracle and for VDI virtual storage.
Before we implemented AFF, Oracle was running on traditional spinning-disk storage at a very low speed with high latency, and the database was not running very well. After we converted from spinning disk to the all-flash array, volume access was at least four times faster than before. The VDI workload could not run on traditional spinning disk at all; that is what we bought the AFF for.
The thin provisioning has enabled us to add new applications without having to purchase additional storage. The basic rule we practice is that every time we create a FlexGroup, we also create it with thin provisioning. That gives us a little bit more cushion.
AFF has enabled us to automatically tier cold data to the cloud.
It has absolutely improved application response time. Now they talk directly to the SSD rather than a spinning disk. It has improved by at least four times.
We are able to reallocate resources or employees that we were previously using for storage operations. It allows us to do lots of things that we would never have been able to do before, like provisioning, dedupe, and data compacting.
We are able to move large amounts of data from one data center to another or to the cloud. We call it the SVMDR. I am able to replicate the entire native storage to the new location without a lot of effort.
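The replication the reviewer describes is snapshot-based: after the first full transfer, only blocks changed since the last common snapshot cross the wire. A toy sketch of that delta computation (purely illustrative; not how SnapMirror or SVM DR is actually implemented):

```python
def incremental_transfer(prev_snapshot: dict, curr_snapshot: dict) -> dict:
    """Return only the blocks that changed (or appeared) since the
    previous snapshot -- the delta a snapshot-based replica needs."""
    return {
        addr: data
        for addr, data in curr_snapshot.items()
        if prev_snapshot.get(addr) != data
    }

# Hypothetical block maps for two points in time on the source volume.
snap1 = {0: b"base", 1: b"config", 2: b"logs-v1"}
snap2 = {0: b"base", 1: b"config", 2: b"logs-v2", 3: b"new"}
delta = incremental_transfer(snap1, snap2)
print(sorted(delta))  # only blocks 2 and 3 cross the wire
```

Transferring only the delta is why moving an entire storage environment to a new location can be done "without a lot of effort": the bulk of the data only has to move once.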
We stay away from what is called a silo architecture. The NetApp cluster enables us to move volumes to different nodes and share the entire cluster among our various sub-environments, making the most of the storage we have on ONTAP. We are able to tailor and carve out storage at the file level or block level for our various clients.
The monitoring and performance reporting need improvement. Right now we are using Active IQ OnCommand Unified Manager, but we also have to use Grafana for performance monitoring. I hope we will see improvement in Active IQ's performance graphs; they should also be more detailed.
In the next release, I'm looking for FlexGroup improvements, because that is the next level of volumes, an extended volume beyond FlexVol. In the FlexVol environment, we run into the capacity limitation of a hundred terabytes, and in oil and gas, like us, when the seismic data is too big, a hundred terabytes is sometimes not enough. We have to go to the next level, which is FlexGroup, and I hope it gets features like being able to transfer a volume into a FlexGroup. I think they said they will add a few more features to FlexGroup. I would also like to see the non-disruptive conversion from FlexVol to FlexGroup made easier so that we don't have any downtime.
The system has HA, so the failover capability helps us do non-disruptive upgrades. It has really helped.
Scaling is non-disruptive, so if we need to grow the system, we are able to add an additional shelf to it.
We never have any issues with technical support. They are very responsive to our problems because we have a NetApp account manager, so we are able to engage level-two and level-three engineering much more quickly.
We also evaluated Pure Storage. They also provide an all-flash array, but I like NetApp better. NetApp allows us, as system administrators, to do everything we want.
The initial setup was straightforward. We have been doing it for a while, so we know how to put it together.
We implemented it ourselves.
You have to pay a little bit more for the storage but you gain with the speed provided.
AFF is just like any traditional NetApp. It has Snapshot, SnapMirror, and SnapVault.
I don't see anybody get even close to NetApp. NetApp is one of the best. I would rate them a nine out of ten.
My advice to anybody considering this solution is to look at the best out there, and NetApp is one of the best in terms of ease of use and full functionality.
Our primary use for this solution is for production storage. We have got everything: VMware, SQL servers and file servers. It handles all of them.
NetApp AFF helped to improve how our organization functions by improving our storage solution. We used to use tapes, and that required a lot of effort and resources. Now the tape systems are all eliminated. We do onsite, offsite, SnapMirror, and SnapVault backups, and it is a much better situation.
The most valuable features of the solution are speed, performance, and reliability.
The manufacturers are moving very fast with releases and additions of features. Versions 9.5 and 9.6 are already out and they are adding more and more features to every release. It has got way too many features as-is right now. The only improvement they need would be to make what they already have perfect.
The stability of the solution is very good. The reliability is just top-notch. We have not had any outage or unscheduled downtime. Sometimes a disk or SSD fails, but it gets replaced without any users noticing, because there are no service interruptions.
The scalability of the product is wonderful. It is just a simple matter of adding more shelves and provisioning more disk storage.
Tech support is a place where there is room to improve the product experience. Tech support is one thing that I am not 100% happy with and I do not strongly agree with many people who feel it is pretty good. NetApp has a wonderful product, but the support is subpar compared to the other vendors like EMC. So there is clearly room to improve.
The response time when they are busy is not very good. Even the priority-one calls are supposed to have like a two-hour response time or a 30-minute response time. I do not get any calls in that timeframe until I push them through different channels — through the back end.
Also, the primary support call center is in India. I don't get to the real technicians from the support team from North Carolina or places like that until much later. I understand they are trying to filter out calls that do not need upper-level support, but I know what I'm doing. I already know exactly what the problem is and then I still have to go through what should be unnecessary screening. It seems like a lengthy process. In the meantime, I might have only one strand of high availability running, which is not a good situation and I feel very uncomfortable that I could lose service.
We knew that we needed to invest in a new solution as it was mostly a cost-effective decision. When the purchase of our AFF system was announced — which was an AFF8040 — it was not any more expensive than SAS (Serial Attached SCSI) drives. So the cost was about the same and the solution was very effective. Sure enough, we made the right decision. It is performing very well, too, even though it is almost obsolete now.
The initial setup of the product was very straightforward to me. I'm certified in just about all of the NetApp NCIE (NetApp Certified Implementation Engineer) specializations, such as SAN, NAS, and Data Protection, so to me, it was very easy. They did a wonderful job helping set it up, but as more features were added it became more complex. Someone could easily forget to do one thing, like setting up internal firewalls, and leave some security holes. But it is fairly easy if you have some expertise and are a little careful.
We did not need any help with the implementation. I do everything myself.
I do not study the return on investment or any of those types of things because our department is just constant and we are not a profit center. We know what "I" is, we just do not know what "R" is.
At the time when we purchased the NetApp AFF, it was bundled into the hardware price. That made the pricing okay. If we were to add more shelves now, the licensing cost increases exponentially. It is probably cheaper to buy brand new hardware in the new model. It will be faster and bundled in with software for a promotion where they throw in all the licenses. It works out well.
Other vendors were not really on the shortlist at the time. NetApp is our standard for now. In the future, I don't know if it will remain that way and we may re-evaluate other solutions. FlexPod may be our future or HCI, but we are using NetApp big-time and it is a successful solution for us.
The solution's simplicity around data protection and data management is very good. The SnapMirror and SnapVault data protection is a wonderful thing. Also using snapshots in lieu of tape or disk backups is handy.
The solution simplifies our IT operations by unifying data management across SAN and NAS environments. For example, our SAN (Storage Area Network) provides the performance. We have Brocade switches with a fiber channel connection to AFF, which matches the performance of the AFF. We also have the file services; lots of files are served from it as well. We have virtualized all of the hosts, converting physical machines to virtual machines. That saved a lot of money, resources, and effort.
The solution is helping us to leverage data in different ways. It is just more reliability and simplicity and the performance helps the business quite a bit. We used to experience a significant amount of downtime and outage. We do not experience that anymore, so business probably is more profitable.
I do not have any direct insight into profitability. We are like an expense center and not the profit center: we do not use the computer to make money. We use the computer to support making gasoline and energy.
Thin provisioning allowed us to add new applications without purchasing additional storage. The thin provisioning is an essential part of what we do because the SQL DBAs are the worst. They ask for one terabyte for future growth when they need only 100 gigabytes in reality. Without thin provisioning, I would have to give them the one terabyte that they asked for, which is a waste of resources. So it is a cost-savings feature.
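The arithmetic behind that cost saving is simple: thick provisioning must reserve everything requested up front, while thin provisioning consumes physical space only as data is actually written. A small sketch with hypothetical volume requests (the names and numbers are illustrative, not from the review):

```python
# Hypothetical volumes: (GB requested up front, GB actually used).
requests = {
    "sql-prod": (1024, 100),   # DBA asks for 1 TB, uses 100 GB
    "sql-dev":  (512, 40),
    "files":    (2048, 600),
}

thick_needed = sum(asked for asked, _ in requests.values())
thin_needed = sum(used for _, used in requests.values())

print(f"thick provisioning reserves {thick_needed} GB")
print(f"thin provisioning consumes  {thin_needed} GB")
print(f"capacity freed: {thick_needed - thin_needed} GB")
```

The gap between reserved and consumed capacity is exactly the "waste" the reviewer avoids; the trade-off is that an over-provisioned pool must be monitored so real usage never outruns the physical disks.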
The solution has allowed us to move large amounts of data from one data center to another without interruption to the business. It is affecting IT operations in a tremendous way. The reliability is key for the IT services. Not having any outage, unscheduled outage, or latency and performance issues are the most important key features.
The solution has helped improve application response time. We used to have some issues with poor performance when we had the SAS disks. Sometimes we had situations when the VMware was competing for the storage. Now the AFF is just much faster and provides all the data needed for VMware and SQL servers.
The solution has also reduced our data center costs. The thin provisioning, SnapMirror, and all of those features have helped our processes. I'm not sure of any exact amounts but the cost savings are quite a bit.
On a scale from one to ten where ten is the best, I would rate the product as a nine. The product itself is a ten. The services are a seven. But I highly recommend the product.
Our primary use for this solution is NFS and fiber channel mounts for VMware and Solaris.
Prior to deploying this product, we were having such severe latency issues that certain applications and certain services were becoming unavailable at times. Moving to the AFF completely obliterated all those issues that we were having.
With regard to the overall latency, NetApp AFF is almost immeasurably fast.
Data protection and data management features are simple to use with the web management interface.
We do not have any data on the cloud, but this solution definitely helps to simplify IT operations by unifying data that we have on-premises. We are using a mixture of mounting NFS, CIFS, and then using fiber channel, so data is available to multiple platforms with multiple connectivity paradigms.
The thin provisioning has allowed us to add new applications without having to purchase additional storage. The best example is our recent deployment of an entire server upgrade from Windows 2008 to Windows 2016. Had we not been using thin provisioning then we never would have had enough disk space to actually complete it without upgrading the hardware.
We're a pretty small team, so we have never had dedicated storage resources.
NetApp AFF has reduced our application response time. In some cases, our applications have gone from almost unusable to instantaneous response times.
Storage is always a limiting factor, simply because it's not unlimited. However, this solution has enabled us to present the option of less expensively adding more storage for very specific application uses, which we did not have before.
The most valuable feature is speed.
The price of NVMe storage is very expensive.
We haven't had a problem with stability since it has gone online.
We haven't needed to scale yet, but I can imagine that it would be seamless.
The NetApp technical support is outstanding.
Our previous NetApp system was a SAS and SATA spinning-disk solution that was reaching end-of-life, and we were overrunning it. We were ready for an upgrade and we stuck with NetApp because of the ease of cross-upgrading, as well as the performance.
The initial setup was fairly straightforward, in that we were doing a migration from an old NetApp to a new one. However, because of the latency problems we were having on the old system, it got a little complicated and we had to shuffle things around a lot.
The technical support helped us out well with these issues, and on the grand scheme of things, it was a very straightforward migration.
We used a company called StorageHawk, and our experience was phenomenal.
Comparing this solution to others it may seem expensive, but the price to performance for NetApp is greater. You get a lot more for the money.
We considered solutions by EMC, but they were very quickly ruled out.
I have experience with a previous version of NetApp from quite some time ago, and everything about the current version has improved.
NetApp AFF performs well, we haven't had any issues with it, and I suspect that it is going to be pretty easy to upgrade. It would be nice if the NVMe storage was less expensive, even though it's worth it.
I would rate this solution an eight out of ten.
Our primary use case for NetApp AFF is unstructured data. We set it up for high availability and minimum downtime.
This solution simplifies our IT operations by unifying data services across SAN and NAS environments. We are using it on the fiber channel side as well as the iSCSI side, for both CIFS and NFS, so it spans the entire infrastructure.
We have used NetApp AFF to move large amounts of data. We just recently did a migration using SnapMirror and SVM DR. We did have some scheduled downtime, but there was no unplanned disruption in service.
Even with this solution implemented, I still have to manage the storage side and the availability of it, so we still have to worry about it being a limiting factor.
The most valuable features are the flexibility and level of technical support.
This is a very reliable solution in terms of keeping the system online.
This solution should be made easier to deploy. A lot of systems nowadays just come with a box where everything is included. With AFF, you have to manage it, you have to install ONTAP, and you have to configure the networking.
The stability is good. This is a very reliable solution.
It can be set up as a cluster, HA, and when one node goes down the others hold the data, so the customer barely notices that there is a failover.
I would rate the scalability an eight or nine out of ten.
We can grow this solution very easily, just by adding storage. All we need to do is buy a shelf and expand the storage side of it.
I would rate the customer support an eight out of ten. They are really good in terms of responding to the customer.
We have a large amount of unstructured data, so we felt that AFF was the right solution for us.
In terms of complexity, the initial setup is somewhere in the middle. It is not straightforward where you can run it out of the box. You have to set it up and configure the network.
We had a jumpstart, but I can handle the installation on my own.
We have not seen ROI so far.
We did consider using other vendors, but NetApp AFF was the best in terms of reliability.
In order to automatically tier cold data to the cloud, you would have to use third-party software.
I would rate this solution a seven out of ten.
Our primary use case for this solution is for production storage.
We don't use ONTAP for artificial intelligence or machine learning applications.
We're not replicating to the cloud yet. We're replicating from on-prem to on-prem, but replicating to the cloud is probably our next step in our data center evolution.
ONTAP has improved my organization because we now have better performance. We can scale up and we can create servers a lot faster now. With the storage that we had, it used to take a lot longer, but now we can provide the business what they need a lot faster.
It simplifies IT operations by unifying data services across SAN and NAS environments. We use our own type of SAN and NAS for CIFS and also for virtual servers. It's pretty basic. I didn't realize how simple it was to create storage and manage storage until I started using NetApp ONTAP. We use it daily.
Response time has improved. Reading from storage and getting data to the end-users is a hundred times faster than it used to be. When we migrated from 7-Mode to cluster mode and went to an all-flash system, the speed and performance were amazing. The business commented on that, which was good for us.
Data center costs have definitely been reduced with the compression that we get with all-flash. We're getting 20-to-one, so it's definitely a huge saving.
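A data-reduction ratio translates directly into physical capacity saved: divide the logical data by the ratio to get the flash footprint. A minimal sketch using the 20:1 figure the reviewer reports (the 100 TB logical figure is a made-up example):

```python
def physical_footprint(logical_tb: float, efficiency_ratio: float) -> float:
    """Physical capacity needed after data reduction, e.g. 20:1."""
    return logical_tb / efficiency_ratio

# 100 TB of logical data at the reviewer's reported 20:1 ratio
# needs only 5 TB of physical flash.
print(physical_footprint(100, 20))
```

This is why all-flash pricing per raw terabyte can be misleading: the effective cost per *logical* terabyte shrinks with the achievable reduction ratio, which varies by workload.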
It has enabled us to stop worrying about storage as a limiting factor. We can thin provision data now and we can over-provision compared to the actual physical hardware that we have. We have a lot of flexibility compared to what we had before.
The data protection and data management are very user-friendly. We use software-based disk encryption, which comes with ONTAP, and it is very easy to implement and very easy to manage. In fact, you don't even have to manage it once it's working.
In terms of what needs improvement, I would like to see more consistency with the UI. It seems to change every few versions. The menus can be in a completely different place.
It's just a small learning curve. The menus are all the same, just in different places; you've got to get used to it. One missing feature, which I thought was strange: when you SnapVault from one cluster to another, the option to mirror that second cluster is not available unless you use the CLI. You can't do it from the user interface; you have to go to the CLI. I thought that was a bit strange. To make it better, that option should be available through the UI.
We have never had a single fault in the 10 years we've been using it. Nothing bad happens, it's an unbelievable system. Really reliable.
If we want to expand, the option is there for us to do that. It's not a requirement at the minute, but I know that we want to do it. It should be really easy to do, just add another cluster and then just configure it. We know it's available to us. We know how easy it is to configure, so that's a great option that we have there if we need it.
We don't go through NetApp directly. We go through a vendor. They've been great. Obviously they're certified, they know what they're doing. They have had to escalate sometimes to NetApp themselves if they didn't know the answer. We've never had a problem that we couldn't resolve.
The initial setup was straightforward. We use a MetroCluster in NetApp, so getting that set up initially is very complex. Once it's working, it's very simple to manage. A reseller helped us install it. I don't think it could be any more straightforward; it's a necessary complexity.
We used a reseller for the implementation. We're in an ongoing relationship with them; they support us 24/7 if we need it. It's going really well. We've never had any problems, so there's nothing to complain about, really. I've been working with them for about five years, but the company has been working with them for about 10 years.
We have not seen ROI.
We evaluated solutions like Dell EMC and HP. I think from the reputation that NetApp has, that was definitely the choice for us.
The advice I would give to anybody considering this solution is that it's expensive but it's worth it. It's worth it because of its reliability. When you're working on infrastructure reliability and uptime are the most important things. You have to provide a service to the business and make sure it's up all the time. So if you can have a system that does that, and I know that other products have their own problems, I know that I have got friends that use HP or use Dell and they have problems. Maybe it's because of the way they've configured it. With NetApp, we've never had any issue, never had an outage. If you're looking at reliability, you're going to pay a little bit extra, but that depends on your reseller. NetApp is definitely the way to go.
I would rate it a ten out of ten because I've got no reason not to. It doesn't break. It's reliable. It's fast. It's easy to manage. It's scalable, and we've never had any problems that we can't fix. The worst thing that ever happens is a disk fails, and then within three hours we get a brand new one. We just plug and play with no outage, no downtime. That's probably the main thing for us: having 100% uptime, and we've never not had 100% uptime.
Our primary use case for this solution is machine learning.
The performance of NetApp AFF allows our developers and researchers to run models and their tests within a single workday instead of spreading them across multiple workdays.
For our machine learning applications, the latency is less than one millisecond.
The simplicity of data protection and data management is standard with the rest of NetApp's portfolio. We leverage SnapMirror and SnapVault.
In my environment, currently, we only use NAS. I can't talk about simplifying across NAS and SAN, but I can say that it provides simplification across multiple locations, multiple clusters, and data centers.
We have used NetApp to move large amounts of data between data centers, but we do not currently use the cloud.
Our users have told me that the application response time is faster.
The price of the A800 is very expensive, so our data center costs have not been reduced.
We are using ONTAP in combination with StorageGRID for a full data fabric. It provides us with a cold-hot tiering solution that we haven't experienced before.
Thin provisioning has allowed us to over-provision existing storage, especially NVMe SSD, the more expensive disk tier. Along with data efficiencies such as compaction, deduplication, and compression, it allows us to put more data on a single disk.
Adding StorageGRID has reduced our TCO and allows us to better leverage the faster NVMe SSD tier, keeping hot data there and tiering cold data to StorageGRID.
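Hot/cold tiering of the kind described here generally boils down to an access-age policy: data untouched for longer than a cooling period moves to the cheap object tier. A toy sketch of such a policy (illustrative only; the real FabricPool-style tiering in ONTAP works on block temperature, and the 31-day threshold is an assumed default):

```python
from datetime import datetime, timedelta

def choose_tier(last_access: datetime, now: datetime,
                cooling_days: int = 31) -> str:
    """Toy tiering policy: data untouched for `cooling_days` moves
    to the cheap object tier; recently used data stays on NVMe."""
    if now - last_access > timedelta(days=cooling_days):
        return "object-store"  # e.g. a StorageGRID bucket
    return "nvme-ssd"

now = datetime(2022, 11, 1)
print(choose_tier(datetime(2022, 6, 1), now))    # object-store
print(choose_tier(datetime(2022, 10, 25), now))  # nvme-ssd
```

The TCO benefit follows from the policy: only the working set pays the NVMe price, while the long tail of cold data sits on much cheaper object storage.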
The most valuable features are the ease of use and performance.
I would like to see NetApp improve more of its offline tools and utilities. Drilling down to their Active IQ technology: it's great if your cluster is online and attached to the internet, with the ability to post and forward AutoSupport, but for a standalone, offline cluster, none of those utilities work. NetApp has Unified Manager; if there were an on-premises equivalent that the user could deploy, with AutoSupport forwarded to it, perhaps a slimmed-down Active IQ solution, I'd be interested in that.
I need a FlexVol-to-FlexGroup solution.
I would like to see the FAS and AFF platforms simplified so that the differences will disappear at some point. This would reduce the complexity for the end-storage engineers.
I would rate the stability of NetApp AFF as moderate at this point. There were some unfortunate growing pains initially with the A800. Our problem was related to compatibility issues with the active optical transceivers, and it caused an outage within our data center. Our customer was not happy with this.
The scalability is very good and we have had no issues.
When we had our data center outage, we had an excellent NetApp engineer on-site. We went back and forth through it and eventually worked our way through it, but it was a multi-day problem.
We have been a NetApp customer for a long time. We just recently added a NetApp StorageGRID product for more object-store advantages in our data pipeline. It is adding more value.
NetApp is the number one leader in NFS, which is the protocol that we primarily use. We looked for a new solution simply because IOM3 modules were deprecated and moving forward from ONTAP 9.3 to version 9.6 required a full forklift upgrade, and a bunch of hardware was thrown out.
The initial setup was complex.
The move from older FAS systems with older disk shelves to the newer AFF A800 systems was a nightmare in terms of rack space, moving data, and trying to do it online so that the customer didn't experience downtime. It was a multi-day upgrade.
We used a reseller and a NetApp badged engineer, and our experience with them was very good.
NetApp has a good support team, good account management, good engineers, and they have the ability to stay ahead of what's trending in technology.
Ideally, the cost would be lower, it would be less complex, and the hardware compatibility would be better.
I would rate this solution a seven out of ten.
Our primary use case for NetApp AFF is performance-based applications. Whenever our customers complain about performance, we move their data to an all-flash system to improve it.
We have our own data center and don't share our network with others.
We have moved all of our AI and machine learning applications to all-flash to improve their performance. Prior to this, they were on SAS or spinning disk. The latency has certainly decreased.
Data protection is a big part of NetApp, and we are using SnapMirror as well as MetroCluster. We did use SnapVault before, but we moved to SnapMirror and we want to take advantage of the synchronous replication in MetroCluster.
I would say that NetApp has helped us to leverage data in new ways. Because it has the PowerShell modules and workflow automations, we have been able to create volumes, give access to them, and automate workflows.
I think that we have been able to reallocate resources that were dedicated to storage because of the automation tools that NetApp has. It helps to speed up our day-to-day tasks. What used to take us thirty minutes, now takes us five minutes.
Our application response time has increased, but it is hard to quantify with a number. I can just say that it has improved in general.
Using this solution has helped to decrease our worry about storage issues. We normally limit our customers' space, giving them less. We try to ask them questions about the type of data and the applications that they have. Sometimes, they will say that they want ten terabytes, but don't really know what they are going to use it for. With regard to our storage, we are not worried about limitations at all.
It is easy to manage data through the GUI by using Active IQ and the unified manager.
Being a non-storage guy, I think that it was quite easy for me to pick things up and learn this solution. The way it is built is really good for people who want to start fresh. cDOT is a really good OS.
The most valuable feature is the performance.
This solution is getting cheaper over time.
I would like to see better tutorials available, beyond the basics, that cover subjects like MetroCluster and automation.
We have been using this solution for about one year.
When it comes to stability, NetApp as a whole is good. We have never had any of these kinds of issues.
At the end of the day, we always have the replication going on, so if there is an issue on-premises then we still have our DR site. The replication is still there and everything is up to date.
We have expanded a lot. We had an eight-node cluster and now we have a twelve-node cluster. Scalability is really easy when it comes to NetApp.
As storage space is getting cheaper, we wanted to move to newer hardware.
NetApp does the initial setup when you buy the equipment.
We have a NetApp resident who works with us on-site. I would rate their service and our experience with them a ten out of ten.
We did have some applications that we were using in the cloud, but we came back because of financial issues.
We do have performance issues from time to time that we have to deal with, but it is not specific to AFF. Sometimes the application is not well-managed by the application teams. The load may not be being handled correctly, which is not related to the type of storage but could be related to users not selecting the correct storage options for their applications.
We have not tested the recent graphical update yet, but if it works well then I think that it will be one of the big advantages this solution has. We used to do the upgrades using the CLI.
My advice to anybody researching storage solutions is to go with NetApp. My experience with the vendor is good. The AFF is a good tool to have, whether the client is a small business or a larger enterprise like a bank.
I think the problem with smaller companies is that they don't always understand the importance of data. Perhaps they don't see storage as a solution, but rather just an expense.
I would rate this solution an eight out of ten.
We did it for consolidation of eight filer pairs. We needed the speed to make sure that it worked when we consolidated.
We do a lot of financial modeling. We have a large compute cluster that generates a lot of files. It is important for us to get a quick response back for any type of multimillion-file access across the cluster at one time, and it is a lot quicker to do that now. We found that solid-state performs so much better than spinning drives, even over multiple clusters. It works.
It is helping us consolidate, save money, and increase access to millions of files at once.
It is very important in our environment for all the cluster nodes. We have 4,500 CPUs that are going through and accessing all the files, typically from the same volume. So, it is important for it to get served quickly so it doesn't introduce any delay in our processing time.
Solid-state drives are the most valuable feature. It has the speed now to do workloads. We're not bound by I/O from the drives. Also, we are just starting to hit the sweet point of the capacity of the solid-state drives versus spinning disk.
I would like there to be a way to break out the 40 gig ports on them. We have a lot of 10 gigs in our environment. It is a big challenge breaking out the 40 gig coming out of the filer. It would be nice to have good old 10 gig ports again, or a card that has just 10 gig ports on it.
Stability has been really good. It's been solid. We had a couple of problems when we first set it up because we set it up incorrectly, but we learned, we changed the settings, and things are working a lot better now.
We haven't had to scale it yet. We literally reduced 18 racks worth of equipment into two and still have room in those two racks to do additional shelves, expanding into that footprint. So, it's expandable and dense, which is great.
The process was easy to consolidate into one AFF HA pair. It was simply a matter of doing volume copies and SnapMirror relationships across the environment. It just migrated right over. It wasn't a problem at all.
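For readers unfamiliar with this kind of consolidation, the workflow the reviewer describes maps roughly onto ONTAP's SnapMirror CLI. A hedged sketch only; the SVM and volume names here are hypothetical:

```shell
# Create a mirror relationship from the old filer's volume to the new AFF pair
# (svm_old, svm_aff, vol_data, and vol_data_dst are hypothetical names)
snapmirror create -source-path svm_old:vol_data \
    -destination-path svm_aff:vol_data_dst \
    -type XDP -policy MirrorAllSnapshots

# Baseline transfer runs in the background while production stays online
snapmirror initialize -destination-path svm_aff:vol_data_dst

# At cutover: a final incremental update, then break the relationship
# so the destination volume becomes writable on the AFF
snapmirror update -destination-path svm_aff:vol_data_dst
snapmirror break  -destination-path svm_aff:vol_data_dst
```

Because the baseline and updates run in the background, this is consistent with the reviewer's experience of migrating during the day without disruption.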
It is reducing our data center costs. We consolidated eight HA pairs into one AFF HA pair.
We would like it to be free.
For our workload, it's doing what we need it to do.
I would rate the product a nine (out of 10).
We do not use the solution for artificial intelligence or machine-learning applications right now.
We use NetApp AFF to support our VMware environment.
We have been happy with the performance and it has not given us any issues.
I like the simplicity of data protection and data management. We use snapshots for fast recovery, and we use SnapVault for our backups.
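The snapshot-plus-SnapVault arrangement the reviewer mentions can be sketched in ONTAP CLI terms. This is an illustrative sketch, not the reviewer's actual configuration; all names and retention counts are hypothetical:

```shell
# Hypothetical schedule: keep 6 hourly and 7 daily snapshots for fast local recovery
volume snapshot policy create -vserver svm1 -policy std_protect -enabled true \
    -schedule1 hourly -count1 6 -schedule2 daily -count2 7

# Vault those snapshots to a secondary system for longer-term backup
# (in ONTAP 9, SnapVault is an XDP relationship with a vault policy)
snapmirror create -source-path svm1:vol_data \
    -destination-path svm_backup:vol_data_vault \
    -type XDP -policy XDPDefault
snapmirror initialize -destination-path svm_backup:vol_data_vault
```

The same snapshot machinery then serves both quick in-place restores and the vaulted backup copies.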
NetApp definitely simplifies our IT operations by unifying services. We only use this solution on-premises, but with NAS, we don't need Microsoft Windows to create a share. It's all on our NetApp platform. I like it because we do not have to switch.
I wouldn't say that we have reallocated resources that were previously dedicated to storage operations, although it does give us time to do other things.
We have used NetApp to move large amounts of data between data centers. It has made it easier for us, and RPOs are shorter because of it.
With respect to the response time for applications, I can definitely say that it has improved, although we have not done any benchmarking. I perceive the improvement through monitoring the applications.
This solution is pretty expensive, so I'm not sure whether it has reduced our data center costs.
NetApp has helped eliminate storage as a limiting factor in our business. My customers are happier because they have no issues with performance or accessing their data.
The most valuable feature is the ease of management. You just set it and you don't have to worry about it.
During a maintenance cycle, there are outages for NAS. There is a small timeout when there is a failover from one node to another, and some applications are sensitive to that.
We are in the process of swapping our main controller, and there is no easy way to migrate the data without doing a volume move. I would like a better way to swap hardware.
Technical support could use some improvement.
Stability is very good, although we do have some NAS outages during maintenance.
Overall, I like the scalability. It can do NAS, CIFS, and fiber channel all in one box and it's easy to manage.
I would say that the technical support is hit or miss. Sometimes you get somebody good, but other times, you have to just escalate a couple of times to get the right person.
Our previous solution was spinning disk, and our application demands more in terms of storage and performance. NetApp AFF just seemed like the natural route because we didn't want to get left behind.
One of the reasons we like this solution is that all of the features are included with the one license. For example, we can use NFS, CIFS, SnapMirror, SnapRestore, etc. It's all included in the package and we don't have to pick and choose.
We purchased the license for a five-year term.
We evaluated other options, including solutions by EMC, before choosing NetApp. The reason for our choice is that we already had NetApp in our environment, and the price-point is also a little better than the competing products.
My advice to anybody who is researching this type of solution is to test and compare all of the products. Overall, I think that AFF is a solid store system and it's very easy to use.
I would rate this solution a nine out of ten.
Our primary use case is that we have two areas with AFF storage.
We reduced our floor space by going from 44 rack units down to four rack units. It has helped us with our data center economies of scale. It reduces our support costs too, which is great.
It has a really useful, friendly console.
The dedupe gives us more IOPS, more reliable equipment, and better performance.
It is really stable and trustworthy. The equipment is reliable. It doesn't break, so I can sleep at night. We don't have to worry that there is a problem with our equipment every week.
We haven't had any problems with the equipment. In two years, we have needed support twice.
We don't like the cost. We would like to buy more.
I would rate the product a 10 out of 10. It is reliable and has good performance. Working with the product is a great experience.
We primarily utilize AFFs for engineering VDIs. We are utilizing it to host VDI and performance is the primary expectation from AFFs. We are satisfied with the product.
It's helping to leverage data. The storage is being utilized to implement larger, complex file sizes. That is how we are utilizing this product.
Speed is the most valuable feature. It is all-flash, so it is fast.
It simplifies operations since it is integrated with the other platforms as well. It's maintainable; it does not take much effort to maintain. Creating users and sessions is easy on it.
It is a fast product, but NetApp could focus even more on the configuration.
Since the failure rate has been reduced, we haven't had any outages so far, or even P2s, on this solution. It has been impressive.
It's a fast product. It is exactly the same as other fast products; it is scalable.
We have more than 100 users utilizing the product concurrently. Concurrency is one parameter that we looked for, and AFF satisfies that requirement.
We have premium support globally. NetApp has been promising on every front.
There was not much complexity involved. Since this was a new setup, no migrations were required, so it was pretty straightforward.
We tested it out against another solution and it worked out very well. Based on that, we took the decision to expand it further.
It is working out well from a latency point of view, which is why we have opted for AFF. We are getting results.
Traditionally, we are limiting the number of our vendors. We still haven't ventured out to any other vendors. We have consistently been with NetApp.
Going forward, I would like to compare AFF vs Pure Storage based on all the parameters.
I would rate it a nine (with ten being perfect). It is pretty impressive. I am holding back one point to leave room for improvement.
This is the first time that we have implemented all-flash in one of our regions.
We are not utilizing it as a tiering solution.
We have been using the FAS series products, and AFF is pretty similar to the FAS products, as it still runs the ONTAP operating system. We are using AFF because it comes with all-flash disks, which gives us better performance with a smaller footprint. We use it mainly to store our block and NAS data.
One of the best things about the AFF products is the integration with NetApp StorageGRID, which gives you the ability to tier to the cloud or to StorageGRID. Whether it is on-prem or off-prem, tiering is the industry trend right now. These products also help us through the new ONTAP version: it identifies the cold data sitting on our main storage arrays, consuming very expensive media, and moves it to cheaper storage tiers, whether that is on-prem StorageGRID, a public cloud, or a private cloud. With this integration as part of the Data Fabric, we have been able to lower some of the costs of storing data on-prem.
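The cold-data tiering the reviewer describes is what ONTAP calls FabricPool. A rough sketch of the setup in CLI terms; the store name, endpoint, aggregate, and volume names are all hypothetical, and real credentials would be substituted for the placeholders:

```shell
# Register the object store (an on-prem StorageGRID endpoint in this example;
# provider-type SGWS denotes StorageGRID, AWS_S3 would denote public S3)
storage aggregate object-store config create -object-store-name sg_store \
    -provider-type SGWS -server sg.example.internal \
    -container-name cold-data -access-key <key> -secret-password <secret>

# Attach the store to an all-flash aggregate, turning it into a FabricPool
storage aggregate object-store attach -aggregate aggr1_ssd \
    -object-store-name sg_store

# Let ONTAP move cold blocks off the expensive flash tier automatically
volume modify -vserver svm1 -volume vol_data -tiering-policy auto
```

With the `auto` policy, blocks that go cold are demoted to the object tier and transparently recalled on access, which is how the flash footprint and its cost stay small.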
One of the main features that differentiates AFF from the FAS products, or some of the other technologies we have used, is that the footprint of these arrays is significantly smaller than the traditional ones. The performance that you get from these new arrays is also really significant; you can see a huge difference there. By switching to it, we achieve more storage performance and efficiency and, in the long run, lower some of the TCO by reducing the footprint.
The one thing about NetApp products is that they use the same operating system across all of them, e.g., FAS or AFF. That makes those environments easier to manage and operate because you don't need to learn a whole new system or retrain all your engineers on new technology. Overall, it helps with operations. It's not that complicated; it's easy to manage and operate.
I'm at the NetApp Insight events and have seen that new features and functionality are either on the roadmap or coming. However, I think adding more features to make it more cloud-enabled will help us with cloud tiering and simplify the whole cloud operation when it integrates with our on-prem AFF products. That is one area where we would like to see more improvements from NetApp.
We have been using NetApp products for a while.
NetApp has been stable. It is one of the vendors we trust to put our production workload on, for numerous reasons. The AFF can survive disk failure, and although flash disks have longer life spans, everything is redundant. We haven't experienced any significant issues with these arrays. I would call it six nines when it comes to availability and stability.
Every time you contact the vendor about a technical issue, the level of support you get and the time it takes to get your issue resolved really matter, and they depend on the issue itself (how complicated it is). Sometimes support sends requests to the technical team to gather logs and send them back. How many logs you have to collect, or whether you have to engage another vendor's support, affects how fast an issue can be resolved. In general, when you open a P1 or P2 case with NetApp support, they are very fast when it gets to the point that we need to escalate to the next level of support. So far, we have had a good experience with NetApp; for most cases, they were able to help us resolve the issue as fast as possible.
It has improved application response because the SSD-based arrays are also NVMe-compatible. We are using NVMe HBAs on the hosts as well, since our fabric is NVMe-compatible, and some of those hosts run mission-critical applications with AFF as the back-end storage. We have seen good improvement in the performance of our applications.
We've been using some other vendors' products as well.
I cannot disclose the name of the vendors that we are using to compete with NetApp. In the industry today, you can't really tell if there is a bad product or good product. It comes down to your requirements. As a customer, first you have to define your requirements. Then, you need to know what you need, what is your goal, how are you going to achieve it, and what your challenges are. We identified those and have compared some solutions.
NetApp was our vendor of choice who could help us to fulfill our requirements, especially for some of the challenges that we were facing. NetApp has been able to help us with that.
I would never give a 10 because there is always room for improvement for any technology. From zero to 10, I would give about an eight to nine to the AFF products because we have been very happy with them so far.
We primarily use NetApp AFF for file storage and VMware.
Coming from a financial background, we are very dependent on performance. Using an all-flash solution, we have a performance guarantee that our applications are going to run fine, no matter how many IOPS we do.
We use NetApp for both SAN and NAS, and this solution has simplified our operations. Specifically, we use it for SAN on VMware, and all of our NFS storage is on NAS. They are unified in that it is the same physical box for both.
This solution has not helped us to leverage data in new ways.
Thin provisioning has allowed us to add new applications without having to purchase additional storage. This is one of the reasons that we purchased NetApp AFF. We almost always run it at seventy percent utilized, and we only purchase new physical storage when we reach the eighty or eighty-five percent mark.
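The thin-provisioning practice the reviewer describes (run thin, buy disk only near the 80-85% mark) corresponds to creating volumes without a space guarantee. A hedged sketch with hypothetical names:

```shell
# Create a 10 TB thin-provisioned volume: no space is reserved up front,
# so physical capacity is only consumed as data is actually written
volume create -vserver svm1 -volume app_vol -aggregate aggr1 \
    -size 10TB -space-guarantee none

# Watch utilization so new disk is purchased around the 80-85% mark
volume show -vserver svm1 -volume app_vol -fields size,used,percent-used
```

The trade-off is that aggregate usage must be monitored, since the sum of thin volume sizes can exceed what the aggregate can physically hold.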
I find that we do have better application response time, although it is not something that I can benchmark.
As a storage team, we are not worried about storage as a limiting factor. When other teams point out that storage might be an issue, we tell them that we've got the right tools to say that it is not.
I think that the DR applications are the most valuable, including Snapshots and SnapMirror. They are one of the market leaders in this regard. It is a very solid platform that has been in the market for a while.
Technical support can be a little slow when it comes to escalating through levels of support.
We have had trouble with restoring applications, and if there is more support for application-aware backups then that would be great.
We have rarely had an issue where there was an outage. Whenever we have an issue, we can rely on NetApp support.
We are running in cluster mode, which is known for its scalability. I would say that it is good.
The technical support has been all right, but it takes a while to get a hold of the right person because you've got to go through the level one, level two support. But, after a while, you get the support that you need.
We do have experts within the company, so we only go to NetApp's support when we have a very serious issue that we need to work on.
Overall, it has been all right.
We have used NetApp for a very long time. Our reason for implementing AFF was that we wanted to go for an all-flash solution. We didn't want to keep using hard disks, but we still wanted to continue using SnapMirror and Snapshots. This was the way to do it.
The initial setup of this solution is straightforward, at least for me. I've deployed NetApp before in my previous jobs, and it was easy with my experience. That said, it is not very complex.
We used Professional Services from one of NetApp's partners, Diversus, to assist with our deployment. Our experience with them has been good. They are one of the top NetApp partners in Sydney, Australia.
We did not evaluate other options.
I would rate this solution an eight out of ten.
Our primary use for NetApp AFF is backup for our production. It mostly backs up the databases for all of our Nordstrom retail operations. We've got to keep it running every day, so we've got to make sure that we have all the databases backed up for three years or more.
We use NetApp AFF for artificial intelligence and machine learning applications, and there is no latency that I can see. It has been pretty solid.
This solution is pretty simple when it comes to data protection and data management.
After we implemented NetApp, we noticed that the deduplication and the latency changed a lot. Rather than buy more disk space, we now compress a lot of stuff and we have more storage. Overall, we have more storage and less latency, which saves us money. I would say that we save between half a million and three-quarters of a million dollars, yearly.
We use our data in the same way. This solution benefited us in that it was hard to convince our upper management to buy more disk, so this helped out.
The thin provisioning helped a lot, and it was probably the biggest key. We noticed that we were short in certain areas and needed to add more room for VDI. With thin provisioning, we weren't using as much space, and there wasn't much latency.
Being able to move large amounts of data from one data center to another has helped us. We have a data center in one office and another one that is about a hundred miles away. We share a lot of data between these two sites. There is almost no latency, so it works out perfectly. When we have an incident, such as a power outage at one site, we automatically have a backup on the other end. Also when one side is down, we're still available, although we're limited to certain things on one side. Overall, the backup is pretty good.
We are currently discussing the possible relocation of resources.
I would estimate that our application response time has improved by twenty to thirty percent. For example, our photo studio application is faster.
At this time, we are examining our data center costs and considering a different data center.
Using NetApp has helped alleviate worry about storage being a limiting factor. Had I been asked this a year ago, it would have been a different story. The additional storage means that things are easier and running more smoothly, and we don't have to worry about it breaking down.
The most valuable features for us are controlling the snapshots, the ease of reverting back, and scheduling.
NetApp AFF is very good at cleaning up your storage.
Stability is good, although there is always room for improvement.
We are working on scaling this solution right now. It is a big part of what we want to do, including moving to the cloud.
Technical support for this solution is good, and I've never had a problem. They are straight to the point and give you a lot of detail on what to expect or what you might run into. Whether you call or get support online, it is pretty good.
We started looking into NetApp AFF because our previous solution was outdated, and we were having storage problems. They were older FAS storage, also by NetApp.
We were interested in getting something a little better, including improvements in the storage and the latency.
The initial setup was straightforward. It's always been very easy with how everything works, and their support has been pretty solid too.
We worked with partners for implementation and deployment. Our experience with them was pretty good.
Having our VDI work better is important to us because our work-from-home employees can work a lot better, which helps save money.
We only evaluated NetApp, and we are slowly looking at VMware, VDI, and the cloud.
We went with this solution primarily because of the stability. I also see reducing a lot of storage and cleaning up a lot of stuff. It is pretty good at this.
We are looking into a cloud version in the future.
My advice for anybody who is researching this type of solution is to consider several things. If they are trying to save money, think that they'll have to buy more disk, or want to clean up what they have, I think that they should go ahead with NetApp AFF. It makes a big difference, especially if you see the thirty percent improvement that we have seen. It's a pretty big jump.
This solution is very good, but nobody is perfect.
I would rate this solution an eight out of ten.
We use NetApp AFF mostly as a NAS solution, but we do some SAN with it. Basically, we're just doing file services for the most part.
We're running an AFF A300 as well as a FAS8040 that is clustered together with the AFF A300.
We're not allowed to use cloud models.
We don't use NetApp AFF for machine learning or artificial intelligence applications.
With respect to latency, we basically don't have any. If it's there then nobody knows it and nobody can see it. I'm probably the only one that can recognize that it's there, and I barely catch it. This solution is all-flash, so the latency is almost nonexistent.
The data protection level is great. You can have three disks fail and you would still get your data; I think it takes four failures before you can't access data. The snapshot capability is there, which we use a lot, along with the other really wonderful tools. We depend very heavily on the data protection because it's so reliable. We have not had any data become inaccessible because of any kind of drive failure since we started, and that goes back to our original FAS8040. This is a pretty robust and reliable system, and we don't worry too much about the data that is on it. In fact, I don't worry about it at all because it just works.
Using this solution has helped us by making things go faster, but we have not really implemented some of the things that we want to do. For example, we're getting ready to use the VDI capability where we do virtualization of systems. We're still trying to get the infrastructure in place. We deal with different locations around the world and rather than shipping hard drives that are not installed into PCs, then re-installing them at the main site, we want to use VDI. With VDI, we turn on a dumb system that has no permanent storage. It goes in, they run the application and we can control it all from one location, there in our data center. So, that's what we're moving towards. The reason for the A300 is so that our latency is so low that we can do large-scale virtualization. We use VMware a tremendous amount.
NetApp helps us to unify data services across SAN and NAS environments, but I cannot give specifics because the details are confidential.
I have extensive experience with storage systems, and so far, NetApp AFF has not allowed me to leverage data in ways that I have not previously thought of.
Implementing NetApp has allowed us to add new applications without having to purchase additional storage. This is true, in particular, for one of our end customers who spent three years deciding on the necessity of purchasing an A300. Ultimately, the customer ran out of storage space and found that upgrading the existing FAS8040 would have cost three times more. Their current system has quadruple the space of the previous one.
With respect to moving large amounts of data, we are not allowed to move data outside of our data center. However, when we installed the new A300, the moving of data from our FAS8040 was seamless. We were able to move all of the data during the daytime and nobody knew that we were doing it. It ran in the background and nobody noticed.
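The seamless daytime migration from the FAS8040 to the A300 is consistent with ONTAP's non-disruptive volume move. A sketch of the kind of commands involved; the SVM, volume, and aggregate names are hypothetical:

```shell
# Non-disruptively move a volume from an aggregate on the old FAS8040
# to one on the new A300 within the same cluster
volume move start -vserver svm1 -volume vol_data \
    -destination-aggregate aggr_a300_01

# The move runs in the background; clients stay online throughout
volume move show -vserver svm1 -volume vol_data
```

Because both controllers sit in one ONTAP cluster, the move happens under the same namespace and clients never see a cutover, which matches the "nobody noticed" experience described above.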
We have not relocated resources that have been used for storage because I am the only full-time storage resource. I do have some people that are there to help back me up if I need some help or if I go on vacation, but I'm the only dedicated storage guy. Our systems architect, who handles the design for network, storage, and other systems, is also familiar with our storage. We also have a couple of recent hires who will be trained, but they will only be used if I need help or am not available.
Talking about application response time, I know that it has improved since we started using this solution, but I don't think that the users have actually noticed. They know that it is a little bit snappier, but I don't think they understand how much faster it really is. I noticed because I can look at System Manager or Unified Manager to see the performance numbers. I can see where the numbers were higher before, in places where there was a lot of disk I/O. We had a mix of SATA, SAS, and flash, but now we have one hundred percent flash, so the performance graph is barely moving along the bottom. The users have not really noticed yet because they're not putting a load on it, at least not yet. Give them a chance, though; once they figure it out, they'll use it. I would say that in another year, they'll figure it out.
NetApp AFF has reduced our data center costs, considering the increase in the amount of data space. Had we moved to the same capacity with our older FAS8040 then it would have cost us four and a half million dollars, and we would not have even had new controller heads. With the new A300, it cost under two million, so it was very cost-effective. That, in itself, saved us money. Plus, the fact that it is all solid-state with no spinning disks means that the amount of electricity is going to be less. There may also be savings in terms of cooling in the data center.
As far as worrying about the amount of space, that was the whole reason for buying the A300. Our FAS8040 was a very good unit that did not have a single failure in three years, but when it ran out of space it was time to upgrade.
The most valuable feature of this solution is its simplicity. It is easy to use.
I want an interface through ONTAP that looks more like what SANtricity does for the E-Series. One of the things that I liked about the SANtricity GUI is that it is standalone Java; it doesn't need a web browser. Secondly, when you look at it, there are a lot more details. It shows the actual shelves and controllers, and if a drive goes bad, it shows you the exact physical location. If a drive has failed, is reconstructing, or whatever, it shows you the status, and it shows you where the hot spares are. In other words, by rearranging the GUI, you can make it look like it actually does in the rack. From a remote standpoint, I can call and instruct somebody to go to a particular storage rack, find the fourth shelf from the top and the fifth drive from the left, and check for a red light. Once they see it, they can pull that drive out. You can't get simpler than that.
There are a lot of features with ONTAP, and the user interface is far more complicated than it needs to be. I would like to see it more visual.
We have been using this solution for about three months.
The stability is incredible. If you looked up the word "stability" in the dictionary, it would show you a picture of the A300 or the FAS8040 in a NetApp array.
Scalability is not a problem. When we got the new flash system, we were able to combine it with the old hybrid that included iSCSI, SATA, SAS, and flash, into a four-way cluster. It was all running before the end of the day, and we moved about four hundred terabytes worth of data between them.
I find the technical support for NetApp to be really good, although I'm a little biased because I used to be one of those guys back in the days under the E-series. If I have a question for them and they don't know the answer, they'll find the person who does. When I was a support engineer, that's the way I worked.
Both pre-sales and post-sales engineers are good. Our presales engineer has been a godsend, answering all of the techie questions that we had. If he didn't know something then he would ask somebody. Sometimes the questions are about fixing things, but at other times it is just planning before we tried something new.
We've had NetApp since day one. Within our organization, there are multiple other teams and almost all of them use NetApp on classified networks. We have a little bit of HP, and I think there are a couple of EMCs floating around somewhere, but they're slowly going away. Most of them are being replaced by NetApp.
Mainly, NetApp is very robust, very reliable, and they cost less. Nowadays with the government worried about costs, trying to keep taxes down, that's a big plus. It just so happens that it's a very good product. It's a win-win.
The initial setup was pretty straightforward.
I handled the implementation myself, although I would contact technical support to fill in any gaps that I might have had.
When we installed the new A300, we used NetApp Professional Services because the person who was brought in was able to do it a lot faster than I could. That is all he does, so he is exceptionally proficient at it. It took him about two and a half days, whereas it would have probably taken me a little over a week to complete.
The only thing that I can say about ROI is that our costs are probably going to be less than if we had stuck with our original idea.
We didn't have any other vendors on the list, although we had one team that tried to push HP on us and we said no. HP was really the only other possible alternative that we had. We had tossed around a couple of other vendors, but we never gave them any serious thought. We already knew NetApp, so it made more sense because it could integrate better, and that was the main thing we were looking at: the level of integration. Since we had had NetApp for many years, it just made sense to stick with what we had, but a newer and faster version.
One of my favorite parts of this solution is that most of the day I sit there and do nothing, watching the lights go green on unify manager, knowing that they should stay green because it indicates that it is working. That's what I look for. It works, and most of the time I don't have to do a lot with it unless somebody wants some space carved out.
I've been in the storage business since 1992, working with storage systems before there was such a thing as a storage area network (SAN) or network-attached storage (NAS). Those buzzwords came along about fifteen or sixteen years ago, and I was well entrenched in storage long before then. My expectations are not very high beyond the fact that it's fast and reliable. As far as what we can do with it and its capabilities, I have a pretty low bar, because I know what storage can do and what it should do, and the only time I'm disappointed is when it doesn't do it. I haven't experienced that with NetApp.
The only thing that I would change is the GUI, which is cosmetic. It will not make the product better, but it will make it a lot simpler for those of us who have to support the NetApp equipment, and we can do it in a more timely fashion.
My advice to anybody who is researching this solution is to buy it. Don't worry about it, just buy it. NetApp will help you install it, they'll help you with the right licensing, and they'll help you with all of the questions you have. They will even give you some suggestions on how you might want to configure it based on your needs, which is never accurate, but that's not the fault of the installer. It's usually because the customer doesn't know what they want, but you at least get a good start and they can make recommendations based on past experience. As far as price per performance, this solution is hard to beat. I'm a big supporter.
I would rate this solution a ten out of ten.
The primary use case is for customers who need absolute low latency and have low latency in their workloads. They need maximum performance in their virtualization and file storage environments.
We had some customers who were running virtualization workloads on classical spinning disks. We implemented an AFF system, and they got a huge performance boost out of it because the latency of the SSDs is simply much lower. Actually, most customers benefit from the improved latency and performance from the AFF systems.
Another important aspect of it is because we have customers who use SAN and NAS, they want only one system. This simplifies things by handling both the same way. You set up data protection, and it doesn't matter if it is SAN or NAS, you know the data is protected to a secondary system or to the cloud, wherever you want it to be.
A few customers are tiering out to their own S3 data center, not the cloud. For them, it has reduced their costs because they had an existing S3 solution. They just tier through that, then they need less space in the SSD tier.
The most valuable features are that it runs Data ONTAP, which is compatible with the whole Data Fabric, and its absolute performance.
Simplicity is a very key aspect of the system because you can configure everything with the System Manager. It does most of the complicated things behind your back, so you don't have to handle them. Since it integrates with the Data Fabric, it's very simple to set up a data protection scheme.
We have had customers asking about S3 support for a while now. I heard that is coming in one of the next versions. So, I would like to see S3 target support on the FAS system.
The stability of the AFF system is very high because it's running on ONTAP, and ONTAP is a proven operating system for about 20 years. So, it's very stable. We have thousands of systems with our customers and the AFF system inherits stability from the FAS system. We know it is stable.
The scalability is great. A cluster can be scaled out to up to 24 nodes, and you can also scale up by adding disks. So, scalability is not a problem. You can even scale it down if you need to, and we've done this with a few customers: we can scale down clusters later if the workload or requirements change. That is definitely one of the big plus points.
The technical support works well for us. We do the first level support for all our customers, so the customers call us. If we are ever in trouble and don't know how to respond to the support call, we can open the second level case with NetApp. That works very nicely. So, the customer is in good hands with us, and we are in good hands with NetApp.
We do the initial setup ourselves. We use the CLI, so we don't use the simplified methods because we have some special requirements most of the time.
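As an illustration of what CLI-based provisioning looks like on ONTAP, a minimal sequence might resemble the following; the SVM, aggregate, and volume names are hypothetical, and exact options vary by ONTAP version:

```shell
# Create an SVM (storage virtual machine) for the customer's workloads
vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 \
    -rootvolume-security-style unix

# Create a thin-provisioned data volume and mount it into the namespace
volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 500g \
    -junction-path /vol1 -space-guarantee none

# Enable storage efficiency (deduplication/compression) on the volume
volume efficiency on -vserver svm1 -volume vol1
```

The CLI exposes options that the simplified System Manager workflows hide, which is why shops with special requirements tend to prefer it.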
It definitely reduces costs because it simply takes less power to run these systems. The SSDs draw little power and, in general, offer very large capacities right now. So, the running cost has decreased for a lot of our customers.
The product is at least a nine (out of 10). I have been working with FAS systems for around 15 years. I've come to know how easy and reliable they are. They do what they are supposed to do, and they do it very well. Now, the AFF system is just the flash version, which does the same things, but faster. So, it's almost perfect.
Currently, we are leveraging AFF for our VMware environment solution. So, we use it as a storage for our customers and are leveraging it to provide a faster storage solution for VMware customers.
We are using it for block level based only storage, as of today.
With AFF, the benefit is that we have 27 data centers across the country, and we are able to standardize across all of them and do storage replication. The simplicity of being able to offload cold data to StorageGRID with the tiering layers that NetApp provides makes it easier for us to reduce labor hours, operations, and time wasted trying to figure out how to move data. The simplicity of tiering is a big bonus for us.
In terms of data protection, we have been leveraging SnapMirror with Snapshots to be able to do cloning. We find that being able to SnapMirror to a DR site means that, in a disaster situation, we can recover, and the speed to recovery is much more efficient. We find it much easier than what other vendors have done in the past. Being able to SnapMirror a volume and restore it immediately with a few commands makes it much more effective to use.
AFF has helped us in terms of performance, taking Snapshots, and being able to do cloning. We had a huge struggle with our backup system doing snapshots at the VM level. Using AFF, it has given us the flexibility to take a Snapshot more quickly.
The most valuable features are dedupe, compression, compaction, and the flexibility to offload your cold data to StorageGRID. This is the biggest key point, which drove our whole move to the NetApp AFF solution.
AFF has opened our eyes to how storage adds value. In the past, we looked at it more as just a container where we could dump our customers' databases and let the customers use it. Today, being able to replicate that data to a different location, use it to recover an environment, and have that flexibility with the solution and data are the things that piqued our interest. It's something that we're willing to provide as a solution to our customers.
We are looking at Cloud Volumes today. We would like on-prem VMs that can just be pushed to the cloud, making the transition seamless in a situation where you are low on capacity and need to push a VM to the cloud, then bring it back. A seamless transition is something that we would really enjoy.
Stability has so far met all our requirements. We are leveraging it pretty well. We haven't really had many issues.
We struggled a bit in the beginning. But with the support of NetApp, we were able to upgrade to new firmware which helped us become more effective and stable for almost a month now. So, it's pretty good.
Scalability is the most effective way that we have seen so far from NetApp to be able to add additional disks. The ability to leverage the efficiency has also given us the flexibility to integrate it as one solution. Scalability is working for us. As demand grows, NetApp has been supporting it.
I would rate the support as an eight (out of 10).
Customer service is one area of the product line where I would love to see improvement. I have had several vendor experiences with NetApp where I faced challenges on the initial call, trying to navigate the requirements of the service-level expectation. Their responsiveness could be improved. However, the final result is great. It is just the initial support level where improvement would help to solve problems effectively.
Initially, we were working with EMC VNX devices. But as life kicks in, we were looking for a long-term solution and what our roadmap was in terms of storage aspects. We saw the true benefit in terms of cost as well as the efficiency to be able to leverage storage. We found AFF to be a better fit for our use case.
We had the Dell EMC product line for a long time in terms of portfolio and different options of gears. We looked at NetApp gears and capabilities, not just the storage component. However, the capability of being able to go beyond the storage, as a software-defined solution is something that attracted us to NetApp. It is a fit all solution for now.
In our previous storage, we were doing a lot of roadmapping and giving customers a certain amount of storage. Whether customers used or allocated it, it was sitting in there. With the AFF thin provisioning, it has given us the benefit of being able to reduce our footprint from four arrays to a single 2U array. So, we are able to leverage efficiency and virtual volumes with thin provisioning. This gives us almost three to four times more storage efficiency.
The initial setup was pretty smooth because NetApp came onsite with their support. They gave us the option to send a technician onsite to do the whole cabling. We were part of architecting the whole design, in terms of how we wanted to manage our data lifecycle and take control of the data. With their support, and being able to set it up through the OnCommand System, it was not a lot of clicks. The initial setup was pretty straightforward. Given the expectations that we had and the simplicity of setting it up, it wasn't complex.
So far, we have only rolled it out heavily in one of our data centers. We tested it out, and it's working well. We have put a lot of production workload onto it. Our next target is to roll it out across all the data centers. We are hoping to save almost 30 to 40 percent of our footprint initially. That would be a big savings for us.
I am doing the whole migration for the solution.
AFF has basically given us the ability to reduce the amount of time that we are spending in OnCommand. What we have been able to do now is leverage VSC, which has given us the simplicity to provision and deprovision datastores from within the vSphere environment. Now, we can give our users more options to provision their storage as well, so there is less of a footprint for storage admins. They can now focus on doing more automation rather than just the day-to-day work.
Compared to other vendors, there was more complexity in leveraging their features, given the cost of the features available today and where the roadmap is. NetApp seems to fit our requirements for now.
I would rate the product as a 10 (out of 10), but the whole package including the support would be a nine (out of 10).
Cold data tiering to cloud is something that we're looking at today. Right now, we're more focused on StorageGRID and being able to do everything on-prem. However, we are looking at Cloud Volumes to leverage for the immediate term use case and how we could leverage a quick turnaround to the market for our customers' needs.
We use this solution for NAS and SAN.
NetApp helped us with its ease of deployment and ease of use.
The solution's data protection and data management are also easy.
AFF has improved our response time by about 30%.
We have enough storage, especially with the enhanced deduplication and compaction. It is good to be able to have a multitude of environments without having to worry about having spaces deployed. We always have a good amount of space. We do have multi-performance, with different performance layers for slower and quicker storage.
Multi-protocol is the most valuable feature for us. It does everything in one system: CIFS, iSCSI, and Fibre Channel. Other systems don't do all that.
The procurement process could be improved. It takes a long time for us to receive stuff. The product is good. It's not the product, it's just that it takes forever to get it. It's not our reseller's problem; it's usually held up at NetApp.
Waiting for equipment is one of our biggest hiccups. I live in Pennsylvania and we flew out to Washington state to do an install. We were there for three days, but the product didn't show up. We left and the product came the next day. Then we had to send somebody else out. That's because things were getting held up in shipping and stuff like that. The shipping is my only beef with NetApp.
It is easy to deploy and it's scalable.
I am happy with their technical support. It's not bad. We haven't had to use it very much, but I think they're proficient.
We had an AFF already there. We just upgraded. In my previous company, where I was for five years, we used NetApp extensively. So I had a lot of experience and interaction with it.
We found the setup straightforward. I've been using NetApp for a long time, though.
Our partner is a good friend of mine. I've worked with them for a long time. They work with a lot of other companies. They're huge NetApp distributors.
The price of upgrading the solution is high. I could buy a whole unit of All Flash FAS 300 with a shelf for around $285,000, yet if I want to add one additional shelf, it'll cost me $275,000. So they want you to upgrade by replacing it. It's cheaper to buy a whole new unit than to just scale out. The upside is they last. AFF lasts us three or four years, so that's a good investment.
I don't think it's cost-efficient for a lot of people. Their pricing structure is not competitive at this point with other companies. Support is a fortune on it. Every three years you need to do a rip and replace for an upgrade. It's not an in-place upgrade.
We evaluated Pure Storage and Nimble. I've used HPE 3PAR and Tintri as well. We've looked at a lot of different vendors. Most of them were better in terms of their upgrade process. Nimble and Pure have a hot upgrade process, which NetApp does not have. Although the cost of Pure is a lot more. Nimble was a good product, but they were bought by HP I think, so that will probably go away. I don't see it as much as I did before. We chose NetApp because of its speed and stability.
I think it fits a multitude of needs. For someone who doesn't know how to provision storage, it gives you CIFS and NFS storage, and it gives you SAN protocols so you can provision iSCSI or Fibre Channel LUNs, depending on what you're using it for. It's basically an all-in-one solution. It does everything for you.
I would rate this solution as nine out of ten. There have been a few times we've seen buggy releases on some of the ONTAP software upgrades. Nine is good, though. I never get a ten when we get our reviews. If you get a ten, there's no room for improvement. Nine gives you room to improve. If you give it a ten, they're not going to have any reason to improve.
We use this solution for back end storage of vSphere virtual machines over NFS.
This product was brought in when I started with the company, so it's hard for me to answer how it has improved my organization. I would say that it's improved the performance of our virtual machines because we weren't using flash before this; we were only using Flash Cache. Stepping up from Flash Cache with SAS drives to an all-flash system made a notable difference.
Thin provisioning enables us to add new applications without having to purchase additional storage. Virtually anything that we need to get started with is going to be smaller at the beginning than what the sales guys that sell our services tell us. We're about to bring in five terabytes of data. Due to the nature of our business operations that could happen over a series of months or even a year. We get that data from our clients. Thin provisioning allows us to use only the storage we need when we need it.
The solution allows the movement of large amounts of data from one data center to another, without interrupting the business. We're only doing that right now for disaster recovery purposes. With that said, it would be much more difficult to move our data at a file-level than at the block level with SnapMirror. We needed a dedicated connection to the DR location regardless, but it's probably saved our IT operations some bandwidth there.
I'm inclined to say the solution reduced our data center costs, but I don't have good modeling on that. The solution was brought in right when I started, so in regards to any cost modeling, I wasn't part of that conversation.
The solution freed us from worrying about storage as a limiting factor. In our line of business, we deal with some highly duplicative data, having to do with what our customers send us to store and process on their behalf. Redundant storage due to business workflows doesn't penalize us on the storage side once block-level deduplication and compression come into play. It can make a really big difference there. In some cases, some of the data we host for clients gets the same type of compression you would see in a VDI-type environment. It's been really advantageous to us there.
The speed, inline deduplication, and compression are really nice. It's also just easy to manage. We use Snapshot and SnapMirror offsite, which give us some good recovery options.
The solution's data protection and management are as simple as you can hope for. On the data protection side, we have a gigabit connection to our disaster recovery center and we replicate snapshots with SnapMirror hourly. This gives us a really good way to roll things back if we need to but have everything offsite at the same time.
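An hourly replication scheme like the one described can be set up with a couple of ONTAP CLI commands; the cluster, SVM, and volume names below are hypothetical, and the exact policy names depend on the ONTAP version:

```shell
# Create an hourly SnapMirror relationship to the DR site
snapmirror create -source-path prod_svm:data_vol \
    -destination-path dr_svm:data_vol_dr \
    -policy MirrorAllSnapshots -schedule hourly

# Perform the initial baseline transfer; subsequent hourly updates
# then transfer only changed blocks
snapmirror initialize -destination-path dr_svm:data_vol_dr
```

Because updates are incremental at the block level, the hourly cadence stays cheap even over a modest link such as the gigabit connection mentioned above.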
I really don't have anything to ask for in this regard because we're not really pushing the envelope on any of our use cases. NetApp is really staying out ahead of all of our needs.
I believe that there were firmware issues. I think it was just a mismatch of things that were going on. It could have possibly been something in the deployment process that wasn't done exactly right.
It's reliable. I don't have to lose sleep over something being wrong with the system. The few incidents we've had here and there have been resolved quickly, either by our channel partner or by NetApp support.
As for scalability, we've added shelves in with very little effort. We're probably not what NetApp wants to see, but we've been purchasing some large six-terabyte SATA drives to expand out colder storage and just get those racked and plugged in. It's very easy to take it up and scale. We are looking very slowly at moving towards the cloud and the NetApp approach to cloud storage is way ahead of what we need, which is very reassuring.
The technical support team is always easy to deal with. Fortunately I haven't had to deal with them much, but when the need arises they're good to work with.
The decision to go with AFF was made before me. They switched from a NetApp FAS system, which is spinning-disk storage. We came over to that from a Hitachi BlueArc system that was very old. The FAS system was doing well, but when it came time to add more storage, it was obvious that flash was the way to go, specifically for virtual machine storage and application delivery.
I would say the initial setup was straightforward. When the stuff ships out, it comes with diagrams of how everything needs to be wired. The online resources are great to read through and the ONTAP system is consistent across platforms. Deploying AFF is less complicated than deploying older solutions.
We do a lot of work with our partner, which is informative. They know the products well and do a great job working with us to meet our schedules and technical needs.
I'd definitely encourage people to do a proof of concept and get trial gear in there because it's going to shine. It's something that when you actually get in there and use it, it just clicks.
I would rate this solution as a ten out of ten.
Our primary use case for AFF is for file storage.
It simplifies IT operations.
Thin provisioning enables us to add new applications without having to purchase additional storage. Thin provisioning is obviously heavily utilized so we don't have to buy a new kit.
AFF has enabled us to move large amounts of data from one data center to another. It has also affected IT operations by greatly improving resilience.
AFF SSDs have improved application response times. We've seen a five-fold decrease in the latency figure.
Datacenter costs have decreased because of the smaller footprint and less power usage. In one system we saw six racks go down to half a rack. It's probably five to one in terms of actual data space.
Speed, reliability, ease of use are the most valuable features.
The overall latency in our environment is very good.
We don't use the solution for artificial intelligence or machine learning applications.
The simplicity around data protection and data management is very good. We use SnapVault for data protection which works very well. SnapMirror is also good. We mainly use the command line a lot, so we don't tend to use many provisioning tools.
We have had issues with CIFS presentations and outages, so if that was removed, we could do seamless upgrades without affecting CIFS presentations. That would be an advantage. That's about the only improvement I can think of.
Scalability is very good.
The technical support is very good. We haven't had any issues.
Initially, the setup was complex because it was new and very different, it was 7-Mode to cDot. We got a lot of support from NetApp so it wasn't an issue. It was just complex, but they provided the assistance we needed.
We are integrators but NetApp consultants also help.
We always use NetApp for our file services.
I would rate it an eight out of ten. Nothing would make it a ten, nothing is perfect. I would advise someone considering this solution to buy it!
Our primary use case of this solution is for SAN block storage.
We don't use AFF for artificial intelligence or machine learning applications.
It has improved the way my organization functions because it has enabled us to host a very fast, multi-tenant private cloud solution.
AFF has improved application response time by a lot.
This solution has helped us to stop worrying about storage as a limiting factor. We know we've got enough storage left and it's easy to manage, so we can tell how much real storage we do have left.
We use SnapMirror a lot, but the speed of the AFF is also very valuable.
The overall latency in our environment is very low because it's All Flash and we've got 10GbE dedicated to the storage network.
AFF's simplicity around data protection and data management is pretty good. With the NetApp volume encryption, we're getting data at rest encryption right now. It was very easy to turn on and very easy to manage with the onboard key manager.
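Enabling data-at-rest encryption with the onboard key manager is a short CLI exercise; this sketch assumes ONTAP 9.6+ syntax, and the SVM, aggregate, and volume names are hypothetical:

```shell
# Enable the onboard key manager (prompts for a cluster-wide passphrase)
security key-manager onboard enable

# Create a new volume with NetApp Volume Encryption enabled
volume create -vserver svm1 -volume secure_vol -aggregate aggr1 \
    -size 200g -encrypt true
```

Existing volumes can also be converted in place, which is part of why turning encryption on is as painless as described.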
It has enabled us to add new applications, without having to purchase additional storage. We've over-provisioned our storage quite a bit, simply because we know we've got time before people will grow into it.
It has not reduced our data center costs. NetApp charges a pretty penny for their stuff.
The next release desperately needs NFS4 extended attributes.
In terms of what needs improvement, the NAS areas are a little behind on technologies. For example, SMB 3 is not quite up to speed with a lot of the storage spaces stuff. NFS4 doesn't support some of the features that we need.
It's rock solid.
Scalability is expensive.
Their technical support is very good. We use them quite a bit and we have had good experiences with them.
We've been with NetApp since I came on the project and because I had NetApp experience before I brought it with me.
I've set up a NetApp network previously. The setup was pretty straightforward.
We used an integrator and we had a very good experience with them.
We've looked at EMC and Microsoft storage spaces. Neither one of them really compares.
My advice to someone considering this solution is that if you can afford it and you will be using it a lot, go for it.
I would rate it an eight out of ten. To make it a perfect ten it would need to be cheaper.
We use NetApp AFF products for file storage across multiple agencies in the State of Nebraska. We are a consolidated state, so all of the agencies of our state have consolidated files on NetApp products. We use AFF as our top tier solid-state storage for application and user data storage.
Different customers will have different needs, e.g., when you're looking at somebody who just has simple file service needs, then it's very easy. That can be met with many different products. But, we also like that you can build SVMs with different network profiles, vLANs, security protocols, etc.
We like the ability to create different SVMs on AFF products because they can create different vLANs and network access points for different customers. We can actually drop virtual appliances onto any customer's network. If they have different firewall and network profiles than each other, we can keep all of the data completely separated.
We can also meet the different needs for different Snapshot and backup policies. A Department of Labor or Department of Health and Human Services will have very different needs from just standard user profile folders.
We like AFF because it has a very high reliability rate with very high performance. We are using it for top tier performance on application and virtual machine storage, as well as just being able to separate out SVMs for different security and network needs for all of our different customers across the state.
We use the Snapshot feature to simplify backups for data protection. We set different policies that let our agencies choose which backup policy they want for their Snapshots. It's very simple. Users can be given the opportunity to look at previous versions directly from the Windows interface, or they can call or put in a ticket with our IT group if they need a larger system restore, because their data is protected with NetApp and replicated as well.
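A per-agency Snapshot policy of this kind can be expressed in two ONTAP CLI commands; the policy, SVM, and volume names here are hypothetical, and the schedules/retention counts would differ per agency:

```shell
# Define a policy keeping 24 hourly and 7 daily Snapshot copies
volume snapshot policy create -vserver agency_svm -policy agency_default \
    -enabled true -schedule1 hourly -count1 24 -schedule2 daily -count2 7

# Apply the policy to one of the agency's volumes
volume modify -vserver agency_svm -volume user_data \
    -snapshot-policy agency_default
```

With CIFS "Previous Versions" integration, those Snapshot copies are what users see when they self-restore from the Windows interface.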
Stability is great. We haven't had to replace a single drive. We haven't had any issues with the AFFs or compatibility issues. We haven't had any problems at all. It has worked exactly the same as our previous system but with greater performance.
In both our traditional cluster and MetroCluster, we have been able to scale very easily. We just add additional shelves of solid-state disk. They expand the storage array so we can just increase the aggregate sizes and assign more space. It's been very simple to scale.
Tech support is great with NetApp if you can get past Tier 1. A lot of times when you open a new case or do a direct dial-in with an issue, like with any support, you will definitely reach a Tier 1 level that is not particularly helpful until you get escalated to an expert. However, the experts that I have reached have always been great.
We have several different SAN and NAS products in our environment. With the traditional spinning storage, we were running into performance bottlenecks. The AFF products have given us the opportunity to move people to all-flash, high-performance storage tiers, which makes their virtual machines, database servers, and SQL run much better in a flash environment than in a hybrid or spinning-disk environment.
Switching to AFF has improved the performance of a lot of our virtual machines in a VMware environment. The number of support tickets that we receive has fallen to almost zero because of this, so it's been a real help for our virtual server support team.
We have used the solution’s thin provisioning to add new applications without having to purchase additional storage. We use thin provisioning on all of our flash arrays at this point. It gives us the choice to be able to overprovision and take advantage of compression, compaction, and thin provisioning all at the same time. We can get more out of the purchases that we make.
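The overprovisioning pattern described above corresponds to creating volumes with no space guarantee and letting them grow on demand; the names and sizes in this sketch are hypothetical:

```shell
# Thin-provision a volume: no space is reserved up front, so the
# aggregate can be logically overcommitted
volume create -vserver svm1 -volume app_vol -aggregate flash_aggr \
    -size 1t -space-guarantee none

# Allow the volume to grow automatically as real usage approaches its size
volume autosize -vserver svm1 -volume app_vol -mode grow -maximum-size 2t
```

Combined with inline compression, compaction, and deduplication, physical consumption stays well below the logical capacity handed out.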
I would like it to be a lot less expensive, but it's been a very good solution for us.
I would give it a 10 (out of 10). It's been solid. The performance is great. It has solved a lot of problems in our environment.
The primary use case for AFF is as SAN storage for our SQL database and VMware environment, which drives our treatment systems. We do not currently use it for AI or machine learning.
We are running ONTAP 9.6.
Our AFF 8040 is currently helping us in terms of response time and speed because it is a flash system. Most importantly, it enables our SQL Cluster to respond to database queries and things a lot faster. It minimizes latency and stuff like that, which is important in radiation treatment.
The latency is important in that the data we serve from the system drives a LINAC, which is a big machine that shoots radiation into cancer patients. The latency affects how long patients end up having to sit there, tied down to the tabletops, for the radiation treatment. It also helps speed up the setup of the machine, which takes about five minutes because the machine has to rotate around and do all these things. Sometimes, if the system doesn't respond in enough time, interlocks happen and the machine stops. There are a lot of safety interlocks that cause the system to stop if things don't happen right, so we aren't mistreating patients and killing people. It's not a typical file server. We usually tell people it's a black box for radiation treatment. On airplanes, you have the black box which records all data; this is exactly what our NetApps do for radiation treatment.
Our AFF does simplify our SAN and NAS environments. We currently don't use any cloud because we're a medical institution that hasn't approved cloud storage of any type, due to HIPAA concerns. With our old NAS solution, we could only do one or the other: it was NAS or SAN. The AFF provides the ability to do both. It consolidates a lot of our storage into one or two chassis, which saves money in our data center. It saves a lot of rack space, which we don't have much of anymore. We have a new building and are almost out of space already.
The simplicity of the data management in our current system is really easy, especially with the setting up of redundant volumes and SnapMirror. We have it mirrored over to an 8200 non-flash system. We use that for our DR SVMs, so if our SQL Cluster goes down, the other volumes take over, and we have no downtime because it drives patient treatment. It gets complicated fast.
The data protection that we currently use is SnapMirrors and SnapVaults. We have our SnapVault off on an offsite with a FAS2552 system.
We currently use some thin provisioning for our planning system, but we will probably move away from it because our Solaris planning system has some issues with thin provisioning and the way Solaris handles it, since Solaris uses the ZFS file system. ZFS doesn't like the thin provisioning changing things underneath it, and it brings systems down, which is bad.
One thing that could be improved is the web interface. I would like to see some of the features in the web interface, like where the Snapshots are located, brought up a bit more to the front. This way, I don't have to do as many clicks if I'm using the GUI, which I do once in a while. We usually go in and look at Snapshots for doing restores, and if that were more up front, it might save a few clicks. It's not so bad.
We have had our AFF for three years now and have not had any problems with it whatsoever. It's been rock solid. We haven't lost a drive or a node. We haven't had a hardware failure. It has been fantastic.
The scalability of AFF, and of our NetApp systems in general, has been wonderful. I have another enclosure full of flash drives sitting on our dock right now, ready to go in. I can schedule it, put it in the rack, and have it in the system and utilized in maybe half an hour. It works just great.
Our AFF has freed us up greatly in terms of allocating storage. Our old system didn't expand at all. With the new system, we can add another shelf in, merge data into the aggregate, and grow volumes (all live), which is great in a hospital.
The tech support has been awesome. We have meetings with our local guys once a month, whether we need it or not, and they answer our questions. I have been able to call them on demand on weekends when we were doing upgrades and side projects on our NetApp and ran into some issues. I was able to call, and they stopped and helped out, which has been fantastic. They are probably our best vendor.
I chose NetApp because I was most impressed with the engineers we talked to about the system and its overall metrics, along with the things that we were given, like latency and redundancy figures. I was most impressed with the demos that they did, which included: the ease of setting up an AFF, the ease of deploying storage to a SQL Cluster, and the overall simplicity of moving data around to back things up.
Our AFF has improved our application response time greatly. Our database response time has improved a lot compared to our previous SAS storage. Those systems were nine years old and about due to go. When we went to flash, we noticed a huge increase in application response rate (50 percent or more). It was like night and day.
It was a more expensive system at the time we bought it because flash was relatively new. We probably save the most money just in the time to set it up. We had it set up in an afternoon, and we were serving out data later that day. It's been rock solid; we haven't had to sit there and baby it, fixing, tweaking, and tuning it. It just works. The biggest savings is not having to sit there and keep it warm.
I would give our AFF probably a 10 (out of 10). We had no problems with it. It's an easy upgrade. We can do everything on the fly in the middle of the day, which is important. With the hospital, it's been a great all around piece of hardware.
This solution provides storage for our entire company.
We have a unified architecture with NAS and SAN from both NetApp ONTAP AFF clusters.
This solution reduced our costs by consolidating several types of disparate storage. The savings come mostly in power consumption and density. One of our big data center costs, which was clear when we built our recent data center, is that each space basically has a value tied to it. Going to a flash solution enabled us to have a lower power footprint, as well as higher density. This essentially means that we have more capacity in a smaller space. When it costs several hundred million dollars to build a data center, you have to think that each of those spots has a cost associated with them. This means that each server rack in there is worth that much at the end. When we look at those costs and everything else, it saved us money to go to AFF where we have that really high density. It's getting even better because the newer ones are going to come out and they're going to be even higher.
Being able to easily and quickly pull data out of snapshots is something that benefits us. Our times for recovery on a lot of things are going to be in the minutes, rather than in the range of hours. It takes the same amount of time for us to put a FlexClone out with a ten terabyte VM as it does a one terabyte VM. That is really valuable to us. We can provide somebody with a VM, regardless of size, and we can tell them how much time it will take to be able to get on it. This excludes the extra stuff that happens on the back end, like vMotion. They can already touch the VM, so we don't really worry about it.
One of the other things that helped us out was the inline efficiencies such as the deduplication, compaction, and compression. That made this solution shine in terms of how we're utilizing the environment and minimizing our footprint.
With respect to how simple this solution is around data protection, I would say that it's in the middle. I think that the data protection services that they offer, like SnapCenter, are terrible. There was an issue that we had in our environment where if you had a fully qualified domain name that was too long, or had too many periods in it, then it wouldn't work. They recently fixed this, but clearly, after having a problem like this, the solution is not enterprise-ready. Overall, I see NetApp as really good for data protection, but SnapCenter is the weak point. I'd be much more willing to go with something like Veeam, which utilizes those direct NetApp features. They have the technology, but personally, I don't think that their implementation is there yet on the data protection side.
I think that this solution simplifies our IT operations by unifying data services across SAN and NAS environments. In fact, this is one of the reasons that we wanted to switch to this solution, because of the simplicity that it adds.
In terms of being able to leverage data in new ways because of this solution, I cannot think of anything in particular that is not offered by other vendors. One example of something that is game-changing is in-place snapshotting, but we're seeing that from a lot of vendors.
The thin provisioning capability provided by this solution has absolutely allowed us to add new applications without having to purchase additional storage. I would say that the thin provisioning coupled with the storage efficiencies are really helpful. The one thing we've had to worry about as a result of thin provisioning is our VMware teams, or other teams, thin provisioning on top of our thin provisioning, which you always know is not good. The problem is that you don't really have any insight into how much you're actually utilizing.
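The visibility problem with thin provisioning stacked on thin provisioning can be made concrete with a little arithmetic. This is a purely illustrative sketch, not NetApp tooling; every name and capacity figure below is hypothetical:

```python
# Illustrative only: why stacked thin provisioning hides true utilization.
# All layer names and TB figures here are hypothetical, not from the review.

def overcommit_ratio(provisioned_tb: float, physical_tb: float) -> float:
    """Logical space promised to consumers divided by the real capacity beneath it."""
    return provisioned_tb / physical_tb

# Storage layer: 100 TB of physical flash, with 180 TB of thin-provisioned LUNs carved out.
storage_layer = overcommit_ratio(provisioned_tb=180, physical_tb=100)

# VMware layer: those 180 TB of datastores are thin-provisioned again,
# with 300 TB of virtual disks promised to VMs.
vm_layer = overcommit_ratio(provisioned_tb=300, physical_tb=180)

# Effective promise against real disks -- the number neither team sees directly.
effective = storage_layer * vm_layer

print(f"storage layer: {storage_layer:.2f}x")  # 1.80x
print(f"VM layer:      {vm_layer:.2f}x")       # 1.67x
print(f"effective:     {effective:.2f}x")      # 3.00x
```

With these made-up numbers, each team sees a modest overcommit, but the combined promise is 3x the physical capacity, which is exactly the insight gap the reviewer describes.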
This solution has enabled us to move lots of data between the data center and cloud without interruption to the business. We have SVM DR relationships between data centers, so for us, even if we lost the whole data center, we could failover.
This solution has improved our application response time, but I was not with the company prior to implementation so I do not have specific metrics.
We have been using this solution's feature that automatically tiers data to the cloud, but it is not to a public cloud. Rather, we store cold data on our private cloud. It's still using object storage, but not on a public cloud.
I would say that this solution has, in a way, freed us from worrying about storage as a limiting factor. The main reason is, as funny as it sounds, that our network is now the limiting factor. We can easily max out links with the all-flash array. Now we are looking at going back and upgrading the rest of the infrastructure to be able to keep up with the flash. Right now we don't even have a strong NDMP footprint because we couldn't support it, as we would need far too much speed.
The most valuable features of this solution are snapshotting and cloning. For example, we make use of FlexClone. We're making more use of fabric pools, which is basically tiering of the storage. That way, instead of having just ONTAP with this expensive cost, if we want to roll off to something cheaper, like object storage, we can do that as well.
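As a rough illustration of the tiering the reviewer describes, here is a hedged sketch of a FabricPool-style cooling-period rule. ONTAP applies this per block inside the aggregate, not per file as shown here; the 31-day figure (the documented default cooling period for the "auto" policy) and all data names below are illustrative assumptions:

```python
# Hedged, per-file illustration of a FabricPool-style cooling rule.
# ONTAP tiers cold *blocks* within an aggregate; names and dates are made up.
from datetime import date, timedelta

COOLING_DAYS = 31  # assumed default cooling period for the "auto" tiering policy

def tier_for(last_access: date, today: date, cooling_days: int = COOLING_DAYS) -> str:
    """Place data on the cloud (capacity) tier once it has cooled past the threshold."""
    if today - last_access > timedelta(days=cooling_days):
        return "cloud"
    return "performance"

today = date(2019, 11, 1)
data = {
    "active_db_extent":  date(2019, 10, 30),  # touched two days ago -> stays hot
    "old_backup_extent": date(2019, 6, 1),    # months cold -> tiers to object storage
}
placement = {name: tier_for(seen, today) for name, seen in data.items()}
print(placement)  # {'active_db_extent': 'performance', 'old_backup_extent': 'cloud'}
```

The point of the sketch is the economics the reviewer mentions: data past the cooling threshold rolls off expensive flash onto cheaper object storage automatically.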
The cost of this solution should be reduced.
SnapCenter is the weak point of this solution. It would be amazing from a licensing standpoint if they got rid of SnapCenter completely and offered Veeam as an integration.
This solution is very stable. We have had downtime, but only on specific nodes, and we were always able to fail over to the other nodes. We had downtime from a power outage in our data centers, but that was mainly because we didn't want the other side to have to take the load of an SVM DR takeover when we knew it was going to be back up within a certain amount of time. Other than that, we have had no downtime.
It seems to be almost infinitely scalable. Being an organization as large as we are, it definitely meets our needs.
We have onsite staff that is a purchased service from NetApp, so we do not directly deal with technical support.
Prior to this solution, we had all of these disparate types of storage. It was a problem because, for example, we'd be running low on NAS capacity while there was extra storage sitting in our SAN environment. Individual solutions seem a little cheaper, but when you added the whole cost up, it was cheaper for us to just have a single solution that could do everything.
We have seen ROI, but I can't quantify how much.
This is a really good solution that definitely meets our needs. It integrates well with all of the software that we're using and they have a lot of good partnerships that enable that. There are a lot of things that can bolt right in and talk to it natively, like Veeam and other applications. That can really make the product shine. I just wish that NetApp would buy Veeam.
I would rate this solution an eight out of ten.
We are in the process of moving to AWS and we are using this solution to help move all of our data to the cloud, using the tiering and other functionality.
We have approximately fifty AFF clusters spread across three locations.
We plan to use this solution for artificial intelligence and machine-learning applications, but we are still in the PoC right now. It is something that my team is working on.
Our DR and backup are done using SnapMirror.
This solution has helped simplify our IT operations. We can easily move data from on-premises to the cloud, or from one cloud to another cloud. NetApp SnapShots and SnapMirror are also helpful.
The thin provisioning has allowed us to add new applications without having to purchase additional storage. We are shrinking the data with functions like deduplication, getting almost two hundred percent efficiency. It is very helpful.
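The "almost two hundred percent" figure reads as the usual logical-to-physical storage efficiency ratio. A quick back-of-envelope check, with hypothetical capacities (none of these numbers come from the review):

```python
# Back-of-envelope check of a ~200% efficiency claim.
# Storage efficiency is usually quoted as logical data : physical space consumed.

def efficiency_ratio(logical_tb: float, physical_tb: float) -> float:
    """Ratio of data as written by clients to space actually consumed on disk."""
    return logical_tb / physical_tb

# Hypothetical: 200 TB of logical data lands in 100 TB of physical flash
# after deduplication and compression -> a 2:1 ratio, i.e. ~200%.
ratio = efficiency_ratio(logical_tb=200, physical_tb=100)
savings_pct = (1 - 1 / ratio) * 100  # physical space saved versus storing raw

print(f"{ratio:.1f}:1 efficiency, {savings_pct:.0f}% less physical space")
# -> 2.0:1 efficiency, 50% less physical space
```

A 2:1 ratio is the same thing as halving the flash you have to buy, which is why it pairs so well with thin provisioning in these reviews.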
This solution has allowed us to move very large amounts of data without affecting IT operations. We have moved four petabytes to the cloud. We have moved data from on-premises to the cloud, and also between clouds. It is easy to do. For example, if you want DR or a backup in a second location, then you just use SnapShot. If you have a database that you want to have available in more than one location then you can synchronize them easily. We are very happy with these features.
Our application response time has been improved since implementing this solution. The AFF cluster is awesome. Our response time is now below two milliseconds, whereas it used to be four or five milliseconds. This is very useful.
The costs of our data center have definitely been reduced by using this solution. The power consumption and space, obviously, because this solution is very small, have been reduced.
We have been using this solution to automatically tier cold data to the cloud. I would not say that it has affected our TCO.
This solution has not changed our position in terms of worrying about storage as a limiting factor.
The most valuable features of this solution are the deduplication and the ability to move data to different clouds. We have been using Cloud Sync and Cloud Volumes, and we have moved four petabytes using Cloud Sync.
It would be very useful if we could do the NFS to CIFS file transfer, but it is not supported at this time.
We are finding limitations when it comes to moving data to AWS.
We have been using this solution for ten years.
The stability of this solution is fine. We have not experienced any downtime or any issues.
Scalability is something that we are spending time on, but it is an internal issue related to seeking financial approval. The scalability of the solution is not a technical issue.
The technical support for this solution has always been number one. There is no doubt that they are getting more responsive and more technical.
We performed a PoC using Cloud Volumes and Cloud Sync, and we were happy with the time, durability, and availability.
The initial setup of this solution is straightforward.
We can install this solution ourselves.
We have seen ROI from this solution.
We evaluated a solution by EMC, but we found that their filesystem was not as robust. That is the reason we chose NetApp.
We are really happy customers and this is a solution that I can recommend.
I would rate this solution a nine out of ten.
We use it primarily for CIFS and NFS shares, e.g., Windows shares and network shares for Linux-based systems.
It has been very helpful for us. Data mobility is big. Being able to move data between different locations quickly and easily. This applies to data protection and replication. The hardware architecture has been very good as far as easily being able to refresh environments without any downtime to our applications. That's been the biggest value to us from the NetApp platforms.
The solution simplifies IT operations by unifying data services across SAN and NAS environments on-premise.
We are working on a lot of efforts right now where environments need multiple copies of data. Today, those are full copies, which require us to have a lot of storage. Our plan is to leverage NetApp Snapshot technology to lessen the amount of capacity that we require for those environments, primarily our QA and dev environments.
We've done full data center migrations. The ease of replication and data protection has made moving large amounts of data from one data center to another completely seamless migrations for us.
Early on, the clustered architecture was a little rough, but I know in the last four years, the solution has been absolutely rock solid for us.
Something I've talked to NetApp about in the past is going more to a node-based architecture, like the hyper-converged solutions that we are doing nowadays, because the days of having to buy massive quantities of storage all at one time have changed to being able to grow in smaller increments from a budgetary standpoint. This change would be great for our business and is what my leadership would like to see in a lot of the things they purchase now. I would like to see that architecture continue to evolve in that clustered environment.
I would like to see them continue to make it simpler, continuing to simplify set up and the operational side of it.
I can't remember the last time we had an issue or an outage.
It is one of the best solutions out there right now. It is extremely simple, reliable, and seldom ever breaks. It's extremely easy to set up. It's reliable, which is important for us in healthcare. It doesn't take a lot of management or support, as it just works correctly.
Our NetApp environment has been so stable and simple that we don't have a lot of resources allocated to support it right now. We probably have three engineers in our entire enterprise to support our entire NetApp infrastructure. So, we haven't necessarily reallocated resources, but we already run pretty thin as it is.
Scalability has been great. There have been some things I would like to see them do differently, but overall, the scalability has been wonderful for us.
The solution’s thin provisioning has allowed us to add new applications without having to purchase additional storage. We use thin provisioning for everything, and we use the deduplication and compression functionality on all of our NetApps. If we weren't using thin provisioning, we'd probably have two to three times more storage on our floor right now than we do today.
We use all-flash arrays for our network shares. We have a couple of other platforms that we have used in the past, and I really wanted to move away from those for simplicity. Another big reason is automation. NetApp has done a great job with their automation. The Ansible modules, along with all the PowerShell cmdlets they have developed, make it very consumable for automation, which is very big for us right now. One of the big driving forces was having a single operating environment, regardless of whether I'm running an all-flash array or a hybrid array. It's the same look and feel, and everything works exactly the same regardless. That definitely speaks to the simplicity and ease of automation. I can automate and use it everywhere, whether it's cloud, on-prem, etc. That was one of the real reasons we decided to go in that direction.
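The Ansible modules mentioned here ship in the `netapp.ontap` collection. A hedged sketch of what such a playbook might look like; the hostname variables, credentials, SVM, aggregate, and volume names are placeholders, not details from this review:

```yaml
# Hedged sketch using the netapp.ontap Ansible collection.
# All names, sizes, and credential variables below are placeholders.
- name: Provision an NFS volume on an AFF cluster
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a thin-provisioned volume with cold-data tiering
      netapp.ontap.na_ontap_volume:
        state: present
        name: app_data01
        vserver: svm_nas
        aggregate_name: aggr1_ssd
        size: 1
        size_unit: tb
        space_guarantee: none        # thin provisioning
        tiering_policy: auto         # FabricPool cold-data tiering
        junction_path: /app_data01
        hostname: "{{ cluster_mgmt_ip }}"
        username: "{{ ontap_user }}"
        password: "{{ ontap_pass }}"
```

The `space_guarantee: none` and `tiering_policy: auto` settings correspond to the thin provisioning and FabricPool features the reviewers describe, assuming a reasonably recent version of the collection.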
The overall setup is very easy. Deploying a new cDOT system is the hardest part. On our business side, because our environment is very complex, some complexity came up. In general, that is one nice thing about NetApp: regardless of how simple or complex your environment is, it can fit all of those needs. Especially on the network side, it can fit into those environments and take advantage of all the technologies that we have in our data centers, so it's been really nice like that.
We did the deployment ourselves.
The solution has improved application response time. We are using the All Flash FAS boxes and our primary use case is around file shares, which aren't really that performance intensive. Therefore, overall response times have improved, but it's not necessarily something that can be seen.
From a sheer footprint savings, we're in the process of moving one of our large Oracle environments which currently sits on a VMAX array, taking up about an entire rack, to an AFF A800 that is 4U. From just the sheer power of cooling and rack-space savings, there have been savings.
I haven't seen ROI on it yet, but we're working on it.
We did RFIs with the different solutions. We were looking at a NetApp, Isilon, and Nutanix. Those were three that we were looking at. NetApp won out primarily around simplicity and ease of automation. It's the different deployment models where you can deploy in the cloud or on-prem, speaks to its simplicity. Our environment is very complex already. Anything that we can do to simplify it, we will take it.
When you are evaluating solutions:
You will be looking at things, like cloud, automation, and simplicity, regardless of how big you are. The NetApp platform gives you all of these things in a single operating system, regardless of where you deploy.
The solution has freed us from worrying about storage as a limiting factor. I'm very confident that the NetApp platform will do what they say it's going to do. It's very reliable. I know that if there is an issue, I can quickly move that data wherever I need to move it with almost no downtime. It gives me a lot of data flexibility and mobility. In the event that I did need to move my workloads around, I can do that.
I would give it a nine out of 10. The only reason I wouldn't give it a 10 is because I would like to see some architectural changes. Other than that, its simplicity and the ability to automate are probably the two biggest things. Being able to move data in and out of the cloud, if and when we decide to do that, it gives us the most flexibility of anything out there.
We do not use this solution for AI or machine learning applications.
We are talking about automatically tiering cold data to the cloud, but we are not doing it yet.
The primary use case is enterprise storage for our email database system.
We have just been using on-premise. We are looking to move the workloads to the cloud, but right now it's just on-premise.
From an operations standpoint, we pretty much set it and forget it. We don't have to manage anything because of the AFF speed and low latencies. Because a big requirement in the healthcare industry is low-latency response times, it has been perfect.
With the thin provisioning, we can overprovision our boxes, but there are still applications which are storage capacity hogs. So, we still have to report.
It simplifies our IT operations and makes them more efficient.
The most valuable feature is that it's fast. We do not use the solution for artificial intelligence or machine learning applications, but our overall latency is low. With our SQL Servers and Oracle servers, compared to the older filers, like 7-Mode or the 8000s in cluster mode, or even performance on Pure flash systems, you can't compare. We are seeing submillisecond latency, which is pretty nice.
The solution has enabled us to move large amounts of data from one data center to another (on-premise) without interruption to the business using SnapMirror.
The solution has improved application response time. Compared to the 3250s and 8000s, it has been night and day.
We would like to have NVMe with FabricPool working, because it broke our backups. We enabled FabricPool to do the tiering from our AFFs to our Webscale, but it sort of broke our Cobalt backups. I think they're going to fix it in v9.7.
The SnapDrive is just another piece of software which is used to manage the storage on the filers. They could use some updates.
There are still a lot of things that we have to think about, like storage and attributes, to be able to go ahead with it.
We haven't gone to their standard Snaps product yet, but that's supposed to centralize everything. Right now, we have to manage individual hosts that connect to the stores. That's sort of a pain.
We've been using NetApp for the last 15 years.
So far, the stability is good. It's great.
For the AFFs, I haven't had any problems with the scalability. We went from two to six nodes without a problem.
It helped us easily move about 10 petabytes of data from San Diego to Phoenix.
The technical support has been awesome. Whenever we have a problem, we just give NetApp's support a call, and they fix our issue.
With the newer versions, we have needed less support. The solution has just been working.
We didn't switch over. We have been using NetApp for 15 years.
This solution has reduced our data center costs. When we went from the 8000 and 3200 series, it took us from 20 racks of storage down to two.
The initial setup was straightforward. We've been deploying NetApps for the last 15 years. We are pretty familiar with the boxes.
I've been using the technology for years. For every model and version, the deployment is basically the same.
My team did the deployment.
We use a private cloud, which is Wesco, and it definitely saves us a lot of space.
The pricing is good.
We did go through the whole vetting out process of scoring different vendors and NetApp won, when we went through a Greenfield environment.
Check out the AFF. It is super fast and reliable. We've been using it for a long time. It's the perfect system for us.
I would rate the solution as an eight out of 10 because there's always room for improvement. To make it a 10, it would have to have super submillisecond performance at a cheaper price. It is about latency in our environment. We want submillisecond for everything across the board. If something can guarantee that performance all the time without increasing costs, that would be cool.
We have a pretty amazing story about using AFF. When I went into this organization, we had a 59% uptime ratio, and at the time we were looking at how to improve efficiency and how to bring good technology initiatives together to make this digital transformation happen. When the Affordable Care Act came out, it started mandating that a lot of these health care organizations implement an electronic medical record system. Since health care has been behind the curve when it comes to technology, that 59% uptime ratio was a major problem: the organization wanted to implement an electronic medical record system throughout its facility, and we didn't have the technology in place.
One of my key initiatives at the time was to determine what we wanted to do as a whole organization. We wanted to focus on the digital transformation. We needed to determine if we could find some good business partners in place so we selected NetApp. We were trying to create a better, efficient process, with very strong security practices as well. We selected an All-Flash FAS solution because we were starting to implement virtual desktop infrastructure with VMware.
We wanted to roll out zero clients throughout the whole organization for the physicians, which allowed them to do single sign-on. A physician would be able to go to one specific office, tap his badge, and sign in to the specific system from there. That floating profile would come over with him, which created some great efficiencies. The security practices behind the ONTAP solution and the security that we were experiencing with NetApp were absolutely out of this world. I've been very impressed with it. One of the main reasons I started with NetApp was because they have a strong focus on health care initiatives. I was asked to sit on the neural network, which was a NetApp-facilitated health care advisory group that focused on the overall roadmap of NetApp. A good business partner like NetApp is different from a vendor who is going to come in, sell me a solution, and just call me a year later to say they want us to sign something. I'm not looking for people like that; I'm looking for business partners. What I like to say is, "My success is your success, and your success is ours." That's really a critical point that NetApp has demonstrated.
Everyone looks at health care, because health care has been an amazing field to be in. We're seeing the transformation of how we're becoming a digital company. Every organization is becoming a digital company, and we're starting to see the advancements of technology really come into place. Your new CEO is the patient, and that's the bottom line. That's my CEO. As an organization and as a technologist, I have to build a very strong patient-centric strategy that focuses the technology on the patient's needs, because at the end of the day, that patient could choose to go either to your organization or to another. We want to keep that loyalty and that patient in our organization, and we want to make sure that we are creating very strong tools that benefit a patient both inside and outside the organization. That's why I always say patient care is number one. AFF has supported our overall business initiatives.
Applications are a critical point. I think that All Flash FAS is an amazing thing when it comes to speed, efficiency in what it's doing. We've been very impressed with regards to it as well. We look at different initiatives, and we're starting to focus on different initiatives when it comes to data analytics and data mining. Having that specific availability, and making sure that we can focus on those initiatives and those strategies, we're very confident that the solutions that we are choosing with NetApp are going to give us the edge advantage of moving forward into the future.
When you look at artificial intelligence and machine learning, you look at predictive analytics. You have to have a very strong data silo in order to get clean data. With all the data that we're creating in this health care organization, we need to make sure that we can create well-structured data, which will allow us to mine that information and come out with some good value: better patient care, better ways to reduce readmission rates, and better ways to increase revenue. There are so many benefits to good, strong data mining that produces great analytic reports.
Right now we have a very strong cloud initiative. We are moving forward to the cloud because I think the future of health care, and of artificial intelligence improvements, is really moving a lot of these health care organizations over to the cloud, where there is the data mining capability to bring in all of these algorithms and all of this good collaboration. Collaboration is definitely key, and we should start focusing more on interoperability, meaning that we share information more successfully, because right now health care has no interoperability. Everyone talks about interoperability, but we don't have it. You go from one facility to another, and it's like you're getting completely different services. I want information to be shared from one facility to another, which I think is going to be a success. Today, you come to one facility, you get poked for lab results, you get exposed to radiation for radiology results, then you go over to another organization that says it can't retrieve your lab or radiology results, and now they have to re-poke you and re-expose you to radiation. Those are problems.
Another one of my main focuses is on cybersecurity initiatives and cybersecurity improvements. I think NetApp has really focused a lot on cybersecurity. I was really impressed on some of the cybersecurity sessions that they had because you figure health care's one of the most attacked sectors out there and we hear about these health care organizations being ransomed all of the time. If we do get ransomed, we need to think about how we are going to restore that information and making sure that we have the capabilities that are in place. NetApp has done a great job with it. They do see a huge priority when it comes to cyber security, so it's very important for them to continue to focus on those initiatives.
The user experience has been absolutely amazing. We're about 80% virtualized from a desktop standpoint, so we utilize VDI very heavily. In choosing the All-Flash FAS solution, we had to make sure there would be efficiency and speed, because we're giving all of these health care users a virtual desktop on top of All-Flash FAS, and their workflows need to keep moving efficiently. The health care industry is fast-paced; we're basically taking care of patients' lives. The technology that we bring has to be very efficient to provide the best patient care that we can, and NetApp All-Flash FAS has really proven that point.
Considering that NetApp has a health care view and that really strong health care initiative, they really need to consider what to do next to improve data sharing and to make sure that the information we share with one another is fully encrypted, meeting HIPAA and HITECH regulations as well.
Stability has been pretty amazing as well. I came to an organization that had 59% uptime throughout the whole enterprise. That's a major problem, because when you start measuring downtime, that is a loss of revenue for the organization. Since I've implemented a lot of these new strategies, we have done a complete 180. We've implemented strong technology initiatives that have produced better business efficiencies, and we went from a 59% uptime to a 99.9% uptime ratio, which is absolutely mind-blowing. If you look at the before and after pictures, it's going to blow minds, because we've been able to do some amazing things. We're a three-time Most Wired winner, which is given to the top health care organizations making the most progress in health information technology. It's been an honor to have been able to design the team that I have, the very strong core team, and the good initiatives that we've had together, because I always say that we must leave our egos at home. Collaboration is definitely the key to digital transformation, and we need to come together to make a difference in the future.
Scalability, the improvements that we see with AFF, and the reliability have been critical elements. The technology that NetApp has is especially important from a disaster recovery standpoint, because we're a health care organization and any type of outage is considered revenue loss, so we really want to avoid those situations.
Tech support has been absolutely amazing. I think on the technical aspects as well, my staff is able to get great support from the NetApp technical support resources that we have. What I love about NetApp is they have a health care division. At times, it's such an amazing thing because if we have a healthcare-related issue, there's no one better than having prior CIOs from health care organizations that NetApp has hired, and that are part of the healthcare team, to help out with any of those initiatives and support problems. Support has been absolutely phenomenal.
We could definitely spin something up pretty quickly. It takes about ten minutes which is pretty quick. We have a very good team that does that as well.
The total cost of ownership has increased a little. When I look at building very strong, good strategies that get presented to the board of directors and the additional executive teams, I look at two things: I look at ROI and I look at total cost of ownership. At times, my overall goal is that I want to get out of the data center business. I know that TCO really does increase because you have that on-prem solution, but I think moving forward into the cloud-based initiatives that we have, we're going to definitely start seeing a decrease within that TCO because now we don't have all of this inventory to take care of. We're being a lot more efficient and a lot more agile as well too.
I am part of the NetApp A-Team and I've been a huge advocate for NetApp. I would say that nothing is perfect, but NetApp is leading the way when it comes to digital transformation and digital efficiencies. Their focus on health care has been out of this world. I would give the product a nine, moving toward an almost perfect ten.
My primary use case for All Flash FAS that we have is pretty much everything. It is the go-to storage device that we use for block fiber channel devices on our heavy SAP workloads as well as user base files and file shares for databases.
AFF improves how our organization functions because of its speed. Reduction in batch times means that we're able to get better information out of SAP and into BW faster. Those kinds of things are a bit hard to put my finger on. Generally, when we start shrinking the times we need to do things, and we're doing them on a regular basis, it has a flow on impact that the rest of the business can enjoy. We also have more capacity to call on for things like stock take.
AFF is supporting new business because we've got the capacity to do more. In the past, with a spinning disc and our older FAS units, we had plenty of disc capacity but not enough CPU horsepower and the controllers to drive it and it was beginning to really hurt. With the All Flash FAS, we could see that there are oodles of power, not only from disc utilization figures on the actual storage backend but also from the CPU consumption of the storage controllers. When somebody says "we want to do this" it's not a problem. The job gets done and we don't have to do a thing. It's all good.
All Flash FAS has improved performance for our enterprise applications, data analytics, and VMs which are enterprise applications. It powers the VM fleet as well. It does provide some of our BW capabilities but that's more of an SAP HANA thing now. Everything runs off it, all of our critical databases also consume storage off of the All Flash FAS for VMs.
For us, TCO has definitely decreased; we pay less in data center fees. With FabricPool, we also have the ability to save on our storage costs.
The most valuable feature is FabricPool. We are taking our cold data and pumping it straight into an S3 bucket. Efficiency is another: we're getting upwards of two and a half times data efficiency through compaction, compression, and deduplication. Then there's its size. When we refreshed from two or three racks of spinning disk down to 5U of rack space, it not only saved us a whole heap of costs in our data center environment, but it's also nice to be green. The power savings alone equated to about 50 tons of CO2 a year that we no longer emit. It's a big game changer.
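For readers unfamiliar with FabricPool, the tiering this reviewer describes is configured per aggregate in the ONTAP CLI. A minimal sketch of the commands involved, with hypothetical object-store, aggregate, and volume names (verify the exact syntax against your ONTAP 9 release):

```shell
# Define the external object store (bucket name and access key are placeholders)
storage aggregate object-store config create -object-store-name cold-tier \
    -provider-type AWS_S3 -server s3.amazonaws.com \
    -container-name my-cold-data-bucket -access-key <access-key>

# Attach the cloud tier to an all-flash aggregate
# (note: as the reviewer points out below, this cannot be detached later)
storage aggregate object-store attach -aggregate aggr1 -object-store-name cold-tier

# Tier cold snapshot blocks from a volume out to the bucket
volume modify -vserver svm1 -volume vol1 -tiering-policy snapshot-only
```

Other tiering policies (such as `auto`) move cold blocks from the active file system as well; `snapshot-only` is the more conservative starting point.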
The user experience, from my point of view as the person who drives it most of the time, is a really good one. The toolsets are really easy to use, and with the service offered we're able to do non-disruptive upgrades. It just works and keeps going. It's hard to explain the good things when so few bad things actually occur within the environment. From a user's point of view, the file shares work, everyone's happy, and I'm happy because it's usually not storage that's causing the problem.
I would like for them to develop the ability to detach FabricPool. Once we've added it to an aggregate, it's there for life, and it would be nice to be able to disconnect it if we ever had to.
One to three years.
Stability with AFF has been really great. We blew an SSD drive, which we thought might never actually happen, and it just kept on going. We've not had any issues with it, even though we went to a fairly recent release of ONTAP as well; it just works.
Scalability is a really cool part of the product in terms of growing, although we don't see that we'll actually need to do much of that. We'll take more advantage of FabricPool and push that data out to a lower tier of storage at AWS; our initial projections suggest that we've got a lot of very cold data that we're storing today.
We've had a couple of calls open with AFF tech support and it's always been brilliant. I really like the chat feature, because one of the things that annoys me is the conference calls that usually come when you have to contact a hardware vendor. You get stuck on a WebEx or a conference call for hours on end. It's just easier to chat with the tech at NetApp in real time; if they aren't able to help you, they'll pass you on to the next person, and you stay in the chat, which means I can continue working while dealing with a problem.
We knew it was time to switch to this solution because it was costing us a fortune in maintenance, especially once our hardware got past the three-to-five-year mark. With spinning disk, we can't neglect that, because drives fail all the time. The previous iteration of storage we had was a NetApp FAS, so we've gone from NetApp to NetApp.
We implemented it in-house. It was dead easy. All you have to do is put it in the rack, plug in the network and fiber cables, give it a name, and away you go. Very little actually needs to happen to make it all work. I think we managed to get one of them up in two or three hours.
We also considered Dell EMC and Pure Storage. The biggest reason we picked NetApp was the ease of getting the data onto the next iteration, but also that the other vendors don't have a product that supports everything we needed, which is both file services and block services. It's a one-stop shop, and I didn't really want to have to manage another box and a storage device at the same time.
I would rate AFF a ten out of ten. If I were in a position to tell someone else about All Flash FAS and why they should get it, I would simply say: just do it. Everybody in the storage community is pressured to do more with less, and this product basically enables that to happen.
We have deployed NetApp AFF with four nodes; two of these are in our primary data center, and the remaining two are in the second data center. We are using Cluster Mode configurations.
Our organization has improved because this solution provides a highly available storage system with DR configurations, deployed across two data centers.
The features that I have found most valuable are SnapMirror and SnapVault; these provide DR and backup for data redundancy. The high-availability, cluster-mode setup is also very useful.
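As a sketch of how the DR and backup relationships described here are typically defined in the ONTAP CLI (the SVM, volume, and policy names are illustrative, not this reviewer's; verify against your ONTAP release):

```shell
# SnapMirror DR relationship to the second data center
snapmirror create -source-path svm1:vol_data -destination-path svm_dr:vol_data_dr \
    -type XDP -policy MirrorAllSnapshots
snapmirror initialize -destination-path svm_dr:vol_data_dr

# SnapVault-style backup keeps a longer snapshot history via a vault policy
snapmirror create -source-path svm1:vol_data -destination-path svm_bk:vol_data_bk \
    -type XDP -policy XDPDefault
snapmirror initialize -destination-path svm_bk:vol_data_bk
```

The difference between "mirror" and "vault" behavior is carried entirely by the policy: a mirror replicates the source's snapshots for failover, while a vault retains its own, longer set for backup.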
I would like to see an improvement in the high availability of NFS and CIFS shares during upgrades and patching; this would help to avoid downtime.
VMware datastores over NFS for DL585 G7 hosts on a 10G switch.
NetApp FAS was unable to keep up with the I/O. A200 has performed without a problem.
Having separate storage virtual machines with completely different setups for NFS and Windows solves problems the FAS has when the domain controllers are unreachable.
The OnCommand System Manager web interface is good, but it is easy to make bad configurations, and it takes a lot of jumping around to work a single issue.
Shared storage for virtualized environments.
Reducing data fingerprint (deduplication) and speeding up access to data.
Synchronous replication and active-active environments.
Mixed sharing between Windows and Linux using CIFS and NFS is the best solution you can implement.
The best part is that you can use the same volume for different flavors of OS. That feature solves cases where an application has limitations and does not support a given OS, for example old apps that cannot be migrated.
Communication with the customer is good; they are available to demonstrate and explore new technologies.
We use it for multi-tenant VMware and as a SnapMirror destination, hosting file systems for multiple customers; there is no problem with multiple AD domains.
Reliability, flexibility, and multi-tenancy. We host 20 clients' virtual data centers on our A200.
I scaled out our previous two-node cDOT cluster on the fly by adding cluster switches and then the two-node A200. After that, data migration between the FAS2554 and the A200 was done non-disruptively, during business hours.
The full bundle is too expensive, but it is needed to implement the native replication (i.e., SnapMirror) and backup (i.e., SnapVault) features.
Our system is very stable and reliable. Of course, it needs to be maintained and monitored, but even in the case of a network switch failure, the A200 keeps serving data. The initial setup is very important, so you have to focus on the final architecture.
Tech support is very responsive and effective at finding solutions to issues; most issues can be resolved by reading KBs.
We previously used a FAS2554 and needed to scale out for space and performance.
The initial setup must be done via the CLI; storage space provisioning is done through the GUI. There is good interaction with VMware via VSC.
I'm on the vendor team and am the storage administrator.
I need to ask my CEO for it.
The full bundle is too expensive, i.e., the full licenses to implement native replication and backups.
Starting from a FAS2554, it was the best solution.
Good deduplication and compression ratios.
It has a high quality of integration that is way beyond the competition.
Its efficiency and scalability are the most valuable features.
The scaling needs improvement. NetApp is limited for scaling options.
With other options, you need to buy a couple of different products to achieve the same outcome.
In comparison to other options, NetApp is the most complete. It is the single software choice that can give you every option that you need in the enterprise world.
Our primary use case for AFF is for all of the filers. We're also doing a lot of workloads for virtualization. All of our virtualization workloads are currently running on All Flash FAS.
We run almost all of our virtualization workloads on All Flash. Before we migrated to All Flash, we used a different vendor; some workloads were on NAS and some on block storage. Now, logging ETLs are maybe ten times faster than they used to be. We are getting amazing speeds off of the FAS that we never had before.
We also use a lot of the AFF for end-user storage. All the shared file systems, all the file systems that a particular user has, such as a G, E, or F drive, or drives shared between various customers and departments, are all running off of the All Flash file system. The rendering of files is so much faster than it used to be. On top of that, we used to do block: we would take block storage and use NFS or Samba to share those file systems with users. Now, because they are coming straight off of NFS 3 and 4, the speed is marvelous. Users are almost five to seven times faster rendering, saving, and retrieving all their files. It's amazing.
I don't know how much bearing IT support has on the All Flash file system. The main things it has provided that are better now are speed and stability. If you count those as capabilities, then IT has gained the additional capability of faster rendering, letting people get their work done a little quicker.
The biggest workload we have, maybe 95 to 97% of all virtual workloads, is now running on All Flash. It has dramatically changed the way all of our VMs work. Not only are they faster, but in addition, we do snaps off of our flash storage. So not only are the workloads faster, but if a virtual machine goes down, the restore is 20 times faster than it ever used to be. We don't have to go to a spinning disk; we can restore straight from flash, and the restore takes almost seconds to come back.
Total cost of ownership has two different values to it. One is strictly the capital cost. Number two is the operational cost. You've got to look at the CapEx and how much it costs; that is currently a little higher than it will be in two or three years. OpEx is where things are getting really nice. The maintenance is less. Disk failures are really low. Data issues or corruption are really low. The CapEx is currently high, and OpEx is getting down to almost insignificant numbers.
The most valuable features of AFF are the speed, durability, and backup times. The workloads that we are running are much faster than they used to be. We're getting a lot of different things out of All Flash.
We have not connected our AFF to the public cloud yet. We are not sure if we are going to, because of PHI. For any healthcare organization, it's extremely important to safeguard the security of your patients. We are looking very deeply into what we will move to public cloud and what we will keep private. Also, because data analytics is coming our way, we want to make sure that the data we are going to run analytics on is not in the public cloud; because of ingress and egress charges, we don't want to pay a lot of money to pull it back. We are not there yet, but maybe in the next year and a half we will think about going public.
Two things have happened with stability. Number one, the platform that renders the file system is so much better; ONTAP and NFS are much superior. The stability of the file system is much better. Behind the scenes, the cache is better, the CPUs are better, and of course there are no spinning disks; it's all flash. That is way more stable than what it used to be. Coupled together, the stability is maybe six to seven hundred times better than it was ten years ago. That's just the way it works now.
Scalability is almost a catch-22. It's excellent because you can scale quickly; it's ONTAP, so you can keep adding to clusters without a problem: the nodes, the controllers, and of course the disks or the flash itself. The bad part about that scalability is the expense. It is currently extremely expensive to scale so fast on flash. What a lot of people do is make part of it all-flash, but as the data gets bigger, the archival, older, colder data migrates onto slower, less expensive disk. That's what we are doing as well.
So far NetApp is amazing. It depends on what type of team you have and what type of sales team you are working with. Our sales team is phenomenal. Our support goes through them, and they know all the right people to call, so we get great support. That is not true across the board; there's great support, and there's some mediocre support. For us it's phenomenal.
The initial setup for AFF was very quick and almost painless. We had professional services come in, they put it together, and before we knew it, we were carving out all our disks and LUNs and migrating data. The data migration was also really fast for us. We used to have older infrastructure; a little less than a year ago, we got brand new infrastructure that's all flash, and we migrated to it. It was no pain whatsoever.
I don't think anybody is doing a NAS or filer solution better than NetApp. If you only talk about NetApp's All Flash filer, I would give it a nine, if not a ten, out of ten. It's one of the best of breed currently in the market.
The primary use case that we have for NetApp's All Flash FAS is on-premise storage that we use for presenting LUNs, NFS, and CIFS shares to servers for analytics and ESX data storage.
NetApp AFF has improved our organization through the use of clusters. We previously migrated from Dell EMC, and we had a lot of difficulty moving data around. Now, if we need to move data to slower storage, we can move it with just a vol move within the cluster. Even moving data between clusters is extremely simple using SnapMirror. The mobility options for data in All Flash FAS have been awesome.
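The intra-cluster mobility mentioned here is a single non-disruptive operation in ONTAP; moving between clusters pairs SnapMirror with a cutover. A minimal sketch with hypothetical SVM, volume, and aggregate names:

```shell
# Move a volume to a slower-tier aggregate within the same cluster,
# non-disruptively to clients
volume move start -vserver svm1 -volume vol_app -destination-aggregate aggr_capacity_01

# Monitor progress until the automatic cutover completes
volume move show -vserver svm1 -volume vol_app
```

Because the move is transparent to NFS/CIFS/SAN clients, this is also the mechanism used later in this document for evacuating old nodes during scale-out.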
AFF has given us the ability to explore different technology initiatives because of its flexibility; it fits in like a puzzle piece with different products. For example, with other solutions we've looked at, a lot of times those vendors have integration directly into NetApp, which we haven't found with other storage providers, so it's extremely helpful to have that tie-in.
This solution has also helped us to improve performance. We have hybrid arrays as well, so we can keep some things on slower storage. For the times that we need extremely fast storage, we can put it on AFF, and we can use vol moves if we need different tiers, putting things where they need to be. It has really helped us to nail down performance problems, because when we need to, we can put workloads in places that fix them by just having the extreme performance.
Total cost of ownership has definitely dropped because, with deduplication, compression, and compaction always on, we're able to fit a whole lot more in a smaller amount of space and still provide more performance than we had before. Our total cost per gigabyte ends up being less by going to All Flash.
Some of the most valuable features of All Flash are the speed, integration with vCenter, being able to clone VMs instantly, and the ability to move data around quickly.
The user experience with AFF is much like that of NetApp's other products: fantastic. It's extremely familiar and very intuitive. We can find all of the features that we're looking for through the GUI. The CLI is tab-complete, so if we aren't exactly sure what the syntax is for a command, we can just tab-complete it, which makes it a lot easier than having to look up every single thing we're trying to do and the way to do it.
Our use case for AFF with the public cloud is that it allows us burst ability so that when we need additional capacity and speed instantly, especially if we need more and we haven't bought new nodes yet, it allows us to burst into the cloud quickly.
The setup and provisioning of enterprise apps depends a lot on automation, and the integration there has been really fantastic, for example being able to use tools like WFA for provisioning. The extra software that NetApp provides has sped things along.
NetApp's always got their eye on new features and new use cases for things before we even get to them. It's been pretty amazing that they'll come out with new features, and we haven't even been thinking that this is a way that we might be able to use this in the future. I've been really excited about some of their other products, like SnapCenter, which is fantastic. We are also interested in the single pane of glass to be able to do snapshots and backups for anything in our environment, as long as it involves NetApp.
As for AFF itself, I don't have many suggestions, but I think that adding REST API support to AFF would be super handy. It's something that we've been waiting on for a while, and it would be fantastic.
Stability's fantastic. In the past, I've seen problems with ONTAP where we'd hit bugs. Since NetApp changed their development schedule to every six months, with a lot more scrutiny and checking of their code before they release it, we've hit far fewer bugs. We've also had extremely stable systems with solid performance.
The scalability's fantastic. Many times we have had to add capacity, which includes both compute power and storage. We've just added HA pairs to the cluster, and it's extremely easy to migrate over to them: you can just do vol moves to get onto the new nodes and then evict the old nodes from the cluster. The fact that you can scale up to 24 nodes gives you a great deal of scalability.
Their tech support is fantastic. NetApp is amazing at getting you through difficult problems. When you call into global support, somebody answers the phone quickly and is extremely helpful. We have other NetApp resources, like our sales SEs, who help us out. There's always somebody there to point you in the right direction and help you get solutions to the problems you have.
There has been an amazing improvement in ROI due to rack space and power usage. Going to AFFs like the A700s, which are so small and so efficient, they take up way less space per terabyte, which is a great improvement.
I give AFF a ten out of ten because there are amazing features on it. It's extremely fast, it's extremely usable, and the support's fantastic.
To someone considering AFF as a possibility for storage, I would say: look at all the features, positives and negatives, of all the other storage vendors. In the past year, I've done an evaluation of a lot of different storage vendors and their features, and on the cost-effectiveness of their products, NetApp came out far ahead of all the others. So don't just take somebody from NetApp telling you all the great things about it; if you research all of the other companies and their offerings, I have no doubt you'll decide that NetApp is the top provider, from the speed of their product, to their flexibility to move into the cloud, to their awesome support.
Our primary usage for All Flash is for the Oracle Database.
All Flash is improving our organization. We used to have the databases on different tiers, and now All Flash is reducing report times. All of the reports and processing take less time, so all the information is ready in the morning for the executives to make decisions.
This solution has also sparked a new initiative for our company to move more databases and reports onto All Flash because of the speed of getting the information.
For enterprise apps, we mostly use Oracle. All of the Oracle applications have improved a lot since we began using All Flash. The processing and ETL, for instance, used to take 25 hours; now it takes three. That improves many aspects of the applications.
TCO has decreased. After we acquired the AFF8080, we got a couple of A700s, and they are cheaper than the 8080.
The main use we have for the all-flash is Oracle. For us to provision a new VM with new databases takes exactly 35 minutes.
The most valuable feature for us is the speed of the read of the information. We can get the information as fast as possible.
The user experience we are getting from All Flash is excellent. The performance is great, and the administration is exactly the same as all the other NetApp storage, which is great. It is very good; we are very pleased.
I would like to see the ability to manage storage from within more applications. If we could have more applications, or more of an interface in more applications, that would be great.
One to three years.
The stability is even better with version 9, with all the Oracle Databases including OVM, which is Oracle's virtualization platform.
Scalability of the All Flash is the same as with the other NetApp storage. We can increase the amount of storage as we need it; as we buy disks, we just add them with no downtime required. We just go ahead and increase the size, and that is it.
NetApp tech support is very good. Their support has always been stable, and the people are good, whether it is a failure, a feature that needs to be updated, or features that can help improve performance. NetApp support is one of the best that I deal with.
I would rate this solution a ten for the huge improvement in performance moving from hybrid storage to All Flash with ONTAP 9. From 8.2 to 8.3 to 9, the performance has almost doubled. Ten is the best answer I can give.
Our primary use case for this solution is speed. We're using the AFF as a cache disk. We have terabytes of data that we have to move quickly off a system, and the only way we can do that is with the 40-gig backbone that the all-flash array provides and the speed of the disks.
Besides the speed, one of the most valuable features the AFF gives me is its robust hardware. It's simple and deploys very easily. It's already built from the factory to take advantage of the all-flash array.
I would describe the user experience of the solution as very simple. There's an easy GUI to use, and when you need to get very detailed, you have a robust command line with which you can do anything you want to enhance performance for your solutions. What we're really using the AFF for is solely speed; we need the power of the backbone and the speed of the disks because we have to move so much data.
Setting up and provisioning enterprise applications takes minutes. It's just not difficult. We only have to use the GUI, create the spaces, and go. I've set up entire NetApp systems in a morning.
The solution does what I need it to do, so there is little I need improved, but I would like to see a cleaner GUI and better help pages. The solution itself isn't the problem; a lot of times the issues come after it's installed. I have more issues with the support after the setup. I want it to be even simpler than it already is, and I would love to see the GUI become simpler.
So far the system has been excellent, no complaints. NetApp has always been built as a massively fault-tolerant system. If we have a problem, it just doesn't show it.
Scalability is excellent. If we need more space it's a no downtime solution. It's harder to get the funding than it is to get the solution itself.
I approach tech support with difficulty because, having installed NetApp for many years, I know what to expect when I call. When I don't get the support tech I'm expecting, trying to get to the right one can be very frustrating; I have to push my way to the right person. NetApp has the right people, it's just a matter of getting to them.
I installed NetApp for many, many years. The initial setup of NetApp is very simple. Even as a longtime installer, there's a giant poster that I still use to this day, because it tells me exactly where my cables are supposed to go. It gets me off the ground quickly, and then it's just a matter of following the GUI and knowing what you're doing.
I would rate the product at least an eight. I should give it a nine, if not a ten, but there's always room for improvement.
I would tell someone considering this solution that it's expensive, but it's worth the money. You're going to get the speed and the backbones that you need to accomplish what you do. If you need that kind of speed and that kind of performance, you can get it out of the AFF.
AFF is the primary storage for our data centers. We use it for our multi-tenancy data center. We like the crypto-erase function available on the SSDs, and we needed the high performance and IOPS that you can get from SSDs.
This solution makes everything a lot faster. The time to move data around, boot, and migrate VMs is much faster. The speed has also helped improve performance for our enterprise applications, data analytics, and VMs.
We like the high security, self-encrypting drives, and the NVMe.
I need faster Fibre Channel over Ethernet. It tops out at 10Gb today, and I would like that to go to 40 or 100.
I find it very stable. Everything's been up and running well. We actually had an outage in our testbed data center and everything shut off hard and came back up without any problems.
The tech support is good, although I don't use them that much. The product is good.
We have always been a NetApp customer, it's a very good product. We knew that we wanted more performance. It wasn't a hard decision.
The setup was pretty complex. There were a lot of compliance and security requirements, but it went pretty well.
It took us two to three days to set up and provision enterprise applications using AFF because we're a little different. We do short duration uses which means that we build everything from scratch, tear it down, and build it again.
Our total cost of ownership has increased. SSDs are expensive.
In the early days, we were considering Dell EMC but we decided to go with NetApp because its adoption across the DoD is widely understood.
The user experience is the same as it ever was, only faster.
I would rate this solution as a nine. It's not a ten because we would like to see the faster speeds on the Fibre Channel over Ethernet. AFF is definitely a good product.
We use it for NFS and CIFS unstructured data. We have about a couple of petabytes of all-flash.
Some of our volumes had response times of 30 to 40 milliseconds. When we moved to all-flash, our response times were reduced to microseconds; that was a tremendous improvement. In terms of dedupe and compression, it is squeezing down the physical size: we are now seeing an 80 percent reduction, which is very positive.
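The 80 percent figure quoted here corresponds to a 5:1 efficiency ratio. A small sketch of the arithmetic (the capacities below are illustrative examples, not the reviewer's actual numbers):

```python
def efficiency_ratio(logical_tb: float, physical_tb: float) -> float:
    """Logical data stored per unit of physical flash (e.g. 5.0 means 5:1)."""
    return logical_tb / physical_tb

def reduction_pct(logical_tb: float, physical_tb: float) -> float:
    """Percentage reduction of physical footprint relative to logical size."""
    return (1.0 - physical_tb / logical_tb) * 100.0

# 100 TB of logical data landing in 20 TB of flash is an 80% reduction,
# i.e. a 5:1 combined dedupe/compression/compaction ratio.
print(efficiency_ratio(100, 20))  # 5.0
print(reduction_pct(100, 20))     # 80.0
```

This is why vendors quote the same savings two ways: an "80% reduction" and a "5:1 ratio" are the same measurement.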
The solution has positively affected IT's ability to support new business initiatives.
It has improved performance for our enterprise applications, data analytics, and VMs. These improvements are a result of all-flash, throughput, reliability, compression, etc.
One of the features that I am looking for, which is already in the works, is being able to take my cold data and automatically move it to the cloud. I believe this is coming out in version 9.4.
We have been running it for two to three years and it hasn't gone down yet. It can't get any more reliable than that.
Thanks to dedupe, our physical footprint is quite small. All the scaling we have done so far has been within our organization; we haven't expanded it physically yet.
Since the product hasn't gone down in three years, there hasn't been a need to contact technical support.
The initial setup was straightforward. Nothing to it. The professional services from NetApp came in to help us out, and they knew their stuff.
We used NetApp for the deployment and our own resources. The experience was very positive.
The vendors on our shortlist were Oracle, Dell EMC, and Hitachi.
We chose NetApp because we were already using it, which made things simple, and because of its pricing. Also, some of NetApp's features are dominant in the market versus its competitors.
With all-flash, you can never go wrong. I am in the process of converting everything to all-flash.
We are not currently connected to the public clouds. We are looking to connect to them in 2019.
It takes us days to set up and provision enterprise applications using this solution.
We chose this solution because vendors are choosing all-flash over hybrid.
We use data storage for our big environment. It creates an environment where students and teachers can work together.
We did the installation two months ago. Now, we are reviewing its effect on behavior over time, which has been incredible. We have less latency within all applications.
Many reports access the applications, and we now receive them very quickly. We used to wait a long time for them; now, you only need to wait a moment.
It takes us just minutes to set up and provision an enterprise application using AFF.
We would like to have more behavioral reporting. We would also like to have more optimization and credit check reporting.
In addition, I am waiting for the version that has SnapMirror with FlexGroup.
Stability is 100 percent. I don't have any downtime.
I am very impressed with the scalability.
The technical support is invaluable. If you need answers to a problem, they provide good answers. I am very happy with it.
If you compare it with our last solution, the IBM FS840, AFF is incredible.
The setup was not complex, but we have good project management skills.
We used an integrator who was very professional and helped a lot. They finished the implementation on time.
We have seen ROI.
Our TCO has increased by 15 to 18 percent.
I am not using VMs today, but maybe in the future I will.
We have not yet connected to public clouds.
NetApp is introducing All Flash FAS with an all-flash array. Our customers like performance; they don't want to deal with latency. Using an all-flash array, our customers see a real performance impact.
I can definitely say it has helped our organization. We have an SQL application server on our NetApp storage whose records contain the number of transactions; since my company is a financial company, we always look into transactions. The NetApp all-flash array is faster than what we were used to. The read and write, and the random IOPS, are all up to speed. I don't see much of a difference when I run 100k random IOPS with a 70% read and 30% write mix, or vice versa, 70% write and 30% read. That's a big improvement that we've seen since we started using this solution. It is a valuable asset.
They have come up with good back-end architecture. The features are the same as NetApp ONTAP. The only change is all-flash. There are no 7k, 10k, or 15k drives, only flash drives.
My favorite part is the all-flash solid-state drives. All of my applications are running on an all-flash array. Before, we used to get too many severity tickets on performance, but as soon as we migrated everything to the all-flash array, our critical applications were at top performance.
We are very happy with the user experience from the all-flash array. The latency depends on the application: critical applications used to see four milliseconds of latency with the non-all-flash array; with the all-flash array, they don't even see a millisecond. They might see microseconds, but that is not impactful.
To be more competitive in the industry, they could develop deduplication, compression, and smarter features in the same array, beyond just all-flash.
It's better with all-flash.
Scalability is good. Compared to different vendors, the scalability is very flexible, in the sense that you can scale up to whatever you want: expand your storage, expand your clusters, expand your nodes. NetApp makes it possible. Some vendors have come up with models that won't expand their nodes, which creates the need to buy different clusters. For example, let's say I have four nodes. My four nodes have the capability of handling one million IOPS, but if my storage backend isn't complete, I can't expand that, so the nodes are of no use. NetApp is not only thinking from the customer's point of view; they are also thinking about every other prospective use, and they include a lot in the all-flash drives.
It's very good. I have never personally seen any issues with the technical support.
Our previous solution had performance issues. I see a lot of value in faster policies. I don't like when critical applications are running on drives with different speeds. When customers need to track all of their data and it's sitting on a 7k drive, the drive is working hard. The response is slow. With all-flash, it's better.
The initial setup is straightforward. It's not complex.
We have connected to AFF public clouds but I'm not really dealing with it.
It took us less than two minutes to set up and provision enterprise applications using AFF.
We used NetApp, but we could've deployed it ourselves. NetApp Support knows the best practices. A good thing about NetApp is that even customers can easily deploy the storage. With other vendors, you usually have to entirely rely on them for deployment and all facets of the solution.
We definitely see ROI. We save a lot more money with this solution.
Using NetApp, our total cost of ownership decreased by 17%.
Other vendors aren't as straightforward as NetApp when it comes to the deploying, installing, and configuring. NetApp works more efficiently. By saving time, you're saving money.
AFF has affected IT's ability to support new business initiatives. Nowadays, customers in financial companies are looking for more storage. From a business point of view, you need a faster response in order to compete with other financial companies. From the customer's point of view, they are looking for a faster response from their financial company. Using all-flash array, they can retrieve their old files within seconds. That's an important edge.
AFF helps us improve performance for our enterprise applications and data analytics on VMs. It helps us with records. We need to be able to handle more performance-intensive workloads. Customers have complained that latency exceeds three milliseconds for some applications, which delays their performance. When I used the 7.2k drives, applications could only support 300 accounts per second; more than that and they would crash. The NetApp all-flash array gives us one million IOPS.
I would rate this product a ten because of flash, and because AFF is better for the customer in provisioning, deployment, and performance.
Whenever we face any issues with performance, particularly with our most demanding storage sites, we are recommended to use an all-flash service, because we rely on our primary solution at all times. If it seems like there are issues, we bring in different vendors as a buffer. We have adopted an all-flash primary solution for this use case.
From the automation point of view, we want zero downtime for our clients with good scalability and good performance. Client satisfaction is the most important thing to us.
We haven't received any negative feedback yet. If we are not receiving any complaints from the client side, that tells us the client is okay with the product.
This solution helps us improve performance for our enterprise applications, data analytics, and VMs.
There are some bugs with the solution which need to be fixed.
The client should not encounter any type of stability issue, whether it be latency or features being affected. We should not find any modules affected by performance issues. There should be continuously good performance for as long as the product runs.
The scalability is good.
For vendor coordination, the technical support has been good. They do good work and analysis on things that I need. They specifically provide good answers to my questions.
Our previous solution had issues with capacity, monitoring, and performance. These are the core areas where the customer was feeling the pain. So, we get them to a different place with a proper solution and fix for the issues. I feel like AFF has the features the customer needs.
Other vendors with similar products envy the features that come with this NetApp product.
Our shortlist was Dell EMC and HPE. These are the vendors with whom I have worked. I feel all the vendors are very good, along with NetApp. However, NetApp has file-based and block-based features, which gives it additional value.
We have connected this solution to public clouds. We have different clients using the public cloud solution. Our public cloud has clients signed up for SAP HANA. There are many applications which are running on front-end databases, like Oracle, MySQL, etc.
We use it for block storage.
It takes no time at all for our production instance to be snapped over to development and QA servers.
Because so many other features and products interoperate with NetApp, the IT team is able to expand our horizons and broaden our scope for future projects.
It takes a good administrator, or someone with knowledge of the product, to manage it. That was one of the downfalls that we had with AFF. We have a large offshore team whom we have to spend a lot of time training to get up to speed. However, once they're up to speed, they know the product pretty well, and it seems to be okay.
The hardware is a little difficult to configure and operate. However, with the configuration and operation, you get different nerd knobs that you can use to design and tune the environment.
The stability is great. I like the capability and the upgrade functionality of the whole clustered environment. We can go through and do an upgrade without worrying about any issues with the process.
It takes a node offline, and we don't even receive an alert for that. We click a button, and it's done, unlike other storage systems out there.
One of the scalability problems that we've had is the amount of storage per node, as it is 600 terabytes. This still seems a little low. However, there is a compute issue with large capacity, so it's just smarter to add additional nodes into a cluster. So, the scalability is there.
Technical support is a little lackluster. Some of the issues that we've had were opening up tickets. They seem to be routed in the wrong direction or it takes one or two days to get a call back for simple tasks. However, if we want immediate assistance, we have to open up a Severity 1 case, and sometimes it's not a Severity 1. But if we need a response back within four hours, we'll open it as a Severity 1, then once they contact us, we can drop the severity of the ticket.
Calling technical support with NetApp, you talk to ten unknowledgeable people to get one half-decent person. It becomes frustrating, especially if you have an immediate need during an enterprise outage.
We were running into a lot of storage roadblocks that were performance based. Also, the IBM product that we were using was at the end of life for 90 percent of our enterprise.
I spent 15 years with IBM. Anytime I go into a data center, and I see Big Blue, it is the first thing that I replace.
The initial setup was conceptually straightforward, but complex in practice. With the new clustered environment, you have to have a virtual server instance to run anything through the cluster, so you have to create a vserver (SVM) and a data logical interface (LIF) to use block, then you create a separate LIF if you want it to use files. The virtual instances have to be in place before you can actually use the product.
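As a rough, hypothetical sketch of that sequence (the SVM, LIF, aggregate, node, and address names below are placeholders, not details from this deployment), the ONTAP CLI steps look something like:

```shell
# Create the SVM (vserver) that fronts the cluster
vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 \
    -rootvolume-security-style unix

# Create a data LIF for block (iSCSI) access
network interface create -vserver svm1 -lif svm1_iscsi_lif1 -role data \
    -data-protocol iscsi -home-node node01 -home-port e0c \
    -address 192.0.2.10 -netmask 255.255.255.0

# Create a separate data LIF for file (NFS/CIFS) access
network interface create -vserver svm1 -lif svm1_nas_lif1 -role data \
    -data-protocol nfs,cifs -home-node node01 -home-port e0d \
    -address 192.0.2.11 -netmask 255.255.255.0
```

Only once the SVM and its LIFs exist can volumes and LUNs be served through the cluster, which matches the order the reviewer describes.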
I did the deployment, integration, and migration. We've done two petabytes in less than six months, and we're almost done.
The experience was great when it comes to our virtual environment. It was a very simple process. We use vMotion and it moves everything across. It is a little more painful when it comes to standalone systems and Oracle Databases, but the integrated migration product (Foreign LUN migration) that they have, once configured properly, works well.
Our TCO decreased significantly because we were paying maintenance on nine different arrays throughout the country. We've condensed those down to three arrays, and our maintenance fees from the IBM product dropped by over half a million dollars a year, saving us $500,000 USD annually.
We just migrated two petabytes of data storage from IBM over to NetApp All Flash. Some of the performance improvement that we've seen is 100 times the I/O, with microsecond latency.
The two vendors that made it through the evaluation process were Pure Storage and NetApp. We had Pure Storage and NetApp proofs of concept. Both of them performed admirably. Pure Storage won on performance, but on price per terabyte, NetApp was considerably cheaper.
NetApp being the behemoth company that it is, if you're looking for a solution provider that is end-to-end when it comes to file, block, scale, and cloud, NetApp is probably the leader of the market.
Depending upon the application, provisioning enterprise applications can take from a day to a week. A lot of times, if it's just a simple application that we need to install, it takes an afternoon. However, incorporating it, twisting the nerd knobs, and making sure that everything is operating as efficiently as possible takes a week of deployment: making sure it's on the right tiered disk, that it has the right connectivity, and that it is on the right network. Sometimes, on our old, antiquated network environment, it takes a little bit longer.
We might connect to public cloud in the future, but we are not connected at the moment.
The primary use case is availability, performance, bandwidth, and throughput with respect to our applications.
We are currently using an on-premise solution.
The user experience is fantastic. I'm looking forward to the AFF 800 storage box, which is all-flash with NVMe technologies. This will certainly give a boost to our applications, and make for a better user experience.
The most valuable feature is the response time that we are receiving from the AFF storage box. We are looking for performance and fast delivery of responses from the host, which we are happy with.
We are looking forward to the all-flash NVMe which is coming out.
Going forward, I would like improvement in the response latencies, capacity size, cache, and controller size. It also needs more fine-tuning in regards to all-flash and AI/ML workloads.
Even when a complete workload fills the AFF storage box, it still gives us sustained stability.
One of the key features of the AFF storage box is its horizontal scalability.
Our new business initiatives demand more IOPS and performance. Our applications are scaling, which demands more performance in a very short span of time. This solution supports those technology-driven demands.
The technical support is fantastic. No one else is like their team. We're happy with them.
Our previous solutions were Hitachi, Siemens, and NetApp. We switched to AFF because it had all-flash, better performance, and better response times. It also scales better.
We used to run applications on mechanical disks. The introduction of SSDs and AFF All Flash has given us substantial improvements in our applications' performance.
The initial setup was easy for us. The consultant was always there to support us. They have always been helpful in understanding the technical points, how it will help us going forward in terms of implementation, future scalability, and possible upgrade of storage components.
We used a NetApp consultant for the deployment, who we have also used for the sizing. Our experience with them was very good.
It does have good ROI.
We are able to set up and provision enterprise applications using AFF quickly. We have seen tremendous performance, stability and growth in it.
NetApp met our requirements.
NetApp is the first company to introduce end-to-end NVMe protocols. It also has very good response times.
The NVMe technology that we're evaluating will certainly help us with artificial intelligence going forward.
We use it for high performance, block storage, and file storage.
The highest performance need apps are usually deployed on AFF. We're using adaptive QoS to identify what applications require higher performance and moving those volumes over to the AFF.
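For readers unfamiliar with adaptive QoS, a hypothetical ONTAP CLI sketch of the approach described above (the policy name, vserver, volume, and IOPS figures are illustrative placeholders) might look like:

```shell
# Define an adaptive QoS policy whose limits scale with volume size
qos adaptive-policy-group create -policy-group aqos_gold -vserver svm1 \
    -expected-iops 5000IOPS/TB -peak-iops 10000IOPS/TB

# Attach the policy to a high-performance volume on the AFF tier
volume modify -vserver svm1 -volume app_vol01 \
    -qos-adaptive-policy-group aqos_gold
```

Because the limits are expressed per terabyte, the effective IOPS ceiling grows automatically as a volume grows, which is what makes adaptive QoS useful for identifying and tiering the volumes that need the AFF.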
We are able to offer higher performance to meet the business needs. We see far fewer issues with applications complaining about not getting the throughput or the IOPS they need, or about getting too high a latency. We put it on AFF and the issues go away.
The user experience with AFF is fast and secure, with continuous access to data. Our users typically don't know where we're putting their data unless we have some benefit in telling them. If they say, "It's not fast enough," we put it over here, and they say, "It's good now. We're happy." Though we have to be judicious in how we move it, because the storage is a bit expensive, although the higher storage efficiencies somewhat compensate for that.
The solution is providing IT more headroom so we can give higher performance to more applications. Like every business, our data footprint is growing. Our application count is growing, and we're just able to keep up with it now, somewhat better than we were before.
We are spending less time putting out fires, so there's a tangible benefit right there.
On the roadmap, NetApp is improving the solution's storage efficiency, compression algorithms to achieve more space savings, and the management interfaces. We are looking forward to these feature additions in the next release.
Like every NetApp platform, it's very stable. Occasionally, we hit a bug, but you encounter that everywhere. We've never had any problems specific to AFF. Overall, our problems with NetApp products have been minimal. It is a solid platform.
It scales well, probably more so than the FAS. Because of the storage density with the SSDs, we can't buy enough SSDs to max one out.
As with all NetApp tech support, it's outstanding. It is the best in the industry. It is very easy to escalate.
We didn't technically switch solutions. We just augmented, because we have been a NetApp customer for a while. Thus, we're going from FAS to AFF, which is just a natural progression.
The initial setup was not complex. Even though it's a higher performing platform, you run it, manage it, and administer it the same as you do any FAS.
We have a VAR, Tego Data Systems, whom we work with closely. They know our environment as well as we do. So, when we come to them with a need, we don't have to spend a lot of time feeding them background. They're ready to hit the ground running.
Our TCO has probably stayed about the same per terabyte of user data.
We looked at other vendors (Kaminario, Pure Storage, Dell EMC, and IBM), but decided that it made the most sense to stay with NetApp.
I would look at the performance of AFF, its reliability, and its outstanding tech support.
AFF is the wave of the future. Spinning disk will be going away and it just makes sense to go where the industry is going.
AFF helps us improve performance for our enterprise applications, data analytics, and VMs. We have moved our primary data stores for production over to AFF, and a lot of the problems that might have happened have gone away.
To set up and provision enterprise applications using this solution is quick. We're integrating it with ServiceNow, so it is a hands-off storage allocation. A user submits a request and can have storage in five to ten minutes.
We are not yet connected to any public clouds.
We use it for data storage, applications, and CIFS shares.
Through its Cluster-Mode, it's quicker. It also improves Exchange and SQL database performance.
I am still trying to wrap my head around all its features.
The stability is solid. It doesn't fail on us, which is exactly what we want. We are in a critical business where we can't have any percentage of downtime. Therefore, if it stays up, that is what we want. We have been dependent on NetApp for almost a decade now.
For capacity of storage, we manage about three petabytes of data. It is exactly what we need in terms of scalability.
Technical support is first rate. We are very satisfied with it.
Our last solution was at end of life and end of warranty. We went from NetApp to NetApp, so we stayed with NetApp, but we moved to the latest, greatest solution.
It's always a little bit complex when you're trying to integrate a new piece of hardware, with cluster mode as well. There's always a learning curve, but with that curve, there is knowledge which stays with me for the life of that technology. So, that learning curve is essential.
We were migrating from Data ONTAP 7-Mode to its Cluster-Mode. Therefore, we had to get swing gear, then do the migration from loner gear and back onto our new gear. This was a bit difficult. It took us several months to do multiple migrations. Fortunately now, we are on Cluster-Mode and don't have to do that again.
We used a combination of a reseller/consultant. They did a great job handholding us all the way through any type of issue that we had with mission-critical data, ensuring multimillion-dollar uptime every day with virtually no issues.
We have seen ROI, especially in terms of data points and availability.
We did not evaluate other solutions. Our history with NetApp is that it is a stable platform and does what we want it to do. It's not extremely complicated, and it's something tangible that we have used and want to continue using.
Figure out the basics of what NetApp offers. It is not something that you can just dive into, as you will need a bit of background knowledge of it. However, there is plenty of help out there to learn the technology, and it's very tangible.
Give it a go. I would recommend it. We are very satisfied with it and the whole deployment of it. We have almost seamlessly transitioned our production environment into a completely new hardware environment on the back-end.
We use it for all of our VM storage.
I don't know if it improved the way our organization functions, but I know we don't have any storage outages or slowdowns at this point. We just did a refresh about six months ago to the A700s and we have been very happy with the performance of those boxes.
Our latency is extremely low. We average below a millisecond.
The replication would be one of the most valuable features. That's not just on the All Flash FAS, but that's a big one. The performance is also good.
I'm not sure if they can do it. We are using encryption. I'd like deduplication across encrypted volumes. But I don't know if that's really technically possible.
The stability has been really good. We've had just a couple of minor hardware issues but nothing big; DIMMs that were bad and that had to be replaced. But it's been very good so far.
I know it scales but we are not looking to scale it out at this point.
Technical support is a little hit and miss, at least with the particular things that I've called for. The SRA component that integrates with SRM is a problem point. It's a pain point. The support personnel aren't always knowledgeable on that product. At times, they are not even aware of which product is supported and which is not, when one has been deprecated and there is a new one out, or what the bug fixes of the newer version are.
It was straightforward. We did greenfield. We went to two new data centers so the installation of it was pretty straightforward.
We used an integrator. It was very good. We partnered with them a couple times before, which makes for a pretty easy and seamless transition. And ONTAP is easy that way anyway, but they do a really good job of making it an easy transition.
We were pretty heavily invested in NetApp. We did look at INFINIDAT, but it just wasn't something that we were comfortable with.
The product is about a nine out of ten. We have been very happy with the performance. There have been a few minor issues. We fail over a couple times a year. In some of the failovers, the SRAs haven't worked exactly as designed. If the SRA were better, maybe not bundled in with the whole Snap solution, that might help.
We use it for electronic medical record storage.
Because we use the production environment and copy down to test environments, we've taken it from days to hours.
The next solution needs to simplify the day-to-day operations.
The stability is excellent. It's highly stable. We've just never really had a failure since we put it in. It's been two years.
There have been no issues of scalability, for our use.
Technical support has been very good. We use scripting called WFA, and we've had a little bit of an issue with that, going from the first generation to the second generation. But the actual hardware, product, and support itself have been excellent.
We were moving to a new data center, so we needed it.
The initial setup was complex. The fact that it has to interact with both IBM's AIX and the Epic application means there are three vendors in the mix.
We used an integrator, Sirius. Our experience with them was excellent. Sirius already knew the environment we were coming from, an IBM flash storage environment, and they brought it over to a NetApp flash environment.
There were really only two on the shortlist: IBM and NetApp. We chose NetApp because we had an opportunity to make all of our environment NetApp.
I definitely recommend it. It's very complex to set up. Everything is. Even though it's complex, NetApp, out of the other two options, would probably be the least complex.
I would rate it a nine out of ten. We haven't had any failures in the production environment. The only issue, as I said, is that we've had some trouble with the scripting. Otherwise, we'd give it a ten.
We use it in the healthcare industry.
It's helped with latency. It has improved our job flows.
It's fast and reliable.
I would like to see more functionality with the external software, SnapCenter. There should also be more integration with the flash side of things. But overall, it's been pretty good.
My impression of the stability is that it's good.
It's pretty scalable. When you add more to the environment it helps things, overall.
Technical support has been really good. NetApp support has been really helpful. We have a SAM that we use as well, and he helps us with issues that come up, bugs, etc.
We were pushing what we had too far on performance. It wasn't so good, so that's when we looked at All Flash.
It was really straightforward, for the most part. We were used to working with FAS already and this is just adding All Flash and SSD to the mix. It's a lot of the same standards we had already.
For the installation and configuration, we've done the recent ones directly through NetApp. Our experience with them has been positive.
We'll have the solid-state drives around longer so we won't be turning over controllers or disk as fast.
Our shortlist was really just NetApp, in our situation. We're pretty much all NetApp. We didn't evaluate anything else for this particular project.
I would recommend NetApp.
I rate it at nine out of ten, and close to a ten. We've been pretty happy with the All Flash.
We use it for data storage.
We have more storage capacity. Managing it is easier and it's available anytime we want it.
Everybody's moving to the cloud. We, as a financial company, are moving to it as well. We need to know about the security of the information that we have on it. That's the main thing that they need to be talking about: how secure is that information?
The stability is extremely good. It's very stable. We've been running it for about four years now. We haven't had any hiccup with it so far. Okay, there have been a few here and there, but they have been easy to resolve with the engineers that we have.
The reason we have it is that it's very scalable.
Technical support is excellent. We have an excellent team with NetApp. They help us and they are available anytime that we need them.
We knew we needed to invest in a new solution because everybody is moving forward. We don't want to stand still.
The initial setup was straightforward. They had all the codes with them, they just implemented them on the system and, next thing we knew, it was up and running.
We used a consultant for the deployment. Our experience with them was extremely good. They knew what they were talking about, they made it easy, and didn't take a long time.
The amount of data that's stored is increasing day by day. We are a financial company so we have new customers every day and we need to keep their information safe and secure. It definitely has that return on investment in that we didn't have to invest in something else, outside of what we have now.
There was one other option we looked at but it didn't have the scalability. It also didn't have the support that we needed. The experience that we have with NetApp support is excellent.
I would definitely encourage colleagues to go ahead with it. I have had a great experience with it. I would definitely encourage them that this is the way to go.
I rate this product at ten out of ten. It's easy. Once you know your way around it, there is nothing to it. You can do it in a flash.
We use it for CIFS, NFS, and NAS. We are also using it for the cloud environment.
They have come up with top of the line inline deduplication. They are delivering compression and aggregate compaction, as well. Everything is improving with their new features coming out on a day-to-day basis.
These features are missing from other products in the market.
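As a hedged illustration of enabling the efficiency features mentioned above on a volume (the vserver and volume names are placeholders, and exact options vary by ONTAP release), the CLI looks roughly like:

```shell
# Enable storage efficiency (deduplication) on the volume
volume efficiency on -vserver svm1 -volume app_vol01

# Turn on inline compression and inline deduplication
volume efficiency modify -vserver svm1 -volume app_vol01 \
    -compression true -inline-compression true -inline-dedupe true
```

Inline efficiency processes data as it is written, rather than in a scheduled background pass, which is why it pairs well with all-flash media.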
The product should be more competitive and come up with additional features. They should keep the client always in mind and as the top priority. This would be the best way to compete with other solutions.
It is stable. In my three years working with the storage, I haven't seen any issues with our NetApp product.
We started with a cluster of two nodes, then we reached a six node cluster. We have scaled this up, as needed, whenever we saw a requirement coming up from the client.
It's pretty scalable. It can scale up to 24 nodes.
From a technical perspective, the technical support is good.
The initial setup is easy and straightforward; there is no complexity.
We used our vendor partner for the installation. We deal with multiple vendors for the procurement of NetApp devices. So, we call on them to come and do the deployment for us, as per our company standards. Our experience with these vendors is good.
I would recommend NetApp. It is a good product to use.
We use it for our EHR. We have 4,000 users who need to have access to a very large EHR called Epic. We are sharing a Caché database through AIX servers.
It made everything faster. The user performance went from about eight seconds, for certain screens, down to three seconds per screen. That was the primary reason. Our users can multitask faster. The way Epic works is that you have multiple screens up at the same time. When you have multiple screens up at the same time and you have a patient sitting in front of you, speed is quality. Where before, the patient would have to wait for answers, now they get them almost instantaneously. Our users can run multiple things at the same time. For the users, the nurses and doctors, it is faster. All around faster.
As for IT's ability to support new business initiatives as a result of using this product, we are upgrading to Epic 2018 next year. The older system couldn't have supported it. That is another reason we went to a faster system. Epic has very high standards to make sure that, if you buy the upgrade, you will be able to support the upgrade. They advised me, top to bottom, make sure you can do it. Our new system passed everything. It's way faster.
We have VMs and we're running VDI. We're running VMware Horizon View. We have about 900 VMs running on it and we have about another 400 Hyper-V servers running on it. Our footprint is very tiny now versus before. We now have some 30 servers running 1,000 machines where we used to have 1,000 machines running 1,000 machines. We have Exchange, SQL, Oracle, and huge databases running out of it with no problem at all, including Epic. It's full but it's very fast.
It takes us a minute or two minutes to set up and provision enterprise applications using the product. We can spin up a VM in about 30 seconds and have SQL up and running, for the DBAs to go in and do their work, in about two minutes.
It would primarily be speed. That's why we got it. Storage is costly but it's very, very fast. Very efficient, very fast.
Zero downtime so far. We've had it for two years.
We have not had to scale it. We bought it at about 128 terabytes and, right now, we are probably at about 80 or 90. Because of the upgrade, next year we are going to grow 30 percent. We will probably upgrade in 2020 or increase the space.
Zero downtime, so we've never really called. The engineer who supports it will call for firmware upgrades or for a yellow light: "Why is it on?" For the most part, we haven't had any issues with it at all.
We were on a standard NetApp but we upgraded to the All Flash FAS because of performance. We had it in for a test and it succeeded. That's why we bought it.
I have been with the company for 20 years and we have had NetApp for 20 years. We did switch over to IBM, about ten years ago, right before we went to Epic. But Epic said, "No IBM. NetApp." We were switching from NetApp to IBM, because IBM had a little bit of advantage, a long time ago. Then Epic came in and said, "No, switch back." So, we're back.
We have clusters but our guy doesn't know how to do the cluster side of things. That's what the reseller did, primarily.
We used a reseller, IAS. They have helped us. Our experience with them is good. We have had them for 20 years.
The benefit of getting the product, versus not getting the product, has allowed the clinic to do more. Since they are doing more, the return on investment is shrinking. We bought it two years ago and we have probably already paid for it.
The old NetApp we had was paid for. The new NetApp was about $3 million and we paid for that in about two years. It was well worth it because we can do more. For example, our advanced imaging is all pictures, videos; huge amounts of data get used up. Now they can triple and quadruple the amount they could do because of the speed. So instead of seeing ten patients a day, they're seeing 30 or 40 patients a day.
The total cost, the pricing of it, has gone up quite a bit.
Dell EMC. We looked at them briefly when they were EMC. We looked at IBM. But Epic pretty much says that NetApp sets the standard and we have to follow that.
If you have the money, you can't compare it to what we had at all, you just can't. In fact, the one that we had for production for the entire clinic is now sitting in our DR as cold storage. It went from state of the art to boat-anchor in about two years.
We use it for typical data center workloads: Exchange, file shares, and SQL.
We have a big problem in our organization where I can't get the application engineers to give me performance requirements. Now, with the SSDs, I don't need to worry about that anymore. All of our applications perform at a high level; even our test applications perform at a higher level now.
It has improved performance of our enterprise applications, data analytics, and VMs because we have a higher IO from the disk now. We run a lot of write-intensive VMs. For sure the solution helps out.
Our total cost of ownership has decreased because of the nature of the SSDs, their mean time to failure is much higher. They don't fail as often and that's going to reduce it. And because we upgraded to the All Flash and the bigger SSD, we reduced our footprint. I increased my capacity 500 percent and reduced my footprint in the data center by 95 percent.
The most valuable features are
And the CLI portion of ONTAP, in general, is much easier to use.
It's a little behind on security. It's starting to get into multi-factor authentication, they just started to introduce it but not for all products. In my area, we are really big on security, using smart-card authentication. Multi-factor authentication is a big thing for us, being on the federal government side of things. We need all the products to have the ability to do smart-card authentication. That's the biggest one. That's the drawback of this solution. But otherwise, it's getting there. It's starting to catch up.
It has been very stable so far. It's about a year old, we haven't been using it for long, but so far it has stood up very well.
We haven't needed to scale it yet. We probably won't. But obviously, because we are in a multi-node cluster environment, with the switches we can scale out very easily if we need to.
I mostly interact with my sales engineer who is very sharp. The few times that I've had to interact with technical support, it has been very good.
The gear we were on was about ten years old. We always buy behind the technology curve. I noticed that spinning disk was going away and that the industry was moving towards SSDs, so I wanted us to try to get ahead of the curve a little bit, to give us some more horsepower to do some more initiatives that we want to get done in the future.
It was very straightforward. There are setup tools so if you're not very familiar with NetApp, they walk you through the process step by step: How to configure all the interfaces and the SVMs, etc. I'm more experienced with the command lines, so I deployed it that way. But it's very receptive to PowerShell scripting, so it's easy to use.
We used an integrator, reseller, and consultant for the deployment. Resellers are resellers. I don't have a good or bad opinion of them. As for the integrators we had, I'd rather do it myself quite honestly. But it was okay.
Because we're federal government, we really can't choose. We've had NetApp for years. I did evaluate a lot of other products. Honestly, at the end of the day, storage is storage and disks are disks; it's all the bells and whistles on the front. Other solutions could probably have accomplished the same task. Ultimately, it comes down to dollars and cents, but I'm not really involved in that side of it. I'm sure they chose NetApp because of the cost.
Know your workload, know your customer. Know what your requirements are, know what your future requirements are. Determine what's important to you. Think about the administrators, if you're not the administrator; I'm not, I just engineer it. Think about them and how they will use it. Think about the future, where you think your business will grow.
When it comes to setting up and provisioning applications using the product, it depends on what you're doing. But I can have an Exchange server up and running in about 30 minutes.
At the moment the solution is not having any effect on IT's ability to support new business initiatives. I got it to support things like ADI and solutions like that. So hopefully, going forward, it will play a role in that. We have not connected the solution to public clouds. We do plan to in the future.
I rate the solution an eight out of ten because there's room to improve. There's always room to grow. The security side of it: They have a large government customer base but it seems like they really don't pay attention to that side of things. There are a lot of security things, a lot of customers can't send their stuff offsite, and I'm one of them. So coming up with better ways to satisfy that part would be great.
We are mostly using it for NAS, CIFS, and NFS protocols.
Logical data might be very high, but the physical data, because of efficiency features (such as dedupe, compression, etc.), has been greatly reduced. Therefore, we are getting 10 to 20 times the efficiency on this product.
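That logical-versus-physical efficiency ratio is just logical data stored divided by physical capacity consumed. A quick sketch of the calculation, using illustrative figures rather than numbers from the review:

```python
def efficiency_ratio(logical_tb: float, physical_tb: float) -> float:
    """Storage efficiency: logical data stored per unit of physical
    capacity consumed after dedupe and compression."""
    return logical_tb / physical_tb

# Illustrative figures: 200 TB of logical data reduced to 10 TB on disk
# corresponds to a 20:1 efficiency ratio.
print(efficiency_ratio(200, 10))  # 20.0
```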
Data efficiency is the most valuable feature of NetApp.
I would like to see aggregate level encryption in the next release. This is critical.
Disk level encryption is already in the solution, but it is very costly. Its pricing should come down.
It is stable.
It is scalable. On the NFS side, we have around 24 nodes, so that is pretty scalable. Also, the scale up is very high.
Technical support is always great from NetApp. It is the best.
We were not previously using another solution.
The initial setup is very easy.
We have seen ROI from the product.
We were looking at NetApp and Dell EMC. However, NetApp is known for their NFS solution.
This is the best solution in the market.
NetApp is a good company. I used to work there.
We use it for our VMware environment. We run virtual machines and our plan is to migrate all of them to the All Flash platform.
The improvement for us has been space savings on the All Flash FAS platform. The data space savings are almost three times better than what we have right now, a two-to-one ratio.
Regarding the user experience, it's pretty fast. For applications that require high throughput, this platform is pretty solid. It also helps improve the performance of enterprise applications, data analytics, and VMs because it's pretty fast. We moved to a different tier of platform, from a hybrid setup to an all-SSD aggregate, so it tripled the performance for the customer.
The most valuable features are high performance and encryption. It also provides aggregate level dedupe.
The system is pretty stable but most of the ONTAP versions are not really stable. There have been multiple bugs in different ONTAP versions. The hardware is really stable but we see some glitches here and there with the software. That's how the system works.
Right now, we are on a pretty stable version: 9.3.8.
We have not had to scale it. We have a two-node cluster.
Technical support has been pretty good. We have had to involve them two or three times per month.
Our old solution was working fine but the system was going out of support so we needed to do a refresh.
It is straightforward. The whole cluster configuration is pretty straightforward. Just bring up the node and add to the existing clusters. We didn't see any difficulties.
It takes us one day to set up and provision enterprise applications using this product. Migration takes a lot of time but provisioning is setting up the cluster and that takes one day.
We used NetApp Professional Services and they were pretty good.
Because we are government, it is an open contract. People have to bid on government projects. We don't have a say in the options.
I would say this is a good solution but talk to the NetApp guys and see how it really fits in your environment.
We do not connect it to public clouds at the moment. We have plans to do so in the future, depending on the use cases.
I rate the product at seven out of ten. Their system is pretty good but we are still facing a few issues, mainly on the software side with SVM DR. We had it in the previous configuration. We did an ONTAP upgrade but had some issues replicating the whole configuration. There are a few other glitches here and there. Other than that, I would say it's pretty stable.
We do storage across the United States.
We have SQL clusters across the United States. It has sped up our IOPS and made it a lot easier for users.
I would like them to roll in global monitoring instead of having to buy another product for it. If it was built into the solution, that would be awesome.
We haven't had any issues, so far.
We are scaling up to the new solution. We haven't had a lot of scalability yet. We are looking forward to what it can do.
Our technical support experience hasn't been very good. However, we are hoping with our new contract that it will be a lot better.
We were using HPE EVAs, which are very clunky and old, so we moved over to NetApp.
We were just bought out by another company who has been using Dell EMC. They're not happy with that solution, so we brought them into NetApp.
The initial setup was a little complex, because we weren't very knowledgeable about NetApp at the time. We were using a third party, and they didn't have a lot of technical individuals, so it took a while to get it rolled out.
We used a reseller, EVOLTECH. It has been okay so far. There are not a lot of technical individuals with their group.
From an application standpoint, we have seen a lot of return on investment in the speed and responsiveness of the actual storage.
NetApp and Pure Storage were on our final shortlist. NetApp just came in with a better price point that my VPs and CIO couldn't refuse.
Do your research. There are a lot of different storage vendors who have a lot of things which are good. Pick the one that you feel is best for you.
We have a range of customers, from manufacturing to oil & gas, in Malaysia. We have been using NetApp for quite some time, but now performance is a big issue for our customers, along with other challenges for them, so they are opting to go to All Flash.
NetApp is doing a good job of delivering to and satisfying customers. All Flash cloud technology has helped them a lot.
We try to provide a value-added proposition to customers, as a partner to NetApp. Most of them have been dealing with us for quite some time, five to ten years. They've been using a traditional base of NetApps and some other products. We have transitioned some of our customers from other companies' products to NetApp.
It provides our customers with a secure, fast, and always reliable solution. It also definitely affects the ability of our clients' IT departments to support new business initiatives because things become simplified for them, easier to deploy and to get off the ground faster. It gives them more flexibility to scale in the future.
In terms of it helping to improve performance of enterprise applications, data analytics, and VMs, I have one customer that is running SAP on NetApp. The performance improved about 40 to 45 percent. That was a great improvement for the IT infrastructure services team.
The most valuable features are the low latency and high-performance. Some of our customers are dealing with seismic data from the oil & gas industry, so they need data extracted and transported to the application faster. That's one reason we bring in All Flash.
We'd like to see improvement in the time to retrieve from the cloud, whether it's on-prem to cloud and whether it's public or private cloud. That's the most important thing we need.
We don't have many issues related to the appliance itself. In terms of the OS, we do have some hiccups here and there. Our support team and the technical support from NetApp are able to handle that.
At this point in time, a few customers are looking at scaling it. Since NetApp provides vast scalability, whether they scale up or scale out, it gives them better flexibility.
Technical support is good. We have not had to involve them much. Most of the first-level and second-level cases are handled by us because we have a range of certified engineers. Only if it's really a critical issue that urgently needs an expert to dive in will we engage NetApp support.
We have customers who are not NetApp customers. We teach them what the capabilities and challenges are. Our main goal is to comply with and meet our customers' challenges. If NetApp really fits their needs, we move on from there. In a case where we need to transition the whole infrastructure from a different storage brand to NetApp, we'll do that.
If the customer is an existing user, it's easier for us to convince them. If they're a non-NetApp user, it takes time because we have to do proofs of concept to justify it to them. If they agree technically, then the commercial conversation starts. Normally, the commercial conversation does not take that long, because the technical team has agreed to the solution.
The initial setup is straightforward. It is GUI-assisted. There are a lot of step-by-step guides, which are easy for certified engineers to follow. That makes things simple and we are able to make a good impression on our customers.
We are an integrator and a consultant for our clients.
For some of our customers, within one-and-a-half years, they get a return on investment. One year after the deployment, the customer will either scale up or scale out. That will give the customer's site a better footprint.
First thing first, I would advise you to gather the exact requirements and challenges. Try to blend those requirements with the NetApp solution, or part of the product, that suits you. Doing so will create a better engagement in the discussion. Otherwise, it could be very difficult to say that NetApp is the best product for the use case.
It takes less than half a day to set up and provision enterprise applications using the solution.
So far we have not connected any of our customers to public clouds. We have some challenges in Malaysia where some of the data, especially from the banks but also from the government and oil & gas, can't go out of the country. So we are not able to do that. In those cases, usually our customers will engage a managed services provider locally in Malaysia.
I give this solution a seven out of ten. There's still a long way to go and there are a lot of new start-up companies that also provide all-flash and hybrid. For some of our customers' applications, the new solutions are better.
We have a multi-tenant shared solution that we use with Quality of Service to provide bare metal as a service and IP storage to our customers. We keep it very simple. It's an automated solution which customers configure on a portal and then it automatically configures storage for them.
The solution has drastically and positively affected IT's ability to support new business initiatives. It's a very easily automated solution using REST APIs.
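That kind of REST-driven automation can be sketched as follows. The endpoint and field names below follow ONTAP's public REST API for volume creation, but the host, credentials, and values are placeholders for illustration, not taken from this environment:

```python
import json
import urllib.request

def build_volume_request(host: str, token: str, name: str,
                         svm: str, size_bytes: int) -> urllib.request.Request:
    """Build (but do not send) a POST to ONTAP's /api/storage/volumes.

    The endpoint path and body fields follow ONTAP's REST API; the host,
    auth scheme, and all values here are hypothetical examples.
    """
    payload = {"name": name, "svm": {"name": svm}, "size": size_bytes}
    return urllib.request.Request(
        url=f"https://{host}/api/storage/volumes",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_volume_request("cluster.example.com", "dG9rZW4=",
                           "customer_vol1", "svm_customer1", 100 * 1024**3)
print(req.full_url)  # https://cluster.example.com/api/storage/volumes
# urllib.request.urlopen(req)  # would actually submit the provisioning call
```

A customer-facing portal would collect the name and size, call a helper like this, and send the request, which is the shape of the automation described above.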
Combined with OnCommand, the solution helps improve the performance of our enterprise applications.
The most valuable feature is the ability to do QoS and keep customers from harming other customers in that solution.
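The noisy-neighbor protection that QoS provides can be illustrated with a minimal per-tenant IOPS cap. This is a generic token-bucket sketch of the idea, not NetApp's implementation:

```python
class IopsLimiter:
    """Token-bucket sketch of a per-tenant IOPS cap (illustrative only).

    Each tenant accrues `max_iops` tokens per second; an I/O is admitted
    only if a token is available, so a burst from one tenant cannot
    starve the others sharing the same backend.
    """
    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens by elapsed time, capped at one second's allowance.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = IopsLimiter(max_iops=100)
# A burst of 150 simultaneous I/Os: only the first 100 are admitted.
admitted = sum(limiter.allow(now=0.0) for _ in range(150))
print(admitted)  # 100
```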
It's very stable. We have not yet had any issues. All solutions have issues, but we have not yet had any with this one.
We scale up to 64 nodes in a cluster and then we just keep scaling clusters. We've had no issues with scalability.
We've been a partner of NetApp for a very long time. Their support is very good. We use a lot of direct NetApp engineering resources, as a partner at our scale. We tend to work hand in hand with NetApp.
For our use case, we were automating what we were doing so we chose to use the All Flash REST APIs.
Our initial setup involved a lot of development. It was complex mainly because we had to make it simple. We had to simplify it for our own customers, so it was complex for us but it's a very easy solution for enterprises.
The solution is too new for us to see ROI yet.
Dell EMC was our other option. Both Dell EMC and NetApp are partners of ours. We went with NetApp because of relationships and ease of set up.
It's a pretty stout solution. NVMe is coming and pretty much everything we want is on their roadmap.
In terms of connecting it to public cloud, we are a public cloud so we connect to ourselves. When it comes to setting up and provisioning enterprise applications using the solution, it depends on the customer use case. Some are quick, some are really complex.
Our primary use case is achieving greater overall performance, which wasn't possible with regular spinning drives. We wanted higher breakthrough performance with a flash-based solution using all SSD drives.
I am looking forward to the enhanced features coming out: The upgraded version of ONTAP and more support on the protocols.
I would like to see more frequent updates at a faster pace.
There needs to be compatibility with upgraded applications. We don't want the system to be upgraded but not be backwards compatible with existing applications.
It needs to be able to integrate with Intel and other NetApp family products, besides ONTAP.
It's a combination of the hardware along with the operating system which produces the stability. Based on the data protection factor and on its sustainability in case of a component failure, it is well-designed on the hardware and software fronts.
I am satisfied with the stability.
The scalability is amazing. It is an entry-level box which scales up to almost 144 drives, which is more than what an entry customer usually needs. It is suitable for expandability needs and can grow with the customer.
Customers were already using the application. We took their feedback. It was the best product based on our requirements.
I work on the phase when the solution is being designed. My involvement is more on solution design. Once the solution is finalized and has gone through, the implementation is not that difficult a task.
The initial setup is very simple. System Manager 3.0 is built into it, which makes it easier to set up the system. It probably takes about 15 to 30 minutes.
We used a reseller for the deployment. We had an amazing experience with them.
This solution helps us improve performance for our enterprise applications, data analytics, and VMs. It is why we provisioned it. Analytics require huge amounts of processing power. With this solution, the processing happens in a fraction of a second, which would not happen with regular spinning drives. With SSDs, All Flash FAS, and the help of ONTAP, it nails the performance.
Our total cost of ownership (TCO) has decreased by 40 percent.
Dell EMC was an option, but we liked the operating system of NetApp.
With an increasing amount of data being generated every day and a lot of analytics running on processing applications, more performance is required from storage devices. This is the kind of database workload that All Flash FAS is suited for.
I have not connected AFF to public clouds yet, but possibly in the future.
It takes half an hour max to set up and provision enterprise applications using AFF.
It is a diversified solution.
It's mainly for storage. We have various databases with different applications, and we are using it just as storage for our systems.
A while ago, they performed slowly, but now they are quite fast.
I think the major thing to improve is the implementation, especially where the technology is being implemented for the first time. Be sure the partners are well aware of what needs to be done, from the moment the sale is initiated or a purchase order is provided, to the point of implementation.
I think it is a very stable product.
Implementation was not easy.
When evaluating a possible solution, I look for:
Always consider whether you can afford the solution.
We also looked at IBM and EMC, but eventually we chose NetApp AFF because we already had people experienced with NetApp AFF. We did not want to invest in new technology completely.
Make sure that you are very clear in terms of what you want to buy. Your specifications have to be very clear, so there are no gray areas. From there, it's up to which vendor provides you with the right proposal, and if it's cost-effective, go for it.
We are using it for VMware and Hyper-V data stores.
We have probably doubled the number of virtual machines that we've provisioned since getting an AFF.
It has done everything we have needed it to do.
It's been very stable. We only had a few upgrade issues. Other than upgrading, it has been 100 percent completely stable.
It should scale far beyond our needs. I don't think we will ever hit the edge of it.
Support has been good. I've had a few cases where support wasn't able to answer the question or took quite a while, but the majority of issues have been answered fairly quickly.
We were at the edge of the performance on our previous system. We took a risk with the AFF because it was more expensive than going with the newer model of what we had, but it was definitely worth it.
The initial setup was straightforward. I'm very familiar with NetApp, so it's more of the same. I didn't have any problems.
I did the deployment myself.
The cost savings has been higher than I expected.
Our space savings through dedupe and compression are over 50 percent, so we are saving. I think our 8080s have 20TB. We are saving at least 10TB, and that's over 50 percent of the capacity that we're using.
I would like the pricing to be cheaper.
Our shortlist would have been EMC, NetApp, and possibly Dell. This was before Dell bought EMC.
NetApp was there because of the NFS support. That's why we chose NetApp, because of the NFS support plus their compression and deduplication. The cost savings on that alone was worth it.
It's worth the slight increase in cost for performance. In the end, you save money in the long-term (ROI).
We use it for medical systems.
NetApp has always been very reliable. We have never had any data losses. They are a work horse.
I found the reliability of it to be the most valuable feature because it supports all the patient critical systems in our hospital. We have had the NetApp system for 18 years with no downtime.
I would like to see if they could move the virtual storage machines. They have integrated DR, so you can fail over to your DR site, but there's no automated way to failover and failback. It's all manual. I'd like to see it all automated.
We have never had a failure.
Over the past 18 years, it has been extremely easy to upgrade to newer products and technology. We can upgrade as we move along. So, we have been able to keep up with the newest technology with zero downtime.
The scalability is endless. There have been no limits that we have come across yet.
Technical support has been excellent. We have local technical support. If we give them a call and need somebody onsite, they could be there within ten to 15 minutes.
I think we were previously using IBM FASt100 in the 2000s. From there, we moved on to NetApp.
I never found it to be complicated, but I have a lot of experience with NetApp setups.
After upgrades, it's very intuitive and easy to pick up.
A NetApp support person did all our installations, upgrades, etc. Our experience with them was excellent.
We have been able to utilize and leverage equipment which was purchased a decade ago up until this past year. We were running disk shelves for 13 years, and all we were doing was upgrading the filers and controllers while using the same disk shelves. Therefore, we were able to get by without investing that much. Recently, we had to upgrade all our disk shelves, but it cost a lot less because the technology had changed a lot since those times. It is faster now, and we have SSDs. We have larger drives that are 4TB and 6TB. Everything can be condensed, so we are saving shelf space and rack space. We are paying less now than we did at that time. So, we've gotten our money's worth out of it.
Look at the different options that NetApp offers. Look for a model and option which fits your needs correctly. Don't buy a low-end product for a high-end job.
NetApps offers a lot of different options. Just take your time and work with the consulting teams. Lay out what your needs are to ensure you are purchasing what will help you be successful.
We have put our trust in NetApp, and they have given us the customer support and a stable, reliable product.
Sometimes, I have to get rid of equipment and upgrade because it is no longer supported. It's not that we are getting rid of the equipment or upgrading because there's something wrong with it; it will last forever. I have had disk shelves that we've had to let go, which were still working, because they weren't supported.
NetApp is our primary storage device for our line of business. We use NetApp as our primary storage device and also for our DR.
We are a workers' comp insurance company that has been in business for 120 years.
It has helped us improve the performance for our enterprise applications, data analytics and VMs across the board. We recently upgraded from a FAS3250 platform to the AFF A300 all-flash array. Batch times went from approximately seven hours down to about two and a half. Functionality during the day, such as taking or removing snapshots and cloning instances, is higher than it has ever been.
We are employing the native encryption on disk along with NVMe. Therefore, it is a more secure solution. Our user experience and performance have been remarkably better as well.
A lot of application administrators have a lot more time. We have been able to do some things that we were unable to do before, so it has helped streamline our business a lot.
We enjoy the native built-in replication and the snapshot functionality (to take snapshots).
I just got through the session where it looks like they are going to support Oracle running on Linux with SnapCenter. That is one of the main things that we are hoping to get integrated.
NetApp has always been a stable platform with very few problems at all.
It is very scalable. Because of the cloning and snapshots that we do, we are getting a data efficiency ratio out of our production array of about 32:1, which is a high ratio. So, we took quite a bit of data and shrunk it down in size, letting it scale out better.
We are going to be adding another shelf to it, but adding storage to the NetApp appliance has always been easy to do. We usually do it ourselves without getting a third-party contractor involved.
NetApp's support has always been top-notch. I haven't met anyone in the NetApp institution who hasn't been a remarkably intelligent, easy-going person to work with. It is amazing. Everyone from their support crews to their sales engineers are good. We have a good relationship with them.
A big guiding point for upgrading hardware of any type now is to look at the support costs. If support costs get high enough, it financially doesn't make any sense to not upgrade.
Usually once a new technology matures enough, you can look at TCO and decide to make the decision to move ahead. So, we invested in this solution because of costs and the technology improved to the point where we knew it would be stable.
The initial setup was very straightforward. It was intuitive to set up storage volumes and get the networking functioning. Their engineer was very helpful. We got the current array on our production site the very same day it was shipped in. We had it up on the network and started to put some storage on it.
We used a NetApp professional services for this deployment. It worked out really well. We had involvement of several different support engineers to help with all aspects of the rollout.
The total cost of ownership has decreased a great deal. As far as percentages, it's hard to gauge, but we did have quite a few personnel staying up, making sure batches ran well every night. Now, batches are being done by 8:00 in the evening, so we don't have to do that anymore. When you start adding the employee hours that we have for people working in the off-hours, and it is not an issue anymore, I suspect TCO might have gone down 25 percent.
Setting up storage for an application (storage provisioning) is quick and easy. Maybe a quarter of the time is now spent getting the application up and running, or even less.
We also talked to Tegile and HPE, but nobody else offered up the functionality or snapshots. It was a no-brainer.
We have been a NetApp customer for about ten years and have enjoyed the relationship a lot.
The important thing for anybody to check out is the snapshot functionality of NetApp, and how well it works to provision for backup. It also provisions test environments with it. There are so many advantages to the way they do snapshots compared to other companies, and they have all these wondrous tool sets to leverage the snapshot functionality. Anybody who is looking into a storage solution needs to look at all of the attributes to the NetApp platform.
Connecting it to public cloud is our next project. We are looking at DR using NetApp cloud services, so that will probably be coming up first quarter of next year.
We are looking at a new series of arrays for our building video security storage as well, and there is no doubt that we will be going with NetApp. NetApp just does a solid job, and their support is top-notch.
We use it for data storage for Citrix VDIs.
The improvement to our organization is in the ability to put more into the same storage platform. We came from EqualLogics and the ones we had didn't have that nice compression and deduplication to get a little bit more out of the storage.
Also, the protection of the data, being able to replicate between sites easily, is valuable. We were a "backup shop". Replication isn't quite a backup, so I haven't won that fight yet, but at least it protects us offsite, easily.
The most valuable features are deduplication and compression, so we get more out of our storage. The replication is also important.
I would like to see a little more flexibility in customizing some of the SnapMirror stuff. We have been having a little trouble and, in the first round with tech support, they say, "Well, this is how we do it."
It's not exactly throttled but it's limited in the number of connections it makes. We would like to be able to tweak that, to increase it a little bit, because we don't have half a dozen large areas that we are protecting, we have more like 40 or 50 areas. They run into each other a little bit and I don't want to spend time on them.
It's very stable. It's always there when we need it. With the Dual Controller, if one drops out, the other one comes right online. We don't use any iSCSI so there is a little bit of a latency break but, over the NFS, we don't notice that switch-on. We can do maintenance in the middle of the day, literally rip a whole controller out of the chassis, and do what we need to do with it.
We have not needed to scale it.
Technical support is generally very good, once they get a good idea of what the issue is. Occasionally you need to be a little more specific about your problem to get the right team working on it. But they're normally very good, very responsive, efficient, knowledgeable, and very patient. They're willing to take the time to make sure you understand their analysis and their recommended solution.
The reasons we switched were performance and the number of IOPS in the previous product. It was an older product which was dog-slow. Some of the larger file servers were the worst. And that played out to everything else that was sharing the storage with it.
There were a few initial setups. Two of them were relatively straightforward, and one of them, the AFF8080, was a little bit more complex. On that one, there were a lot more network interfaces, and we had to figure out where they all go.
We also leveraged the IP Spaces which was really good because we house some data for an affiliate, rather than somebody in-house, so that was amazing.
We used a reseller for the deployment. The only problem with doing it that way is that I find we did not have a good idea of the current roadmap. On some of the projects we purchased for, we might have made a different decision had we known what was coming six or nine months down the road.
Some of that was on us. We probably could have pushed for that, but having that reseller "middle-man" made it more difficult.
We haven't had the time to do a proper analysis of ROI yet.
The next closest option that we considered was Dell EMC.
Try to get behind the sales guys to the people who do pre-sales tech support to really understand the roadmap and other aspects of the product. The sales guys are great but they're sales guys. If you can get to the tech guys behind them and really talk to them about what your problems are, and what you are trying to attack, I feel that works much better.
We are a multi-cloud provider and we use NetApp All Flash as the base for providing the cloud services.
It gives us the power and agility to spin up VMs as quickly as possible.
We have also standardized on NetApp. All the storage that we have for our services runs on NetApp. Being standardized, it's easy for our Operations. We can train them on a single platform.
It helps improve performance for enterprise applications, data analytics, and VMs. With the power of flash, we moved from a traditional hybrid storage to all-flash. Having the full-fledged power of flash, and the controllers, it has doubled the performance compared to what we used to get.
Finally, our total cost of ownership has decreased by approximately 10 to 12 percent.
The most valuable feature is the efficiencies that all-flash brings. It helps us reduce costs and be competitive in the market. It's quite easy to operate and monitor, to do business as usual.
Whatever they talk about, it delivers. It's fast, it's efficient, it's agile.
With the new version, they have FabricPool, which works for me; I can extend to hyperscaler storage. The features we require today are present in ONTAP.
It would be great if they had a single pane of glass or a single dashboard where all the NetApp ecosystem storages could be viewed and monitored simply. That would help my Operations.
Being a service provider, we cannot afford any downtime. It's working fantastically as of now. It's sturdy and just rocking.
It's all-flash, so you just add more nodes to the cluster and you're done. Scalability isn't an issue. That was one of the evaluation criteria; we needed something that would scale out.
Tech support is not just for AFF; we have a long-standing relationship with NetApp. Overall, the support guys are very proactive. They help us with new fixes and patches, and we keep up with them. We have a very good relationship.
We haven't really had much of a need to escalate issues. We don't actually get into "escalation mode." We just talk with senior management and things get done. We're happy with the support.
We did not have any other flash solution. We were running a tiered storage approach but because of market demand, where our customers wanted efficient performance, agile cloud storage, that is what drove us to evaluate the newer technologies. With all the technical evaluations we did, we settled on All-Flash.
We chose NetApp because we had the SolidFires in place and we already had the standardization. We also went with NetApp because of the partnership and the support that we get from NetApp. In addition, it proved that it was technically better than the competitors in the benchmarks.
I was involved in the technical and commercial analysis, but not in the actual environment setup. That was taken care of by another team. The initial setup was straightforward but there was definitely a lot of planning that went into getting it deployed smoothly.
Being a services provider, every customer has unique requirements, which makes it more complex for us. We took a good amount of time to understand, evaluate, and come up with a proper deployment plan so we wouldn't get into trouble at the deployment phase.
We had an in-house team do it.
I haven't calculated ROI because, being on an OpEx model, since we're providing services, the ROI is typically 36-plus months. We're not there yet.
We evaluated Nimble, 3PAR, and Dell EMC.
You should definitely look at NetApp AFF and evaluate it.
In terms of how long it takes to set up and provision enterprise applications using AFF, we have a back-end provisioning tool so it's all automated. I cannot define it only with respect to AFF because the entire orchestration works. But on average, we take about five minutes to provision a VM.
I would rate the solution at eight out of ten. It has definitely helped us bring our costs down and gives us powerful storage at the back end to serve our customers. It would be a ten out of ten if they brought my TCO down even more.
After testing with early ONTAP 9 versions, including the storage efficiencies, we found that AFF systems can decrease the data footprint of MS SQL databases (a real customer's multi-TB database) to 1:4, even though aggregate dedupe wasn't available at the time of testing and post-process compression and dedupe were disabled. Snapshots, provisioning, and cloning were not included in the 1:4 data-reduction result. Alongside the AFF systems, we tested EF and IBM FlashSystem at comparable prices. AFF showed not only the best storage efficiency but also the best storage performance (based on overall application performance, using the MS SQL database).
Therefore we found AFF systems very competitive in terms of performance, storage efficiency, feature richness, and scalability.
A centralized storage solution for telecom organizations. The NetApp FAS 6200 was connected to HP-UX, AIX, Linux, VMware, and Windows; the storage is used by an OLTP solution (database and application) as well as a data warehouse application.
The graphical interface is still heavy and slow. It needs more improvement in this area.
Yes. There was a bug in an older version related to NVRAM. However, they have fixed it at both the firmware and ONTAP levels.
The technical support team is really cooperative. I have experienced slow responses several times when the ticket was only opened in the portal. On the other hand, a single phone call to them improved the case support tremendously.
Also, if AutoSupport is well configured, you don't need to do any monitoring. You will get a call and an email whenever an issue comes up.
We earlier used EVA, MSA, and XP from HPE. In order to enhance our capacity, we switched to NetApp. Interestingly, after moving to NetApp, we discovered more features than we had even thought about.
Setup was simple and easy.
Implemented by vendor (local partner and OEM engineer). They are really experienced.
So far, I understand the cost is less than many other storage systems with the same or similar performance benchmarks. If you go for replication, Vault, and NAS, ensure that the licenses are ordered at the very beginning. However, licenses can be added or modified at any time without rebooting the system.
We considered the product from EMC.
This can be used as storage (SAN/NAS) as well as a SAN volume controller.
We used to run an older FAS with FC drives and were always having trouble with performance. AFF is fast, with low latency and plenty of I/O headroom. Management is fairly easy, as we know our way around NetApp from experience with the old FAS.
The speed is important; no more problems caused by high latency.
MetroCluster provides business continuity and is a critical part of our contingency setup.
Stability could be improved.
No issues with scalability.
In the first years it was great; after that, it became worse.
NetApp is getting too expensive.
The most valuable feature for us: we started our VMware solution on a mid-tier NetApp solution, and when we went to All Flash FAS, our response times went from about 5 or 10 milliseconds to 1 millisecond. The systems actually started acting like real computers, not like a virtual system.
The benefits for our organization are that our customers actually noticed, and that's pretty hard to do sometimes. It was really good because they actually noticed the response times changing and that our virtualization system actually became more responsive for them.
Our stability has been very good. We haven't seen any down-time for five or six years probably.
Scalability on NetApp is not a concern. I'm sure we're going to buy more. Because we are using clustered NetApp, we can take the next heads, add them to the cluster, and just migrate things in the background, and nobody notices. That's probably the best thing about the scalability.
The technical support is really good. We don't use it that much because I have a few guys on my team that are really good with the product. But the technical support, whenever we need them, is great. We actually work with Sirius Computer Solutions, our partner. They help us figure out where we should upgrade to. They'll come in and they'll do technology things to make sure that we are going for the next solution that will help our product.
We did the initial setup. I would say it was an eight out of 10. There were some issues but it was okay. They helped us fix it, and we figured it out. That's mostly because we just like to do it ourselves, because we want to see what we're doing and what's in our datacenter.
Yes, we evaluated other solutions but the NetApp solution seemed to be the best one for what we were doing, and for simplicity of moving from the current solution to the next solution.
If a colleague was evaluating storage solutions, I would tell them to buy NetApp. The compression, the dedupe, all those things that happen, are just better than everybody else's platform.
Scalability, really, for us. We have a lot of customers who purchase other companies and they need scalability; the NetApp solutions really lend themselves to that.
I think for us the pricing point was pretty important too. In Australia, we find that selling solutions now, the features and functions are one thing, but the price point is pretty important as well, and NetApp provides a good price point.
There is a variety of features and benefits for customers using this solution. A lot of our customers are coming over from EMC, and the integration with cloud is pretty important to them. NetApp has a big roadmap on cloud integration, and that's important to them. It's one of the reasons I'm here: to understand more about the cloud integration and those on-site/off-site features. A lot of people are now looking at cloud. There are a lot of hardware solutions coming up, and NetApp really lends itself to them.
I don't really know. After this conference, maybe I'll have an idea of other features that I'd like to see, but at the moment the features provided are adequate for the customers' needs.
I don't give a 10, or a nine out of 10, straight off the bat. I'd like to work more with it before I can give it a better rating.
Probably about two or three months.
So far, no issues at all.
Most of the companies we do solutions for acquire other companies, so it's important to them at the beginning to know that, even though they don't know what their sizing is going to be like for the next three to four years, if they do purchase companies and a lot of data comes on board, the solution is easily scalable.
I think I did one call with tech support and it was pretty quick. They got me the right answer immediately and I think the call was closed within one day.
I've actually shadowed a NetApp consultant and it looked to be straightforward. I can't wait to do my own in the future.
EMC (we do a lot of Celerra and VNX implementations), HPE EDS, and Hitachi.
My experience so far, compared to other solutions, All Flash FAS has been pretty good. I think the documentation in NetApp is pretty good. I think the interface and your working tools are pretty good, compared to some of the other vendors where, with them, it gets complicated. I think other vendors have add-on components to their solutions. NetApp's seems to be native. Those are great benefits to us.
The way my company integrates with customers is our sales force checks with the customers, they decide on a solution and then it gets passed over to technical, which I'm part of. We inherit the solution and then we try to make the best of it. We do give our sales boys a lot of pros and cons for each type of vendor.
I suppose that's where the sales guy, when he has his initial discussions, works out a technical solution for the customer at a high level and then also works out a price point.
I'd say the price point's an important factor. I think a lot of solutions provide similar functionality and I think that the edge would really be the price point, for us.
Sometimes the customer has had a relationship with another vendor and they get to a point where they'd like to move over to something new, because of support issues, or there might be some kind of issue with their sales rep. Lots of factors sometimes influence them. That's why it's important for our sales force to exactly understand what the issues are.
The most important criteria when selecting a vendor start with, "Is it going to work for the customer?" We'd like to do best-of-breed for customers and we don't like to just push a solution down because of any relationship with the vendor. It must work for the customer.
So far, NetApp solutions that we have put together have worked for the customer. It is sometimes hard to get NetApp into a customer when they have another vendor, like EMC. It's hard to push the other vendor out, because not only the storage but there are also other parts that the customer sometimes aligns to a certain vendor, so it is hard to push it.
Do good research. Make sure that the customer doesn't have any pre-existing relationships that might deter them from going to another vendor; that's really important. Sit down with the customer and go through the pros and cons of it. Sometimes it's good to point out the cons as well, so that they understand those and not realize those six months or a year down the track.
I've had a really good experience. It's pretty straightforward. It meets the customers' requirements. The price point is really good. But I'm going to reserve the 10 out of 10 until I get a bit deeper into it.
We have seen a speed improvement, and our applications are a lot faster.
Probably on the management side of things. It is very complex.
Probably six months.
It is pretty scalable.
Tech support is very good, so give it an eight out of 10.
It was an older system, a disk-based system, so we were looking for a performance improvement.
It was a natural progression from the previous system, so it was just more of an upgrade rather than a new system.
It was reasonably straightforward. There is a lot of knowledge on the net about ONTAP systems, so the setup has improved.
The NetApp ONTAP system is a very good system to work with and use. Very versatile and once you know how things work in the NetApp world, then it makes it very easy to keep the systems for a long time, to work with them, and they work very well.
It is a brand new system, and it works extremely well. Performance improvements are as expected.
Our biggest use cases for the AFF are virtualization and databases. We also use it for file storage.
For any performance-intensive applications, it's been night and day since we put them on it. We had them on spinning disk, then converted them to the AFF. The latencies have become really low, and my customers are all happier for it.
Learn about the benefits of NVMe, NVME-oF and SCM. Read New Frontiers in Solid-State Storage.
Speed. It's very much designed for performance; it's designed to deliver a lot of high speed.
I like what they're doing with their management tools. It makes it really easy to manage them. They're always improving and going with those. It's been really great, especially with the APIs. We can use them to make our calls and to manage it. It's been good for us.
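The reviewer doesn't name their tooling, but management APIs like the ones mentioned above can be driven from almost any language. As a minimal sketch, assuming an ONTAP-style REST endpoint (such as `GET /api/storage/volumes?fields=space.size`) that returns volume records as JSON, parsing such a response might look like this; the payload shape and values below are illustrative, not captured from a real cluster:

```python
# Hypothetical sketch: parse an ONTAP-REST-style "records" payload.
# The sample JSON below is illustrative, not real cluster output.
import json

def parse_volume_records(response_body: str) -> list:
    """Extract name/size pairs from a REST 'records' payload."""
    payload = json.loads(response_body)
    return [
        {"name": rec["name"], "size": rec.get("space", {}).get("size")}
        for rec in payload.get("records", [])
    ]

# Shaped like a reply to GET /api/storage/volumes?fields=space.size
sample = '''
{
  "records": [
    {"name": "vol_exchange", "space": {"size": 1099511627776}},
    {"name": "vol_vmware",   "space": {"size": 2199023255552}}
  ],
  "num_records": 2
}
'''

volumes = parse_volume_records(sample)
for v in volumes:
    print(v["name"], v["size"])
```

In a real script, the JSON would come from an authenticated HTTPS request to the cluster's management LIF rather than a hard-coded string.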
Cleaning up false positives on alerts. We get a lot of those. If we could find some way of not getting so many, so that the alerts that do come in are real and valid, and not so many false positives, that would make a big difference.
We've been really happy with their stability. We did run into a bug that nobody else knew about and they came up with a patch for us to help fix it, and it's been rock solid ever since. So we're happy.
With their clustered ONTAP we can scale as big as we need to.
I've been happy with them. They've gotten me the answers every time I've called in. I haven't had any problems with getting the escalation I need. I just ask for it and they're able to kick it up and get the response that we need.
It was a little complex. There were a few changes that we were not privy to. For instance, they had the 40 gig converged NIC that we didn't even know was available until we got it. Learning how to adjust that and manage that was a little bit different, it was a little bit of a learning curve, but it was not horrible at all.
We've been a customer of NetApp for a long time and they're a good, strong company and we have a close partnership with them. We are more likely to consider NetApp for mission critical storage systems based on our experience with AFF because they're a great company to work with. They put out some good products.
The most important criteria for us when selecting a vendor would be
They're a good strong system. I don't think that anything is perfect, but it's pretty close. It takes care of everything that we need. It's a fantastic solution. We haven't regretted getting it.
More performance features. We need our jobs to run faster.
Yes, it is stable.
Yes, it is scalable.
Helpful for troubleshooting.
We did not have a previous solution. We chose NetApp because we have other NetApp systems.
It was an easy setup. It was done very quickly.
The big benefit is the performance increase over the previous versions and the previous systems.
Also, being able to do things such as moving machines around and moving volumes around, the little maintenance and everyday things you need to do; those tasks become that much quicker, and that makes everything that much easier. You're not waiting for a Storage vMotion to take half an hour to run, whereas on an all-flash system it takes half the time you were used to. That's awesome.
In addition, there's less time you have to spend troubleshooting things.
Ease of use. Integrating it into our virtual environment is very easy; the integration with VMware is very nice. I think it's better than what other vendors have. It makes it easy even for people who aren't familiar with NetApp to use. For example, a virtual administrator or a Windows administrator who just comes to it and needs to provision a virtual machine can use the VSC easily, as opposed to having to know how to connect this and that specifically.
Also, for disaster recovery, SnapMirror; and FlexClone, for being able to do testing on the fly, is pretty awesome. You can do tests very quickly and, within seconds, have a clone up that you can attach to your virtual environment; you can even have it automated, so you don't have to do too much of the work.
To be able to have that flexibility, do testing, do failover, disaster recovery testing, and restores with snaps that are super easy.
I've definitely thought about this at earlier times, where I would probably have more stuff than I do now. The integration is pretty good.
I think there could probably be some more functionality out of like the VSC-type of plugins for the virtual environment.
The backup-type of functionality that comes from NetApp is okay, but I could see some enhancements in that regard, too.
It's definitely impressive. I haven't had a problem with the system. Been running it for about nine or 10 months now. It's stable, absolutely, 100%.
We have a smaller environment, just a two-node cluster, one on our primary side and one on our secondary side. One of the benefits that NetApp brings to the table is being able to add nodes if you want to; if you need more storage or more power, more processing speed - boom! You can just add nodes and that's it.
I've used them many times. There are always some techs that are better than others, but I've found that NetApp support is better than some other vendors, even non-storage related vendors, whose tech support you have to call.
We mainly run virtual environments, VMware NFS. We were previously using just SATA and SAS disks, and when we went to All Flash the performance was way better. It was a great improvement over the previous system.
We maxed out our previous system in terms of its space and also the IOPS and the actual performance we were getting out of it, as we continued to grow.
We were a small company. Our parent corporation rolled us into our own corporation, we did an IPO. Then we grew a lot from that, so we had our older system that we had previously and, as we grew, we threw more databases and the like at it. We saw the performance was definitely not able to keep up. Once we implemented the All Flash FAS, it really wasn't an issue any more.
It was very straightforward on the setup.
The upgrade was actually very easy too. We didn't even really need to do a traditional migration when we did our "migration" to it; we didn't have to use a migration tool. It was easier to set up the new cluster next to the old one, set up intercluster links, SnapMirror all the data over, and then just bring the volumes up. We did a planned failover, like we would for disaster recovery, where you bring up the new system and bring down the old system; that's how we did it.
Actually, we took that old system to make our disaster recovery, so we just sent that to our failover site and then we already had the data in sync too. We didn't have to do that whole process of syncing the data across the LAN, we were able to do it right next to each other on our LAN, so it was super fast, and then sent over our system, and then just resume the SnapMirrors.
We had NetApp already, so they were always a front runner, but we were looking at EMC, EqualLogic. And even, instead of having a NetApp, a different DR solution altogether, where we would have a third-party replication system that could replicate our data - instead of having another All Flash FAS or another FAS on the other side - and just relying on a different DR system altogether.
Once we took into account the easy integration of everything, how everything worked together, and the familiarity and comfort we already had with it, it was easy to decide on NetApp, both the company and the product.
Right now we just use it for file storage. We were using block and file. I'm going to be using block in the future as well.
In terms of my impression of NetApp as a vendor of high profile SAN storage, before I purchased AFF, I always liked NetApp. I was always impressed by the company in general, as a NetApp customer previously. But the All Flash FAS definitely has even increased that and enhanced my opinion of them more, based on the functionality, the new stuff in ONTAP 9. We were using an older 7-Mode system, so the transition was pretty easy; and just the overall benefits of the system and the new functionality.
We are more likely to consider NetApp for mission-critical storage systems in light of our experience with AFF because of the reliability, the ease of the failovers, and the high availability of the system.
Our most important criteria when selecting a vendor include responsiveness of the company to their customers, what they need and they want. I feel that NetApp has a very good finger on the pulse of their customer. They have good relationships with their partners and the third parties, so it is a very easy transition when dealing with NetApp partners. It makes the actual buying, and dealing with the quoting, very simple.
Also, in selecting a vendor, support is definitely an important issue; having someone to lean on if there is an issue - and when there is a mission critical issue - that you know you can rely on. It's important to have someone who is going to respond right away, so that you're not waiting for someone useful to help you.
Do as much hands-on testing as you possibly can. It's hard to test it out in the real world. The NetApp Insight conference is cool because you can see the product up close and personal, and they do demos and labs. But definitely do your research, as much as you can and pick something that works, that makes sense for your company, and organization as a whole.
Our use case is really just our Exchange environment right now. In terms of block or file storage, we present it to VMware and then present it as RDMs to the virtual servers. Our AFF is not currently part of a cluster together with other NetApp FAS systems.
Because of all the inline deduplication and the integration with SnapManager, it allows us to set the storage and forget it with the Exchange team. They do all the restores through the Snap Single Mailbox Restore.
And it's quick, it's fast, even though IO is not huge for the Exchange environment, it's still nice to have that speed for when they do have that need.
Its integration with the SnapManager products is really the main reason we've stuck with it. Without that integration, our Exchange team couldn't operate without us.
For us, probably the best feature is really an ONTAP-wide feature: FabricPool, tiering unaccessed blocks directly to the cloud over time. For us, that would be the feature to revolutionize where NetApp stands and bridge its connection with the cloud. It's actually a feature they're introducing now; it's just not mature.
Right now you're only aging snapshots up to the cloud, and only if the aggregate is 50% or more full. It would be cool if FabricPool simply tiered aged blocks. Who cares whether a block is still on the array after it hasn't been accessed in three years? Just age it up to the cloud; if I suddenly need it, pull it back.
That should be automatic, without extra steps. You could use FPolicy to do it one way, or you could do it a different way. But it would be better if that were just in the array, part of the normal hybrid flash array with FabricPool on the back end, to get rid of that extra old data.
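The age-based tiering the reviewer describes can be sketched as a toy model. This is an illustration of the idea only, not FabricPool's actual algorithm; the block granularity, tier names, and three-year cutoff are assumptions for the example:

```python
# Toy sketch of age-based tiering: blocks not accessed within a cutoff
# are demoted to a cold (cloud) tier and recalled transparently on read.
# Not FabricPool's real algorithm; all names and the cutoff are assumed.
import time

COLD_AFTER_SECONDS = 3 * 365 * 24 * 3600  # e.g. "not accessed in three years"

class TieredStore:
    def __init__(self):
        self.hot = {}   # block_id -> (data, last_access_time)
        self.cold = {}  # block_id -> data

    def write(self, block_id, data, now=None):
        self.hot[block_id] = (data, now if now is not None else time.time())

    def read(self, block_id, now=None):
        now = now if now is not None else time.time()
        if block_id in self.cold:              # transparent recall from cold tier
            self.hot[block_id] = (self.cold.pop(block_id), now)
        data, _ = self.hot[block_id]
        self.hot[block_id] = (data, now)       # refresh access time
        return data

    def tier_out(self, now=None):
        """Demote blocks whose last access is older than the cutoff."""
        now = now if now is not None else time.time()
        stale = [b for b, (_, t) in self.hot.items() if now - t > COLD_AFTER_SECONDS]
        for bid in stale:
            data, _ = self.hot.pop(bid)
            self.cold[bid] = data
```

A recently read block stays on the hot tier after `tier_out`, while an untouched one gets demoted, which captures the "just age it up, pull it back if needed" behavior the review asks for.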
It's really stable, in our experiences, this stuff has been pretty rock solid.
We haven't had to deal with scaling yet.
I use NetApp's tech support all the time. I actually think they've done a great thing - the introduction of chat support has been really great.
Increasing hours for that would probably be good because it's easier to be on a chat call and be troubleshooting with something. Sometimes a lot can be lost on a phone call.
We've been a NetApp customer for a while so we've used disk-based and hybrid storage from them.
We use Nimble for our primary VMware storage right now. We haven't switched that back to NetApp yet. We're going to see how the next few years go and then we'll figure out from there.
We were using Exchange on NetApp storage before, and we knew the SnapManager products were a huge part of that. When you can't get the same functionality trying different things with different vendors, you don't want to beat your head against the wall reinventing the wheel. It was a natural progression for us.
It was pretty straightforward. Our need and setup for it wasn't crazy.
Our impression of NetApp as a vendor of high-performance SAN storage, both before and after we purchased AFF, was good. For our primary VMware storage, we went with a different vendor for a little while, then pulled back to NetApp for this because of the ease of functionality and ease of use with ONTAP.
Based on our experiences with AFF we are more likely to consider NetApp for mission critical storage systems in the future because of its reliability. We've tried out other vendors, and we might end up going back to NetApp for those solutions, given our different experiences.
When selecting a vendor to work the most important criteria for me would have to be:
As for advice I would give to a colleague in a different company who's looking at AFF and similar solutions, it depends on how they support their Exchange environment. But if they are willing to pay for the SnapManager and Single Mailbox Restore suite, it's really hard to beat what NetApp has done with it. If you set everything up properly, restores are pretty much a non-storage event; you can mostly push that off on your Exchange team and just worry about when they need large data increases.
It has resulted in more customer revenue. We've got a very diverse crowd as far our customers go. Different customers are asking for faster, more performance, more service, and AFF pretty much delivered that.
The performance. The flash performance helps move data pretty fast.
I don't know if it's really specific to AFF, but metrics as far as performance. I would like to see a lot more of that.
Also, ZAPI is kind of difficult to use. It's SOAP-like; it's not really SOAP. I would like to see it be more REST-based with JSON, instead of XML.
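To make the contrast concrete, here is a hedged sketch of the same "list volumes" request in both styles. `volume-get-iter` is a ZAPI call name and the REST path follows ONTAP's later REST API, but the exact envelope attributes and field list below are representative rather than an exact API transcript:

```python
# Contrast the ZAPI (XML) and REST (JSON) styles the review compares.
# Envelope attributes and field names are representative, not exact.
import json
import xml.etree.ElementTree as ET

# ZAPI style: an XML envelope you must build and parse by hand.
zapi_request = ET.Element("netapp", version="1.0")
ET.SubElement(zapi_request, "volume-get-iter")
zapi_xml = ET.tostring(zapi_request, encoding="unicode")

# REST style: the query is just a path plus JSON, trivially serializable.
rest_request = {
    "method": "GET",
    "path": "/api/storage/volumes",
    "params": {"fields": "name,space.size"},
}
rest_json = json.dumps(rest_request)

print(zapi_xml)
print(rest_json)
```

The JSON form round-trips through standard serializers with no schema plumbing, which is essentially the reviewer's complaint about the XML-based interface.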
One of the biggest things that would really help is if the EMS were event-driven, something like AMQP. That would be really nice, so I could actually see when things are being created instead of assuming things are created based on API calls.
It's very stable. I don't think I've noticed any problems with it at all. It's one of those things you don't really think about until you run into a problem. I haven't run into a problem, so it's actually very stable.
It's scalable. As far as NetApp products go, in general, they're very scalable.
I haven't used it directly. We have residents, so usually, if I need support, I go to the resident or one of the Professional Services guys who works with us. But the support they provide is excellent.
I'm generally not involved in initial setup. Usually where I get involved is after it's gone through RDS, and I do the automation orchestration as far as our customers' provisioning and billing, etc.
For the most part, our use case is databases. We use AFF for both block storage and file storage; we've got arrays for both. We've got a very mixed NetApp setup: some systems are pure AFF, some are FAS systems with Flash Pools and the like.
I've always been a fan of NetApp. I've dealt with other vendors, but I like NetApp because when we need support, they're usually there; they show up, whereas other vendors don't quite do that. As far as AFF specifically, it's just another good product that NetApp has put out. We're definitely more likely to consider NetApp for mission-critical storage systems in the future, based on our experience with AFF, due to the very good support that NetApp provides.
When selecting a vendor to work with, the most important criteria for me are
I would pretty much tell colleagues to go with NetApp because of the support. When something goes wrong, that's usually the most important thing to me: how do I get support? NetApp's always delivered on the support side.
One example: we're moving a legacy application over. I'm actually in the middle of a project for that right now, involving four Windows servers, each with eight terabytes, that our actuary department uses for data analytics. With the efficiencies on the AFF, that eight terabytes has gone down to about two and a quarter terabytes of actual capacity used. So we're going to save a lot of space there, in addition to letting them run more simulations, and get them done more quickly, because the storage is so much faster than what they're on now.
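As a quick back-of-the-envelope check of those figures, assuming roughly 8 TB of logical data reduced to about 2.25 TB of physical capacity per server:

```python
# Sanity-check the reviewer's efficiency numbers: 8 TB logical stored
# in ~2.25 TB physical capacity after dedupe/compression.
logical_tb = 8.0
physical_tb = 2.25

ratio = logical_tb / physical_tb                     # efficiency ratio
savings_pct = (1 - physical_tb / logical_tb) * 100   # space saved

print(f"{ratio:.2f}:1 data reduction, {savings_pct:.0f}% capacity saved")
# -> 3.56:1 data reduction, 72% capacity saved
```

That roughly 3.6:1 result is in the same range as the 1:4 reduction another reviewer above reported for MS SQL databases.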
Some of the best things about AFF are that it integrates seamlessly with what we're used to for FAS as well. We can use the same ecosystem, OnCommand Unified Manager, but get the performance, the raw performance of flash. It's great that way.
I think that's the most important thing, the integration with the existing features that we already have and existing management systems. Among those features are the ability to do SnapMirror or SnapVault for data resiliency and backup. The other features are the data efficiencies, compaction and inline dedup compression, that let us use it more efficiently too. Those are huge on the list.
Looking at the road map that's out there, I think they're heading in the right direction. Additional performance, additional data efficiencies, that's what everybody wants right now.
And then the integrations that I'm really excited about - and part of the reason I'm here at the NetApp Insight 2017 conference - is to look at the integrations with AFF and things like StorageGrid Webscale. So you're getting even more efficiency out of the platform and offloading cold blocks that you don't need right away.
We haven't had any issues, even going back to the longer experience I have with the FAS platform. They're typically few and far between, especially compared to some of the other vendors we've worked with. When we do uncover an issue, we typically get escalated to the right teams and get it worked out.
It's really good. There are some things that could be done better there, and NetApp is addressing them in its other products, like Webscale and SolidFire. As long as you're aware of the design considerations, it's very, very easy. Shelves go in in a snap. As long as you make sure you have the proper compute to go with it, you're good to go.
We're not really having scalability issues, it's just you have to make sure that you're not exceeding the capacity of your heads when you're expanding your logical storage out, that's all.
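The point above, about not exceeding the capacity of your heads as you expand logical storage, can be sketched as a simple pre-expansion sanity check. This is a hypothetical illustration only; the function name, metrics, and thresholds are assumptions for the sketch, not a NetApp API:

```python
# Hypothetical sanity check: before adding shelves, verify the controller
# (head) pair still has performance headroom. Thresholds are illustrative.
def has_headroom(cpu_util_pct: float, iops_used: int, iops_rated: int,
                 max_cpu_pct: float = 70.0, max_iops_frac: float = 0.8) -> bool:
    """Return True if the HA pair can likely absorb more logical capacity."""
    return cpu_util_pct < max_cpu_pct and iops_used < iops_rated * max_iops_frac

# Example checks with made-up utilization numbers:
print(has_headroom(cpu_util_pct=55.0, iops_used=120_000, iops_rated=200_000))  # True
print(has_headroom(cpu_util_pct=85.0, iops_used=120_000, iops_rated=200_000))  # False
```

The idea is simply that shelf expansion grows capacity, not controller performance, so both dimensions need to be checked before scaling out.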
It has caused problems for my company in the past, but I think that was the result of not having storage administrators with a high level of proficiency and knowledge of NetApp. They made some very poor sizing decisions, but you can't blame the vendor for that. It's more the admins' fault for not speccing them out properly.
For the AFFs, I don't know if we've had to specifically leverage NetApp support yet. I don't think we've had an issue major enough that we've had to reach out. That's been more on the FAS side.
Support has generally been pretty good. Occasionally there are struggles getting to the right people but, once you do, they know what they're talking about.
Yes and no. We're in the process of retiring some old storage frames, old Hitachi frames actually. There are three different Hitachi frames and they're different: one is all-flash, one is hybrid, and the other is purely disk-based. So there's a mix. We have another all-flash platform that we could move workload to, but in my opinion the NetApp fit this workload a lot better. So it made sense.
The original intent was actually to extend our NAS; we primarily use NetApp for NAS across a lot of our environment. But we've pitched the AFF that we just installed, the A700, primarily as a SAN platform. So we're really trying to leverage it more towards that now.
It will eventually be used for both block and file storage. It was originally slated for file (NAS) usage, but we're leveraging it more for block.
I had worked with NetApp as block storage in the past, and I always had a high opinion of it. I think NetApp is the best in the industry at providing a unified platform for file and block. Hands down.
We don't get too deeply involved in the cost analysis, but management and engineering rely heavily on the input from myself and my co-worker on the storage team, for these kinds of decisions, on a technical level.
We had Pro Services, but we were heavily involved.
For someone who is experienced with any NetApp platform, it's very, very straightforward, very similar to anything else you would do. Obviously there are some guides specific to AFF. You want to make sure you're following those best practices, but other than that it's a cinch. It's something I could have done on my own without Professional Services; that's how easy it was.
We have storage frames from most of the large vendors, so EMC would have been on the table, IBM would have been on the table, Hitachi. And really with the ecosystem that NetApp has built up around it, it just makes the most sense from a management perspective for sure. And the performance and value for money is there as well. It's a tough combo to beat.
We have an 8080 EX HA pair, an 8040 HA pair, and an A700, all in the same cluster. That's our production cluster. We also run an AFF8040 for non-production, and then a couple of other FAS heads: two HA pairs of 8040s for DR. So we've got some NetApp spread around.
Based on our experience with AFF, we are definitely more likely to consider NetApp for mission critical storage systems in the future because it's the same quality and the same value for money as we have always come to expect from them.
This is the direction the industry is going. My personal opinion is that 15K and 10K SAS is going to be completely dead within the next three to five years. Everything is going to be flash for performance, and cheap-and-deep SATA, probably object storage, for archival. I just think this purchase puts us in better alignment with where the industry is headed as a whole; it's more future-proof.
When it comes to the most important criteria when selecting a vendor to work with, I think what's important is performance and value for money and, in addition to that, having support that's easy to work with and can get you the answers quickly when you need them. That is the other big thing.
I give it a nine out of 10 because there's always room for improvement. I don't think anything is perfect in IT, but it's pretty darn good. It's really pretty impressive technology when you get it running.
What would make it a 10 goes back to what we talked about above, with the additional integrations and single panes of glass and getting a whole functional flow; what NetApp keeps pitching on the roadmap as the "Data Fabric," getting a single pane of glass for everything in your infrastructure and tying it all together.
Advice as far as choosing a solution? Everybody's requirements are different, but if they don't have NetApp at the top of the list as candidates, they're doing something wrong.
It solves the performance issues of the past.
The primary use case for my customers is enterprise vSphere workloads or Oracle workloads. We have customers using it for both block and file storage.
This is not directly specific to AFF, but I like that clustered Data ONTAP allows having a mix of all-flash HA pairs and hybrid arrays in the same cluster. This allows for a somewhat tiered approach to storage. So, that is cool.
I am excited to see how the data fabric story plays out across the entire NetApp portfolio, with that connectivity between all the different devices. I know in the beginning, when it was first spoken about, SnapMirror was one of the things talked about. I liked the idea of having the ability to transfer data between different NetApp platforms, and that would obviously include the All Flash line.
Clustered Data ONTAP as an operating system is very stable and very mature. With 9.2, we like that there is inline deduplication at the aggregate level. That is a welcome addition.
Since we are talking 24 nodes for NAS, that is really good. I forget what the scale number is for block on clustered Data ONTAP, but I have not run into any opportunities where we had to go beyond what we had.
When you are looking at NetApp as a scale-out NAS player, they have been in the SMB space with FAS for a long time, and they have done it well. They have done multi-protocol access, NFS to NTFS access and the reverse, really well. They have the ability to have a cluster made up of different kinds of disks, which has been useful. Also, as a unified box, it is like the Swiss Army knife of unified boxes.
It is the flexibility of configuration. It is optimized for flash, so we do not have to manage the configuration of what optimizes flash, but we do have the flexibility to configure what optimizes our environment.
It has improved our applications' overall performance, and it has simplified our management of it.
We use it for all of our VMware infrastructure as well as for our X-ray data storage, for the short-term storage. We use both block and file storage.
Now, we can manage failed disks in our SAN before we replace them or manage how quickly they are replaced. All these kind of decisions, we can make. This flexibility is critical to having a comfort level with our environment.
Being able to move SVMs from one cluster to another.
We have had two issues.
Overall, the stability has been pretty amazing.
Scalability is excellent. There has never been a question as to whether it could scale out. It has been more a question of, "Do we have the finances to be able to do it?"
They have always been good about being responsive. I love the auto support. The people that we get on the phone are usually pretty knowledgeable, and if they are not and they don't know what to do, then they hand it off to somebody who does.
We also have Pure Storage.
It was pretty straightforward.
We did have a rep on site as well that helped us with the installation. We have used it as part of a cluster to connect with other methods.
NetApp does a good job of being able to provide a lot of options for its customers and supporting those options with information. Even before AFF, we always used NetApp for mission critical stuff.
It offers everything we need.
If you are considering this solution, ensure you do the research and know what you are actually getting. Also, make sure you know what your needs are before you start doing that research.
Flexibility is one of the big things for us. We're constantly doing new projects or taking new directions in IT, because it obviously changes all the time. NetApp has been great working with us, being flexible when we have to do migrations or want new solutions, without taking down any of the applications on our current systems. That has been a good benefit. And they've grown over the years to get better at that.
For us, it's probably along the lines of keeping everything critical up and running, 24/7. DR has been a big push for us over the past couple of years with this environment. Different things happen, and you need to keep all of your critical systems up and running. All the new technologies that NetApp has come up with to help us do that have probably been the biggest benefit for us. The flexibility, and being able to change on the move.
Some of the applications have changed over the years. Their complexity was there before, but moving forward we've seen a few features that we had grown to love being taken away from some of those applications. But that happens with any type of software. You get settled, you like a feature, and change comes along. It can be a little bit difficult to deal with.