We primarily use it for storage for VMs and backup units.
We use this solution on a daily basis. In Sweden, typically small to medium-sized companies use this solution.
MetroCluster, SnapMirror functionality, and ease of use are the most valuable features for us.
Their backup software could be improved.
In the next release, I would like to see complete S3 protocol support, as well as better compatibility and integration with VMware.
I have been using AFF since its release.
Nowadays, AFF is very scalable — ever since they implemented Cluster Mode. I think it's very easy to scale, both up and out. It's also very stable.
They provide different types of support. When an incident happens that impacts your business, they respond very fast and provide very good help. Sometimes, when you have problems with their software, it can take a long time — that should be improved. Overall, their top functions, the operating system, and the storage controllers are very strong.
The initial setup is very simple. How much time it takes depends on the size and what the initial setup should be. It can be a long process.
We do everything from the initial setup to the integration with system backups, the whole chain, including the hardware, the software, the daily work, and the daily administration.
It depends on how you look at things, but they are in a higher price range.
They have different license models. You can get a license model where everything is included, but you can also purchase more licensing and buy what you need. It really depends on what you buy.
I would absolutely recommend this solution to other companies.
Our primary use case for AFF is to host our internal file shares for all of our company's "F" drives, which is what we call them. All of our CIFS and NFS are hosted on our AFF system right now.
We've been using AFF for file shares for about 14 years now. So it's hard for me to remember how things were before we had it. For the Windows drives, they switched over before I started with the company, so it's hard for me to remember before that. But for the NFS, I do remember that things were going down all the time and clusters had to be managed like they were very fragile children ready to fall over and break. All of that disappeared the moment we moved to ONTAP. Later on, when we got into the AFF realm, all of a sudden performance problems just vanished because everything was on flash at that point.
Since we've been growing up with AFF, through the 7-Mode to Cluster Mode transition, and the AFF transition, it feels like a very organic growth that has been keeping up with our needs. So it's not like a change. It's been more, "Hey, this is moving in the direction we need to move." And it's always there for us, or close to being always there for us.
One of the ways that we leverage data now, that we wouldn't have been able to do before — and we're talking simple file shares. One of the things we couldn't do before AFF was really search those things in a reasonable timeframe. We had all this unstructured data out there. We had all these things to search for and see: Do we already have this? Do we have things sitting out there that we should have or that we shouldn't have? And we can do those searches in a reasonable timeframe now, whereas before, it was just so long that it wasn't even worth bothering.
AFF thin provisioning allows us to survive. Every volume we have is over-provisioned and we use thin provisioning for everything. Things need to see they have a lot of space, sometimes, to function well, from the file servers to VMware shares to our database applications spitting stuff out to NFS. They need to see that they have space even if they're not going to use it. Especially with AFF, because there's a lot of deduplication and compression behind the scenes, that saves us a lot of space and lets us "lie" to our consumers and say, "Hey, you've got all this space. Trust us. It's all there for you." We don't have to actually buy it until later, and that makes it function at all. We wouldn't even be able to do what we do without thin provisioning.
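To make that over-provisioning concrete, here is a minimal, hypothetical sketch of creating a thin-provisioned volume through the ONTAP REST API. The host, credentials, SVM, aggregate, and volume names are placeholders, and field availability can vary by ONTAP version, so treat it as an illustration rather than a drop-in script.

    import requests

    ONTAP_HOST = "cluster1.example.com"   # placeholder cluster management address
    AUTH = ("admin", "password")          # placeholder credentials

    volume = {
        "name": "projects_share",
        "svm": {"name": "svm_nas"},
        "aggregates": [{"name": "aggr1"}],
        "size": 10 * 1024**4,              # advertise 10 TiB to consumers...
        "guarantee": {"type": "none"},     # ...but reserve nothing up front (thin provisioning)
        "nas": {"path": "/projects_share"},
    }

    resp = requests.post(
        f"https://{ONTAP_HOST}/api/storage/volumes",
        json=volume,
        auth=AUTH,
        verify=False,  # example only; validate certificates in production
    )
    resp.raise_for_status()
    print(resp.json())

Setting the space guarantee to none is what lets the volume present far more capacity to its consumers than is physically reserved behind it.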
AFF has definitely improved our response time. I don't have data for you — nothing that would be a good quote — but I do know that before AFF, we had complaints about response time on our file shares. After AFF, we don't. So it's mostly anecdotal, but it's pretty clear that going all-flash made a big difference in our organization.
AFF has probably reduced our data center costs. It's been so long since we considered anything other than it, so it's hard to say. I do know that doing some of the things that we do, without AFF, would certainly cost more because we'd have to buy more storage, to pull them off. So with AFF dedupe and compression, and the fact that it works so well on our files, I think it has saved us some money probably, at least ten to twenty percent versus just other solutions, if not way more.
The most valuable feature on AFF, for me as a user, is one of the most basic NetApp features, which is just:
A user comes to you and says, "I need more space."
"Okay, here, you have more space."
I don't have to move things around. I don't have to deal with other systems. It's just so nice.
Other things that have been really useful, of course, are the clustering features and being able to stay online during failovers and code upgrades; and just being able to seamlessly do all sorts of movement of data without having to disrupt end-users' ability to get to those files. And we can take advantage of new shelves, new hardware, upgrade in place. It's kind of magic when it comes to doing those sorts of things.
The simplicity of AFF with regards to data management and data protection — I actually split those two up. It's really easy to protect your data with AFF. You can set up SnapMirror in a matter of seconds and have all your data just shoot over to another data center super quickly.
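As a rough illustration of how quickly a SnapMirror relationship can be defined, here is a hedged sketch against the ONTAP REST API. It assumes the clusters and SVMs are already peered and that a data-protection (DP) destination volume exists; every host and path name is a placeholder.

    import requests

    DEST_CLUSTER = "dr-cluster.example.com"   # placeholder: SnapMirror is managed from the destination
    AUTH = ("admin", "password")              # placeholder credentials

    relationship = {
        "source": {"path": "svm_prod:projects_share"},        # source SVM:volume
        "destination": {"path": "svm_dr:projects_share_dst"}, # pre-created DP volume on the DR side
        "policy": {"name": "MirrorAllSnapshots"},             # built-in asynchronous mirror policy
    }

    resp = requests.post(
        f"https://{DEST_CLUSTER}/api/snapmirror/relationships",
        json=relationship,
        auth=AUTH,
        verify=False,  # example only; validate certificates in production
    )
    resp.raise_for_status()
    print("Relationship created; initialize it to start the baseline transfer.")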
But I find some issues with other administrators on my team when it comes to managing the data, because they either have to learn a CLI, which some of them really don't like to do, to really get into managing how volumes should be moved or editing permissions and things like that; or they go into a user interface, which is fine, it's web-based, but it's not the most intuitive interface as far as finding the things you need to do, especially when things get complicated. Some options just hide in there and you have to click a few levels deep before you can actually do what you need to do.
I think they're working on improving that with the latest versions of ONTAP, so we're kind of excited to see where that's going to go. But we haven't really tried that out yet.
One of the areas that the product can improve is definitely in the user interface. We don't use it for SAN, but we've looked at using it for SAN and the SAN workflows are really problematic for my admins, and they just don't like doing SAN provisioning on that app. That really needs to change if we're going to adopt it and actually consider it to be a strong competitor versus some of the other options out there.
As far as other areas, they're doing really great in the API realm. They're doing really great in the availability realm. They just announced the all-SAN product, so maybe we'll look at that for SAN.
But a lot of the improvements that I'd like to see around AFF go with the ancillary support side of things, like the support website. They're in the middle of rolling this out right now, so it's hard to criticize because next month they're going to have new stuff for me to look at. But tracking bugs on there and staying in touch with support and those sorts of things need a little bit of cleanup and improvement. Getting to your downloads and your support articles, that's always a challenge with any vendor.
I would like to see ONTAP improve their interfaces; like I said, the web one, but also the CLI. That could be a much more powerful interface for users to do a lot of scripting right in the CLI without needing third-party tools, without necessarily needing Ansible or any of those configuration management options. If they pumped up the CLI by default, users could see that NetApp has got us covered all right here in one interface.
That said, they're doing a lot of work on integrations with other tools like Ansible and I think that might be an okay way to go. We're just not really there yet.
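For admins who would rather not live in the CLI or maintain Ansible playbooks yet, the same kind of scripting can be sketched directly against the ONTAP REST API. The example below is a hypothetical report of volumes over 80 percent full; the host, credentials, and threshold are placeholders, not anything from the review.

    import requests

    ONTAP_HOST = "cluster1.example.com"   # placeholder
    AUTH = ("admin", "password")          # placeholder

    resp = requests.get(
        f"https://{ONTAP_HOST}/api/storage/volumes",
        params={"fields": "name,svm.name,space.size,space.used"},
        auth=AUTH,
        verify=False,  # example only; validate certificates in production
    )
    resp.raise_for_status()

    # Flag volumes that are more than 80% full.
    for vol in resp.json().get("records", []):
        space = vol.get("space", {})
        size = space.get("size", 0)
        used = space.get("used", 0)
        pct = 100.0 * used / size if size else 0.0
        if pct > 80:
            print(f"{vol['svm']['name']}:{vol['name']} is {pct:.0f}% full")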
We've been using AFF for file shares for about 14 years now.
The stability of AFF has actually been great. This is one of the areas where it has improved over time. During the Cluster Mode transition, there were some rocky periods here and there. Nothing serious, but you'd do a code upgrade and: "Oh, this node is being a little cranky." As they've moved to their newer, more frequent, deployment model of every six months, and focused more on delivering a focused release during that six months — instead of throwing in a bunch of features and some of them causing instability — the stability of upgrades and staying up has just improved dramatically. It's to the point where I'm actually taking new releases within a month of them coming out, whereas on other platforms that we have, we're scared to go within three months of them coming out.
Scalability on AFF is an interesting thing. We use CIFS and that doesn't scale well as a protocol. AFF does its darndest to get us up there. We've found that once we got into the right lineup of array, like the AFF A700 series, or thereabouts, that was when we had what we needed for our workloads at our site. But I would say that the mid-range stuff was not really doing it for us, and our partners were hesitant to push us to the enterprise tier when they should have. So for a while, we thought NetApp just couldn't do it, but it was really just that our partners were scared of sticker-shock with us. Right now we've been finding AFF for CIFS is doing everything we need. If we start leveraging it for SAN I could have something to say on that, but we don't.
Don't be scared. They're a great partner. They've got a lot of options for you. They've got a lot of tools for you. Just don't be scared to look for them. You might need to do a little bit of digging; you might need to learn how the CLI works. But once you do, it's an extremely powerful thing and you can do a lot of stuff with it. It is amazing how much easier it is to manage things like file shares with a NetApp versus a traditional Windows system. It is life-changing if you are an admin who has to do it the old-fashioned way and then you come over here and see the new way. It frees you up from most of that so you can focus on doing all the other work with the boring tools that don't work as well. NetApp is just taking care of its stuff. So spend the time, learn the CLI, learn the interfaces, learn where the tools are. Don't be afraid to ask for support. They're going to stand with you. They're going to be giving you a product that you can build on top of.
And come out to NetApp Insight, because it's a good conference and they've got lots of stuff for you to learn here.
NetApp certainly has options to unify data services across NAS, on-premises, and the cloud. But we are not taking advantage of them currently.
I'm going to give it a nine out of ten. Obviously you've heard my story. It's meeting all our needs everywhere, but the one last piece that's missing for me is some of those interface things and some of the SAN challenges for us that would let us use it as a true hybrid platform in our infrastructure. Because right now, we see it as CIFS-only and NAS-only. I would really like to see the dream of true hybrid storage on this platform come home to roost for us. We're kind of a special snowflake in that area. The things we want to do all on one array, you're not meant to. But if we ever got there, it would be a ten.
The primary use case of this solution is for our production storage array.
We have not used this solution for artificial intelligence or machine learning applications as of yet. This product has reduced our total latency, going from spinning disks to flash disks. We rarely see any latency, and when we do, it is not the disks; it's the network. The overall latency right now is about two milliseconds or less.
AFF hasn't enabled us to relocate resources, or employees that we were previously using for storage operations.
It has improved application response time. With latency, we had applications that had thirty to forty milliseconds latency, now they have dropped to approximately one to three, a maximum of five milliseconds. It's a huge improvement.
We use both technologies and we have simplified it. We are trying to shift away from the SAN because it is not as easy to fail over to the opposite data center.
We are trying to switch over to having everything one hundred percent NFS. Once the switch to NFS is complete, our cutover time will be one hour versus six.
The most valuable features are the FlexClone and SnapMirror. The ease of use, the SnapMirror capabilities, the cloning, and the efficiencies are all good features.
This solution makes data protection and data management extremely easy.
With data protection, there is nothing easier than setting up SnapMirror, getting the data across, and protecting it. Currently, we have a five-minute RPO, so every five minutes we're snapping across to the other side without any issues.
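A hedged sketch of how that five-minute RPO could be watched programmatically: the ONTAP REST API reports a lag_time per SnapMirror relationship as an ISO 8601 duration, so a small script can flag anything that drifts past the target. The host and credentials are placeholders, and the field names should be checked against your ONTAP version.

    import re
    import requests

    DEST_CLUSTER = "dr-cluster.example.com"   # placeholder
    AUTH = ("admin", "password")              # placeholder
    RPO_SECONDS = 5 * 60

    def iso_duration_to_seconds(duration: str) -> int:
        """Parse a simple ISO 8601 duration such as 'PT4M32S' into seconds."""
        match = re.fullmatch(r"P(?:(\d+)D)?T?(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", duration or "")
        if not match:
            return 0
        days, hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
        return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

    resp = requests.get(
        f"https://{DEST_CLUSTER}/api/snapmirror/relationships",
        params={"fields": "source.path,destination.path,lag_time"},
        auth=AUTH,
        verify=False,  # example only; validate certificates in production
    )
    resp.raise_for_status()

    for rel in resp.json().get("records", []):
        lag = iso_duration_to_seconds(rel.get("lag_time", ""))
        status = "OK" if lag <= RPO_SECONDS else "RPO MISSED"
        print(f"{rel['source']['path']} -> {rel['destination']['path']}: lag {lag}s [{status}]")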
This solution simplifies IT operations by unifying data services across SAN and NAS environments.
There are little things that need improvement. For example, if you are setting up a SnapMirror through the GUI, you are forced to change the destination name of the volume, and we like to keep the volume names the same.
When you have SVM DR and you have multiple aggregates that you're writing the data to on the source array, when it does its SVM DR replication, it will put the data on whatever aggregate it wants, instead of keeping the layout synced on both sides.
This solution hasn't helped us leverage the data in ways that I didn't think were possible before.
We are not using it any differently than we were using it many years ago; we were already getting the benefits. What we are seeing right now is the speed, lower latency, and performance: all of the great things that we didn't have in years past.
This solution hasn't freed us from worrying about usage. We are already reaching the eighty percent mark, so we are worried about usage, which is why we are looking toward the cloud, moving to FabricPool with cloud volumes to tier off our snapshots into the cloud.
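For context on that plan, here is a minimal, hypothetical sketch of pointing a volume's snapshot blocks at a cloud tier by setting a snapshot-only tiering policy via the ONTAP REST API. It assumes the volume already sits on a FabricPool-enabled aggregate with a cloud tier attached; the host, credentials, volume name, and field names are placeholders to be checked against your ONTAP release.

    import requests

    ONTAP_HOST = "cluster1.example.com"   # placeholder
    AUTH = ("admin", "password")          # placeholder

    # Look up the volume's UUID by name.
    lookup = requests.get(
        f"https://{ONTAP_HOST}/api/storage/volumes",
        params={"name": "projects_share", "fields": "uuid"},
        auth=AUTH,
        verify=False,  # example only; validate certificates in production
    )
    lookup.raise_for_status()
    uuid = lookup.json()["records"][0]["uuid"]

    # Ask ONTAP to tier cold snapshot blocks off to the capacity (cloud) tier.
    patch = requests.patch(
        f"https://{ONTAP_HOST}/api/storage/volumes/{uuid}",
        json={"tiering": {"policy": "snapshot-only"}},
        auth=AUTH,
        verify=False,
    )
    patch.raise_for_status()
    print("Tiering policy updated:", patch.status_code)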
I wish the requirement to change the destination volume name would be removed, or at least made optional; then I wouldn't have to go to the command line to work around it at all.
This solution is stable, it's the best. I can't complain.
We move large amounts of data from one data center to another every day without any interruptions. In terms of IT operations, it has cut our ticket count down significantly, approximately a seventy percent reduction in tickets submitted to us.
This solution is scalable, it's phenomenal.
This solution's thin provisioning has allowed us to add new applications without having to purchase additional storage. The thin provisioning has helped us with deduplication, maintaining compaction, and efficiency levels. Without the provisioning, we wouldn't be able to take advantage of all of the great features.
We are running approximately a petabyte of storage physically, and logically approximately ten petabytes.
The technical support is one of the best.
Previously, we had not used another solution. We have been using NetApp for years; we went through a refresh approximately two years ago, from a 6240 to the A300 All-Flash.
The initial setup was straightforward.
We filled out a spreadsheet ahead of time that contained everything necessary to get us going. When it came time for the deployment we went with the information on the spreadsheet and deployed it successfully.
We used an integrator to help us with this solution, we used Sigma Solutions, and our experience was excellent. We worked hand in hand with them.
It's expensive. It's in the hundreds of thousands.
It's beneficial, but at times, I feel compared to other vendors, we are paying a premium for the licensing that other vendors include.
You're locked in with NetApp, and you already have everything set up.
We have not evaluated other solutions; it's not worth it.
We are not at the point where we are allowed to automatically tier data to the cloud, but we are looking forward to it.
I can't see that this solution needs any other features beyond what it already has. Everything that I need is already there, except for the cloud, and even that is there, but we haven't taken advantage of it yet.
I would advise that you compare everything and put money aside, really take a look at the features and how they will or can benefit you.
It's a total win for your firm.
I would rate this solution a ten out of ten.
NetApp AFF is used to store all of our data.
We're a full Epic shop, and we're running Epic on all of our AFFs. We also run Caché, Clarity Business Objects, and we love the SnapMirror technologies.
Prior to bringing in NetApp, we would do a lot of Commvault backups. We utilize Commvault, so we were just backing up the data that way, and recovering that way. Utilizing Snapshots and SnapMirror allows us to recover a lot faster. We use it on a daily basis to recover end-users' files that have been deleted. It's a great tool for that.
We use Workflow Automation. Latency is great on our writes, although we do find that with AFF systems, and it may just be what we're doing with them, the read latency is a little bit higher than we would expect from SSDs.
With regard to the simplicity of data protection and data management, it's great. SnapMirror is a breeze to set up and utilize, and SnapVault is the same way.
NetApp absolutely simplifies our IT operations by unifying data services.
The thin provisioning is great, and we have used it in lieu of purchasing additional storage. Talking about the storage efficiencies that we're getting, on VMware for instance, we are getting seven to one on some volumes, which is great.
NetApp has allowed us to move large amounts of data between data centers. We are migrating our data center from on-premises to a hosted data center, so we're utilizing this functionality all the time to move loads of data from one center to another. It has been a great tool for that.
Our application response time has absolutely improved. In terms of latency, before when we were running Epic Caché, the latency on our FAS was ten to fifteen milliseconds. Now, running off of the AFFs, we have perhaps one or two milliseconds, so it has greatly improved.
Whether our data center costs are reduced remains to be seen. We've always been told that solid-state is supposed to be cheaper and go down in price, but we haven't been able to see that at all. It's disappointing.
The most valuable features of this solution are SnapMirror and SnapVault. We are using SnapMirror in both of our data centers, and we're protecting our data with that. It is very easy to do. We are just beginning to utilize SnapVault.
We are using the ONTAP operating system, which allows us to get a lot more out of our AFF systems. It allows us to do storage tiering, which we love. You can also use the storage efficiencies to get a lot more data on the same platform.
The read latency is higher than we would expect from SSDs.
The quality of technical support has dwindled over time and needs to be improved.
This is a stable solution. We are running an eight-node cluster and the high availability, knowing that a node can go down and still be able to run the business, is great.
We do not worry about data loss. With Clustered Data ONTAP, we're able to have a NetApp Filer fail, and there is no concern with data loss. We're also using SnapMirror and SnapVault technology to protect our data, so we really don't have to worry.
Scalability is pretty easy. We've done multiple head swaps in our environment to swap out the old with the new. It's awesome for that purpose.
My experience with technical support, as of late, is that the amount of expertise and what we're getting out of support has dwindled a little bit. You can tell the engineers that we talk to aren't as prepared or don't have the knowledge that they used to. We have a lot of difficulty with support.
The fact that NetApp is trying to automate support with Elio is pretty bad, to be honest. In my experience, it just makes getting hold of NetApp support that much more difficult. Going through the Elio questions never helps, so we end up wasting minutes just clicking next and next; let's just open a support case already. So it's been going downhill.
Prior to this solution, we were running a NetApp 7-Mode implementation with twenty-four filers.
We went from twenty-four 7-Mode filers to an eight-node cluster, so we've done a huge migration to cDOT. With the 7-Mode transition tool, it was a breeze.
We use consultants to assist us with this solution. We do hire Professional Services with NetApp to do some implementations. The technicians that we have been getting on-site for those engagements have been dwindling in quality, just like the technical support. A lot of the techs that we used to get really knew a lot about the product and were able to answer a lot of our technical questions for deployment. One of the techs that we had recently does not know anything about the product. He knows how to deploy it but doesn't know enough to be able to answer some of the technical questions that we'd like to have answered before we deploy a product.
We are looking at implementing SnapCenter, which gives us one pane of glass to utilize snapshots in different ways, especially to protect our databases.
I used to work on EMC, and particularly, the VNX product. They had storage tiering then, and when I came onboard to my new company, they ran 7-Mode and didn't have a lot of storage tiering. It was kind of interesting to see NetApp's transition to storage tiering, with cDOT, and I really liked that transition. So, my experience overall with NetApp has been great and the product is really great.
I think the marketing for some of the products that could really help us is kind of poor. We were recently looking at HCI, and we really didn't have a lot of information on it prior to deployment. It was just given to us, and a lot of the information about what it was and how it was going to help wasn't really there. I had to take a couple of Element OS classes in order to find out about the product and get that additional information, which better marketing of that product would have helped with a lot.
My advice to anybody who is researching this type of solution is to do your research. Do bake-offs, as we do between products, just to make sure that you are getting the best product for what you are trying to do.
I would rate this solution a nine out of ten.
We use this solution for in-house data.
The simplicity around data protection and data management is good, with the snapshots and being able to lock them down. We can conserve the data and our space, and then set the tiers through administration. It's very workable.
Our data staff is smaller than it was because it's easier to manage in one portal. We have moved several employees into different departments.
The IT operations have been simplified through the unification of data services because we have just one window where we can manage it all.
With regard to application response time, I can say that the speed increase is substantially noticeable, but I do not have any numbers. It is probably twice as fast as it was.
I know that the data center costs have been reduced because we have fewer people managing the data, but I do not know by how much.
This solution has lessened our concern about storage as a limiting factor. It comes down to the easy manageability, the deduplication, and the compaction. Our volumes aren't growing as fast as they were.
The most important features are the IOPS and the ease of the ONTAP manageability.
The deduplication process is performed in the cache before the data goes to storage, which means that we don't use as much storage.
The versatility of NetApp is what makes it really nice.
The certification classes are good, but they don't cover enough of the material, and the exams only test on what is covered in class. When I leave those classes, I only feel half-full. I have to do so much research and I'm trying to get the data for my tasks, and it's a little complicated at times.
The NetApp AFF is very stable and we haven't had any issues.
From what I can tell, this solution is very scalable.
The NetApp technical support is very good. They have the website and they have the forums where you can get questions answered. You can get a lot of things answered without even talking to anybody.
Prior to NetApp AFF, we were using an HPE Storage solution. It was a little more difficult to swap out the drives on the XP series. You have to shut down the drive and then wait for a prompt to remove it. It's a long process and if somebody pulls it out hot and puts another one in then you're going to have to do a complete rebuild. It is not as robust or stable when you are swapping parts.
NetApp is very easy to set up.
All of the solutions by different vendors have setup wizards but with NetApp, it walks you through the steps and it is easy. It has NAS, CIFS, NFS, and block, all at once. Building the lines and going through is done step-by-step. With other vendors like EMC, you have to get a separate filer. There are a lot more questions that have to be asked on the front end.
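As a rough sketch of that multi-protocol setup done programmatically rather than through the wizard, the ONTAP REST API can create an SVM with several protocols enabled in one call. The names and the enabled protocols below are placeholders, not anything from this review, and CIFS would additionally need Active Directory details.

    import requests

    ONTAP_HOST = "cluster1.example.com"   # placeholder
    AUTH = ("admin", "password")          # placeholder

    svm = {
        "name": "svm_mixed",
        "aggregates": [{"name": "aggr1"}],
        "nfs": {"enabled": True},      # NAS over NFS
        "iscsi": {"enabled": True},    # block over iSCSI
        # CIFS would also need a name and "ad_domain" credentials, omitted here.
    }

    resp = requests.post(
        f"https://{ONTAP_HOST}/api/svm/svms",
        json=svm,
        auth=AUTH,
        verify=False,  # example only; validate certificates in production
    )
    resp.raise_for_status()
    print("SVM creation submitted:", resp.json())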
NetApp also talks seamlessly with VMware, and most people are on VMware.
We performed the implementation.
Our shortlist of vendors included EMC, NetApp, and HPE, because we have relationships with all of them. Ultimately, NetApp gives us more versatility.
This is my favorite storage platform.
I would rate this solution a nine out of ten.
Our primary use for this solution is NFS and fiber channel mounts for VMware and Solaris.
Prior to deploying this product, we were having such severe latency issues that certain applications and certain services were becoming unavailable at times. Moving to the AFF completely obliterated all those issues that we were having.
With regard to the overall latency, NetApp AFF is almost immeasurably fast.
Data protection and data management features are simple to use with the web management interface.
We do not have any data on the cloud, but this solution definitely helps to simplify IT operations by unifying data that we have on-premises. We are using a mixture of mounting NFS, CIFS, and then using fiber channel, so data is available to multiple platforms with multiple connectivity paradigms.
The thin provisioning has allowed us to add new applications without having to purchase additional storage. The best example is our recent deployment of an entire server upgrade from Windows 2008 to Windows 2016. Had we not been using thin provisioning then we never would have had enough disk space to actually complete it without upgrading the hardware.
We're a pretty small team, so we have never had dedicated storage resources.
NetApp AFF has reduced our application response time. In some cases, our applications have gone from almost unusable to instantaneous response times.
Storage is always a limiting factor, simply because it's not unlimited. However, this solution has enabled us to present the option of less expensively adding more storage for very specific application uses, which we did not have before.
The most valuable feature is speed.
The price of NVMe storage is very expensive.
We haven't had a problem with stability since it has gone online.
We haven't needed to scale yet, but I can imagine that it would be seamless.
The NetApp technical support is outstanding.
Our previous NetApp system was a SAS and SATA spinning-disk solution that was reaching end-of-life, and we were overrunning it. We were ready for an upgrade and we stuck with NetApp because of the ease of cross-upgrading, as well as the performance.
The initial setup was fairly straightforward, in that we were doing this migration from an old NetApp to a new one. However, because of the problems with latency they were having on that, it got a little bit complicated because we had to shuffle things around a lot.
The technical support helped us out well with these issues, and in the grand scheme of things, it was a very straightforward migration.
We used a company called StorageHawk, and our experience was phenomenal.
Comparing this solution to others, it may seem expensive, but the price-to-performance ratio for NetApp is better. You get a lot more for the money.
We considered solutions by EMC, but they were very quickly ruled out.
I have experience with a previous version of NetApp from quite some time ago, and everything about the current version has improved.
NetApp AFF performs well, we haven't had any issues with it, and I suspect that it is going to be pretty easy to upgrade. It would be nice if the NVMe storage was less expensive, even though it's worth it.
I would rate this solution an eight out of ten.
Our primary use case for NetApp AFF is unstructured data. We set it up for high availability and minimum downtime.
This solution simplifies our IT operations by unifying data services across SAN and NAS environments. We are using it on the fiber channel side, as well as the iSCSI side, for both CIFS and NFS, so it is used across the entire infrastructure.
We have used NetApp AFF to move large amounts of data. We just recently did a migration using SnapMirror and SVM DR. We did have some scheduled downtime, but there was no unplanned disruption in service.
Even with this solution implemented, I still have to manage the storage side and the availability of it, so we still have to worry about it being a limiting factor.
The most valuable features are the flexibility and level of technical support.
This is a very reliable solution in terms of keeping the system online.
This solution should be made easier to deploy. A lot of systems nowadays just come with a box where everything is included. With AFF, you have to manage it, you have to install ONTAP, and you have to configure the networking.
The stability is good. This is a very reliable solution.
It can be set up as an HA cluster, and when one node goes down the others hold the data, so the customer barely notices that there has been a failover.
I would rate the scalability an eight or nine out of ten.
We can grow this solution very easily, just by adding storage. All we need to do is buy a shelf and expand the storage side of it.
I would rate the customer support an eight out of ten. They are really good in terms of responding to the customer.
We have a large amount of unstructured data, so we felt that AFF was the right solution for us.
In terms of complexity, the initial setup is somewhere in the middle. It is not straightforward where you can run it out of the box. You have to set it up and configure the network.
We had a jumpstart, but I can handle the installation on my own.
We have not seen ROI so far.
We did consider using other vendors, but NetApp AFF was the best in terms of reliability.
In order to automatically tier cold data to the cloud, you would have to use third-party software.
I would rate this solution a seven out of ten.
We did it for consolidation of eight filer pairs. We needed the speed to make sure that it worked when we consolidated.
We do a lot of financial modeling. We have a large compute cluster that generates a lot of files. It is important for us to get a quick response back for any type of multimillion-file access across the cluster at one time, so it's a lot quicker to do that. We found that solid-state performs so much better than spinning drives, even over multiple clusters. It works.
It is helping us consolidate, save money, and increase access to millions of files at once.
It is very important in our environment for all the cluster nodes. We have 4,500 CPUs that are going through and accessing all the files, typically from the same volume. So, it is important for it to get served quickly so it doesn't introduce any delay in our processing time.
Solid-state drives are the most valuable feature. It has the speed now to do workloads. We're not bound by I/O from the drives. Also, we are just starting to hit the sweet point of the capacity of the solid-state drives versus spinning disk.
I would like there to be a way to break out the 40 gig ports on them. We have a lot of 10 gigs in our environment. It is a big challenge breaking out the 40 gig coming out of the filer. It would be nice to have good old 10 gig ports again, or a card that has just 10 gig ports on it.
Stability has been really good. It's been solid. We had a couple of problems when we first set it up because we set it up incorrectly. But we learned, we changed the settings, and things are working a lot better now.
We haven't had to scale it yet. We literally reduced 18 racks worth of equipment into two and still have room in those two racks to do additional shelves, expanding into that footprint. So, it's expandable and dense, which is great.
The process of consolidating into one AFF HA pair was easy. It was simply a matter of doing volume copies and SnapMirrors across the environment. It just migrated right over. It wasn't a problem at all.
It is reducing our data center costs. We consolidated eight HA pairs into one AFF HA pair.
We would like it to be free.
For our workload, it's doing what we need it to do.
I would rate the product a nine out of ten.
We do not use the solution for artificial intelligence or machine-learning applications right now.