The most valuable features are the deduplication and compression, along with NetApp's Snapshot technology.
I'm looking forward to the compaction feature after the code upgrade in a few months.
We had been looking for a flash solution that scales horizontally, along with a proven application integration stack. NetApp has been helpful and stable; it has enabled us to buy capacity as needed and to quickly refresh UAT/DEV environments when necessary.
The product still uses the concept of decoupled hardware with multiple HA pairs, where system resources like CPU and memory are bound to a single controller. This approach definitely helps keep the system resilient and stable, but it makes the environment a little complex for the end user, who has to decide where to place applications for the best performance. This is mitigated somewhat by the performance and automation tools they provide, but it may not be the most efficient approach in real time.
I have used it for one year.
There were no issues with stability.
Scalability in regards to capacity hasn't been an issue. The product really scales well.
With regard to performance, storage pools/aggregates are tied to a single node, so a storage device/LUN can only use CPU/memory of that particular node.
NetApp technical support has been excellent for years, and they are also improving, thanks to their deep software engineering skills and attention to customer reports.
We used to deploy other large storage vendors' products that didn't integrate well with the application stack. Automation and efficiency have been drivers in the company, which made us switch to NetApp.
Initial setup was straightforward.
Snapshot/FlexClone are the core licenses that I would recommend to others. Opt for a converged infrastructure like FlexPod, where the Cisco UCS server platform is involved.
We evaluated other large flash vendors including EMC and Pure. Every vendor has their own niche in the flash industry.
Decide your current and future requirements in terms of performance, capacity scaling, application (SQL/Oracle/SharePoint/Exchange/SAP) integration, storage efficiency (dedupe/compression), operational overhead, etc., and decide on a vendor based on it.
No vendor is perfect in every aspect, so choose the vendor based on your requirements and test them!
The most valuable features are that it's all flash and it's super fast. The only problem is, it's a little too fast in some situations. It's actually causing problems with our applications because it's too fast.
Other than that, it's great because it gives us enough IOPS to manage our whole system. Except for that one, it works great.
We're able to bring in a bunch of SANs together, into one solution, instead of having a bunch of separate ones. We had about two or three other ones we were using, and now we just use one.
It's as fast as it's going to be. The problem is the whole application somehow manages to eat up 450,000 IOPS, which is insane. It just has bursts of speed because it's programmed badly. We've been trying to fight with the vendor about that because that was originally why we went with the solution.
Other than that, I can't see any areas with room for improvement right now. I haven't used it for too long. It's only been a couple of months, because it's relatively new.
Stability hasn't been an issue at all. It's just been that one program, pretty much, lately.
Scalability hasn't come up yet. It's pretty nice because we're planning to expand on to an offsite location, as well, to have redundancy. Scalability seems pretty good.
We haven't yet needed to use NetApp technical support. We have gone with the vendor that sold us the NetApp. They've been helping us with it, when we have any questions. We haven't had to directly contact NetApp.
We were having performance issues with that specific application and we were trying to fix them. Then, once we moved, we came to the conclusion that it wasn't a speed problem; it was the application itself. So now, we're trying to get the vendor to fix it. The move was actually more proof of that for them.
In general, when I choose a vendor, the important criteria that I look for in a vendor are cost and performance. That's what it comes down to: Who has the best prices? The most bang for your buck.
Initial setup seemed pretty straightforward. The vendor pretty much took care of most of it, but it was more of the implementation of the VMware. That's what we were working on, or what I was working on, anyway. It was fairly simple.
I think we looked at EMC a little bit, but I think they were too expensive. They were out of our price range, and we wanted to go all flash. That's pretty much why we chose NetApp.
Make sure all your applications aren't the problem with what you're trying to fix. There really weren't that many problems with it. It just worked. It works like any other SAN really; it's just really fast.
There are probably more VMware-type issues that you might run into. I’d look into how to set up a lot of iSCSI connections if you have a lot of databases. Other than that, it wasn't so bad.
The important features are space savings, deduplication, compression and compaction. By enabling the deduplication, we save a lot of space, because we use it for VDA. We also see some performance improvement compared to the SAS spinning disks.
This solution gives us better throughput, better performance and better space-saving efficiency. These are the benefits the user group has seen.
They should really prove the performance numbers they show you. They provide some general performance numbers, but performance varies for every different customer site and different workloads. What they say it will do doesn't necessarily match what it does. But we have seen some difference in workloads other than the VDA. So they should say, “For this kind of workload, here are the performance statistics and for other workloads, it varies.” They should not simply say that these numbers apply to every situation. That should not be the case.
We assumed that the performance statistics they provide are applicable for everything and we purchased it. Then, we found that this is not a scalable solution. We did not get the performance we expected. They could provide a clear indication that the numbers they show are only for a particular type of workload. They could also improve the performance to match the numbers.
Stability is good. We have been using NetApp products for a very long time. We are the first customer for NetApp and we have been involved in various other FAS deployments. Stability-wise, it's gotten better.
Three years back, we deployed many customer systems; we have a big 24-node cluster. So scalability is very good.
For this particular deployment, we have only one HA pair. Currently, there is no requirement to grow from a scalability point of view. Our requirement is very small. In the future, we may think of adding additional HA pairs and we can grow that scalability; we can distribute it in the future.
The initial setup was straightforward. It was just like any other FAS system. Just install and enable some features for the AFF systems. It was not like a regular FAS system, but other than that, configuration is exactly same; simple and easy.
Initially, we approached multiple vendors for this kind of solution.
We have a NetApp on-site PSE and a systems manager – a NetApp group – sitting in our company. They suggested, “Why don't you explore this All Flash FAS for the VDA?” Then we evaluated the E560, a NetApp product, as well as AFF. We also evaluated other vendors such as XtremIO from Dell EMC.
Finally, for the simplicity and the flexibility, we thought of going with the AFF system.
This is a newer deployment. We used to use just the FAS system with the spinning HDD. We have changed it to all-flash.
You definitely should consider it.
One important factor for working with vendors is flexibility: the ease of using features like FlexClone, SnapMirror and the disaster recovery features. Beyond that, the support aspect is very important to us. So the storage unit itself was not the only thing we considered before deciding to go with this particular solution.
It's an all-flash array and it integrates with the NAS solutions we use; that's a key part. We were looking at the different arrays. For example, SolidFire doesn't integrate with the NAS. Our solution mainly focuses on the NAS part of it, so we were looking for a high-performance array. AFF is basically geared to those needs, apart from the base services which come with the NetApp product.
It has improved things in terms of latency; the performance issues we were having on spinning media will be gone. We can sell customers what they need; all customers.
I haven't thought much about additional features or improvements. We’ve only been using the product for a short period of time; the main part is that it integrates with the NAS solutions and all the backups, SMVI, we would like to do. We're happy as of now.
Maybe it's because of my current problems, or our customers, that I can’t really think of anything. Maybe our environment is not as challenging as others'. That could be a reason we're not looking for extra things.
An example of something that is lacking (not necessarily in the AFF as such, and something we might not have faced) is in the FAS series: on the FAS3200, we were told that if we got into an issue with the cluster interconnect, we would basically need to replace the motherboard. Sometimes even doing a failover and giveback wasn't possible. We had to do a forced takeover and giveback, and we basically corrupted a couple of databases; it went to that extent. Hopefully, those are not issues in AFF. We haven't faced that yet, but you never know until you actually use the product for a while.
Basically, they could do better in terms of software integration. There are a lot of features where, when we try to use them, or when NetApp tries to, we come across a lot of bugs, which can affect us as customers.
Bugs need to be addressed at a much earlier level. There could be more QA done at NetApp itself before they get it out as a product.
We have been using it for three months.
We have not been using AFF for a long time, but the FAS series has been stable. We had issues with the 3200 series, wherein motherboards needed to be replaced under certain conditions, which we didn't like. We had to take some hits on that. Otherwise, if we go to the higher-end arrays, they're very stable.
There haven’t been that many issues. We do not have a lot of performance issues or demands, so we haven’t had many issues, in terms of scalability or performance.
We have used technical support. Whether it's a hardware or software issue, we do use it. We use it through a partner, if not directly with NetApp. They're helpful. It’s generally been a good experience with technical support.
We were previously using the FAS series with spinning media.
One of the key factors in our decision to move to a new solution was that NetApp was marketing it very well. We were running five-year-old hardware and we were about to do a tech refresh on them. We looked at spinning media, FAS and the AFF solution. AFF was making some sense cost-wise and performance-wise, so that's why we went to AFF.
We used professional services from another vendor for the initial setup, so we didn't feel it was that difficult.
The training for AFF was not difficult; it wasn't complicated.
We looked at Tintri for the VM piece of it. Finally, we went to the AFF.
In general, when I’m choosing a vendor, I look at what kind of products or aspects of the product we are looking for, whether they satisfy that or not, as well as performance. Third but not least is the cost, as well as how much difference it is from our current NetApp solution because our staff needs to be trained on that.
It does integrate; if you know the FAS series platform and you know CDOT, it's not much different. Doing the implementation is not much different, either.
Determine which volumes need to go where; do that preparation from the customer’s perspective: how they want to use the product rather than how to deploy a product.
The most valuable features are the speed and the predictable performance. Compared to the spinning disk, I don't have to worry about IOPS anymore. I can rely on the IOPS being there. I can worry about CPU now. It's one less thing I have to worry about as far as performance.
The latency is very predictable and lower. It's very sustained, we know what it's going to be, and it doesn't get impacted by snapshots and so forth.
The AFF personality is what turns on the bit so that you can have an all-flash array, as opposed to a hybrid array. I'm having trouble in my environment buying systems for smaller sites, because I want the all-flash array and I want the speed. I could go hybrid and still use SSDs, but it makes the choice hard for me when I'm doing a lot of SnapMirrors and SnapVaults between sites.
I want the all-flash but I know I can't because I have to have SATA for the low-cost SnapMirror and SnapVault. It'd be nice if they would turn the switch on per aggregate, or maybe even per node, so that I could use it on some nodes. That way I wouldn't have to choose. Right now, I'm having a hard time choosing between hybrid or flash. I want the flash but I can't get it if I have to go hybrid.
I’m also looking forward to more CPU and power that's coming out in the AFF 700 and so on.
Other than that, so far, I'm pretty happy.
We had a stability issue. We got bit by a bug that was a compression problem, and we had to do a WAFL check. It was the first time we've ever had to do that only on the all-flash array.
The bug had already been identified, but nobody had hit it. We were the first one to hit it. The QA lab had found it. They should have notified all AFF customers before we hit it, because then we could have turned off compression and not hit it until the bug fix was released.
Technical support needs improvement. We need access to the backend people without having to go through two layers to get to them, because we're always above the two layers. It's a waste of our time to have to work through them.
We previously used a different solution, which was coming to the end of its lifecycle.
Initial setup was good. It's quicker, now that they've started sending out the pre-configured systems, or optimized systems.
There weren’t any other flash storage vendors on our shortlist. We were already in a four-year cycle with NetApp, so we just stuck with the same vendor.
In general, when I look at a vendor, the most important criteria is that they have our interests at heart and want to partner with us. Since we're a non-profit organization, we need them to understand what we're doing because we don't have a lot of money to throw around. They have to invest in our belief of what we're trying to do. Cost is part of it, but we still try to pick the technology over the cost, first.
We decided to use the All-Flash because of speed. When we looked at the SAP database, what we found was that, by using the All-Flash, we got almost 100% improvement on our jobs most of the time.
The best part about it is the density; otherwise, earlier, we used to use a lot of 300- or 600-GB disks. It saves space, saves power and makes us more efficient. The main thing is performance. If you can get the report done in half the time, it's good.
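As a rough illustration of the density point above, here is a back-of-the-envelope sketch. The 300/600-GB drive sizes come from the review; the 3.8-TB SSD size and the 2:1 dedupe and 1.5:1 compression ratios are assumptions for illustration, not measured figures:

```python
import math

def drives_needed(usable_tb, drive_tb, dedupe=1.0, compression=1.0):
    """Estimate drive count for a usable capacity, given efficiency ratios."""
    raw_tb = usable_tb / (dedupe * compression)  # raw capacity after savings
    return math.ceil(raw_tb / drive_tb)

# 100 TB usable on 600 GB spinning disks, no efficiency features:
print(drives_needed(100, 0.6))  # 167 drives
# Same 100 TB on assumed 3.8 TB SSDs with 2:1 dedupe and 1.5:1 compression:
print(drives_needed(100, 3.8, dedupe=2.0, compression=1.5))  # 9 drives
```

This ignores RAID and spare overhead; the point is simply that larger flash drives plus efficiency features can cut drive counts, and therefore space and power, by an order of magnitude.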
I would like to see the All-Flash FAS support virtualization better. I find it lacking in some areas: application integration and disaster recovery. I know we have to do a lot of setup and we need to know exactly what needs to be done, but I would expect NetApp to make those best practices available automatically. Why do they say, “Do this, do this,” when they could say instead, “For DR, click this button,” which would automatically implement the best procedure, rather than having you figure it out yourself? That should be automated.
There are several other improvements that can be done, especially with the clustering. I don't know why we had to make back-end decisions. With software-defined networking, most of the decisions can be made at the front end. Right now, how NetApp works is, you get the data to the head, take it to the back end to make a decision and then pump it back. I just want to eliminate the switch in the back to the cluster. Why not make those decisions? Maybe they need to do something on the software-defined networking; maybe have some module in the switch to make the decision at the front-end, distribute the workload for the clusters in the back. I really don't like having another switch in the back. You know your data comes from this network.
So far, we have not had any major stability issues because I look for stability, then performance; the product has to be stable first, then comes the performance.
My uptime is 99.99%. Other people say “All five nines,” but I say, “Hey, when the CFO or the CEO wants access and it's down, it doesn't matter what you're doing.”
Stability is very, very important. The first thing is stability, then performance. Performance is important because performance is everyday work. As they say nowadays, “IT infrastructure has to be like air. You don't look for air, right?” You just breathe it automatically. Storage has to exist all the time. That's the main criterion for stability.
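To put the uptime figures above in perspective, here is the simple arithmetic for what each availability level allows in downtime per year (assuming a 365-day year):

```python
def downtime_minutes_per_year(availability_pct):
    """Minutes of allowed downtime per year at a given availability percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a 365-day year
    return minutes_per_year * (1 - availability_pct / 100)

print(round(downtime_minutes_per_year(99.99), 1))   # "four nines": ~52.6 min/year
print(round(downtime_minutes_per_year(99.999), 1))  # "five nines": ~5.3 min/year
```

As the reviewer notes, the difference between four and five nines matters little if the outage happens exactly when the CFO or CEO needs access.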
So far, I don't know the exact size that we have. I know we can add more storage. We just procured some more disk shelves to add. I don't know the limits. I probably need to go check out how large we can be.
Also, we're trying to keep our environment separated. That way, there's no contamination. There are also regulations and other things we have to worry about. If we're putting everything in one box, putting all the eggs in one basket, we need to be really careful about stability, performance, and making changes.
If we want to scale out in the future, I think the system is capable. We should not have problems; I hope that will happen.
We might have used technical support a little bit but most of the time, it is working, so I don't think we made any calls. I don't think we are using it. We're paying for it but we're not using it much.
Our vendor was good, they did the initial setup; they helped through the setup. If you set it up right the first time, you probably don't have to mess with it a lot. If it is stable, there isn’t much else to do.
We were previously using NetApp with spinning drives, and we were also using some of the EMC DMX.
Now, we are using NetApp exclusively.
Initial setup was pretty easy. I think it only took maybe half a day to do everything; put it in, power it, connect all the cables, configure it. I think we put it in production within like half a day; not difficult.
We did run the eval and our PoC through other vendors, other storage suppliers.
There were two other flash players, and we finally ended up going with NetApp All-Flash, the reason being that the migration would be much easier. We added it to our existing cluster, so that we could do the migration whenever we were able to. We didn’t need a big downtime to migrate.
Also, when we buy other technology, we have to have people to manage it. We need to decide whether, “OK, do I need to use the current talent pool to migrate to All-Flash, or bring in a new player where we have to support both?” It adds to the cost.
When we are selecting a vendor to work with, we look at whether they want to work according to our interest or according to the vendor’s interest, because we need to make sure they can support us in the long run; that they are reliable; and that they have good people who know the product and have a good attitude working with customers. Most of the technical knowledge and other things, you can acquire, but attitude is important.
If you are a NetApp customer and considering a new technology, you need to look at the additional cost of doing things differently or administering another system. If you are completely moving from NetApp to a new vendor altogether, can they do everything? Transitioning from one storage platform to another takes a long time. At the end of the day, your servers and other components are transient; you can replace them at any time. But when it comes to storage, your storage is important.
If you give me the storage, I can do pretty much everything. If your data is available, you can figure out how to reroute it or do things with it, but if your data is not there, your servers and your network are useless. Everything is useless. I still see people invest a lot of money in networking. I say, “Look, if the storage is not available, you don't need the network; you don't need servers.” You need to look at your storage; it’s very critical. It has to be stable, perform well, and you need to be able to protect it. If those things are there, you can take the storage anywhere and make it work. If you don't have compute, Amazon EC2 can give you compute, Azure can give you compute, but you need to protect your storage.
It reduced the overall latency in our Citrix infrastructure. We have a pretty robust Citrix infrastructure. Before putting in the All Flash FAS, end users would see a lot of latency. That's been the biggest improvement, along with a lot of improvement in the overall performance of our SAN and a few other data intensive applications.
There was noticeable latency before the All Flash FAS. Since the All Flash FAS, it is extremely fast, no latency whatsoever.
I would basically just like to see improvements with the reporting; consolidating metrics, performance and any sort of issues. Right now, there are a lot of different tools, a lot of different places to go to see the overall health of the system. I would like one place, a dashboard, to see everything. I know there are some things that NetApp has released and are releasing, but we haven't gotten to the point where we've implemented those yet.
It is extremely stable; never had any down time or issues with it. It's fully redundant. All of the updates have pretty much been non-interruptive; it’s an extremely stable platform.
It scales out well. It’s a new All Flash FAS and we looked at the overall capacities that we needed before. It's only been in place for about six months. From a scalability perspective, we know that it will scale out if we need it to, but it's a new implementation, so no issues or anything like that.
So far, we haven't needed to use technical support for this one, yet.
We did not previously use a different solution. We were a NetApp shop before that, but we were using a different controller and we weren't an All Flash FAS shop. We could see the latency. We used all the utilities, so we could see what was going on, the need and how it would help our business.
Initial setup is generally straightforward, but NetApp has good technical articles and guidance on moving from one NetApp controller to another NetApp controller. It was pretty straightforward for the most part.
We looked at a number of other flash systems and solutions for our latency issues. At the end of the day, we just decided to continue and move forward with another NetApp controller.
Reliability and availability are the most important criteria for me when selecting a vendor to work with. We need them to be available. There are a lot of vendors out there that have a lot of people, but if you're building a reputation and you can't get the people you need, then it's a problem, regardless of how good the controller is.
It does what it’s meant to do; works extremely well in our environment. We have multiple data centers and the replication works really well. Overall, it's pretty easy to use.
Look at your individual company's needs. In general, look at your nice-to-haves and must-haves, and then weigh the options and see what works best. NetApp has been a great, established company. We've had a good relationship with NetApp for a long time and so we would recommend them to a colleague.
Backups are the most valuable feature, because our company has very intensive backups; we need it forever. They have to be fast, so we cannot keep them on tapes.
Actually, we are looking for better Oracle backups. In production, it takes about 24 hours to run the online backups. We decided to take the backups in the DR environment; currently, we do the backups in DR and do not back up production. We were looking for a solution from NetApp; it could be SnapCenter. We are looking at that.
That would make backing up faster. In the next six months, maybe, we plan to implement that.
For the last two years, we haven’t had a major outage; so far, it looks stable.
The cluster mode is really, really scalable. Before that, we used to have 7-mode. We are migrating everything from 7-mode to cluster mode, and we are seeing huge benefits in our company.
Before, we had a 7-mode cluster, and we were having CPU issues. We could not migrate a volume to another node without an outage. Now, we have something like six nodes. When we have a performance issue, we can just migrate the volume to a different node.
Technical support is 7/10. I’ve had good experiences and also bad experiences.
For example, we were in the middle of a performance issue and we called support. The support person took all the information and confirmed that he had received everything. He said he would analyze the logs and get back to us. After two days, they started asking for more logs – "Can you send me these logs? We didn't get them." – even though we had confirmation that they had received them. We lost two days. Then, we had to escalate it, and only then did we get a response. We had to be proactive on our end, too.
We previously used EMC products for backups; then we migrated our data to NetApp because of SnapDrive, which makes restores really easy. I am not comparing it to EMC, but we are happier with NetApp regarding backups. We see a big difference between NetApp and the EMC solution we were previously using. It's also multi-protocol. Right now, there might be many products offering that, but NetApp has been offering multi-protocol for years. We use NFS, CIFS, iSCSI and Fibre Channel, all in one, really. It's got everything in one solution.
Setting up cluster-mode, initially, close to two years ago, was a little bit difficult, but after I started using it and after I went for NetApp training, I now feel it's easier than 7-mode.
I haven't checked the new startup companies, but we compared NetApp with Oracle and EMC. NetApp costs a lot less than both EMC and Oracle. We looked at Exadata, and we ended up buying all-flash because it offered a better ROI. Exadata was not even all-flash, but it cost more than the all-flash.
We compared it to other vendors, and also with the return on investment we were expecting. This is cost efficient. We went to all the vendors to see how it would impact our IT budget.
We have been using it for a long time. As our storage increases, we keep on adding NetApps because we are happy with it.
I have been working with NetApp for something like 10 years, and I have worked for about a year with IBM and EMC. The choice depends on the company and the user. For some companies, NetApp might not be suitable for different reasons. For example, my previous company used fiber channel more.
Every company thinks that NetApp is a NAS solution, not a SAN solution. In that case, if they need a SAN solution, they think it has to come from a different company. My previous company thought the same way. However, we implemented some SAN on the NetApp side, and they're happy.
Hi, I'm a NetApp trainer and wanted to point out a new capability in ONTAP 9.1 regarding your Scalability/Improvement comments:
"With regard to performance, storage pools/aggregates are tied to a single node, so a storage device/LUN can only use CPU/memory of that particular node."
Since ONTAP 9.1, FlexGroups are GA. Check them out. They decouple FlexVol performance from nodes and aggregates/storage pools. See TR-4557 and TR-4571 for information and best practices.
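For readers who want to try this, a FlexGroup is provisioned with the regular `volume create` command, spread across multiple aggregates. The sketch below is based on the TRs mentioned above; the vserver, volume and aggregate names are placeholders, so check TR-4571 for the current syntax and sizing guidance:

```
volume create -vserver svm1 -volume fg1 \
    -aggr-list aggr1,aggr2,aggr3,aggr4 \
    -aggr-list-multiplier 4 -size 400TB
```

This creates one namespace whose member constituents are distributed across the listed aggregates, so reads and writes are serviced by all of those nodes rather than being bound to a single controller.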