Ariel Lindenfeld - PeerSpot reviewer
Director of Community at PeerSpot

When evaluating Enterprise Flash Array Storage, what aspect do you think is the most important to look for?

24 Answers
Terence Canaday - PeerSpot reviewer
Principal SE at Pure Storage
Jan 27, 2021

Flash changed the whole way we address storage in the first place. Being able to use components whose lifetime is measured in write cycles rather than power-on hours makes it possible to use them far longer than three or five years. Flash storage systems change the whole lifecycle compared to the early days, when systems were end-of-life after five years at the latest and you had to buy a new system with no chance of keeping the SSDs or DFMs beyond the five-year lifespan of a traditional system.
Where you discussed aspects like IOPS and RAID groups in the past, you now discuss dedupe and compression efficiencies and the lifetime of the system. >100,000 IOPS with the smallest systems should be enough for everyone. :-)

it_user208149 - PeerSpot reviewer
Presales Technical Consultant Storage at Hewlett Packard Hellas Ltd.
Mar 14, 2015

The primary requirement for me is data reduction using de-duplication algorithms. The second requirement is the SSDs' wear gauge: I need to be sure that the SSDs installed in a flash array will work for as many years as possible. So the vendor with the best offering in those two areas has the best flash array.

it_user221634 - PeerSpot reviewer
User at Hewlett-Packard
Apr 10, 2015

Customers should consider not only performance, which is really table stakes for an All Flash Array, but also resilience design and the data services offered on the platform. AFAs are most often used for Tier-1 apps, so the definition of what is required to support a Tier-1 application should not be compromised to fit what a particular AFA does or does not support. Simplicity and interoperability with other non-AFA assets are also key. AFAs should support replication to, and data portability between, themselves and non-AFAs. Further, these capabilities should be native and not require additional hardware or software (virtual or physical). Lastly, don't get hung up on the minutiae of de-dupe, compression, compaction, or data reduction metrics. All leading vendors have approaches that leverage what their technologies can do to make the most efficient use of flash and preserve its duty cycle. At the end of the day, you should compare two ratios: storage seen by the host / storage consumed on the array (or, another way, provisioned vs. allocated) and $/GB. These are the most useful in comparing what you are getting for your money. The $/IOPS conversation is old and challenging to relate to real costs, as IOPS is a more ephemeral concept than GB.
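The two ratios suggested above can be turned into a quick back-of-the-envelope comparison. The sketch below uses entirely hypothetical prices and capacities; only the ratio definitions come from the answer itself:

```python
# Compare two arrays on the two ratios above: data reduction
# (host-visible GB / consumed GB) and effective $/GB.
# All figures are hypothetical, for illustration only.

def effective_metrics(price_usd, host_visible_gb, consumed_gb):
    """Return (data reduction ratio, cost per host-visible GB)."""
    reduction_ratio = host_visible_gb / consumed_gb
    cost_per_effective_gb = price_usd / host_visible_gb
    return reduction_ratio, cost_per_effective_gb

array_a = effective_metrics(price_usd=250_000, host_visible_gb=400_000, consumed_gb=100_000)
array_b = effective_metrics(price_usd=200_000, host_visible_gb=250_000, consumed_gb=100_000)

print(f"Array A: {array_a[0]:.1f}:1 reduction, ${array_a[1]:.3f}/GB")
print(f"Array B: {array_b[0]:.1f}:1 reduction, ${array_b[1]:.3f}/GB")
```

On these made-up numbers the nominally cheaper array B delivers less host-visible capacity per dollar, which is exactly the distinction the provisioned-vs-allocated comparison is meant to expose.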

Mar 26, 2015

Understanding your particular use case is key to selecting the proper technology for your environment. We are big data, Oracle, data warehouse (non-OLTP), so we are hypersensitive to performance and HA. Flash and flash hybrid are the future for the datacenter, but there are varying ways of skinning this cat, so pay close attention to the details. For me, HA is paramount: our storage must be NDU in all situations. Performance is important, but we are seeing numbers well in excess of anyone's requirements. So what's next? Sustained performance: how are write-cliff issues addressed? Datacenter cost, if you're in a co-lo, is important, so a smaller footprint and lower kW draw should be considered. Then, of course, cost. Be careful with the usable number; de-duplication and compression are often factored into the marketing, so a POC is important to understand true usable capacity.
I go back to my original statement: understanding what YOUR company needs is the most important piece of data one should take into the conversation with any and all of the suitors.

it_user202749 - PeerSpot reviewer
Principal Architect with 1,001-5,000 employees
Mar 3, 2015

It depends on your requirements. Are you looking at flash for performance, ease of use, or improved data management?
Performance: you likely want an array with a larger block size, and one where compression and de-duplication can be enabled or disabled on select volumes.

Data reduction or data management: De-duplication and compression help manage storage growth; however, you do need to understand your data. If you have many Oracle databases, then block size will be key. Most products use a 4-8K block size. Oracle writes a unique ID on its data blocks, which makes them look like unique data. If your product has a smaller block size, your compression and de-duplication will be better. (Below 4K is better, but performance may suffer slightly.)

De-duplication: If you have several Test, Dev, and QA databases that are all copies of production, de-duplication might help significantly. If de-duplication is your goal, you need to look at the de-duplication boundaries. Products that offer array-wide or grid-wide de-duplication will provide the most benefit.

Remote replication: If this is a requirement, you need to look at it carefully; each vendor does it differently, and some products need a separate inline appliance to accommodate replication. Replication with no rehydration of data is preferred, as this will reduce WAN bandwidth requirements and remote storage volumes.
Ease of use: Can the daily and weekly tasks be completed easily? How difficult is it to add or change storage volumes, LUNs, or aggregates? Do you need aggregates? Can you meet the RTO/RPO business requirements with the storage, or will you need a backup tool set to do this? You should include the cost of meeting the RTO/RPO in the solution cost evaluation.

Reporting: You need to look at the canned reports. Do they have the reports you need to sufficiently manage your data? And, equally important, do they have the reports needed to show the business the efficiencies provided by the storage infrastructure? Do you need bill-back reports? (Compression and de-duplication rates, I/O latency reports, etc.)

Terence Canaday - PeerSpot reviewer
Principal SE at Pure Storage
Jan 30, 2021

@it_user202749 I have to add a few topics to this. Most of all, large block sizes only help with throughput, not IOPS. The more IOPS you want, the smaller the blocks have to be. A modern flash array is usually optimized for a 32K block size, which is the average for regular hypervisor environments. SQL Server and other databases even use a 64K block size by default, because of the 8K page size with 8 pages per read.

And of course, there are already flash-based storage systems on the market where you don't need to (or even cannot) disable dedupe and compression to prevent performance gaps. These two features are essential to increase the lifetime of the flash cells. Variable block sizes for inline dedupe are needed to be effective nowadays.

Finally, asynchronous vs. synchronous replication has to be considered specifically against the workloads in use. With synchronous replication, each write needs to travel to the remote site before the commit is sent to the host, so the latency will be higher (the round-trip time is added to the write latency) for write-intensive workloads; this can be avoided by using async or near-sync replication features. The storage should be able to address all possibilities at the same time, of course. :-) Not to mention the NVMe-oF option or Intel Optane technology for the last peak of performance. :-)
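The round-trip penalty of synchronous replication described above can be estimated with a quick calculation. The ~1 ms of round-trip time per ~100 km of fibre used below is a common rule of thumb, and all the figures are hypothetical:

```python
# Back-of-the-envelope estimate of synchronous-replication write latency:
# each write must commit at the remote site before it is acknowledged,
# so the inter-site round-trip time (RTT) is added to the local latency.
# Rule of thumb assumed here: ~1 ms RTT per ~100 km of fibre.

def sync_write_latency_ms(local_latency_ms: float, distance_km: float,
                          rtt_ms_per_100km: float = 1.0) -> float:
    """Effective write latency once the remote commit round trip is added."""
    rtt_ms = distance_km / 100.0 * rtt_ms_per_100km
    return local_latency_ms + rtt_ms

for distance in (10, 50, 100, 300):
    print(f"{distance:>3} km: {sync_write_latency_ms(0.5, distance):.2f} ms per write")
```

Even with a 0.5 ms local write latency, a 300 km synchronous link would dominate the write path, which is why the answer suggests async or near-sync replication for write-intensive workloads.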

sedson52 - PeerSpot reviewer
Lead System Engineer at MITRE Corporation
Real User
Jan 14, 2020

Storage virtualization and the ability to tailor the storage solution to the needs of the end user and the associated compute resources is the biggest factor. Being able to easily implement tiers of storage performance is key to being more efficient with the money spent on storage.

Bob Whitcombe - PeerSpot reviewer
Technical Sales Architect at a tech services company with 501-1,000 employees
Real User
Mar 1, 2017

AFAs have two major advantages over spinning disk, latency and IOPS, so you must understand your workload before jumping into an AFA selection process. Today, an AFA costs 3-5x more than a traditional array. When that price delta narrows to 50%, I will probably go all-flash. Note that as we get to "all flash everywhere," new hyper-converged architectures will also figure prominently in my AFA analysis.

With the current gap in pricing, however, we must engineer a solution for a need. Quantify the need: am I providing EPIC for a 5,000-person hospital, analytics, transaction processing, etc.? What is the critical gating factor, and what are the target SLAs? Do I need more IOPS, more throughput, lower latency? Will going to an AFA bring my response time down from 50ms today to 10ms tomorrow? Do I need to remember I have a 100ms SLA? As many note above, AFAs excel in many critical performance areas over traditional arrays, but I don't start with the array; I start with the workloads and what level of service my consumers require.

it_user609312 - PeerSpot reviewer
Sr. Systems Administrator at a healthcare company with 501-1,000 employees
Mar 1, 2017

How many IOPS are you averaging right now? Most organizations have far fewer IOPS than you would think. If you're looking to just speed up some apps, then use a flash array for compute and a SAN for long-term storage. Or better yet, go with a high-end, easy-to-scale-out hyperconverged system and get the best of both worlds! Look for good dedupe and compression numbers.

it_user618633 - PeerSpot reviewer
Founder and Group CEO at a tech services company with 51-200 employees
Mar 1, 2017

There is not really any one single thing to consider. These systems are complex for a good reason and the overall outcome will only be as good as the sum of all moving parts....

1. How many disks and therefore how many total IOPS are available?
2. What class of SSD (SLC, cMLC etc)?
3. Are the controllers able to deliver the full capability of the disks behind them?
4. Are the controllers able to flood the interconnects in-front of them?
5. Are there enough controllers to prevent down-time during failure or maintenance without significant degradation of performance?
6. Do the software smarts of the system provide added benefits such as dynamic prioritization of presented volumes allowing you to deliver multiple classes of storage from a single medium?
7. Can the controllers cope with running these software smarts at full speed without affecting the data transport?

And most importantly, perhaps this is the single most important thing...

1. What is the ecosystem behind the storage, i.e. the vendor and their overall capability to patch known issues, properly test releases before publishing them, effectively communicate changes and give you confidence in the long term operation of the SAN. Do they have local support and is it of high quality?

it_user240762 - PeerSpot reviewer
Storage Sales Specialist at Hewlett-Packard
Mar 1, 2017

When considering an AFA, you should factor in the obvious: it's always going to scale with SSD. What most clients find out within the first year of filling up an AFA is that their data naturally grows and creates new hot data while now retaining warm data. A tiered solution would be nice, but with an AFA you have to expand at scale, and although you may only need to expand your lower tier, you are going to have to buy SSDs. I've seen clients buy a separate SAN (not their original plan) once they saw the SSD expansion quote for their AFA design. So consider the TCO and total costs. If I'm allowed to mention it, an architecture like HPE 3PAR allows an organization to start out with an AFA model; then, some months later, when scale becomes a topic, the client can add disk to the array and really maximize TCO.

it_user618630 - PeerSpot reviewer
Software Defined Sales Specialist at IBM
Mar 1, 2017

Dear friend,
All-flash arrays are amazing in many aspects. The first thing you should evaluate is the fit of cost versus the benefits the application will get from it. Will the reduction of I/O latency affect the business result? Talking about the technologies available in the market, you should search for the best (lowest) latency and the right software features, with the highest endurance and capacity. Let's look at each factor:
- Latency: lowest is better, but remember that hard disks average around 5 ms; anything below 1 ms can give you enough acceleration for your application. Microlatency below 200 μs will be reached only in some flash systems that use no RAID controller but ASICs, for example IBM FlashSystem.
- Software features: if you only need to accelerate a specific application, use a flash system without any software features like compression, snapshots, or virtualization; this is what we call tier-0 storage. If you need a flash array for general storage use (tier-1 storage), find a supplier that gives you a subsystem with all the enterprise features: snapshots, migration, and the ability to virtualize other storage, such as IBM's V9000.
- Endurance: all flash media are based on NAND memory, and each NAND cell can only be written a very limited number of times. Reads are not a problem, but each time you rewrite a cell it becomes weaker, so find out how the supplier handles garbage collection (this process is done by all suppliers). Does the supplier control it, or does it rely on the internal SSD microcode without controlling it? The latter is a bad idea. Is there other intelligence managing the data in each cell? For example, some manufacturers have an internal cell auto-tier that moves data that changes less often onto cells that are already weak, giving them more usable time.
- Capacity: what real, usable capacity does the manufacturer commit to achieving? Be careful: some sell with a promise to reach a certain capacity and later just give you more media when it doesn't reach the promised amount. Test it with your data before buying. Getting more media for free will not give you more physical space in your datacenter or reduce your electricity bill.

In summary: check whether you really need flash now, how the system handles garbage collection, whether the software features fit your needs, and whether the capacity reduction is real for your data. Don't buy without testing with your own data.

Hope the information was helpful
Christian Paglioli

it_user478728 - PeerSpot reviewer
AGM IT Delivery at a financial services firm with 1,001-5,000 employees
Real User
Mar 1, 2017

1. Response time
2. Connectivity of flash storage with hosts, considering the large number of IOPS generated by the storage
3. Backup strategy for storage-based backups
4. Ability to scale out, considering that most flash storage systems are appliances

it_user256587 - PeerSpot reviewer
User at a tech company with 51-200 employees
Mar 1, 2017

In my experience from being involved in performing independent flash vendor comparisons, often in the vendors' own labs, I have observed there is no single right answer, as it depends on the workload(s) being driven.

For example, highly write-intensive workloads using random data patterns and larger block sizes place different demands on flash solutions than highly read-intensive, sequential ones with steady 4K block sizes.

I have tested many different flash products and sometimes a vendor performs to a very high standard for one workload and for the next workload, we see unacceptable latencies often exceeding 50ms when running scaled up workload levels.

The workload demands coupled with an appropriate product configuration determine the best outcomes in my experience.

I would encourage you to look at the workload profiles and how they will be mixed. Other factors impact performance, such as the vendor's support for the protocol. I mean that FC vs. iSCSI vs. NFS support will often lead to wild performance variations between vendors.

How vendors cope with the metadata command mix also massively affects performance.

As an example, I have seen two comparable flash product configurations where one hit sub-5ms latency for a 50K IOPS workload that was mostly write-intensive at a 5:1 data reduction ratio, while the next vendor hit 80ms latency under the exact same workload conditions. Until you test and compare them at scale, it's nothing more than guesswork.
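When comparing vendors on the same replayed workload, it is percentile latency, not the average, that exposes gaps like the 5ms-vs-80ms one described. A minimal sketch of that comparison, with entirely made-up sample data:

```python
# Minimal sketch: compare two arrays' latency samples from the same
# replayed workload using percentiles rather than averages.
# Sample values are hypothetical, for illustration only.

def percentile(samples_ms, pct):
    """Nearest-rank percentile of a list of latency samples."""
    s = sorted(samples_ms)
    idx = min(len(s) - 1, int(round(pct / 100.0 * (len(s) - 1))))
    return s[idx]

vendor_a = [0.8, 1.1, 0.9, 1.3, 4.9, 1.0, 1.2, 0.7, 2.1, 1.4]
vendor_b = [0.9, 1.0, 1.1, 70.0, 80.0, 1.2, 0.8, 75.0, 1.3, 1.1]

for name, samples in (("A", vendor_a), ("B", vendor_b)):
    print(f"Vendor {name}: p50 = {percentile(samples, 50)} ms, "
          f"p99 = {percentile(samples, 99)} ms")
```

Both vendors look similar at the median; only the tail percentile reveals the unacceptable spikes, which is why testing at scale matters.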

There is a free-to-use portal at WorkloadCentral.com with workload analytics, where storage logs can be uploaded and analysed to better understand current workload behaviour. There is also a chance to see other workloads, such as Oracle, SQL, and VDI, and download them to replay in your own lab against products under consideration.

Good luck with your investigations!

Technical Architect at HCL Technologies
Real User
Mar 1, 2017

When evaluating Enterprise class all flash arrays, there are quite a few things to look for as these arrays differ fundamentally from the standard spinning disk based arrays or hybrid arrays. This comes down to flash/SSD as a media:

1. What type of SSDs are being used – eMLC, TLC, 3D NAND, etc.?
2. Writes are particularly crucial, as SSDs have a preset, finite number of write cycles. How does the underlying intelligence handle writes?
3. What data efficiency measures are used? De-duplication and compression – are these inline or post-process?
4. What storage protocols are supported?
5. Are capacity enhancements such as erasure coding supported?
6. If the all-flash arrays are scale-out in nature (e.g., XtremIO), what is the interconnect protocol?
7. Points of integration with orchestration/automation and data management tools
8. Does the array support capabilities such as external storage virtualization?

it_user70797 - PeerSpot reviewer
Sr. Storage Solutions Engineer with 1,001-5,000 employees
Mar 1, 2017

The most important aspect to look for is to make sure the application requirements are met. Same as traditional storage arrays. There are plenty of advanced functions available via snap, clone, recovery options. You need to make sure you understand exactly how the new functions will be used in your environment.

it_user594891 - PeerSpot reviewer
Infrastructure and Database Architect at PB IT Pro
Mar 1, 2017

Performance in a shared environment, mixed workloads.
API access for backups, clones, deployments to support an increasingly Agile/DevOps world
Analytics on usage, trends, hotspots, bottlenecks
vCenter/Hyper-V integration

it_user380349 - PeerSpot reviewer
Technical Sales Consultant at a tech vendor with 51-200 employees
Mar 1, 2017

There are relatively few systems that need the high performance of all-flash, especially as all-flash capacities go up from dozens of terabytes to several petabytes. Server and storage virtualization can give similar performance at a fraction of the cost. Make sure your actual need drives the decision.

it_user569205 - PeerSpot reviewer
User at a tech company with 51-200 employees
Mar 1, 2017

The choice many people with a tighter budget make is based on price per gigabyte.

Hitesh Chhaya - PeerSpot reviewer
Engineer-Technology Solutions at Ashtech Infotech Pvt.Ltd.
Real User
Mar 1, 2017

A block device layer can emulate a disk drive, so that a general-purpose file system can be used on a flash-based storage device.

Chris Childerhose - PeerSpot reviewer
Lead Infrastructure Architect at ThinkON
Real User
Expert, Top 5
Mar 1, 2017

Further to all the great suggestions above another thing to look at is - what sets the vendor apart from all other vendors? What makes them unique or different versus being similar to all others. An example of this would be an analytic website or a unique dashboard.

it_user543627 - PeerSpot reviewer
Director Strategy and Business Development at Samsung
Real User
Oct 31, 2016

All-flash arrays are for primary storage.
Therefore, data protection (snapshots), DevOps (zero-copy clones), and programmability are key attributes.

it_user307320 - PeerSpot reviewer
Infrastructure Manager at a retailer with 1,001-5,000 employees
Sep 8, 2015

Recovery is also a point. Even though SSDs are fast at writing to blocks, we still need to make sure there won't be data loss if the power suddenly goes down.

it_user249687 - PeerSpot reviewer
QA Manager at Reduxio Systems
Jun 4, 2015

I would want to know what UNIQUE features the array has - they all pretty much have very similar features - give me what differentiates them (if anything...)

it_user240582 - PeerSpot reviewer
Technology Enterprise Presales Consultant at HP
May 19, 2015

In addition to the typical flash characteristics: the type of technology used (SLC, eMLC, cMLC, TLC, 16LC, ...) directly affects pricing and performance (IOPS and < 1 ms average latency), along with over-provisioning, proper wear leveling (random vs. sequential I/O patterns), management of garbage collection and write amplification (inline data compression and de-dupe are very important efficiency factors), together with longer drive endurance and DWPD (Drive Writes Per Day).
Equally or more important are the typical Tier-1 characteristics: high performance, a scale-out architecture with active/active LUN nodes/controllers and multi-tenancy, reliability (99.9999% HA), D&R (synchronous replication with RPO=0 and asynchronous replication with RPO < 5 minutes, with consistency groups), application integration (VMware, Hyper-V, Oracle, SQL, Exchange, SAP, ...), efficiency (thin techniques), easy and intuitive management (self-configuring, optimizing, and tuning), and data mobility.
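The DWPD rating mentioned above translates directly into a total-write budget over the warranty period. A hedged sketch of that arithmetic, using illustrative drive figures:

```python
# Hedged sketch: translating a DWPD (Drive Writes Per Day) rating into the
# total terabytes written (TBW) a drive is rated for over its warranty.
# The drive capacity and warranty figures below are illustrative examples.

def endurance_tbw(capacity_tb: float, dwpd: float, warranty_years: float) -> float:
    """Rated total terabytes written: full-capacity writes/day over warranty."""
    return capacity_tb * dwpd * warranty_years * 365

# e.g. a 3.84 TB read-intensive drive rated at 1 DWPD over a 5-year warranty
print(f"{endurance_tbw(3.84, 1.0, 5):.0f} TBW")
```

Comparing this budget against your measured daily write volume (after the array's data reduction) shows whether a lower-endurance, cheaper drive class would survive your workload.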
