2015-02-23T07:21:00Z
Ariel Lindenfeld - PeerSpot reviewer
Director of Community at PeerSpot

When evaluating Enterprise Flash Array Storage, what aspect do you think is the most important to look for?

24 Answers
it_user208149 - PeerSpot reviewer
Presales Technical Consultant Storage at Hewlett Packard Hellas Ltd.
Vendor
2015-03-14T21:50:27Z
Mar 14, 2015

My primary requirement is data reduction using de-duplication algorithms. My second requirement is the SSDs' wear gauge: I need to be sure that the SSDs installed in a flash array will last as many years as possible. The vendor with the best offering in those two areas has the best flash array.

it_user221634 - PeerSpot reviewer
User at Hewlett-Packard
Vendor
2015-04-10T21:31:50Z
Apr 10, 2015

Customers should consider not only performance, which is really table stakes for an all-flash array, but also the resilience design and the data services offered on the platform. An AFA is most often used for Tier-1 apps, so the definition of what is required to support a Tier-1 application should not be compromised to fit what a particular AFA does or does not support.

Simplicity and interoperability with other non-AFA assets are also key. AFAs should support replication to, and data portability between, themselves and non-AFAs. Further, these capabilities should be native and should not require additional hardware or software (virtual or physical).

Lastly, don't get hung up on the minutiae of de-dupe, compression, compaction, or data-reduction metrics. All leading vendors have approaches that leverage what their technologies can do to make the most efficient use of flash and preserve its duty cycle. At the end of the day, you should compare two ratios: storage seen by the host versus storage consumed on the array (put another way, provisioned vs. allocated), and $/GB. These are the most useful in comparing what you are getting for your money. The $/IOPS conversation is old and hard to relate to real costs, as IOPS is a more ephemeral concept than GB.
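To make those two ratios concrete, here is a minimal sketch with purely hypothetical figures (substitute the numbers from your own quotes and array statistics):

```python
# Hypothetical figures for illustration only; use real quote/array numbers.
host_visible_gb = 200_000    # capacity provisioned to hosts
consumed_gb = 50_000         # physical capacity actually consumed on the array
array_price_usd = 400_000    # total price of the array as quoted

efficiency_ratio = host_visible_gb / consumed_gb            # provisioned vs. allocated
effective_cost_per_gb = array_price_usd / host_visible_gb   # $/GB as the hosts see it
raw_cost_per_gb = array_price_usd / consumed_gb             # $/GB of physical flash consumed

print(f"Efficiency ratio: {efficiency_ratio:.1f}:1")
print(f"Effective $/GB (host-visible): ${effective_cost_per_gb:.2f}")
print(f"Raw $/GB (consumed): ${raw_cost_per_gb:.2f}")
```

Comparing vendors on these same two figures keeps the discussion anchored in what you actually get for your money.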

Vendor
2015-03-26T19:26:26Z
Mar 26, 2015

Understanding your particular use case is key to selecting the proper technology for your environment. We are big data, Oracle, data warehouse, non-OLTP, so we are hypersensitive to performance and HA. Flash and flash hybrid are the future for the datacenter, but there are varying ways of skinning this cat, so pay close attention to the details. For me, HA is paramount: our storage must support NDU (non-disruptive upgrades) in all situations. Performance is important, but we are seeing numbers well in excess of anyone's requirements. So what's next? Sustained performance: how are write-cliff issues addressed? Datacenter cost matters if you are in a co-lo, so a smaller footprint and lower kW draw should be considered. Then, of course, cost. Be careful with the "usable" number; de-duplication and compression are often factored into the marketing, so a POC is important to understand true usable capacity.
I go back to my original statement: understanding what YOUR company needs is the most important piece of data you should take into the conversation with any and all of the suitors.

it_user202749 - PeerSpot reviewer
Principal Architect with 1,001-5,000 employees
Vendor
2015-03-03T16:12:06Z
Mar 3, 2015

It depends on your requirements. Are you looking at flash for performance, ease of use, or improved data management?
Performance: you likely want an array with a larger block size and one where compression and de-duplication can be enabled or disabled on select volumes.

Data reduction or data management: de-duplication and compression help manage storage growth; however, you do need to understand your data. If you have many Oracle databases, then block size will be key. Most products use a 4-8K block size. Oracle writes a unique ID on its data blocks, which makes the data look unique. If your product has a smaller block size, your compression and de-duplication will be better (below 4K is better, but performance may suffer slightly).

De-duplication: if you have several test, dev, and QA databases that are all copies of production, de-duplication might help significantly. If de-duplication is your goal, you also need to look at the de-duplication boundaries. Products that offer array-wide or grid-wide de-duplication will provide the most benefit.

Remote replication: if this is a requirement, you need to look at it carefully, as each vendor does it differently; some products need a separate inline appliance to accommodate replication. Replication with no rehydration of data is preferred, as this will reduce WAN bandwidth requirements and remote storage volumes.
Ease of use: can the daily and weekly tasks be completed easily? How difficult is it to add or change storage volumes, LUNs, or aggregates, and do you need aggregates at all? Can you meet the business RTO/RPO requirements with the storage, or will you need a backup tool set to do this? You should include the cost of meeting the RTO/RPO in the solution cost evaluation.

Reporting: look at the canned reports. Do they include the reports you need to sufficiently manage your data? Equally important, do they include the reports needed to show the business the efficiencies provided by the storage infrastructure? Do you need bill-back reports (compression and de-duplication rates, I/O latency reports, etc.)?

TC
Principal SE at Pure Storage
Vendor
Jan 30, 2021

@it_user202749 I have to add a few topics to this. Most of all, large block sizes only help with regard to throughput, not IOPS; the more IOPS you want, the smaller the blocks have to be. A modern flash array is usually optimized for a 32K block size, which is the average for regular hypervisor environments. SQL and other databases even use a 64K block size by default, because of the 8K page size with 8 pages per read.

And of course, there are already flash-based storage systems on the market where you don't need to (or even cannot) disable dedupe and compression to prevent performance gaps. These two features are essential to increase the lifetime of the flash cells. Variable block sizes for inline dedupe are needed to be effective nowadays.

Finally, asynchronous vs. synchronous replication has to be considered specifically for the workloads in use. With synchronous replication, each write needs to travel to the remote site before the commit is sent to the host, so write latency will be higher (the round-trip time is added to the write latency). For write-intensive workloads this can be avoided by using async or near-sync replication features. The storage should be able to address all possibilities at the same time, of course. :-) Not to mention the NVMe-oF option or Intel Optane technology for the last peak of performance. :-)
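As a rough illustration of that round-trip penalty, here is a minimal sketch assuming a simplified model where synchronous replication adds exactly one full round trip per write; all figures are hypothetical, not measurements of any specific array:

```python
# Minimal sketch: how synchronous replication round-trip time adds to write latency.
local_write_latency_ms = 0.5          # array acknowledges a local write in ~0.5 ms
distance_km = 100                     # distance to the remote site
fiber_propagation_km_per_ms = 200     # roughly 200 km per millisecond in fiber

round_trip_ms = 2 * distance_km / fiber_propagation_km_per_ms   # ~1.0 ms
sync_write_latency_ms = local_write_latency_ms + round_trip_ms

print(f"Local write latency:       {local_write_latency_ms:.2f} ms")
print(f"Replication round trip:    {round_trip_ms:.2f} ms")
print(f"Synchronous write latency: {sync_write_latency_ms:.2f} ms")
```

Even before any remote-array processing time, the host-visible write latency roughly triples in this example, which is why write-intensive workloads often push you toward async or near-sync replication.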

TC
Principal SE at Pure Storage
Vendor
2021-01-27T13:11:56Z
Jan 27, 2021

Flash changed the whole way to address storage in the first place. Being able to use components whose lifetime is measured not in hours but in write accesses makes it possible to use them far longer than three or five years. Flash storage systems make it possible to change the whole lifecycle compared to the early days, when systems were end-of-life after five years at the latest and you had to buy a new system without any chance to reuse the SSDs or DFMs after that lifespan.
Where you used to discuss IOPS and RAID groups, you now discuss dedupe and compression efficiencies and the lifetime of the system. More than 100,000 IOPS with the smallest systems should be enough for everyone. :-)

SE
Lead System Engineer at MITRE Corporation
Real User
2020-01-14T14:20:29Z
Jan 14, 2020

Storage virtualization and the ability to tailor the storage solution to the needs of the end user and the associated compute resources are the biggest factors. Being able to easily implement tiers of storage performance is key to being more efficient with the money spent on storage.

BW
Technical Sales Architect at a tech services company with 501-1,000 employees
Real User
2017-03-01T20:35:47Z
Mar 1, 2017

AFAs have two major advantages over spinning disk, latency and IOPS, so you must understand your workload before jumping into an AFA selection process. Today, an AFA costs 3-5x more than a traditional array; when that price delta narrows to 50% more, I will probably go all-flash. Note that as we get to "all flash everywhere," new hyper-converged architectures will also figure prominently in my AFA analysis.

With the current gap in pricing, however, we must engineer a solution for a need. Quantify the need: am I providing Epic for a 5,000-person hospital, analytics, transaction processing, etc.? What is the critical gating factor, and what are the target SLAs? Do I need more IOPS, more throughput, lower latency? Will going to an AFA bring my response time down from 50 ms today to 10 ms tomorrow, and do I need to remember that I have a 100 ms SLA? As many note above, AFAs excel in many critical performance areas over traditional arrays, but I don't start with the array; I start with the workloads and what level of service my consumers require.
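A minimal sketch of that kind of workload-first check might look like the following; every number here is a hypothetical placeholder, not a benchmark of any real product:

```python
# Hypothetical workload profile vs. a candidate array's characteristics.
workload = {
    "peak_iops": 45_000,
    "latency_sla_ms": 100,          # the business SLA, not the marketing number
    "current_p95_latency_ms": 50,   # what the existing storage delivers today
}
candidate_afa = {"expected_p95_latency_ms": 10, "max_iops": 300_000}

sla_ms = workload["latency_sla_ms"]
meets_sla_today = workload["current_p95_latency_ms"] <= sla_ms
headroom = candidate_afa["max_iops"] / workload["peak_iops"]

print(f"Existing storage already meets the {sla_ms} ms SLA: {meets_sla_today}")
print(f"AFA IOPS headroom vs. peak workload: {headroom:.0f}x")
```

If the existing array already meets the SLA, the AFA's lower latency is a nice-to-have rather than a requirement, which is exactly the point about starting from the workload.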

it_user609312 - PeerSpot reviewer
Sr. Systems Administrator at a healthcare company with 501-1,000 employees
Vendor
2017-03-01T20:29:38Z
Mar 1, 2017

How many IOPS are you averaging right now? Most organizations have far fewer IOPS than you would think. If you're looking to just speed up some apps, then use a flash array for compute and a SAN for long-term storage. Or, better yet, go with a high-end, easy-to-scale-out hyperconverged system and get the best of both worlds! Look for good dedupe and compression numbers.
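To answer the "how many IOPS are you averaging" question, a minimal sketch could be the following, assuming you have exported per-interval read and write rates from a tool such as iostat or your array's performance statistics (the sample values below are hypothetical):

```python
import statistics

# Hypothetical samples of (reads/s, writes/s) collected at regular intervals.
samples = [(850, 420), (1200, 610), (900, 480), (2300, 1500), (760, 390)]

total_iops = [reads + writes for reads, writes in samples]
avg_iops = statistics.mean(total_iops)
peak_iops = max(total_iops)

print(f"Average IOPS: {avg_iops:.0f}")
print(f"Peak IOPS:    {peak_iops}")
```

Many environments that "feel slow" turn out to average only a few thousand IOPS, which is well within reach of hybrid or hyperconverged options.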

it_user618633 - PeerSpot reviewer
Founder and Group CEO at a tech services company with 51-200 employees
Consultant
2017-03-01T19:49:07Z
Mar 1, 2017

There is not really any one single thing to consider. These systems are complex for a good reason, and the overall outcome will only be as good as the sum of all the moving parts...

1. How many disks and therefore how many total IOPS are available?
2. What class of SSD (SLC, cMLC, etc.)?
3. Are the controllers able to deliver the full capability of the disks behind them?
4. Are the controllers able to flood the interconnects in-front of them?
5. Are there enough controllers to prevent down-time during failure or maintenance without significant degradation of performance?
6. Do the software smarts of the system provide added benefits such as dynamic prioritization of presented volumes allowing you to deliver multiple classes of storage from a single medium?
7. Can the controllers cope with running these software smarts at full speed without affecting the data transport?

And most importantly, perhaps this is the single most important thing...

1. What is the ecosystem behind the storage, i.e., the vendor and their overall capability to patch known issues, properly test releases before publishing them, effectively communicate changes, and give you confidence in the long-term operation of the SAN? Do they have local support, and is it of high quality?

it_user240762 - PeerSpot reviewer
Storage Sales Specialist at Hewlett-Packard
Vendor
2017-03-01T19:42:11Z
Mar 1, 2017

When considering an AFA, you should factor in the obvious: it is always going to scale with SSD. What most clients find is that within the first year of filling up an AFA, their data naturally grows, creating new hot data while also retaining warm data. A tiered solution would be nice, but with an AFA you have to expand at scale, and although you may only need to expand your lower tier, you are going to have to buy SSDs. I've seen clients buy a separate SAN (not their original plan) once they saw the SSD expansion quote for an AFA design. So consider the TCO and total costs. If I may mention it, an architecture like HPE 3PAR allows an organization to start out with an AFA model; then, some months later, when scale becomes a topic, the client can add disk to the array and really maximize TCO.

it_user618630 - PeerSpot reviewer
Software Defined Sales Specialist at IBM
MSP
2017-03-01T19:36:48Z
Mar 1, 2017

Dear friend,
All-flash arrays are amazing in many respects. The first thing you should evaluate is the fit of the cost versus the benefits the application will get from it: will the reduction of I/O latency affect the business result? Looking at the technologies available in the market, you should search for the best (lowest) latency and the right software features, with the highest endurance and capacity. Let's look at each factor:
- Latency: lower is better, but remember that hard disks average around 5 milliseconds, so anything below 1 ms can give your application enough acceleration. Micro-latency below 200 microseconds will be reached only in some flash systems that have no RAID controller but ASICs, for example IBM FlashSystem.
- Software features: if you only need to accelerate a specific application, use a flash system without any software features such as compression, snapshots, or virtualization; this is what we call tier-0 storage. If you need a flash array for general storage use (tier-1 storage), find a supplier that gives you a subsystem with all the enterprise features: snapshots, migration, and the ability to virtualize other storage, such as the IBM V9000.
- Endurance: all flash media are based on NAND memory, and each NAND cell can only be written a limited number of times. Reads are not a problem, but every time you rewrite a cell it becomes weaker, so find out how the supplier handles garbage collection (this process is done by all suppliers). Does the supplier control it, or does it rely on the internal SSD microcode without controlling it? The latter is a bad idea. Is there other intelligence managing the data in each cell? For example, some manufacturers have an internal cell auto-tier that moves data that changes less frequently onto cells that are already weak, extending their useful life.
- Capacity: what is the real, usable capacity that the manufacturer commits to achieving? Be careful: some sell with a promise to reach a certain effective capacity and later just give you more media when it does not reach the promised amount. Test it with your data before you buy; getting more media for free will not give you more physical space in your datacenter or reduce your electricity bill.

In summary: check whether you really need flash now, how the array handles garbage collection, whether the software features fit your needs, and whether the capacity reduction is real for your data. Don't buy without testing with your own data.

I hope the information was helpful.
Christian Paglioli

it_user478728 - PeerSpot reviewer
AGM IT Delivery at a financial services firm with 1,001-5,000 employees
Real User
2017-03-01T18:10:15Z
Mar 1, 2017

1. Response time
2. Connectivity of the flash storage with hosts, considering the large number of IOPS the storage generates
3. Backup strategy for storage-based backups
4. Ability to scale out, considering that most flash storage systems are appliances

it_user256587 - PeerSpot reviewer
User at a tech company with 51-200 employees
Vendor
2017-03-01T17:52:18Z
Mar 1, 2017

In my experience from being involved in performing independent flash vendor comparisons, often in the vendors own labs, I have observed there is no single right answer as it depends on the workload(s) being driven.

For example, highly write-intensive workloads using random data patterns and larger block sizes place different demands on flash solutions than highly read-intensive, sequential ones with steady 4K block sizes.

I have tested many different flash products, and sometimes a vendor performs to a very high standard for one workload, while for the next workload we see unacceptable latencies, often exceeding 50 ms, when running scaled-up workload levels.

The workload demands coupled with an appropriate product configuration determine the best outcomes in my experience.

I would encourage you to look at the workload profiles and how they will be mixed. Other factors also impact performance, such as the vendor's support for the protocol; FC vs. iSCSI vs. NFS support will often lead to wild performance variations between vendors.

How vendors cope with the metadata command mix also massively affects performance.

As an example, I have seen two comparable flash product configurations where one hit sub-5 ms latency for a 50K IOPS, mostly write-intensive workload at a 5:1 data reduction ratio, while the next vendor hit 80 ms latency for exactly the same workload conditions. Until you test and compare them at scale, it's nothing more than guesswork.
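A minimal sketch of how such a proof-of-concept comparison might be tabulated against a latency SLA (vendor names and figures below are placeholders, not real results):

```python
# Hypothetical PoC results: measured p99 latency (ms) per vendor for the same
# 50K IOPS, write-heavy workload at a 5:1 data reduction setting.
latency_sla_ms = 10
results = {"Vendor A": 4.8, "Vendor B": 80.0}

for vendor, p99_ms in results.items():
    verdict = "PASS" if p99_ms <= latency_sla_ms else "FAIL"
    print(f"{vendor}: p99 latency {p99_ms:.1f} ms -> {verdict}")
```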

There is a free-to-use portal at WorkloadCentral.com with workload analytics, where storage logs can be uploaded and analysed to better understand current workload behaviour. There is also a chance to see other workloads, such as Oracle, SQL, and VDI, and download them to replay in your own lab against the products under consideration.

Good luck with your investigations!

AS
Technical Architect at HCL Technologies
Real User
2017-03-01T17:25:16Z
Mar 1, 2017

When evaluating Enterprise class all flash arrays, there are quite a few things to look for as these arrays differ fundamentally from the standard spinning disk based arrays or hybrid arrays. This comes down to flash/SSD as a media:

1. What type of SSDs is being used: is it eMLC, TLC, 3D NAND, etc.?
2. Writes are particularly crucial, as SSDs have a preset, finite number of write cycles. How does the underlying intelligence handle writes?
3. What data-efficiency measures are used (de-duplication and compression), and are these inline or post-process?
4. What storage protocols are supported?
5. Are capacity enhancements such as erasure coding supported?
6. If the all-flash array is scale-out in nature (e.g., XtremIO), what is the interconnect protocol?
7. Points of integration with orchestration/automation and data management tools
8. Does the array support capabilities such as external storage virtualization?

it_user70797 - PeerSpot reviewer
Sr. Storage Solutions Engineer with 1,001-5,000 employees
User
2017-03-01T15:49:58Z
Mar 1, 2017

The most important aspect to look for is to make sure the application requirements are met. Same as traditional storage arrays. There are plenty of advanced functions available via snap, clone, recovery options. You need to make sure you understand exactly how the new functions will be used in your environment.

it_user594891 - PeerSpot reviewer
Infrastructure and Database Architect at PB IT Pro
Consultant
2017-03-01T15:48:46Z
Mar 1, 2017

Performance in a shared environment, mixed workloads.
API access for backups, clones, deployments to support an increasingly Agile/DevOps world
Analytics on usage, trends, hotspots, bottlenecks
vCenter/Hyper-V integration

it_user380349 - PeerSpot reviewer
Technical Sales Consultant at a tech vendor with 51-200 employees
Vendor
2017-03-01T15:45:51Z
Mar 1, 2017

There are relatively few systems that need the high performance of all-flash, especially as all-flash capacities go up from dozens of terabytes to several petabytes. Server and storage virtualization can give similar performance at a fraction of the cost. Make sure your actual need drives the priority.

it_user569205 - PeerSpot reviewer
User at a tech company with 51-200 employees
Vendor
2017-03-01T15:45:03Z
Mar 1, 2017

The choice many people with a tighter budget make is based on price per gigabyte.

HC
Engineer-Technology Solutions at Ashtech Infotech Pvt.Ltd.
Real User
2017-03-01T15:44:47Z
Mar 1, 2017

A block-device layer can emulate a disk drive so that a general-purpose file system can be used on a flash-based storage device.

Chris Childerhose - PeerSpot reviewer
Lead Infrastructure Architect at ThinkON
Real User
Expert, Top 5
2017-03-01T15:44:22Z
Mar 1, 2017

Further to all the great suggestions above, another thing to look at is what sets the vendor apart from all the other vendors. What makes them unique or different, versus being similar to all the others? An example of this would be an analytics website or a unique dashboard.

it_user543627 - PeerSpot reviewer
Director Strategy and Business Development at Samsung
Real User
2016-10-31T19:54:27Z
Oct 31, 2016

All-flash arrays are for primary storage.
Therefore, data protection (snapshots), DevOps (zero-copy clones), and programmability are key attributes.

it_user307320 - PeerSpot reviewer
Infrastructure Manager at a retailer with 1,001-5,000 employees
Vendor
2015-09-08T03:56:35Z
Sep 8, 2015

Recovery is also a point. Even though SSDs are fast at writing to blocks, we still need to make sure there will be no data loss if the power goes down suddenly.

it_user249687 - PeerSpot reviewer
QA Manager at Reduxio Systems
Vendor
2015-06-04T18:01:29Z
Jun 4, 2015

I would want to know what UNIQUE features the array has - they all pretty much have very similar features - give me what differentiates them (if anything...)

it_user240582 - PeerSpot reviewer
Technology Enterprise Presales Consultant at HP
Consultant
2015-05-19T10:07:05Z
May 19, 2015

In addition to the typical flash characteristics: the type of technology used (SLC, eMLC, cMLC, TLC, 16LC, ...) directly affects pricing and performance (IOPS and sub-1 ms average latency), the amount of over-provisioning, the right wear leveling for the I/O pattern (random/sequential), and the management of garbage collection and write amplification (inline data compression and de-dupe are very important efficiency factors), together with longer drive endurance and DWPD (Drive Writes Per Day).
The typical Tier-1 characteristics will also continue to be equally or more important: high performance; a scale-out architecture with active/active LUN nodes/controllers and multi-tenancy; 99.9999% HA reliability; DR (synchronous replication with RPO = 0 and asynchronous replication with RPO < 5 minutes, with consistency groups); application integration (VMware, Hyper-V, Oracle, SQL, Exchange, SAP, ...); efficiency (thin techniques); easy, intuitive management (self-configuring, optimizing, and tuning); and data mobility.
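As a minimal sketch of how a DWPD rating translates into a usable write-endurance budget, with purely hypothetical drive and workload figures:

```python
# Hypothetical example: does the drive's rated endurance cover the expected workload?
drive_capacity_tb = 7.68
dwpd_rating = 1.0               # Drive Writes Per Day, as rated by the vendor
daily_host_writes_tb = 4.0      # expected host writes per day
write_amplification = 1.3       # extra internal writes from garbage collection

allowed_daily_writes_tb = drive_capacity_tb * dwpd_rating
effective_daily_writes_tb = daily_host_writes_tb * write_amplification

print(f"Allowed writes/day:   {allowed_daily_writes_tb:.2f} TB")
print(f"Effective writes/day: {effective_daily_writes_tb:.2f} TB")
print("Within endurance budget:", effective_daily_writes_tb <= allowed_daily_writes_tb)
```

Inline compression and de-dupe reduce the effective daily writes, which is why they matter for endurance as well as for capacity.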

Related Questions
Avigayil Henderson - PeerSpot reviewer
Content Development Manager at PeerSpot
Feb 21, 2023
Hello community, please share your input and help out fellow peers. Thank you.
See 1 answer
LW
Content Editor at PeerSpot
Feb 21, 2023
Machine learning capabilities are relatively common among the bigger all-flash providers but differ in what they offer. Here are a number to consider:
- HPE Primera's all-flash platform incorporates HPE's InfoSight technology, which uses machine learning to predict and prevent potential issues. InfoSight also analyzes workload patterns and makes real-time recommendations to optimize performance and efficiency.
- Another player is Dell EMC PowerStore, which uses integrated machine learning to optimize performance, efficiency, and data placement. The platform uses intelligent data services to automatically tier data and optimize efficiency without requiring manual admin work.
- IBM comes to the table with its FlashSystem 9100 and AI-based predictive storage analytics and storage resource management.
- You can also look at the NetApp AFF A-Series, which comes with what NetApp calls its AI-informed predictive analytics and corrective action.
Avigayil Henderson - PeerSpot reviewer
Content Development Manager at PeerSpot
Feb 21, 2023
Hi community, please share your thoughts with the community based on your personal experience. Thank you.
See 1 answer
LW
Content Editor at PeerSpot
Feb 21, 2023
VMware is definitely a behemoth, and many all-flash storage systems include VMware integration. Among the bigger players, the following are worth a look:
- NetApp AFF offers tight integration with VMware vSphere, including VAAI and VASA support. The platform also integrates with VMware NSX, enabling you to virtualize your network and security infrastructure.
- Pure Storage FlashArray also offers strong integration with VMware, including VAAI, vCenter, and VMware Site Recovery Manager (SRM). FlashArray also offers a plugin for the vSphere Web Client for managing storage policies directly from the vSphere environment.
- Dell has a number of options. The Unity line supports VMware VAAI, vSphere, and vCenter integration, and also offers automated storage tiering to optimize the placement of data in VMware environments. Dell's PowerStore solution provides native and scalable vVols support, and Dell notes that its PowerMax line "is engineered to meet the most demanding VMware requirements."
- HPE's Nimble Storage solution also integrates with VMware, including VAAI, vCenter, and VMware SRM, and, like Pure Storage's FlashArray, offers a plugin for the vSphere Web Client. It also supports vSphere vVols.
- IBM FlashSystem's integration with VMware includes VAAI, vCenter, and VMware SRM. It also offers integration with vRO to help with insights into the performance and utilization of your VMware environment.