When I compared various modular storage area network (SAN) solutions and tools, I found HPE 3PAR StoreServ and IBM FlashSystem to be the most effective ones currently available on the market.
One of the things that I initially noticed about HPE 3PAR StoreServ was how easy it was to deploy and manage. HPE 3PAR StoreServ gives us the ability to quickly create and control spaces where our data can be stored. It is designed with a number of functionalities that streamline and simplify the processes that we use to manage our data. These functionalities include:
Point and click deployment. Any member of our team can easily deploy HPE 3PAR StoreServ, saving us a great deal of time and other resources when we initially deploy it.
An easy-to-use GUI. HPE 3PAR StoreServ employs a single pane of glass GUI that comes loaded with many powerful tools. It has general screens that enable us to handle basic scheduling, activities, and dashboard management. Additionally, VMware screens make it possible for us to manage and track our virtual machines.
A major benefit of HPE 3PAR StoreServ is the way it enables us to scale our data storage to meet our needs. Our storage needs evolve over time, and HPE 3PAR StoreServ is designed to accommodate that growth. It reduces the storage capacity needed to hold our data by as much as 75 percent compared to other products. We start with several terabytes of storage space and have the ability to scale up to 80 petabytes.
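To put that data-reduction claim into concrete numbers: a 75 percent reduction corresponds to a 4:1 data-reduction ratio. A quick back-of-the-envelope sketch (the raw capacity figure here is purely illustrative, not a vendor specification):

```python
def effective_capacity(raw_tb: float, reduction: float) -> float:
    """Logical capacity for a given raw capacity and data-reduction rate.

    A 75% reduction means stored data occupies only 25% of its logical
    size, i.e. a 4:1 data-reduction ratio.
    """
    return raw_tb / (1 - reduction)

# 20 TB of raw flash at 75% reduction holds 80 TB of logical data.
print(effective_capacity(20, 0.75))  # -> 80.0
```

The same arithmetic explains why vendors quote "effective" capacity figures several times larger than the raw flash actually installed.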
One of the aspects of IBM FlashSystem that I appreciate is the way that it supplies us with not only powerful and valuable insights, but also complete control of our data storage architecture. IBM FlashSystem has a number of features that allow us to gain a deep understanding of our data. These include:
AI analytics: IBM FlashSystem leverages a powerful AI algorithm that mines more than two exabytes of data, examining it for trends and for potential consequences we did not know could result. IBM FlashSystem then makes it possible for my team to see potential issues before they ever have the chance to harm our business.
Centralized dashboard. IBM FlashSystem has a single dashboard that we can use to keep track of our data storage. All information relevant to the health and status of our data storage is fed to this dashboard, so everything we need to know can be found in one place.
We can also use this solution to easily secure our data against all manner of threats. IBM FlashSystem employs a suite of features that ensure that we always have options when a security-related issue arises. The security features that it offers include:
IBM Spectrum Virtualize software. This software tool works together with a technology called IBM FlashCopy. IBM FlashCopy makes copies of data that can be used to prevent data loss in the case of a system failure. If any data is corrupted or deleted, these copies can be relied upon.
Safeguarded Copy. IBM FlashSystem enables us to create “air gapped” pockets that contain our most valuable data. These pockets cannot be found by hackers if they penetrate our servers and cannot be deleted or changed by a bad actor if our system is breached.
Ultimately, either HPE 3PAR StoreServ or IBM FlashSystem will empower you to take control of every aspect of your data storage and migration process.
Cloud Engineer at a tech services company with 51-200 employees
Jun 17, 2022
When it comes to changing to other storage fabrics (from SCSI to NVMe) or reaching out to other transport layers (Ethernet, InfiniBand, or FC), all units, once initialised, operate only with the modules installed at initialisation time.
So you had better define which type of fabric and what performance requirements you need, which transport layer(s) and protocols you need, and what your current infra supports. Some are more expensive than others. FC is certainly more expensive than Ethernet-based fabrics, though it is far more secure: it does not use IP, so storage traffic is completely isolated from your network/IP stack. The same goes for low-latency storage networks operating over InfiniBand, which come in more expensive due to proprietary switches and InfiniBand NICs.
Nowadays, it is about time to really consider NVMe and decide which type of fabric suits your needs. You have FC-NVMe, NVMe/TCP, NVMe over iWARP, NVMe over InfiniBand, and NVMe over RoCE (RoCEv1/v2).
In the strict sense, choosing one of the fabric options excludes the others. NVMe is about 30% more expensive than the traditional SSD/SAS (SCSI) based storage that has been out till now, which, compared to its gains, is worth it. If you really have big elephants bursting data to your storage, you definitely require NVMe storage, higher bandwidth, and a more efficient transport layer (RDMA, or Remote Direct Memory Access) or FC-NVMe.
FC still uses the same transport layer as back in 1990 and only needed to accommodate the NVMe protocol, whereas all Ethernet-based fabrics had to cover both aspects, and all of them might be subject to changes in the future. So the story of NVMe-oF (NVMe over Fabrics) is not final, and neither are the choices. Some might perform better over time, as the struggle is no longer about the SSD media and the NVMe protocol; the performance differences are now in the transport layer. Still, the advances are so large that going to NVMe is far better than buying another storage array based on SAS/SCSI.
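To make those trade-offs concrete, here is a toy decision helper that simply encodes the guidance above in code: isolation points to FC-NVMe, RDMA needs point to InfiniBand or RoCE, and cost points to NVMe/TCP on standard Ethernet. This is a deliberate simplification for illustration, not sizing advice:

```python
# Toy decision helper encoding the fabric trade-offs described above:
# isolation -> FC-NVMe; lowest latency via RDMA -> InfiniBand or RoCE;
# lowest cost on standard Ethernet -> NVMe/TCP. Illustrative only.
def pick_nvme_fabric(need_isolation: bool, need_rdma: bool,
                     have_infiniband: bool) -> str:
    if need_isolation:
        # FC does not use IP, so storage traffic stays off the LAN stack
        return "FC-NVMe"
    if need_rdma:
        # InfiniBand if the proprietary switches/NICs are in place,
        # otherwise RoCE over converged Ethernet
        return "NVMe/InfiniBand" if have_infiniband else "NVMe/RoCE"
    # Cheapest option: plain NVMe/TCP over the existing Ethernet fabric
    return "NVMe/TCP"

print(pick_nvme_fabric(need_isolation=True, need_rdma=False, have_infiniband=False))
print(pick_nvme_fabric(need_isolation=False, need_rdma=True, have_infiniband=False))
```

A real selection would of course also weigh existing switch hardware, distances, and team skill set, as discussed in the next paragraph.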
So buying a solution is based much more on where your fabric (SAN) is going than on the array itself. You have to figure out which fabric combines fair costs with fair latency/performance for your business. The array you opt for needs to support the fabric of choice that fits your business needs; from there, you have the scale-up/scale-out topic related to storage arrays in general.
All can scale up, replace controller units with more powerful ones, etc. The one that scales out the best and actually allows you to use nearly any backend storage is definitely IBM SVC/IBM Spectrum Virtualize. I find the concept of Dell EMC PowerStore getting close to that, though it is still locked in to the PowerStore controller units and expansions and still does not allow attaching other vendors' backend storage to be virtualised.
Business Development Manager at a tech services company with 501-1,000 employees
Jul 27, 2020
It depends on the use case. Generally, NAS is used to store file-level data and SAN is used to store block-level data: NAS for data like Word and Excel files, SAN for data from database applications. There are unified storage systems that store both file- and block-level data. SAN also has better bandwidth and performance due to connectivity such as Fibre Channel at up to 32 Gbps, while NAS connects over the network or LAN at up to 10 Gbps.
Cloud Engineer at a tech services company with 51-200 employees
May 27, 2021
NAS has no upfront investments; you can use standard NICs in your servers, segment NAS traffic, etc., and you might want to reuse your current switch infra. Still, it is recommended to use an infrastructure separate from the LAN and a larger MTU size (for jumbo frames). In the past, the density of VMs on a NAS solution, compared to a SAN, was lower for a given latency.
SAN has network isolation by default, as it uses SAN switches separate from the LAN. It comes at a higher cost, however, and server HBAs are more expensive. One does require the skillset, as the Fabric OS and its flow-control mechanism are quite different from managing Cisco/HP/Juniper switches. FC SAN is considered faster and, due to the higher initial costs, tends to be seen mostly at larger organisations, likely taking up 80% or more of the storage infra in those organisations. Currently, for some use cases, S3 object storage is changing the game. Traditional SANs for backups (especially long-term archived data) are now losing ground in favour of S3 object storage.
VMware: NFS 4.1 does not support Storage DRS, there is no support for Site Recovery Manager (NFS 3 has it), and there is no Storage I/O Control. One of the most significant changes in v4.1 was the addition of multipathing, which brought better performance and availability through load balancing.
Historically, SAN was the native initial VMware platform, and the so-called VAAI primitives were initially only available on SAN storage arrays. That is why FC SAN is the traditional storage platform for VMware. After some time, NAS stood up and closed the gaps (mainly NetApp did), but the use case for a NAS is CIFS/SMB and NFS services for applications, not running VMs on NFS volumes. Some Microsoft cluster modes are not operable on a NAS solution either.
Well, there are many things to consider, but I will start with scalability.
In HCI solutions, scalability is achieved by adding nodes, while in dHCI (disaggregated HCI: hyper-converged solutions that use a SAN), you can expand the compute nodes or the storage independently. That means dHCI is more flexible, and you can address your compute or storage needs in a tailored way.
The other thing to consider is availability.
HCI solutions base their availability on RAIN (Redundant Array of Inexpensive Nodes). This means that you have more than one copy of your data, located on different nodes. If you experience a failure in a node, your data is protected and accessible. Moreover, it is extremely easy to set up a stretched cluster.
SAN-based architectures usually include just one copy of your data, unless you use more than one storage system and a replication solution.
Another thing to consider is operations. HCI environments are easy to use, set up, and scale. On the other hand, SAN-based solutions require more knowledge and maintenance effort (Fabric OSs to update, HBAs, etc.).
Maybe what I say will be a little redundant.
As mentioned earlier, with new technologies I see no reason not to use HCI.
I think it's an important factor: when you have a small team, you end up opting for a fully integrated solution.
HCI is wonderful; it gives you scalability and redundancy, and there are tools that provide agile backup.
The traditional structure makes many analysts more comfortable, but for small teams it ends up being an overload.
I use both architectures. For large, volatile data volumes, I believe that pure investment in HCI comes at a high cost, as adding storage means adding more hosts.
As for abandoning the SAN you already have: in my opinion, that is something very drastic. Each product has its strengths; replication on the storage side is still my favorite, even though there are very good replication solutions in HCI.
It's worth analyzing the whole picture: the size of the structure, the technical team and its qualifications, and what kind of application you want to run. The financial investment is important, but the cheaper option can end up more expensive in the end.
I've seen companies connecting their SAN to HCI, not always for performance reasons, but because it already exists, or because of low-cost solutions and space requirements.
When everything is new, with HCI it is possible to buy the minimum to start; with a SAN I need to pre-size the number of ports, capacity, processing, and speed that will be used over its growth journey, and this can make the project more expensive.
Business-wise, through direct savings across the architecture (hardware, software, backup, and recovery), hyperconvergence can transform IT organizations from cost centers to frontline revenue drivers. A major issue in traditional IT architecture was that as complexity rose, the focus shifted from business problems to tech problems. The business’s focus should be on what IT can do for the bottom line, not what the bottom line can do for IT.
There are two cost categories to compare:
- Capital expenditures (CAPEX): the one-time purchase and implementation expenses associated with the solution.
- Operational expenditures (OPEX): the running costs incurred for managing, administering, and updating the existing IT infrastructure; together with CAPEX, these make up the total cost of ownership (TCO).
Considering the separate areas of cost reduction discussed above, organizations can evaluate the expense differentials between their traditional infrastructures and the HCI environment.
Hyperconvergence helps meet current and future needs, so it’s essential to calculate the TCO accurately. The TCO of a hyperconverged infrastructure includes annual maintenance fees for data centers and facilities, telecom services, hardware, software, cloud systems, and external vendors. Other costs include staff needed for deployment and maintenance, staff training and efforts to integrate with existing and legacy systems.
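As a rough sketch of that comparison, one might total the cost components like this; all figures below are placeholders for illustration, not real pricing from any vendor:

```python
def tco(capex: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership: one-time CAPEX plus recurring OPEX."""
    return capex + annual_opex * years

# Hypothetical five-year comparison (placeholder figures only):
traditional = tco(capex=500_000, annual_opex=120_000, years=5)  # 3-tier stack
hci = tco(capex=400_000, annual_opex=80_000, years=5)           # HCI cluster

print(traditional)  # -> 1100000
print(hci)          # -> 800000
```

The point of the exercise is that OPEX compounds over the life of the solution, so a platform with lower management overhead can come out ahead even when the up-front purchase prices are similar.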
HCI overcomes the enormous wastage of resources and budgets common in the early phases of traditional infrastructure deployments because their scale dwarfs business needs at the time of purchase. HCI lends itself to incremental and granular scaling, allowing IT to add/remove resources as the business grows.
Whether to go 3 Tier (aka SAN) or HCI boils down to asking yourself what matters the most to you:
- Customization and tuning (SAN)
- Simplicity and ease of management (HCI)
- Single number to call support (HCI)
- Opex vs Capex
- Pay-as-you-grow (HCI)/scalability
- Budget cycles
If you are a company that only gets budget once every 4-5 years and can't get incremental capital expenditures for storage, pay-as-you-grow becomes less viable, and HCI is designed with pay-as-you-grow in mind. That doesn't rule out HCI, but it does reduce some of the value gained. Likewise, if you are on a budget cycle that replaces storage and compute at different times, and you have no means to repurpose them, HCI is a tougher sell to upper management: HCI requires you to replace both at the same time, and sometimes capital budgets don't work out.
There are also some workloads that will work better on a 3 Tier solution vs HCI and vice versa. HCI works very well for anything but VMs with very large storage footprints. One of the key aspects of HCI performance is local reads and writes; a workload that is a single large VM will require essentially two full HCI nodes to run and will need more storage than compute. Video workloads come to mind: bodycams for police, surveillance cameras for businesses/schools, graphic editing. Those workloads don't reduce (deduplicate or compress) well and are better suited for a SAN with very few features, such as an HPE MSA.
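To illustrate why a storage-heavy VM fits HCI poorly, here is a toy sizing calculation; the per-node specs are made-up numbers, not any vendor's actual configuration:

```python
import math

def nodes_needed(vm_storage_tb: float, vm_vcpus: int,
                 node_storage_tb: float, node_vcpus: int) -> int:
    """Nodes required is driven by whichever resource runs out first."""
    by_storage = math.ceil(vm_storage_tb / node_storage_tb)
    by_compute = math.ceil(vm_vcpus / node_vcpus)
    return max(by_storage, by_compute)

# A single 40 TB surveillance-video VM on hypothetical 20 TB / 32 vCPU nodes:
# storage alone demands two full nodes even though 8 vCPUs fit on one.
print(nodes_needed(vm_storage_tb=40, vm_vcpus=8,
                   node_storage_tb=20, node_vcpus=32))  # -> 2
```

Because HCI nodes bundle compute and storage, the workload ends up paying for CPU it never uses; a SAN lets you buy the extra capacity alone.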
HCI runs VDI exceptionally well, and nobody should ever do 3 Tier for VDI going forward. General server virtualization can realize the value of HCI, as it radically simplifies management.
3 Tier requires complex management and time, as you have to manage the storage, the storage fabric, and the hosts separately, with different toolsets. This also leads to support issues, as you will frequently see the three vendors' support teams blame each other. With HCI, you call a single number, and they support everything. You can drastically reduce your opex with HCI by simplifying support and management. If you're planning for growth up front and cannot pay as you grow, 3 Tier will probably be cheaper. HCI gives you the opportunity not to spend capital if you end up missing growth projections, and to grow past planned growth much more easily, as adding a node is much simpler than expanding storage/networking/compute independently.
In general, it's best to start with HCI and work to disqualify it rather than the other way around.
There are multiple factors you should look at while selecting one over the other.
1. Price: The price for HCI is cheaper if you are refreshing your complete infrastructure stack (compute/storage/network); however, if you are just buying individual components, such as compute or storage only, then 3-tier infrastructure is cheaper.
2. Scalability: HCI is highly and easily scalable.
3. Support: On a 3-tier architecture, you have multiple vendors/departments to contact for support on the solution, whereas with HCI, you contact a single vendor that addresses all your issues with the solution.
4. Infrastructure: For a very small infrastructure, a 3-tier architecture based on an iSCSI SAN can be a little cheaper. However, for a medium or large infrastructure, HCI comes out cheaper every time.
5. Workload type: If you are using VDI, I strongly recommend HCI. Similarly, for a passive secondary site, 3-tier could be OK. Run benchmarking tools to understand your requirements.
I am sure HCI can do everything though.
There are so many variables to consider.
First of all, keep in mind that a trend is not a rule; your needs should be the basis of the decision, so you don't have to choose HCI just because it's the new kid on the block.
To start, think with your pocket: SAN has a high cost if you are starting the infrastructure from scratch. Cables, switches, and HBAs are components that cost more than traditional LAN components, and SAN requires more experienced experts to manage the connections and issues. But SAN has particular benefits in sharing storage and server functions: for example, you can have disk and backup on the same SAN and use special backup software and functionality to move data between storage components without directly impacting server traffic.
SAN has some details to consider regarding cables, such as distance and speed. Cable quality (purity) is critical to achieving distance; the greater the distance, the lower the supported speed, and transceiver costs can be the worst nightmare. But SAN has the capability to connect storage boxes hundreds of miles apart, while the LAN cables of HCI have a 100 m limit, unless you consider a WAN to connect everything, or repeaters or cascaded switches, which add an element of risk to the scenario.
Think about required capacity: do you need TB or PB? A few dozen TB can be fine on HCI, but if there are PBs, think SAN. What about availability? Several common nodes replicating around the world while fulfilling the latency rules can be handled with HCI, but if you need the highest availability while replicating a high amount of data, choose a SAN.
Speed, if it is a pain in the neck: the LAN for HCI starts at a minimum of 10 Gb and can rise up to 100 Gb if you have the money; SAN is available only up to 32 Gb, and your storage controller must be the same speed, which can drive the cost sky-high.
Scalability: HCI can have dozens of nodes replicating and adding capacity, performance, and availability around the world. With SAN storage you have a limited number of replications between storage boxes; depending on the manufacturer, you can normally have at most four copies of the same volume distributed around the world, and scalability goes up to the controllers' limits: it is a scale-up model. HCI is a scale-out model for growth.
Functionality: SAN storage can handle in hardware things like deduplication, compression, and multiple kinds of traffic (files, blocks, or objects); HCI handles just blocks and needs extra hardware to accelerate some processes, like dedupe.
HCI is a way to share storage over the LAN and has dependencies like the hypervisor and software or hardware accelerators. SAN is the way to share storage with servers; it is like a VIP lounge: there are exclusive server guests sharing the buffet, and they can share the performance of hundreds of hard drives to support the most critical response times.
It all depends on how you understand and use HCI:
If you see HCI as an integrated solution where storage is integrated into the servers and software-defined storage is used to create a shared pool of storage across compute nodes, performance will be the deciding factor between HCI and traditional SAN. Most vendors' HCI solutions write data two or three times for redundancy across compute nodes, so there is a performance impact on the applications due to the latency of the network between the nodes. Putting in 25 Gb networks, as some vendors recommend, is not always a solution, since it is not the bandwidth but the latency of the network that defines the performance.
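A rough sketch of why upgrading bandwidth does not fix a latency-bound replicated write; all numbers below are illustrative, not measurements:

```python
# Time to replicate a small write to a remote node: wire time plus round trip.
# All numbers are illustrative, not measurements of any product.
def ack_time_us(payload_kb: float, link_gbps: float, rtt_us: float) -> float:
    """Serialization time on the link plus the network round trip, in us."""
    serialization_us = payload_kb * 8 / link_gbps  # kilobits over Gbps -> us
    return serialization_us + rtt_us

# With a 50 us round trip, moving a 4 KB write from 10 GbE to 25 GbE
# saves only about 2 us: the round trip, not the bandwidth, dominates.
print(ack_time_us(4, 10, 50))  # about 53.2
print(ack_time_us(4, 25, 50))  # about 51.3
```

For small redundant writes, the acknowledgment time is dominated by the round trip through the switches, which is why a faster link barely moves application latency.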
Low-latency application requirements might push customers to traditional SAN in this case. If you use HCI for ease of management through a single pane of glass, I see many storage vendors delivering plugins for server and application software, eliminating the need to use legacy SAN tools to create volumes and present them to the servers. Often it is possible to create a volume directly from within the hypervisor console and attach it to the hypervisor servers. So for this scenario, I don't see a reason to choose between one or the other.
Today there is a vendor (HPE) that combines a traditional SAN with an HCI solution, calling it dHCI. It gives you an HCI user experience, independent scalability of storage and compute, and the low latency that is often required. In time, I expect other vendors to follow the same path and deliver these kinds of solutions as well.
Scalability and agility are the main factors to consider when deciding between SAN and HCI. SAN infra requires a huge amount of work when it reaches an end-of-support or end-of-life situation. Budgeting and procurement frequency also play a role.
Also, the limitation of HCI to a single datastore in a VMware environment is a problem when disk corruption or data corruption happens.
If things are already working in a traditional way and not much growth is expected, then SAN is suitable. However, if things are on a cloud journey or already virtualized, then HCI suits better.
There are two kinds of SAN (FC SAN and IP SAN); both use the SCSI-3 protocol:
- FC-SAN achieves a bandwidth of 16 and 32 Gbps.
- IP SAN achieves a bandwidth of 1, 10, and 25 Gbps.
SAN generally uses CI (Converged Infrastructure): “n” COMPUTE nodes, “n” NETWORK nodes, and “n” STORAGE nodes.
HCI (Hyper-Converged Infrastructure) uses only a GbE network (1, 10, and 25 Gbps), through the SCSI-3 protocol. Each node is connected into an aggregate of nodes (a cluster of up to 64 nodes), and each node provides all three functions (COMPUTE + NETWORK + STORAGE). These nodes are managed by a hypervisor (VMware, Nutanix, ...).
If STORAGE capacity grows rapidly, HCI (Hyper-Converged Infrastructure) will not be the most suitable solution!
The two main problems are the NETWORK and the SCSI-3 protocol: high latency, and a limit of 25 Gbps!
The choice is more philosophical than deterministic; it depends on what you're going to do on top of this new infrastructure. All the answers are excellent, and I did not have all these aspects in mind, but before choosing this or that: what do you need the SAN or HCI for? Who is going to implement and maintain the solution?
There are many factors to be considered before taking any decisions.
What data is to be stored on the device? Depending on that, we have to decide whether to go for dedicated storage or an HCI solution. E.g., for file data only, an HCI solution will not be the correct fit. We also have to take the cost of the device into consideration.
What about scalability? If you require more scalability or expandability, then cost factors come into the picture again, because some HCI solutions need capacity-based licenses, and the solutions that do not require capacity-based licenses have a higher base cost. You also have to consider the per-TB cost. In HCI solutions, storage capacity is limited.
Performance: HCI solutions are not able to deliver for specialized workloads where more performance is required.
Beyond this, many other factors need to be taken into consideration, such as the backup solution and the DR site.
It is completely a case-by-case decision.
In my opinion, the key factors to consider between traditional SAN and HCI are:
- Existing infrastructure: If the business already has legacy systems that need to be reused in the new infrastructure, that can condition your decision.
- Performance: Even with new technologies like SSDs and NVMe, the performance of a SAN with FC or iSCSI is significantly better than HCI, so if the business apps need maximum performance, then SAN is the choice.
- Scalability: HCI solutions evolve fast and now scale better, but in any case, SAN technology has proven good scalability over the years.
- Manageability: Consider the features for managing and monitoring your infrastructure, as well as the knowledge profile of your IT team; new tools and architectures need new skills and new training.