Founder, Professional Services Director, Lead Architect at Falcon Consulting
Real User
Top 20
Sep 4, 2021
Sustainable performance (practical delivered IOPS vs. calculated IOPS, allocated network channels and bandwidth utilisation ratio), security, robustness and recoverability.
Also, the capability to interact with networking hardware acceleration such as flow control, IO buffer management and traffic shaping; enterprise storage features like sync/async replication, deduplication, snapshots and snapshot integration; and flexibility in storage volume formation, such as media type and volume structure (RAID level, multi-tiering, multi-level caching).
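Because delivered IOPS rarely match the raw calculated figure, it helps to sanity-check the gap. Below is a minimal sketch in Python of the standard RAID write-penalty formula for estimating effective IOPS; all the numbers are hypothetical examples, not vendor figures.

```python
# Sketch: estimating effective (delivered) IOPS from raw per-disk IOPS
# with the standard RAID write-penalty formula. Numbers are hypothetical.

RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def effective_iops(disks, iops_per_disk, raid_level, read_fraction):
    """Effective IOPS = raw IOPS / (read% + write% * write penalty)."""
    raw = disks * iops_per_disk
    penalty = RAID_WRITE_PENALTY[raid_level]
    return raw / (read_fraction + (1.0 - read_fraction) * penalty)

# Example: 8 disks at 150 IOPS each, RAID 5, 70% read workload.
print(round(effective_iops(8, 150, raid_level=5, read_fraction=0.7)))
# ~632 IOPS delivered vs. 1,200 raw -- the kind of gap referred to above.
```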
1. Multi-protocol capabilities:
A reliable software-defined storage solution must support a variety of protocols to collect, store and retrieve data. A multi-purpose software storage solution should support the iSCSI and Fibre Channel protocols to handle block-level application workloads, as well as the NFS and SMB protocols for distributed file systems. Multi-protocol capability enables end-users to create and scale unified storage pools that logically configure SAN+NAS storage volumes to support various data types and applications.
End-users can thus leverage accelerated performance by integrating cost-effective software storage virtualization, and as your organization's data storage needs grow, SDS allows you to deploy storage volumes without fretting about whether those volumes will integrate smoothly with other systems.
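To make the unified SAN+NAS pool idea concrete, here is a toy Python model of a pool that hands out both block and file volumes; the class and names are hypothetical illustrations, not any vendor's API.

```python
# Toy model of a unified storage pool serving both block (SAN) and
# file (NAS) volumes. Names are hypothetical, not a real vendor API.
from dataclasses import dataclass, field

BLOCK_PROTOCOLS = {"iSCSI", "FC"}   # block-level access
FILE_PROTOCOLS = {"NFS", "SMB"}     # file-level access

@dataclass
class UnifiedPool:
    capacity_gb: int
    volumes: dict = field(default_factory=dict)

    def provision(self, name, size_gb, protocol):
        if protocol not in BLOCK_PROTOCOLS | FILE_PROTOCOLS:
            raise ValueError(f"unsupported protocol: {protocol}")
        used = sum(v["size_gb"] for v in self.volumes.values())
        if used + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        kind = "SAN" if protocol in BLOCK_PROTOCOLS else "NAS"
        self.volumes[name] = {"size_gb": size_gb, "protocol": protocol, "kind": kind}

pool = UnifiedPool(capacity_gb=1000)
pool.provision("db-lun", 200, "iSCSI")  # block volume for a database
pool.provision("home", 300, "SMB")      # file share for users
```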
2. Cloud Storage Integration:
In any organization, data follows a typical "life cycle": it begins as "hot" business-critical data, then cools and isn't accessed as frequently. If not managed properly, this life cycle can turn into a big headache in overcrowded high-performance arrays. The problem is not a new one, but the solution should be advanced enough to sort it out efficiently and effectively.
The best software storage solutions allow end-users to seamlessly integrate the cloud of their choice into their virtualized infrastructure, so storage professionals can easily move cold files between on-prem and cloud storage and manage their data under a unified strategy.
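As a concrete illustration of managing that life cycle, here is a minimal tiering sketch in Python: files untouched for a set number of days are pushed to an S3-compatible cloud bucket. The bucket name, directory and threshold are hypothetical, and it assumes boto3 is installed with credentials already configured.

```python
# Minimal cold-data tiering sketch: move files not accessed for N days
# to an S3-compatible cloud tier. Paths/bucket are hypothetical; assumes
# boto3 is installed and credentials are configured in the environment.
import os
import time
import boto3

COLD_AFTER_DAYS = 90  # hypothetical policy threshold
s3 = boto3.client("s3")

def tier_cold_files(local_dir, bucket):
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_atime < cutoff:    # not read recently
                key = os.path.relpath(path, local_dir)
                s3.upload_file(path, bucket, key)  # copy to cloud tier
                os.remove(path)                    # free the hot array

# tier_cold_files("/mnt/primary", "example-cold-tier-bucket")
```

A real SDS product would do this transparently at the block or volume level; the sketch only shows the policy logic.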
3. Advanced Data Services:
Advanced data services are the premium features offered by enterprise-level software storage solutions. Vendors like StoneFly (SCVM), NetApp and DataCore offer remarkable data services such as snapshot technology, async/sync replication, data deduplication and reliable security features that deliver considerable benefit.
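To show what one of these services does under the hood, here is a minimal block-level deduplication sketch in Python: fixed-size blocks are content-hashed and only unique blocks are stored. Real dedup engines are far more sophisticated (variable-length chunking, compression, global indexes); this is only the core idea.

```python
# Minimal fixed-block deduplication sketch: each unique block is stored
# once, keyed by its content hash; a "recipe" of hashes rebuilds the data.
import hashlib

BLOCK_SIZE = 4096

def dedup_store(data, store):
    """Split data into blocks, keep only unique ones, return the recipe."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # write only if unseen
        recipe.append(digest)
    return recipe

def rehydrate(recipe, store):
    return b"".join(store[d] for d in recipe)

store = {}
payload = b"A" * 8192 + b"B" * 4096  # two identical 'A' blocks + one 'B'
recipe = dedup_store(payload, store)
assert rehydrate(recipe, store) == payload
print(f"{len(recipe)} logical blocks, {len(store)} stored")  # 3 vs. 2
```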
Be as detached as possible from the hardware, and support a geo cluster without a witness. Today, all solutions normally offer a good level of performance. StarWind offers very good performance while being truly detached from the hardware, and its HeartBeat link option allows us to do without a third site.
As per my understanding, keep the following in mind:
Performance: SDS (software-defined storage) adds overhead that can reduce the performance (e.g., the IOPS) of the underlying physical storage devices; it should still perform well with any RAID configuration.
Compatibility: It can be compatible with all types of hardware and operating systems, but I would suggest a bare-metal installation, because it reduces licensing costs and gives better system performance.
Security: It should include all the basic security and encryption methods, such as locking the hard drive or OS after failed login attempts.
Features: SDS must have AD integration, the ability to bind drives/shared paths to network classes, IPs or MAC addresses (which makes it more secure; a minimal check is sketched below), drive locking, etc.
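To illustrate the IP/network binding point from the features list above, here is a minimal Python sketch that gates share access by client subnet; the share names and subnets are hypothetical examples.

```python
# Minimal sketch of binding shares to network classes: only clients whose
# IP falls inside an allowed subnet may attach. Values are hypothetical.
import ipaddress

SHARE_ACLS = {
    "finance": [ipaddress.ip_network("10.10.20.0/24")],
    "public":  [ipaddress.ip_network("10.0.0.0/8")],
}

def may_attach(share, client_ip):
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in SHARE_ACLS.get(share, []))

print(may_attach("finance", "10.10.20.15"))   # True
print(may_attach("finance", "192.168.1.15"))  # False
```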
For object storage, you need to scale out objects, where the system creates and allocates a unique identifier to each object.
You need to evaluate which one creates a highly available scale-out file share to use with application storage, and also which SDS products are able to run on a server OS and in a VM, either on-premises or in the cloud.
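To make the unique-identifier point concrete, here is a simplified Python sketch of scale-out object placement: each object gets a system-allocated UUID and is mapped to a node by hashing that ID. Real object stores use richer schemes (consistent hashing rings, CRUSH maps) to limit rebalancing; the node names here are hypothetical.

```python
# Simplified scale-out object placement: allocate a unique ID per object
# and hash it onto a node. Illustrative only; not a specific product.
import hashlib
import uuid

NODES = ["node-a", "node-b", "node-c"]  # hypothetical cluster

def put_object(data):
    object_id = str(uuid.uuid4())        # system-allocated unique ID
    digest = hashlib.sha256(object_id.encode()).digest()
    node = NODES[int.from_bytes(digest[:4], "big") % len(NODES)]
    # A real system would now write (and replicate) the data to the
    # chosen node; here we only report the placement decision.
    return object_id, node

oid, node = put_object(b"example payload")
print(f"object {oid} -> {node}")
```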
PreSales Manager at a tech services company with 1-10 employees
User
Mar 13, 2020
You have to understand converged infrastructure vs. hyperconverged infrastructure. The differences lie mainly in the SAN: in a converged environment the SAN is a single point of failure, whereas an HCI solution has no single point of failure and can reach the same number of IOPS as a converged infrastructure, or higher. However, many companies are slow to make the change because of the initial investment cost; in the long run companies choose HCI, and determining whether an HCI solution is the right one requires a separate analysis.
Solutions Architect/Team Lead - Business Data and Data Protection at a tech consulting company with 501-1,000 employees
Real User
Nov 19, 2019
The aspect that is most important depends entirely upon the business objectives and needs of the client. Some need scalability, some need compatibility with a specific application, some need specific hypervisors, and some need to focus on DR/backup capability. It's not a great question.
CEO and President DataCore Software Corporation at DataCore Software
Vendor
Aug 19, 2017
Start with the economics. In your evaluation criteria, stress not only the new features and capabilities being touted, but how much disruption the solution will cause to your current environment, whether it protects and leverages your existing investments, whether it is software that can bridge different deployment models (server SAN, pure software, appliance, hyperconverged or hybrid cloud), since we live in a 'hybrid' world, and how much true agility it brings to meet change and growth. Too often vendors tout specific new models or features and describe these new 'shiny objects' as panaceas, but the new often comes with a 'rip and replace' mindset that forgets about existing investments, and about how to add agility and future-proofing to your infrastructure so it can readily accept new technologies and absorb them within the overall management, rather than creating yet another independent silo to manage. Look at the economics and think big picture to avoid stop-gap solutions that actually add complexity and cost.
I highly recommend StarWind Virtual SAN as an SDS solution. It can work with any type of hardware, enabling us to build a specific hardware layer based on each customer's unique requirements. We also now have the ability to migrate live machines in our environment. I would say that StarWind helps us make more cost-effective use of our existing hardware and leverage our current infrastructure at a higher level than ever before. Once we deployed it, we immediately noticed huge performance gains, even on older hardware.
I would say that the top benefits of StarWind Virtual SAN are:
Cost effective: This is one of the cheapest products in the market. Our ROI is phenomenal. In addition to its low price, we also save money on hardware since the tool’s virtualization is so effective.
Stability: This is a robust and solid solution. We’ve never had issues with performance or stability. Since we have deployed it to production, we have had 100% uptime.
Scalability: The scalability is excellent. Right now, we have over 15TB of disks.
Top-notch support: StarWind's support is excellent. They have very fast response times and very good knowledge of the system. Support is available via teleconference or online. They also assist with testing and implementation.
Easy configuration and management: StarWind’s user interface is easy to work with.
I would like to see more available documentation for the solution. Better documentation would have quickly solved some issues I encountered when I started using the tool. Other than that, this solution brings many valuable features to the table. StarWind improves all areas in our organization, including performance, data availability, and data security. I rate it a 10/10.
Regional Manager/ Service Delivery Manager at a tech services company with 201-500 employees
Nov 26, 2021
Hi @Evgeny Belenky,
HERE ARE THE STORAGE REQUIREMENTS FOR DEEP LEARNING
Deep learning workloads are a special kind of beast: all DL data is considered hot data, which raises a dilemma, since it rules out any sort of tiered storage management. The SSDs normally used for hot data under conventional conditions simply cannot move the data required for the millions, billions, or even trillions of metadata transfers an ML training model needs to classify an unknown item from only a limited number of examples.
Below are a few examples of the storage requirements needed to avoid the dreaded curse of dimensionality.
COST EFFICIENCY
Enormous AI data sets become an even bigger burden if they don't fall within the budget set aside for storage. Anyone who has managed enterprise data for any amount of time knows that highly scalable systems have always been more expensive on a capacity-versus-cost basis. The ultimate deep learning storage system must be both affordable and scalable to make sense.
PARALLEL ARCHITECTURE
In order to avoid the dreaded choke points that stunt a deep learning machine's ability to learn, it's essential for data sets to have a parallel-access architecture.
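As a small illustration of that access pattern, the Python sketch below reads training shards concurrently rather than serially; the shard paths are hypothetical, and a parallel storage architecture is what keeps such concurrent reads from becoming a choke point.

```python
# Illustration of parallel data access: read training shards concurrently
# instead of one at a time. Shard paths are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

SHARDS = [f"/data/train/shard-{i:04d}.bin" for i in range(16)]

def read_shard(path):
    with open(path, "rb") as f:
        return len(f.read())  # stand-in for real decoding/parsing

def load_parallel(paths, workers=8):
    # Each worker issues independent I/O; a parallel-access back end can
    # serve these requests simultaneously instead of serializing them.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_shard, paths))

# sizes = load_parallel(SHARDS)
```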
DATA LOCALITY
While many organizations may opt to keep some of their data in the cloud, most of it should remain on-site in a data center. There are at least three reasons for this: regulatory compliance, cost efficiency, and performance. For this reason, on-site storage must rival the cost of keeping the data in the cloud.
HYBRID ARCHITECTURE
As touched on above, different types of data have unique performance requirements. Storage solutions should therefore offer the right mixture of storage technologies instead of an asymmetrical strategy that will eventually fail. It's all about meeting ML storage performance and scalability requirements simultaneously.
SOFTWARE-DEFINED STORAGE
Not all huge data sets are the same, especially in terms of DL and ML. While some can get by with the simplicity of pre-configured machines, others need hyper-scale data centers featuring purpose-built server architectures set in place beforehand. This is what makes software-defined storage solutions the best option.
Our X-AI Accelerated is an any-scale DL and ML solution that offers unmatched versatility for any organization's needs. X-AI Accelerated was engineered from the ground up and optimized for "ingest, training, data transformations, replication, metadata, and small data transfers." Not only that, but RAID Inc. offers all the aforementioned requirements in platforms such as the all-flash NVMe X2-AI/X4-AI and the X5-AI hybrid flash and hard drive storage platform.
Both the NVMe X2-AI/X4-AI and the X5-AI support parallel access to flash and deeply expandable HDD storage as well. Furthermore, the X-AI Accelerated storage platform permits one to scale out from only a few TBs to tens of PBs.
IOPS, restoration, and backups meeting RTO and RPO.
The scalability and flexibility along with the integrations and costs.
Ease of use, availability, performance and support.
Built-in reliable performance metrics using recognised testing criteria and testing methodology.
Easy to use and configure.
Its performance and weaknesses.
- Limit the overhead added by the SDS software, preserving as much of the original storage performance as possible.
- Stability: SDS becomes a critical component of the infrastructure, so it must actively contribute to increasing the number of nines (see the quick conversion below).
- Support (a very important aspect): it has to be fast, reliable and flexible.
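Since "nines" are the usual way to state that stability requirement, here is a quick worked conversion in Python of availability percentages into allowed downtime per year:

```python
# Quick conversion of availability ("nines") into allowed downtime per
# year, to make the stability requirement above concrete.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines ({availability}) -> {downtime:.1f} min/year")
# 3 nines ~525.6, 4 nines ~52.6, 5 nines ~5.3 minutes of downtime/year
```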
(1) Stability
(2) Support
(3) Performance
We were recently looking for a solution with excellent reliability from a vendor that was not likely to disappear or drop the solution.
Also, we were price-sensitive so this played a large factor in our decision-making process.
Rather than evaluating based solely on feature set, make sure the SDS platform will meet all of the business objectives.
After evaluating all of the requirements, the first and main aspect should be reliability and data integrity.
Do a careful POC and make absolutely sure the solution does not corrupt data when you have a major storage issue, like an array failure.
Company reputation, costs, scalability, and features for cloud or DR.
Price and support for when problems happen.
Business needs are the focus.
Generally, when evaluating SDS we look for: resilience, simplified management and performance.