it_user315723 - PeerSpot reviewer
Systems Engineer at a media company with 1,001-5,000 employees
Vendor
We've gotten rid of shared storage, which is better than an all-flash array. I've heard, however, that maintenance can cause stability issues.

What is most valuable?

Getting rid of shared storage, especially with VSAN 6. That would be even better than having an all-flash array.

What do I think about the stability of the solution?

I hear about a lot of stability issues whenever maintenance comes up, but the people having great experiences are not the ones speaking the loudest, so it can be hard to tell.

What do I think about the scalability of the solution?

I haven't looked at the configuration maximums, but it seems like you can scale clusters up considerably with vSphere 6.

How are customer service and support?

Customer Service:

In general, VMware customer support is world class. Response time is really quick – you get connected to experts much faster than in other companies, like Microsoft for example.

Technical Support:

All I've seen is community support, especially from bloggers and community experts. I haven't had any direct experience with official technical support.


How was the initial setup?

It's not very different from vSphere 3. If you're comfortable with VMware, it's straightforward. From what I've seen, it's a simple install once you have all the hardware, although I have heard you have to tweak it performance-wise.

What other advice do I have?

Support is up there in the top five things to look at: whether you can call someone, whether there are online communities, and whether there is easy access to articles. I would also add whether you can quickly get through to someone who has deep knowledge of the product.

On stability, the issue we have run into is that some vendors are fly-by-night, brand-new startups, and you can get stranded without support.

You need to vet the company; they need to still be around to help you down the road. Also, peer reviews are very important – invaluable. Salespeople will tell you anything, so we look beyond whitepapers and vendor-supplied information. Google is your friend.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
IT Administrator and Sr. VMware Engineer at a retailer with 501-1,000 employees
Real User
It supports two architectures (hybrid and All-Flash), which is useful for all virtualized applications, including business-critical applications.

Originally posted in Spanish at https://www.rhpware.com/2015/02/introduccion-vmware...

The second generation of Virtual SAN ships with vSphere 6.0 and shares the same version number. The jump from version 1.0 (vSphere 5.5) to 6.0 was really worth it: this second generation of converged storage, integrated into the VMware hypervisor, significantly increases performance and adds features aimed at much larger, higher-performance enterprise workloads, including Tier 1 and business-critical applications.



Virtual SAN 6.0 delivers a new all-flash architecture that provides high performance and predictable sub-millisecond response times for almost all business-critical applications. This version also doubles scalability, to 64 nodes per cluster and up to 200 VMs per host, and improves snapshot and cloning technology.

Performance characteristics

The hybrid architecture of Virtual SAN 6.0 delivers nearly double the performance of the previous version, and the all-flash architecture delivers four times the performance, measured in the IOPS obtained in clusters running similar workloads with predictable, low latency.

Because the hyper-converged architecture is embedded in the hypervisor, it optimizes the I/O path efficiently and dramatically minimizes the impact on the CPU, an advantage over products from other vendors. The distributed, hypervisor-based architecture reduces bottlenecks, allowing Virtual SAN to move data and run I/O operations in a much more streamlined way at very low latencies, without compromising the compute resources of the platform or the VM consolidation ratio. The Virtual SAN datastore is also highly resilient, preventing data loss in the event of a physical failure of a disk, host, network, or rack.

The Virtual SAN distributed architecture allows you to scale elastically without interruption. Capacity and performance can be scaled at the same time when a new host is added to a cluster, and can also be scaled independently simply by adding disks to existing hosts.

New capabilities

The major new capabilities of Virtual SAN 6.0 include:

  • Virtual SAN All-Flash architecture: Virtual SAN 6.0 can create an all-flash architecture in which high-performance, write-intensive solid-state devices (such as PCI-E cards) serve as the write cache, while more economical flash devices provide data persistence, achieving high performance at an affordable cost

Virtual SAN 6.0 All-Flash achieves predictable performance of up to 100,000 IOPS per host with sub-millisecond response times, making it ideal for critical workloads.

Doubling the scalability

This version doubles the limits of the previous version:

  • Scaling up to 64 nodes per cluster
  • Scaling up to 200 VMs per host, in both hybrid and All-Flash architectures
  • Virtual disk size increased to 62 TB

Performance improvements

  • Double the IOPS with the hybrid architecture: Virtual SAN 6.0 Hybrid achieves more than 4 million IOPS for read-only workloads and 1.2 million IOPS for mixed workloads on a 32-host cluster
  • Quadruple the IOPS with the All-Flash architecture: Virtual SAN 6.0 All-Flash achieves up to 100,000 IOPS per host
  • Virtual SAN File System: the new disk format enables more efficient operations and higher performance, and makes scaling much simpler
  • Virtual SAN Snapshots and Clones: highly efficient snapshots and clones are supported, with up to 32 snapshots per VM and 16,000 snapshots per cluster
  • Rack fault tolerance: Virtual SAN 6.0 Fault Domains add tolerance of rack-level and power failures, in addition to disk, network, and host hardware failures
  • Support for high-density Direct-Attached JBOD disk systems: you can manage external disk enclosures and eliminate the costs associated with blade-based architectures
  • Capacity planning: you can run "what if" scenario analyses and generate reports on the usage and capacity utilization of a Virtual SAN datastore when a virtual machine is created with associated storage policies
  • Hardware-based checksum support: limited support is provided for hardware-based checksums to detect corruption and data-integrity problems
  • Improved disk serviceability: troubleshooting services are added for the drives to give customers the ability to identify and fix disks attached directly to hosts:
  • LED fault indicators: magnetic or solid-state devices with permanent faults light an LED so they can be identified quickly and easily
  • Manual LED operation: LEDs can be turned on or off manually to identify a particular device
  • Mark disks as SSD: devices not automatically recognized as SSDs can be tagged as such
  • Mark disks as local: unrecognized flash devices can be tagged as local disks so that vSphere hosts recognize them
  • Default storage policy: created automatically when Virtual SAN is enabled on a cluster; this default policy is used by VMs that have no storage policy assigned
  • Evacuation of disks and disk groups: data is evacuated from disks or disk groups before they are removed from the system, preventing data loss
  • Virtual SAN Health Services: this service is designed to detect problems and generate health reports for vSphere administrators about Virtual SAN 6.0 subsystems and their dependencies, such as:
    • Cluster health
    • Network health
    • Data health
    • Limits health
    • Physical disk health


vSphere requirements

Virtual SAN 6.0 requires vCenter Server 6.0; both the Windows version and the vCenter Server Appliance can manage Virtual SAN. Virtual SAN 6.0 is configured and monitored exclusively through the vSphere Web Client. It also requires a minimum of three vSphere hosts with local storage. This number is not arbitrary: it allows the cluster to meet the fault-tolerance requirement of surviving at least one host, disk, or network failure.
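To make the three-host minimum concrete, here is a minimal sketch (my addition, based on the commonly cited vSAN sizing rule rather than anything in this article): tolerating n failures with RAID-1 mirroring needs n + 1 data replicas plus n witness components, each on a distinct host, i.e. 2n + 1 hosts.

```python
def min_hosts(failures_to_tolerate: int) -> int:
    """Minimum vSphere hosts for a vSAN cluster using RAID-1 mirroring.

    Tolerating n failures requires n + 1 data replicas plus n witness
    components, each placed on a distinct host: 2n + 1 hosts in total.
    """
    n = failures_to_tolerate
    return 2 * n + 1

# FTT=1 (the default) yields the familiar three-host minimum.
assert min_hosts(1) == 3
assert min_hosts(2) == 5
```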

Storage System Requirements

Disk controllers

Each vSphere host contributing storage to the Virtual SAN cluster requires a disk controller, which can be a SAS or SATA HBA or a RAID controller. A RAID controller, however, must operate in one of the following modes:

  • Pass-through
  • RAID 0

Pass-through (JBOD or HBA) mode is preferred for Virtual SAN 6.0, as it lets Virtual SAN manage data placement according to the storage-policy attributes and performance requirements defined for each virtual machine.

Magnetic devices

When the hybrid architecture of Virtual SAN 6.0 is used, each vSphere host must have at least one SAS, NL-SAS, or SATA disk in order to participate in the Virtual SAN cluster.

Flash devices

In Virtual SAN 6.0, flash devices can be used both as a caching layer and for persistent storage. In hybrid architectures, each host must have at least one flash-based device (SAS, SATA, or PCI-E) in order to participate in the Virtual SAN cluster.

In the All-Flash architecture, each vSphere host must have at least one flash device marked for capacity and one for performance (cache) in order to participate in the Virtual SAN cluster.

Networking requirements

Network Interface Cards (NIC)

In hybrid Virtual SAN architectures, each vSphere host must have at least one 1 Gb or 10 Gb network adapter; VMware recommends 10 Gb.

All-Flash architectures support only 10 Gb Ethernet NICs. For redundancy and high availability, you can configure NIC teaming per host; note that NIC teaming is used for availability, not for link aggregation (performance).

Virtual Switches

Virtual SAN 6.0 is supported by both VMware vSphere Distributed Switch (VDS) and the vSphere Standard Switch (VSS). Other virtual switches are not supported in this release.

VMkernel network

You must create a VMkernel port on each host and tag it for Virtual SAN traffic. This interface is used for intra-cluster communication as well as for read and write operations whenever a vSphere host owns a particular VM but the actual data blocks live on a remote host in the cluster.

In that case, I/O must travel over the network between cluster hosts. If this interface is created on a vDS, you can use Network I/O Control to configure shares or reservations for Virtual SAN traffic.
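For illustration only (not part of the original article), here is a minimal pyVmomi sketch of tagging an existing VMkernel adapter for Virtual SAN traffic; the vCenter address, credentials, and the vmk1 device name are hypothetical placeholders, and the adapter is assumed to already have an IP on the storage network.

```python
# Hypothetical sketch using pyVmomi (the VMware vSphere Python SDK).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Tag vmk1 for vSAN traffic on every host in the inventory.
        host.configManager.virtualNicManager.SelectVnicForNetConfig(
            nicType="vsan", device="vmk1")
finally:
    Disconnect(si)
```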

Conclusion

This second generation of Virtual SAN is an enterprise-class, hypervisor-level storage solution that combines the compute and storage resources of the hosts. With its two supported architectures (hybrid and All-Flash), Virtual SAN 6.0 meets the demands of all virtualized applications, including business-critical applications.

Without a doubt, Virtual SAN 6.0 realizes VMware's vision of Software-Defined Storage (SDS), offering great benefits both to customers and to the vSphere administrators who face new challenges and complexities every day. It is an architecture that will change how we think about storage systems from now on.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Cloud consultant at a tech services company with 51-200 employees
Consultant
It gives us the ability to manage capacity and performance in a linear fashion, but management (dashboard, alerts, monitoring) needs improvement.

What is most valuable?

vMotion and the Distributed Resource Scheduler (DRS) load-balancing resources are the most valuable features.

How has it helped my organization?

We can manage capacity and performance in a linear fashion.

We get better performance with better cost efficiency.

What needs improvement?

The management of vSAN (dashboard, alerts, monitoring) has a significant amount of growth potential.

What was my experience with deployment of the solution?

No issues encountered.

What do I think about the stability of the solution?

The stability is dependent on how we scale and stabilize I/O across the host(s). We have encountered issues, but have worked through them.

What do I think about the scalability of the solution?

There are no issues because it is linear.

Which solution did I use previously and why did I switch?

Prior to VSAN, we used SAN storage, and we switched because we needed a more cost-effective solution for our cloud environment, coupled with easy scalability. Traditional SAN storage has risks and bottlenecks due to having only two storage processors, which were not enough to handle our needs.

How was the initial setup?

It was straightforward.

What about the implementation team?

We implemented in-house.

Which other solutions did I evaluate?

  • NetApp
  • Nutanix

What other advice do I have?

Make sure you size correctly when you do the initial implementation.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user234747 - PeerSpot reviewer
Practice Manager - Cloud, Automation & DevOps at a tech services company with 501-1,000 employees
Real User
VMware I/O Analyser Fling vs. Iometer

Originally posted at vcdx133.com.

I previously posted about my “Baby Dragon Triplets” VSAN Home Lab that I recently built. One of the design requirements was to meet 5,000 IOPS @ 4K 50/50 R/W, 100% Random, which from the performance testing below has been met.

The performance testing was executed with two tools:

  • VMware I/O Analyser Fling – Excellent tool that collects esxtop data as well; if you need fast and easy storage performance testing, keep this in your toolkit.
  • Iometer configured as per the VMware 2M IOPS with VSAN announcement

Iometer – Test configuration

Iometer – Results

VMware I/O Analyser – Test configuration

VMware I/O Analyser – Results

Observations

  • The realistic Iometer results were significantly lower than the VMware I/O Analyser results with the same settings. This is because the Iometer config used 8 x 8GB disks while the VMware I/O Analyser was testing with its default 100MB disk. If you use VMware I/O Analyser, make sure you extend the 100MB disk to 8GB (as per the User Manual that comes with the Fling). You can see the lower latency due to less parallel I/O over the smaller address space (see the sketch after this list).
  • Due to the small size of workloads, all storage tested was SSD and not SATA. Switching from VSS to VDS with LBT had no improvement on performance. Network Throughput was around 20MB/s for the VSAN VMkernel. The Corsair SSD drive is rated at 85,000 IOPS @ 4K 100% Write 100% Random, so with VM config, CPU, RAM, SSD and Network not being the bottleneck, I suspect it is the Z87 Serial ATA controller (or its ESXi driver) that is the limiting factor (even though it is supposed to support 6Gb/s).
  • I am considering scrapping my ESXi environment to test a single host with Windows Server 2012 and Iometer and then ESXi with SSD (DAS) and Iometer again, just to see if not having VSAN makes a difference.
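One way to sanity-check the latency/IOPS relationship noted in the first observation (my addition, not part of the original post) is Little's Law: sustained IOPS ≈ outstanding I/Os ÷ average latency. A smaller address space that lowers latency at a fixed queue depth will therefore also inflate apparent IOPS.

```python
def expected_iops(outstanding_io: int, avg_latency_ms: float) -> float:
    """Little's Law applied to storage: throughput = concurrency / latency."""
    return outstanding_io / (avg_latency_ms / 1000.0)

# 32 outstanding I/Os at 4 ms average latency sustain ~8,000 IOPS;
# halve the latency and the same queue depth sustains ~16,000 IOPS.
print(expected_iops(32, 4.0))  # 8000.0
print(expected_iops(32, 2.0))  # 16000.0
```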

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
VMware Virtual SAN vs. EMC ScaleIO and conventional storage arrays

Software-defined and hyper-converged storage solutions are now a viable alternative to conventional storage arrays so let’s take a quick look at how two of the most popular solutions compare – VMware Virtual SAN (VSAN) and EMC ScaleIO:

Architecture

On vSphere this is an easy win for VMware as VSAN is delivered using kernel modules which provides the shortest path for the IO, has per Virtual Machine policy based management and is tightly integrated with vCenter and Horizon View.

ScaleIO is delivered as Virtual Machines, which is not likely to be as efficient, and is managed separately from the hypervisor – on all other platforms ScaleIO is delivered as lightweight software components not Virtual Machines.

VSAN also has the advantage of being built by the hypervisor vendor, but of course the downside of this is that it is tied to vSphere.

Availability

Win for EMC, since the failure of a single SSD with VSAN disables an entire Disk Group. Although VSAN can tolerate up to three disk failures whereas ScaleIO tolerates only one, in reality the capacity and performance overhead of tolerating more than one failure means that VSAN will nearly always be used with just RAID 1 mirroring.

If you need double disk failure protection you are almost certainly better off using a storage array.

Performance

Easy win for VMware, as VSAN uses SSDs as a write buffer and read cache; ScaleIO, by contrast, only has the ability to utilise a RAM read cache.

Flexibility

Easy win for EMC as with ScaleIO you can:

  1. Utilise physical servers running Windows and Linux
  2. Utilise hypervisors running vSphere, Hyper-V, XenServer and KVM
  3. Utilise any storage supported by the OS or hypervisor
  4. Utilise any combination of HDDs and SSDs as required
  5. Create multiple Protection Domains per system for greater resiliency
  6. Create Storage Pools for each storage tier within a Protection Domain
  7. Mix and match nodes with dissimilar configurations

VSAN has a more rigid architecture of using Disk Groups which consist of one SSD and up to seven HDDs.

Elasticity

Easy win for EMC as ScaleIO supports up to 1,024 nodes, 256 Protection Domains and 1,024 Storage Pools, and auto-rebalances the data when storage is added or removed.

ScaleIO can also throttle the rebuilding and rebalancing process so that it minimises the impact to the applications.

Advanced Services

Easy win for EMC as ScaleIO provides Redirect-on-Write writeable snapshots, QoS (Bandwidth/IOPS limiter), Volume masking and lightweight encryption.

Licensing

This is a tricky one. VSAN has the more customer-friendly licensing, as it is per CPU: as new CPUs, SSDs and HDDs are released, you will be able to support more performance and capacity per license.

ScaleIO has a capacity-based license, which is likely to mean that further licenses are required as your capacity inevitably increases over time. There are also two ScaleIO licences – Basic and Enterprise (which adds QoS, Volume masking, Snapshots, RAM caching, Fault Sets and Thin provisioning).

The one downside of VSAN licensing is that you need to licence all the hosts in the cluster even if they are not used to provision or consume VSAN storage.

Conventional storage arrays

What are the advantages of a conventional mid-range array?

  1. Rich data services – most storage arrays include de-duplication, compression and tiering along with many other advanced features
  2. Unified storage – many storage arrays support both block and NAS protocols
  3. Replication – many storage arrays support synchronous and metrocluster solutions
  4. Integrated data protection – some storage arrays do not require a separate backup solution
  5. Usable capacity – most storage arrays support parity RAID, which can achieve usable capacity ratios of up to 80% (see the sketch after this list)
  6. Double disk protection – whilst this is supported on VSAN it is almost certainly not practical at scale
  7. Turnkey solution – with a single contact for support of all hardware and software
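To show where the "up to 80%" figure in point 5 comes from (my own illustration, not from the original post): a parity RAID group's usable fraction is simply data disks over total disks, so a 4+1 RAID-5 group yields 80%, versus 50% for the RAID 1 mirroring that VSAN typically uses.

```python
def usable_ratio(data_disks: int, parity_disks: int) -> float:
    """Usable fraction of raw capacity in a parity RAID group."""
    return data_disks / (data_disks + parity_disks)

# RAID-5 in a 4+1 layout reaches the "up to 80%" usable figure,
# while a RAID-1 mirror (VSAN's FTT=1 default) yields only 50%.
print(usable_ratio(4, 1))  # 0.8
print(usable_ratio(1, 1))  # 0.5
```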

What are the advantages of hyper-converged software-defined solutions?

  1. Multi node failure – can tolerate the failure of more than one node
  2. Rapid rebuilds – as they take place in parallel across multiple drives
  3. Bring your own hardware – take advantage of commodity prices
  4. Built-in “IT Deflation” – as over time hardware unit costs drop
  5. Independent – the software lives on beyond the life of the hardware
  6. Elasticity – non-disruptively grow and shrink as required
  7. Low ongoing costs – perpetual license followed by annual maintenance
  8. Gain new features – just by upgrading the software
  9. Simplified management – compute and storage managed together

So which is best?

As always, each vendor will build a strong case that their solution is the best. In reality, each solution has strengths and weaknesses, and it really depends on your requirements, budget and preferences as to which is right for you.

For me the storage array is not going away, but it is under pressure from software-defined and cloud based solutions, therefore it will need to deliver more innovation and value moving forward. The choice between VSAN and ScaleIO really comes down to your commitment to vSphere – if there is little chance that your organisation will be moving away, then VSAN has to be the way to go, otherwise the cross-platform capabilities of ScaleIO are very compelling.

Disclosure: My company has a business relationship with this vendor other than being a customer. We are Partners with VMware and EMC.
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
The solution is simple to manage, but redirect-on-write snapshots are needed

Over the past decade VMware has changed the way IT is provisioned through the use of Virtual Machines, but if we want a truly Software-Defined Data Centre we also need to virtualise the storage and the network.

For storage virtualisation VMware has introduced Virtual SAN and Virtual Volumes (expected to be available in 2015), and for network virtualisation NSX. In this, the first of a three part series, we will take a look at Virtual SAN (VSAN).

So why VSAN?

Large Data Centres, built by the likes of Amazon, Google and Facebook, utilise commodity compute, storage and networking hardware (that scale-out rather than scale-up) and a proprietary software layer to massively drive down costs. The economics of IT hardware tend to be the inverse of economies of scale (i.e. the smaller the box you buy the less it costs per unit).

Most organisations, no matter their size, do not have the resources to build their own software layer like Amazon, so this is where VSAN (and vSphere and NSX) come in – VMware provides the software and you bring your hardware of choice.

There are a number of hyper-converged solutions on the market today that can combine compute and storage into a single host that can scale-out as required. None of these are Software-Defined (see What are the pros and cons of Software-Defined Storage?) and typically they use Linux Virtual Machines to provision the storage. VSAN is embedded into ESXi, so you now have the choice of having your hyper-converged storage provisioned from a Virtual Machine or integrated into the hypervisor – I know which I would prefer.

Typical use cases are VDI, Tier 2 and 3 applications, Test, Development and Staging environments, DMZ, Management Clusters, Backup and DR targets and Remote Offices.

VSAN Components

To create a VSAN you need:

  • From 3 to 32 vSphere 5.5 certified hosts
  • For each host a VSAN certified:
    • I/O controller
    • SSD drive or PCIe card
    • Hard disk drive
  • 4 GB to 8GB USB or SD card for ESXi boot
  • VSAN network – GbE or 10 GbE (preferred) for inter-host traffic
    • Layer 2 Multicast must be enabled on physical switches
  • A per socket license for VSAN (also includes licenses for Virtual Distributed Switch and Storage Policies) and vSphere

The host is configured as follows:

  • The controller should use pass-through mode (i.e. no RAID or caching)
  • Disk Groups are created which include one SSD and from 1 to 7 HDDs
  • Five Disk Groups can be configured per host (maximum of 40 drives)
  • The SSD is used as a read/write flash accelerator
  • The HDDs are used for persistent storage
  • The VSAN shared datastore is accessible to all hosts in the cluster

The solution is simple to manage as it is tightly integrated into vSphere, highly resilient as there is zero data loss in the event of hardware failures and highly performant through the use of Read/Write flash acceleration.

VSAN Configuration

The VSAN cluster can grow or shrink non-disruptively with linear performance and capacity scaling – up to 32 hosts, 3,200 VMs, 2M IOPS and 4.4 PB. Scaling is very granular, as single nodes or disks can be added, and there are no dedicated hot-spare disks; instead, the free space across the cluster acts as a "hot spare".

Per-Virtual Machine policies for Availability, Performance and Capacity can be configured as follows:

  • Number of failures to tolerate – How many replicas (0 to 3 – Default 1, equivalent to a Distributed RAID 1 Mirror; see the capacity sketch after this list)
  • Number of disk stripes per object – The higher the number the better the performance (1-12 – Default 1)
  • Object space reservation – How thickly provisioned the disk is (0-100% – Default 0)
  • Flash read cache reservation – Flash capacity reserved as read cache for the storage object (0-100% – Default 0)
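As a rough illustration of how the "failures to tolerate" policy drives capacity consumption (a sketch of my own based on the mirroring behaviour described above, ignoring witness components and metadata overhead): each object keeps FTT + 1 full replicas, so raw consumption is roughly the VMDK size times FTT + 1.

```python
def raw_capacity_gb(vmdk_gb: float, failures_to_tolerate: int = 1) -> float:
    """Approximate raw vSAN capacity consumed by a fully written VMDK.

    With RAID-1 mirroring, FTT = n keeps n + 1 full replicas (witness
    components and metadata overhead are ignored in this sketch).
    """
    return vmdk_gb * (failures_to_tolerate + 1)

# A 100 GB VMDK with the default FTT=1 consumes ~200 GB of raw capacity;
# FTT=2 pushes that to ~300 GB, which is why FTT=1 is the common choice.
print(raw_capacity_gb(100, 1))  # 200
print(raw_capacity_gb(100, 2))  # 300
```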

The Read/Write process

Typically a VMDK will exist on two hosts, but the Virtual Machine may or may not be running on one of these. VSAN takes advantage of the fact that 10 GbE latency is an order of magnitude lower than even SSDs therefore there is no real world difference between local and remote IO – the net result is a simplified architecture (which is always a good thing) that does not have the complexity and IO overhead of trying to keep compute and storage on the same host.

All writes are first written to the SSD and, to maintain redundancy, also immediately written to an SSD in another host. A background process sequentially de-stages the data to the HDDs as efficiently as possible. 70% of the SSD cache is used for reads and 30% for writes, so where possible reads are delivered from the SSD cache.
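A minimal sketch of the cache arithmetic this implies (my own illustration; the 10% flash-to-consumed-capacity ratio is the commonly cited VMware sizing guideline of the time, so treat it as an assumption to verify against current guidance):

```python
def cache_sizing_gb(consumed_capacity_gb: float,
                    flash_ratio: float = 0.10) -> dict:
    """Split a vSAN flash cache into read cache and write buffer.

    flash_ratio is the assumed ~10% guideline for sizing flash against
    anticipated consumed capacity; the 70/30 split is the fixed
    read-cache/write-buffer division described above.
    """
    flash = consumed_capacity_gb * flash_ratio
    return {"flash_total_gb": flash,
            "read_cache_gb": flash * 0.70,
            "write_buffer_gb": flash * 0.30}

# 10 TB of consumed capacity suggests ~1 TB of flash:
# ~700 GB of read cache and ~300 GB of write buffer.
print(cache_sizing_gb(10_000))
```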

So what improvements would we like to see in the future?

VSAN was released early this year after many years of development; the focus of the initial version is to get the core platform right and deliver a reliable, high-performance product. I am sure there is an aggressive road-map of product enhancements coming from VMware, but what would we like to see?

The top priorities have to be efficiency technologies like redirect-on-write snapshots, de-duplication and compression along with the ability to have an all-flash datastore with even higher-performance flash used for the cache – all of these would lower the cost of VDI storage even further.

Next up would be a two-node cluster, multiple flash drives per disk group, Parity RAID, and kernel modules for synchronous and asynchronous replication (today vSphere Replication is required which supports asynchronous replication only).

So are we about to see the death of the storage array? I doubt it very much, but there are going to be certain use cases (i.e. VDI) whereby VSAN is clearly the better option. For the foreseeable future I would expect many organisations to adopt a hybrid approach, mixing a combination of VSAN with conventional storage arrays – in 5 years' time who knows what that mix will be, but one thing is for sure: the percentage of storage delivered from the host is only likely to go up.

Some final thoughts on EVO:RAIL

EVO:RAIL is very similar in concept to the other hyper-converged appliances available today (i.e. it is not a Software-Defined solution). It is built on top of vSphere and VSAN so in essence it cannot do anything that you cannot do with VSAN. Its advantage is simplicity – you order an appliance, plug it in, power it on and you are then ready to start provisioning Virtual Machines.

The downside … it goes against VMware's and the industry's move towards more Software-Defined solutions and all the benefits they provide.

Disclosure: My company has a business relationship with this vendor other than being a customer. We are Partners with VMware.
PeerSpot user
Parin Thaker - PeerSpot reviewer
Solution Specialist at Dotcad Pvt Ltd
Real User
Top 5
Great option for network security and integration of NSX technology
Pros and Cons
  • "Easy-to-use, and easy-to-scale product."
  • "The upgrading process could be simplified."

What is our primary use case?

Our customers use this product when they don't want to deploy an expensive storage device but they're looking for good storage technology. I'm a system integrator.

What is most valuable?

This is a mature, easy-to-use, and easy-to-scale type of technology. When a customer wants network security and integration of NSX technology, vSAN is a good solution.

What needs improvement?

I'd like to see a simplification of the upgrading process. For now, I have to verify each and every component before upgrading. If there were a technology to check the compatibility without the complexity, it would be helpful to users.

For how long have I used the solution?


What do I think about the stability of the solution?

The solution is stable. 

What do I think about the scalability of the solution?

The solution is scalable. 

How are customer service and support?

Whenever I need support from VMware, I get very good support from the team.

How was the initial setup?

The initial setup is somewhat complex. It involves checking hardware compatibility before buying it and installing the VMware components. One person can deploy an entire environment in a day.

What's my experience with pricing, setup cost, and licensing?

Licensing costs are more or less on par with other similar products. 

What other advice do I have?

It's important to check the compatibility before deploying. This is a good solution and I rate it nine out of 10. 

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer. Integrator
PeerSpot user
reviewer1181523 - PeerSpot reviewer
System Analyst at a computer software company with 10,001+ employees
Real User
Decent storage virtualization that is beginning to integrate modern technologies but needs to be cheaper
Pros and Cons
  • "VMware has been around for a long time are are doing a decent job at catching up with the latest technologies i.e. bringing in kubernetes and containerization. Overall, this is a great tool for virtualization."
  • "I would like for the next release to be a bit cheaper."

What is most valuable?

VMware has been around for a long time and is doing a decent job of catching up with the latest technologies, i.e., bringing in Kubernetes and containerization. Overall, this is a great tool for virtualization.

What needs improvement?

I would like for the next release to be a bit cheaper.

For how long have I used the solution?

I have been using VMware vSAN for 10+ years.

What's my experience with pricing, setup cost, and licensing?

VMware is not a cost-effective solution, especially if you are a Microsoft shop. In that case, you would have to purchase a VMware license when Hyper-V solutions can already do the job for much less.

What other advice do I have?

If you are already using VMware, then it is great for running your applications and carrying your infrastructure to the cloud. But I would not recommend this solution to new customers. I give this solution a seven out of ten.

Disclosure: My company has a business relationship with this vendor other than being a customer. Partner
PeerSpot user