Senior Manager, Infrastructure and Operations at an agriculture company with 1,001-5,000 employees
Dual Intel Ivy Bridge processors per node, 14.4TB of raw storage, and synchronous replication that works great over distances.
An important announcement that came out today was EMC’s launch of its hyper-converged EVO:RAIL appliance, positioned to redefine simplicity.
With this launch EMC has moved further forward with its converged infrastructure positioning, using differentiating factors like EMC value-add software, global enterprise data protection, management, and support. It’s not that they didn’t have a converged infrastructure offering in the past – earlier it was the vBlock.
But the hyper-converged infrastructure appliance is a lot more software-defined.
With vBlock, compute and storage were pre-integrated. It offered stability and predictability for specific applications and hardware environments through a reference architecture, and it was a proven blueprint through the VSPEX architecture.
Hyper-converged is really for the smaller footprint and the IT generalist. From what I saw in a recent demo, it is all about simplicity. I have put together some of my own notes from the demo, and it looks like a great product with a comprehensive feature set right at launch.
The EMC VSPEX Blue is powered by EMC hardware and VMware EVO:RAIL software. Inside the one appliance there are four servers. The architecture has been kept this way to offer agility, scalability, and efficient support.
Hardware –
One appliance that has 4 independent nodes inside it.
Each node has dual Intel Ivy Bridge processors, and the appliance has 14.4TB of raw storage, including both SSD and HDD.
Two models of the appliance are being released – Standard and Performance.
- The Performance model is for VDI-type workloads.
- The difference between the Standard and Performance models is memory.
VSPEX Blue, as it is called, has four key components:
- Software – for hardware monitoring, integration with EMC dial-home, and value-added EMC software.
- EVO:RAIL Engine – automates cluster deployment and configuration, provides a clean interface, and includes pre-sized VM templates with single-click policies.
- Resilient cluster architecture – as EVO:RAIL requires, VSAN provides a distributed datastore that is consistent and fault tolerant, vMotion provides system availability during maintenance, and DRS balances the workload.
- Software-defined datacenter (SDDC) building block – combines compute, storage, network, and management resources into a single software stack with vSphere and VSAN.
In a recent demo that I attended, the dashboard looked clean. ESRS was embedded in the interface, a management framework was in place to add EMC value-add software, and orchestration was clearly defined.
One differentiator that I saw, and would like to confirm later, was that EMC VSPEX Blue offers information not available in EVO:RAIL. It also mapped alerts to a graphical representation of the hardware layout, which helps with part identification for field services. The appliance was integrated with vRealize Log Insight, so detailed performance metrics are available.
EMC customers like myself, who have experience with the ESRS piece, like the fact that a remote engineer can dial in to acknowledge a call-home and fix hardware issues, or dispatch a CE or parts to fix problems. This reduces a lot of operational overhead in terms of troubleshooting and resource availability.
On the dashboard is an area – Installed Apps and Market – that lets you either display installed software or get access to value-add software from EMC, like EMC RecoverPoint for VMs, VMware vSphere Data Protection Advanced (VDPA), and so on.
On a forward-looking note – the VSPEX Blue appliance includes an EMC CloudArray Virtual Edition license, entitling you to 1TB of cache and 10TB of cloud storage, with support included for free. Companies that want a hybrid model and store some data in the cloud for cost benefit or resiliency will definitely find this very useful. Encryption in flight and at rest with secure local key management is available to address security concerns. For network bandwidth issues, throttling and data compression are built in. Finally, there is NAS support providing CIFS and NFS file services.
There is also no requirement that the virtual appliance be installed on each ESXi node with the protected VMs.
As an existing EMC RecoverPoint customer I have seen synchronous replication work great over distances. WAN optimization helps tremendously and offers built-in deduplication and compression functionality. The replication is robust in environments with up to 300ms of latency, so that addresses a lot of environments and geographical distances.
I don’t know much about pricing, but the demo seemed really good and pricing was mentioned as highly competitive, so you may want to check with your local EMC sales team for a budgetary quote. I personally like to understand ballpark pricing of products from different vendors, so that while architecting the environment there is at least some understanding of whether a solution will fit within the planned cost.
Finally, a note on licensing – VSPEX Blue is available as a single SKU, which makes for easy ordering. The appliance software includes the VMware EVO:RAIL software bundle, management, ESRS, RecoverPoint for VMs (15 VMs), and the cloud extension.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Technology Consultant, ASEAN at a tech services company with 501-1,000 employees
I’m impressed with the interface - simple to use.
The same VMware EVO:RAIL vs. Nutanix questions keep popping up over and over again, so I figured I would do a quick VMware EVO:RAIL overview post that I can then compare against Nutanix or SimpliVity.
What is EVO:RAIL?
EVO represents a new family of ‘Evolutionary’ Hyper-Converged Infrastructure offerings from VMware. RAIL represents the first product within the EVO family that will ship during the second half of 2014. EVO:RAIL is the next evolution of infrastructure building blocks for the SDDC. It delivers compute, storage and networking in a 2U / 4 node package with an intuitive interface that allows for full configuration within 15 minutes.
Minimum number of EVO:RAIL hosts?
The minimum is 4 hosts. Each EVO:RAIL appliance has four independent nodes with dedicated compute, network, and storage resources and dual, redundant power supplies.
Each of the four EVO:RAIL nodes has (at a minimum):
- Two Intel E5-2620 v2 six-core CPUs
- 192GB of memory
- One SLC SATADOM or SAS HDD as the ESXi™ boot device
- Three SAS 10K RPM 1.2TB HDD for the VMware Virtual SAN™ datastore
- One 400GB MLC enterprise-grade SSD for read/write cache
- One Virtual SAN-certified pass-through disk controller
- Two 10GbE NIC ports (configured for either 10GBase-T or SFP+ connections)
- One 1GbE IPMI port for remote (out-of-band) management
What VMware software is included with an EVO:RAIL appliance?
- vSphere Enterprise Plus
- vCenter Server
- Virtual SAN
- Log Insight
- Support and Maintenance for 3 years
Total storage capacity per appliance?
- 14.4TB HDD capacity (approximately 13TB usable) per appliance, allocated to the Virtual SAN datastore for virtual machines
- 1.6TB SSD capacity per appliance for read/write cache
- Size of pre-provisioned management VM: 30GB
How many EVO:RAIL appliances can I scale to?
- With the current release, EVO:RAIL scales to 4 appliances (16 hosts).
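As a quick sanity check (my own arithmetic, using only the figures quoted in this post), the per-appliance totals and the current scaling ceiling follow directly from the per-node spec:

```python
# Sanity check: derive the per-appliance totals from the minimum per-node spec.
# All input figures come from the post; the arithmetic is mine.

NODES_PER_APPLIANCE = 4
MAX_APPLIANCES = 4  # current-release scaling limit

# Per-node resources, from the minimum node spec
hdd_gb_per_node = 3 * 1200   # three 1.2TB 10K SAS drives for Virtual SAN
ssd_gb_per_node = 400        # one 400GB MLC SSD for read/write cache
ram_gb_per_node = 192
cores_per_node = 2 * 6       # dual six-core Intel E5-2620 v2
nics_per_node = 2            # 2 x 10GbE ports

# Per-appliance totals
hdd_gb = NODES_PER_APPLIANCE * hdd_gb_per_node   # 14400 GB = 14.4TB raw HDD
ssd_gb = NODES_PER_APPLIANCE * ssd_gb_per_node   # 1600 GB = 1.6TB SSD cache
ram_gb = NODES_PER_APPLIANCE * ram_gb_per_node   # 768 GB
cores = NODES_PER_APPLIANCE * cores_per_node     # 48 cores
nic_ports = NODES_PER_APPLIANCE * nics_per_node  # 8 x 10GbE ports

# Maximum cluster size in the current release
max_hosts = MAX_APPLIANCES * NODES_PER_APPLIANCE  # 16 hosts

print(hdd_gb, ssd_gb, ram_gb, cores, nic_ports, max_hosts)
```

The numbers line up with the quoted totals: 14.4TB HDD and 1.6TB SSD per appliance, and a 16-host ceiling at 4 appliances.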
Who are the EVO:RAIL partners?
- The following partners were announced at VMworld: Dell, EMC, Fujitsu, Inspur, Net One Systems, Supermicro
- All support is through the OEM.
How does EVO:RAIL run?
- EVO:RAIL runs on vCenter Server, which is powered on automatically when the appliance is started. EVO:RAIL uses the vCenter Server Appliance. You can use the vSphere Web Client to manage VMs.
EVO:RAIL Networks
- Each node in EVO:RAIL has 2 x 10GbE NICs (SFP+), which means there are 8 x 10GbE NICs per appliance.
- IPv6 is required for configuration of the appliance and auto-discovery. Multicast traffic on L2 is required for Virtual SAN.
- EVO:RAIL supports four types of traffic: Management, vSphere vMotion, Virtual SAN, and Virtual Machine. Traffic isolation on separate VLANs is recommended for vSphere vMotion, Virtual SAN, and VMs. EVO:RAIL version 1.0 does not put management traffic on a VLAN.
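The traffic separation described above can be sketched as a simple VLAN plan. The four traffic types come from the post; the VLAN IDs are hypothetical examples I chose for illustration, not EVO:RAIL defaults:

```python
# Sketch of a VLAN plan for the four EVO:RAIL traffic types.
# VLAN IDs are illustrative assumptions, not product defaults.
# Version 1.0 leaves management traffic untagged (no VLAN).

vlan_plan = {
    "Management": None,      # v1.0 does not put management traffic on a VLAN
    "vMotion": 110,          # example VLAN ID (assumption)
    "Virtual SAN": 120,      # example VLAN ID (assumption)
    "Virtual Machine": 130,  # example VLAN ID (assumption)
}

# Isolation check: every non-management traffic type gets its own VLAN
tagged = [v for k, v in vlan_plan.items() if k != "Management"]
assert len(tagged) == len(set(tagged)), "vMotion, VSAN, and VM traffic should be on separate VLANs"
print(vlan_plan)
```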
EVO:RAIL Deployment
EVO:RAIL deployment is simple, with just four steps:
- Step 1. Decide on the EVO:RAIL network topology (VLANs and top-of-rack switch). Important instructions for your top-of-rack switch are provided in the EVO:RAIL User Guide.
- Step 2. Rack and cable: connect the 10GbE adapters on EVO:RAIL to the 10GbE top-of-rack switch.
- Step 3. Power on EVO:RAIL.
- Step 4. Connect a client workstation/laptop to the top-of-rack switch and configure its network address to talk to EVO:RAIL. Then browse to the EVO:RAIL IP address, for example https://ipaddress:7443.
The wizard asks questions about the host names, networking configuration (VLANs and IPs, etc.), passwords, and other things.
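As a rough illustration, the wizard's inputs could be captured in a structure like the following. All field names and values here are my own hypothetical examples, not actual EVO:RAIL configuration fields:

```python
# Hypothetical sketch of the categories of input the EVO:RAIL initial-
# configuration wizard collects, per the steps described above. Every name
# and value here is an illustrative assumption, not a real EVO:RAIL field.

wizard_config = {
    "hostname_scheme": "evorail-host-{01..04}",  # naming pattern for the 4 nodes
    "vlans": {"vmotion": 110, "vsan": 120, "vm": 130},  # example VLAN IDs
    "ip_pools": {
        "management": ["192.168.10.1", "192.168.10.4"],  # example address range
        "vmotion": ["192.168.11.1", "192.168.11.4"],
        "vsan": ["192.168.12.1", "192.168.12.4"],
    },
    "passwords": {"esxi_root": "********", "vcenter_admin": "********"},
}

# Every traffic type that needs addressing gets an IP range
assert set(wizard_config["ip_pools"]) == {"management", "vmotion", "vsan"}
print(sorted(wizard_config))
```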
After completing the wizard, you get a snazzy little build-process indicator that shows a high-level workflow of what the engine is doing.
Once completed, you get a very happy completion screen that lets you log into EVO:RAIL’s management interface.
Once logged in, you are presented with a dashboard that contains data on the virtual machines, health of the system, configuration items, various tasks, and the ability to build more virtual machines.
The interface will allow you to manage virtual machines in an easy way. It has pre-defined virtual machine sizes (small / medium / large) and even security profiles that can be applied to the virtual machine configuration!
EVO:RAIL provides monitoring capabilities – a simple overview.
Conclusion
I’m quite impressed with the interface for EVO:RAIL; it uses HTML5 and is very simple and friendly to use. Welcome to the hyper-converged world. Next discussion: EVO:RAIL vs. Nutanix.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Buyer's Guide
VMware EVO:RAIL [EOL]
May 2025

Solutions Architect with 51-200 employees
It is a nice piece of technology, but it is way too expensive.
My view on EVO:RAIL has always been that it is a nice piece of technology, but it is way too expensive, with a deeply flawed licensing model (more thoughts at VMware EVO:RAIL or VSAN – which makes the most sense?). It has just been “improved”: you can now use existing vSphere licenses, which will dramatically reduce the cost of the appliance (more details here).
Even though VMware had a great chance to really make things so much better, they have wasted the opportunity – amazingly, they are still forcing you to use Enterprise Plus, whereas Essentials Plus would be more appropriate in most cases. It is also not clear whether the vSphere licences can be moved, whether Virtual SAN and Log Insight are still tied to the hardware, or whether existing licenses can be used for those as well.
So this still leaves us with the following questions:
- Why would you have to use vSphere Enterprise Plus?
- Why would you not have perpetual rights to all of the software?
- Why would you want 4 under-powered nodes?
- Why would you want the minimum number of nodes to be 4 (2 or 3 would be better)?
- Why would you scale in 4 node increments (1 would be better)?
- Why would you not allow the addition of extra drives?
The bottom line is I would love to know what VMware’s agenda is for EVO:RAIL – if anyone knows please get in touch because I just do not get it.
Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with VMware.
Solutions Architect with 51-200 employees
VSAN vs. EVO:RAIL
I really like what VMware is doing with their Software-Defined Data Centre strategy – the idea of allowing customers to use commoditised low cost compute, storage and networking hardware for their infrastructure has got to be a good thing – we are on the verge of hopefully making IT both much simpler and cheaper.
What I am not so sure about is EVO:RAIL. I get VSAN (see An introduction to VMware Virtual SAN Software-Defined Storage technology and What are the pros and cons of Software-Defined Storage?), but does EVO:RAIL actually make sense?
There are some advantages – it is easy to order, as it is a fixed configuration, and it is easy to deploy: just plug in, power on, and go.
But compared to VSAN it has some serious constraints:
- Why can’t we specify a CPU and memory quantity (6-cores seems a bit behind the times today)?
- Why can’t we specify the SSD and HDD configuration (the supplied capacity seems a bit on the low side)?
- Why can’t we start with 3 nodes and then add nodes one at a time (purchasing 4 nodes at a time does not seem ideal)?
- Why can’t we re-use existing vSphere and VSAN licences?
- Why can’t we choose to use something other than vSphere Enterprise Plus (Standard or Essentials Plus may well be more appropriate)?
- Why can’t we transfer the VMware licences to another EVO:RAIL appliance or standard server (the licences are OEM based and tied to the hardware)?
I would also argue that VMware has done a great job of making vSphere and VSAN easy to deploy. Yes, it is going to take a bit longer than EVO:RAIL, but we are not talking about a significant amount of extra time.
So for me EVO:RAIL just does not make sense – not from a technical point of view, but commercially. If VMware were to follow their strategy of software-defined solutions, surely they would allow customers to buy EVO:RAIL-compliant hardware and EVO:RAIL software separately.
Even better, just have a special EVO:RAIL build of vSphere that uses standard vSphere/VSAN licensing – that way the customer can move their licences between whatever hardware form they like. Is that not the point of the Software-Defined Data Centre?
It looks to me a bit like the vRAM tax and hopefully VMware will listen and make some adjustments.
Comments would be very much appreciated as I am sure there are plenty of people with different opinions.
Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with VMware.
Hi,
I think it is early days for VSAN, and even when it does take off it will be deployed alongside SAN/NAS arrays in medium to large organisations.
As with all technologies the architecture of something like VSAN has both positive and negative attributes when compared to an array.
Best regards
Mark
Technology Consultant, ASEAN at a tech services company with 501-1,000 employees
Nutanix vs. EVO:RAIL
2015 IT Trends: Convergence, Automation, and Integration.
Hyper-convergence has been gaining momentum over the last few years, and more and more customers are taking notice. During VMworld 2014 in August, VMware announced its entry into hyper-convergence: EVO:RAIL, a combination of virtualization software loaded onto four servers that slide on a rail into a 2U space of a server rack. It delivers compute, storage, and networking in a single modular unit.
Please read my other post for VMware EVO:RAIL and Nutanix.
VMware software included with an EVO:RAIL appliance:
- vSphere Enterprise Plus
- vCenter Server
- Virtual SAN
- Log Insight
- Support and Maintenance for 3 years
Hardware:
Hypervisor:
Some customers are implementing non-VMware products to virtualize workloads, so the flexibility to support more than VMware is quickly becoming important. VMware EVO:RAIL supports only VMware, while Nutanix supports KVM and Hyper-V in addition to VMware.
Read my other post for VMware and Microsoft Hyper-V 2012R2 here.
Storage:
This comparison does not cover performance; it only compares the availability and data services that the hyper-converged platforms offer.
Nutanix uses a Virtual Storage Appliance (VSA). There is a VSA on each node in the storage cluster, and together they act like scale-out storage controllers. VMware, by contrast, has taken the approach of building VSAN as a module in the vSphere kernel. Each approach has its benefits and drawbacks. The VSA model uses more host resources to provide storage services, but it allows vendors to offer deduplication, compression, backup, and replication, among other services. VMware’s integrated approach uses far fewer resources, but it currently lags in the data services it can offer.
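To make that resource trade-off concrete, here is a deliberately simplified model. The per-node overhead figures are my own illustrative assumptions, not vendor-published numbers:

```python
# Illustrative-only model of the trade-off described above: a VSA-based design
# reserves a controller VM on every node, while an in-kernel design reserves
# comparatively little. All overhead numbers are hypothetical assumptions.

def usable_ram_gb(nodes, ram_per_node_gb, overhead_per_node_gb):
    """RAM left for guest VMs after per-node storage-stack overhead."""
    return nodes * (ram_per_node_gb - overhead_per_node_gb)

nodes = 4
ram_per_node = 192   # GB, matching the EVO:RAIL node spec

vsa_overhead = 24    # GB per node for a controller VM (assumed)
kernel_overhead = 4  # GB per node for an in-kernel module (assumed)

vsa_usable = usable_ram_gb(nodes, ram_per_node, vsa_overhead)        # 672 GB
kernel_usable = usable_ram_gb(nodes, ram_per_node, kernel_overhead)  # 752 GB
print(vsa_usable, kernel_usable)
```

Even with these made-up numbers, the shape of the trade-off is clear: the VSA model pays a fixed per-node resource tax in exchange for richer data services.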
Disclosure: My company does not have a business relationship with this vendor other than being a customer.

it_user429375, Technical Solutions Architect at a tech services company with 501-1,000 employees
Real User
The real comparison needs to be made between Nutanix and EMC/Dell's VxRail. That is a much closer match than the EVO:RAIL go-to-market concept.

This review is a good summary.