Scalability and flexibility.
My conversations now center on helping customers grow with VSAN.
It offers a lower cost of growth for a lot of our customers. They can meet immediate needs without spending a lot of money now, balancing capital budget against operational budget: instead of buying SAN after SAN, they can buy what they need today and carry operational costs after that.
Without vRealize Operations, the usability of VSAN would suffer. Together they allow our customers to see more granularly than other storage solutions do.
Never used it. Last week, I got in touch with a channel partner, and he talked about the different tools and things they had implemented. Our team is excited about it, because we don't have many resources of our own, but now we do through the channel partner.
We set it up for our healthcare customer with our in-house team only.
Look beyond upfront costs, because it'll be equivalent to Nutanix. Its biggest value is its scalability. You can buy a little at a time, rather than a whole infrastructure box, when you want to grow. Customers can quickly spin up half a dozen additional hosts if they want.
I have a lot of confidence in it, but it's a challenge to convince customers: they're intrigued but don't want to take the first steps. The specs and the concept of having storage within the servers interest customers, but they aren't ready to pull the trigger. If we can sell it with Horizon, the licenses are included in the pricing, and customers must refresh their hosts anyway.
We've decreased the time it takes for us to roll out new solutions. It's sped up that process for us.
It needs to allow for more customizations and individualization specific to each user.
It needs to be more malleable and adjustable to changing requirements. There are too many hard-set limitations.
We've used it for two months.
It's not a consideration, because my impression is that VSAN is deployed in set sizes and is not customizable.
Zero issues with tech support. Our TAM answers after some time, but it's not a negative because they're dedicated just to our company.
It's not difficult, but there are still limitations, even in the in-depth white papers, because it's so new.
It's not enterprise class yet because it's a new iteration and a work in progress. Just make sure it fits and meets your requirements.
Getting rid of shared storage, especially with VSAN 6. That would be even better than having an all-flash array.
I hear about a lot of stability issues whenever you go into maintenance, but the people having spectacular experiences are not speaking the loudest, so it can be hard to tell.
I haven’t looked at configuration maximums but it seems like you can scale it up pretty hard in terms of clusters with vSphere 6.
In general, VMware customer support is world class. Response time is really quick – you get connected to experts much faster than in other companies, like Microsoft for example.
Technical Support: All I've seen is community support, especially from bloggers and community experts. I haven't had any direct experience.
It's not very different than vSphere 3. If you're comfortable with VMware, it's straightforward. From what I've seen, it's a simple install once you have all the hardware. I have heard you have to tweak it performance-wise.
Support is up there in the top five things to look at: whether you can call, whether there are online communities, and whether there's easy access to articles. I would also add being able to get through quickly to someone who has deep knowledge of the product.
Stability. The issue we have run into is that some vendors are fly-by-night, brand-new startups, and you can get stranded without support.
You need to vet the company; they need to still be around in a few weeks to help you. Also, peer reviews are very important – invaluable. Salesmen will tell you everything, so we also look at whitepapers and vendor-supplied information. Google is your friend.
Originally posted in Spanish at https://www.rhpware.com/2015/02/introduccion-vmware...
The second generation of Virtual SAN arrives with vSphere 6.0 and shares its version number. The jump from version 1.0 (shipped with vSphere 5.5) to 6.0 is really worth it, as this second generation of converged storage integrated into the VMware hypervisor significantly increases performance and features, supporting much higher performance and larger business-level workloads, including business-critical and Tier 1 applications.
Virtual SAN 6.0 delivers a new all-flash architecture option that provides high performance and predictable response times below one millisecond for almost all business-critical applications. This version also doubles scalability, up to 64 nodes per cluster and up to 200 VMs per host, and improves snapshot and cloning technology.
The hybrid architecture of Virtual SAN 6.0 delivers nearly double the performance of the previous version, and the all-flash architecture delivers four times the performance, measured in IOPS obtained in clusters with similar workloads, with predictability and low latency.
Because the hyper-converged architecture is embedded in the hypervisor, it efficiently optimizes the I/O path and dramatically minimizes the impact on the CPU, an advantage over products from other companies. The hypervisor-based distributed architecture reduces bottlenecks, allowing Virtual SAN to move data and run I/O operations in a much more streamlined way and with very low latencies, without compromising the platform's compute resources and while maintaining VM consolidation. The Virtual SAN datastore is also highly resilient, preventing data loss in case of a physical failure of disks, hosts, the network or racks.
The distributed architecture of Virtual SAN allows you to scale elastically without interruption. Capacity and performance can be scaled together by adding a new host to the cluster, or independently by simply adding disks to existing hosts.
The major new capabilities of Virtual SAN 6.0 include:
Virtual SAN 6.0 All-Flash achieves predictable performance of up to 100,000 IOPS per host and response times below one millisecond, making it ideal for critical workloads.
This version doubles the limits of the previous release.
Virtual SAN 6.0 requires vCenter Server 6.0; both the Windows version and the vCenter Server Appliance can manage Virtual SAN. Virtual SAN 6.0 is configured and monitored exclusively through the vSphere Web Client. It also requires a minimum of three vSphere hosts with local storage. This number is not arbitrary: it allows the cluster to meet the fault-tolerance requirement of at least one host, disk or network failure.
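The original post has no code, but a minimal pyVmomi sketch of enabling Virtual SAN on an existing cluster may help make the setup concrete. The vCenter address, credentials and cluster name below are placeholders, and the auto-claim setting assumes you want VSAN to claim eligible local disks itself:

```python
# Minimal pyVmomi sketch: enable Virtual SAN on an existing cluster.
# The vCenter address, credentials and cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the cluster by name in the inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")
view.DestroyView()

# Enable VSAN, letting it automatically claim eligible local disks.
spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=True)))
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```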
Each vSphere host contributing storage to the Virtual SAN cluster requires a disk controller, which can be a SAS/SATA host bus adapter (HBA) or a RAID controller. However, a RAID controller must operate in one of the following modes:
Pass-through (JBOD or HBA) mode is the preferred configuration for Virtual SAN 6.0, as it lets Virtual SAN manage the disks according to the storage-policy attributes and performance requirements defined for each virtual machine.
When the hybrid architecture of Virtual SAN 6.0 is used, each vSphere host must have at least one SAS, NL-SAS or SATA magnetic disk in order to participate in the Virtual SAN cluster.
In the all-flash architecture of Virtual SAN 6.0, flash devices are used both as a cache layer and for persistent storage. In hybrid architectures, each host must have at least one flash-based device (SAS, SATA or PCI-E) in order to participate in the Virtual SAN cluster.
In the all-flash architecture, each vSphere host must have at least one flash device marked as a capacity device and one marked for performance (cache) in order to participate in the Virtual SAN cluster.
In hybrid Virtual SAN architectures, each vSphere host must have at least one 1Gb or 10Gb network adapter. VMware's recommendation is 10Gb.
All-flash architectures support only 10Gb Ethernet NICs. For redundancy and high availability, you can configure NIC teaming per host, but NIC teaming is used for availability, not for link aggregation (performance).
Virtual SAN 6.0 is supported on both the VMware vSphere Distributed Switch (VDS) and the vSphere Standard Switch (VSS). Other virtual switches are not supported in this release.
You must create a VMkernel port on each host, labelled for Virtual SAN traffic, for intra-cluster communication. This interface is also used for read and write operations whenever a vSphere host in the cluster owns a particular VM whose current data blocks are housed on a remote host in the cluster.
In that case, the I/O operations must travel across the network between the cluster hosts. If this network interface is created on a vDS, you can use the Network I/O Control feature to configure shares or reservations for the Virtual SAN traffic.
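A short, hedged pyVmomi sketch of that tagging step; the adapter name "vmk1" is an assumed example, and `host` would be a vim.HostSystem obtained from a session like the one sketched earlier:

```python
# Hedged sketch: tag a VMkernel adapter for Virtual SAN traffic on one host.
def tag_vmk_for_vsan(host, device="vmk1"):
    """Mark a VMkernel NIC as carrying Virtual SAN traffic."""
    nic_mgr = host.configManager.virtualNicManager
    # "vsan" is the nicType the vSphere API uses for Virtual SAN traffic.
    nic_mgr.SelectVnicForNetConfig("vsan", device)
```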
This new second generation of Virtual SAN is an enterprise-class, hypervisor-level storage solution that combines the compute and storage resources of the hosts. With its two supported architectures (hybrid and all-flash), Virtual SAN 6.0 meets the demands of any virtualized application, including business-critical ones.
Without doubt, Virtual SAN 6.0 is a storage solution that realizes VMware's vision of Software-Defined Storage (SDS), offering great benefits both to customers and to the vSphere administrators who face new challenges and complexities every day. It is certainly an architecture that will change how we view storage systems from now on.
vMotion and the Distributed Resource Scheduler (DRS) load-balancing resources are the most valuable features.
We can manage capacity and performance in linear fashion.
We get better performance with a better cost efficiency.
The management of vSAN (dashboard, alerts, monitoring) has a significant amount of growth potential.
No issues encountered.
The stability is dependent on how we scale and stabilize I/O across the host(s). We have encountered issues, but have worked through them.
There are no issues because it is linear.
Prior to VSAN, we used SAN storage, and we switched because we needed a more cost-effective solution for our cloud environment, coupled with easy scalability. SAN storage carries risks and bottlenecks because it has only two storage processors, which are not enough to handle our needs.
It was straightforward.
We implemented in-house.
Make sure you size correctly when you do the initial implementation.
Originally posted at vcdx133.com.
I previously posted about my “Baby Dragon Triplets” VSAN Home Lab that I recently built. One of the design requirements was to meet 5,000 IOPS @ 4K 50/50 R/W, 100% Random, which, as the performance testing below shows, has been met.
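As a quick sanity check on that target, here is some back-of-the-envelope arithmetic (not from the original post) translating the IOPS requirement into raw throughput:

```python
# Back-of-the-envelope check: what does 5,000 IOPS @ 4K, 50/50 R/W mean?
iops = 5000
block_kb = 4
read_ratio = 0.5

total_mb_s = iops * block_kb / 1024  # ~19.5 MB/s of raw throughput
reads = iops * read_ratio            # 2,500 read IOPS
writes = iops * (1 - read_ratio)     # 2,500 write IOPS

print(f"Total throughput: {total_mb_s:.1f} MB/s "
      f"({reads:.0f} read + {writes:.0f} write IOPS)")
```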
The performance testing was executed with two tools:
Iometer – Test configuration
Iometer – Results
VMware I/O Analyser – Test configuration
VMware I/O Analyser – Results
Observations
Software-defined and hyper-converged storage solutions are now a viable alternative to conventional storage arrays, so let’s take a quick look at how two of the most popular solutions compare – VMware Virtual SAN (VSAN) and EMC ScaleIO:
Architecture
On vSphere this is an easy win for VMware, as VSAN is delivered as kernel modules, which provide the shortest path for the I/O, per-Virtual-Machine policy-based management, and tight integration with vCenter and Horizon View.
ScaleIO is delivered as Virtual Machines, which is not likely to be as efficient, and is managed separately from the hypervisor. On all other platforms ScaleIO is delivered as lightweight software components, not Virtual Machines.
VSAN also has the advantage of being built by the hypervisor vendor, but of course the downside of this is that it is tied to vSphere.
Availability
Win for EMC, since the failure of a single SSD with VSAN disables an entire Disk Group. Although VSAN can support up to three disk failures, whereas ScaleIO supports only one, in reality the capacity and performance overhead of supporting more than one failure means that VSAN will nearly always be used with just RAID 1 mirroring.
If you need double disk failure protection you are almost certainly better off using a storage array.
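The arithmetic behind that trade-off is worth spelling out: VSAN stores FTT+1 replicas of each object and needs at least 2×FTT+1 hosts (replica copies plus witness components). A tiny sketch:

```python
# Cost of VSAN's "number of failures to tolerate" (FTT) setting:
# replicas = FTT + 1, and the cluster needs 2*FTT + 1 hosts
# (replica copies plus witness components).
for ftt in (1, 2, 3):
    replicas = ftt + 1
    min_hosts = 2 * ftt + 1
    print(f"FTT={ftt}: {replicas}x raw capacity consumed, "
          f"at least {min_hosts} hosts required")
```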
Performance
Easy win for VMware, as VSAN uses SSDs as a write buffer and read cache, although ScaleIO does have the ability to utilise a RAM read cache.
Flexibility
Easy win for EMC, as ScaleIO gives you far more freedom in how you build and grow the storage.
VSAN has a more rigid architecture of using Disk Groups which consist of one SSD and up to seven HDDs.
Elasticity
Easy win for EMC as ScaleIO supports up to 1,024 nodes, 256 Protection Domains and 1,024 Storage Pools, and auto-rebalances the data when storage is added or removed.
ScaleIO can also throttle the rebuilding and rebalancing process so that it minimises the impact to the applications.
Advanced Services
Easy win for EMC as ScaleIO provides Redirect-on-Write writeable snapshots, QoS (Bandwidth/IOPS limiter), Volume masking and lightweight encryption.
Licensing
This is a tricky one. VSAN has the more customer-friendly licensing, as it is per CPU: as new CPUs, SSDs and HDDs are released, you will be able to support more performance and capacity per license.
ScaleIO has a capacity-based license, which is likely to mean that further licenses are required as your capacity inevitably increases over time. There are also two ScaleIO licences – Basic and Enterprise (which adds QoS, Volume masking, Snapshots, RAM caching, Fault Sets and Thin provisioning).
The one downside of VSAN licensing is that you need to licence all the hosts in the cluster even if they are not used to provision or consume VSAN storage.
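To illustrate the structural difference, here is a deliberately simplified sketch; both prices are invented placeholders, not vendor list prices:

```python
# Purely hypothetical prices to contrast the two licensing models;
# neither number is a real VMware or EMC list price.
PRICE_PER_CPU = 2500   # assumed one-off cost per CPU socket (VSAN-style)
PRICE_PER_TB = 150     # assumed cost per raw TB (ScaleIO-style)

hosts, cpus_per_host = 8, 2
per_cpu_license = hosts * cpus_per_host * PRICE_PER_CPU  # flat as disks grow

for raw_tb in (50, 100, 200):  # capacity growing over time
    per_tb_license = raw_tb * PRICE_PER_TB
    print(f"{raw_tb:>3} TB: per-CPU ${per_cpu_license:,} "
          f"vs capacity-based ${per_tb_license:,}")
```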
Conventional storage arrays
What are the advantages of a conventional mid-range array?
What are the advantages of hyper-converged software-defined solutions?
So which is best?
As always each vendor will build a strong case that their solution is the best, in reality each solution has strengths and weaknesses, and it really depends on your requirements, budget and preferences as to which is right for you.
For me the storage array is not going away, but it is under pressure from software-defined and cloud based solutions, therefore it will need to deliver more innovation and value moving forward. The choice between VSAN and ScaleIO really comes down to your commitment to vSphere – if there is little chance that your organisation will be moving away, then VSAN has to be the way to go, otherwise the cross-platform capabilities of ScaleIO are very compelling.
Over the past decade VMware has changed the way IT is provisioned through the use of Virtual Machines, but if we want a truly Software-Defined Data Centre we also need to virtualise the storage and the network.
For storage virtualisation VMware has introduced Virtual SAN and Virtual Volumes (expected to be available in 2015), and for network virtualisation NSX. In this, the first of a three part series, we will take a look at Virtual SAN (VSAN).
So why VSAN?
Large Data Centres, built by the likes of Amazon, Google and Facebook, utilise commodity compute, storage and networking hardware (that scale-out rather than scale-up) and a proprietary software layer to massively drive down costs. The economics of IT hardware tend to be the inverse of economies of scale (i.e. the smaller the box you buy the less it costs per unit).
Most organisations, no matter their size, do not have the resources to build their own software layer like Amazon, so this is where VSAN (and vSphere and NSX) come in – VMware provides the software and you bring your hardware of choice.
There are a number of hyper-converged solutions on the market today that can combine compute and storage into a single host that can scale-out as required. None of these are Software-Defined (see What are the pros and cons of Software-Defined Storage?) and typically they use Linux Virtual Machines to provision the storage. VSAN is embedded into ESXi, so you now have the choice of having your hyper-converged storage provisioned from a Virtual Machine or integrated into the hypervisor – I know which I would prefer.
Typical use cases are VDI, Tier 2 and 3 applications, Test, Development and Staging environments, DMZ, Management Clusters, Backup and DR targets and Remote Offices.
VSAN Components
To create a VSAN you need at least three vSphere hosts, each contributing local disks to the shared datastore.
Each host is configured with one or more Disk Groups, each combining one SSD (for caching) with magnetic disks (for capacity), connected over a VSAN-enabled VMkernel network.
The solution is simple to manage as it is tightly integrated into vSphere, highly resilient as there is zero data loss in the event of hardware failures and highly performant through the use of Read/Write flash acceleration.
VSAN Configuration
The VSAN cluster can grow or shrink non-disruptively with linear performance and capacity scaling – up to 32 hosts, 3,200 VMs, 2M IOPS and 4.4 PB. Scaling is very granular, as single nodes or disks can be added, and there are no dedicated hot-spare disks; instead, the free space across the cluster acts as a “hot-spare”.
Per-Virtual Machine policies for Availability, Performance and Capacity can be configured through rules such as Number of Failures to Tolerate, Number of Disk Stripes per Object, Flash Read Cache Reservation, Object Space Reservation and Force Provisioning.
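For illustration, those rules expressed with the capability names the SPBM API uses; this is a sketch, with example values rather than recommendations:

```python
# Sketch only: the five VSAN policy rules, written with the capability
# names the SPBM API exposes; values are example choices, not advice.
vsan_policy_rules = {
    "VSAN.hostFailuresToTolerate": 1,  # availability: replicas = value + 1
    "VSAN.stripeWidth": 1,             # performance: HDD stripes per object
    "VSAN.cacheReservation": 0,        # % of flash read cache reserved
    "VSAN.proportionalCapacity": 0,    # % of object space thick-reserved
    "VSAN.forceProvisioning": False,   # deploy even if policy can't be met
}

for rule, value in vsan_policy_rules.items():
    print(f"{rule} = {value}")
```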
The Read/Write process
Typically a VMDK will exist on two hosts, but the Virtual Machine may or may not be running on one of these. VSAN takes advantage of the fact that 10 GbE latency is an order of magnitude lower than even SSDs therefore there is no real world difference between local and remote IO – the net result is a simplified architecture (which is always a good thing) that does not have the complexity and IO overhead of trying to keep compute and storage on the same host.
All writes are first written to the SSD and to maintain redundancy also immediately written to an SSD in another host. A background process sequentially de-stages the data to the HDDs as efficiently as possible. 70% of the SSD cache is used for Reads and 30% for Writes, so where possible reads are delivered from the SSD cache.
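A small sizing sketch of that split; the 400 GB SSD is an assumed example size (VMware's general guidance in this era was to size flash at roughly 10% of anticipated consumed capacity):

```python
# Cache-tier sizing sketch; the 400 GB SSD is an assumed example size.
ssd_gb = 400
read_cache_gb = ssd_gb * 0.70    # 70% serves reads
write_buffer_gb = ssd_gb * 0.30  # 30% absorbs writes before de-staging to HDD
print(f"Read cache: {read_cache_gb:.0f} GB, "
      f"write buffer: {write_buffer_gb:.0f} GB")
```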
So what improvements would we like to see in the future?
VSAN was released early this year after many years of development; the focus of the initial version is to get the core platform right and deliver a reliable, high-performance product. I am sure there is an aggressive road-map of product enhancements coming from VMware, but what would we like to see?
The top priorities have to be efficiency technologies like redirect-on-write snapshots, de-duplication and compression along with the ability to have an all-flash datastore with even higher-performance flash used for the cache – all of these would lower the cost of VDI storage even further.
Next up would be a two-node cluster, multiple flash drives per disk group, Parity RAID, and kernel modules for synchronous and asynchronous replication (today vSphere Replication is required which supports asynchronous replication only).
So are we about to see the death of the storage array? I doubt it very much, but there are going to be certain use cases (i.e. VDI) whereby VSAN is clearly the better option. For the foreseeable future I would expect many organisations to adopt a hybrid approach mixing a combination of VSAN with conventional storage arrays – in 5 years time who knows how that mix will be, but one thing is for sure the percentage of storage delivered from the host is only likely to be going up.
Some final thoughts on EVO:RAIL
EVO:RAIL is very similar in concept to the other hyper-converged appliances available today (i.e. it is not a Software-Defined solution). It is built on top of vSphere and VSAN so in essence it cannot do anything that you cannot do with VSAN. Its advantage is simplicity – you order an appliance, plug it in, power it on and you are then ready to start provisioning Virtual Machines.
The downside: it goes against VMware’s and the industry’s move towards more Software-Defined solutions and all the benefits they provide.