Technical Support Engineer with 1-10 employees

Which hypervisor provides the best network performance at 10Gb or higher?

What do you all think?

15 Answers
IT Development Manager at a comms service provider with 10,001+ employees
Real User
Oct 9, 2019

On a basic, structural level, virtual networks aren't that different from physical networks.

In virtualization, virtual switches are used to establish the connection between the virtual network and the physical network.

Once the vSwitch has bridged the connection between the virtual network and the physical network, the virtual machines residing on the host server can begin transferring data to, and receiving data from, all of the network-capable devices connected to the physical network. That is to say, the virtual machines are no longer limited to communicating solely across the virtual network.
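For illustration, the bridging described above can be expressed in a minimal libvirt network definition (a hypothetical sketch: the network name and the bridge name "br0" are assumptions, and other hypervisors expose the same concept through their own vSwitch configuration):

```xml
<!-- Hypothetical libvirt network definition. Guests attached to this
     network plug into the host bridge "br0"; because br0 is uplinked
     to a physical NIC, guest traffic reaches the physical network. -->
<network>
  <name>bridged-net</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```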

What I want to say is that network performance depends on many factors beyond the hypervisor itself. From my long experience in virtualization, after working on VMware, OVM, KVM, Hyper-V, and Nutanix AHV, we can get the best performance from all of these hypervisors if we use the proper NIC card, physical server, and physical switches.

From my point of view, Nutanix can provide the best performance due to its data locality, which can offer more than 10 Gb/s to the hosted virtual machines.

But again, you can get the best performance from VMware if you have the best design.

User at a marketing services firm with 201-500 employees
Real User
Oct 10, 2019

I felt the need to, again, make some remarks.

Left out of the discussion is the question of which architecture is planned for use and which OSes will run as guests. In the past, Intel CPUs on the x86 ISA were used almost exclusively, but that landscape is rapidly shifting.

There is a big change coming. Apart from the new x86 Epyc CPUs from AMD, which show much better gains on a lot of virtualization platforms, the latest developments are now pointing in the direction of other ISAs such as ARM and RISC. Not for the faint of heart as of yet, but it is coming and it won’t be stopped this time.

If you look carefully at the AMD Epyc CPU line, with lots of PCIe lanes and much better performance figures than can be obtained from current Platinum and Gold editions of Intel CPUs, you quickly discover the benefits. And yes, this platform is rapidly maturing. This is something to consider when choosing the hypervisor; not all hypervisors perform equally well on those platforms. Initial testing I did with Epyc Rome suggests that the more mature Linux hypervisors are taking the lead.

It all depends on your particular needs for that 10Gbit speed you want to implement. Without further details, it is hard to offer good advice. If your workload is SQL Server, the storage backend plays a much more important role. In that respect XenServer 8.0 on Epyc takes the crown, but only if your backend is of good quality too. Full-flash backends are not always better for that particular network load and workload. I think that if your wallet is deep enough, full M.2 on Epyc CPUs tops the line, no matter what hypervisor is chosen.

it_user1026912 - PeerSpot reviewer
User with 11-50 employees
Oct 9, 2019

I have good experience with the VMware hypervisor over 10G networks; the best practices below summarize the relevant guidance.


Hardware Networking Considerations

Before undertaking any network optimization effort, you should understand the physical aspects of the network. The following are just a few aspects of the physical layout that merit close consideration:

* Consider using server-class network interface cards (NICs) for the best performance.
* Make sure the network infrastructure between the source and destination NICs doesn’t introduce bottlenecks. For example, if both NICs are 10Gb/s, make sure all cables and switches are capable of the same speed and that the switches are not configured to a lower speed.
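The second point can be sanity-checked programmatically; a minimal sketch (device names and speeds below are illustrative, not from any real inventory) that reports the slowest hop between two NICs:

```python
def path_bottleneck(link_speeds_mbps):
    """Given {device_name: link speed in Mb/s} for every hop between
    source and destination NICs, return the slowest device and its speed.
    End-to-end throughput can never exceed this value."""
    device = min(link_speeds_mbps, key=link_speeds_mbps.get)
    return device, link_speeds_mbps[device]

# Hypothetical path: two 10Gb/s NICs, but one switch port negotiated 1Gb/s.
path = {"nic-src": 10000, "switch-port-7": 1000, "nic-dst": 10000}
print(path_bottleneck(path))  # → ('switch-port-7', 1000)
```

On Linux hosts, the per-device speed could be read from /sys/class/net/&lt;dev&gt;/speed; switch-port speeds would come from the switch itself (e.g. SNMP).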

For the best networking performance, we recommend the use of network adapters that support the following hardware features:

* Checksum offload
* TCP segmentation offload (TSO)
* Ability to handle high-memory DMA (that is, 64-bit DMA addresses)
* Ability to handle multiple Scatter Gather elements per Tx frame
* Jumbo frames (JF)
* Large receive offload (LRO)
* When using a virtualization encapsulation protocol, such as VXLAN or GENEVE, the NICs should support offload of that protocol’s encapsulated packets.
* Receive Side Scaling (RSS)
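Whether these offloads are actually enabled on a given NIC can be verified with `ethtool -k <device>`; here is a sketch that parses that style of output (the sample text is illustrative, not captured from real hardware):

```python
def parse_offloads(ethtool_k_output):
    """Parse `ethtool -k <dev>` style output into {feature: enabled?}.
    Lines look like 'tcp-segmentation-offload: on' or
    'large-receive-offload: off [fixed]'."""
    features = {}
    for line in ethtool_k_output.splitlines():
        if ":" not in line:
            continue
        name, _, state = line.partition(":")
        state_words = state.split()
        if not state_words:  # skip header lines like "Features for eth0:"
            continue
        features[name.strip()] = state_words[0] == "on"
    return features

sample = """\
rx-checksumming: on
tcp-segmentation-offload: on
large-receive-offload: off [fixed]
"""
print(parse_offloads(sample))
# → {'rx-checksumming': True, 'tcp-segmentation-offload': True, 'large-receive-offload': False}
```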

Make sure network cards are installed in slots with enough bandwidth to support their maximum throughput. As described in “Hardware Storage Considerations” on page 13, be careful to distinguish between similar-sounding, but potentially incompatible, bus architectures.

Ideally, single-port 10Gb/s Ethernet network adapters should use PCIe x8 (or higher) or PCI-X 266 and dual-port 10Gb/s Ethernet network adapters should use PCIe x16 (or higher). There should preferably be no “bridge chip” (e.g., PCI-X to PCIe or PCIe to PCI-X) in the path to the actual Ethernet device (including any embedded bridge chip on the device itself), as these chips can reduce performance.

Ideally, 40Gb/s Ethernet network adapters should use PCIe Gen3 x8/x16 slots (or higher).
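The slot recommendations follow from per-lane bandwidth arithmetic; a rough sketch using the usual approximate effective rates per PCIe lane (after 8b/10b or 128b/130b encoding overhead):

```python
# Approximate usable bandwidth per PCIe lane, in Gb/s, after encoding
# overhead: Gen1/Gen2 use 8b/10b, Gen3 uses 128b/130b.
LANE_GBPS = {1: 2.0, 2: 4.0, 3: 7.88}

def slot_bandwidth_gbps(gen, lanes):
    """Rough usable bandwidth of a PCIe slot, per direction."""
    return LANE_GBPS[gen] * lanes

print(slot_bandwidth_gbps(2, 8))   # Gen2 x8: 32.0 Gb/s, fine for dual 10GbE
print(slot_bandwidth_gbps(1, 4))   # Gen1 x4: 8.0 Gb/s, too slow even for one 10GbE port
print(slot_bandwidth_gbps(3, 8))   # Gen3 x8: ~63 Gb/s, enough headroom for 40GbE
```

This is why a dual-port 10Gb/s adapter wants a wide slot, and why a bridge chip that drops the effective generation or width in the path can quietly cap throughput.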

Multiple physical network adapters between a single virtual switch (vSwitch) and the physical network constitute a NIC team. NIC teams can provide passive failover in the event of hardware failure or network outage and, in some configurations, can increase performance by distributing the traffic across those physical network adapters.

When using load balancing across multiple physical network adapters connected to one vSwitch, all the NICs should have the same line speed.

If the physical network switch (or switches) to which your physical NICs are connected support Link Aggregation Control Protocol (LACP), configuring both the physical network switches and the vSwitch to use this feature can increase throughput and availability.
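One caveat worth illustrating: LACP does not stripe a single flow across links. Each flow is hashed onto one member NIC, so aggregate throughput only rises with multiple concurrent flows. A sketch of the idea (real switches and vSwitches use their own hash policies; this is not any vendor's actual algorithm):

```python
import zlib

def choose_member(src_ip, dst_ip, src_port, dst_port, n_links):
    """Pick the team member for a flow, as an LACP-style layer-3/4 hash
    would. All packets of one flow land on the same link (avoiding
    reordering), so a single flow never exceeds one link's speed."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links

# Two flows between the same hosts may hash onto different links:
print(choose_member("10.0.0.1", "10.0.0.2", 49152, 5201, 2))
print(choose_member("10.0.0.1", "10.0.0.2", 49153, 5201, 2))
```

This is also why a single-stream benchmark across a 2 x 10Gb/s LACP team still tops out near 10Gb/s.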

Owner at a tech services company with 51-200 employees
Real User
Oct 8, 2019

I have worked only with the VMware hypervisor and have seen that for most customers a 2 x 10Gbit connection works fine when used in combination with the Distributed Virtual Switch (DVS) with a network profile (VMware QoS) applied to it.

IT Manager at a tech vendor with 51-200 employees
Real User
Oct 8, 2019

Using Intel X520 and X540 network cards (I have not tested the X550 yet), Proxmox gets the best performance, but not by much; Xen and VMware come really close, so I do not think it can be the deciding factor. With Broadcom network cards the result changes a lot: Proxmox gets WAY better performance compared to Xen and VMware, but is a little slower than with the Intel cards. I will not provide numbers because my tests are very informal and relaxed: just copying a big file, or running a bunch of queries on an SQL server.

Data Center Development at Telekomunikasi Indonesia
Oct 9, 2019

In my company, we use VMware vSphere and OpenStack as hypervisor platforms. We have tested VMware because it is the main platform for business-critical workloads. Testing with iperf, two 10Gb/s NICs teamed to Cisco ACI can reach 18Gb/s, as expected.

The most important things to achieve high throughput are:
1. Make sure software/firmware compatibility between the hypervisor version and NIC's firmware card.
2. Load testing (sending a 1.5TB file) using two servers as clients/sources and one server as the target. Of course, before performing a test, the whole physical layer should already be error-free (optical cable, NIC, switch port).

Note: we found that CRC errors and packet drops appear if the firmware is not compatible.
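As a sanity check on a load test like the one above, the expected transfer time follows from simple arithmetic (assuming the ~18Gb/s aggregate mentioned, and decimal units):

```python
def transfer_seconds(size_tb, rate_gbps):
    """Time to move size_tb terabytes at rate_gbps gigabits per second
    (decimal units: 1 TB = 8000 Gb)."""
    return size_tb * 8000 / rate_gbps

# 1.5TB at an aggregate 18Gb/s:
print(round(transfer_seconds(1.5, 18) / 60, 1))  # → 11.1 (minutes)
```

If the observed wall-clock time is far above this, something in the path (firmware, offloads, a slow hop) is eating throughput.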

Account Representative at Nutanix
Real User
Oct 8, 2019

Two answers:

1) Likely Nutanix, because of data locality.
2) What workload is running that requires 10Gb of throughput or IOPS? The reality is that most workloads do not tax a 10Gb port. If yours does, then great: scale-out infrastructure like Nutanix can help distribute that workload, as can a number of modern databases such as MongoDB and other NoSQL stores.
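Whether a workload really taxes a 10Gb port is easy to measure from interface byte counters sampled a few seconds apart (e.g. from /proc/net/dev or an SNMP octet counter); a small sketch of the calculation with illustrative numbers:

```python
def utilization_pct(bytes_t0, bytes_t1, interval_s, link_gbps=10):
    """Link utilization given two byte-counter samples taken
    interval_s seconds apart on a link_gbps link."""
    gbps = (bytes_t1 - bytes_t0) * 8 / interval_s / 1e9
    return 100 * gbps / link_gbps

# 1.2 GB moved in 10 s on a 10Gb/s port:
print(round(utilization_pct(0, 1_200_000_000, 10), 1))  # → 9.6
```

A workload that sustains well under 100% here gains little from a faster NIC, which supports the point about most workloads not saturating 10Gb.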

System Administrator at Bakhresa Group of companies
Real User
Oct 8, 2019

I found VMware vSphere far better equipped to meet the demands of an enterprise datacenter than other hypervisors. It delivers the production-ready performance and scalability needed to implement an efficient and responsive data center.

Senior System Engineer at Nutanix
Real User
Oct 8, 2019

Nutanix, for sure. Data locality makes a difference. With new-generation disk technologies the network becomes the bottleneck, so the more networking traffic you avoid, the better.

Senior Strategic Technical Marketing Engineer at Nutanix
Real User
Oct 8, 2019

When you use Nutanix hyper-converged infrastructure (HCI), you can choose your own hypervisor: VMware ESXi, Hyper-V, or native Nutanix AHV. As you modernize your full stack, the less visible the underlying infrastructure, the easier it is for your end users. We recommend testing the configuration using our X-Ray benchmarking tool so that you can choose what is best for your environment. Network performance at 10Gb or higher may depend on factors other than the choice of hypervisor.

User at a marketing services firm with 201-500 employees
Real User
Oct 8, 2019

Apart from the question of which hypervisor to use, it boils down to the quality of the network adapters, the switching capacity, and the number of PCIe lanes your processor supports. Checksum offloading is another important topic. When all of this is done well, I measured performance gains of 8% or better with the Xen hypervisor compared to the VMware hypervisor on the same hardware.

Nevertheless, the hypervisor itself is just one part of the overall setup, and hence of the performance figures. It all starts with high-quality network adapters and good switches as the most important components for 10Gbit performance.

Senior Linux Administrator at a tech services company
Real User
Oct 8, 2019

KVM and Proxmox use the Linux kernel.

Program Architect (Microsoft) with 5,001-10,000 employees
Real User
Oct 8, 2019

VMware is best.

Storage Specialist at Informatics Services Corporation
Real User
Oct 8, 2019

VMware ESXi is one of the best options. Our experience with iSCSI has been this: we used an HPE ProLiant server with an HPE Ethernet 10Gb 2-port 560FLR-SFP+ adapter as the initiator and a Cisco switch as the fabric layer, and the latency was acceptable and workable.
