Windows Server 2016: the new Virtual Switch in Hyper-V

In this article we'll look at the characteristics of Switch Embedded Teaming (SET) and see how to configure a Hyper-V Virtual Switch in Windows Server 2016 using this mode. SET is a new technology, an alternative to NIC Teaming, that allows you to join multiple network adapters on the same physical Hyper-V virtualization host directly to a Virtual Switch.

Windows Server 2012 introduced the ability to create network teams natively in the operating system (up to a maximum of 32 network adapters) without having to install vendor-specific software. On Hyper-V virtualization hosts it became common practice to define Virtual Switches bound to these NIC teams: to achieve high availability and to balance virtual machine network traffic it was necessary to combine two distinct constructs, the team and the Hyper-V Virtual Switch. When using this configuration, it should be noted that traditional LBFO (Load Balancing and Failover) teaming was not compatible with RDMA.
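
To make the difference concrete, this is roughly what the pre-SET approach looked like; a minimal sketch, assuming two adapters with the hypothetical names "NIC1" and "NIC2":

  # Classic Windows Server 2012 R2 approach: create an LBFO team first.
  New-NetLbfoTeam -Name "LbfoTeam" -TeamMembers "NIC1", "NIC2" `
                  -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

  # The Virtual Switch is then bound to the team interface, not to the physical NICs.
  New-VMSwitch -Name "VmSwitch" -NetAdapterName "LbfoTeam" -AllowManagementOS $true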

Windows Server 2016 introduces a further option for Hyper-V Virtual Switch configuration, called Switch Embedded Teaming (SET) and shown in Figure 1, which allows you to combine multiple network adapters (up to a maximum of 8) in a single Virtual Switch without configuring any team. SET embeds the teaming of the network adapters within the Virtual Switch, providing high performance and fault tolerance in the event of a hardware failure of a single NIC. In this configuration it is also possible to use RDMA technology on the individual network adapters, which removes the need to maintain two separate sets of NICs (one for use with the Virtual Switch and one for RDMA).
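
For example, once a SET-based Virtual Switch exists (the PowerShell commands to create one are shown later in this article), host vNICs can be created on it and RDMA enabled directly on them; a minimal sketch, assuming a switch named "SETswitch" and hypothetical vNIC names:

  # Create two host vNICs on the SET switch (e.g. for SMB traffic).
  Add-VMNetworkAdapter -SwitchName "SETswitch" -Name "SMB1" -ManagementOS
  Add-VMNetworkAdapter -SwitchName "SETswitch" -Name "SMB2" -ManagementOS

  # Enable RDMA directly on the host vNICs: no separate set of physical NICs is needed.
  Enable-NetAdapterRdma -Name "vEthernet (SMB1)", "vEthernet (SMB2)"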

Figure 1 – SET architecture

When evaluating the adoption of Switch Embedded Teaming (SET), it is important to consider its compatibility with other networking technologies.

SET is compatible with:

  • Data Center Bridging (DCB)
  • Hyper-V Network Virtualization, with both NVGRE and VXLAN encapsulation
  • Receive-side checksum offloads (IPv4, IPv6, TCP) – if supported by the NIC hardware (see the check after this list)
  • Remote Direct Memory Access (RDMA)
  • SDN Quality of Service (QoS)
  • Transmit-side checksum offloads (IPv4, IPv6, TCP) – if supported by the NIC hardware
  • Virtual Machine Queues (VMQ)
  • Virtual Receive Side Scaling (vRSS)
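
Several of these features depend on the capabilities of the NIC hardware; the standard NetAdapter cmdlets give a quick overview of what your adapters actually expose (a sketch, output varies by vendor and driver):

  # Check RDMA capability and state on the physical adapters.
  Get-NetAdapterRdma

  # Check VMQ support and whether it is enabled.
  Get-NetAdapterVmq

  # Check transmit- and receive-side checksum offload settings (IPv4, IPv6, TCP).
  Get-NetAdapterChecksumOffload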

SET is not compatible with the following networking technologies:

  • 802.1X authentication
  • IPsec Task Offload (IPsecTO)
  • QoS (applied on the host side)
  • Receive Side Coalescing (RSC)
  • Receive Side Scaling (RSS)
  • Single Root I/O Virtualization (SR-IOV)
  • TCP Chimney Offload
  • Virtual Machine QoS (VM-QoS)

 

Differences from NIC Teaming

Switch Embedded Teaming (SET) differs from traditional NIC Teaming in particular in the following respects:

  • When deploying SET, the use of NICs in standby mode is not supported: all network adapters must be active;
  • You cannot assign a specific name to the team, but only to the Virtual Switch;
  • SET only supports Switch Independent mode, while NIC Teaming offers several modes of operation. This means that the physical network switches to which the NICs belonging to the SET are connected are not aware of this configuration and therefore do not exercise any control over how network traffic is distributed among the different team members.

When you configure SET you only need to specify which network adapters belong to the team and the traffic load-balancing mechanism.
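
Both settings, together with the fact that the operating mode is always Switch Independent, can be verified on an existing SET team; a sketch, assuming a switch named "SETswitch":

  # Show the SET team behind the Virtual Switch: its members,
  # the TeamingMode (always SwitchIndependent) and the LoadBalancingAlgorithm.
  Get-VMSwitchTeam -Name "SETswitch" | Format-List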

A Virtual Switch in SET mode should consist of network adapters certified by Microsoft, i.e. adapters that have passed the "Windows Hardware Qualification and Logo (WHQL)" compatibility tests for Windows Server 2016. An important aspect is that the NICs must be identical in terms of make, model, driver, and firmware.
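
A quick way to compare model and driver across the physical adapters is a sketch like the following (firmware versions usually have to be checked with the vendor's own tools):

  # Compare make/model and driver version of all physical adapters at a glance.
  Get-NetAdapter -Physical |
      Select-Object Name, InterfaceDescription, DriverVersion, DriverDate |
      Format-Table -AutoSize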

As for how network traffic is distributed among the different members of the SET, there are two modes: Hyper-V Port and Dynamic.

Hyper-V Port

In this configuration, network traffic is divided among the different team members based on the Virtual Switch port and the MAC address of the associated virtual machine. This mode is particularly suitable when used in conjunction with Virtual Machine Queues (VMQs). Keep in mind that when there are only a few virtual machines on the virtualization host, traffic balancing may not be homogeneous, since the mechanism is not very granular. In this mode, moreover, the bandwidth available to a virtual machine's network adapter (whose traffic always flows from a single switch port) is always limited to the bandwidth available on a single physical network interface.

Dynamic

This load-balancing mechanism has the following features:

  • Outgoing traffic is distributed (based on a hash of TCP ports and IP addresses) according to the principle of flowlets, which exploits the natural breaks present in TCP communication flows. Dynamic mode also includes a mechanism that re-balances traffic in real time among the various members of the SET.
  • Incoming traffic, instead, is distributed exactly as in Hyper-V Port mode.

For the configuration of SET, as for all components belonging to software-defined networking (SDN), it is recommended to adopt System Center Virtual Machine Manager (VMM). When configuring the Logical Switch, simply select "Embedded" as the Uplink Mode, as shown in Figure 2.

Figure 2 – SET configuration in VMM

As an alternative, you can configure SET using the following PowerShell commands.

Creating a Virtual Switch in SET mode

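A minimal sketch, assuming two physical adapters with the hypothetical names "NIC1" and "NIC2" (check Get-NetAdapter for the actual names on your host):

  # Create a Virtual Switch in SET mode by listing multiple adapters;
  # when more than one NIC is specified, the SET team is created automatically.
  New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1", "NIC2" `
               -EnableEmbeddedTeaming $true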

The EnableEmbeddedTeaming parameter is optional when multiple network adapters are listed, since in that case a SET team is created automatically; it is useful when you want to configure a Virtual Switch in this mode with a single network adapter, to be extended with additional NICs later, as in the sketch below.
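
For example (hypothetical names again), you can start with a single NIC and extend the team afterwards:

  # With a single adapter the flag is required to obtain a SET-capable switch.
  New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1" -EnableEmbeddedTeaming $true

  # Extend the team with a second NIC at a later time.
  Add-VMSwitchTeamMember -VMSwitchName "SETswitch" -NetAdapterName "NIC2"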

Changing the traffic distribution algorithm

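A sketch, again assuming the switch is named "SETswitch"; the valid values are HyperVPort and Dynamic:

  # Change the load-balancing algorithm of the SET team to Dynamic.
  Set-VMSwitchTeam -Name "SETswitch" -LoadBalancingAlgorithm Dynamic

  # Verify the resulting configuration.
  Get-VMSwitchTeam -Name "SETswitch" | Format-List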

Conclusions

Thanks to this new mechanism for creating Virtual Switches introduced in Hyper-V 2016, you gain more flexibility in network management, reducing the number of network adapters required and the configuration complexity, while still enjoying high performance and high availability.
