Category Archives: Windows Server 2016

Azure Backup Server v2 in Windows Server 2016

Azure Backup Server is a solution that has been on the market since October 2015; in the spring of this year the second version of the product was released, named Azure Backup Server v2 (MABS v2), which supports installation on Windows Server 2016. Azure Backup Server inherits the capabilities of System Center Data Protection Manager, with the substantial difference that it does not support backup to tape. Deploying Azure Backup Server v2 on Windows Server 2016 allows the use of Modern Backup Storage which, thanks to the new technologies introduced in Windows Server 2016, improves backup performance, reduces storage consumption and increases resilience and security in the protection of virtual machines. This article describes how to implement Azure Backup Server v2 and the steps to follow to take advantage of these benefits through native integration with Windows Server 2016.

Installation requirements

Azure Backup Server v2 (MABS v2) can be deployed on a standalone physical server, on a virtual machine running in VMware or Hyper-V, or on a virtual machine hosted in Azure.

The operating system can be Windows Server 2012 R2, but Windows Server 2016 is recommended in order to benefit from the advantages of Modern Backup Storage. The machine must be joined to an Active Directory domain and must be able to reach Microsoft Azure over the Internet, even if you decide not to send the protected data to the cloud.

Regarding hardware specifications, Microsoft recommends the following.

Processor
Minimum: 1 GHz, dual-core CPU.
Recommended: 2.33 GHz, quad-core CPU.

Memory (RAM)
Minimum: 4 GB.
Recommended: 8 GB.

Disk space
Software installation: about 8-10 GB recommended.
Storage pools: 1.5 times the amount of data you wish to protect.
Scratch location: at least 5% of the total size of the data protected in the cloud.

With regard to software requirements, you must install Microsoft .NET 3.5 SP1, Microsoft .NET 4.6 and the Hyper-V PowerShell module.
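These prerequisites can be added with Server Manager or, as a quick sketch, with PowerShell (feature names as in Windows Server 2016; the .NET 3.5 feature may require pointing -Source at the installation media):

```powershell
# Install .NET Framework 3.5 and the Hyper-V PowerShell module
Install-WindowsFeature -Name NET-Framework-Core, Hyper-V-PowerShell
```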

Finally, you need to create a Recovery Services vault in your own Azure subscription, which will be associated with the Azure Backup Server. The Azure Backup Server setup will require the vault credentials, which can be downloaded from the vault properties in the Azure Portal:
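If you prefer to script this step, a minimal sketch with the AzureRM modules of that period might look as follows (vault name, resource group and region are illustrative):

```powershell
# Hypothetical names: adapt vault, resource group and location to your subscription
$vault = New-AzureRmRecoveryServicesVault -Name "MABSVault" `
    -ResourceGroupName "RG-Backup" -Location "West Europe"

# Download the vault credentials file that the MABS v2 setup will ask for
Get-AzureRmRecoveryServicesVaultSettingsFile -Vault $vault -Path "C:\Temp"
```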

Figure 1 – Backup Download Credentials


Installation procedure

The Azure Backup Server v2 installation package can be downloaded directly from this Microsoft page. After downloading the various files, run the executable MicrosoftAzureBackupServerInstaller.exe to extract the installer binaries into a single folder. From the chosen folder, run the Setup.exe file to start the installation process documented below.

Figure 2 – Select Install Microsoft Azure Backup Server

Figure 3 – Welcome page

Figure 4 – Check the prerequisites

Azure Backup Server requires a Microsoft SQL Server instance to host its database. If you do not have an existing instance to use (at least SQL Server 2014 SP1 is required), the setup installs SQL Server 2016 Service Pack 1 (recommended by Microsoft). In this scenario you do not need to acquire a SQL Server license, as long as the instance is for the exclusive use of MABS v2.

Figure 5 - Choice of the SQL Server instance that hosts the MABS v2 databases and check of the requirements

If the Hyper-V PowerShell module is not installed, the setup will install it, but you will need to stop the installation and restart the system.

Figure 6 – Requirements not met and restart required for Hyper-V Powershell module installation

Figure 7 – Requirements met

Figure 8 – Choice of installation path

The MABS v2 setup creates the local account MICROSOFT$DPM$Acct, which will run the SQL Server and SQL Server Agent services, and the account DPMR$Servername, used for report generation.

Figure 9 – Choice of password for the MICROSOFT$DPM$Acct and DPMR$Servername

Figure 10 – Choice of deploying updates to MABS v2 via Windows Update

Figure 11 - Summary concerning the installation choices

At this point the setup of the Microsoft Azure Recovery Services (MARS) agent starts; the agent is required to connect to the Recovery Services vault in Microsoft Azure.

Figure 12 - Configuration of the proxy server if required for access to public services in Microsoft Azure

Figure 13 – Verification of the presence of the necessary requirements and installation of MARS

After installing the MARS agent, the registration of the Azure Backup Server to the Azure Recovery Services vault starts; it requires the vault credentials (retrievable by following the step documented in Figure 1) and the passphrase used to encrypt the stored data. You should save this passphrase in a safe place, as it is necessary during recovery operations and cannot be recovered in any way by Microsoft staff.

Figure 14 – Choice of the Backup Vault Credentials

Figure 15 – Passphrase for encryption of backups

After completing these steps, you must wait for the installation process to finish.

Figure 16 - MABS v2 installation completed successfully

Before proceeding with the configuration of MABS v2, it is recommended to apply the latest update available for Microsoft Azure Backup Server v2, which can be downloaded from the Microsoft support site.

At this point, you should configure the newly installed SQL Server instance according to your needs, and it is recommended to apply the latest Cumulative Update available for SQL Server 2016 Service Pack 1.


Features provided by the integration between MABS v2 and Windows Server 2016

Azure Backup Server v2 integrates natively with the new technologies available in Windows Server 2016, so you can enjoy the following benefits:

  • Greater efficiency in backup operations: using ReFS Block Cloning, VHDX and Deduplication you get a reduction in the storage needed to protect data and improved backup performance. Modern Backup Storage can be configured by following the steps in the official documentation, which, although relating to DPM 2016, is identical for Azure Backup Server v2. Also very interesting is the Workload-Aware Storage feature, which allows you to select which volumes to use depending on the type of workload being protected, giving you the opportunity to dedicate the most efficient storage to the most frequent backup tasks, for which high performance is desirable.
  • Higher reliability in Hyper-V virtual machine protection: thanks to the integration with Resilient Change Tracking (RCT) technology, changes made to VMs since the last backup can be tracked natively, without the need to add filter drivers. This reduces the time spent on consistency checks.
  • Security: ability to back up and restore Shielded VMs.


Costs of the solution

Regarding the cost of the solution, it should be noted that you obviously need to account for the operating system license of the machine on which you install MABS v2. An interesting aspect is that implementing Azure Backup Server does not require any System Center license, but you must have an Azure subscription. In the cost of the solution you should consider a fee for each protected instance and for any storage consumed in Microsoft Azure. For more details on the cost of the solution, please consult the official Microsoft pricing page.



Azure Backup Server v2, with its cloud-first approach and its integration with certain features of Windows Server 2016, is a complete and functional solution for the protection of different workloads. Those using the first release of Azure Backup Server can upgrade to MABS v2 keeping all their settings. The advice is, in any case, to implement MABS v2 on Windows Server 2016, so as to have a solution that performs backups up to 3 times faster and reduces storage utilization by up to 50%.

Windows Server 2016: Introduction to Hyper-V VHD Set

In Windows Server 2016 a cool new feature called VHD Set was introduced in Hyper-V. This is a new way of creating virtual disks that need to be shared among multiple virtual machines, useful for implementing guest clusters. In this article you will learn the characteristics of VHD Sets, how to implement them at their best and how to effectively address migration scenarios.


The ability to share virtual disks across multiple virtual machines is required for guest cluster configurations that need shared storage, and it avoids having to configure access to the storage through, for example, virtual HBAs or the iSCSI protocol.

Figure 1 – VHD Set

In Hyper-V this capability was introduced with Windows Server 2012 R2 through a technology called Shared VHDX, which has the following important limitations that often prevent its use in production environments:

  • Backing up a Shared VHDX requires specific agents, and host-based backup is not supported
  • Hyper-V Replica scenarios are not supported
  • Online resizing of a Shared VHDX is not supported

In Hyper-V on Windows Server 2016 this feature was revolutionized with the introduction of VHD Sets in place of Shared VHDX, which removes the limitations listed above, making it a mature and reliable technology even for production environments. In fact, virtual machines configured to access a VHD Set can be protected with host-based backup, without having to install agents on the guest machines. In this case we recommend checking whether your backup solution supports this configuration. Disks in the VHD Set format also support online resizing, without the need to stop the guest cluster accessing them. Hyper-V Replica also supports disks in the VHD Set format, allowing you to implement disaster recovery scenarios for guest cluster configurations.

At the moment the only limitations in using VHD Sets are the lack of support for checkpoints of the virtual machines that access them and the inability to perform a storage live migration of virtual machines with a VHD Set. Microsoft's goal for the future is, in any case, to bring all the remaining functionality to virtual machines configured with a VHD Set.

Requirements for using VHD Set

The VHD Set format is supported only for Windows Server 2016 guest operating systems. To configure a guest cluster whose virtual machines access shared virtual disks, you must fall into one of the following scenarios:

  • Hyper-V failover cluster with all the VM files, including those in the VHD Set format, residing on a Cluster Shared Volume (CSV).
  • Hyper-V failover cluster that uses as storage location for the VHD Set an SMB 3.0 share provided by a Scale-Out File Server (SOFS).

Figure 2 – Supported scenarios for using shared virtual disks

How To Create VHD Set

Virtual disks in the VHD Set format can be created either with the graphical user interface (GUI) or using PowerShell. To create them via the GUI, simply open Hyper-V Manager and from the Actions pane select New > Hard Disk. Among the possible formats you will also find VHD Set, as shown in the following figure:

Figure 3 – Selecting the virtual disk format in the creation wizard

Continuing with the wizard, you can specify whether the disk should be Fixed or Dynamic, its name, its location and its size if you choose to create a new blank disk. The same thing can also be done with the PowerShell cmdlet New-VHD, specifying the new .vhds extension for the virtual disk, as shown in the following example:

Figure 4 – Example of creating a disk in the VHD Set format using PowerShell
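As a sketch, the command shown above is along these lines (path and size are illustrative):

```powershell
# Create a 100 GB dynamic shared disk in the VHD Set format (.vhds extension)
New-VHD -Path "C:\ClusterStorage\Volume1\GuestCluster\Data01.vhds" `
    -SizeBytes 100GB -Dynamic
```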

Creating a disk in the VHD Set format generates the following files in the specified location:

Figure 5 – Files generated by creating a disk in the VHD Set format

The file with the .avhdx extension contains the data and can be fixed or dynamic depending on the choice made at creation time, while the .vhds file contains the metadata required to coordinate access by the different guest cluster nodes.

Virtual machine configuration with VHD Set

To add disks in the VHD Set format to the virtual machines, modify their properties and add a Shared Drive under the SCSI controller:

Figure 6 – Addition of the Shared Drive in the properties of the VM

Next you must select the location of the file:

Figure 7 – Configuring the location of the shared drive

The same thing must be done for all the virtual machines that will make up the guest cluster. After configuring the shared storage, which adds the disks in the VHD Set format to the virtual machines, you can continue configuring the guest cluster environment according to the standard cluster creation procedure described in Microsoft's official documentation.

Converting a Shared VHDX into a VHD Set

In scenarios of Hyper-V infrastructure upgrades from Windows Server 2012 R2 to Windows Server 2016, you may have to deal with the migration of Shared VHDX disks to VHD Sets in order to take advantage of all the benefits of the new virtual disk sharing technology. When moving to Windows Server 2016 there is no automatic conversion of Shared VHDX to VHD Set, and you are not prevented from continuing to use shared disks in the Shared VHDX format on Windows Server 2016. To migrate a Shared VHDX to the VHD Set format you need to follow these manual steps:

  • Shut down all virtual machines connected to the Shared VHDX you intend to migrate.
  • Disconnect the Shared VHDX from all VMs using the PowerShell cmdlet Remove-VMHardDiskDrive or Hyper-V Manager.
  • Convert the Shared VHDX to the VHD Set format using the PowerShell cmdlet Convert-VHD.
  • Connect the newly converted VHD Set disk to all VMs using the PowerShell cmdlet Add-VMHardDiskDrive or Hyper-V Manager.
  • Turn on the virtual machines connected to the VHD Set.
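The steps above can be sketched in PowerShell as follows (VM names and paths are illustrative and must be adapted to your environment):

```powershell
$vms = "GuestNode1","GuestNode2"          # hypothetical guest cluster nodes
$old = "C:\ClusterStorage\Volume1\Data01.vhdx"
$new = "C:\ClusterStorage\Volume1\Data01.vhds"

# 1-2. Shut down the VMs and disconnect the Shared VHDX from each of them
Stop-VM -Name $vms
foreach ($vm in $vms) {
    Get-VMHardDiskDrive -VMName $vm |
        Where-Object Path -eq $old |
        Remove-VMHardDiskDrive
}

# 3. Convert the Shared VHDX to the VHD Set format
Convert-VHD -Path $old -DestinationPath $new

# 4-5. Reconnect the converted disk and restart the VMs
foreach ($vm in $vms) {
    Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI -Path $new
}
Start-VM -Name $vms
```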

When using disks in the VHD Set format, the following PowerShell cmdlets can be useful:

  • Get-VHDSet: useful for displaying various information about a disk in the VHD Set format, including the list of any checkpoints.
  • Optimize-VHDSet: needed to optimize the allocation of the space used by a disk in the VHD Set format.
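For example (the path is illustrative):

```powershell
# Inspect a VHD Set disk and reclaim unused space
Get-VHDSet -Path "C:\ClusterStorage\Volume1\Data01.vhds"
Optimize-VHDSet -Path "C:\ClusterStorage\Volume1\Data01.vhds"
```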


In Windows Server 2016 the introduction of VHD Sets in Hyper-V enables you to easily implement guest cluster architectures without using storage sharing technologies that require heavy and complex configurations. Most of the restrictions on the virtual disk sharing mechanism present in the previous version of Hyper-V have also been removed, making VHD Set a mature and reliable technology that can also be used in production environments.

OMS Log Analytics: How to monitor Azure networking

Log Analytics offers specific solutions that allow you to monitor some components of the network infrastructure deployed in Microsoft Azure.

Among these solutions is Network Performance Monitor (NPM), which was examined in depth in the article Monitor network performance with the new solution of OMS and which lends itself very well to monitoring the health, availability and reachability of Azure networking. The following solutions, which enrich the monitoring capabilities on the OMS side, are also currently available in the Operations Management Suite gallery:

  • Azure Application Gateway Analytics
  • Azure Network Security Group Analytics

Enabling Solution

From the OMS portal you can easily add these solutions from the gallery by following the steps documented in the following article: Add Azure Log Analytics management solutions to your workspace (OMS).

Figure 1 – Azure Application Gateway Analytics solution

Figure 2 – Azure Network Security Group Analytics solution

Azure Application Gateway Analytics

The Azure Application Gateway is a service that you can configure in an Azure environment to provide application delivery functionality, ensuring layer-7 load balancing. More information regarding the Azure Application Gateway can be found in the official documentation.

In order to collect the diagnostic logs in Log Analytics, go to the Application Gateway resource that you want to monitor in the Azure portal and then, under Diagnostics logs, select sending the logs to the desired Log Analytics workspace:

Figure 3 – Application Gateway Diagnostics settings

For the Application Gateway you can select the collection of the following logs:

  • Access logs
  • Performance data
  • Firewall logs (if the Application Gateway has the Web Application Firewall enabled)
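The same diagnostic settings can also be applied with the AzureRM PowerShell module of that period; a sketch, with resource names that are purely illustrative:

```powershell
# Hypothetical resource names: adapt to your subscription
$appGw     = Get-AzureRmApplicationGateway -Name "AppGw01" -ResourceGroupName "RG-Net"
$workspace = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName "RG-OMS" -Name "OMSWorkspace"

# Send the diagnostic logs directly to the Log Analytics workspace
Set-AzureRmDiagnosticSetting -ResourceId $appGw.Id `
    -WorkspaceId $workspace.ResourceId -Enabled $true
```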

After completing these simple steps, from the OMS portal you can access the solution and consult the data sent by the platform:

Figure 4 – Azure Application Gateway Analytics overview in the OMS Portal

Within the solution, you can view a summary of the collected information, and by selecting the individual charts you can access details about the following categories:

  • Application Gateway access log
    • Client and server errors in the Application Gateway access log
    • Requests received by the Application Gateway per hour
    • Failed requests per hour
    • Errors detected per user agent

Figure 5 – Application Gateway Access log

  • Application Gateway performance
    • Health status of the hosts behind the Application Gateway
    • Failed requests of the Application Gateway, expressed as a maximum and as the 95th percentile

Figure 6 – Application Gateway performance

  • Application Gateway Firewall log


Azure Network Security Group Analytics

In Azure, you can control network communication through Network Security Groups (NSGs), each of which aggregates a set of rules (ACLs) to allow or deny network traffic based on direction (inbound or outbound), protocol, source address and port, and destination address and port. NSGs are used to control and protect virtual networks or network interfaces. For all the details about NSGs please consult Microsoft's official documentation.

In order to collect the Network Security Group diagnostic logs in Log Analytics, go to the Network Security Group resource that you want to monitor in the Azure Portal and then, under Diagnostics logs, select sending the logs to the desired Log Analytics workspace:

Figure 7 – Enabling NSG Diagnostics

Figure 8 – Diagnostic configuration NSG

On the Network Security Group you can collect the following types of logs:

  • Events
  • Rule counters

At this point, from the OMS portal home page you can select the overview tile of the Azure Network Security Group Analytics solution to access the NSG data collected by the platform:

Figure 9 – Azure Network Security Group Analytics Overview OMS Portal

The solution provides a summary of the logs collected by splitting them into the following categories:

  • Network security group blocked flows
    • Network security group rules with blocked traffic
    • Network routes with blocked traffic

Figure 10 – Network security group blocked flows

  • Network security group allowed flows
    • Network security group rules with allowed traffic
    • Network routes with allowed traffic

Figure 11 – Network security group allowed flows

The method of sending the diagnostic logs of Azure Application Gateways and Network Security Groups to Log Analytics has recently changed, introducing the following advantages:

  • The logs are written to Log Analytics directly, without having to use a storage account as a repository. You can still choose to save the diagnostic logs to a storage account, but it is not necessary in order to send the data to OMS.
  • The latency between log generation and its availability in Log Analytics has been reduced.
  • The steps required for the configuration have been greatly simplified.
  • The format of all Azure diagnostics has been harmonized.


Thanks to a more complete integration between Azure and Operations Management Suite (OMS), you can monitor and control the status of the components of the network infrastructure built on Azure comprehensively and effectively, all with simple and intuitive steps. This integration of the Azure platform with OMS is surely destined to be enriched with new solutions specific to other components. For those interested in further exploring this and other OMS features, remember that you can try OMS for free.

Windows Server 2016: Configuring the Failover Cluster Witness in the Cloud

In the article Windows Server 2016: What's New in Failover Clustering all the main innovations introduced with Windows Server 2016 in failover clustering were examined in depth. In this article we will detail the configuration of the cluster witness in the Microsoft Azure cloud, analyzing the possible scenarios and the benefits of this new feature.


Possible scenarios supported by the Cloud Witness

Among the supported scenarios that lend themselves more to this type of configuration are:

  • Multi-site stretched clusters.
  • Failover clusters that do not require shared storage (SQL Server Always On, Exchange DAGs, etc.).
  • Failover clusters composed of nodes hosted in Microsoft Azure or in other public or private clouds.
  • Scale-Out File Server clusters.
  • Clusters deployed in small branch offices.


Cloud Witness configuration

We begin by specifying that a requirement for configuring the cluster to use the Cloud Witness is that all the nodes that make up the cluster have Internet access to Azure. The Cloud Witness in fact uses the HTTPS protocol (port 443) to establish a connection with the blob service of the Azure Storage Account.
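Connectivity from each node can be quickly verified with PowerShell (the storage account name is a placeholder):

```powershell
# Check HTTPS reachability of the Azure blob service endpoint from the cluster node
Test-NetConnection -ComputerName "mywitnessaccount.blob.core.windows.net" -Port 443
```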


Configuring the Cloud Witness requires an Azure subscription in which to create a Storage Account that will be used as the Cloud Witness and to which the blob files used for cluster arbitration will be written.


From the Azure portal you must create a storage account of the General Purpose type. For this purpose it is appropriate to create it with the Standard performance level, since the high performance provided by the use of SSDs is not necessary. After selecting the most suitable location and replication policy, you can proceed with the creation process.


Figure 1 – Storage Account creation


After creating the storage account, you must retrieve one of its access keys, required for authentication, which will be requested in the configuration steps.


Figure 2 – Account Storage access keys


At this point you can change the cluster quorum settings from Failover Cluster Manager by following the steps below:


Figure 3 – Failover Cluster Manager quorum settings configuration


Figure 4 – Witness Quorum selection


Figure 5 – Selection of Cloud Witness


Figure 6 – Storage Account name and access key
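The same configuration shown in the figures can also be performed with PowerShell (account name and key are placeholders):

```powershell
# Configure the Cloud Witness using the Storage Account name and one of its access keys
Set-ClusterQuorum -CloudWitness -AccountName "mywitnessaccount" `
    -AccessKey "<primary-access-key>"
```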


After a successful configuration, the Cloud Witness will appear among the various cluster resources:


Figure 7 – Cloud Witness resource


In the Azure Storage Account a container named msft-cloud-witness is created, inside which there will be a single blob file that has the unique ID of the cluster as its name. This means that you can use the same Microsoft Azure Storage Account to configure the Cloud Witness of different clusters: there will be one blob file for each cluster.


Figure 8 – Container inside of the Storage Account and its contents


Advantages of using Cloud Witness

The use of the Cloud Witness brings the following benefits:

  • It eliminates the need for an additional separate data center for certain cluster configurations, by leveraging Microsoft Azure.
  • It removes the administrative effort required to maintain an additional virtual machine with the cluster witness role.
  • Given the small amount of data written to the Storage Account, the cost of the service is negligible.
  • The same Microsoft Azure Storage Account can be used as a witness for different clusters.



With Windows Server 2016 the failover cluster proves ready for integration with the cloud. With the introduction of the Cloud Witness it is possible to implement cluster systems more easily, substantially reducing the overall implementation costs and the management effort while increasing the flexibility of cluster architectures.

Windows Server 2016: What's New in Failover Clustering

Very frequently, in order to ensure high availability and business continuity for critical applications and services, you need to implement a Microsoft failover cluster. In this article we'll delve into the main innovations introduced with Windows Server 2016 in failover clustering and analyse the advantages of adopting the latest technology.

Cluster Operating System Rolling Upgrade

Windows Server 2016 introduces an important feature that allows you to upgrade the nodes of a Hyper-V or Scale-Out File Server cluster from Windows Server 2012 R2 to Windows Server 2016 without any disruption and without stopping the hosted workloads.

The upgrade process involves these steps:

  • Pause the node that you want to upgrade and move all the virtual machines or other workloads to the other nodes in the cluster
  • Remove the node from the cluster and perform a clean installation of Windows Server 2016
  • Add the Windows Server 2016 node to the existing cluster. From this moment the cluster runs in mixed mode, with both Windows Server 2012 R2 and Windows Server 2016 nodes. In this regard it should be noted that the cluster will continue to provide its services at the Windows Server 2012 R2 level, and the features introduced in Windows Server 2016 will not yet be available. At this stage you can add and remove both Windows Server 2012 R2 and Windows Server 2016 nodes
  • Upgrade all the cluster nodes in the same way as described above
  • Only when all the cluster nodes have been upgraded to Windows Server 2016 can you raise the cluster functional level to Windows Server 2016. This operation is not reversible, and to complete it you must use the PowerShell cmdlet Update-ClusterFunctionalLevel. After you run this command you can reap all the benefits introduced in Windows Server 2016 listed below
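The per-node cycle described above relies on cmdlets along these lines (the node name is illustrative):

```powershell
# Drain and remove the node to be reinstalled with Windows Server 2016
Suspend-ClusterNode -Name "Node1" -Drain
Remove-ClusterNode -Name "Node1"

# ...clean installation of Windows Server 2016 on the node...

# Re-add the node to the existing cluster
Add-ClusterNode -Name "Node1"

# Only once ALL nodes run Windows Server 2016, raise the functional level (irreversible)
Update-ClusterFunctionalLevel
```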

Cloud Witness

Windows Server 2016 introduces the ability to configure the cluster witness directly in the Microsoft Azure cloud. The Cloud Witness, just like the other types of witness, provides a vote and participates in the quorum calculation.

Figure 1 – Cloud Witness in Failover Cluster Manager

Configuring the Cloud Witness involves two simple steps:

  • Creating, in an Azure subscription, a Storage Account that the Cloud Witness will use
  • Configuring the Cloud Witness in one of the following ways


Failover Cluster Manager

Figure 2 – Cloud Witness Configuration Step 1

Figure 3 – Cloud Witness Configuration Step 2


Figure 4 – Cloud Witness Configuration Step 3

The use of the Cloud Witness brings the following benefits:

  • It leverages Microsoft Azure, eliminating the need for an additional separate data center for certain cluster configurations
  • It works directly with Microsoft Azure Blob Storage, removing the administrative effort required to keep a virtual machine in a public cloud
  • The same Microsoft Azure Storage Account can be used for multiple clusters
  • Given the small amount of data written to the Storage Account, the cost of the service is negligible

Site-Aware Failover Clusters

Windows Server 2016 introduces the concept of site-aware failover clusters and is able to group the cluster nodes based on their geographical location (site) in stretched configurations. During the lifetime of a site-aware cluster, the placement policies, the heartbeat between the nodes, the failover operations and the quorum calculation are designed and improved for this particular cluster configuration. For more details I invite you to consult the article Site-aware Failover Clusters in Windows Server 2016.

Multi-domain and workgroup Cluster

In Windows Server 2012 R2 and in previous versions of Windows, all the nodes of a cluster must necessarily belong to the same Active Directory domain. Windows Server 2016 removes these barriers and introduces the ability to create a failover cluster without Active Directory dependencies.

Windows Server 2016 supports the following configurations:

  • Single-domain cluster: cluster whose nodes are all joined to the same domain
  • Multi-domain cluster: cluster composed of nodes joined to different Active Directory domains
  • Workgroup cluster: cluster whose nodes are in a workgroup (not joined to a domain)

In this regard it is worth specifying the supported workloads and their limitations for Multi-domain and Workgroup clusters:

  • SQL Server: supported. SQL Server authentication is recommended.
  • File Server: supported, but not recommended. Kerberos authentication (not available in these environments) is the recommended authentication protocol for Server Message Block (SMB) traffic.
  • Hyper-V: supported, but not recommended. Live Migration is not supported, only Quick Migration.
  • Message Queuing (MSMQ): not supported. Message Queuing stores its properties in AD DS.

Diagnostic in Failover Clustering

In Windows Server 2016 several innovations have been introduced to facilitate troubleshooting of problems in cluster environments.

SMB Multichannel and Multi-NIC Cluster Network

Windows Server 2016 brings several networking improvements for clustered environments that simplify configuration and deliver better performance.

The main benefits introduced in Windows Server 2016 can be summarised in the following points:

  • SMB Multichannel is enabled by default
  • The failover cluster automatically recognizes the NICs attached to the same subnet and the same switch
  • A single IP Address resource is configured for each Cluster Access Point (CAP) Network Name (NN)
  • Networks with only link-local IPv6 addresses (fe80::) are recognized as private (cluster-only) networks
  • Cluster validation no longer reports warning messages when multiple NICs are attached to the same subnet

For more information I refer you to the Microsoft documentation: Simplified SMB Multichannel and Multi-NIC Cluster Networks.


Windows Server 2016 introduces major changes in the Failover Clustering making the solution more flexible and opening up new configuration scenarios. Furthermore the upgrade process allows us to easily update existing clusters to take advantage of all the benefits introduced by Windows Server 2016 for different workloads.

Windows Server 2016: the new Virtual Switch in Hyper-V

In this article we'll delve into the characteristics of, and see how to configure, a Hyper-V Virtual Switch in Windows Server 2016 in Switch Embedded Teaming (SET) mode. This is a new technology, an alternative to NIC Teaming, that allows you to combine multiple network adapters of the same physical virtualization host into a Hyper-V Virtual Switch.

Windows Server 2012 introduced the ability to create a network team natively in the operating system (up to a maximum of 32 network adapters) without having to install vendor-specific software. It was common practice on Hyper-V virtualization hosts to define virtual switches bound to these NIC teams. To have high availability and to balance the network traffic of the virtual machines it was necessary to combine these two different constructs: the team and the Hyper-V Virtual Switch. Regarding this configuration, it should be noted that traditional LBFO (Load Balancing and Failover) teaming was not compatible with RDMA.

Windows Server 2016 introduces a further possibility for the Hyper-V Virtual Switch: the configuration called Switch Embedded Teaming (SET), shown in Figure 1, which allows you to combine multiple network adapters (up to a maximum of 8) into a single Virtual Switch without configuring any teaming. SET embeds the network teaming within the Virtual Switch, providing high performance and fault tolerance in the face of hardware failures of a single NIC. In this configuration it is possible to use RDMA technology on the individual network adapters, so there is no longer the need to have a separate set of NICs (one for use with the Virtual Switch and one for RDMA).

Figure 1 – SET architecture

When evaluating the adoption of Switch Embedded Teaming (SET), it is important to consider its compatibility with other networking technologies.

SET is compatible with:

  • Datacenter Bridging (DCB)
  • Hyper-V Network Virtualization, both NVGRE and VXLAN
  • Receive-side checksum offloads (IPv4, IPv6, TCP) – if supported by the NIC model
  • Remote Direct Memory Access (RDMA)
  • SDN Quality of Service (QoS)
  • Transmit-side checksum offloads (IPv4, IPv6, TCP) – if supported by the NIC model
  • Virtual Machine Queues (VMQ)
  • Virtual Receive Side Scaling (vRSS)

SET is not compatible with the following network technologies:

  • 802.1X authentication
  • IPsec Task Offload (IPsecTO)
  • QoS (as implemented on the host side)
  • Receive Side Coalescing (RSC)
  • Receive Side Scaling (RSS)
  • Single Root I/O Virtualization (SR-IOV)
  • TCP Chimney Offload
  • Virtual Machine QoS (VM-QoS)


Differences from the NIC Teaming

Switch Embedded Teaming (SET) differs from traditional NIC Teaming in particular in the following aspects:

  • When deploying SET, the use of NICs in standby mode is not supported: all network adapters must be active;
  • You cannot assign a specific name to the team, but only to the Virtual Switch;
  • SET only supports Switch Independent mode, while NIC Teaming has several operating modes. This means that the network switches to which the SET member NICs are connected are not aware of the presence of this configuration and therefore do not introduce any control over how network traffic is distributed among the different members.

When you configure SET you only need to specify which network adapters belong to the team and the traffic load-balancing mechanism.

A SET Virtual Switch should consist of network adapters certified by Microsoft that have passed the "Windows Hardware Qualification and Logo (WHQL)" compatibility tests for Windows Server 2016. An important aspect is that the NICs must be identical in terms of brand, model, driver and firmware.

Network traffic can be distributed among the different SET members using two modes: Hyper-V Port and Dynamic.

Hyper-V Port

In this configuration, network traffic is divided among the team members based on the virtual switch port and the associated virtual machine's MAC address. This mode is particularly suitable when used in conjunction with Virtual Machine Queues (VMQs). Keep in mind that when there are only a few virtual machines on the virtualization host, traffic may not be balanced evenly, since the mechanism is not very granular. In this mode, the bandwidth available to a virtual machine's network adapter (whose traffic always flows from a single switch port) is also limited to the bandwidth of a single physical network interface.


Dynamic

This load-balancing mechanism has the following features:

  • Outgoing traffic is distributed (based on a hash of TCP ports and IP addresses) according to the principle of flowlets (based on the natural breaks present in TCP communication flows). Dynamic mode also includes a mechanism that re-balances traffic in real time among the various SET members.
  • Incoming traffic, instead, is distributed exactly as in Hyper-V Port mode.

For the configuration of SET, as for all components belonging to software-defined networking (SDN), it is recommended to adopt System Center Virtual Machine Manager (VMM). When configuring the Logical Switch, simply select "Embedded" as the Uplink Mode, as shown in Figure 2.

Figure 2 – SET configuration in VMM

As an alternative, you can configure SET using the following PowerShell commands:

Create Virtual Switch in SET

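A minimal sketch of the command, using the EnableEmbeddedTeaming parameter of New-VMSwitch; the switch and adapter names are illustrative and should be adapted to your host:

```powershell
# Create a SET-mode Virtual Switch that joins two physical NICs;
# no separate team object is created
New-VMSwitch -Name "SETswitch" `
    -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true
```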

The EnableEmbeddedTeaming parameter is optional when multiple network adapters are listed, but it is useful when you want to create a Virtual Switch in this mode with a single network adapter and extend it with additional NICs later.

Change the traffic distribution algorithm

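Switching the SET team between the Hyper-V Port and Dynamic algorithms can be sketched with Set-VMSwitchTeam; the switch name is illustrative:

```powershell
# Change the load-balancing algorithm of an existing SET team
Set-VMSwitchTeam -Name "SETswitch" -LoadBalancingAlgorithm Dynamic

# Verify the resulting team configuration
Get-VMSwitchTeam -Name "SETswitch" | Format-List
```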


Thanks to this new mechanism for creating Virtual Switches introduced in Hyper-V 2016, you gain more flexibility in network management, reducing the number of network adapters and the configuration complexity while enjoying high performance and high availability.

Windows Server 2016: What’s New in Hyper-V

Windows Server 2016 has been officially released and contains several new features related to Hyper-V that make it an increasingly powerful virtualization platform. In this article I will show the new Hyper-V features in Windows Server 2016, paying particular attention to the changes from the previous version.

Nested Virtualization

This feature allows a virtual machine to run the Hyper-V role and consequently host other virtual machines. It is useful for test and development environments, but it is not suitable for production. To use nested virtualization, the following requirements must be met:

  • The virtual machine running the Hyper-V role must have at least 4 GB of RAM
  • The guest virtual machines must also run Windows Server 2016
  • Nested virtualization is available only if the physical host running the VM with Hyper-V has Intel processors with VT-x and EPT

For further information you can read Windows Server 2016: Introduction to Hyper-V Nested Virtualization by Silvio Di Benedetto or the Microsoft document Nested Virtualization.
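Nested virtualization is enabled per virtual machine from the physical host. A minimal sketch, assuming a powered-off VM whose name ("NestedHost") is illustrative:

```powershell
# Expose the hardware virtualization extensions to the VM
Set-VMProcessor -VMName "NestedHost" -ExposeVirtualizationExtensions $true

# Enable MAC address spoofing so that the nested guests can reach the network
Set-VMNetworkAdapter -VMName "NestedHost" -MacAddressSpoofing On
```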

Networking Features

Networking also introduces important new features that allow you to take full advantage of the hardware and obtain better performance:

  • Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET). Switch Embedded Teaming (SET) is a new technology, an alternative to NIC Teaming, that allows multiple network adapters to be joined to the same Hyper-V Virtual Switch. Prior to Windows Server 2016 it was necessary to have a separate set of NICs (one for use with the Virtual Switch and one to take advantage of RDMA), since OS-level teaming was not compatible with RDMA. In Windows Server 2016 it is possible to use RDMA on network adapters associated with a Virtual Switch, configured with or without Switch Embedded Teaming (SET)
  • Virtual Machine Multi Queues (VMMQ). Improves VMQ throughput by allocating multiple hardware queues per virtual machine
  • Quality of Service (QoS) for software-defined networks

Hot add and remove for network adapters and memory

For Generation 2 virtual machines running either Windows or Linux, you can add or remove network adapters while the virtual machine is running, without any downtime. In addition, for both Generation 1 and Generation 2 virtual machines running Windows Server 2016 or Windows 10, you can change the amount of RAM assigned while the machine is running, even if Dynamic Memory is not enabled.
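Both operations can be sketched with the Hyper-V cmdlets below, run against a running VM; the VM and switch names and the memory value are illustrative:

```powershell
# Hot add a network adapter to a running Generation 2 VM
Add-VMNetworkAdapter -VMName "VM01" -SwitchName "SETswitch"

# Change the assigned RAM at runtime (works even without Dynamic Memory)
Set-VMMemory -VMName "VM01" -StartupBytes 8GB
```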

Hyper-V Cluster Rolling Upgrade

The changes in clustering are important. It is now possible to add a Windows Server 2016 node to an existing Hyper-V cluster consisting of Windows Server 2012 R2 nodes. This allows us to update the cluster systems without any downtime. As long as not all cluster nodes have been upgraded to Windows Server 2016, the cluster retains the features of Windows Server 2012 R2. After all nodes have been updated, you must raise the cluster functional level with the PowerShell cmdlet Update-ClusterFunctionalLevel.
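The final step of the rolling upgrade can be sketched as follows, run on any node once all nodes are on Windows Server 2016:

```powershell
# Raise the cluster functional level to Windows Server 2016
# (irreversible: after this, 2012 R2 nodes can no longer join)
Update-ClusterFunctionalLevel

# Verify the result: a value of 9 corresponds to Windows Server 2016
Get-Cluster | Select-Object Name, ClusterFunctionalLevel
```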

Start Order Priority for Clustered Virtual Machines

Thanks to this feature you can obtain more control over virtual machine boot priority in clustered environments. This can be useful to start virtual machines that provide services before the virtual machines that consume those services. All this is easily achieved by creating sets, assigning virtual machines to the different sets, and defining dependencies between them.
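A minimal sketch using the cluster group set cmdlets; the set names and VM group names ("SQL-VM", "App-VM") are illustrative:

```powershell
# Create two sets and assign the clustered VMs to them
New-ClusterGroupSet -Name "DatabaseSet"
New-ClusterGroupSet -Name "AppSet"
Add-ClusterGroupToSet -Name "DatabaseSet" -Group "SQL-VM"
Add-ClusterGroupToSet -Name "AppSet" -Group "App-VM"

# The application set starts only after the database set it depends on
Add-ClusterGroupSetDependency -Name "AppSet" -Provider "DatabaseSet"
```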

Production Checkpoints

The creation of a production checkpoint relies on backup technologies inside the virtual machine instead of the Hyper-V save state. For Windows-based machines the Volume Shadow Copy Service (VSS) is used, while for Linux virtual machines the file system buffers are flushed to create a checkpoint that is consistent at the file system level. Production checkpoints are the default for new virtual machines, but it is always possible to create checkpoints based on the saved state of the virtual machine, called standard checkpoints.
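The checkpoint type can be inspected and changed per VM; a sketch with an illustrative VM name:

```powershell
# Possible values include Production, ProductionOnly and Standard
Set-VM -VMName "VM01" -CheckpointType Production

# Verify the current setting
Get-VM -VMName "VM01" | Select-Object Name, CheckpointType
```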

Host Resource Protection

This feature helps prevent conditions where the activity of a single virtual machine can degrade the performance of the Hyper-V host or of the other virtual machines. When this monitoring mechanism detects a VM with excessive activity, it reduces the resources assigned to it. By default this control mechanism is disabled; you can enable it using the following PowerShell command: Set-VMProcessor -VMName <VMName> -EnableHostResourceProtection $true

Shielded Virtual Machines

Shielded virtual machines combine several features to make it much more difficult for malware, or even for Hyper-V administrators themselves, to inspect data, tamper with it or steal it. The data and state of shielded virtual machines are encrypted, and Hyper-V administrators are not permitted to view the console output or the data on the virtual disks. These virtual machines can run only on designated Hyper-V hosts that are in a healthy state according to the policies issued by the Host Guardian Service. Shielded virtual machines are compatible with the Hyper-V Replica feature; to enable replication, the Hyper-V hosts on which you want to replicate the shielded virtual machine must be authorized. For more details about these new features, please consult the document Guarded Fabric and Shielded VMs.

Virtualization-based Security for Generation 2 Virtual Machines

New security features have been introduced for Generation 2 virtual machines (starting with version 8 of the configuration file), like Device Guard and Credential Guard, which increase the protection of the operating system from malware attacks.

Encryption Support for the Operating System drive in Generation 1 Virtual Machines

It is now possible to protect the operating system disks of Generation 1 virtual machines using BitLocker. Thanks to the new Key Storage Drive feature (which requires at least version 8 of the configuration file), a small drive is created and dedicated to hosting the keys used by BitLocker, instead of using the virtual Trusted Platform Module (TPM) that is only available for Generation 2 virtual machines.
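Adding the key storage drive can be sketched with a single cmdlet, assuming a powered-off Generation 1 VM whose name is illustrative:

```powershell
# Attach a Key Storage Drive to a Generation 1 VM so that BitLocker
# can store its keys without a virtual TPM
Add-VMKeyStorageDrive -VMName "Gen1VM"
```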

Linux Secure Boot

Generation 2 virtual machines based on Linux can boot using Secure Boot. The following Linux distributions can be enabled for Secure Boot on Windows Server 2016 hosts:

  • Ubuntu 14.04+
  • SUSE Linux Enterprise Server 12+
  • Red Hat Enterprise Linux 7.0+
  • CentOS 7.0+
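For these distributions, Secure Boot is enabled by pointing the VM firmware at the Microsoft UEFI Certificate Authority template; the VM name below is illustrative:

```powershell
# Enable Secure Boot on a Generation 2 Linux VM using the
# Microsoft UEFI CA certificates instead of the Windows template
Set-VMFirmware -VMName "LinuxVM" `
    -EnableSecureBoot On `
    -SecureBootTemplate MicrosoftUEFICertificateAuthority
```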

Windows PowerShell Direct

This is a new way to run Windows PowerShell commands in a virtual machine directly from the host, without requiring network access or any remote management configuration. Windows PowerShell Direct is an excellent alternative to the tools currently used by Hyper-V administrators, such as PowerShell remoting, Remote Desktop or Hyper-V Virtual Machine Connection (VMConnect), and offers a better experience for scripting and automation (which is difficult to achieve, for example, with VMConnect).
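From the Hyper-V host, PowerShell Direct targets a VM by name rather than by network address; the VM name is illustrative and the credentials are those of the guest OS:

```powershell
# Run a command inside the guest directly from the host
Invoke-Command -VMName "VM01" -Credential (Get-Credential) `
    -ScriptBlock { Get-Service | Where-Object Status -eq "Running" }

# Or open an interactive session into the guest
Enter-PSSession -VMName "VM01" -Credential (Get-Credential)
```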

Compatible with Connected Standby

When the Hyper-V role is enabled on a system that uses the Always On/Always Connected (AOAC) power model, the Connected Standby power state is now available.

Discrete Device Assignment

This exciting feature makes it possible to give a virtual machine direct and exclusive access to certain PCIe hardware devices. Using a device in this mode bypasses the entire Hyper-V virtualization stack, ensuring faster access to the hardware. For more information about hardware requirements, please refer to the section "Discrete device assignment" in the document System requirements for Hyper-V on Windows Server 2016.

Windows Containers

Speaking of what's new in Hyper-V, it is also worth mentioning Windows Containers, which make it possible to run isolated applications on a system. Among the main strengths of containers are the speed of creation, high scalability and portability. There are two container runtime types, each providing a different level of application isolation:

  • Windows Server Containers, which use namespace and process isolation
  • Hyper-V Containers, which use a small virtual machine for each container

For more details on containers I invite you to consult the official Windows Container documentation and the dedicated section containing several very interesting articles.


Updated features in Windows Server 2016

More Memory and Processors for Generation 2 Hyper-V Virtual Machines and Hosts

Generation 2 virtual machines, starting with version 8 of the configuration file, can be configured with more resources in terms of virtual processors and RAM. The maximum resource limits of physical hosts have also been raised. For all the details about Hyper-V scalability in Windows Server 2016 you can read the document Plan for Hyper-V scalability in Windows Server 2016.

Virtual Machine Configuration File Format

The virtual machine configuration file uses a new format that allows the different configurations to be read and written more efficiently. The new format is also more resilient to corruption in the event of a disk subsystem failure. The new file that holds the virtual machine configuration uses the .vmcx extension, while the .vmrs extension is used for the runtime state.

Virtual Machine Configuration Version

The configuration file version represents the compatibility of the virtual machine with the version of Hyper-V. Virtual machines with configuration file version 5 are compatible with Windows Server 2012 R2 and can run both on Windows Server 2012 R2 and on Windows Server 2016. Virtual machines with the configuration file versions introduced in Windows Server 2016 cannot run in Hyper-V on Windows Server 2012 R2. To use all the new features on virtual machines created with Windows Server 2012 R2 and then migrated or imported into Hyper-V on Windows Server 2016, you must update the virtual machine configuration version. The update is not automatic, and downgrading the configuration file version is not supported. Full details on how to upgrade the version of the virtual machine configuration can be found in the following: Upgrade virtual machine version in Hyper-V on Windows 10 or Windows Server 2016.
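The version check and the upgrade can be sketched as follows; the VM name is illustrative, and remember the operation is one-way:

```powershell
# List the configuration version of each VM on the host
Get-VM | Select-Object Name, Version

# Upgrade a migrated or imported VM to the current version
# (downgrade is not supported)
Update-VMVersion -Name "VM01"
```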

Hyper-V Manager Improvements

Hyper-V manager also introduces important improvements:

  • Alternate credentials support – Provides the ability to use a different set of credentials in Hyper-V Manager when connecting to a remote Windows Server 2016 or Windows 10 host. Credentials can also be saved for easy reuse.
  • Manage earlier versions – Using Hyper-V Manager on Windows Server 2016 and Windows 10 you can also manage Hyper-V on Windows Server 2012, Windows 8, Windows Server 2012 R2 and Windows 8.1.
  • Updated management protocol – Hyper-V Manager uses the WS-MAN protocol to communicate with the remote Hyper-V host. This protocol enables CredSSP, Kerberos or NTLM authentication and simplifies the host configuration required to allow remote management.

Integration Services Delivered Through Windows Update

Very useful is the ability to update the integration services of Windows-based virtual machines via Windows Update. This is of particular interest to service providers because, thanks to this mechanism, control over the application of these updates is left in the hands of the tenant who owns the virtual machine. Tenants can then independently update their Windows virtual machines with all updates, including the integration services, using a single method.

Shared Virtual Hard Disks

It is now possible to resize virtual hard disks, including shared ones used to create guest clustering environments, without any downtime. Shared virtual hard disks can be grown or shrunk while the virtual machine is online. Guest clusters using shared virtual hard disks can now also be protected with Hyper-V Replica for disaster recovery.
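The resize itself can be sketched with Resize-VHD; the path and the new size below are illustrative, and for shared disks the operation can be performed while the guest cluster stays online:

```powershell
# Grow a shared .vhdx used by a guest cluster, without downtime
Resize-VHD -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SizeBytes 500GB
```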

Storage Quality of Service (QoS)

For storage you can create QoS policies on a Scale-Out File Server and assign them to the virtual disks of Hyper-V virtual machines. This gives you the ability to control storage performance, preventing individual virtual machines from consuming so much of it that they impact the entire disk subsystem. You can find full details on this topic in the document Storage Quality of Service.
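A minimal sketch: the policy is created on the Scale-Out File Server cluster and then applied to a VM's disks from the Hyper-V host; the policy name, IOPS values and VM name are illustrative:

```powershell
# On the Scale-Out File Server cluster: define min/max IOPS limits
$policy = New-StorageQosPolicy -Name "Silver" -MinimumIops 300 -MaximumIops 1000

# On the Hyper-V host: bind the policy to the VM's virtual hard disks
Get-VM -VMName "VM01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```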



There are many new features in the Microsoft Windows Server 2016 virtualization platform that make it even more complete. Microsoft Hyper-V has now been on the market for several years, has reached high levels of reliability and offers a great enterprise-class virtualization solution. When choosing a virtualization platform, it is also wise not to overlook the possibilities offered by the public cloud, whether to scale out or to implement hybrid architectures.

You can test all the new features of Microsoft Hyper-V Server 2016 by downloading the trial version from the TechNet Evaluation Center.

For more insights on the topic I invite you to participate in the sessions devoted to Hyper-V during the SID // Windows Server 2016 Roadshow, the free event dedicated to the new operating system, open to all companies, consultants and partners who want to prepare for the future and learn the latest news and best practices for implementing the new server operating system.

Windows Server 2016: Introduction to Network Controller

In Windows Server 2016 there are many new networking features that allow us to build a functional infrastructure, named Software-Defined Networking (SDN), underlying the Software-Defined Datacenter (SDDC).

The main characteristics of a Software-Defined Networking (SDN) architecture are adaptability, dynamism and ease of management. All these aspects are better covered by the features introduced in Windows Server 2016 that we are going to explore in this article.

Network Controller

This is a new role introduced in Windows Server 2016 that can be easily installed using Server Manager or PowerShell and that helps you manage, configure and monitor the virtual and physical network infrastructure of your datacenter. Thanks to the Network Controller you can also automate the configuration of the network infrastructure instead of manually configuring devices and services. This role can also be installed on virtual machines, can be made highly available and can scale easily. The Network Controller can be deployed either in a domain environment, in which case the authentication of users and network devices uses Kerberos, or in a non-domain environment, which requires certificate-based authentication.
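The PowerShell route mentioned above boils down to a single feature installation:

```powershell
# Install the Network Controller role together with its management tools
Install-WindowsFeature -Name NetworkController -IncludeManagementTools
```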

Communication between the Network Controller and the network components takes place through the Southbound API, Figure 1, through which the network equipment is discovered and the service configurations are detected. Through the same interface the required network information is collected and the changes made are transmitted.

Through the Northbound API you can communicate with the Network Controller to retrieve network information and use it for monitoring and troubleshooting. The same API is used to make changes to the network configuration and to deploy new devices.

Figure 1 – Communication scheme

Management and monitoring of the network through the Network Controller, Figure 2, can be performed directly using PowerShell (the Network Controller cmdlets) or by using management applications such as System Center Virtual Machine Manager (SCVMM) and System Center Operations Manager (SCOM).

Figure 2 – Network Controller management

Via the Network Controller you can manage the following physical and virtual network infrastructure components:

  • Hyper-V VMs and virtual switches
  • Switches
  • Routers
  • Software firewalls
  • VPN gateways (including the multitenant RRAS Gateway)
  • Load balancers

Virtualized Network Functions

The spread of virtualization has also reached the network field, and there is growing interest in virtual appliances and cloud services that provide network services, with an emerging market that is growing fast. In software-defined datacenters we see more and more virtual appliances delivering networking features that were typically provided only by physical devices (such as load balancers, firewalls, routers, switches, etc.).

Windows Server 2016 Technical Preview includes the following virtual appliances:

Software Load Balancer

This is a layer-4 software load balancer, similar to the load balancer already widely used on the Azure platform. For more information, I invite you to consult Microsoft Azure Load Balancing Services.

Multi-tenant Firewall

Datacenter Firewall, Figure 3, is a new service introduced in Windows Server 2016. This network-layer firewall can protect virtual networks and is designed to be multitenant. It can be offered as a service by the service provider, and the tenant administrator can install and configure firewall policies to protect their virtual networks from potential attacks originating from the Internet or from internal networks.

Figure 3 – Firewall policy

The Datacenter Firewall can be managed using the Network Controller. Datacenter Firewall provides the following benefits for cloud service providers:

  • A scalable and manageable software firewall service that can be offered as a service to tenants
  • Protection for tenants regardless of the operating system running on the virtual machine
  • Freedom to move tenant virtual machines to different fabric hosts without breaking the firewall functionality, given that:
  • the firewall agent is deployed within each host's vSwitch;
  • tenant virtual machines receive the policies assigned to their vSwitch port;
  • firewall rules are configured on each vSwitch port, regardless of the physical host running the virtual machine

For tenants, instead, the Datacenter Firewall provides the following benefits:

  • Ability to define firewall rules to help protect Internet-facing workloads in virtual networks
  • Ability to create firewall rules to protect traffic between virtual machines on the same layer-2 subnet or on different layer-2 subnets
  • Ability to define firewall rules to help protect and isolate network traffic between the tenant's on-premises network and the tenant's virtual network at the service provider

RAS Gateway

The RAS Gateway is used to route network traffic between virtual networks and physical networks. There are several usage scenarios:

Site-to-Site Gateway

A multi-tenant gateway solution, Figure 4, that allows tenants to access and manage their resources using a site-to-site VPN connection. Thanks to this gateway you can connect virtual resources in the cloud with the tenant's physical network.

Figure 4 – S2S Gateway

Forwarding Gateway

Used to route network traffic between virtual networks and the hosting provider's physical network (in the same geographical location) – Figure 5.

Figure 5 – Forwarding Gateway

GRE Tunnel Gateway

These gateways can create tunnels based on the GRE protocol that provide connectivity between tenant virtual networks and external networks. The GRE protocol is supported by many network devices, so it is an ideal choice when channel encryption is not required. For more information on GRE tunnels I invite you to consult GRE Tunneling in Windows Server Technical Preview.

Hyper-V Network Virtualization

Hyper-V Network Virtualization (HNV) is a key component of Microsoft's Software-Defined Networking (SDN) solution, and Windows Server 2016 includes many new features to make it more functional and better integrated into the SDN stack.

An important aspect to consider when it comes to SDN is that the stack itself is consistent with Microsoft Azure, and therefore brings the same capabilities used in the Azure public cloud to your own environment.

Programmable Hyper-V Switch

With the Network Controller you can push HNV policies, Figure 6, to the agent running on each host, using the Open vSwitch Database Management Protocol (OVSDB – RFC 7047). The Host Agent stores these policies using a customization of the VTEP schema and is able to program complex rules into the high-performance flow engine of the Hyper-V virtual switch.

Figure 6 – Policy push

VXLAN Encapsulation support

The Virtual eXtensible Local Area Network protocol (VXLAN – RFC 7348) has been widely adopted in the market with the support of leading vendors like Cisco, Brocade, Dell, HP and others. HNV now supports this encapsulation scheme, in MAC distribution mode through the Network Controller, which programs the associations between the tenant IP addresses (Customer Addresses – CA) and the physical network IP addresses (Provider Addresses – PA). The Network Virtualization Generic Routing Encapsulation protocol (NVGRE) continues to be supported in Windows Server 2016.

Interoperability with Software Load Balancer (SLB)

The Software Load Balancer (SLB) presented above is fully supported on virtual networks. The SLB is implemented through the high-performance virtual switch engine and controlled by the Network Controller with regard to the Virtual IP (VIP) to Dynamic IP (DIP) mapping.

IEEE Compliant

To ensure full interoperability with physical and virtual network equipment, all packets transmitted when using HNV are compliant, in all their fields, with the standards dictated by the IEEE. This aspect has been heavily reworked and improved in Windows Server 2016.

New Elements Introduced (Cloud Scale Fundamentals)

In Windows Server 2016 the following features have been introduced to allow you to configure your environment more effectively, making the best use of available hardware resources:

Converged Network Interface Card (NIC): this feature allows you to use a single network adapter to handle different types of traffic: management, storage access (over RDMA) and tenant traffic. In this way it is possible to decrease the number of network adapters required for each physical host.

Switch Embedded Teaming (SET): SET is a new NIC Teaming solution integrated into the Hyper-V Virtual Switch. SET allows you to combine up to eight physical network adapters into a single SET team. This teaming mode, being integrated into the virtual switch, can only be used on physical hosts and not inside virtual machines, where you can still configure teaming in the traditional way (NIC Teaming in virtual machines). This teaming mode does not expose team interfaces; the configuration is made through the Virtual Switch ports.


Packet Direct: Packet Direct makes it possible to achieve high throughput and low latency for network traffic.

Enhancements to existing services

The Network Access Protection (NAP) feature was already deprecated in Windows Server 2012 R2. In Windows Server 2016 the DHCP Server role no longer supports NAP, and DHCP scopes can no longer be NAP-enabled.

DNS Server
Let's now dig into the various innovations introduced in the DNS server in Windows Server 2016 to improve its effectiveness and security:

DNS Policy: you can configure DNS policies to define how the DNS server answers DNS queries. DNS responses can be based on many parameters, such as the client's IP address (location) or the time of day. DNS policies open the door to scenarios like location-aware DNS configuration, traffic management, load balancing and split-brain DNS.
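A location-aware policy can be sketched with the DnsServer cmdlets; the zone, subnet, scope names and IP addresses below are all illustrative:

```powershell
# Define the client subnet and a zone scope with its own records
Add-DnsServerClientSubnet -Name "EuropeSubnet" -IPv4Subnet "10.1.0.0/16"
Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "EuropeScope"
Add-DnsServerResourceRecord -ZoneName "contoso.com" -ZoneScope "EuropeScope" `
    -A -Name "www" -IPv4Address "10.1.0.10"

# Answer queries coming from that subnet with the scoped records
Add-DnsServerQueryResolutionPolicy -Name "EuropePolicy" -Action ALLOW `
    -ClientSubnet "eq,EuropeSubnet" -ZoneScope "EuropeScope,1" `
    -ZoneName "contoso.com"
```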

Response Rate Limiting (RRL): you can configure limits on the DNS server's response rate. This configuration helps prevent malicious systems from using your DNS servers to carry out denial-of-service (DoS) attacks.
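Enabling RRL can be sketched as follows; the threshold values are illustrative:

```powershell
# Turn on Response Rate Limiting with custom per-second thresholds
Set-DnsServerRRL -Mode Enable -ResponsesPerSec 10 -ErrorsPerSec 10

# Inspect the resulting RRL settings
Get-DnsServerRRL
```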

DNS-based Authentication of Named Entities (DANE): you can use TLSA records (Transport Layer Security Authentication) to tell DNS clients which CA to expect a certificate from for a specific domain name. This mechanism is useful for preventing man-in-the-middle attacks.

Support for Unknown Records: This feature allows you to add records that are not explicitly supported by Windows DNS servers.

IPv6 root hints: You can use the IPv6 root servers for Internet name resolution.

Windows PowerShell Support: new PowerShell cmdlets improve support for managing the DNS Server.

DNS and IPAM: better integration between DNS and IPAM.

I invite you to study and evaluate in the field the new networking features by downloading Windows Server 2016.