
Hyper-V: Evolution, Current Innovations, and Future Developments

Since the first release of Hyper-V in Windows Server 2008, Microsoft has never ceased innovating this virtualization solution, and it has no intention of stopping. Hyper-V represents a strategic technology for Microsoft, being widely used across various areas of the IT ecosystem: from Windows Server to Microsoft Azure, from Azure Stack HCI to Windows clients and even on Xbox. This article will explore the evolution of Hyper-V from its inception, examining the current innovations that make it one of the most robust and versatile virtualization platforms available on the market today. Additionally, we will look at future developments from Microsoft for Hyper-V, discovering how this technology will help evolve the landscape of modern IT infrastructures.

The Evolution of Microsoft Virtualization: From Virtual Server to Hyper-V

Microsoft boasts a long history in virtualization, starting with the release of Microsoft Virtual Server in the early 2000s. This product was designed to facilitate the execution and management of virtual machines on Windows Server operating systems. The subsequent version, Microsoft Virtual Server 2005, introduced significant improvements in terms of management and performance, allowing companies to consolidate servers and reduce operational costs. However, this approach was still limited compared to the growing virtualization needs.

With the introduction of Windows Server 2008, Microsoft launched Hyper-V, a fully integrated virtualization solution within the operating system, marking a significant qualitative leap from Virtual Server. Hyper-V offered more robust and scalable virtualization, with support for hypervisor-level virtualization, better resource management, virtual machine snapshots, and greater integration with Microsoft’s management tools, such as System Center.

In subsequent versions of Windows Server, Hyper-V was continuously improved, introducing advanced features such as Live Migration, support for large amounts of memory and high-performance processors, and virtual machine replication for disaster recovery. These developments have consolidated Hyper-V as one of the leading virtualization platforms in the market, effectively competing with third-party solutions like VMware and Citrix.

The Present of Hyper-V: Power and Flexibility

Hyper-V is a virtualization technology that uses the Windows hypervisor, requiring a physical processor with specific features. This hypervisor manages interactions between the hardware and virtual machines, ensuring an isolated and secure environment for each virtual machine. In some configurations, virtual machines can directly access physical resources concerning graphics, network, and storage.

Hyper-V Technology in Windows Server

Hyper-V is integrated into Windows Server at no additional cost. The main difference between the Standard and Datacenter editions concerns the number of allowed guest OS instances:

  • Windows Server Standard: Allows up to two instances of Windows Server guest OS environments.
  • Windows Server Datacenter: Allows an unlimited number of Windows Server guest OS instances.

Hyper-V supports a wide range of guest operating systems, including various Linux environments such as Red Hat Enterprise Linux, CentOS, Debian, Oracle Linux, SUSE, and Ubuntu, with the relevant Integration Services included in the Linux kernel. Additionally, it supports FreeBSD starting from version 10.0.

Windows Server Datacenter, with Hyper-V, also provides access to advanced technologies like Storage Spaces Direct and software-defined networking (SDN), significantly enhancing virtualization and resource management capabilities.

Advantages of Hyper-V in Windows Server:

  • Effective Hardware Utilization: Allows server and workload consolidation, reducing the number of physical computers needed and optimizing hardware resource use.
  • Improved Business Continuity: Minimizes downtime through synergy with other Microsoft solutions, ensuring greater service availability and reliability.
  • Private Cloud Creation: Facilitates the creation and expansion of private and hybrid clouds with flexible and cutting-edge solutions.
  • Efficient Development and Testing Environments: Enables the reproduction of computing environments without additional hardware, making development and testing processes faster and more cost-effective.

The Hypervisor in the Azure Ecosystem

Azure uses Microsoft Hyper-V as the hypervisor system, demonstrating the importance and reliability of this technology for Microsoft itself, which continues to optimize it constantly. Hyper-V offers a range of advanced features that ensure a secure and shared virtualization environment for multiple customers. Among these, the creation of guest partitions with separate address spaces allows parallel execution of operating systems and applications relative to the host operating system. The root partition, or privileged partition, has direct access to physical devices and peripherals, sharing them with guest partitions through virtual devices. These elements ensure a secure and reliable environment for managing virtual machines on Azure.

Hyper-V: More Than Just a Virtualization Tool

Hyper-V is not only a powerful virtualization tool but also essential for ensuring the security of various environments. In fact, Virtualization-Based Security (VBS) leverages hardware virtualization and the hypervisor to create an isolated virtual environment, which acts as a “root of trust” for the operating system, even if the kernel is compromised. Windows uses this isolated environment to host various security solutions, offering them additional protection against vulnerabilities and preventing the use of exploits that might try to bypass existing protections. VBS imposes restrictions to protect vital system and OS resources, as well as safeguard security aspects like user credentials.

Hyper-V is also used for containers, offering isolation that ensures high security and greater compatibility between different host and container versions. Thanks to Hyper-V isolation, multiple container instances can run simultaneously on a host, with each container operating within a virtual machine using its own kernel. The presence of the virtual machine provides hardware-level isolation between each container and the container host.
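As an illustration, with Docker installed on a Windows host you can request Hyper-V isolation per container. A minimal sketch, assuming Docker on Windows (the image tag and command are illustrative):

  # Hypothetical example: run a Windows container inside a lightweight utility VM
  # using Hyper-V isolation (requires Docker on a Windows host)
  docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo hello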

Hyper-V and Azure Stack HCI

Azure Stack HCI and Hyper-V in Windows Server are two fundamental pillars in Microsoft’s virtualization solution offerings, each designed to meet different needs within the IT landscape. While Azure Stack HCI positions itself as the cutting-edge solution for hybrid environments, offering advanced integrations with Azure services for optimized management and scalability, Hyper-V in Windows Server remains a solid choice for organizations requiring more traditional virtualized solutions, with particular attention to flexibility and management in disconnected scenarios. The choice between these two solutions depends on specific virtualization needs, the organization’s cloud strategy, and the need for access to advanced management and security features.

In this regard, it is important to note that Azure Stack HCI is built on proven technologies, including Hyper-V, and meets advanced security requirements for virtualization thanks to integrated support for Virtualization-Based Security (VBS).

The Future of Hyper-V: Innovations and Prospects

The new version of Windows Server, named “Windows Server 2025,” is expected this fall. Although Microsoft has not yet announced an official release date, some predictions can be made based on previous release cycles: Windows Server 2022 was made available to the public on September 1, 2021, so if Microsoft follows a similar schedule, Windows Server 2025 will likely be released in the fall of this year. This version will include a new release of Hyper-V with significant new features.

Indeed, Hyper-V in Windows Server 2025 will introduce support for GPU Partitioning, allowing a GPU to be shared among multiple virtual machines. This feature will also ensure full support for Live Migration and cluster environments. GPU-P will also enable the Live Migration of VMs with partitioned GPUs between two standalone servers, without the need for a cluster environment, making it ideal for specific test and development environments. Additionally, improved support for Direct Device Assignment (DDA) and the introduction of GPU pools for high availability will further enhance Hyper-V’s capabilities.
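As a rough sketch of what GPU Partitioning looks like in practice, based on the GPU-P cmdlets already present in recent Windows builds (the VM name is illustrative and details may change before Windows Server 2025 ships):

  # A hypothetical sketch of GPU-P assignment; behavior may differ in Windows Server 2025
  Get-VMHostPartitionableGpu                # list GPUs on the host that support partitioning
  Add-VMGpuPartitionAdapter -VMName "VM01"  # assign a GPU partition to the VM (VM must be off)
  Get-VMGpuPartitionAdapter -VMName "VM01"  # verify the assignment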

Moreover, Windows Server 2025 will introduce “Workgroup Clusters,” simplifying Hyper-V deployments in various scenarios. Until Windows Server 2022, deploying a cluster required Active Directory (AD), complicating implementations in environments where an AD infrastructure was not always available. With Windows Server 2025, it will be possible to deploy “Workgroup Clusters” with Hyper-V that do not require Active Directory but use a certificate-based solution, significantly simplifying deployment.
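As a point of reference, clusters without Active Directory can already be created today using a DNS administrative access point; a minimal sketch, assuming the Windows Server 2025 certificate-based deployment builds on this model (node names are illustrative):

  # A hypothetical sketch: create a cluster without Active Directory using a DNS
  # administrative access point (exact WS2025 steps may differ)
  New-Cluster -Name "HVCLU01" -Node "HV01","HV02" -AdministrativeAccessPoint DNS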

For more information on the new features of Windows Server 2025, you can consult this article: Windows Server 2025: the arrival of a new era of innovation and security for server systems.

Conclusion

Hyper-V has proven to be a valuable and continuously evolving virtualization solution in the IT landscape. From its introduction with Windows Server 2008 to the innovations planned for Windows Server 2025, Hyper-V has maintained a prominent position thanks to the constant introduction of advanced features and improvements in performance, management, and security. New features such as GPU Partitioning and Workgroup Clusters are just a few examples of how Microsoft continues to invest in this technology to meet the increasingly complex needs of modern IT infrastructures. The integration of Hyper-V in various environments, from the hybrid cloud of Azure Stack HCI to traditional virtualization servers, demonstrates its versatility and strategic importance. Looking ahead, it is clear that Hyper-V will remain a key element in Microsoft’s virtualization and cloud computing strategies, continuing to offer robust and innovative solutions for the challenges of IT infrastructures.

Microsoft Hyper-V Server 2019: overview and product features

Microsoft recently announced the availability of Microsoft Hyper-V Server 2019, a free product that allows you to adopt the Hyper-V role to deliver an enterprise-class virtualization solution. It is an announcement many have long been waiting for in order to take advantage of the new virtualization features of Windows Server 2019. This article introduces the main product features and provides the steps to follow to install the product.

Features

Microsoft Hyper-V Server 2019 is a standalone product that includes only the virtualization-related roles; it is totally free and includes the same hypervisor technology present in Windows Server 2019. This product may be of particular interest in scenarios where you don't need to license Windows Server virtual machines, such as when you only have to keep Linux or VDI virtual machines running. All operating systems activated on Hyper-V Server 2019 will have to be appropriately licensed where required.

Compared to Windows Server, the footprint is much smaller, because there is no graphical user interface (GUI): the product contains no other roles and is only available in Server Core mode. In this regard, keep in mind that Hyper-V Server 2019 does not allow you to use Software Defined Datacenter features such as Storage Spaces Direct (S2D), which require the Windows Server Datacenter edition.

Product Installation

Before proceeding with the installation it is appropriate to verify the hardware requirements listed in Microsoft's official documentation.

To get Microsoft Hyper-V Server 2019 you need to log in to the specific section of the Microsoft Evaluation Center and download the ISO. With the Hyper-V Server ISO available, you can use it to boot the machine and start the installation of the product.

The installation process does not require any license key and there are no time limits for use. The following are the steps to install the product.

Figure 1 – Selecting the language and region settings

 

Figure 2 – Start the installation process

 

Figure 3 - Accepting the license terms

 

Figure 4 – Selecting the drive on which to install Microsoft Hyper-V Server

Next, the installation will begin; at the end, after changing the Administrator account password, you will be able to configure Hyper-V Server 2019. The configuration can be done using the sconfig command or by using PowerShell.

Figure 5 – Configuring the server using sconfig

You do not need to install the Hyper-V role because it is already active at the end of the installation.
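As an illustration, a minimal post-install sketch via PowerShell (computer name, VM name, path, and sizes are all illustrative):

  # Hypothetical example: initial configuration and first VM creation
  Rename-Computer -NewName "HVSRV01" -Restart
  Enable-PSRemoting -Force
  New-VM -Name "LinuxVM01" -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath "C:\VMs\LinuxVM01.vhdx" -NewVHDSizeBytes 60GB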

Windows Admin Center, which allows you to fully manage the virtualization environment from a graphical interface, can also be used to manage Microsoft Hyper-V Server 2019.

Conclusions

The announcement of the availability of Microsoft Hyper-V Server 2019 came months after the official release of Windows Server 2019, but the product can now be used in production, and it may be particularly suitable if you fall within one of the scenarios described in this article. Microsoft Hyper-V Server 2019 is a good solution to adopt a stable and proven virtualization platform without incurring specific costs to license the operating system of the virtualization host, but you should always consider its limitations.

Azure Site Recovery: the protection of Hyper-V virtual machines using Windows Admin Center

Among the various features that can be managed through Windows Admin Center is the ability to easily manage the protection of virtual machines in a Hyper-V environment with Azure Site Recovery (ASR). This article lists the necessary steps to follow and the possibilities offered by the Admin Center in this area.

Windows Admin Center, formerly known as Project Honolulu, allows you to manage your infrastructure in a centralized way through a web console. With this tool Microsoft has initiated a process of consolidating all administrative consoles into a single portal, allowing you to manage and configure your infrastructure with a modern, simple, integrated, and secure user experience.

Windows Admin Center requires no cloud dependency in order to function and can be deployed locally to gain control of different aspects of your local server infrastructure. In addition to the Web Server component, which allows access to the tool via browser, Windows Admin Center consists of a gateway component, through which you can manage your servers via Remote PowerShell and WMI over WinRM.

Figure 1 - Basic diagram of the architecture of Windows Admin Center

 

Connecting your Windows Admin Center gateway to Azure

Windows Admin Center also offers the opportunity to integrate with different Azure services, including Azure Site Recovery. In order to allow the Windows Admin Center gateway to communicate with Azure, it is necessary to complete its registration process by following the steps documented below. The wizard, available in the preview version of Windows Admin Center, creates an Azure AD app in your directory, which allows Windows Admin Center to communicate with Azure.

Figure 2 - Start of the registration process from the Admin Center settings

Figure 3 - Generation of the code needed to log in

Figure 4 - Enter the code in the Device Login page

Figure 5 - Start the Azure authentication process

Figure 6 – Sign-in confirmation

Figure 7 – Selection of the tenant in which to register the Azure AD app

Figure 8 - Guidance for providing permissions to the Azure AD app

Figure 9 – Assignment of permissions, from the Azure Portal, to the registered app

Figure 10 - Azure integration configuration completed

 

ASR environment configuration for protecting Hyper-V VMs

After configuring the connection of Windows Admin Center with Azure, you can select the Hyper-V system that holds the virtual machines to be replicated to Azure and proceed with the entire configuration of the Recovery Services vault, directly from the web console of Windows Admin Center. The steps below illustrate the simplicity of the activation.

Figure 11 – Start the configuration necessary for protecting VMs

The Admin Center asks you to provide basic information for the ASR environment configuration and offers the ability to create a new Recovery Services vault or select an existing one.

Figure 12 – Configuration of the Hyper-V host in Azure Site Recovery

The form proposed by Windows Admin Center exposes only some of the settings, so I advise you to create the Recovery Services vault in advance, with all configuration parameters set to suit your needs, and then select it on the previous screen.

This step performs the following actions:

  • Installs the ASR agent on the Hyper-V host, or on all nodes in a cluster environment.
  • If you choose to create a new vault, it is created in the selected region and placed into a new Resource Group (with a default name).
  • Registers the Hyper-V system with ASR and configures a default replication policy.

Figure 13 - Site Recovery Jobs generated by the configuration

 

Virtual machine protection configuration

After completing the configuration activities reported above, it is possible to activate the protection of virtual machines.

Figure 14 - Activation of the VM protection process

Figure 15 - Selection of the storage account and start of protection

At the end of the replication process, you can validate it by activating the test failover procedure from the Azure Portal.

 

Conclusions

Being able to interact with certain Azure services directly from Windows Admin Center can facilitate and speed up the administration of a hybrid datacenter. At the moment the integration possibilities with Azure Site Recovery are minimal and not suitable for complex scenarios. However, Windows Admin Center is constantly evolving and will be enriched more and more with new features to better interact with Azure services.

What's new in Virtual Machine Manager 1807

Following the first announcement of the Semi-Annual Channel release of System Center, which took place in February with version 1801, the new update release, System Center 1807, was released in June. This article will explore the new features introduced in System Center Virtual Machine Manager (SCVMM) with the update release 1807.

Networking

Information related to the physical network

SCVMM 1807 introduces an important improvement in the networking field. In fact, using the Link Layer Discovery Protocol (LLDP), SCVMM can provide information regarding the connectivity of Hyper-V hosts to the physical network. These are the details that SCVMM 1807 can retrieve:

  • Chassis ID: Switch chassis ID
  • Port ID: The Switch port to which the NIC is connected
  • Port Description: Details on the port, such as the Type
  • System Name: manufacturer and software version details
  • System Description
  • Available Capabilities: Available features of the system (such as switching, Routing)
  • Enabled Capabilities: Features that are enabled on the system (such as switching, Routing)
  • VLAN ID: Virtual LAN identifier
  • Management Address: management IP address

This information can be consulted via PowerShell or by accessing the SCVMM console: View > Host > Properties > Hardware Configuration > Network adapter.

Figure 1 – Information provided by SCVMM 1807 regarding the physical connectivity of Hyper-V hosts

These details are very useful for having visibility on the physical network and to facilitate troubleshooting steps. This information will be made available for Hyper-V hosts that meet the following requirements:

  • Windows Server 2016 operating system or later.
  • DataCenterBridging and DataCenterBridging-LLDP-Tools features enabled (see the sketch after this list).
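A minimal sketch, using the feature names given above (on some server SKUs Data Center Bridging may instead be installed with Install-WindowsFeature):

  # Enable the features required for LLDP data collection on each Hyper-V host
  Enable-WindowsOptionalFeature -Online -FeatureName DataCenterBridging
  Enable-WindowsOptionalFeature -Online -FeatureName DataCenterBridging-LLDP-Tools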

Converting a SET switch into a Logical Switch

SCVMM 1807 can convert a Hyper-V Virtual Switch created in Switch Embedded Teaming (SET) mode into a logical switch, directly from the SCVMM console. In previous versions of SCVMM this operation was feasible only through PowerShell commands. This conversion can be useful to generate a Logical Switch that can be used as a template on the different Hyper-V hosts managed by SCVMM. For more information on Switch Embedded Teaming (SET) I invite you to consult the article Windows Server 2016: the new Virtual Switch in Hyper-V.

Support for VMware ESXi v6.5 hosts

SCVMM 1807 introduces support for VMware ESXi v6.5 hosts within its fabric. In my experience, even in environments that consist of multiple hypervisors, SCVMM is rarely used to manage VMware hosts. This support is nevertheless important because it introduces the ability to convert VMs hosted on VMware ESXi 6.5 hosts into Hyper-V VMs.

 

Storage

Support for selecting the CSV to use when adding a new disk

SCVMM 1807 allows you to specify, when adding a new virtual disk to an existing VM, in which Cluster Shared Volume (CSV) to place it. In previous releases of VMM this possibility was not provided, and by default new virtual disks were placed on the same CSV holding the disks already associated with the virtual machine. In some circumstances, such as with a CSV with little free space available, this behavior could be inadequate and inflexible.

Figure 2 – Adding a new disk to a virtual machine, selecting the CSV on which to place it

Support for the update of cluster Storage Spaces Direct (S2D)

Virtual Machine Manager 2016 already supports the deployment of Storage Spaces Direct (S2D) clusters. SCVMM 1807 also introduces the ability to patch and update Storage Spaces Direct cluster nodes, orchestrating the entire update process, which uses the baseline configured in Windows Server Update Services (WSUS). This feature allows you to more effectively manage the Storage Spaces Direct environment, cornerstone of Microsoft's Software Defined Storage, which leads to the achievement of the Software Defined Data Center.

 

Statement of support

Support for SQL Server 2017

SCVMM 1807 introduces support for SQL Server 2017 to host its database. This allows you to upgrade from SQL Server 2016 to SQL Server 2017.

 

Conclusions

The update release 1807 introduces several innovations in Virtual Machine Manager that greatly enrich it in terms of functionality. This update also addresses a number of issues listed in Microsoft's official documentation. It is therefore recommended to evaluate updating your Virtual Machine Manager deployments, for greater stability and to take advantage of the new features introduced. Remember that releases belonging to the Semi-Annual Channel have a support period of 18 months.

To try out System Center Virtual Machine Manager, you can access the Evaluation Center and, after registration, start the trial period.

Azure Site Recovery: disaster recovery of VMware virtual machines

The Azure Site Recovery (ASR) solution protects virtual or physical systems, hosted either in Hyper-V or in VMware environments, automating the replication process to a secondary data center or to Microsoft Azure. With a single solution you can implement Disaster Recovery plans for heterogeneous environments, orchestrating the replication process and the actions needed for a successful recovery. Thanks to this solution, the DR plan will be readily available in any eventuality, even the most remote, to ensure business continuity. Recently, the solution has been expanded to also provide the ability to implement a disaster recovery strategy for Azure virtual machines, allowing you to enable replication between different regions.

In this article I'll show you how ASR can be used to replicate virtual machines from a VMware environment to Azure (scenario 6 in the following figure), examining the characteristics and the technical procedure to follow. The following illustration shows all the scenarios currently covered by the ASR solution:

Figure 1 – Scenarios covered by Azure Site Recovery

The replication scenario of VMware virtual machines to Azure requires the presence of the following architecture:

Figure 2 - Architecture in the replication scenario VMware to Azure

In order to activate the replication process, at least one on-premises server is required, on which you install the following roles:

  • Configuration Server: coordinates communications between the on-premises environment and Azure, and manages data replication.
  • Process Server: this role is installed by default with the Configuration Server, but additional Process Servers may be deployed based on the volume of data to be replicated. It acts as a replication gateway: it receives replication data, optimizes it through caching and compression mechanisms, encrypts it, and sends it to the storage in the Azure environment. This role is also responsible for the discovery of virtual machines on VMware systems.
  • Master Target Server: this role too is installed by default with the Configuration Server, but for deployments with a large number of systems there can be more servers with this role. It comes into play during the failback of resources from Azure by managing replication data.

All virtual machines subject to the replication process require the presence of the Mobility Service, which is installed by the Process Server. It is a special agent in charge of replicating the data of the virtual machine.

The following describes the process to follow to deploy the on-premises and Azure components required to enable replication of VMware virtual machines to Microsoft's public cloud.

The core component required on the Azure side is the Recovery Services vault, within which, in the Site Recovery section, you can start the configuration process driven by the chosen scenario.

Figure 3 – Choice of replication scenario of VMware virtual machines within the Recovery Service Vault

Then you must install the Configuration Server on the on-premises machine by following the steps listed:

Figure 4 – Steps to follow to add the Configuration Server

In this section of the Azure portal you can download the Microsoft Azure Site Recovery Unified Setup and the key required for the registration of the server to the vault. Before starting the installation, make sure that the machine on which you intend to install the Configuration Server can access the public URLs of the Azure service and that web traffic on port 80 is allowed during setup, as it is needed to download the MySQL component used by the solution.

The setup prompts you for the following information:

Figure 5 – Choice of roles to install

Select the first option to install the Configuration Server and Process Server roles. The second option is useful if you need to install additional Process Servers to enable a scale-out deployment.

Figure 6 - Accept the license agreement by MySQL Community Server

Figure 7 - Key selection required for the registration to the Site Recovery Vault

Figure 8 - Choice of the methodology to access the Azure Services (direct or via proxy)

Figure 9 – Check to verify prerequisites

Figure 10 – Setting passwords for MySQL

Figure 11 – Further check on the presence of the required components to protect VMware VMs

Figure 12 – Choice of the installation path

Installation requires approximately 5 GB of available space, but at least 600 GB is recommended for the cache.

Figure 13 — Select the network interface and the port to use for replication traffic

Figure 14 – Summary of installation choices

Figure 15 - Setup of the different roles and components successfully completed

At the end, the setup shows the connection passphrase used by the Configuration Server, which should be saved carefully.

Then you must configure the credentials that will be used by Azure Site Recovery to discover virtual machines in the VMware environment and for the installation of the Mobility Service on virtual machines.

Figure 16 - Definition of the credentials used by the service

After completing these steps you can select the Configuration Server from the Azure portal and then define the VMware system (vCenter or vSphere host) with which to interface.

Figure 17 - Select the Configuration Server and add vCenter / vSphere host

On completion of this configuration it is necessary to wait a few minutes to allow the Process Server to perform the discovery of the VMware virtual machines in the specified environment.

Then you need to define the settings for the replication target:

  • The subscription and the deployment model (ASM or ARM) to use.
  • The storage account that will host the replicated data.
  • The virtual network (vNet) to which the replicated systems will be connected.

Figure 18 – Target replication settings

The next step involves defining the replication policy in terms of RPO (in minutes), retention of recovery points (expressed in hours), and how often to take application-consistent snapshots.

Figure 19 – Creation of the replication policy

Upon completion of this task, you are invited to analyze your environment using the Deployment Planner tool (available directly through the link in the Azure Portal) in order to ensure that the requirements and the network and storage resources are sufficient to guarantee the proper operation of the solution.

Figure 20 - Steps of infrastructure preparation completed successfully

After completing the infrastructure preparation steps you can activate the replication process:

Figure 21 - Source and Replica Target

Figure 22 - Selection of the virtual machines and of the related discs to be replicated

This section also specifies which account the Process Server will use to install the Mobility Service on each VMware virtual machine (the account configured previously, as documented in Figure 16).

Figure 23 - Replication policies selection and optionally enable Multi-VM consistency

If the "Multi-VM consistency" option will be selected it will create a Replication Group within which will be included the VMs that you want to replicate together for using shared recovery point. This option is recommended only when you need a consistency during the fail over to multiple virtual machines that deliver the same workload. Furthermore, by activating this option you should keep in mind that to activate the system failover process is necessary to set up a specific Recovery Plan and you can not enable failover for a single virtual machine.

At the end of these configurations you can activate the replication process:

Figure 24 – Activation of the replication process and its result

Figure 25 - State of the replica for the VMware virtual machine

One of the biggest challenges when implementing a Disaster Recovery scenario is being able to test its functionality without impacting production systems and their replication process. It is equally true that not properly testing the DR process is almost equivalent to not having one. Azure Site Recovery allows you to test the Disaster Recovery procedure in a very simple way to assess its effectiveness:

Figure 26 – Testing the Failover procedure

Figure 27 - Outcome of the Test Failover process

Conclusions

Being able to rely on a single solution such as Azure Site Recovery, which lets you enable and test business continuity procedures in heterogeneous infrastructures, including virtual machines in VMware environments, certainly brings many advantages in terms of flexibility and effectiveness. ASR makes it possible to deal with the typical obstacles encountered during the implementation of Disaster Recovery plans, reducing cost and complexity while increasing the level of compliance. The same solution can also be used to carry out an actual migration to Azure with minimal impact on end users thanks to nearly zero application downtime.

Windows Server 2016: Introduction to Hyper-V VHD Set

In Windows Server 2016 a great new feature was introduced in Hyper-V, called VHD Set. This is a new way of creating virtual disks that need to be shared among multiple virtual machines, useful for implementing guest clusters. In this article you will learn the characteristics of VHD Sets, how best to implement them, and how to effectively address migration scenarios.

Features

The ability to share virtual disks across multiple virtual machines is required for guest cluster configurations that need shared storage, and it avoids having to configure access to storage via, for example, virtual HBAs or the iSCSI protocol.

Figure 1 – VHD Set

In Hyper-V this capability was introduced with Windows Server 2012 R2 through a technology called Shared VHDX, which has the following important limitations that often prevent its use in production environments:

  • Backing up a Shared VHDX requires specific in-guest agents; host-based backup is not supported
  • Hyper-V Replica scenarios are not supported
  • Online resizing of a Shared VHDX is not supported

With Hyper-V in Windows Server 2016 this feature was revolutionized with the introduction of the VHD Set in place of the Shared VHDX, which removes the limitations listed above, making it a mature and reliable technology even for production environments. In fact, virtual machines configured to access a VHD Set can be protected by host-based backup, without having to install agents on the guest machines; in this case we recommend checking whether your backup solution supports this configuration. Disks in VHD Set format also support online resizing, without the need to stop the guest cluster accessing them. Hyper-V Replica also supports disks in VHD Set format, allowing you to implement disaster recovery scenarios for guest cluster configurations.

At the moment the only limitations of VHD Sets are the lack of support for creating checkpoints of the virtual machines that access them and the inability to perform a storage live migration of virtual machines with VHD Sets. Microsoft's goal for the future is in any case to bring all remaining functionality to virtual machines configured with VHD Sets.

Requirements for using VHD Set

The VHD Set format is supported only for Windows Server 2016 guest operating systems. To configure a guest cluster in which virtual machines access shared virtual disks, you must fall into one of the following scenarios:

  • Hyper-V failover cluster with all the VM files, including those in VHD Set format, residing on a Cluster Shared Volume (CSV).
  • Hyper-V failover cluster that uses as storage location for the VHD Set an SMB 3.0 share delivered by a Scale-Out File Server (SOFS).

Figure 2 – Supported scenarios for using shared virtual disks

How To Create VHD Set

Virtual disks in VHD Set format can be created either with the graphical user interface (GUI) or using PowerShell. To create them via GUI, simply open Hyper-V Manager and from the Actions pane select New > Hard Disk. Among the possible formats you will also find VHD Set, as shown in the following figure:

Figure 3 – Selecting the virtual disk format in the creation wizard

Continuing with the wizard, you can specify whether the disk should be Fixed or Dynamic, its name, its location, and its size if you choose to create a new blank disk. The same thing can also be done using the PowerShell cmdlet New-VHD, specifying the new .vhds extension for the virtual disk, as shown in the following example:

Figure 4 – Example of creating a disk in VHD Set format using PowerShell
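For reference, a minimal sketch of the command shown in the figure (path and size are illustrative):

  # Create a 40 GB dynamic shared disk; the .vhds extension selects the VHD Set format
  New-VHD -Path "C:\ClusterStorage\Volume1\SharedDisk01.vhds" -SizeBytes 40GB -Dynamic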

Creating a disk in VHD Set format creates the following files in the specified location:

Figure 5 – Files generated by creating a disk in VHD Set format

The file with the .avhdx extension contains the data and can be fixed or dynamic depending on the choice made at creation time, while the .vhds file contains the metadata required to coordinate access by the different guest cluster nodes.

Virtual machine configuration with VHD Set

To add disks in VHD Set format to the virtual machines, modify their properties and configure them appropriately under the SCSI controller as shared drives:

Figure 6 – Addition of the Shared Drive in the properties of the VM

Next you must select the location of the file:

Figure 7 – Configuring the location of the shared drive

The same thing must be done for all the virtual machines that will make up the guest cluster. After configuring the shared storage, which adds the disks in VHD Set format to the virtual machines, you can continue configuring the guest cluster environment according to the standard cluster creation procedure described in Microsoft's official documentation.
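A minimal sketch of the equivalent PowerShell (VM names and path are illustrative):

  # Attach the same VHD Set to every node of the guest cluster
  Add-VMHardDiskDrive -VMName "GuestNode01" -Path "C:\ClusterStorage\Volume1\SharedDisk01.vhds"
  Add-VMHardDiskDrive -VMName "GuestNode02" -Path "C:\ClusterStorage\Volume1\SharedDisk01.vhds"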

Converting Shared VHDX in VHD Set

In Hyper-V infrastructure upgrade scenarios from Windows Server 2012 R2 to Windows Server 2016, you may have to deal with the migration of Shared VHDX disks to VHD Sets to take advantage of all the benefits of the new virtual disk sharing technology. When moving to Windows Server 2016 there is no automatic upgrade of Shared VHDX to VHD Set, and nothing prevents you from continuing to use shared disks in Shared VHDX format in Windows Server 2016. In order to migrate a Shared VHDX to the VHD Set format, you need to follow these manual steps (see the sketch after this list):

  • Shut down all virtual machines connected to the Shared VHDX you intend to migrate.
  • Disconnect the Shared VHDX from all VMs using the PowerShell cmdlet Remove-VMHardDiskDrive or by using Hyper-V Manager.
  • Convert the Shared VHDX to the VHD Set format using the PowerShell cmdlet Convert-VHD.
  • Connect the disk just converted to the VHD Set format to all VMs using the PowerShell cmdlet Add-VMHardDiskDrive or by using Hyper-V Manager.
  • Turn on the virtual machines connected to the VHD Set.
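A minimal sketch of these steps for a single VM (names, controller location, and paths are illustrative):

  # Shut down, detach, convert, re-attach, restart
  Stop-VM -Name "GuestNode01"
  Remove-VMHardDiskDrive -VMName "GuestNode01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1
  Convert-VHD -Path "C:\VMs\Shared01.vhdx" -DestinationPath "C:\VMs\Shared01.vhds"
  Add-VMHardDiskDrive -VMName "GuestNode01" -Path "C:\VMs\Shared01.vhds"
  Start-VM -Name "GuestNode01"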

When using disks in VHD Set format, the following PowerShell cmdlets can be useful:

  • Get-VHDSet: displays various information about a disk in VHD Set format, including a list of any checkpoints.
  • Optimize-VHDSet: optimizes the space allocation used by a disk in VHD Set format.

Conclusions

In Windows Server 2016 the introduction of VHD Sets in Hyper-V enables you to easily implement guest cluster architectures without using storage sharing technologies that require heavy and complex configurations. Most of the restrictions of the previous virtual disk sharing methodology in Hyper-V have also been removed, making VHD Set a mature and reliable technology that can also be used in production environments.

Virtual Machine Manager 2016: installation of the agent in Windows Server 2016 (Server Core)

This article contains the steps required to push-install the Virtual Machine Manager 2016 agent on a Windows Server 2016 system installed in Server Core mode, which is certainly the most common installation option for Hyper-V systems.

Let's start by noting that during the installation of Windows Server 2016 the wizard asks you to choose one of the following options:

  • Windows Server 2016, which corresponds to a Server Core installation. This is the recommended installation mode, unless there are special needs that require the user interface or graphical management tools, since it uses less disk space, reduces the potential attack surface, and lowers the management effort. In this installation mode the standard user interface (“Server Graphical Shell”) is not present, and to manage the server you must use the command line or Windows PowerShell, or do it from a remote system.
  • Windows Server (Server with Desktop Experience), which corresponds to a Full installation of Windows Server 2012 R2 with the “Desktop Experience” feature installed.

Unlike previous versions of Windows Server, it is not possible to convert a Server Core installation into a Server with Desktop Experience installation or vice versa: the only way to change is to perform a new installation of the operating system.

In Windows Server 2016 you can also use the Nano Server mode (for owners of the Datacenter edition) to have an even smaller footprint. For more information about Nano Server I invite you to consult the articles Windows Server 2016: Introduction to Nano Servers and Windows Server 2016: Use Nano Server Image Builder.

Trying to push-install the VMM 2016 agent on a default installation of Windows Server 2016 (Server Core), you will receive the following error message, because a number of preliminary tasks are necessary:

Figure 1 – VMM 2016 error on a default installation of WS2016

By checking the details of the error you are directed towards a series of checks that should be carried out and that require different corrective actions.

  1. Ensure 'Host' is online and not blocked by a firewall.

The first point is obvious: it requires that the system is online and that no firewall is blocking communication from the VMM server.

  2. Ensure that file and printer sharing is enabled on 'Host' and is not blocked by a firewall.

Using the following command you can check that, by default, the firewall rule 'File and Printer Sharing (Echo Request – ICMPv4-In)' is not enabled. The following image shows the command needed to allow this type of inbound traffic:

Figure 2 – Managing the firewall rule 'File and Printer Sharing (Echo Request – ICMPv4-In)'
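For reference, a sketch of the command shown in the figure (adjust the rule name to match your system locale):

  # Enable the inbound ICMPv4 echo rule used by the reachability check
  Enable-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)"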

  3. Ensure that WMI is enabled on 'Host' and is not blocked by a firewall.

The situation is similar for the firewall rule that allows inbound Windows Management Instrumentation (WMI) traffic: it is inactive by default and you must enable it:

Figure 3 – Managing the firewall rule 'Windows Management Instrumentation (WMI-In)'
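Similarly, a sketch of the command for the WMI rules (group name as on an English-locale system):

  # Enable the inbound WMI firewall rules by display group
  Enable-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)"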

  4. Ensure that there is sufficient free space on the system volume.

Of course you need to make sure that the system volume has enough disk space for the installation of the VMM agent, which requires a few dozen MB.

  5. Verify that the ADMIN$ share on 'Host' exists. If the ADMIN$ share does not exist, restart 'Host' and then try the operation again.

During the first phase of the push installation, the VMM agent setup is copied to the ADMIN$ share of the remote server. Windows Server 2016 installed in Server Core mode lacks the File Server role:

Figure 4 – Check for File Server role

By default, however, the feature supporting the SMB 1.0/CIFS protocol is present; in this case it can safely be removed as unnecessary.

To allow access to the ADMIN$ share, you then add the File Server role by using the following PowerShell command:

Figure 5 – Installing the File Server role and removing the SMB 1.0/CIFS support feature
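For reference, a sketch of the commands shown in the figure:

  # Add the File Server role and remove the legacy SMB 1.0/CIFS support feature
  Install-WindowsFeature -Name FS-FileServer
  Uninstall-WindowsFeature -Name FS-SMB1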

Once these operations are complete, you can push-install the VMM 2016 agent on a default installation of Windows Server 2016 (Server Core):

Figure 6 – VMM agent installation job successfully completed

 

Conclusions

In Windows Server 2016 installed in Server Core mode, a task as simple as the push installation of the VMM 2016 agent requires careful and precise system preparation. Despite this, I believe this installation mode is the preferred choice for most Hyper-V deployments.

Windows Server 2016: the new Virtual Switch in Hyper-V

In this article we'll delve into the characteristics of Switch Embedded Teaming (SET) and see how to configure a Hyper-V Virtual Switch in this mode in Windows Server 2016. This is a new technology, alternative to NIC Teaming, that allows multiple network adapters on the same physical virtualization host to be joined to a single Hyper-V Virtual Switch.

With Windows Server 2012, the ability to create network teams natively in the operating system (up to a maximum of 32 network adapters) was introduced, without having to install vendor-specific software. It was common practice on Hyper-V virtualization hosts to define virtual switches bound to these NIC teams. To have high availability and balance virtual machine network traffic, it was necessary to combine two different constructs: the team and the Hyper-V Virtual Switch. With this configuration, it should be noted that traditional LBFO (Load Balancing and Failover) teaming was not compatible with RDMA.

Windows Server 2016 introduces a further option for Hyper-V Virtual Switch configuration called Switch Embedded Teaming (SET), shown in Figure 1, which allows you to combine multiple network adapters (up to a maximum of 8) in a single virtual switch without configuring any teaming. SET includes network teaming within the Virtual Switch, providing high performance and fault tolerance in the event of hardware failure of a single NIC. In this configuration it is possible to use RDMA technology on the individual network adapters, so there is no longer a need for a separate set of NICs (one for use with the Virtual Switch and one for RDMA).

Figure 1 – SET architecture

When evaluating the adoption of Switch Embedded Teaming (SET) mode, it is important to consider its compatibility with other networking technologies.

SET is compatible with:

  • Datacenter Bridging (DCB)
  • Hyper-V Network Virtualization, both NVGRE and VXLAN
  • Receive-side checksum offloads (IPv4, IPv6, TCP) – if supported by the NIC hardware model
  • Remote Direct Memory Access (RDMA)
  • SDN Quality of Service (QoS)
  • Transmit-side checksum offloads (IPv4, IPv6, TCP) – if supported by the NIC hardware model
  • Virtual Machine Queues (VMQ)
  • Virtual Receive Side Scaling (vRSS)

SET is not compatible with the following network technologies:

  • 802.1X authentication
  • IPsec Task Offload (IPsecTO)
  • QoS (implemented on the host side)
  • Receive Side Coalescing (RSC)
  • Receive Side Scaling (RSS)
  • Single Root I/O Virtualization (SR-IOV)
  • TCP Chimney Offload
  • Virtual Machine QoS (VM-QoS)

 

Differences from the NIC Teaming

Switch Embedded Teaming (SET) differs from traditional NIC Teaming in particular in the following aspects:

  • When you deploy SET, the use of NICs in standby mode is not supported: all network adapters must be active;
  • You cannot assign a specific name to the team, but only to the Virtual Switch;
  • SET only supports Switch Independent mode, while NIC Teaming has several modes of operation. This means that the physical network switches to which the NICs belonging to the SET are connected are not aware of this configuration and therefore exercise no control over how network traffic is distributed among the different members.

When you configure SET you only need to specify which network adapters belong to the team and the traffic balancing mechanism.

A Virtual Switch in SET mode should consist of network adapters certified by Microsoft that have passed the "Windows Hardware Qualification and Logo (WHQL)" compatibility tests for Windows Server 2016. An important aspect is that the NICs must be identical in terms of brand, model, drivers, and firmware.

As for how network traffic is distributed among the different members of the SET, there are two modes: Hyper-V Port and Dynamic.

Hyper-V Port

In this configuration, network traffic is divided among the different team members based on the virtual switch port and the associated virtual machine's MAC address. This mode is particularly suitable when used in conjunction with Virtual Machine Queues (VMQ) technology. Also consider that when there are only a few virtual machines on the virtualization host, traffic balancing may not be homogeneous, since the mechanism is not very granular. In this mode, the bandwidth available to a virtual machine's network adapter (whose traffic always flows from a single switch port) is also always limited to the bandwidth available on a single network interface.

Dynamic

This load balancing mechanism has the following features:

  • Outgoing traffic is distributed (based on a hash of TCP ports and IP addresses) according to the principle of flowlets (based on the natural breaks present in TCP communication flows). Dynamic mode also re-balances traffic in real time among the various members of the SET.
  • Incoming traffic, instead, is distributed exactly as in Hyper-V Port mode.

For configuring SET, as for all components belonging to software-defined networking (SDN), it is recommended to adopt System Center Virtual Machine Manager (VMM). When configuring the Logical Switch, simply select "Embedded" as the Uplink Mode, as shown in Figure 2.

Figure 2 – SET configuration in VMM

Alternatively, you can configure SET using the following PowerShell commands:

Creating a Virtual Switch in SET mode

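A minimal sketch (switch and adapter names are illustrative):

  # Create a Virtual Switch in SET mode from two physical adapters
  New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true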

The EnableEmbeddedTeaming parameter is optional when multiple network adapters are listed, but it is useful when you want to create a Virtual Switch in this mode with a single network adapter and extend it with additional NICs later.

Changing the traffic distribution algorithm

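A minimal sketch (the switch name is illustrative):

  # Switch the load balancing algorithm of an existing SET team to Dynamic
  Set-VMSwitchTeam -Name "SETSwitch" -LoadBalancingAlgorithm Dynamic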

Conclusions

Thanks to this new mechanism for creating Virtual Switches introduced in Hyper-V 2016, you gain more flexibility in networking management, reducing the number of network adapters and the configuration complexity, while enjoying high performance and high availability.

Windows Server 2016: What’s New in Hyper-V

Windows Server 2016 has been officially released and there are several new Hyper-V features that make it an increasingly powerful virtualization platform. In this article I will show the new Hyper-V features in Windows Server 2016, with attention paid to the changes from the previous version.

Nested Virtualization

This feature allows a virtual machine to run the Hyper-V role and consequently host other virtual machines on it. It is useful for test and development environments, but it is not suitable for production. In order to use nested virtualization, the following requirements must be met (a sketch of enabling it follows the list):

  • The virtual machine running the Hyper-V role must have at least 4 GB of RAM
  • Guest virtual machines must also run Windows Server 2016
  • Nested virtualization is available only if the physical host that holds the VM with Hyper-V has Intel processors with VT-x and EPT.
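A minimal sketch of enabling it (the VM name is illustrative; run while the VM is off):

  # Expose the virtualization extensions of the host processor to the VM
  Set-VMProcessor -VMName "NestedHost01" -ExposeVirtualizationExtensions $true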

For further information you can see Windows Server 2016: Introduction to Hyper-V Nested Virtualization by Silvio Benedict or the Microsoft document Nested Virtualization.

Networking Features

Networking also sees important new features that allow you to take full advantage of the hardware and get more performance:

  • Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET). Switch Embedded Teaming (SET) is a new technology, alternative to NIC Teaming, that allows multiple network adapters to be joined to the same Hyper-V Virtual Switch. Prior to Windows Server 2016 it was necessary to have a separate set of NICs (one for use with the Virtual Switch and one to take advantage of RDMA), since OS-level teaming was not compatible with RDMA. In Windows Server 2016 it is possible to use RDMA on network adapters associated with a Virtual Switch, configured with or without Switch Embedded Teaming (SET) mode
  • Virtual Machine Multi Queues (VMMQ). Improves VMQ throughput with the ability to allocate multiple hardware queues per virtual machine
  • Quality of Service (QoS) for software-defined networks

Hot add and remove for network adapters and memory

For generation 2 virtual machines running both Windows and Linux, you can add or remove network adapters while the virtual machine is running, without any downtime. In addition, for both generation 1 and generation 2 virtual machines running Windows Server 2016 or Windows 10, you can change the amount of RAM assigned while the VM is running, even if dynamic memory is not enabled.

Hyper-V Cluster Rolling Upgrade

Important changes also affect clustering. It is now possible to add a Windows Server 2016 node to an existing Hyper-V cluster consisting of Windows Server 2012 R2 nodes. This allows you to update the cluster systems without any downtime. As long as not all cluster nodes have been upgraded to Windows Server 2016, the cluster keeps the features of Windows Server 2012 R2. After updating all the nodes in the cluster, you must upgrade the functional level via the PowerShell cmdlet Update-ClusterFunctionalLevel.
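A minimal sketch of the final step, run on a cluster node once all nodes have been upgraded:

  # Raise the cluster functional level and verify it
  Update-ClusterFunctionalLevel
  Get-Cluster | Select-Object Name, ClusterFunctionalLevel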

Start Order Priority for Clustered Virtual Machines

Thanks to this feature it is possible to obtain more control over virtual machine boot priority in clustered environments. This can be useful to start virtual machines that provide services before other VMs that consume those services. All this is easily achieved by configuring sets, assigning virtual machines to the different sets, and defining dependencies.

Production Checkpoints

The creation of production checkpoints relies on backup technologies inside the virtual machine instead of the Hyper-V save-state mechanism. For Windows-based machines the Volume Shadow Copy Service (VSS) is used, while for Linux virtual machines a flush of the file system buffers is performed to create a checkpoint that is consistent at the file system level. Production checkpoints are the default for new virtual machines, but it is still possible to create checkpoints based on the saved state of the virtual machine, called standard checkpoints.

Host Resource Protection

This feature helps prevent conditions where the activity of a single virtual machine can degrade the performance of the Hyper-V host or of other virtual machines. When this monitoring mechanism detects a VM with excessive activity, it reduces the resources assigned to it. By default this control mechanism is disabled, and you can activate it using the following PowerShell command: Set-VMProcessor -VMName <VMName> -EnableHostResourceProtection $true

Shielded Virtual Machines

Shielded virtual machines aggregate several features that make it much more difficult for malware, or for Hyper-V administrators themselves, to inspect, tamper with, or steal data. The data and state of shielded virtual machines are encrypted, and Hyper-V administrators are not allowed to view the video output or the data on the virtual disks. These virtual machines can run only on specific, designated Hyper-V hosts that are in a healthy state according to the policies issued by the Host Guardian Service. Shielded virtual machines are compatible with the Hyper-V Replica feature; to enable replication, the Hyper-V host on which you want to replicate the shielded virtual machine must be authorized. For more details about these new features, please consult the document Guarded Fabric and Shielded VMs.

Virtualization-based Security for Generation 2 Virtual Machines

New security features have been introduced for generation 2 virtual machines (starting with version 8 of the configuration file), like Device Guard and Credential Guard, which increase the security of the operating system against malware attacks.

Encryption Support for the Operating System drive in Generation 1 Virtual Machines

Now it is possible to protect the operating system disks of generation 1 virtual machines using BitLocker. Thanks to the new Key Storage feature (which requires at least version 8 of the configuration file), a small dedicated drive is created to host the keys used by BitLocker, instead of using a virtual Trusted Platform Module (TPM), which is only available for generation 2 virtual machines.

Linux Secure Boot

Generation 2 virtual machines based on Linux can boot using Secure Boot. The following Linux distributions are enabled for Secure Boot on Windows Server 2016 hosts (a sketch of enabling it follows the list):

  • Ubuntu 14.04+
  • SUSE Linux Enterprise Server 12+
  • Red Hat Enterprise Linux 7.0+
  • CentOS 7.0+
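A minimal sketch of enabling it (the VM name is illustrative):

  # Enable Secure Boot on a generation 2 Linux VM with the Linux-compatible template
  Set-VMFirmware -VMName "UbuntuVM01" -EnableSecureBoot On -SecureBootTemplate MicrosoftUEFICertificateAuthority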

Windows PowerShell Direct

This is a new way to run Windows PowerShell commands in a virtual machine directly from the host, without requiring network access or remote management tools, and regardless of the remote management configuration. Windows PowerShell Direct is an excellent alternative to the tools currently used by Hyper-V administrators, such as PowerShell remoting, Remote Desktop, or Hyper-V Virtual Machine Connection (VMConnect), and offers a great scripting and automation experience (difficult to achieve, for example, with VMConnect).
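A minimal sketch (the VM name is illustrative; guest credentials are requested interactively):

  # Run a command inside a VM directly from the Hyper-V host
  $cred = Get-Credential
  Invoke-Command -VMName "VM01" -Credential $cred -ScriptBlock { Get-Service }
  # Or open an interactive session inside the VM
  Enter-PSSession -VMName "VM01" -Credential $cred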

Compatible with Connected Standby

When the Hyper-V role is enabled on a system that uses the Always On/Always Connected (AOAC) power model, the Connected Standby power state is now available.

Discrete Device Assignment

This exciting feature enables you to give a virtual machine direct and exclusive access to certain PCIe hardware devices. When a device is used in this mode, the entire Hyper-V virtualization stack is bypassed, ensuring faster access to the hardware. For more information about hardware requirements, please refer to the section "Discrete device assignment" in the document System requirements for Hyper-V on Windows Server 2016.
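
The assignment itself is performed with PowerShell; a minimal sketch, assuming the device has already been disabled on the host and that both the location path and the VM name "VM01" are placeholders:

  # Location path of the PCIe device (illustrative value; retrieve the real one from Device Manager)
  $locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"

  # Dismount the device from the host so that it becomes assignable
  Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

  # Assign the device exclusively to the virtual machine
  Add-VMAssignableDevice -LocationPath $locationPath -VMName "VM01"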

Windows Containers

Speaking of what's new in Hyper-V, it is also worth mentioning Windows Containers, which make it possible to run isolated applications on a system. Among the main strengths of containers are speed of creation, high scalability, and portability. There are two container runtime types, each providing a different level of application isolation (see the sketch after the list):

  • Windows Server Containers, which use namespace and process isolation
  • Hyper-V Containers, which run each container inside a small dedicated virtual machine
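
With the Docker tooling, the isolation level can be chosen per container; a minimal sketch, where the image name is only an example:

  # Windows Server Container: shares the host kernel, isolated via namespaces and processes
  docker run --isolation=process microsoft/nanoserver cmd /c echo hello

  # Hyper-V Container: runs inside a small utility virtual machine with its own kernel
  docker run --isolation=hyperv microsoft/nanoserver cmd /c echo hello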

For more details on containers, I invite you to consult the official Windows Containers documentation and the specific section on WindowServer.it, which contains several very interesting articles.

 

Updated Features in Windows Server 2016

More Memory and Processors for Generation 2 Hyper-V Virtual Machines and Hosts

Generation 2 virtual machines, starting with version 8 of the configuration file, can be configured with more resources in terms of virtual processors and RAM. The maximum resource limits of physical hosts have also been revised. For all details about Hyper-V scalability in Windows Server 2016, you can read the document Plan for Hyper-V scalability in Windows Server 2016.

Virtual Machine Configuration File Format

Virtual machine configuration files use a new format that allows the different configurations to be read and written more efficiently. The new format also makes the configuration more resilient to corruption in the event of a disk subsystem failure. The new configuration file that holds the virtual machine configuration uses the .vmcx extension, while the .vmrs extension is used to hold the runtime state.

Virtual Machine Configuration Version

The configuration file version represents the compatibility level of the virtual machine with the version of Hyper-V. Virtual machines with configuration file version 5 are compatible with Windows Server 2012 R2 and can be run either on Windows Server 2012 R2 or on Windows Server 2016. Virtual machines with the configuration file versions introduced in Windows Server 2016 cannot be run in Hyper-V on Windows Server 2012 R2. In order to use all the new features on virtual machines created with Windows Server 2012 R2 and then migrated or imported into Hyper-V on Windows Server 2016, you must update the virtual machine configuration version. The update is not performed automatically, and downgrading the configuration file version is not supported. Full details on how to upgrade the virtual machine configuration version can be found in the document Upgrade virtual machine version in Hyper-V on Windows 10 or Windows Server 2016.
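
Checking and upgrading the configuration version can be done with PowerShell; a minimal sketch, where "VM01" is only a placeholder:

  # List the configuration version of every VM on the host
  Get-VM | Format-Table Name, Version

  # Upgrade a specific VM to the latest version supported by the host (the VM must be powered off; the operation cannot be undone)
  Update-VMVersion -Name "VM01"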

Hyper-V Manager Improvements

Hyper-V Manager also introduces important improvements:

  • Alternate credentials support – Provides the ability to use a different set of credentials in Hyper-V Manager when connecting to a remote Windows Server 2016 or Windows 10 host. Credentials can also be saved for easy reuse.
  • Manage earlier versions – Using Hyper-V Manager on Windows Server 2016 and Windows 10, you can also manage Hyper-V on systems running Windows Server 2012, Windows 8, Windows Server 2012 R2, and Windows 8.1.
  • Updated management protocol – Hyper-V Manager uses the WS-MAN protocol to communicate with the remote Hyper-V host. WS-MAN supports CredSSP, Kerberos, and NTLM authentication and simplifies the host configuration needed to allow remote management.

Integration Services Delivered Through Windows Update

Very useful is the ability to update the integration services of Windows-based virtual machines via Windows Update. This is of particular interest to service providers, because with this mechanism control over the application of these updates is left in the hands of the tenant who owns the virtual machine. Tenants can then independently update their Windows virtual machines with all updates, including the integration services, using a single method.

Shared Virtual Hard Disks

It is now possible to resize virtual hard disks, including the shared ones used to create guest clustering environments, without any downtime. Shared virtual hard disks can be extended or reduced in size while the virtual machine is online. Guest clusters that use shared virtual hard disks can now also be protected with Hyper-V Replica for disaster recovery.
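
The resize is performed with the usual cmdlet while the disk stays attached; a minimal sketch, where the path and size are only examples:

  # Extend a shared .vhdx without taking the guest cluster offline
  Resize-VHD -Path "C:\ClusterStorage\Volume1\SharedDisk.vhdx" -SizeBytes 500GB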

Storage Quality of Service (QoS)

On the storage side, you can create QoS policies on a Scale-Out File Server and assign them to the different virtual disks associated with Hyper-V virtual machines. This gives you the ability to control storage performance and prevent individual virtual machines from impacting the entire disk subsystem. You can find full details on this topic in the document Storage Quality of Service.
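
Policies are created on the cluster that provides the storage and then referenced by ID on the virtual disks; a minimal sketch, where the policy name, the limits, and the VM name are only examples:

  # On the Scale-Out File Server cluster: create a policy with minimum and maximum normalized IOPS
  $policy = New-StorageQosPolicy -Name "Silver" -MinimumIops 100 -MaximumIops 500

  # On the Hyper-V host: bind the policy to the disks of a virtual machine
  Get-VMHardDiskDrive -VMName "VM01" | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId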

 

Conclusions

There are many new features in the Microsoft virtualization platform of Windows Server 2016 that make it even more complete and feature-rich. Microsoft Hyper-V has now been on the market for several years, has reached the highest levels of reliability, and offers a great enterprise-class virtualization solution. When choosing a virtualization platform, it is also wise not to overlook the possibilities offered to scale towards the public cloud or to implement hybrid architectures.

You can test all the new features of Microsoft Hyper-V Server 2016 by downloading the trial version from the TechNet Evaluation Center.

For more insights on the topic, I invite you to attend the sessions devoted to Hyper-V during the SID // Windows Server 2016 Roadshow, the free event dedicated to the new operating system, open to all companies, consultants, and partners who want to prepare for the future and learn the latest news and best practices for implementing the new server operating system.

Microsoft Azure Site Recovery: Hyper-V virtual machine replication in Microsoft Azure

Microsoft Azure Site Recovery provides the ability to replicate Hyper-V virtual machines to a dedicated cloud service for disaster recovery purposes.

By accessing the following link you can find all the details about prerequisites and supported scenarios for using Azure Site Recovery: http://msdn.microsoft.com/it-it/library/azure/dn469078.aspx