
Disaster Recovery Solutions: Storage Replica vs DFS Replication

Microsoft Storage Replica is a technology introduced with Windows Server 2016 whose main usage scenario is the replication of volumes, synchronously or asynchronously, between servers or clusters for disaster recovery purposes. Today, many Microsoft customers still use DFS Replication (DFS-r) as a disaster recovery solution for unstructured data such as home folders and departmental shares. The question many are asking is whether the more recent Storage Replica technology really takes the place of the well-established DFS-r mechanism. This article explores the characteristics of the two solutions in order to clarify when it is convenient to use Storage Replica and when to use DFS Replication (DFS-r).

DFS Replication (DFS-r)

DFS Replication is a solution that can be activated through a Windows Server role and that allows you to replicate folders across different servers and geographic sites. The solution is based on an efficient multi-master replication engine, which can be used to keep folders synchronized between servers even over network connections with limited bandwidth. DFS-r uses a compression algorithm known as Remote Differential Compression (RDC), which detects the changes in a file and replicates only the changed blocks instead of the entire file. DFS-r has long since replaced the File Replication Service (FRS) and is also used to replicate the SYSVOL folder of Active Directory Domain Services (AD DS) in domains whose functional level is at least Windows Server 2008.

Enabling DFS-r involves creating replication groups that contain the files and folders to be replicated:

Figure 1 – DFS-r Replication Process

For more information about the DFS Replication service (DFS-r) you can see the related Microsoft documentation.
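
As an illustrative example, a replication group like the one shown above can also be created with the DFSR PowerShell module. The following is a minimal sketch in which the server names, paths, group and folder names are hypothetical:

  # Create a two-server DFS-r replication group and a replicated folder
  New-DfsReplicationGroup -GroupName "RG-HomeFolders"
  New-DfsReplicatedFolder -GroupName "RG-HomeFolders" -FolderName "HomeFolders"
  Add-DfsrMember -GroupName "RG-HomeFolders" -ComputerName "FS01","FS02"

  # Create the connection between the two members (bidirectional by default)
  Add-DfsrConnection -GroupName "RG-HomeFolders" -SourceComputerName "FS01" -DestinationComputerName "FS02"

  # Define the local content path on each member; FS01 is the authoritative (primary) member
  Set-DfsrMembership -GroupName "RG-HomeFolders" -FolderName "HomeFolders" `
      -ComputerName "FS01" -ContentPath "D:\HomeFolders" -PrimaryMember $true -Force
  Set-DfsrMembership -GroupName "RG-HomeFolders" -FolderName "HomeFolders" `
      -ComputerName "FS02" -ContentPath "D:\HomeFolders" -Force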

Storage Replica

Storage Replica is a technology introduced in Windows Server 2016 that allows the replication of volumes between servers or clusters for disaster recovery purposes.

Figure 2 – Server-to-server and Cluster-to-cluster storage replication scenarios

This technology also allows you to create stretch failover clusters with nodes spread across two different sites, keeping the storage in sync.

Figure 3 – Stretch clustered storage replication scenarios

In Windows Server 2016 the ability to use storage replication is only available with the Datacenter edition of the operating system, while Windows Server 2019 also allows Storage Replica to be enabled on the Standard edition, although with the following limitations:

  • You can replicate a single volume instead of an unlimited number of volumes.
  • The maximum size of the replicated volume should not exceed 2 TB.

For more information on Storage Replica, please consult the related Microsoft documentation.
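
Before enabling replication, it can be useful to validate the prospective topology. The following is a minimal sketch using the Test-SRTopology cmdlet from the Storage Replica PowerShell module; the server names, volumes and report path are hypothetical:

  # Validate a prospective server-to-server replication topology and produce an HTML report
  Test-SRTopology -SourceComputerName "SRV01" -SourceVolumeName "F:" -SourceLogVolumeName "G:" `
      -DestinationComputerName "SRV02" -DestinationVolumeName "F:" -DestinationLogVolumeName "G:" `
      -DurationInMinutes 30 -ResultPath "C:\Temp"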

Comparison of the solutions

The DFS-r solution is particularly effective in environments with limited network bandwidth and where the content to be replicated across nodes is expected to change infrequently. However, DFS-r has significant limitations as a data replication solution, including:

  • It cannot replicate files that are open or in use.
  • It does not have a synchronous replication mechanism.
  • The latency of asynchronous replication can be considerable (minutes, hours or even days).
  • It relies on a database that may require costly consistency checks after a system outage.
  • The management overhead of the environment is high.

The Storage Replica solution does not have the limitations listed above, but it is worth considering the following aspects that differentiate it from DFS-r and that in some scenarios could be critical:

  • Replication occurs at the volume level and only one-to-one replication between volumes is allowed. However, it is possible to replicate different volumes between multiple servers.
  • It can replicate synchronously or asynchronously, but it is not designed for low-bandwidth, high-latency networks.
  • Users are not allowed to access the protected data on the target server while replication is in progress.
    • To validate the effectiveness of the replication process, it is still possible to carry out a test failover, which mounts a writable snapshot of the replicated storage. To perform this operation, for testing or backup purposes, you must have a volume on the destination server that is not involved in the replication. The test failover has no impact on the replication process, which continues to protect the data, and the changes made to the snapshot remain confined to the test volume.

How to replace DFS Replication (DFS-r) with Storage Replica?

If the characteristics of Storage Replica are not considered blocking for your scenario, this newer technology can be adopted to replace DFS Replication (DFS-r). The high-level process involves the following steps (a PowerShell sketch of the main steps follows the list):

  • Install new servers running at least Windows Server 2016, paying attention to the limits imposed by the Standard edition, and configure the storage. To learn more about the Storage Replica improvements in Windows Server 2019 you can consult this article.
  • Migrate the data that you want to replicate to one or more data volumes (for example with Robocopy).
  • Enable Storage Replica replication and let the initial synchronization complete.
  • It is recommended to enable snapshots through Volume Shadow Copies, particularly in the case of asynchronous replication. Snapshots are replicated along with the files; in case of emergency, this allows you to restore, from the snapshots on the target server, files that may have been only partially replicated asynchronously.
  • Share the data on the source server and make it accessible through a DFS namespace. This is important to ensure that users can still access the data when the server name changes during the activation of the disaster recovery plan. On the replication target server (DR site) you can create the corresponding shares (not accessible during normal operations). The server in the DR site can be added to the DFS namespace, keeping its folder targets disabled.
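
The following is an illustrative sketch of these steps with PowerShell; server names, volumes, shares and namespace paths are hypothetical:

  # 1. Seed the data volume on the new source server (for example from the old DFS-r member)
  robocopy "\\OLDFS01\Departments" "F:\Departments" /MIR /COPYALL /R:1 /W:1

  # 2. Enable Storage Replica between the source (SRV01) and the DR server (SRV02)
  New-SRPartnership -SourceComputerName "SRV01" -SourceRGName "RG01" `
      -SourceVolumeName "F:" -SourceLogVolumeName "G:" `
      -DestinationComputerName "SRV02" -DestinationRGName "RG02" `
      -DestinationVolumeName "F:" -DestinationLogVolumeName "G:" `
      -ReplicationMode Asynchronous

  # 3. Publish the share through a DFS namespace; the DR target stays offline during normal operations
  New-DfsnFolder -Path "\\contoso.com\Public\Departments" -TargetPath "\\SRV01\Departments"
  New-DfsnFolderTarget -Path "\\contoso.com\Public\Departments" -TargetPath "\\SRV02\Departments" -State Offline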

If the disaster recovery scenario needs to be activated when using the Storage Replica solution, you should do the following (a sketch follows the list):

  • Make the server located in the DR site primary, so that it can expose the replicated volumes to users.
  • In the case of synchronous replication, no data recovery will be required, unless at the moment the source server was lost a user was running an application that wrote data without transaction protection.
  • In the case of asynchronous replication, you may need to mount a snapshot to ensure application-level data consistency.
  • Enable the folder targets in the DFS namespace to allow users to access their data.
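
A possible sketch of these steps, assuming the partnership and namespace created earlier (all names are hypothetical):

  # Reverse the replication direction so that the DR server (SRV02) becomes the source and exposes the volume
  Set-SRPartnership -NewSourceComputerName "SRV02" -SourceRGName "RG02" `
      -DestinationComputerName "SRV01" -DestinationRGName "RG01"

  # Bring the DR folder target online in the DFS namespace so users can reach their data again
  Set-DfsnFolderTarget -Path "\\contoso.com\Public\Departments" -TargetPath "\\SRV02\Departments" -State Online
  Set-DfsnFolderTarget -Path "\\contoso.com\Public\Departments" -TargetPath "\\SRV01\Departments" -State Offline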

Conclusions

Microsoft is continuing to make major investments in storage, and Storage Replica is the tangible result, allowing customers to adopt an effective and high-performing storage replication solution. In the disaster recovery field there are several scenarios where Storage Replica can replace the DFS Replication service (DFS-r); however, you should carefully evaluate the characteristics of both solutions to choose the one that best suits your usage scenario.

Azure storage: Disaster Recovery and failover capabilities

Microsoft recently announced a new feature that allows a controlled failover to be carried out for geo-redundant Azure storage accounts. This feature increases control over this type of storage account, allowing greater flexibility in disaster recovery scenarios. This article shows how it works and the procedure to follow in order to use it.

Types of storage accounts

In Azure there are different types of storage account with distinct replication characteristics, providing different levels of redundancy. If you wish to keep the data in a storage account available even in the event of a failure of an entire Azure region, you need to adopt a geo-redundant storage account, of which there are two types:

  • Geo-redundant storage (GRS): the data is replicated asynchronously to a second Azure geographic region, hundreds of miles away from the first.
  • Read-access geo-redundant storage (RA-GRS): it follows the same replication principle described above, but the secondary endpoint can also be accessed to read the replicated data.

With these types of storage account, three copies of the data are maintained in the primary Azure region, selected during configuration, and three additional copies are replicated asynchronously to another Azure region, following the principle of Azure Paired Regions.

Figure 1 - Normal operation of GRS/RA-GRS storage

For more information about the different types of storage account and their redundancy you can consult Microsoft's official documentation.
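
For reference, a minimal sketch of creating a geo-redundant (RA-GRS) storage account, assuming the Az PowerShell module; the resource group, account name and region are illustrative:

  # Create a StorageV2 account with read-access geo-redundant replication
  New-AzStorageAccount -ResourceGroupName "rg-storage" -Name "stdrdemo001" `
      -Location "westeurope" -SkuName "Standard_RAGRS" -Kind "StorageV2"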

Characteristics of storage account failover process

Thanks to this new feature, the administrator has the option to start the account failover process deliberately, when deemed appropriate. The failover process updates the public DNS record of the storage account so that requests are routed to the endpoint of the storage account in the secondary region.

Figure 2 – Failover process for a GRS/RA-GRS storage account

After the failover process, the storage account is configured as locally redundant storage (LRS) and you must reconfigure it to make it geo-redundant again.

An important aspect to keep in mind when deciding to fail over a storage account is that this operation can result in data loss, because replication between the two Azure regions is asynchronous. For this reason, if the primary region becomes unavailable, some changes may not yet have been replicated to the secondary region. To verify this condition you can refer to the Last Sync Time property, which indicates the point in time up to which the data is guaranteed to have been replicated to the secondary region.
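
A short sketch of how this property can be checked with the Az PowerShell module (the account names are hypothetical):

  # Retrieve the account together with its geo-replication statistics and read the Last Sync Time
  $account = Get-AzStorageAccount -ResourceGroupName "rg-storage" -Name "stdrdemo001" -IncludeGeoReplicationStats
  $account.GeoReplicationStats.LastSyncTime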

Storage account failover procedure from the Azure Portal

The following shows the steps to fail over a storage account directly from the Azure portal.

Figure 3 – Storage account failover process

Figure 4 – Storage account failover process confirmation

The procedure to start the failover of a storage account can be carried out not only from the Azure portal, but also through PowerShell, Azure CLI, or the APIs for Azure Storage resources.
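
As an example, a sketch of the PowerShell route, assuming the Az module; the cmdlet asks for confirmation because data loss is possible with asynchronous replication:

  # Start the failover of the storage account to the secondary region
  Invoke-AzStorageAccountFailover -ResourceGroupName "rg-storage" -Name "stdrdemo001"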

How to identify the problems on the storage account

Microsoft recommends designing applications that use storage accounts to handle possible failures during writes. In this way, the application can surface any write failures it encounters, alerting you to the possible unavailability of the storage in a given region. This makes it possible to take corrective action, such as failing over the GRS/RA-GRS storage account.

Natively, the platform provides detailed information through the Azure Service Health service whenever conditions affect the operation of Azure services, including storage. Thanks to the full integration of Service Health with Azure Monitor, which hosts the Azure alerting engine, you can configure specific alerts for Azure-side issues that impact the operation of the resources in your subscription.

Figure 5 - Creating a Health alert in Service Health

Figure 6 - Notification rule for storage issues

Notification occurs through Action Groups; once notified, you can evaluate whether the storage account failover is really needed.

Conclusions

Before the release of this feature, with the GRS/RA-GRS storage account types, failover had to be driven by Microsoft staff in the event of a storage fault affecting an entire Azure region. This feature gives the administrator the ability to fail over, providing greater control over the storage account. At the moment this feature is available in preview and only for storage accounts created in certain Azure regions. As with other Azure functionality in preview, it is best to wait for the official release before using it for workloads in a production environment.

Azure File Sync: solution overview

The Azure File Sync service (AFS) allows you to centralize the network folders of your infrastructure in Azure Files, maintaining the typical characteristics of an on-premises file server in terms of performance, compatibility and flexibility, while at the same time benefiting from the potential offered by the cloud. This article describes the main features of the Azure File Sync service and the procedures to follow to deploy it.

Figure 1 – Overview of Azure File Sync

Azure File Sync is able to transform Windows Server into a "cache" for quick access to the content of a given Azure file share. Local access to the data can occur with any protocol available in Windows Server, such as SMB, NFS and FTPS. You also have the possibility to have multiple "cache" servers in different geographic locations.

These are the main features of Azure File Sync:

  • Multi-site sync: you have the option to sync between different sites, allowing write access to the same data across different Windows Servers and Azure Files.
  • Cloud tiering: only recently accessed data is kept locally.
  • Integration with Azure Backup: there is no longer a need to back up data on-premises; you can protect the content through Azure Backup.
  • Disaster recovery: you have the option to immediately restore the file metadata and retrieve only the data you need, for faster service reactivation in disaster recovery scenarios.
  • Direct cloud access: content on the file share can be accessed directly from other Azure resources (IaaS and PaaS).

 

Requirements

In order to deploy Azure File Sync, you need the following requirements:

An Azure storage account, with a file share configured on Azure Files, in the same region where you want to deploy the AFS service. To create a storage account, you can follow the article Create a storage account, while the file share creation process is shown in this document.

A Windows Server system running Windows Server 2012 R2 or later, which must have:

  • PowerShell 5.1, which is included by default since Windows Server 2016.
  • PowerShell Modules AzureRM.
  • The Azure File Sync agent. The agent setup can be downloaded at this link. If you intend to use AFS in a clustered environment, you must install the agent on all nodes in the cluster. In this regard, Windows Server Failover Clustering is supported by Azure File Sync for the "File Server for general use" deployment type. The failover cluster environment is not supported on "Scale-Out File Server for application data" (SOFS) or on Clustered Shared Volumes (CSVs).
  • The option "Internet Explorer Enhanced Security Configuration" should be kept disabled for Administrators and for Users.

 

Concepts and service configuration

After confirming that these requirements are met, the activation of Azure File Sync starts with the creation of the Storage Sync Service:

Figure 2 – Creating Storage Sync service

This is the top-level resource for Azure File Sync, which acts as a container for the synchronization relationships between different storage accounts and multiple Sync Groups. A Sync Group defines the synchronization topology for a set of files: the endpoints located within the same Sync Group are kept in sync with each other.

Figure 3 – Creating Sync Group
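
The same resources can also be created with PowerShell. The article's requirements mention the AzureRM module, but the following minimal sketch uses the newer Az.StorageSync cmdlets; all names and the storage account resource ID are hypothetical:

  # Create the Storage Sync Service and a Sync Group inside it
  New-AzStorageSyncService -ResourceGroupName "rg-afs" -Name "afs-sync01" -Location "westeurope"
  New-AzStorageSyncGroup -ResourceGroupName "rg-afs" -StorageSyncServiceName "afs-sync01" -Name "sg-departments"

  # The cloud endpoint links the Sync Group to the Azure file share
  New-AzStorageSyncCloudEndpoint -ResourceGroupName "rg-afs" -StorageSyncServiceName "afs-sync01" `
      -SyncGroupName "sg-departments" -Name "cloud-departments" `
      -StorageAccountResourceId "/subscriptions/<subscription-id>/resourceGroups/rg-afs/providers/Microsoft.Storage/storageAccounts/stafsdemo" `
      -AzureFileShareName "departments"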

At this point you can proceed with the server registration by starting the Azure File Sync agent.

Figure 4 – Starting the sign-in process

Figure 5 – Selection of server registration parameters

Figure 6 – Confirmation of registration of the agent

After the registration the server will also appear in the "Registered servers" section of the Azure portal:

Figure 7 – Registered servers into Storage Sync service

At the end of the server registration, you should add a Server Endpoint to the Sync Group; a Server Endpoint combines a volume or a specific folder of a registered server, creating a location to be synchronized.

Figure 8 – Adding a Server Endpoint

When adding a Server Endpoint you can enable cloud tiering, which keeps locally, in the Windows Server cache, only the most frequently accessed files, while all the remaining files are kept in Azure on the basis of configurable policies. More information about cloud tiering can be found in Microsoft's official documentation. In this regard, it should be noted that Azure File Sync with cloud tiering enabled is not supported together with Data Deduplication: if you want to enable Windows Server Data Deduplication, cloud tiering must remain disabled.
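
A hedged sketch of adding a Server Endpoint with cloud tiering enabled, again assuming the Az.StorageSync cmdlets; the registered server name, local path and free-space threshold are hypothetical:

  # Look up the registered server, then create the Server Endpoint with cloud tiering enabled
  $server = Get-AzStorageSyncServer -ResourceGroupName "rg-afs" -StorageSyncServiceName "afs-sync01" |
      Where-Object { $_.FriendlyName -eq "FS01" }

  New-AzStorageSyncServerEndpoint -ResourceGroupName "rg-afs" -StorageSyncServiceName "afs-sync01" `
      -SyncGroupName "sg-departments" -Name "FS01-departments" `
      -ServerResourceId $server.ResourceId -ServerLocalPath "D:\Departments" `
      -CloudTiering -VolumeFreeSpacePercent 20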

After adding one or more Server Endpoint you can check the status of the Sync Group:

Figure 9 – Status of Sync Group

 

To achieve a successful Azure File Sync deployment you should also carefully check compatibility with the antivirus and backup solutions in use.

Azure File Sync and DFS Replication (DFS-R) are both data replication solutions and can also operate side by side, as long as the following conditions are met:

  1. Azure File Sync cloud tiering must be disabled on volumes with DFS-R replicated folders.
  2. The Server endpoints should not be configured on DFS-R read-only folders.

Azure File Sync can be a great substitute for DFS-R and for the migration you can follow the instructions in this document. There are still some specific scenarios that might require the simultaneous use of both replication solutions:

  • Not all on-premises servers that require a copy of the files can be connected to the Internet.
  • Branch servers consolidate data onto a single hub server, on which Azure File Sync is then used.
  • During the phased migration from a DFS-R deployment to Azure File Sync.

Conclusions

Azure File Sync is a solution that extends classic file servers deployed on-premises with new content synchronization features, leveraging the scalability and flexibility of the Microsoft public cloud.

Storage Replica: What's new in Windows Server 2019 and the management with Windows Admin Center

Storage Replica is a Microsoft technology introduced in Windows Server 2016 that is used to replicate volumes, synchronously or asynchronously, between servers or clusters for disaster recovery purposes. This technology also allows you to create stretch failover clusters with nodes spread across two different sites, keeping the storage in sync. This article presents what's new in Storage Replica in Windows Server 2019 and shows how to enable Storage Replica by using the new management tool, Windows Admin Center.

 

What's new in Storage Replica in Windows Server 2019

In Windows Server 2016 Storage Replica can be used only with the Datacenter edition of the operating system, while Windows Server 2019 will also offer the option to enable Storage Replica on the Standard edition, although, as it currently stands, with the following limitations:

  • You can replicate a single volume instead of an unlimited number of volumes.
  • The maximum size of the replicated volume should not exceed 2 TB.
  • A replicated volume can have only one partnership, instead of an unlimited number of partners.

By adopting a new log format (Log v1.1), Storage Replica introduces important performance improvements in terms of throughput and latency. You can benefit from these improvements only if all systems involved in the replication process run Windows Server 2019, and they will be especially noticeable on all-flash arrays and on Storage Spaces Direct (S2D) clusters.

To validate the effectiveness of the replication process, the ability to perform a test failover is introduced. Through this new feature it is possible to mount a writable snapshot of the replicated storage. To perform this operation, for testing or backup purposes, you must have a volume on the destination server that is not involved in the replication. The test failover has no impact on the replication process, which continues to protect the data, and the changes made to the snapshot remain confined to the test volume. Upon completion of the tests, the snapshot should be discarded.
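
A hedged sketch of what a test failover could look like with the Storage Replica cmdlets expected in Windows Server 2019; the server, replication group name and temporary volume are hypothetical:

  # Mount a writable snapshot of the replicated data on a volume (T:) that is not part of the replication
  Mount-SRDestination -ComputerName "SRV02" -Name "RG02" -TemporaryPath "T:\"

  # ... run the validation or the backup against the mounted snapshot ...

  # Discard the snapshot when testing is complete
  Dismount-SRDestination -ComputerName "SRV02" -Name "RG02" -TemporaryPath "T:\"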

Storage Replica in Windows Admin Center

Windows Admin Center, formerly known as Project Honolulu, allows you to manage your infrastructure in a centralized way through an HTML5-based web console.

Through Windows Admin Center you can install the Storage Replica feature and the related PowerShell module on the servers.

Figure 1 - Add the Storage Replica feature from Windows Admin Center

Figure 2 - Confirm the installation of Storage Replica and its dependencies

Figure 3 - Notification that the installation was successful

After the installation, the server requires a restart.
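
The equivalent can also be done without Windows Admin Center; a minimal sketch with PowerShell:

  # Install the Storage Replica feature and its management tools, then restart the server
  Install-WindowsFeature -Name Storage-Replica -IncludeManagementTools -Restart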

At this point you can configure, through Windows Admin Center, a new replication partnership. The same thing could be accomplished using the Windows PowerShell cmdlet New-SRPartnership.
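
For reference, a sketch of a server-to-server partnership created with this cmdlet; the server, replication group and volume names are hypothetical:

  # Create a synchronous partnership between NODE01 and NODE02
  New-SRPartnership -SourceComputerName "NODE01" -SourceRGName "RG01" `
      -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
      -DestinationComputerName "NODE02" -DestinationRGName "RG02" `
      -DestinationVolumeName "D:" -DestinationLogVolumeName "E:" `
      -ReplicationMode Synchronous -LogSizeInBytes 8GB

  # Check the state of the new partnership
  Get-SRPartnership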

Figure 4 - Adding new Storage Replica partnership between two replication Groups

Figure 5 - Settings required for the configuration of the Partnership

Windows Admin Center reports, at the end of the configuration, the details of the partnership.

Figure 6 - Details about the replication partnership

Furthermore, you can manage the replication status (suspend / resume), switch the direction of synchronization and modify the configuration (add or remove replica volumes and change the partnership settings).

Figure 7 - Switch the replication direction

Figure 8 - Changing the partnership settings

Conclusions

Windows Server 2019 will introduce significant changes to the Storage Replica service that, in addition to evolving it in terms of performance and effectiveness, will make it even more accessible. All of this is complemented by the possibilities offered by Windows Admin Center to manage Storage Replica easily, quickly and completely. Microsoft is making significant investments in storage and the results are obvious and tangible. Those wishing to test the latest Windows Server 2019 features can join the Windows Insider program.

 

 

Windows Server 2016: Introduction to Hyper-V VHD Set

In Windows Server 2016 an interesting new Hyper-V feature was introduced, called VHD Set. This is a new way of creating virtual disks that need to be shared among multiple virtual machines, useful for implementing guest clusters. In this article you will learn about the characteristics of VHD Sets, how best to implement them and how to address migration scenarios effectively.

Features

The ability to share virtual disks across multiple virtual machines is required for guest cluster configurations that need shared storage, and it avoids having to configure access to the storage through, for example, virtual HBAs or the iSCSI protocol.

Figure 1 – VHD Set

In Hyper-V this capability was introduced in Windows Server 2012 R2 with a technology called Shared VHDX, which has the following important limitations that often prevent its use in production environments:

  • Backups of Shared VHDX disks must be taken with specific in-guest agents; host-based backup is not supported.
  • Hyper-V Replica scenarios are not supported.
  • Online resizing of Shared VHDX disks is not supported.

With Hyper-V in Windows Server 2016 this feature was overhauled with the introduction of VHD Set in place of Shared VHDX, which removes the limitations listed above, making it a mature and reliable technology even for production environments. Virtual machines configured to access a VHD Set can be protected by host-based backup, without having to install agents on the guest machines; in this case we recommend checking whether the backup solution supports this configuration. Disks in the VHD Set format also support online resizing, without having to stop the guest cluster configured to access them. Hyper-V Replica also supports disks in the VHD Set format, allowing you to implement disaster recovery scenarios for guest cluster configurations.

At the moment the only limitations in using VHD Sets are the lack of support for checkpoints of the virtual machines that access them and the inability to perform a live migration of the storage of virtual machines with VHD Set disks. Microsoft's goal is in any case to bring virtual machines configured with VHD Sets on par with all other functionality in the future.

Requirements for using VHD Set

The VHD Set format is supported only for guest operating systems running Windows Server 2016. To configure a guest cluster in which the virtual machines access shared virtual disks, you must fall into one of the following scenarios:

  • A Hyper-V failover cluster with all the VM files, including the disks in VHD Set format, residing on a Cluster Shared Volume (CSV).
  • A Hyper-V failover cluster that uses, as the storage location for the VHD Set, an SMB 3.0 share presented by a Scale-Out File Server (SOFS).

Figure 2 – Supported scenarios for using shared virtual disks

How To Create VHD Set

Virtual disks in the VHD Set format can be created either through the graphical user interface (GUI) or using PowerShell. To create them via the GUI, simply open Hyper-V Manager and, from the Actions pane, select New > Hard Disk. Among the possible formats there will also be VHD Set, as shown in the following figure:

Figure 3 – Selecting the virtual disk format in the creation wizard

Continuing with the wizard, you can specify whether the disk should be Fixed or Dynamic, as well as its name, location and size if you choose to create a new blank disk. The same thing can also be done with the PowerShell cmdlet New-VHD, specifying the new .vhds extension for the virtual disk, as shown in the following example:

Figure 4 – Example of creating a disk in the VHD Set format using PowerShell
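
Since the figure is an image, the following is an equivalent sketch of the command; the path and size are illustrative:

  # Create a 100 GB dynamic shared disk in the VHD Set format simply by using the .vhds extension
  New-VHD -Path "C:\ClusterStorage\Volume1\GuestCluster\Shared01.vhds" -SizeBytes 100GB -Dynamic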

Creating a disk in VHD Set format creates the following files in the specified location:

Figure 5 – Files generated by creating a disk in the VHD Set format

The file with the .avhdx extension contains the data and can be fixed or dynamic depending on the choice made at creation time, while the .vhds file contains the metadata required to coordinate access by the different guest cluster nodes.

Virtual machine configuration with VHD Set

Disks in the VHD Set format are added to the virtual machines by modifying their properties and connecting them appropriately to the SCSI controller:

Figure 6 – Addition of the Shared Drive in the properties of the VM

Next you must select the location of the file:

Figure 7 – Configuring the location of the shared drive

The same thing must be done for all the virtual machines that will make up the guest cluster. After configuring the shared storage, which adds the disks in VHD Set format to the virtual machines, you can continue configuring the guest cluster environment according to the standard cluster creation procedure described in Microsoft's official documentation.
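
The same attachment can also be scripted; a minimal sketch in which the VM names and the path are hypothetical:

  # Attach the same VHD Set to both guest cluster nodes via the SCSI controller
  $vhds = "C:\ClusterStorage\Volume1\GuestCluster\Shared01.vhds"
  Add-VMHardDiskDrive -VMName "GUEST-NODE1" -ControllerType SCSI -Path $vhds
  Add-VMHardDiskDrive -VMName "GUEST-NODE2" -ControllerType SCSI -Path $vhds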

Converting Shared VHDX in VHD Set

In scenarios where a Hyper-V infrastructure is upgraded from Windows Server 2012 R2 to Windows Server 2016, you may have to deal with migrating Shared VHDX disks to VHD Sets in order to take advantage of all the benefits of the new virtual disk sharing technology. Moving to Windows Server 2016 there is no automatic conversion of Shared VHDX to VHD Set, and you are not prevented from continuing to use shared disks in the Shared VHDX format on Windows Server 2016. To migrate a Shared VHDX to the VHD Set format you need to follow these manual steps (a PowerShell sketch follows the list):

  • Shut down all the virtual machines connected to the Shared VHDX you intend to migrate.
  • Disconnect the Shared VHDX from all the VMs using the PowerShell cmdlet Remove-VMHardDiskDrive or using Hyper-V Manager.
  • Convert the Shared VHDX to the VHD Set format with the PowerShell cmdlet Convert-VHD.
  • Connect the newly converted VHD Set disk to all the VMs using the PowerShell cmdlet Add-VMHardDiskDrive or using Hyper-V Manager.
  • Turn on the virtual machines connected to the VHD Set.
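
An illustrative sketch of the steps above for a single shared disk; the VM names and paths are hypothetical:

  # 1. Shut down the guest cluster nodes
  Stop-VM -Name "GUEST-NODE1","GUEST-NODE2"

  # 2. Detach the Shared VHDX from both nodes
  Get-VMHardDiskDrive -VMName "GUEST-NODE1" | Where-Object Path -eq "C:\ClusterStorage\Volume1\Shared01.vhdx" | Remove-VMHardDiskDrive
  Get-VMHardDiskDrive -VMName "GUEST-NODE2" | Where-Object Path -eq "C:\ClusterStorage\Volume1\Shared01.vhdx" | Remove-VMHardDiskDrive

  # 3. Convert the Shared VHDX to the VHD Set format
  Convert-VHD -Path "C:\ClusterStorage\Volume1\Shared01.vhdx" -DestinationPath "C:\ClusterStorage\Volume1\Shared01.vhds"

  # 4. Reattach the converted disk to both nodes and restart them
  Add-VMHardDiskDrive -VMName "GUEST-NODE1" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared01.vhds"
  Add-VMHardDiskDrive -VMName "GUEST-NODE2" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared01.vhds"
  Start-VM -Name "GUEST-NODE1","GUEST-NODE2"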

When using disks in the VHD Set format, the following PowerShell cmdlets can be useful:

  • Get-VHDSet: displays various information about a disk in the VHD Set format, including the list of any checkpoints.
  • Optimize-VHDSet: optimizes the space allocation used by a disk in the VHD Set format.
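
A brief usage sketch of both cmdlets (the path is illustrative):

  Get-VHDSet -Path "C:\ClusterStorage\Volume1\GuestCluster\Shared01.vhds"
  Optimize-VHDSet -Path "C:\ClusterStorage\Volume1\GuestCluster\Shared01.vhds"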

Conclusions

In Windows Server 2016 the introduction of VHD Sets in Hyper-V allows you to easily implement guest cluster architectures without using storage sharing technologies that require heavy and complex configurations. Most of the restrictions on virtual disk sharing present in the previous version of Hyper-V have also been removed, making VHD Set a mature and reliable technology that can also be used in production environments.