
Azure Stack HCI: introduction to the solution

The use of hyper-converged infrastructure has increased sharply in recent years, and estimates from authoritative sources indicate that over the coming 12-18 months investments in this kind of solution will be among the most significant for datacenter modernization, involving roughly 54% of organizations. With the arrival of Windows Server 2019, Microsoft introduced Azure Stack HCI, a solution that can run virtual machines and connect easily to Azure on top of a hyper-converged infrastructure (HCI). This article lists the main features of the solution and its potential.

The emerging trend is the transition from a traditional "three tier" infrastructure, composed of network switches, appliances, physical systems with onboard hypervisors, storage fabric and SANs, toward a hyper-converged infrastructure (HCI), where several hardware components are removed and replaced by software able to combine the compute, storage and network layers in a single solution.

Figure 1 – "Three Tier" Infrastructure vs Hyper-Converged Infrastructure (HCI)

All this is made possible by the new operating system Windows Server 2019, which lets you use Hyper-V, a solid and reliable hypervisor, along with Software-Defined Storage and Software-Defined Networking solutions. To this is added Windows Admin Center, which allows you to fully manage the hyper-converged environment through a graphical interface. The whole stack runs on hardware specifically validated by various vendors.

Figure 2 – Azure Stack HCI Solution overview

Azure Stack HCI is positioned alongside Azure and Azure Stack, but with specific and distinct purposes.

Figure 3 – Azure Family

Azure Stack HCI is an evolution of the Windows Server Software-Defined (WSSD) solution available in the past with Windows Server 2016. It was brought into the Azure family because it shares the same software-defined technologies used by Azure Stack.

Azure Stack HCI allows you to run virtualized applications on-premises, on hardware that has been specifically tested and validated. In order to be certified, the hardware is subjected to rigorous validation testing that guarantees the reliability and stability of the solution. To consult the Azure Stack HCI solutions offered by the various hardware vendors, you can access this page.

Figure 4 – Azure Stack HCI solutions hardware partners

Proper hardware sizing is critical to achieving the desired results in terms of performance and stability. Therefore, you should always use specifically validated hardware solutions and avoid assembling hardware components at will. This condition is also required to obtain a fully supported Azure Stack HCI solution.

By using and supporting the latest innovations in hardware devices, Azure Stack HCI can achieve very high performance, to the point of setting a notable IOPS record for hyper-converged platforms (13,798,674 IOPS), doubling the maximum performance reached with Windows Server 2016.

Figure 5 - Hardware Innovations supported by Azure Stack HCI

The hyper-converged solution based on Windows Server 2016 had a significant drawback: the configuration and management of the environment had to be performed predominantly from the command line.

Thanks to the introduction of Windows Admin Center, you can manage and monitor the hyper-converged environment entirely through a web interface. Furthermore, many hardware vendors provide Windows Admin Center extensions that enhance the management capabilities.

The following video shows the management of a hyper-converged environment from Windows Admin Center:

In the software-defined storage area, Storage Spaces Direct technology provides a rich set of features, making it a complete, reliable and secure solution.

Figure 6 – Features in software-defined storage scope

Windows Server 2019 introduces important improvements in data deduplication and compression that increase the amount of usable storage space.

Figure 7 – Possible disk space savings using deduplication and compression

This configuration can be achieved very easily directly from Windows Admin Center.

Figure 8 – Enabling deduplication and compression from Windows Admin Center
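For those who prefer scripting, the same result can be obtained with the Data Deduplication PowerShell cmdlets. The following is a minimal sketch; the volume path is illustrative and it assumes the FS-Data-Deduplication feature is installed on the nodes:

# Enable deduplication and compression on a Cluster Shared Volume, optimized for Hyper-V workloads
Enable-DedupVolume -Volume "C:\ClusterStorage\Volume01" -UsageType HyperV

# Check the space savings achieved on the volume
Get-DedupStatus -Volume "C:\ClusterStorage\Volume01"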

Azure Stack HCI can be used for smaller environments with two nodes and can scale up to a maximum of 16 nodes.

Figure 9 – Scalability of the solution

For clusters composed of exactly two Windows Server 2019 nodes, you can use nested resiliency, a new Storage Spaces Direct feature introduced in Windows Server 2019 that allows the cluster to withstand multiple hardware faults at the same time without losing access to storage.

Figure 10 – Supported hardware faults

Using this feature you get a lower usable capacity than a classic two-way mirror, but better reliability, which is essential for a hyper-converged infrastructure, overcoming the limit of previous Windows Server versions in cluster environments with only two nodes. Nested resiliency introduces two new resiliency options, implemented in software and without the need for specific hardware (a configuration sketch follows Figure 11):

  • Nested two-way mirror: a two-way mirror is used locally on each server, and additional resiliency is provided by a two-way mirror between the two servers. It is effectively a four-way mirror, with two copies of the data on each server.
  • Nested mirror-accelerated parity: combines the nested two-way mirror, described above, with nested parity.

Figure 11 – Nested two-way mirror + Nested mirror-accelerated parity
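Nested resiliency volumes are created with the Storage Spaces Direct PowerShell cmdlets, along the lines of the Microsoft documentation. The following is a minimal sketch for a nested two-way mirror volume; the pool name, tier name, media type and size are illustrative:

# Define a storage tier that keeps four copies of the data (two per server)
New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "NestedMirror" -ResiliencySettingName Mirror -MediaType SSD -NumberOfDataCopies 4

# Create a volume on the nested two-way mirror tier
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -StorageTierFriendlyNames "NestedMirror" -StorageTierSizes 500GB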

Azure Stack HCI connects on-premises resources to the Azure public cloud to extend its feature set. This is a totally different approach from Azure Stack, which lets you adopt Azure services on-premises, getting an experience fully consistent with the public cloud, but with resources located in your datacenter.

Figure 12 – Hybrid approach: Azure Stack vs Azure Stack HCI

The ability to connect Azure Stack HCI with Azure services to obtain a hybrid hyper-converged solution is an important added value that strongly differentiates it from competitors. In this case too, the integration can be performed directly from Windows Admin Center to take advantage of the following Azure services:

  • Azure Site Recovery to implement disaster recovery scenarios.
  • Azure Monitor to monitor, in a centralized way, what happens at the application level, on the network and in the hyper-converged infrastructure, with advanced analysis using artificial intelligence.
  • Cloud Witness to use an Azure storage account as the cluster quorum witness.
  • Azure Backup for offsite protection of your infrastructure.
  • Azure Update Management to assess missing updates and deploy them, for both Windows and Linux systems, regardless of their location, Azure or on-premises.
  • Azure Network Adapter to easily connect on-premises resources with VMs in Azure via a point-to-site VPN.
  • Azure Security Center for monitoring and detecting security threats in virtual machines.

Figure 13 – Azure hybrid integration services from Windows Admin Center

Conclusions

Microsoft has made significant investments to develop and improve its proposition for hyper-converged scenarios and to make it more reliable and efficient. Azure Stack HCI is now a mature solution that overcomes the limits of the previous Windows Server Software-Defined (WSSD) solution and incorporates everything you need to create a hyper-converged environment into a single product and a single license: Windows Server 2019. The ability to connect Azure Stack HCI to various Azure services also makes it an even more complete and functional solution.

Windows Server 2019: introduction to what's new in the cluster environment

October is the month of the official release of the final version of Windows Server 2019. Microsoft's new server operating system introduces, in different areas, important new features that make hyper-converged infrastructures (HCI) more reliable and flexible. To achieve this, the cluster solution in Windows Server 2019 introduces a number of changes that are documented in this article.

Cluster Sets

Cluster Sets is a new scale-out technology for cluster environments introduced in Windows Server 2019. With this feature, you can group multiple Failover Clusters into a single entity to achieve greater fluidity of virtual machines across different clusters. This is especially useful for load balancing and for maintenance, such as replacing an entire cluster, without impacting the execution of virtual machines. In terms of management, everything can be governed using a single namespace. Cluster Sets do not alter the normal operating principles of a traditional cluster environment (Preferred Owner, Node Isolation, Load Balancing, etc.), which remain completely unchanged, while adding benefits such as Azure-like Fault Domains and Availability Sets across different clusters.

Figure 1 – Cluster Sets overview
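A cluster set is created and populated with the dedicated PowerShell cmdlets available in Windows Server 2019. The following is a minimal sketch; the management cluster, namespace and member cluster names are illustrative:

# Create the cluster set master with its unified namespace (Scale-Out File Server name)
New-ClusterSet -Name "CSMASTER" -NamespaceRoot "SOFS-CLUSTERSET" -CimSession "MGMT-CLUS"

# Add an existing failover cluster as a member of the cluster set
Add-ClusterSetMember -ClusterName "CLUSTER1" -CimSession "CSMASTER" -InfraSOFSName "SOFS-CLUSTER1"

# List the members of the cluster set
Get-ClusterSetMember -CimSession "CSMASTER"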

File share witness

In a cluster environment you can configure the "File Share Witness" (FSW) option as witness, for which the following innovations have been introduced.

The use of Distributed File System (DFS) shares is now blocked. Using a DFS share as a File Share Witness (FSW) has never been a supported configuration, as it introduces potential instability in the cluster. Windows Server 2019 introduces logic capable of detecting whether a share uses DFS; if so, Failover Cluster Manager blocks the creation of the witness and displays an error message stating that it is an unsupported configuration.

Figure 2 – Error message trying to configure witness on DFS share

Before Windows Server 2019, one of the requirements for an FSW configuration was that the Windows Server system hosting the share had to be joined to a domain and be part of the same Active Directory forest. This requirement was due to the fact that the Failover Cluster used Kerberos authentication with the Cluster Name Object (CNO) to authenticate and connect to the share. In Windows Server 2019 you can create a File Share Witness (FSW) without using the CNO; the cluster simply uses a local account to connect to the FSW. Kerberos authentication, the Cluster Name Object and an Active Directory environment are no longer required to use a File Share Witness. This extends the possible usage scenarios for the FSW to include, for example, NAS appliances, Windows systems not joined to a domain, and so on.
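The configuration can be performed with Set-ClusterQuorum, supplying the credentials of a local account defined on the device hosting the share (the -Credential parameter is available starting with Windows Server 2019). A minimal sketch follows; the share path and account are illustrative:

# Configure a file share witness on a NAS appliance using a local account (no CNO, no Kerberos)
$cred = Get-Credential -Message "Local account on the device hosting the share"
Set-ClusterQuorum -FileShareWitness "\\NAS01\ClusterWitness" -Credential $cred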

 

Moving a cluster to a different domain

Changing the domain membership of a Failover Cluster has always been an operation that required destroying and recreating the environment, with a significant impact in terms of time and operations. Windows Server 2019 provides a specific procedure to move the cluster nodes to a new Active Directory domain, with the introduction of two new PowerShell commands:

  • New-ClusterNameAccount: creates a Cluster Name Account in Active Directory
  • Remove-ClusterNameAccount: removes a Cluster Name Account from Active Directory

The procedure requires that the nodes are first moved to a workgroup and then joined to the new Active Directory domain. During the migration, the workloads hosted by the cluster must be stopped.

Figure 3 – Domain migration steps for a cluster
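At a high level, the migration can be driven from PowerShell along these lines. This is only a sketch, with illustrative cluster, node and domain names, and it omits stopping the hosted workloads:

# 1. Remove the cluster name account (and virtual computer objects) from the old domain
Remove-ClusterNameAccount -Cluster "CLUSTER1" -DeleteComputerObjects

# 2. On each node: leave the old domain for a workgroup, reboot, then join the new domain
Remove-Computer -UnjoinDomainCredential OLDDOMAIN\Admin -WorkgroupName "TEMPWG" -Restart
Add-Computer -DomainName "newdomain.local" -Credential NEWDOMAIN\Admin -Restart

# 3. Recreate the cluster name account in the new domain
New-ClusterNameAccount -Name "CLUSTER1" -Domain "newdomain.local"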

 

Removing the dependency on NTLM authentication

Windows Server Failover Clusters no longer use NTLM authentication in any way, relying exclusively on Kerberos authentication and certificate-based authentication. In Windows Server 2019 this happens natively, without the need for any special configuration, allowing you to reap the resulting security benefits.

 

Conclusions

Important investments have been made in Windows Server 2019 to deliver an agile operating system, suitable for hybrid scenarios, more secure, and capable of deploying hyper-converged infrastructures with outstanding scalability and performance. Innovations like those shown for the cluster environment help companies evolve, offering fundamental elements to support the process of innovation and modernization of the datacenter.

Windows Server 2016: Configuring the Failover Cluster Witness in the Cloud

The article Windows Server 2016: What's New in Failover Clustering covered in depth the main innovations introduced with Windows Server 2016 in failover clustering. In this article we will detail the configuration of the cluster witness in the Microsoft Azure cloud, analyzing the possible scenarios and the benefits of this new feature.

 

Possible scenarios supported by Cloud Witness

Among the supported scenarios, those that lend themselves best to this type of configuration are:

  • Multi-site stretched clusters.
  • Failover Clusters that do not require shared storage (SQL Always On, Exchange DAGs, etc.).
  • Failover Clusters composed of nodes hosted on Microsoft Azure or on other public or private clouds.
  • Scale-Out File Server clusters.
  • Clusters deployed in small branch offices.

 

Cloud Witness configuration

We begin by specifying that a requirement for configuring the cluster to use the Cloud Witness is that all nodes that make up the cluster have internet access to Azure. The Cloud Witness in fact uses the HTTPS protocol (port 443) to establish a connection with the Azure Blob storage service of the Storage Account.

 

Configuring the Cloud Witness requires an Azure subscription in which to configure a Storage Account that will be used as Cloud Witness and to which the blob files used for cluster arbitration are written.

 

From the Azure portal you must create a storage account of type General Purpose. For this purpose it is appropriate to create it with a Standard performance level, since the high performance provided by the use of SSDs is not necessary. After selecting the most suitable location and replication policies, you can proceed with the creation process.

 

Figure 1 – Storage Account creation

 

After you create the storage account you must retrieve one of its access keys, used for authentication, which will be required during the configuration steps.

 

Figure 2 – Storage Account access keys
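As an alternative to the portal, the storage account can also be created and its keys retrieved with the Az PowerShell module. This is a sketch under the assumption that the Az module is installed and you are already signed in; resource group, account name and location are illustrative:

# Create a general purpose storage account with Standard (LRS) performance
New-AzStorageAccount -ResourceGroupName "rg-witness" -Name "cwwitnesssa01" -Location "westeurope" -SkuName Standard_LRS -Kind StorageV2

# Retrieve the access keys required by the Cloud Witness configuration
Get-AzStorageAccountKey -ResourceGroupName "rg-witness" -Name "cwwitnesssa01"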

 

At this point you can change the cluster quorum settings from Failover Cluster Manager by following the steps below:

 

Figure 3 – Quorum settings configuration from Failover Cluster Manager

 

Figure 4 – Witness Quorum selection

 

Figure 5 – Selection of Cloud Witness

 

Figure 6 – Storage Account name and access key
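The same configuration can also be applied with a single PowerShell command; the storage account name and key below are illustrative:

# Configure the Cloud Witness using the storage account name and one of its access keys
Set-ClusterQuorum -CloudWitness -AccountName "cwwitnesssa01" -AccessKey "<storage account access key>"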

 

After the configuration completes successfully, the Cloud Witness will also appear among the cluster core resources:

 

Figure 7 – Cloud Witness resource

 

In the Azure Storage Account a container named msft-cloud-witness is created, containing a single blob file whose name is the unique ID of the cluster. This means that you can use the same Microsoft Azure Storage Account to configure the Cloud Witness for several different clusters; there will be one blob file for each cluster.

 

Figure 8 – Container inside of the Storage Account and its contents

 

Advantages of using Cloud Witness

The use of the Cloud Witness provides the following benefits:

  • Eliminates the need for an additional separate datacenter for certain cluster configurations, by using Microsoft Azure.
  • Eliminates the administrative effort required to maintain an additional virtual machine hosting the cluster witness role.
  • Given the small amount of data written to the Storage Account, the service charge is negligible.
  • The same Microsoft Azure Storage Account can be used as a witness for different clusters.

 

Conclusions

With Windows Server 2016 the failover cluster proves ready for integration with the cloud. The introduction of the Cloud Witness makes cluster systems easier to implement, substantially reducing overall implementation costs and management effort while increasing the flexibility of the cluster architecture.

Windows Server 2016: What's New in Failover Clustering

Very frequently, in order to ensure high availability and business continuity for critical applications and services, you need to implement a Microsoft Failover Cluster. In this article we'll delve into the main innovations introduced with Windows Server 2016 in failover clustering and analyse the advantages of adopting the latest technology.

Cluster Operating System Rolling Upgrade

Windows Server 2016 introduces an important feature that allows you to upgrade the nodes of a Hyper-V or Scale-Out File Server cluster from Windows Server 2012 R2 to Windows Server 2016 without any disruption and without stopping the hosted workloads.

The upgrade process involves these steps:

  • Pause the node you want to upgrade and move all the virtual machines or other workloads to the other nodes in the cluster
  • Remove the node from the cluster and perform a clean installation of Windows Server 2016
  • Add the Windows Server 2016 node back to the existing cluster. At this point the cluster runs in mixed mode, with both Windows Server 2012 R2 and Windows Server 2016 nodes. In this regard, it is worth noting that the cluster will continue to provide services at the Windows Server 2012 R2 level and the features introduced in Windows Server 2016 will not yet be available. At this stage you can still add and remove both Windows Server 2012 R2 and Windows Server 2016 nodes
  • Upgrade all the remaining cluster nodes in the same way as described above
  • Only when all cluster nodes have been upgraded to Windows Server 2016 can you raise the cluster functional level to Windows Server 2016. This operation is not reversible and is performed with the PowerShell cmdlet Update-ClusterFunctionalLevel (a sketch of the key commands follows this list). After you run this command you can take advantage of all the benefits introduced in Windows Server 2016 described below
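The main steps of the rolling upgrade can be driven from PowerShell. The following is a minimal sketch with illustrative node names:

# Pause the node and drain its workloads to the other cluster nodes
Suspend-ClusterNode -Name "NODE1" -Drain

# Evict the node, then perform a clean installation of Windows Server 2016 on it
Remove-ClusterNode -Name "NODE1"

# After the clean installation, add the node back to the cluster
Add-ClusterNode -Name "NODE1"

# Once all nodes run Windows Server 2016, check and raise the cluster functional level (irreversible)
Get-Cluster | Select-Object Name, ClusterFunctionalLevel
Update-ClusterFunctionalLevel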

Cloud Witness

Windows Server 2016 introduces the ability to configure the cluster witness directly in the Microsoft Azure cloud. The Cloud Witness, just like the other types of witness, provides a vote and participates in the quorum calculation.


Figure 1 – Cloud Witness in Failover Cluster Manager

Configuring the Cloud Witness involves two simple steps:

  • Creating, in an Azure subscription, a Storage Account that will be used by the Cloud Witness
  • Configuring the Cloud Witness in one of the following ways:

  • PowerShell
  • Failover Cluster Manager


Figure 2 – Cloud Witness Configuration Step 1


Figure 3 – Cloud Witness Configuration Step 2

 


Figure 4 – Cloud Witness Configuration Step 3

The use of the Cloud Witness provides the following benefits:

  • Leverages Microsoft Azure, eliminating the need for an additional separate datacenter for certain cluster configurations
  • Works directly with Microsoft Azure Blob Storage, eliminating the administrative effort required to maintain a virtual machine in a public cloud
  • The same Microsoft Azure Storage Account can be used for multiple clusters
  • Given the small amount of data written to the Storage Account, the service charge is negligible

Site-Aware Failover Clusters

Windows Server 2016 introduces the concept of site-aware failover clusters, with the ability to group cluster nodes based on their geographical location (site) in stretched configurations. In a site-aware cluster, placement policies, the heartbeat between nodes, failover operations and quorum calculation are designed and improved for this particular cluster configuration. For more details I invite you to consult the article Site-aware Failover Clusters in Windows Server 2016.
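Sites are defined as cluster fault domains and nodes are assigned to them with PowerShell. This is a minimal sketch with illustrative site and node names:

# Define two sites as fault domains
New-ClusterFaultDomain -Name "Milan" -Type Site -Description "Primary site"
New-ClusterFaultDomain -Name "Rome" -Type Site -Description "DR site"

# Assign the cluster nodes to their respective sites
Set-ClusterFaultDomain -Name "NODE1" -Parent "Milan"
Set-ClusterFaultDomain -Name "NODE2" -Parent "Rome"

# Optionally define the preferred site for placement and quorum tie-breaking
(Get-Cluster).PreferredSite = "Milan"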

Multi-domain and workgroup Cluster

In Windows Server 2012 R2 and in previous versions of Windows, all nodes in a cluster had to belong to the same Active Directory domain. Windows Server 2016 removes these barriers and provides the ability to create a Failover Cluster without Active Directory dependencies.

Windows Server 2016 supports the following configurations:

  • Single-domain Cluster: cluster where all nodes are joined to the same domain
  • Multi-domain Cluster: cluster composed of nodes joined to different Active Directory domains
  • Workgroup Cluster: cluster whose nodes are in a workgroup (not joined to any domain); a creation sketch follows this list
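A workgroup cluster is created as an Active Directory-detached cluster with a DNS administrative access point. The following is a hedged sketch, assuming a common local administrative account and a primary DNS suffix are already configured on every node; names are illustrative:

# On each node, allow remote administration with the local account (remote UAC token filtering)
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" -Name LocalAccountTokenFilterPolicy -Value 1 -PropertyType DWord -Force

# Create the cluster with a DNS-only administrative access point (no Active Directory objects)
New-Cluster -Name "WGCLUSTER" -Node "NODE1", "NODE2" -AdministrativeAccessPoint Dns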

In this regard, it is worth specifying the supported workloads and their limitations for Multi-domain and Workgroup clusters:

Cluster Workload | Support | Details / Rationale
SQL Server | Supported | SQL Server authentication is recommended.
File Server | Supported, but not recommended | Kerberos authentication, which is not available in these environments, is the recommended authentication protocol for Server Message Block (SMB) traffic.
Hyper-V | Supported, but not recommended | Live Migration is not supported; only Quick Migration is available.
Message Queuing (MSMQ) | Not supported | Message Queuing stores properties in AD DS.

Diagnostics in Failover Clustering

Windows Server 2016 also introduces several innovations that facilitate troubleshooting when problems arise in the cluster environment.

SMB Multichannel and Multi-NIC Cluster Network

Windows Server 2016 includes several new networking features for the cluster environment that help simplify configuration and achieve better performance.

The main benefits introduced in Windows Server 2016 can be summarised in the following points:

  • SMB Multichannel is enabled by default
  • The failover cluster automatically recognizes NICs connected to the same subnet or switch
  • A single IP Address resource is configured for each Cluster Access Point (CAP) Network Name (NN)
  • Networks with only link-local IPv6 addresses (fe80) are recognized as private (cluster-only) networks
  • Cluster validation no longer reports warning messages when multiple NICs are connected to the same subnet

For more information I refer you to the Microsoft documentation: Simplified SMB Multichannel and Multi-NIC Cluster Networks.
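These behaviors can be quickly verified with PowerShell; a minimal sketch follows:

# Check that SMB Multichannel is enabled on the nodes
Get-SmbServerConfiguration | Select-Object EnableMultiChannel

# Review how the cluster has classified its networks and network interfaces
Get-ClusterNetwork
Get-ClusterNetworkInterface

# Inspect the active SMB Multichannel connections
Get-SmbMultichannelConnection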

Conclusions

Windows Server 2016 introduces major changes in Failover Clustering, making the solution more flexible and opening up new configuration scenarios. Furthermore, the upgrade process allows us to easily update existing clusters to take advantage of all the benefits introduced by Windows Server 2016 for the different workloads.