
How the end of support for Windows Server 2012 can be a great opportunity for CTOs

The end of support for the Windows Server 2012 and 2012 R2 operating systems is fast approaching and, for Chief Technology Officers (CTOs), it must be carefully evaluated because it has significant impacts on the IT infrastructure. At the same time, the end of support can be an important opportunity to modernize the IT environment in order to ensure greater security, new features and improved business continuity. This article outlines the strategies you can adopt to deal with this situation, avoiding exposing your IT infrastructure to the security issues it can cause.

When does Windows Server 2012/2012 R2 support end and what does it mean?

10 October 2023 marks the end of extended support for Windows Server 2012 and Windows Server 2012 R2. Without Microsoft support, Windows Server 2012 and Windows Server 2012 R2 will no longer receive security patches, unless you take one of the actions described below. This means that any vulnerabilities discovered in the operating system will no longer be fixed, which could leave systems vulnerable to cyber attacks. Furthermore, this condition would result in non-compliance with specific regulations, such as the General Data Protection Regulation (GDPR).
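Just to put the timeline in perspective, the following minimal Python sketch (illustrative only) collects the lifecycle dates cited in this article and computes how much time remains before, or has passed since, each deadline:

```python
from datetime import date

# Key lifecycle dates mentioned in this article.
LIFECYCLE = {
    "Windows Server 2012/2012 R2 end of extended support": date(2023, 10, 10),
    "Windows Server 2012/2012 R2 end of the ESU program": date(2026, 10, 13),
}

for milestone, deadline in LIFECYCLE.items():
    delta = (deadline - date.today()).days
    status = f"{delta} days remaining" if delta > 0 else f"passed {-delta} days ago"
    print(f"{milestone}: {deadline:%d %B %Y} ({status})")
```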

Furthermore, users will no longer receive bug fixes and other updates needed to keep the operating system in line with the latest technology, which could lead to compatibility problems with newer software and introduce potential performance issues.

On top of all that, Microsoft will no longer provide online technical support and technical content updates for these operating systems.

All these aspects have a significant impact on the IT organizations that still use these operating systems.

Possible strategies and opportunities related to the end of support

This situation is certainly not pleasant for those who find themselves facing it now, given the limited time, but it can also be seen as an important opportunity for renewal and innovation of your infrastructure. The following paragraphs describe the possible strategies that can be implemented.

Upgrading on-premises systems

This strategy involves moving to a new version of Windows Server in an on-premises environment. The advice in this case is to move to at least Windows Server 2019, though it is preferable to adopt the latest version, Windows Server 2022, which provides the latest innovations in security, application performance and modernization.

Furthermore, where technically possible, it is preferable not to perform in-place upgrades of the operating system, but to manage the migration side-by-side.

This method usually requires the involvement of the application vendor, to ensure software compatibility with the new version of the operating system. Since the software is often dated, it frequently requires the adoption of updated versions, which may involve architectural adjustments and an in-depth testing phase for the new release. This upgrade process takes considerable time and effort, but the result is essential to achieving technological renewal.

Maintaining Windows Server 2012/2012 R2, but with security updates for another three years

To continue receiving security updates for Windows Server 2012/2012 R2 hosted in an on-premises environment, one option is to join the Extended Security Update (ESU) program. This paid program guarantees the delivery of security updates classified as "critical" and "important" for an additional three years, in this specific case until 13 October 2026.

The Extended Security Update (ESU) program is an option for customers who need to run certain legacy Microsoft products beyond the end of support and who are not in a position to adopt other strategies. The updates included in the ESU program do not include new features or non-security-related fixes.

Azure adoption

Migrating systems to Azure

Windows Server 2012 and Windows Server 2012 R2 systems migrated from on-premises to the Azure environment will continue to receive security updates classified as critical and important for another three years, without having to join the ESU program. This scenario is not only useful for keeping your systems compliant, but it also opens the way to hybrid architectures where you can take advantage of the cloud. In this regard, Microsoft offers a solution that provides a large set of tools to best deal with the most common migration scenarios: Azure Migrate, which structures the migration process in distinct phases (discovery, assessment, and migration).

Azure Arc can also be very useful for inventorying digital assets in heterogeneous and distributed environments.
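As an illustration of how such an inventory could be automated, the following sketch queries Azure Resource Graph for Arc-enabled machines. It assumes the azure-identity and azure-mgmt-resourcegraph Python packages, valid Azure credentials and a placeholder subscription ID, so treat it as a starting point rather than a complete solution:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

# Authenticate with the default credential chain (Azure CLI, managed identity, ...).
credential = DefaultAzureCredential()
client = ResourceGraphClient(credential)

# KQL query: list Azure Arc-enabled machines with their operating system.
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder: your subscription ID
    query=(
        "resources"
        " | where type =~ 'microsoft.hybridcompute/machines'"
        " | project name, location, os = tostring(properties.osName)"
    ),
)

for machine in client.resources(request).data:
    print(machine["name"], machine["location"], machine["os"])
```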

Adopting this strategy can be faster than upgrading systems and gives you more time to deal with software renewal. In this regard, the cloud offers excellent flexibility and agility for testing applications in parallel environments.

Before starting the migration path to Azure, it is also essential to structure the networking of the hybrid environment appropriately and to evaluate the interactions with the other infrastructure components, to verify whether the application can also work well in the cloud.

Migration to Azure can target IaaS virtual machines or, when a large number of systems must be migrated from a VMware environment, Azure VMware Solution can be an option to consider for facing a massive migration quickly while minimizing the interruption of the services provided.

Extending Azure in your datacenter with Azure Stack HCI

Azure Stack HCI is the Microsoft solution that allows you to create a hyper-converged infrastructure (HCI) for running workloads in an on-premises environment and that provides a strategic connection to various Azure services. Azure Stack HCI was specifically designed by Microsoft to help customers modernize their hybrid datacenter, offering a complete and familiar Azure experience in an on-premises environment. For more information on the Microsoft Azure Stack HCI solution, I invite you to read this article or to view this video.

Azure Stack HCI allows you to receive, free of charge and just like in Azure, important security patches for Microsoft legacy products that are past their end of support, through the Extended Security Update (ESU) program. For further information you can consult this Microsoft document. This strategy gives you more time to undertake an application modernization process, without neglecting security aspects.

Application modernization

Under certain circumstances, an application modernization process could be undertaken, perhaps focused on the public cloud, with the aim of increasing innovation, agility and operational efficiency. Microsoft Azure offers the flexibility to choose from a wide range of options to host your applications, covering the spectrum of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Container-as-a-Service (CaaS) and serverless. In a journey to move away from legacy operating systems, customers can use containers even for applications not specifically designed for microservices-based architectures. In these cases, it is possible to implement a migration strategy for existing applications that involves only minimal changes to the application code or to configurations: only the changes strictly necessary to optimize the application so that it can be hosted on PaaS and CaaS solutions. To get some ideas, I invite you to read this article.
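As a tiny hypothetical example of such a minimal change, a legacy application that reads its database connection string from a local file can be adapted to read it from an environment variable, which is the typical pattern expected by PaaS and CaaS platforms. The variable name and file path below are invented for illustration:

```python
import os

def get_connection_string() -> str:
    # Prefer the setting injected by the hosting platform (e.g. an App
    # Service application setting or a container environment variable)...
    value = os.environ.get("APP_DB_CONNECTION")
    if value:
        return value
    # ...and fall back to the legacy configuration file, so the same code
    # keeps working in the original on-premises deployment.
    with open("app.config.txt", encoding="utf-8") as config_file:
        return config_file.read().strip()
```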

Steps to a successful transition

For companies intending to undertake one of the strategies listed, there are some important steps that need to be taken to ensure a successful transition.

Regardless of the strategy you decide to adopt, the advice is to make a detailed assessment, so you can categorize each workload by type, criticality, complexity and risk. This way you can prioritize and proceed with a structured migration plan.
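To make the idea concrete, here is a minimal sketch of how such an assessment could be turned into a prioritized list; the scoring weights and scales are assumptions to adapt to your own criteria:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    workload_type: str  # e.g. "web", "database", "file services"
    criticality: int    # 1 (low) .. 5 (business critical)
    complexity: int     # 1 (simple) .. 5 (many dependencies)
    risk: int           # 1 (low) .. 5 (e.g. exposed, unpatched software)

def priority(workload: Workload) -> int:
    # Assumed weighting: address critical and risky workloads first,
    # penalizing complexity so that quick wins surface early.
    return 2 * workload.criticality + 2 * workload.risk - workload.complexity

inventory = [
    Workload("intranet", "web", criticality=2, complexity=1, risk=3),
    Workload("erp-db", "database", criticality=5, complexity=4, risk=5),
    Workload("file-srv", "file services", criticality=3, complexity=2, risk=4),
]

for workload in sorted(inventory, key=priority, reverse=True):
    print(f"{workload.name}: priority score {priority(workload)}")
```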

Furthermore, it is necessary to carefully evaluate the most suitable transition strategy considering how to minimize any disruption to company activities. This may include scheduling tests and creating adequate backup sets before migration.

Finally, once the migration is complete, it is important to activate a modern monitoring system to ensure that the application workloads are stable and working as expected.

Conclusions

The end of support for Windows Server 2012 and Windows Server 2012 R2 presents a challenge for many companies that still use these operating systems. However, it can also be seen as an opportunity to start an infrastructure or application modernization process. In this way you will have more modern resources, and you can take advantage of the opportunities they offer in terms of security, scalability and performance.

Maximize the performance of Azure Stack HCI: discover the best configurations for networking

Hyper-converged infrastructures (HCI) are increasingly popular as they simplify the management of the IT environment, reduce costs and scale easily when needed. Azure Stack HCI is the Microsoft solution that allows you to create a hyper-converged infrastructure for running workloads in an on-premises environment and that provides a strategic connection to various Azure services to modernize your IT infrastructure. Properly configuring Azure Stack HCI networking is critical to ensuring security, application reliability and performance. This article explores the fundamentals of configuring Azure Stack HCI networking, covering the available network options and best practices for network design and configuration.

There are different network models that you can take as a reference to design, deploy and configure Azure Stack HCI. The following paragraphs show the main aspects to consider in order to guide the possible implementation choices at the network level.

Number of nodes that make up the Azure Stack HCI cluster

A single Azure Stack HCI cluster can consist of a single node and can scale up to 16 nodes.

If the cluster consists of a single server, at the physical level it is recommended to provide the following network components, also shown in the image:

  • a single TOR switch (L2 or L3) for north-south traffic;
  • two to four teamed network ports, connected to the switch, to handle management and computational traffic;

Furthermore, optionally it is possible to provide the following components:

  • two RDMA NICs, useful if you plan to add a second server to the cluster to scale the setup;
  • a BMC card for remote management of the environment.

Figure 1 – Network architecture for an Azure Stack HCI cluster consisting of a single server

If your Azure Stack HCI cluster consists of two or more nodes, you need to evaluate the following parameters.

Need for Top-Of-Rack (TOR) switches and their level of redundancy

For Azure Stack HCI clusters consisting of two or more nodes in a production environment, the presence of two TOR switches is strongly recommended, so that disruptions to north-south traffic can be tolerated in case of failure or maintenance of a single physical switch.

If the Azure Stack HCI cluster is made up of two nodes, you can avoid providing switch connectivity for storage traffic.

Two-node configuration without TOR switch for storage communication

In an Azure Stack HCI cluster that consists of only two nodes, the storage RDMA NICs can be connected in full-mesh mode to reduce switch costs, perhaps reusing switches already owned.

In certain scenarios, such as branch offices or laboratories, the following network model, which provides for a single TOR switch, can be adopted. This pattern still offers cluster-wide fault tolerance and is suitable if interruptions in north-south connectivity can be tolerated when the single physical switch fails or requires maintenance.

Figure 2 – Network architecture for an Azure Stack HCI cluster consisting of two servers, without storage switches and with a single TOR switch

Although L3 SDN services are fully supported in this scheme, routing services such as BGP will need to be configured on the firewall device that sits on top of the TOR switch, if the switch does not support L3 services.

If you want greater fault tolerance for all network components, the following architecture, which includes two redundant TOR switches, can be adopted:

Figure 3 – Network architecture for an Azure Stack HCI cluster consisting of two servers, without storage switches and with redundant TOR switches

L3 SDN services are fully supported by this scheme. Routing services such as BGP can be configured directly on the TOR switches if they support L3 services. Features related to network security do not require additional configuration on the firewall device, since they are implemented at the virtual network adapter level.

At the physical level, it is recommended to provide the following network components for each server:

  • two to four teamed network ports, connected to the TOR switches, to handle management and computational traffic;
  • two RDMA NICs in a full-mesh configuration for east-west storage traffic. Each cluster node must have a redundant connection to the other cluster node;
  • optionally, a BMC card for remote management of the environment.

In both cases, the following connectivity is required:

Networks                 Management and computational            Storage                 BMC
Network speed            At least 1 Gbps, 10 Gbps recommended    At least 10 Gbps        Tbd
Type of interface        RJ45, SFP+ or SFP28                     SFP+ or SFP28           RJ45
Ports and aggregation    Two to four ports in teaming            Two standalone ports    One port
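
As a sanity check during planning, the minimum requirements in the table above can be encoded and compared against the intended hardware. The following is a small illustrative helper in Python, not an official validation tool:

```python
# Connectivity minimums transcribed from the table above.
REQUIREMENTS = {
    "management-compute": {"min_gbps": 1, "interfaces": {"RJ45", "SFP+", "SFP28"},
                           "ports": range(2, 5)},  # two to four teamed ports
    "storage":            {"min_gbps": 10, "interfaces": {"SFP+", "SFP28"},
                           "ports": range(2, 3)},  # two standalone ports
}

def validate(network: str, gbps: int, interface: str, ports: int) -> list[str]:
    requirement = REQUIREMENTS[network]
    issues = []
    if gbps < requirement["min_gbps"]:
        issues.append(f"{network}: {gbps} Gbps is below the {requirement['min_gbps']} Gbps minimum")
    if interface not in requirement["interfaces"]:
        issues.append(f"{network}: interface type {interface} is not expected")
    if ports not in requirement["ports"]:
        issues.append(f"{network}: {ports} ports is outside the expected range")
    return issues

# Example: a planned pair of 10 Gbps SFP+ storage NICs passes the check.
print(validate("storage", gbps=10, interface="SFP+", ports=2) or "OK")
```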

Configuration with two or more nodes using TOR switches also for storage communication

When you plan an Azure Stack HCI cluster composed of more than two nodes, or if you don't want to preclude the possibility of easily adding more nodes to the cluster, storage traffic must also flow through the TOR switches. In these scenarios, a configuration can be adopted where dedicated network cards are maintained for storage traffic (non-converged), as shown in the following picture:

Figure 4 – Network architecture for an Azure Stack HCI cluster consisting of two or more servers, redundant TOR switches also used for storage traffic and non-converged configuration

At the physical level, it is recommended to provide the following network components for each server:

  • two teamed NICs to handle management and computational traffic. Each NIC is connected to a different TOR switch;
  • two RDMA NICs in standalone configuration. Each NIC is connected to a different TOR switch. SMB Multichannel functionality ensures path aggregation and fault tolerance;
  • optionally, a BMC card for remote management of the environment.

The following connectivity is required:

Networks                 Management and computational            Storage                 BMC
Network speed            At least 1 Gbps, 10 Gbps recommended    At least 10 Gbps        Tbd
Type of interface        RJ45, SFP+ or SFP28                     SFP+ or SFP28           RJ45
Ports and aggregation    Two ports in teaming                    Two standalone ports    One port

Another possibility to consider is a "fully-converged" configuration of the network cards, as shown in the following image:

Figure 5 – Network architecture for an Azure Stack HCI cluster consisting of two or more servers, redundant TOR switches also used for storage traffic and fully-converged configuration

The latter solution is preferable when:

  • bandwidth requirements for north-south traffic do not require dedicated cards;
  • the number of available physical switch ports is small;
  • you want to keep the costs of the solution low.

At the physical level, it is recommended to provide the following network components for each server:

  • two teamed RDMA NICs to handle management, computational and storage traffic. Each NIC is connected to a different TOR switch. SMB Multichannel functionality ensures path aggregation and fault tolerance;
  • optionally, a BMC card for remote management of the environment.

The following connectivity is required:

Networks                 Management, computational and storage    BMC
Network speed            At least 10 Gbps                         Tbd
Type of interface        SFP+ or SFP28                            RJ45
Ports and aggregation    Two ports in teaming                     One port

L3 SDN services are fully supported by both of the above models. Routing services such as BGP can be configured directly on the TOR switches if they support L3 services. Features related to network security do not require additional configuration on the firewall device, since they are implemented at the virtual network adapter level.
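
To summarize the decision logic described so far, the following sketch (illustrative only, with thresholds taken from the patterns above) suggests a reference network model from the cluster size, the desired TOR redundancy and the growth plans:

```python
def suggest_network_pattern(nodes: int, redundant_tor: bool,
                            may_grow_beyond_two: bool) -> str:
    # Map the article's guidance onto a suggested reference pattern.
    if not 1 <= nodes <= 16:
        raise ValueError("An Azure Stack HCI cluster has 1 to 16 nodes")
    if nodes == 1:
        return "Single TOR switch with teamed management/compute ports (Figure 1)"
    if nodes == 2 and not may_grow_beyond_two:
        # Two-node clusters can connect the storage RDMA NICs full-mesh,
        # avoiding switch connectivity for storage traffic.
        if redundant_tor:
            return "Switchless storage with redundant TOR switches (Figure 3)"
        return ("Switchless storage with a single TOR switch (Figure 2), "
                "only if north-south outages are tolerable")
    # More than two nodes, or planned growth: storage traffic must also
    # pass through the TOR switches (Figures 4 and 5).
    return "Redundant TOR switches carrying storage traffic (non-converged or fully-converged)"

print(suggest_network_pattern(nodes=2, redundant_tor=True, may_grow_beyond_two=False))
```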

Type of traffic that must pass through the TOR switches

To choose the most suitable TOR switches it is necessary to evaluate the network traffic that will flow through these network devices, which can be divided into:

  • management traffic;
  • computational traffic (generated by the workloads hosted by the cluster), which can be divided into two categories:
    • standard traffic;
    • SDN traffic;
  • storage traffic.

Microsoft has recently changed its approach in this regard. In fact, TOR switches are no longer required to meet every network requirement regardless of the type of traffic for which the switch is used. Switches are now supported according to the type of traffic they carry, which allows you to choose from a greater number of quality network devices at a lower cost.

This document lists the industry standards required for specific network switch roles used in Azure Stack HCI implementations; these standards help ensure reliable communication between the nodes of Azure Stack HCI clusters. This section, instead, shows the switch models supported by the various vendors, based on the type of traffic expected.

Conclusions

Properly configuring Azure Stack HCI networking is critical to a smoothly running hyper-converged infrastructure, guaranteeing security, optimal performance and reliability. This article covered the basics of configuring Azure Stack HCI networking and analyzed the available network options. The advice is to always plan the networking aspects of Azure Stack HCI carefully, choosing the network option most appropriate for your business needs and following implementation best practices.