Category Archives: Cloud & Datacenter Management (2023-2024)

Microsoft Defender for Cloud: a summer of innovations to reshape corporate security

In an era where data security and efficient management of cloud resources have become essential priorities, Microsoft Defender for Cloud emerges as a strategic tool for modern businesses. This solution, integrated into the Azure environment, offers advanced protection, facilitating enterprise-wide security and compliance management. This article explores the main innovations introduced in Defender for Cloud during the summer of 2023, outlining how they can represent added value for companies.

The benefits of adopting Defender for Cloud

Adopting Defender for Cloud in a business context is not just a strategic choice, but a growing need. This solution allows you to centralize and simplify security management, offering a holistic view that facilitates continuous monitoring and rapid response to security threats. Furthermore, it helps optimize the security posture of hybrid and multi-cloud environments, while ensuring advanced protection and compliance with different regulatory requirements.

Summer news 2023

Ability to include Defender for Cloud in business cases made with Azure Migrate

For companies intending to move their resources to cloud platforms such as Azure, migration planning is key. With the integration of Defender for Cloud in Azure Migrate, it is now possible to guarantee advanced protection right from the initial migration phase. This integration ensures that security strategies are well integrated into the migration plan, providing a more secure and seamless transition to the cloud.

Defender for Cloud, increasingly agentless

Many Defender for Cloud features are now available without the need to install an agent:

  • Container protection in Defender CSPM: agentless discovery. The transition from agent-based to agentless discovery for protecting containers in Defender CSPM represents a notable qualitative leap towards more streamlined and effective security management. This new feature eliminates the need to install agents on each container, thus simplifying the discovery process and reducing resource usage.
  • Defender for Containers: agentless discovery for Kubernetes. Defender for Containers has launched agentless discovery for Kubernetes, representing a notable step forward in container security. This feature provides a detailed view and comprehensive inventory capability of Kubernetes environments, ensuring an unparalleled level of security and compliance.
  • Defender for Servers P2 & Defender CSPM: agentless secret scanning for virtual machines. The agentless secret-scanning functionality, included in Defender for Servers P2 and Defender CSPM, allows you to discover unmanaged and vulnerable secrets stored on virtual machines. This tool is essential to prevent lateral movement in the network and reduce the related risks.

Data Aware Security Posture

Adopting a data-aware security posture is essential, and Microsoft Defender for Cloud is now able to satisfy this need too. This feature allows companies to minimize data risks, providing tools that automatically identify sensitive information and assess related threats, improving the response to data breaches. In particular, sensitive data identification for PaaS databases is currently in preview. It allows users to catalog critical data and recognize the types of information within their databases, proving fundamental for the effective management and protection of sensitive data.

GCP support in Defender CSPM

The introduction of support for Google Cloud Platform (GCP) in Defender CSPM, currently in preview, marks a significant step towards more integrated and versatile security management, extending Defender CSPM capabilities to a wide range of services in Google's public cloud.

Malware scanning in Defender for Storage

Defender for Storage introduces malware scanning functionality, overcoming traditional malware protection challenges and providing an ideal solution for highly regulated industries. This function, available as an add-on, represents a significant enhancement of the Microsoft Defender for Storage security solution. Malware scanning provides the following benefits:

  • Agentless protection in near real time: the ability to intercept advanced malware, such as polymorphic and metamorphic variants.
  • Cost optimization: thanks to flexible pricing, you can control costs based on the amount of data examined, with resource-level granularity.
  • Enablement at scale: with no maintenance required, it supports automated responses at scale and offers several activation options via tools and platforms such as Azure Policy, Bicep, ARM templates, Terraform, the REST API and the Azure portal (see the sketch after this list).
  • Application versatility: based on feedback from beta users over the last two years, malware scanning has proven useful in a variety of scenarios, such as web applications, content protection, compliance, third-party integrations, collaborative platforms, data streams and datasets for machine learning (ML).
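
To give an idea of what activation via the REST API can look like, here is a minimal Python sketch that enables Defender for Storage with on-upload malware scanning on a single storage account. It is only an illustration: the resource path, api-version and property names reflect the Defender for Storage settings API as publicly documented, but should be verified against the current documentation, and the storage account ID is a placeholder.

# Illustrative sketch: enable Defender for Storage with on-upload malware scanning
# on one storage account via the Azure management REST API. The resource path,
# api-version and property names are assumptions to verify against current docs.
import requests
from azure.identity import DefaultAzureCredential

STORAGE_ACCOUNT_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Storage/storageAccounts/<account-name>"  # placeholder resource ID
)
API_VERSION = "2022-12-01-preview"  # assumed api-version for defenderForStorageSettings

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com{STORAGE_ACCOUNT_ID}"
    f"/providers/Microsoft.Security/defenderForStorageSettings/current"
    f"?api-version={API_VERSION}"
)
body = {
    "properties": {
        "isEnabled": True,
        "malwareScanning": {
            # capGBPerMonth keeps scanning costs predictable, with resource-level granularity
            "onUpload": {"isEnabled": True, "capGBPerMonth": 5000}
        },
        "sensitiveDataDiscovery": {"isEnabled": True},
    }
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json())

The same setting can of course be rolled out at scale with Azure Policy, Bicep or Terraform, as mentioned above.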

Express Configuration for Vulnerability Assessments in Defender for SQL

The 'express' configuration option for vulnerability assessments in Defender for SQL, now available to everyone, facilitates the recognition and management of vulnerabilities, ensuring a timely response and more effective protection.

GitHub Advanced Security for Azure DevOps

It is now possible to view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets and dependencies directly in Defender for Cloud. The results appear in the DevOps section and in the Recommendations. To see these results, you need to integrate your GHAzDO-enabled repositories into Defender for Cloud.

New auto-provisioning process for SQL Server plan (preview)

The Microsoft Monitoring Agent (MMA) will be deprecated in August 2024. Defender for Cloud has updated its strategy by replacing MMA with the release of an Azure Monitor agent auto-provisioning process targeted at SQL Server.

Revisiting the business model and pricing structure

Microsoft has revised the business model and pricing structure of Defender for Cloud plans. These changes, aimed at offering greater clarity in expenses and making the cost structure more intuitive, were made in response to customer feedback to improve the overall user experience.

Conclusion

Summer 2023 marked a period of significant innovation for Microsoft Defender for Cloud. These innovations, oriented towards more integrated and simplified security management, promise to bring tangible benefits to companies, facilitating data protection and compliance in increasingly complex cloud environments.

Learn about foolproof strategies to optimize costs on Azure

The peculiarities and undeniable advantages of cloud computing can, in certain situations, hide pitfalls if not handled with due attention. Wise cost management is one of the crucial aspects of cloud governance. This article explores and outlines the principles and techniques that can be used to optimize and minimize the expenses related to resources deployed in the Azure environment.

The issue of optimizing cloud costs is attracting ever greater interest among customers: so much so that, for the seventh year in a row, it emerges as the leading initiative in the cloud industry, as reported in Flexera's 2023 annual report.

Figure 1 – Initiatives reported in the Flexera report of 2023

Principles to better manage costs

For effective management of the costs associated with Azure, it is essential to adopt the principles outlined in the following paragraphs.

Design

A well-structured design process, which includes a meticulous analysis of business needs, is essential to tailor the adoption of cloud solutions. It therefore becomes crucial to outline the infrastructure to be implemented and how it will be used, through a design plan that aims to optimize the efficiency of the resources allocated in the Azure environment.

Visibility

It is vital to equip yourself with tools that offer a global view and allow you to receive notifications regarding Azure costs, thus facilitating constant and proactive monitoring of expenses.

Responsibility

Assigning cloud resource costs to the respective organizational units within the company is a smart practice. This ensures that managers are fully aware of the expenses attributable to their team, promoting an in-depth understanding of Azure spending at the organizational level. For this purpose, it is advisable to structure Azure resources in such a way as to facilitate the identification and attribution of costs.

Optimization

It is advisable to undertake periodic reviews of Azure resources with the intention of minimizing expenses where possible. Making use of available information, you can easily identify underutilized resources, eliminate waste and capitalize on cost saving opportunities.

Iteration

It is essential that IT staff are continuously engaged in the iterative processes of optimizing the costs of Azure resources. This represents a key element for responsible and effective management of the cloud environment.

Techniques to optimize costs

Regardless of the specific tools and solutions used, to refine cost management in Azure, you can adhere to the following strategies:

  • Turn off unused resources, given that the pricing of the various Azure services is based on actual resource usage. For resources that do not require uninterrupted operation and that can be deactivated or suspended without any loss of configuration or data, an automation system can be implemented. Regulated by a predefined schedule, it facilitates optimized usage and, consequently, more economical management of the resources themselves (a sketch of such an automation follows this list).
  • Adequately size resources: consolidating workloads and proactively intervening on underutilized resources avoids waste and guarantees a more efficient and targeted use of the available capacity.
  • For resources used continuously in the Azure environment, evaluating the option of Reservations can prove to be an advantageous strategy. Azure Reservations offer the opportunity to benefit from a significant cost reduction, up to 72% compared to pay-as-you-go rates. This benefit is obtained by committing to pay for the use of Azure resources for a period of one or three years. The payment can be made in advance or on a monthly basis, at no additional cost. Reservations can be purchased directly from the Azure portal and are available to customers with the following subscription types: Enterprise Agreement, Pay-As-You-Go and Cloud Solution Provider (CSP).
  • To further mitigate costs associated with Azure, it is appropriate to consider the Azure Hybrid Benefit. This benefit allows you to achieve significant savings, as Microsoft only charges for the Azure infrastructure, while the licenses for Windows Server or SQL Server are covered by a Software Assurance contract or an existing subscription.

The Azure Hybrid Benefit can also be extended to Azure SQL Database, to SQL Server installed on Azure virtual machines and to SQL Managed Instances. These benefits facilitate the transition to cloud solutions, offering up to 180 days of dual-use rights, and help leverage pre-existing investments in SQL Server licenses. To learn more about how to use the Azure Hybrid Benefit for SQL Server, please consult the FAQs in this document. It is important to note that this benefit is also applicable to Red Hat and SUSE Linux subscriptions, further expanding the opportunities for savings and cost optimization.

The Azure Hybrid Benefit can be combined with Azure Reserved VM Instances, creating an opportunity for significant savings that can reach 80% of the total, especially when you opt for a 3-year Azure Reserved Instance purchase. This synergy not only makes the investment cheaper, but also maximizes operational efficiency.

  • Considering the integration of new technologies and the application of architectural optimizations is crucial. This process involves selecting the most appropriate Azure service for the specific needs of the application in question, ensuring not only optimal technological alignment, but also more efficient cost management.
  • Allocating and de-allocating resources dynamically is critical to meeting fluctuating performance needs. This approach is known as “autoscaling”, a process that facilitates the flexible allocation of resources to meet specific performance needs at any time. As the workload intensifies, an application may require additional resources to maintain desired performance levels and meet SLAs (Service Level Agreements). Conversely, when demand falls and additional resources are no longer essential, they can be de-allocated to minimize costs. Autoscaling capitalizes on the elasticity of cloud environments, allowing not only more effective cost management, but also a reduced administrative burden, as resources can be managed more smoothly and with less manual intervention.
  • For test and development environments, it is advisable to consider the use of Dev/Test subscriptions, which offer the opportunity to access significant discounts on Azure fees. These subscriptions can be activated under an Enterprise Agreement, thus facilitating more advantageous cost management and more agile and economical experimentation during the development and testing phases.
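
As a concrete example of the first technique, the following is a minimal Python sketch that deallocates virtual machines outside working hours. It assumes a hypothetical "auto-shutdown" tag used to opt resources in; the scheduling itself would typically be provided by Azure Automation, a timer-triggered Azure Function or a similar mechanism.

# Minimal sketch: deallocate opted-in VMs to stop compute billing outside working hours.
# The "auto-shutdown" tag convention is purely illustrative; run this on a schedule
# (Azure Automation, a timer-triggered Function, cron, ...) to automate the shutdown.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

for vm in compute.virtual_machines.list_all():
    tags = vm.tags or {}
    if tags.get("auto-shutdown", "").lower() != "true":
        continue  # only touch VMs explicitly opted in via the tag
    resource_group = vm.id.split("/")[4]  # the resource group is part of the resource ID
    print(f"Deallocating {vm.name} in {resource_group} ...")
    # Deallocation releases the compute resources, so the VM no longer incurs compute charges
    compute.virtual_machines.begin_deallocate(resource_group, vm.name).result()

A symmetrical job can start the same VMs again at the beginning of the working day with begin_start.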

Conclusions

The adoption of a methodological approach in managing cloud costs, together with the use of appropriate strategies, represents a fundamental pillar for successfully navigating the complex challenge of cloud economic management. Drawing from the principles and techniques outlined in this article, users can not only optimize expenses, but also make the most of their investment in the cloud, ensuring a balance between costs and benefits.

Hotpatching in Windows Server: a revolution in virtual machine management

In the digital age, ensuring business continuity is essential, no longer just an added value. For many companies, frequent interruptions, even of short duration, are unacceptable for their critical workloads. However, ensuring that continuity can be complex, since managing virtual machines (VMs) running the Windows Server operating system presents its own challenges, especially when it comes to applying security patches and updates. With the advent of Microsoft's hotpatching feature, a new chapter in VM management has opened: a more efficient approach that minimizes disruption, guaranteeing servers that are always up to date and protected. This article looks at the features and benefits of this innovative solution.

What is Hotpatching?

Hotpatching, introduced by Microsoft, is an advanced technique that allows you to update Windows Server operating systems without the need to restart. Imagine being able to “change the tires” of your car while it is moving, without having to stop it. This is the "magic" of hotpatching.

Where you can use Hotpatching

Hotpatch functionality is supported on “Windows Server 2022 Datacenter: Azure Edition”, which can be used for VMs running in Azure and in the Azure Stack HCI environment.

The Azure images available for this feature are:

  • Windows Server 2022 Datacenter: Azure Edition Hotpatch (Desktop Experience)
  • Windows Server 2022 Datacenter: Azure Edition Core

Note that Hotpatch is enabled by default on Server Core images and that Microsoft recently extended hotpatching support to include Windows Server with Desktop Experience, further expanding the scope of this feature.

Updates supported

Hotpatch covers Windows security updates and maintains alignment with the content of security updates issued in the regular (non-hotpatch) Windows update channel.

There are some important considerations for running a Windows Server Azure Edition VM with hotpatch enabled:

  • reboots are still required to install updates that are not included in the hotpatch program;
  • reboots are also required periodically after a new baseline has been installed;
  • reboots keep the VM in sync with non-security patches included in the latest cumulative update.

Patches not currently included in the hotpatch program include non-security updates released for Windows, .NET updates and non-Windows updates (such as drivers, firmware updates, etc.). These types of patches may require a reboot during hotpatch months.

Benefits of Hotpatching

The benefits of this technology are many:

  • Better security: with hotpatching, security patches are applied quickly and efficiently. This reduces the window of vulnerability between the release of a patch and its application, offering fast protection against threats.
  • Minimization of downtime: one of the main benefits of hotpatching is the ability to apply updates without the need to restart the server. This means fewer outages and higher availability for applications and services.
  • More flexible management: system administrators have the freedom to decide when to apply patches, without the worry of having to do careful planning to ensure that running processes are not interrupted while applying updates.

How hotpatching works

During a hotpatching process, the security patch is injected into the operating system's running code in memory, updating the system while it is still running.

Hotpatch works by first establishing a baseline with the current Cumulative Update for Windows Server. Periodically (on a quarterly basis), the baseline is updated with the latest Cumulative Update, after which hotpatches are released for the following two months. For example, if a Cumulative Update is released in January, hotpatches are released in February and March. For the hotpatch release schedule, you can consult the Release Notes for Hotpatch in Azure Automanage for Windows Server 2022.

Hotpatches contain updates that do not require a restart. Because hotpatching fixes the in-memory code of running processes without the need to restart them, applications hosted on the operating system are not affected by the patching process. This is separate from any performance and functionality implications of the patch itself.

The following image shows an example of an annual update release schedule (including examples of unplanned baselines due to zero-day corrections).

Figure 1 – Outline of a sample yearly schedule for releasing Hotpatch updates

There are two types of baselines:

  • Planned baselines: released on a regular basis, with hotpatch releases in between. Planned baselines include all the updates in a newer Cumulative Update and require a restart.
  • Unplanned baselines: released when an important update (such as a zero-day fix) cannot be delivered as a hotpatch. When an unplanned baseline is released, the hotpatch release for that month is replaced by the unplanned baseline. Unplanned baselines also include all the updates in a newer Cumulative Update and require a restart.

The programming shown in the example image illustrates:

  • four baseline releases planned in a calendar year (five total in the diagram) and eight hotpatch releases;
  • two unplanned baselines that would replace the hotpatch releases for those months.

Patch orchestration process

Hotpatch is to be considered as an extension of Windows Update and patch orchestration tools vary depending on the platform in use.

Hotpatch orchestration on Azure

Virtual machines created in Azure are enabled by default for automatic patching when using a supported “Windows Server Datacenter: Azure Edition” image (a configuration sketch follows this list):

  • patches classified as Critical or Security are automatically downloaded and applied to the VM;
  • patches are applied during off-peak hours considering the time zone of the VM;
  • Azure handles patch orchestration and patches are applied following availability-first principles;
  • the health status of the virtual machine, determined through Azure platform health signals, is monitored for patching failures.
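
For illustration, the sketch below shows how the relevant patch settings could be verified and set on an existing VM with the Azure Python SDK. It assumes the VM already runs a supported "Azure Edition" image; the property names (patch mode "AutomaticByPlatform" and the hotpatching flag) reflect the Azure compute API, but treat the snippet as a sketch to validate against the current documentation rather than a definitive procedure.

# Illustrative sketch: check and enable hotpatch-friendly patch settings on an Azure VM
# that already runs a supported "Azure Edition" image. Property names follow the Azure
# compute API (patchMode=AutomaticByPlatform, enableHotpatching=true); validate against
# the current SDK/API documentation before use. All identifiers are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import (
    OSProfile, PatchSettings, VirtualMachineUpdate, WindowsConfiguration,
)

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "<resource-group>"    # placeholder
VM_NAME = "<vm-name>"                  # placeholder

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

vm = compute.virtual_machines.get(RESOURCE_GROUP, VM_NAME)
print("Current patch settings:", vm.os_profile.windows_configuration.patch_settings)

# Request platform-orchestrated patching with hotpatching enabled
update = VirtualMachineUpdate(
    os_profile=OSProfile(
        windows_configuration=WindowsConfiguration(
            patch_settings=PatchSettings(
                patch_mode="AutomaticByPlatform",
                enable_hotpatching=True,
            )
        )
    )
)
compute.virtual_machines.begin_update(RESOURCE_GROUP, VM_NAME, update).result()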

Hotpatch orchestration on Azure Stack HCI

Hotpatch updates for VMs running in an Azure Stack HCI environment can be orchestrated using:

  • Group Policy to configure Windows Update client settings;
  • Windows Update client settings or SCONFIG for Server Core;
  • a third-party patch management solution.

Considerations and Limitations

However, like any technology, hotpatching has its nuances. Not all patches are suitable for hotpatching; some may still require a traditional restart. Furthermore, before applying any patch, it remains crucial to test it in a controlled environment to avoid potential problems.

Installing hotpatch updates does not support automatic rollback. If a VM experiences a problem during or after an update, you need to uninstall the update and install the latest known good baseline update. After the rollback you will need to restart the VM.

Conclusion

The introduction of hotpatching by Microsoft represents a significant step forward in the management of VMs running the Windows Server operating system. With the ability to apply security patches and updates non-disruptively, administrators can ensure that their servers are protected and updated in a faster and more effective way. In a world where security is paramount and where every second counts, hotpatching is positioned as a valuable solution for any company that uses Windows Server in an Azure or Azure Stack HCI environment.

Revolutionize cloud cost management with AI: discover the new Microsoft Cost Management Copilot!

In the digital age, cloud computing has become an essential component for many companies, offering flexibility, scalability and agility. However, with the ever more widespread adoption of the cloud, the management of associated costs has become an increasingly complex challenge, and companies are looking for innovative solutions to optimize their cloud spending. In this context, Microsoft introduced “Copilot” in Cost Management, a new feature based on artificial intelligence, designed to help businesses navigate this complex landscape. This article presents the main features of this integration, which promises to revolutionize the way businesses manage and optimize their spending on cloud resources.

A clear view of costs with Microsoft Cost Management

Microsoft Cost Management, available directly from the Azure portal, offers a detailed view of operating costs, allowing businesses to better understand how their funds are being spent. This tool provides detailed information about your expenses, highlighting anomalies and spending patterns. Furthermore, it allows you to set budgets, share costs among different teams and identify optimization opportunities.
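
This data can also be consumed programmatically. As a minimal illustration, the Python sketch below queries month-to-date costs for a subscription, grouped by service, through the Cost Management query endpoint; the api-version and the request body shape follow the public REST API but should be verified against the current documentation, and the subscription ID is a placeholder.

# Illustrative sketch: query month-to-date actual cost for a subscription via the
# Microsoft.CostManagement query REST endpoint. The api-version and request-body
# shape follow the public Cost Management REST API but should be verified against
# current documentation; the subscription ID is a placeholder.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
SCOPE = f"/subscriptions/{SUBSCRIPTION_ID}"
API_VERSION = "2023-03-01"  # assumed api-version

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com{SCOPE}"
    f"/providers/Microsoft.CostManagement/query?api-version={API_VERSION}"
)
body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        "grouping": [{"type": "Dimension", "name": "ServiceName"}],  # break down by service
    },
}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
for row in response.json()["properties"]["rows"]:
    print(row)  # each row holds the aggregated cost plus the grouping columns requested above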

AI at the service of cost management

With the introduction of AI in Microsoft Cost Management, users can now ask questions in natural language to quickly get the information they need. For example, to understand a recent invoice, it is possible to request a detailed breakdown of expenses. The AI will provide an overview of the different spending categories and their impact on the total.

As well as providing an overview of costs, the AI offers suggestions on how to analyze expenses further. Users can compare monthly bills, examine specific expenses or investigate anomalies. The AI also provides detailed information on any changes in costs and suggests corrective actions.

The AI integrated into Microsoft Cost Management interprets user intentions and retrieves the necessary data from various sources. This information is then presented to an advanced language model, which generates a response. It is important to note that the retrieved data is not used to train the model, but only to provide the context needed to generate a relevant response.

Future perspectives

The capabilities of AI in Microsoft Cost Management are constantly evolving. In the future, users will be able to take advantage of “what-if” simulations and modeling to make informed decisions. For example, they will be able to explore how storage costs will vary as the business grows, or evaluate the impact of moving resources from one region to another.

Figure 1 – Example of simulation and modeling “what-if”

Benefits

The introduction of AI in Microsoft Cost Management provides the following benefits:

  • Greater visibility and cost control: with greater visibility and understanding of cloud resource costs, organizations can make more informed decisions and better manage their budgets.
  • Operational efficiency: using AI to analyze and interpret data reduces the time and effort needed to gain valuable insights. Furthermore, users can ask specific questions in natural language and receive detailed answers, customized to their needs.

Figure 2 – Examples of questions

  • Optimization: with AI-driven tips and recommendations, organizations can identify and implement optimization opportunities to further reduce costs.

Conclusion

The integration of Copilot into Microsoft Cost Management represents a significant step forward in cloud cost management. With the help of artificial intelligence, businesses now have a powerful tool to optimize their spending and ensure they operate at peak efficiency. With the constant evolution of artificial intelligence, further and interesting innovations are expected in the field of cloud cost management and beyond.

Azure by your side: new solutions for Windows Server 2012/R2 end of support

In the era of Artificial Intelligence and native services for the cloud, organizations continue to rely on Windows Server as a secure and reliable platform for their mission-critical workloads. However, it is important to note that support for Windows Server 2012/R2 will end on 10 October 2023. After that date, Windows Server 2012/R2 systems will become vulnerable if action is not taken, as they will no longer receive regular security updates. Recently, Microsoft has announced that Azure offers new solutions to better manage the end of support of Windows Server 2012/R2. These solutions will be examined in detail in this article, after a brief summary to set the context.

The impact of the end of support for Windows Server 2012/R2: what does it mean for companies?

Microsoft has announced the end of support for Windows Server 2012 and 2012 R2, set for 10 October 2023. This event represents a turning point for many organizations that rely on these servers to access applications and data. But what exactly does end of support (EOL) mean, and what are the implications for companies?

Understanding end of support

Microsoft has a lifecycle policy that provides support for its products, including Windows Server 2012 and 2012 R2. End of support refers to the point at which a product is no longer supported by Microsoft, which means no more security updates, patches or technical support will be provided.

Why companies should care

Without regular updates and patches, companies using Windows Server 2012 and 2012 R2 are exposed to security vulnerabilities, such as ransomware attacks and data breaches. Furthermore, using an unsupported product such as Windows Server 2012 or 2012 R2 can lead to non-compliance issues. Finally, outdated software can cause compatibility issues with newer applications and hardware, hampering efficiency and productivity.

An opportunity to review IT strategy

Companies should use the EOL event as an opportunity to review their IT strategy and determine the desired business goals for their technology. In this way, they can align the technology with their long-term goals, leveraging the latest cloud solutions and improving operational efficiency.

The strategies that can be adopted to deal with this situation, thus avoiding exposing your IT infrastructure to security issues, have already been addressed in the article: How the End of Support of Windows Server 2012 can be a great opportunity for CTOs.

In this regard, Microsoft has introduced two new options, provided through Azure, to help manage this situation:

  • upgrading servers with Azure Migrate;
  • deployment of Extended Security Updates (ESU) on Azure Arc-enabled servers.

The following paragraphs describe the characteristics of these new options.

Upgrading Windows servers in the end-of-support (EOS) phase with Azure Migrate

Azure Migrate is a service offered by Microsoft Azure that allows you to assess and migrate on-premises resources, such as virtual machines, applications and databases, to the Azure cloud infrastructure. Recently, Azure Migrate has introduced support for in-place upgrades of Windows Server 2012 and later when moving to Azure. This allows organizations to move their legacy applications and databases to a fully supported, compatible and compliant operating system such as Windows Server 2016, 2019 or 2022.

Key benefits of Azure Migrate's OS update feature

Risk mitigation: Azure Migrate creates a replica of the original server in Azure, allowing the OS to be updated on the replica while the source server remains intact. In case of problems, customers can easily go back to the original operating system.

Compatibility Test: Azure Migrate provides the ability to perform a test migration in an isolated environment in Azure. This is especially useful for OS updates, allowing customers to evaluate the compatibility of their operating system and updated applications without impacting production. This way you can identify and fix any problems in advance.

Reduced effort and downtime: by integrating OS upgrades with cloud migration, customers can significantly save time and effort. With only one additional input, the target operating system version, Azure Migrate takes care of the rest, simplifying the process. This integration further reduces downtime of the server and the applications hosted on it, increasing efficiency.

No separate Windows licenses: with the Azure Migrate OS upgrade, you do not need to purchase a separate operating system license to upgrade. Whether the customer uses Azure Hybrid Benefit (AHB) or PAYG, the license is covered when migrating to an Azure VM using Azure Migrate.

Large-scale server upgrade: Azure Migrate supports large-scale server OS upgrades, allowing customers to upgrade up to 500 servers in parallel when migrating to Azure. Using the Azure portal, you can select up to 10 VMs at a time to set up replication. To replicate more VMs, you can use the portal and add VMs to be replicated in multiple batches of 10, or use the Azure Migrate PowerShell interface to configure replication.

Supported OS versions

Azure Migrate can handle:

  • Windows Server 2012: supports upgrading to Windows Server 2016;
  • Windows Server 2012 R2: supports upgrading to Windows Server 2016, Windows Server 2019;
  • Windows Server 2016: supports upgrading to Windows Server 2019, Windows Server 2022;
  • Windows Server 2019: supports upgrading to Windows Server 2022.

Deployment of ESU-derived updates on Azure Arc-enabled servers

Azure Arc is a set of Microsoft solutions that help businesses manage, govern and protect assets in various environments, including on-premises, edge and multi-cloud, extending the management capabilities of Azure to any infrastructure.

For organizations unable to modernize or migrate before the Windows Server 2012/R2 end of support date, Microsoft has announced Extended Security Updates (ESU) enabled by Azure Arc. With Azure Arc, organizations can purchase and deploy Extended Security Updates (ESU) seamlessly in on-premises or multicloud environments, directly from the Azure portal.

To get Extended Security Updates (ESU) for Windows Server 2012/R2 and SQL Server 2012 enabled by Azure Arc, you need to follow the steps below:

  • Preparing the Azure Arc environment: first of all, you need an Azure environment and a working Azure Arc infrastructure. Azure Arc can be installed on any server running Windows Server 2012/R2 or SQL Server 2012, provided that the connectivity requirements are met.
  • Server registration in Azure Arc: once the Azure Arc environment is set up, you need to register your Windows servers or SQL Server systems in Azure Arc. This process allows systems to become managed resources in Azure, making them eligible for ESUs.
  • Purchase of ESUs: once the servers are registered in Azure Arc, ESUs can be purchased through Azure for each server you want to protect.
  • ESU activation: after purchasing the ESUs, you need to activate them on the servers. This process involves installing a license key and downloading security updates from Windows Update or your local update distribution infrastructure (an indicative sketch of the purchase and activation calls follows this list).
  • Installing updates: finally, once the ESUs are activated, you can install security updates on servers. This process can be managed manually or by automating it through update management tools.
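
Purely as an indication of what the purchase and activation steps can look like outside the portal, the sketch below creates an ESU license object and links it to an Arc-enabled machine through the Microsoft.HybridCompute resource provider. The resource types, property names and api-version used here are assumptions based on the Arc-enabled servers ESU REST API and must be checked against the current Microsoft documentation before use; all identifiers are placeholders.

# Indicative sketch only: creating a Windows Server 2012 ESU license and linking it to
# an Arc-enabled server through the Microsoft.HybridCompute resource provider.
# Resource types, property names and api-version are assumptions to verify against
# the current Azure Arc ESU documentation; all IDs and values are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
MACHINE_NAME = "<arc-machine-name>"     # placeholder
REGION = "westeurope"                   # placeholder
API_VERSION = "2023-10-03-preview"      # assumed api-version

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}
base = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.HybridCompute"
)

# 1) Create (purchase) the ESU license object
license_id = f"{base}/licenses/ws2012-esu-license"
license_body = {
    "location": REGION,
    "properties": {
        "licenseType": "ESU",
        "licenseDetails": {
            "state": "Activated",
            "target": "Windows Server 2012",
            "edition": "Standard",
            "type": "vCore",
            "processors": 8,
        },
    },
}
requests.put(f"{license_id}?api-version={API_VERSION}",
             json=license_body, headers=headers).raise_for_status()

# 2) Link the license to the Arc-enabled machine through its license profile
profile_url = (
    f"{base}/machines/{MACHINE_NAME}/licenseProfiles/default"
    f"?api-version={API_VERSION}"
)
profile_body = {
    "location": REGION,
    "properties": {"esuProfile": {"assignedLicense": license_id}},
}
requests.put(profile_url, json=profile_body, headers=headers).raise_for_status()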

Note: ESUs only provide critical and important security updates; they do not include new features or performance improvements. Furthermore, ESUs are only available for a limited time after Microsoft's end of support. Therefore, we recommend that you consider migrating to newer server versions to have access to all features, in addition to security updates.

Conclusions

This year, Microsoft celebrates the 30th anniversary of Windows Server, a milestone achieved thanks to relentless innovation and customer support. However, customers must commit to keeping their Windows Server systems up to date as they approach end of support. In particular, the end of support for Windows Server 2012 and 2012 R2 poses a significant risk to companies, but it also presents an opportunity to review and improve their IT strategy. By identifying desired business goals, engaging in strategic planning and, if necessary, using these new solutions offered by Azure, companies can ensure a smooth and successful transition, optimizing their IT infrastructure to achieve their long-term goals.

Microsoft Azure and Nutanix: a strategic partnership for hybrid cloud

In the last few years, the adoption of cloud computing has grown exponentially, revolutionizing the way organizations manage their IT assets. One of the key concepts that has gained popularity is the “hybrid cloud”, an operating model that combines the best of public and private cloud services in a single flexible solution. To deliver new hybrid cloud solutions that combine application agility with unified management between the private cloud and Azure, Microsoft has entered into a strategic partnership with Nutanix, a leader in hyperconverged infrastructure. This article explores the key details of this strategic partnership, illustrating how the hybrid cloud solutions offered by Azure and Nutanix can help companies achieve their digital transformation goals, while ensuring the security, reliability and efficiency essential for success in the cloud era.

Recognizing the need to offer solutions that fit specific customer needs, Microsoft Azure was designed from the ground up with the goal of reducing cost and complexity, while improving reliability and efficiency. This vision has materialized into a comprehensive platform that offers choice and flexibility for your IT environment.

Figure 1 – Overview of the possibilities offered by Microsoft Azure in terms of infrastructure

Moving to the cloud is not always a smooth process and there are situations where existing on-premises platforms continue to play a vital role. Azure enables customers to adopt the cloud at their own pace, ensuring continuity in the use of already known local platforms. This opportunity has long been available for VMware and is now also available for Nutanix.

What are Nutanix Cloud Clusters (NC2)?

Nutanix Cloud Clusters (NC2) are bare-metal instances that are physically located within public clouds, including Microsoft Azure and AWS. NC2 runs the core of the Nutanix HCI stack, which includes the following main components:

  • Nutanix Acropolis Hypervisor (AHV): the open-source hypervisor based on the Kernel-based Virtual Machine (KVM);
  • Nutanix Acropolis Operating System (AOS): the operating system that abstracts the Nutanix components from the end user, such as KVM, virsh, qemu, libvirt and iSCSI, and manages the entire backend configuration;
  • Prism: the solution that provides administrators with centralized access to easily configure, monitor and manage Nutanix environments.

Figure 2 – Overview of Nutanix Cloud Cluster on Azure

A Nutanix cluster on Azure consists of at least three nodes. The SKUs available for NC2 on Azure, with details on cores, RAM, storage and networking, are available at this link.

The connection of the on-premises environment to Azure is supported both via ExpressRoute and via VPN Gateway.

The following shows an example implementation of NC2 in Azure, from a networking point of view:

Figure 3 – Example implementation of NC2 in Azure

Main adoption scenarios

The adoption of the Nutanix solution in Azure can take place to address the following scenarios:

  • disaster recovery and business continuity;
  • need to expand your data center;
  • need to quickly and easily migrate your Nutanix workloads to Azure

Benefits of this solution

The main benefits of adopting this solution are reported below.

  • Adopt a consistent hybrid deployment strategy: a consistent hybrid deployment strategy can be established, combining on-premises resources with Nutanix clusters in Azure. This allows you to operate in a homogeneous way, without differences between the two environments.
  • Easy activation and scalability: with Azure, you have the ability to easily activate and scale applications and services without encountering particular limitations. Indeed, the global infrastructure of Azure provides the scalability and flexibility necessary to meet changing business needs.
  • Optimization of investments made: you can continue to leverage your investment in Nutanix tools and expertise.
  • Modernization through the potential of Azure: with Azure, it is possible to modernize the architecture through integration with innovative and cutting-edge services. In fact, once customers activate their Nutanix environment, they can benefit from further integration with Azure, giving application developers access to the full ecosystem of services offered by Azure.

Cost model

Customers must bear the cost of purchasing Nutanix software and must pay Microsoft for the use of cloud resources. Nutanix software on clusters can be licensed in several ways:

  • BYO licenses (Bring Your Own): this type of license allows customers to use their own Nutanix licenses they already own or are purchasing. In this way, customers can port their on-premises licenses to NC2. It is important to note that the Nutanix AOS license must be of the Pro or Ultimate type, since the AOS Starter license cannot be used with NC2.
  • PAYG (Pay-As-You-Go): this licensing model provides hourly payments based on the number of cores used or SSD usage. Customers pay only for resources actually used during the time the cluster is active.
  • Cloud Commit: this model requires a minimum commitment from the customer for a specific period of time. Customers commit to using Nutanix resources on NC2 for a specific period and receive preferential rates based on that commitment.

Support options

Microsoft offers support for NC2 bare metal infrastructure on Azure. To request assistance, simply open a specific request directly from the Azure portal. Nutanix, instead, provides support for NC2 Nutanix software on Azure. This level of support is called Production Support for NC2.

Conclusions

Thanks to the collaboration between Microsoft and Nutanix, this solution offers customers who already have a Nutanix on-premises environment the possibility to take advantage of the same features in the Microsoft public cloud, while also accessing the wide range of services offered by Azure. This solution makes it possible to adopt a consistent operating model, which can increase agility, deployment speed and the resiliency of critical workloads.

Azure Stack HCI: IT infrastructure innovation that reduces environmental impact

The era of technological innovation has a duty to merge with environmental sustainability, and Microsoft Azure Stack HCI represents a significant step forward in this direction. In the fast-paced world of enterprise IT, organizations are constantly looking for solutions that not only offer excellent performance and innovation, but which also contribute to reducing the environmental impact of their IT infrastructures. Azure Stack HCI stands as a cutting-edge solution that combines technological excellence with a commitment to environmental sustainability. In this article, we will explore the positive environmental implications of adopting Azure Stack HCI.

 

Reduction of energy consumption

In a hyper-converged infrastructure (HCI), several hardware components are replaced by software, which combines the processing, storage and networking layers in a single solution. Azure Stack HCI is the Microsoft solution that allows you to create a hyper-converged infrastructure (HCI), where computing, storage and networking resources are consolidated into a single platform. This eliminates the need for separate devices, such as appliances, storage fabrics and SANs, leading to an overall reduction in energy consumption. Furthermore, Azure Stack HCI systems are purpose-built to operate efficiently, making the most of the available resources. This elimination of separate devices and optimization of resources help reduce the amount of energy required to maintain and cool the infrastructure, thus contributing to the reduction of carbon emissions.

Figure 1 – "Three Tier" Infrastructure vs Hyper-Converged Infrastructure (HCI)

Intelligent use of resources

Azure Stack HCI allows you to flexibly scale resources based on workload needs and allows you to extend its functionality with Microsoft Azure cloud services, including:

  • Azure Site Recovery to implement disaster recovery scenarios;
  • Azure Backup for offsite protection of your infrastructure;
  • Update Management which allows you to make an assessment of the missing updates and proceed with the corresponding deployment, for both Windows machines and Linux systems, regardless of their geographical location;
  • Azure Monitor which offers a centralized way to monitor and control what is happening at the application level, network and hyper-converged infrastructure, using advanced analytics based on artificial intelligence;
  • Defender for Cloud which guarantees monitoring and detection of security threats on workloads running in the Azure Stack HCI environment;
  • Cloud Witness, to use an Azure storage account as the cluster quorum witness.

Furthermore, it is possible to modernize and make the file server more efficient as well, which remains a strategic and widely used component in data centers, by adopting Azure File Sync. This solution allows you to centralize the network folders of the infrastructure in Azure Files, while ensuring the flexibility, performance and compatibility of a traditional Windows file server. Although it is possible to maintain a complete copy of the data in the on-premises environment, Azure File Sync turns Windows Server into a “cache” that allows quick access to the contents of a specific Azure file share: all files reside in the cloud, while only the most recently used files are also kept on the on-premises file server. This approach allows you to significantly reduce the storage space required in your datacenter.

Figure 2 – Platform integration with cloud solutions


Thanks to virtualization, the dynamic allocation of resources and the adoption of solutions in the cloud environment, you can use only the resources you need on-premises, avoiding waste of energy. This approach to infrastructure reduces the environmental impact of manufacturing, management and disposal of obsolete hardware components.

Optimization of physical space

Consolidating resources into a single Azure Stack HCI platform reduces the need for physical space for server installation, storage devices and network devices. This results in a significant reduction in the surface area occupied in server rooms, allowing for more efficient space management and higher computational density. In turn, the reduction of the occupied space means lower cooling and lighting needs, thus contributing to overall energy savings.

Conclusions

The adoption of Microsoft Azure Stack HCI offers significant benefits in terms of environmental sustainability. The reduction of energy consumption, resource optimization, the intelligent use of physical space and broad flexibility all help to reduce the environmental impact of data centers and IT infrastructures. Azure Stack HCI represents a step forward towards the adoption of more sustainable IT solutions, enabling organizations to optimize resources, reduce carbon emissions and promote more efficient and environmentally conscious management of IT resources.

Cloud Security Posture Management (CSPM) in Defender for Cloud: protect your assets with an advanced security solution

In the context of today's digital landscape, the adoption of cloud computing has opened up new opportunities for organizations, but at the same time new challenges have emerged in terms of security of cloud resources. The adoption of a Cloud Security Posture Management solution (CSPM) is critical to ensuring that cloud resources are configured securely and that security standards are properly implemented. Microsoft Azure offers Defender for Cloud, a complete solution that combines the power of a CSPM platform with advanced security features to help organizations protect their cloud resources effectively. This article dives into the CSPM features offered by Defender for Cloud.

The pillars of security covered by Microsoft Defender for Cloud

The features of Microsoft Defender for Cloud cover three major pillars of security for modern architectures that adopt cloud components:

  • DevOps Security Management (DevSecOps): Defender for Cloud helps you incorporate security best practices early in the software development process. In fact, it helps secure code management environments (GitHub and Azure DevOps) and development pipelines, and provides insight into the security posture of the development environment. Defender for Cloud currently includes Defender for DevOps.
  • Cloud Security Posture Management (CSPM): a set of practices, processes and tools aimed at identifying, monitoring and mitigating security risks in cloud resources. CSPM offers broad visibility into the security posture of assets, enabling organizations to identify and correct non-compliant configurations, vulnerabilities and potential threats. This proactive approach reduces the risk of security breaches and helps maintain a secure cloud environment.
  • Cloud Workload Protection Platform (CWPP): proactive security principles require implementing practices that protect workloads from threats. Defender for Cloud includes a wide range of advanced and intelligent protections for workloads, provided through specific Microsoft Defender plans for the different types of resources present in Azure subscriptions and in hybrid and multi-cloud environments.

Figure 1 – The security pillars covered by Microsoft Defender for Cloud

CSPM in Defender for Cloud

Defender for Cloud is the advanced security solution from Microsoft Azure that covers the CSPM scope, offering a wide range of security features and controls for cloud resources. With Defender for Cloud, organizations can get complete visibility into their assets, identify and resolve vulnerabilities and constantly monitor the security posture of their resources. Some of the key features offered by Defender for Cloud include:

  • Configuration analysis: Defender for Cloud examines cloud resource configurations for non-compliant settings and provides recommendations to fix them. This ensures that resources are configured securely and that security standards are met.
  • Identification of vulnerabilities: the solution continuously scans cloud resources for known vulnerabilities. Recommendations and priorities are provided to address these vulnerabilities and reduce the risk of exploitation by potential threats.
  • Continuous monitoring: Defender for Cloud constantly monitors the security posture of cloud resources and provides real-time alerts in the event of insecure configurations or suspicious activity. This enables organizations to respond promptly to threats and maintain a secure cloud environment.
  • Automation and orchestration: Defender for Cloud automates much of the process of managing the security posture of cloud environments, allowing organizations to save valuable time and resources.

Defender for Cloud offers core CSPM capabilities for free. These features are automatically enabled on any subscription or account that has onboarded Defender for Cloud. If deemed necessary, the feature set can be expanded by activating the Defender CSPM plan.
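
As an illustration of how the plan could be enabled outside the portal, the following Python sketch sets the Defender CSPM plan ("CloudPosture") to the Standard tier at subscription scope via the Microsoft.Security pricings REST API; the plan name and api-version reflect the public API but should be verified against the current documentation, and the subscription ID is a placeholder.

# Illustrative sketch: enable the Defender CSPM plan on a subscription by setting the
# "CloudPosture" pricing to the Standard tier via the Microsoft.Security/pricings API.
# The plan name and api-version are taken from the public REST API but should be
# verified against current documentation; the subscription ID is a placeholder.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
API_VERSION = "2023-01-01"             # assumed api-version

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Security/pricings/CloudPosture"
    f"?api-version={API_VERSION}"
)
# "Free" corresponds to foundational CSPM; "Standard" enables the Defender CSPM plan
response = requests.put(
    url,
    json={"properties": {"pricingTier": "Standard"}},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
print(response.json()["properties"])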

Figure 2 – Comparison between CSPM plans

For a complete comparison you can refer to Microsoft's official documentation.

The optional Defender CSPM plan offers advanced security posture management capabilities, among the main ones we find:

  • Security governance: security teams are responsible for improving the security posture of their organizations, but they may not have the resources or authority to actually implement the security recommendations. Assigning owners with due dates and defining governance rules creates accountability and transparency, so you can drive the process of improving your organization's security.
  • Regulatory compliance: with this feature, Microsoft Defender for Cloud simplifies the process of meeting regulatory compliance requirements by providing a specific dashboard. Defender for Cloud continuously assesses the environment to analyze risk factors based on the controls and best practices of the standards applied to the subscriptions, and the dashboard reflects your compliance status with those standards. The Microsoft cloud security benchmark (MCSB), instead, is automatically assigned to subscriptions and accounts when you onboard Defender for Cloud (foundational CSPM). This benchmark builds on the cloud security principles defined by the Azure Security Benchmark and applies them with detailed technical implementation guidance for Azure, for other cloud providers (such as AWS and GCP) and for other Microsoft clouds.
  • Cloud Security Explorer: allows you to proactively identify security risks in your cloud environment by graphically querying the Cloud Security Graph, which is the context-definition engine of Defender for Cloud. Requests from the security team can be prioritized, taking into account the context and the specific rules of the organization. With Cloud Security Explorer it is possible to query security issues and the context of the environment, such as resource inventory, Internet exposure, permissions and “lateral movement” possibilities across resources and across multiple clouds (Azure and AWS).
  • Attack path analysis: analyzing attack paths helps address the security issues, related to the specific environment, that represent immediate threats with the greatest potential for exploitation. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach the environment, and highlights the security recommendations that need to be addressed to mitigate them.
  • Agentless scanning for machines: Microsoft Defender for Cloud maximizes coverage of OS posture issues and goes beyond the coverage provided by agent-based assessments. It provides instant, wide and unobstructed visibility into potential posture problems with agentless scanning for virtual machines, all without having to install agents, meet network connectivity requirements or impact machine performance. Agentless scanning for virtual machines provides vulnerability assessment and software inventory, both through Microsoft Defender Vulnerability Management, in Azure and Amazon AWS environments. Agentless scanning is available both in Defender Cloud Security Posture Management (CSPM) and in Defender for Servers P2.

Conclusions

In the increasingly complex context of IT asset security, especially in the presence of hybrid and multi-cloud environments, Cloud Security Posture Management (CSPM) has become an essential component of an organization's security strategy. Defender for Cloud in Microsoft Azure offers an advanced CSPM solution, which combines configuration analysis, vulnerability identification, continuous monitoring and automation to ensure that IT assets are adequately protected. Investing in a CSPM solution like Defender for Cloud enables organizations to mitigate security risks and protect their IT assets.

How the End of Support of Windows Server 2012 can be a great opportunity for CTOs

The end of support for the Windows Server 2012 and 2012 R2 operating systems is fast approaching and, for company Chief Technology Officers (CTOs), this aspect must be carefully evaluated as it has significant impacts on the IT infrastructure. At the same time, end of support can be an important opportunity to modernize the IT environment in order to ensure greater security, new features and improved business continuity. This article outlines the strategies you can adopt to deal with this situation, thus avoiding exposing your IT infrastructure to security issues.

When does Windows Server 2012/2012R2 support end and what does it mean?

10 October 2023 marks the end of extended support for Windows Server 2012 and Windows Server 2012 R2. Without Microsoft's support, Windows Server 2012 and Windows Server 2012 R2 will no longer receive security patches, unless you take one of the actions described below. This means that any vulnerabilities discovered in the operating system will no longer be fixed, which could leave systems vulnerable to cyber attacks. Furthermore, this condition would result in a state of non-compliance with specific regulations, such as the General Data Protection Regulation (GDPR).

Furthermore, users will no longer receive bug fixes and other updates needed to keep the operating system in line with the latest technology, which could lead to compatibility issues with newer software and introduce potential performance issues.

On top of all that, Microsoft will no longer provide online technical support and technical content updates for this operating system.

All these aspects have a significant impact on the IT organizations that still use these operating systems.

Possible strategies and opportunities related to the end of support

This situation is certainly not pleasant for those facing it now, given the limited time, but it can also be seen as an important opportunity for renewal and innovation of your infrastructure. The following paragraphs describe the possible strategies that can be implemented.

Upgrading on-premises systems

This strategy involves moving to a newer version of Windows Server in an on-premises environment. The advice in this case is to move at least to Windows Server 2019, but it is preferable to adopt the latest version, Windows Server 2022, which provides the latest innovations in security, application performance and modernization.

Furthermore, where technically possible it is preferable not to proceed with in-place upgrades of the operating system, but to manage the migration side-by-side.

This method usually requires the involvement of the application vendor, to ensure software compatibility with the new version of the operating system. Since the software is not recent, it often requires the adoption of an updated version, which may involve architecture adjustments and an in-depth testing phase for the new release. With this upgrade process the time and effort are considerable, but the result is critical for keeping pace with technological renewal.

Maintaining Windows Server 2012/2012 R2, but with security updates for another 3 years

To continue receiving security updates for Windows Server 2012/2012 R2 hosted in an on-premises environment, one option is to join the Extended Security Update (ESU) program. This paid program guarantees the provision of security updates classified as "critical" and "important" for an additional three years, in this specific case until 13 October 2026.

The Extended Security Update (ESU) program is an option for customers who need to run certain legacy Microsoft products beyond the end of support and who are not in a position to undertake other strategies. The ESU program does not include new features or non-security-related updates.

Azure adoption

Migrating systems to Azure

Windows Server 2012 and Windows Server 2012 R2 systems migrated from on-premises to the Azure environment will continue to receive security updates classified as critical and important for another three years, without having to join the ESU program. This scenario is not only useful to keep systems compliant, but it also opens the way towards hybrid architectures where you can benefit from the advantages of the cloud. In this regard, Microsoft offers a solution that provides a large set of tools to deal with the most common migration scenarios: Azure Migrate, which structures the migration process into distinct phases (discovery, assessment and migration).

Azure Arc can also be very useful for inventorying digital assets in heterogeneous and distributed environments.
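
For example, once servers are connected through Azure Arc, a Resource Graph query can extract the machines still reporting a Windows Server 2012 SKU. Below is a minimal sketch with the Python azure-mgmt-resourcegraph SDK; the subscription ID is a placeholder and the osSku property filter is an assumption to adapt to your own inventory:

```python
# Minimal sketch: list Azure Arc-enabled machines whose OS SKU still reports Windows Server 2012,
# using Azure Resource Graph. The subscription ID is a placeholder; adjust the KQL filter as needed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

credential = DefaultAzureCredential()
client = ResourceGraphClient(credential)

request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query=(
        "resources"
        " | where type == 'microsoft.hybridcompute/machines'"
        " | where tostring(properties.osSku) contains 'Windows Server 2012'"
        " | project name, resourceGroup, osSku = tostring(properties.osSku)"
    ),
)

response = client.resources(request)
for machine in response.data:
    print(machine["name"], machine["resourceGroup"], machine["osSku"])
```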

Adopting this strategy can be faster than upgrading systems and allows you to have more time to deal with software renewal. In this regard, the cloud allows you to have excellent flexibility and agility in testing applications in parallel environments.

Before starting the migration path to Azure, it is also essential to structure the networking of the hybrid environment appropriately and to evaluate the interactions with the other infrastructure components, to verify that the application can also work well in the cloud.

Migration to Azure can target IaaS virtual machines or, when there is a large number of systems to migrate from a VMware environment, Azure VMware Solution can be an option to consider for handling a massive migration quickly while minimizing the interruption of the services provided.

Extending Azure in your datacenter with Azure Stack HCI

Azure Stack HCI is the Microsoft solution that allows you to create a hyper-converged infrastructure (HCI) for running workloads in an on-premises environment and that provides a strategic connection to various Azure services. Azure Stack HCI was specifically designed by Microsoft to help customers modernize their hybrid datacenter, offering a complete and familiar Azure experience in an on-premises environment. For more information on the Microsoft Azure Stack HCI solution, I invite you to read this article or to view this video.

Azure Stack HCI allows you, just as in Azure, to receive free of charge the important security patches for Microsoft legacy products that are past their end of support, through the Extended Security Update (ESU) program. For further information you can consult this Microsoft document. This strategy allows you to have more time to undertake an application modernization process, without neglecting security aspects.

Application modernization

Under certain circumstances, an application modernization process could be undertaken, perhaps focused on the public cloud, with the aim of increasing innovation, agility and operational efficiency. Microsoft Azure offers the flexibility to choose from a wide range of options to host your applications, covering the spectrum of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Container-as-a-Service (CaaS) and serverless. In a journey to move away from legacy operating systems, customers can use containers even for applications not specifically designed for microservices-based architectures. In these cases, it is possible to implement a migration strategy for existing applications that involves only minimal changes to the application code or configuration, that is, only the changes strictly necessary for the application to be hosted on PaaS and CaaS solutions. To get some ideas about it, I invite you to read this article.

Steps to a successful transition

For companies intending to undertake one of the strategies listed, there are some important steps that need to be taken to ensure a successful transition.

Regardless of the strategy you decide to adopt, the advice is to make a detailed assessment, so you can categorize each workload by type, criticality, complexity and risk. This way you can prioritize and proceed with a structured migration plan.
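
As a purely illustrative example (the inventory, the 1-5 scales and the ordering criteria below are assumptions, not an official methodology), a categorized workload inventory can be turned into an ordered migration plan with a few lines of code:

```python
# Purely illustrative sketch: order a categorized workload inventory into migration waves.
# The inventory, the 1-5 scales and the ordering criteria are made-up assumptions,
# not an official methodology.
workloads = [
    {"name": "ERP",         "type": "IaaS candidate", "criticality": 5, "complexity": 4, "risk": 4},
    {"name": "Intranet",    "type": "PaaS candidate", "criticality": 2, "complexity": 1, "risk": 1},
    {"name": "File server", "type": "IaaS candidate", "criticality": 3, "complexity": 2, "risk": 2},
]

def migration_priority(workload):
    # Start with low-complexity, low-risk workloads to build confidence,
    # leaving the most critical and complex ones for later waves.
    return (workload["complexity"], workload["risk"], workload["criticality"])

for wave, w in enumerate(sorted(workloads, key=migration_priority), start=1):
    print(f"Wave {wave}: {w['name']} ({w['type']}) - "
          f"criticality {w['criticality']}, complexity {w['complexity']}, risk {w['risk']}")
```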

Furthermore, it is necessary to carefully evaluate the most suitable transition strategy considering how to minimize any disruption to company activities. This may include scheduling tests and creating adequate backup sets before migration.

Finally, once the migration is complete, it is important to activate a modern monitoring system to ensure that the application workload is stable and working as expected.
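
For instance, in Azure a first post-migration health check on an IaaS virtual machine can be performed by querying its platform metrics. The following is a minimal sketch based on the azure-monitor-query Python SDK; the resource ID is a placeholder and the metric choice is just an example:

```python
# Minimal sketch: read the average CPU of a migrated Azure VM over the last hour
# with the azure-monitor-query SDK. The resource ID is a placeholder.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

credential = DefaultAzureCredential()
client = MetricsQueryClient(credential)

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)

response = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            if point.average is not None:
                print(f"{point.timestamp}: {point.average:.1f}% CPU")
```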

Conclusions

The end of support for Windows Server 2012 and Windows Server 2012 R2 presents a challenge for many companies that still use these operating systems. However, it can also be seen as an opportunity to start an infrastructure or application modernization process, obtaining more modern resources and taking advantage of the opportunities they offer in terms of security, scalability and performance.

Maximize the performance of Azure Stack HCI: discover the best configurations for networking

Hyperconverged infrastructures (HCI) are increasingly popular as they simplify the management of the IT environment, reduce costs and scale easily when needed. Azure Stack HCI is the Microsoft solution that allows you to create a hyper-converged infrastructure for running workloads in an on-premises environment and that provides a strategic connection to various Azure services to modernize your IT infrastructure. Properly configuring Azure Stack HCI networking is critical to ensuring security, application reliability and performance. This article explores the fundamentals of Azure Stack HCI networking, examining the available networking options and the best practices for network design and configuration.

There are different network models that you can take as a reference to design, deploy and configure Azure Stack HCI. The following paragraphs show the main aspects to consider in order to direct the possible implementation choices at the network level.

Number of nodes that make up the Azure Stack HCI cluster

A single Azure Stack HCI cluster can consist of a single node and can scale up to 16 nodes.

If the cluster consists of a single server, at the physical level it is recommended to provide the following network components, also shown in the image:

  • single TOR switch (L2 or L3) for north-south traffic;
  • two to four teamed network ports to handle management and computational traffic, connected to the switch;

Furthermore, optionally it is possible to provide the following components:

  • two RDMA NICs, useful if you plan to add a second server to the cluster to scale your setup;
  • a BMC card for remote management of the environment.

Figure 1 – Network architecture for an Azure Stack HCI cluster consisting of a single server

If your Azure Stack HCI cluster consists of two or more nodes you need to investigate the following parameters.

Need for Top-Of-Rack (TOR) switches and their level of redundancy

For Azure Stack HCI clusters consisting of two or more nodes in a production environment, the presence of two TOR switches is strongly recommended, so that disruptions to north-south traffic can be tolerated in case of failure or maintenance of a single physical switch.

If the Azure Stack HCI cluster is made up of only two nodes, you can avoid providing switch connectivity for storage traffic.

Two-node configuration without TOR switch for storage communication

In an Azure Stack HCI cluster that consists of only two nodes, the storage RDMA NICs can be connected in full-mesh mode to reduce switch costs, perhaps reusing switches you already own.

In certain scenarios, such as branch offices or laboratories, the following network model with a single TOR switch can be adopted. This pattern provides cluster-wide fault tolerance and is suitable if interruptions in north-south connectivity can be tolerated when the single physical switch fails or requires maintenance.

Figure 2 – Network architecture for an Azure Stack HCI cluster consisting of two servers, without storage switches and with a single TOR switch

Although L3 SDN services are fully supported with this scheme, routing services such as BGP will need to be configured on the firewall device that sits on top of the TOR switch if the switch does not support L3 services.

If you want greater fault tolerance for all network components, the following architecture with two redundant TOR switches can be adopted:

Figure 3 – Network architecture for an Azure Stack HCI cluster consisting of two servers, without storage switches and redundant TOR switches

L3 SDN services are fully supported by this scheme. Routing services such as BGP can be configured directly on the TOR switches if they support L3 services. Network security features do not require additional configuration on the firewall device, since they are implemented at the virtual network adapter level.

At the physical level, it is recommended to provide the following network components for each server:

  • two to four teamed network ports to handle management and computational traffic, connected to the TOR switches;
  • two RDMA NICs in a full-mesh configuration for east-west storage traffic. Each cluster node must have a redundant connection to the other cluster node;
  • optionally, a BMC card for remote management of the environment.

In both cases, the following connectivity is required:

  • Management and computational network: speed of at least 1 Gbps (10 Gbps recommended); interface type RJ45, SFP+ or SFP28; two to four ports in teaming.
  • Storage network: speed of at least 10 Gbps; interface type SFP+ or SFP28; two standalone ports.
  • BMC network: speed Tbd; interface type RJ45; one port.
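
Purely as an illustration of how these minimums can be checked against a planned bill of materials, the following Python sketch compares a made-up per-node NIC inventory with the speeds indicated above (the inventory values are examples, not recommendations):

```python
# Illustrative sketch: compare a planned per-node NIC inventory (example data)
# against the minimum link speeds indicated above, expressed in Gbps.
MINIMUM_GBPS = {"management and computational": 1, "storage": 10}

planned_nics = {
    "management and computational": [10, 10],  # two teamed ports at 10 Gbps (example)
    "storage": [25, 25],                       # two standalone RDMA ports at 25 Gbps (example)
}

for role, speeds in planned_nics.items():
    minimum = MINIMUM_GBPS[role]
    for speed in speeds:
        status = "OK" if speed >= minimum else "below minimum"
        print(f"{role}: {speed} Gbps -> {status} (minimum {minimum} Gbps)")
```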

Two or more node configuration using TOR switches also for storage communication

When you plan an Azure Stack HCI cluster composed of more than two nodes, or if you don't want to preclude the possibility of easily adding more nodes to the cluster, storage traffic must also pass through the TOR switches. In these scenarios, a configuration can be adopted where dedicated network cards are maintained for storage traffic (non-converged), as shown in the following picture:

Figure 4 – Network architecture for an Azure Stack HCI cluster consisting of two or more servers, redundant TOR switches also used for storage traffic and non-converged configuration

At the physical level, it is recommended to provide the following network components for each server:

  • two teamed NICs to handle management and computational traffic. Each NIC is connected to a different TOR switch;
  • two RDMA NICs in standalone configuration. Each NIC is connected to a different TOR switch. SMB multi-channel functionality ensures path aggregation and fault tolerance;
  • optionally, a BMC card for remote management of the environment.

These are the connections provided:

  • Management and computational network: speed of at least 1 Gbps (10 Gbps recommended); interface type RJ45, SFP+ or SFP28; two ports in teaming.
  • Storage network: speed of at least 10 Gbps; interface type SFP+ or SFP28; two standalone ports.
  • BMC network: speed Tbd; interface type RJ45; one port.

Another possibility to consider is a "fully-converged" configuration of the network cards, as shown in the following image:

Figure 5 – Network architecture for an Azure Stack HCI cluster consisting of two or more servers, redundant TOR switches also used for storage traffic and fully-converged configuration

The latter solution is preferable when:

  • bandwidth requirements for north-south traffic do not require dedicated cards;
  • the switches have a limited number of physical ports;
  • you want to keep the costs of the solution low.

At the physical level, it is recommended to provide the following network components for each server:

  • two teamed RDMA NICs to handle management, computational and storage traffic. Each NIC is connected to a different TOR switch. SMB multi-channel functionality ensures path aggregation and fault tolerance;
  • optionally, a BMC card for remote management of the environment.

These are the connections provided:

  • Management, computational and storage network: speed of at least 10 Gbps; interface type SFP+ or SFP28; two ports in teaming.
  • BMC network: speed Tbd; interface type RJ45; one port.

SDN L3 services are fully supported by both of the above models. Routing services such as BGP can be configured directly on TOR switches if they support L3 services. Features related to network security do not require additional configuration for the firewall device, since they are implemented at the virtual network adapter level.

Type of traffic that must pass through the TOR switches

To choose the most suitable TOR switches, it is necessary to evaluate the network traffic that will flow through these devices, which can be divided into:

  • management traffic;
  • computational traffic (generated by the workloads hosted by the cluster), which can be divided into two categories:
    • standard traffic;
    • SDN traffic;
  • storage traffic.

Microsoft has recently changed its approach on this point. In fact, TOR switches are no longer required to meet every network requirement for every feature, regardless of the type of traffic they carry. Physical switches are now supported according to the type of traffic they handle, which allows you to choose from a larger number of network devices at a lower cost, but still of good quality.

This document lists the required industry standards for specific network switch roles used in Azure Stack HCI implementations; these standards help ensure reliable communication between nodes in Azure Stack HCI clusters. This section instead shows the switch models supported by the various vendors, based on the type of traffic expected.

Conclusions

Properly configuring Azure Stack HCI networking is critical to keeping the hyper-converged infrastructure running smoothly, with security, optimal performance and reliability. This article covered the basics of configuring Azure Stack HCI networking, analyzing the available network options. The advice is to always plan the networking aspects of Azure Stack HCI carefully, choosing the network option best suited to your business needs and following implementation best practices.