Category Archives: Cloud & Datacenter Management (2024-2025)

The Importance of GPUs in the Field of Artificial Intelligence and the Innovations Introduced in Windows Server 2025

The evolution of artificial intelligence (AI) technologies has driven a growing demand for computing power, essential for training machine learning and deep learning models and for running inference on them. In this context, GPUs (Graphics Processing Units) have established themselves as fundamental components, thanks to their ability to perform large-scale parallel computations extremely efficiently. With the upcoming releases of Windows Server 2025 and Azure Stack HCI 24H2, Microsoft introduces significant innovations that enable companies to fully harness the potential of GPUs, in AI and beyond. These new features simplify hardware resource management and provide an optimized platform for developing and deploying AI solutions at scale. In this article, we will explore the importance of GPUs in the AI ecosystem and analyze how Windows Server 2025 further enhances these capabilities, transforming how companies tackle the challenges and opportunities presented by AI.

Computing Power and GPU Optimization for Deep Learning on Virtual Infrastructures

Deep learning, an advanced branch of artificial intelligence that leverages deep artificial neural networks, requires a vast amount of computing power to function effectively. Training these models involves processing large volumes of data through multiple layers of interconnected nodes, each performing complex mathematical operations. While traditional CPUs excel at sequential data processing, they are not optimized for the massive number of parallel operations that deep learning models require.

In this context, GPUs (Graphics Processing Units) are particularly well-suited due to their ability to execute thousands of operations simultaneously. This makes GPUs ideal for training deep learning models, especially complex ones like convolutional neural networks (CNNs), which are widely used in image recognition. For example, training a CNN on a large dataset could take weeks on a CPU, while with the help of a GPU, the time required can be drastically reduced to just days or even hours, depending on the model’s complexity and the dataset’s size.

With the imminent release of Windows Server 2025 and Azure Stack HCI 24H2, Microsoft will offer its customers the ability to allocate an entire GPU’s capacity to a virtual machine (VM), which can run both Linux and Windows Server operating systems within a fault-tolerant cluster, thanks to Discrete Device Assignment (DDA) technology. This means that critical AI workloads for businesses can be reliably executed on a VM within a cluster, ensuring that, in the event of an unexpected failure or planned migration, the VM can be restarted on another node in the cluster using the GPU available on that node.
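On a Hyper-V host, a DDA assignment can be sketched with the standard Hyper-V and PnP PowerShell cmdlets below. This is an illustrative outline only: the "NVIDIA" device filter and the VM name are placeholders, and the actual location path must come from your own hardware after consulting the OEM/IHV compatibility guidance.

```powershell
# Find the GPU and its PCIe location path (the "NVIDIA" filter is just an example)
$gpu = Get-PnpDevice -Class Display |
    Where-Object { $_.FriendlyName -like "*NVIDIA*" }
$locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId `
    -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Disable the device on the host and dismount it from the host partition
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Assign the whole GPU to the VM (VM name is hypothetical)
Add-VMAssignableDevice -LocationPath $locationPath -VMName "AI-VM01"
```

Removing the device from the VM and remounting it on the host reverses the process (Remove-VMAssignableDevice, Mount-VMHostAssignableDevice).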

Microsoft recommends working closely with OEM (Original Equipment Manufacturer) partners and independent hardware vendors (IHVs) that produce GPUs to plan, order, and configure the systems needed to support the desired workloads with the right configurations and software. If GPU acceleration via DDA is desired, it is advisable to ask OEM and IHV partners for a list of GPUs that support DDA. To ensure the best possible performance, Microsoft also suggests creating a homogeneous GPU configuration across all servers in the cluster: the same GPU model installed, and the same number of partitions configured, on every server. For example, in a cluster of two servers each with one or more GPUs, all GPUs should be the same model, brand, and memory size, and the number of partitions on each GPU should be identical.

Scalability and Flexibility of GPUs in AI Computing Architectures

In addition to their extraordinary computational speed, GPUs also offer significant advantages in terms of scalability, a crucial factor in modern AI computing architectures. Often, the datasets used to train AI models are so vast that they exceed the computational capabilities of a single processor. In these cases, GPUs allow the workload to be distributed across multiple computing units, ensuring high operational efficiency and enabling the simultaneous processing of enormous amounts of data.

Another critical aspect of GPUs is their flexibility in handling a variety of workloads, ranging from real-time inference, used for example in speech recognition applications, to the training of complex models that require weeks of intensive computation. This versatility makes GPUs an indispensable tool not only for advanced research centers but also for commercial applications that require high performance on a large scale.

GPU Partitioning: Maximizing Efficiency and Resource Utilization

One of the most significant innovations in the field of GPUs is the concept of GPU Partitioning, which is the ability to divide a single GPU into multiple virtual partitions, each of which can be dedicated to different workloads. This technique is crucial for optimizing GPU resources, as it maximizes operational efficiency while minimizing waste. In the context of artificial intelligence, where computational requirements can vary significantly depending on the models used, GPU Partitioning offers the flexibility to dynamically allocate portions of the GPU to various tasks, such as training machine learning models, real-time inference, or other parallel operations. This approach is particularly advantageous in data centers, as it allows multiple users or applications to share the same GPU resources without compromising overall system performance.

The introduction of GPU Partitioning not only improves the flexibility and scalability of computing infrastructures but also helps reduce operational costs by avoiding the need to purchase additional hardware when not strictly necessary. Additionally, this technology promotes a more balanced use of resources, preventing situations of GPU overload or underutilization, contributing to more sustainable and efficient management of AI-related operations.

With the release of Windows Server 2025 Datacenter, Microsoft has integrated and enhanced support for GPU Partitioning, allowing customers to divide a supported GPU into multiple partitions and assign them to different virtual machines (VMs) within a fault-tolerant cluster. This means that multiple VMs can share a single physical GPU, with each receiving an isolated portion of the GPU’s capabilities. For example, in the retail and manufacturing sectors, customers can perform inferences at the edge using GPU support to obtain rapid results from machine learning models, results that can be used before the data is sent to the cloud for further analysis or continuous improvement of ML models.

GPU Partitioning utilizes the Single Root IO Virtualization (SR-IOV) interface, which provides a hardware-based security boundary and ensures predictable performance for each VM. Each VM can only access the GPU resources dedicated to it, with secure hardware partitioning preventing unauthorized access by other VMs.
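On hosts with a supported GPU, the partitioning workflow described above can be sketched with the Hyper-V PowerShell cmdlets below; the partition count and VM name are illustrative, and valid partition counts depend on the GPU model and driver.

```powershell
# List partitionable GPUs and the partition counts they support
Get-VMHostPartitionableGpu | Format-List Name, ValidPartitionCounts, PartitionCount

# Configure the same partition count on every host in the cluster (4 is an example)
Get-VMHostPartitionableGpu | Set-VMHostPartitionableGpu -PartitionCount 4

# Attach one GPU partition to a VM and verify it (VM name is hypothetical)
Add-VMGpuPartitionAdapter -VMName "Edge-VM01"
Get-VMGpuPartitionAdapter -VMName "Edge-VM01"
```

Keeping the partition count identical on all cluster nodes is what makes the homogeneous configuration, and therefore live migration across nodes, possible.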

Another significant development concerns live migration capability for VMs using GPU Partitioning. This allows customers to balance critical workloads across various cluster nodes and perform hardware maintenance or software updates without interrupting VM operations. During a planned or unplanned migration, the VMs can be restarted on different nodes within the cluster, using available GPU partitions on those nodes.

Finally, Microsoft has made Windows Admin Center (WAC) available to configure, use, and manage VMs that leverage virtualized GPUs, both in standalone configurations and in failover clusters. WAC centralizes the management of virtualized GPUs, significantly reducing administrative complexity.

Innovations and Future Prospects

The future of GPUs in artificial intelligence looks extremely promising. With the increasing complexity of AI models and the growing demand for solutions capable of leveraging real-time AI, the parallel computing power offered by GPUs will become increasingly essential. In particular, their ability to perform a large number of simultaneous operations on vast datasets makes them an indispensable component in cloud solutions.

The significant innovations in GPUs, supported by the upcoming releases of Windows Server 2025 and Azure Stack HCI 24H2, are the result of ongoing and close collaboration between Microsoft and NVIDIA. Microsoft Azure handles some of the world’s largest workloads, pushing CPU and memory capabilities to the limit to process enormous volumes of data in distributed environments. With the expansion of AI and machine learning, GPUs have become a key component of cloud solutions as well, thanks to their extraordinary ability to perform large-scale parallel operations. Windows Server 2025 will bring many benefits to the GPU sector as well, further enhancing features related to storage, networking, and the scalability of computing infrastructures.

Conclusions

The importance of GPUs in the field of artificial intelligence is set to grow exponentially, thanks to their ability to process large volumes of data in parallel with efficiency and speed. The innovations introduced in Windows Server 2025 and Azure Stack HCI 24H2 represent a significant step toward optimizing computing infrastructures, providing companies with advanced tools to manage and fully exploit GPU resources. These developments not only enhance the computing power necessary for AI but also introduce greater flexibility and scalability, essential for addressing future challenges. With the adoption of technologies like GPU Partitioning and support for live VM migration, Microsoft demonstrates its leadership in providing solutions that not only improve performance but also enhance the reliability and sustainability of AI-related business operations. The future prospects see GPUs playing an increasingly crucial role, not only in data centers but also in edge and cloud applications, ensuring that technological innovation continues to drive the evolution of AI across all sectors.

Everything you need to know about the new OEM Licensing model for Azure Stack HCI

Microsoft recently introduced a new OEM licensing model for Azure Stack HCI, designed to simplify the licensing process and offer numerous benefits. This new model, available through major hardware vendors like HPE, Dell, and Lenovo, provides companies with an additional option to manage their Azure Stack HCI licenses. In this article, we will explore the current licensing options in detail and the features of the new OEM license, highlighting the technical aspects and benefits for users.

Existing Licensing Options

Before diving into the new OEM licensing option, it is essential to understand the currently available licensing models for Azure Stack HCI. For all details on the Azure Stack HCI cost model, you can consult this article.

Overview of the New OEM License

The new OEM licensing option for Azure Stack HCI is a prepaid license available through specific hardware vendors, such as HPE, Dell, and Lenovo. Intended for Azure Stack HCI hardware, including Premier Solutions, Integrated Systems, and Validated Nodes, this license offers a pre-installed solution that is activated in Azure and remains valid for the lifetime of the hardware.

The Azure Stack HCI OEM license includes three essential components:

  • Azure Stack HCI: The foundational platform for hybrid cloud that enables running virtualized workloads.
  • Azure Kubernetes Services (AKS): The container orchestration service that simplifies the management and deployment of containerized applications.
  • VM and guest containers: Through Windows Server 2022 Datacenter, Windows Server VMs can be activated on an Azure Stack HCI cluster using generic keys for Automatic Virtual Machine Activation (AVMA), via Windows Admin Center or PowerShell.

This license ensures access to the latest versions of Azure Stack HCI and AKS, allowing for the use of unlimited VMs and containers.
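The AVMA-based guest activation mentioned above comes down to installing the generic key inside the VM. The key below is deliberately a placeholder: the edition-specific AVMA keys are published in Microsoft's activation documentation.

```powershell
# Inside the Windows Server 2022 guest VM: install the generic AVMA key
# (placeholder below; use the published key for your edition)
slmgr /ipk <AVMA-key-for-your-edition>

# Verify the activation status; AVMA activates automatically against the host
slmgr /dlv
```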

OEM License Features

The features of the Azure Stack HCI OEM license are as follows:

  • Inclusion of Azure Stack HCI and AKS: The license includes Azure Stack HCI and Azure Kubernetes Services (AKS) with unlimited virtual CPUs. This is a significant advantage compared to the Azure Hybrid Benefit, which limits the use of AKS to the number of licensed physical cores.
  • Physical core licensing: Each physical core in the server must be licensed. The base license covers up to 16 cores, with additional components available in two and four core increments for systems with more than 16 cores. For example, a 36-core system requires two 16-core licenses plus an additional four-core license. This license does not support a dynamic per-core model.
  • Prepaid and permanent license: This license does not require annual renewals or subscriptions. It is a prepaid license that remains valid for the lifetime of the hardware on which the Azure Stack HCI operating system is installed.
  • No support for mixed nodes: Currently, this license does not support environments with mixed nodes in the same Azure Stack HCI system. For more information, it is advisable to consult the documentation on mixed-node scenarios.
  • Non-transferable license: The license is tied to the original hardware on which the Azure Stack HCI operating system is pre-installed and cannot be transferred to different hardware or systems. This approach ensures that the license and its benefits remain specific to the initial hardware configuration.
  • Automatic activation: This pre-installed license does not include product keys or a Certificate of Authenticity (COA). The license is automatically activated once the device is registered in Azure. In the event of a failure requiring reinstallation, it is necessary to contact the OEM vendor.
  • No CAL requirements: For this specific license, no Device or User CAL is required.
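The per-core packaging described above can be sketched as a small function. This is purely an illustration of the stated example (a greedy composition of the 16-core base plus 16-, 4-, and 2-core add-ons), not an official licensing calculator; actual quotes should come from the OEM vendor.

```python
def license_packs(cores: int) -> list[int]:
    """Return an illustrative list of license pack sizes covering `cores`
    physical cores: a 16-core base license, topped up greedily with
    16-, 4-, and 2-core add-ons until every core is covered."""
    if cores < 1:
        raise ValueError("a server has at least one physical core")
    packs = [16]              # every server starts from the 16-core base license
    remaining = cores - 16
    for size in (16, 4, 2):
        while remaining >= size:
            packs.append(size)
            remaining -= size
    if remaining > 0:         # an odd leftover core still needs a 2-core add-on
        packs.append(2)
    return packs

# The article's example: a 36-core system
print(license_packs(36))  # two 16-core licenses plus one 4-core add-on
```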

Technical Details

The new OEM license is pre-installed on the hardware and automatically activates in Azure. This process eliminates the need for physical licenses or additional activation steps. When users connect Azure Stack HCI nodes to Azure, the system recognizes the OEM license and automatically activates the associated benefits.
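The registration step that triggers this automatic activation is typically performed with the Az.StackHCI PowerShell module; the subscription, resource, and region values below are placeholders for your own environment.

```powershell
# Run on a cluster node after installing the Az.StackHCI module
Register-AzStackHCI `
    -SubscriptionId "<subscription-id>" `
    -ResourceName "hci-cluster01" `
    -ResourceGroupName "rg-hci" `
    -Region "westeurope"
```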

To verify if you have an active OEM license for Azure Stack HCI, you can follow these steps:

  1. Access the Azure portal.
  2. Search for your Azure Stack HCI cluster.
  3. Under the cluster, select Overview to check the billing status.
    • If you have an active OEM license for Azure Stack HCI, the billing status should be OEM License, and the OEM license status should be Activated.

Figure 1 – Azure Stack HCI Billing status

For support with the Azure Stack HCI OEM license, you must first contact your OEM vendor. If support is not available from the vendor, it is advisable to open an Azure support request through the Azure portal.

Advantages of the New OEM Licensing Mechanism

The new OEM licensing option offers several significant advantages for Azure Stack HCI users:

  • Simplified licensing: Users do not need to manage separate licenses or worry about additional documentation. The license is embedded in the hardware, simplifying the entire process and reducing administrative complexity.
  • Different and more predictable cost model: By prepaying the license, users avoid recurring monthly or annual costs, which can result in significant long-term savings. Users benefit from a one-time purchase that includes hardware, software, and full support, simplifying IT resource procurement and management.
  • Unlimited use of AKS: The inclusion of unlimited virtual CPUs for Azure Kubernetes Services (AKS) is a substantial advantage, particularly for organizations that extensively use Kubernetes for containerized applications.
  • Operational efficiency: The automatic activation feature ensures that users can quickly and easily start using their Azure Stack HCI infrastructure without additional configuration or licensing steps, improving operational efficiency. Moreover, a single license covers Azure Stack HCI, AKS, and Windows Server 2022 as guest VMs, offering an integrated solution that simplifies overall license management.

Conclusion

The new OEM licensing model for Azure Stack HCI represents a new opportunity for licensing hybrid infrastructures. Through direct integration with major hardware vendors like HPE, Dell, and Lenovo, this solution offers a prepaid and permanent license, simplifying the purchasing process and reducing administrative complexity. The benefits include unlimited use of Azure Kubernetes Services, a more predictable cost model, and automatic activation that allows users to quickly start using their infrastructure. While this licensing model does not support mixed node environments and is non-transferable, it makes Azure Stack HCI an even more attractive choice for companies seeking efficiency and flexibility in managing Microsoft hybrid solutions.

The New Azure Arc Solution for Efficient Management of Multicloud Environments

Companies are increasingly adopting a multicloud approach to leverage the specific advantages offered by various cloud service providers. This strategy helps avoid vendor lock-in, improve resilience, and optimize costs by utilizing the best offers available on the market. However, managing resources distributed across multiple cloud platforms presents significant challenges, especially regarding inventory management, reporting, analysis, consistent resource tagging, and provisioning. In this article, we will examine how the Azure Arc Multicloud Connector can help overcome these challenges, offering centralized and efficient management of cloud resources.

Challenges in Multicloud Management

Managing a multicloud environment involves numerous challenges that organizations must address to ensure effective and smooth operations. Key difficulties include:

  • Inventory Management: Keeping track of all resources distributed across various clouds.
  • Reporting and Analysis: Conducting detailed reports and analysis of cloud resources.
  • Consistent Resource Tagging: Applying tags uniformly to resources across all cloud platforms.
  • Provisioning and Management Tasks: Performing provisioning and other management operations consistently across multiple clouds.

What is the Azure Arc-Enabled Multicloud Connector?

The Azure Arc-enabled Multicloud Connector is a solution that allows the connection of non-Azure public cloud resources to Azure, providing a centralized source for managing and governing cloud resources. Currently, it supports AWS as a public cloud. This connector simply uses API calls to collect and manage resources without the need to install appliances within AWS.

Figure 1 – Solution overview

NOTE: The Multicloud Connector can work alongside the AWS connector of Defender for Cloud. If desired, both connectors can be used for more comprehensive cloud resource management.

The following paragraphs describe the currently supported features: inventory and onboarding.

Inventory Features

The Inventory solution of the Multicloud Connector provides an up-to-date view of resources from other public clouds within Azure, offering a single reference point to view all cloud resources. Once the Inventory solution is enabled, the metadata of the source cloud’s resources are included in the resource representations in Azure, allowing the application of Azure tags and policies. Additionally, it enables querying all cloud resources through the Azure Resource Graph, for example, to find all Azure and AWS resources with a specific tag.

The Inventory solution regularly scans the source cloud to keep the view of resources in Azure updated.

Representation of AWS Resources in Azure

After connecting the AWS cloud and enabling the Inventory solution, the Multicloud Connector creates a new resource group using the naming convention aws_<AWS account ID>. The Azure representations of AWS resources are created in this group, under the AwsConnector resource namespace. Azure tags and policies can be applied to these resources. The resources discovered in AWS and projected into Azure are placed in Azure regions using a standard mapping scheme, allowing consistent management of AWS resources within the Azure ecosystem.

Periodic Synchronization Options

The periodic synchronization time selected during the Inventory solution configuration determines how frequently the AWS account is scanned and synchronized with Azure. Enabling periodic synchronization ensures that changes to AWS resources are automatically reflected in Azure. For example, if a resource is deleted in AWS, the corresponding resource in Azure will also be deleted. Periodic synchronization can be disabled during solution configuration, but this may result in an outdated representation of AWS resources in Azure.

Querying for Resources in Azure Resource Graph

Azure Resource Graph is a service designed to extend Azure resource management by providing efficient and performant resource exploration capabilities. Large-scale queries across a set of subscriptions help manage the environment effectively. Queries can be executed using the Resource Graph Explorer in the Azure portal, with query examples for common scenarios available for consultation.
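As a sketch, a Resource Graph query that finds both native Azure resources and projected AWS resources carrying a given tag might look like the following; the tag name and value are examples.

```kusto
resources
| where tags["environment"] =~ "production"
| project name, type, location, resourceGroup, subscriptionId
```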

Arc Onboarding Features

Arc onboarding automatically identifies EC2 instances running in the AWS environment and installs the Azure Connected Machine agent on the VMs, integrating them into Azure Arc. Currently, only AWS EC2 instances are supported. This simplified experience makes it possible to use Azure management services, such as Azure Monitor, on these VMs, providing a centralized way to manage Azure and AWS resources together.

Representation of AWS Resources in Azure

After connecting the AWS cloud and enabling the Arc Onboarding solution, the Multicloud Connector creates a new resource group following the naming convention aws_<AWS account ID>. When EC2 instances are connected to Azure Arc, their representations appear in this resource group. These resources are assigned to Azure regions using a standard mapping scheme. By default, all regions are scanned, but specific regions can be excluded during solution configuration.

Connectivity Method

During the Arc Onboarding solution creation, it is possible to choose whether the Connected Machine agent should connect to the Internet via a public endpoint or a proxy server. If the proxy server is chosen, the URL of the proxy server to which the EC2 instance can connect must be provided.
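When the proxy option is chosen, the Connected Machine agent on the EC2 instance is configured to reach Azure through that proxy. Done manually, the equivalent agent-side setting looks like this (the proxy URL is an example):

```shell
# Configure the Azure Connected Machine agent to use a proxy server
azcmagent config set proxy.url "http://proxy.contoso.com:8080"

# Verify connectivity to the required Azure endpoints
azcmagent check
```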

Periodic Synchronization Options

The periodic synchronization time selected during the Arc Onboarding solution configuration determines how frequently the AWS account is scanned and synchronized with Azure. Enabling periodic synchronization ensures that whenever a new EC2 instance that meets the prerequisites is detected, the Arc agent will be automatically installed. If preferred, periodic synchronization can be disabled during solution configuration. In this case, new EC2 instances will not be automatically integrated into Azure Arc, as Azure will not be able to scan for new instances.

Configuration and Operational Details

The initial configuration of the multicloud connector requires using the Azure portal to create the connector itself, specifying the resource group and AWS account to be integrated. Subsequently, it is necessary to download and apply the CloudFormation templates in AWS to configure the required IAM roles. Finally, it is important to configure the synchronization intervals to periodically update resource information, with a default interval of one hour.

Pricing

The Multicloud Connector is free but integrates with other Azure services that have their own pricing models. Any Azure service used with the Multicloud Connector, such as Azure Monitor, will be charged according to the specific service pricing. For more information, you can consult the official Azure cost page.

After connecting the AWS cloud, the Multicloud Connector queries the AWS resource APIs multiple times a day. These read-only API calls incur no costs in AWS but are logged in CloudTrail if a trail for read events has been enabled.

Conclusions

The Azure Arc Multicloud Connector represents an advanced and strategic solution for addressing the challenges of multicloud management. By centralizing the governance and inventory of cloud resources, companies can achieve a unified and consistent view of their distributed infrastructures. This tool not only improves operational efficiency through periodic synchronization and consistent resource tagging but also enables more secure management integrated with Azure services. Moreover, adopting the Azure Arc Multicloud Connector allows organizations to optimize costs and enhance resilience by leveraging the best offers from various cloud providers without the risk of vendor lock-in. Ultimately, this solution proves fundamental for companies aiming for efficient, innovative, and scalable multicloud management.

Announcing my eighth Microsoft MVP Award in Cloud and Datacenter Management

Hey everyone,

I’m beyond excited to share some fantastic news with you all – I’ve just received the Microsoft Most Valuable Professional (MVP) Award for the eighth year in a row in the Cloud and Datacenter Management category!

What is the Microsoft Most Valuable Professional Award?

For those who might not know, the Microsoft MVP Award is a yearly recognition given to outstanding technology community leaders from all around the world. These are folks who go above and beyond to share their expertise and help others in the tech community. MVPs actively contribute through forums, blogs, social media, and speaking engagements, providing feedback to Microsoft and sharing their knowledge with peers.

Heartfelt Thanks to Microsoft

A huge thank you to Microsoft for this incredible opportunity. The chance to directly interact with the Microsoft Product Groups has been invaluable. These teams are always ready to help us get the most out of their solutions and are genuinely interested in our feedback, which helps them continually improve Microsoft products and technologies. Their support has been a game-changer for me.

Appreciation for Fellow MVPs

I also want to give a shoutout to my fellow MVPs. Our relationships often grow beyond just professional collaboration and turn into real friendships. It’s truly an honor to work with such talented professionals from all over the globe. The knowledge exchange and shared experiences are not only inspiring but also incredibly enriching.

Leading the Growing Cloud Community

As the Community Lead for our ever-expanding Cloud Community, I couldn’t be prouder of what we’ve achieved together. The Cloud Community has become a crucial hub for anyone in Italy who’s into the cloud, especially Microsoft solutions. Founded on the passion and expertise of top IT professionals, our community focuses on spreading high-quality technical content in Italian to inform, educate, and inspire.

Our mission is clear: we want to guide businesses in adopting cloud solutions. We aim to be the go-to community in Italy for all things Microsoft cloud, offering articles, guides, and insights to assist companies in their digital transformation and technological innovation.

With several Microsoft MVPs and many other industry experts among our contributors, we ensure that our content is not only current and technologically advanced but also of the highest professional quality. Our community is the perfect place to share experiences, resolve doubts, and find inspiration.

Our journey began in 2011 with the Italian System Center User Group, founded by my friend and colleague Daniele Grandini. As technology evolved and cloud solutions expanded, we broadened our focus in 2016 to include Operations Management Suite (OMS). This was a significant step towards integrating System Center solutions with public cloud technologies.

In 2019, the Italian System Center and OMS User Group transformed into the Cloud Community. Since then, we’ve continued to promote and share content on cloud management and governance, constantly expanding our horizons. Being part of the WindowServer.it team, led by my friend Silvio Di Benedetto, strengthens our presence and solidifies our role as a key reference point for all Microsoft technologies.

Commitment to Ongoing Contribution

Looking ahead, I’m committed to continuing this journey, offering my technical expertise and dedication to both Microsoft and the technical communities. I plan to keep creating content to share my knowledge with all of you through blogs, technical articles, books, and speeches. This passion for technology and the desire to share knowledge drives me forward.

Thank you all for your support, and I can’t wait to see what we’ll achieve together in the coming months.

Best regards,

Francesco

Azure Arc Site Manager: the solution for managing and governing on-premises IT resources

Azure Arc Site Manager is an innovative solution for system administrators, designed to offer centralized management of IT resources, associating them with the physical or logical locations of customer infrastructures and facilitating governance of these resources. This tool simplifies connection monitoring, alert management, and resource update statuses, enabling administrators to apply fixes and updates uniformly across all resources. Azure Arc Site Manager allows for the management and monitoring of on-premises environments as Azure Arc sites, providing a tailored experience for on-premises scenarios where infrastructure is often managed within common physical boundaries, such as stores or factories. With a unified view of distributed resources, this tool becomes indispensable for improving operational efficiency and ensuring optimal control of IT resources, supporting effective and targeted governance. This article will outline the main features of this solution.

Customer Challenges

Customers often face numerous challenges in managing on-premises infrastructures, especially when dealing with different types of resources distributed across various locations. In many situations, this requires using different dashboards to monitor the status and security of resources. This fragmented approach not only complicates overall management but can also significantly reduce operational efficiency, increasing the risk of errors and delays in responding to critical issues.

Arc Site Manager Features

Arc Site Manager has been developed to address the challenges of managing IT infrastructures, offering a range of key features:

  • Centralized Resource Management: Provides a unified platform for managing resources associated with the physical or logical locations of customers’ IT infrastructures, allowing an overview of resources distributed across different sites.
  • Connection Monitoring: Simplifies monitoring connections between Azure resources, ensuring they are always operational and connected. Administrators can quickly identify resources that need attention.
  • Alert Management: Integrates Azure Monitor alerts, providing timely notifications regarding security issues, updates, and performance that require intervention.
  • Update Status: Offers an interface to monitor the update status of resources, ensuring they are always up to date with the latest patches.
  • Uniform Application of Fixes: Enables IT administrators to apply fixes and updates uniformly to all resources through the Arc Site Manager portal, simplifying management and reducing the risk of errors.
  • Customized Experiences: Provides personalized experiences using tools like Azure Monitor, Update Manager, and other services to be added in the future, allowing for targeted and specific resource management.
  • Logical Representation of Resources: Allows grouping of resources based on technical and business criteria, creating a logical representation consistent with the company’s needs.

IMPORTANT NOTE: These features do not replace the Azure Arc control plane but enhance it, providing a more user-friendly view of resources, useful for managing complex environments with multiple types of resources distributed across different physical sites.

Figure 1 – Solution Overview

Technical Aspects

When creating a site, it is associated with a resource group or a subscription. The Arc site automatically collects all supported resources within its scope. Arc sites have a 1:1 relationship with their scope: a site can be associated with only one resource group or subscription, and each resource group or subscription can be associated with only one site.
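The scoping rule above can be sketched as a small toy model (purely illustrative — Arc sites are created through the Azure portal, and the class and names below are not part of any Azure SDK):

```python
class SiteRegistry:
    """Toy model of the 1:1 rule: each Arc site is bound to exactly one
    scope (a resource group or a subscription), and each scope can host
    at most one site."""

    def __init__(self):
        self._scope_to_site = {}

    def create_site(self, site_name: str, scope: str) -> None:
        """Bind a site to a scope, refusing a scope that already has one."""
        if scope in self._scope_to_site:
            raise ValueError(
                f"Scope '{scope}' already has a site: "
                f"{self._scope_to_site[scope]}")
        self._scope_to_site[scope] = site_name

    def site_for(self, scope: str):
        """Return the site bound to a scope, or None."""
        return self._scope_to_site.get(scope)
```

For example, after binding a site to the resource group of a given factory, an attempt to create a second site on the same resource group would be rejected.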

Figure 2 – Arc Site Manager Console

Arc Site Manager allows customers managing on-premises infrastructures to view resources based on their site or physical location. However, sites do not necessarily have to be associated with a physical location; they can be used according to customer needs, grouping resources by function or type rather than location.

For more details, I invite you to watch this video with an interesting demo.

Future Developments

Future plans for Arc Site Manager include the ability to define dynamic site scopes, multiple hierarchical levels, more detailed monitoring views, and location-based capabilities. These new features aim to further enhance user experience and IT resource management, providing even more advanced and customized tools to meet the ever-evolving needs of IT administrators.

Conclusions

Azure Arc Site Manager represents a significant step forward in the management and governance of IT resources, offering administrators a centralized solution for monitoring, updating, and maintaining on-premises infrastructure. With advanced features such as centralized resource management, connection and alert monitoring, and the ability to apply updates uniformly, Arc Site Manager significantly simplifies the management of complex and distributed environments. Future expansions of its features promise to further improve operational efficiency and response capability to emerging challenges, making this tool highly valuable for companies looking to maintain optimal control over their IT resources.

Hyper-V: Evolution, Current Innovations, and Future Developments

Since the first release of Hyper-V in Windows Server 2008, Microsoft has never ceased innovating this virtualization solution, and it has no intention of stopping. Hyper-V represents a strategic technology for Microsoft, being widely used across various areas of the IT ecosystem: from Windows Server to Microsoft Azure, from Azure Stack HCI to Windows clients and even on Xbox. This article will explore the evolution of Hyper-V from its inception, examining the current innovations that make it one of the most robust and versatile virtualization platforms available on the market today. Additionally, we will look at future developments from Microsoft for Hyper-V, discovering how this technology will help evolve the landscape of modern IT infrastructures.

The Evolution of Microsoft Virtualization: From Virtual Server to Hyper-V

Microsoft boasts a long history in virtualization, starting with the release of Microsoft Virtual Server in the early 2000s. This product was designed to facilitate the execution and management of virtual machines on Windows Server operating systems. The subsequent version, Microsoft Virtual Server 2005, introduced significant improvements in terms of management and performance, allowing companies to consolidate servers and reduce operational costs. However, this approach was still limited compared to the growing virtualization needs.

With the introduction of Windows Server 2008, Microsoft launched Hyper-V, a fully integrated virtualization solution within the operating system, marking a significant qualitative leap from Virtual Server. Hyper-V offered more robust and scalable virtualization, with support for hypervisor-level virtualization, better resource management, virtual machine snapshots, and greater integration with Microsoft’s management tools, such as System Center.

In subsequent versions of Windows Server, Hyper-V was continuously improved, introducing advanced features such as Live Migration, support for large amounts of memory and high-performance processors, and virtual machine replication for disaster recovery. These developments have consolidated Hyper-V as one of the leading virtualization platforms in the market, effectively competing with third-party solutions like VMware and Citrix.

The Present of Hyper-V: Power and Flexibility

Hyper-V is a virtualization technology that uses the Windows hypervisor, requiring a physical processor with specific features. This hypervisor manages interactions between the hardware and virtual machines, ensuring an isolated and secure environment for each virtual machine. In some configurations, virtual machines can directly access physical graphics, network, and storage hardware.

Hyper-V Technology in Windows Server

Hyper-V is integrated into Windows Server at no additional cost. The main difference between the Standard and Datacenter editions concerns the number of allowed guest OS instances:

  • Windows Server Standard: Allows up to two instances of Windows Server guest OS environments.
  • Windows Server Datacenter: Allows an unlimited number of Windows Server guest OS instances.
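The licensing rule above can be sketched as a simple lookup (a hypothetical helper for illustration, not part of any Microsoft tooling):

```python
def allowed_guest_instances(edition: str):
    """Return the number of Windows Server guest OS environments
    permitted per licensed host, or None for unlimited."""
    rights = {
        "Standard": 2,        # up to two guest OS environments
        "Datacenter": None,   # unlimited guest OS environments
    }
    if edition not in rights:
        raise ValueError(f"Unknown edition: {edition}")
    return rights[edition]
```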

Hyper-V supports a wide range of guest operating systems, including various Linux environments such as Red Hat Enterprise Linux, CentOS, Debian, Oracle Linux, SUSE, and Ubuntu, with the relevant Integration Services included in the Linux kernel. Additionally, it supports FreeBSD starting from version 10.0.

Windows Server Datacenter, with Hyper-V, also provides access to advanced technologies like Storage Spaces Direct and software-defined networking (SDN), significantly enhancing virtualization and resource management capabilities.

Advantages of Hyper-V in Windows Server:

  • Effective Hardware Utilization: Allows server and workload consolidation, reducing the number of physical computers needed and optimizing hardware resource use.
  • Improved Business Continuity: Minimizes downtime through synergy with other Microsoft solutions, ensuring greater service availability and reliability.
  • Private Cloud Creation: Facilitates the creation and expansion of private and hybrid clouds with flexible and cutting-edge solutions.
  • Efficient Development and Testing Environments: Enables the reproduction of computing environments without additional hardware, making development and testing processes faster and more cost-effective.

The Hypervisor in the Azure Ecosystem

Azure uses Microsoft Hyper-V as the hypervisor system, demonstrating the importance and reliability of this technology for Microsoft itself, which continues to optimize it constantly. Hyper-V offers a range of advanced features that ensure a secure and shared virtualization environment for multiple customers. Among these, guest partitions with separate address spaces allow operating systems and applications to run in parallel with, and isolated from, the host operating system. The root partition, or privileged partition, has direct access to physical devices and peripherals, sharing them with guest partitions through virtual devices. These elements ensure a secure and reliable environment for managing virtual machines on Azure.

Hyper-V: More Than Just a Virtualization Tool

Hyper-V is not only a powerful virtualization tool but also essential for ensuring the security of various environments. In fact, Virtualization-Based Security (VBS) leverages hardware virtualization and the hypervisor to create an isolated virtual environment, which acts as a “root of trust” for the operating system, even if the kernel is compromised. Windows uses this isolated environment to host various security solutions, offering them additional protection against vulnerabilities and preventing the use of exploits that might try to bypass existing protections. VBS imposes restrictions to protect vital system and OS resources, as well as safeguard security aspects like user credentials.

Hyper-V is also used for containers, offering isolation that ensures high security and greater compatibility between different host and container versions. Thanks to Hyper-V isolation, multiple container instances can run simultaneously on a host, with each container operating within a virtual machine using its own kernel. The presence of the virtual machine provides hardware-level isolation between each container and the container host.

Hyper-V and Azure Stack HCI

Azure Stack HCI and Hyper-V in Windows Server are two fundamental pillars in Microsoft’s virtualization solution offerings, each designed to meet different needs within the IT landscape. While Azure Stack HCI positions itself as the cutting-edge solution for hybrid environments, offering advanced integrations with Azure services for optimized management and scalability, Hyper-V in Windows Server remains a solid choice for organizations requiring more traditional virtualized solutions, with particular attention to flexibility and management in disconnected scenarios. The choice between these two solutions depends on specific virtualization needs, the organization’s cloud strategy, and the need for access to advanced management and security features.

In this regard, it is important to note that Azure Stack HCI is built on proven technologies, including Hyper-V, and meets advanced security requirements for virtualization thanks to integrated support for Virtualization-Based Security (VBS).

The Future of Hyper-V: Innovations and Prospects

The new version of Windows Server, named “Windows Server 2025,” is expected this fall. Microsoft has not yet announced an official release date, but its predecessor, Windows Server 2022, became generally available on September 1, 2021; if Microsoft follows a similar schedule, a fall release is likely. This version will include a new release of Hyper-V with significant new features.

Indeed, Hyper-V in Windows Server 2025 will introduce support for GPU Partitioning (GPU-P), allowing a GPU to be shared among multiple virtual machines, with full support for Live Migration and cluster environments. GPU-P will even enable the Live Migration of VMs with partitioned GPUs between two standalone servers, without the need for a cluster, making it ideal for specific test and development environments. Additionally, improved support for Discrete Device Assignment (DDA) and the introduction of GPU pools for high availability will further enhance Hyper-V’s capabilities.
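The sharing idea behind GPU-P can be illustrated with a toy allocator (purely conceptual — real GPU partitioning is configured through Hyper-V management tools, and the equal-split assumption here is for illustration only):

```python
def assign_gpu_partitions(vms, partition_count):
    """Split one physical GPU into `partition_count` equal partitions
    and assign one to each VM, mirroring the GPU-P idea of sharing a
    GPU among multiple virtual machines.

    Raises ValueError if more VMs request a partition than exist.
    """
    if len(vms) > partition_count:
        raise ValueError("Not enough GPU partitions for all VMs")
    fraction = 1.0 / partition_count  # share of the GPU per partition
    return {vm: fraction for vm in vms}
```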

Moreover, Windows Server 2025 will introduce “Workgroup Clusters,” simplifying Hyper-V deployments in various scenarios. Until Windows Server 2022, deploying a cluster required Active Directory (AD), complicating implementations in environments where an AD infrastructure was not always available. With Windows Server 2025, it will be possible to deploy “Workgroup Clusters” with Hyper-V that do not require Active Directory but use a certificate-based solution, significantly simplifying deployment.

For more information on the new features of Windows Server 2025, you can consult this article: Windows Server 2025: the arrival of a new era of innovation and security for server systems.

Conclusion

Hyper-V has proven to be a valuable and continuously evolving virtualization solution in the IT landscape. From its introduction with Windows Server 2008 to the innovations planned for Windows Server 2025, Hyper-V has maintained a prominent position thanks to the constant introduction of advanced features and improvements in performance, management, and security. New features such as GPU Partitioning and Workgroup Clusters are just a few examples of how Microsoft continues to invest in this technology to meet the increasingly complex needs of modern IT infrastructures. The integration of Hyper-V in various environments, from the hybrid cloud of Azure Stack HCI to traditional virtualization servers, demonstrates its versatility and strategic importance. Looking ahead, it is clear that Hyper-V will remain a key element in Microsoft’s virtualization and cloud computing strategies, continuing to offer robust and innovative solutions for the challenges of IT infrastructures.

Evolve Business Continuity and Disaster Recovery with Azure’s Business Continuity Center

In today’s context, where cybersecurity threats and the need for operational continuity are constantly increasing, companies must adopt modern solutions to ensure data protection and recovery. Azure’s Business Continuity Center (ABCC) offers an innovative response to these challenges, providing an integrated platform to manage business continuity and disaster recovery. With a range of advanced features, ABCC enables the identification and resolution of system protection gaps, simplifies backup and recovery operations, and enhances the overall security posture of the organization. In this article, we will explore the main features of Azure’s Business Continuity Center and how it can evolve the management of operational continuity in modern businesses.

Challenges in Managing Distributed Environments

Managing IT environments distributed between on-premises and cloud infrastructures can present numerous challenges, making it complex to protect and recover data consistently and efficiently. Key challenges include:

  • Distributed nature of data: Managing data across on-premises and cloud environments creates complexity.
  • Fragmented experiences: Inconsistent experiences between Azure solutions and services lead to management difficulties.
  • Monitoring and recovery: Ensuring compliance and recovering data across various solutions can be complicated.
  • Unified objectives: Maintaining unified objectives for entire applications, rather than individual workloads, is challenging.
  • Consolidation needs: Consolidating BCDR strategies from fragmented experiences for greater efficiency is sought.

Key Features of the Business Continuity Center

Azure’s Business Continuity Center is the advanced version of the previous BCDR Center, now offering a more powerful and sophisticated platform. This center provides a wide range of features designed to help customers meet their security and protection needs. Below is a summary of the main features and benefits offered by this advanced solution.

Centralized Management Interface

The Business Continuity Center provides a unified platform within the Azure portal to manage and monitor backup and disaster recovery processes, eliminating the need to switch between multiple dashboards.

  • Unified view: Provides a unified view of all resources, both on-premises and cloud, simplifying monitoring and management.
  • Holistic monitoring: Offers actionable insights and notifications to quickly detect issues and take corrective actions.

Automated and Simplified Operations

Onboarding is very simple, requiring no configuration or prerequisites; just search for Business Continuity Center in the Azure portal. ABCC uses Azure policies to automate the protection of new resources. Azure offers built-in policies that automatically configure backups for newly created resources. These policies can be assigned to specific scopes such as subscriptions, resource groups, or management groups, ensuring that any new resource within these scopes is automatically protected without manual intervention.

Improved Security Posture

ABCC assesses the security configurations of resources, providing a security level score (e.g., poor, fair, excellent) and guidance on how to improve security against ransomware and other threats. Key security settings highlighted to protect backup data include:

  • Soft delete: Ensures that even if backup data is deleted, it remains recoverable for a specified period, providing an additional layer of protection against accidental or malicious deletions.
  • Immutability: Ensures that backup data cannot be altered or deleted within a specified retention period, protecting it from tampering and ransomware attacks.
  • Multi-user authorization: Requires multiple users to authorize critical operations, reducing the risk of unauthorized changes or deletions.
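The soft-delete guarantee can be sketched as a minimal check (a hypothetical helper; the 14-day default window used here is an assumption for illustration, not a statement of the product's configuration):

```python
from datetime import datetime, timedelta

def is_recoverable(deleted_at: datetime, now: datetime,
                   soft_delete_days: int = 14) -> bool:
    """A soft-deleted backup stays recoverable until its retention
    window (soft_delete_days after deletion) has elapsed."""
    return now <= deleted_at + timedelta(days=soft_delete_days)
```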

Optimized Data Protection and Governance

ABCC includes a compliance view showing how many resources comply with assigned backup policies and how many do not. This helps administrators ensure that all necessary resources are protected according to desired policies. Administrators can use policies to automate backups based on specific tags assigned to resources. For example, a policy can be set to back up all virtual machines with a specific tag (e.g., “production”).
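The tag-driven selection described above can be sketched as follows (the VM records and tag name are illustrative, and this is not a real Azure API — policy assignment happens in the Azure portal or via Azure Policy):

```python
def vms_covered_by_policy(vms, required_tag):
    """Return the names of VMs carrying the tag that a tag-based
    backup policy targets, e.g. 'production'."""
    return [vm["name"] for vm in vms if required_tag in vm.get("tags", [])]
```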

Additionally, it consolidates alerts and metrics for easier tracking of backup and disaster recovery operations.

Support for Hybrid Environments

ABCC allows users to manage both Azure and non-Azure resources, providing information on protection status and compliance for both environments. It also offers flexible protection strategies, including the use of multiple solutions (e.g., Azure Backup and Site Recovery) to ensure regional redundancy.

Conclusion

Azure’s Business Continuity Center represents a significant step forward in managing operational continuity and disaster recovery for organizations using Azure solutions. Integrated into the platform, this center simplifies backup and recovery operations through advanced features and a centralized management interface. It improves security posture, automates backup policies, and offers holistic management of hybrid resources, enabling companies to consolidate their business continuity and disaster recovery strategies. This reduces operational complexities and ensures optimal data protection. With Azure’s Business Continuity Center, organizations can address the challenges of data protection and recovery more efficiently and reliably, ensuring operational continuity even in critical situations. The product has recently been released, so further developments are expected. It will be interesting to see if Microsoft decides to integrate it with third-party solutions as well.

5 reasons to choose Azure VMware Solution over other VMware solutions in the cloud

Broadcom’s acquisition of VMware is causing significant upheaval among organizations that use VMware solutions, pushing them to explore alternatives to counteract changes in licensing policies and uncertainties about the continuity of products and services. In this context of uncertainty, VMware solutions on public clouds are gaining relevance as valid options to consider in certain scenarios. Microsoft, through Azure VMware Solution (AVS), offers a promising option. However, it is essential to recognize that similar alternatives are also offered by other cloud giants such as AWS, Google Cloud, and Oracle Cloud. This article aims to analyze the unique advantages of AVS, demonstrating why it can be considered the most advantageous choice for organizations in this delicate transition period.

Use Cases for Azure VMware Solution

Azure VMware Solution (AVS) is not suitable for all types of customers but can be ideal in specific adoption scenarios that require particular features and benefits. The main scenarios in which AVS is chosen include:

  • Disaster Recovery and Business Continuity: AVS offers interesting features for those who intend to undertake a path towards disaster recovery and business continuity.
  • Expansion, reduction, or consolidation of the datacenter: whether it’s about expanding existing capacity, reducing physical footprint, or consolidating infrastructures, AVS can facilitate these processes.
  • Simple and fast migration of workloads to Azure: for companies seeking a rapid and seamless transition of their existing VMware workloads to the cloud, AVS offers an optimal solution without the need for complex new configurations.
  • Application Modernization: although less common, application modernization becomes an accessible possibility once the AVS environment is operational. This scenario allows for agile leveraging of Azure’s extensive service ecosystem to innovate and improve existing applications.

These scenarios demonstrate how AVS is particularly suited for large companies that require specific continuity, scalability, and modernization solutions within their VMware ecosystem.

Key Benefits of Azure VMware Solution

1. Azure Hybrid Benefit

One of the main benefits of choosing AVS is the Azure Hybrid Benefit. This program allows companies to use their existing Windows Server and SQL Server licenses with Software Assurance to save significantly on costs. This approach not only reduces expenses but also maximizes the investments already made in software licenses, providing a substantial economic advantage over other platforms that do not offer similar options.

2. Free Extended Security Updates (ESU)

Azure also stands out for offering free Extended Security Updates (ESU) for Windows Server and SQL Server 2012 and 2012 R2, extending protection for up to three years beyond the end of each product’s extended support, at no additional cost. This opportunity is particularly relevant for companies that continue to use legacy applications, representing an exclusive advantage over competitors like AWS and Google. ESUs act as a temporary bridge to ensure security during the transition period towards more modern and supported platforms.

3. Integration with other Azure services

Another advantage of Azure VMware Solution (AVS) is its native integration with a wide variety of Azure services, including those related to artificial intelligence through Azure AI. This synergy allows companies to easily integrate advanced AI features into their applications, leveraging Azure’s infrastructure to innovate and enhance their service offerings.

4. Global availability

In terms of geographical availability, AVS has a significantly broader presence compared to competing solutions, with 30 public cloud regions available, including the North Italy region. This number is higher compared to competitors, with VMware Cloud on AWS available in 23 regions and Google Cloud VMware Engine in 19. This extensive network of available regions offers greater flexibility and facilitates better proximity to customers, effectively meeting the local needs of companies.

5. Price protection and savings

Azure promotes the adoption of Azure VMware Solution (AVS) with advantageous price protection policies and saving opportunities. Companies can take advantage of the Reserved Instances option to fix prices for periods of one, three, or five years, thus ensuring predictable costs in the long term. Furthermore, until December 31, 2024, a special offer provides a 20% discount on the purchase of new annual Reserved Instances for Azure VMware Solution. It is important to note that the option for five-year Reserved Instances will only be available until the same date, offering an additional opportunity to plan long-term investments under economically favorable conditions.
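As a worked example of the promotional discount (the hourly rate below is an assumption chosen for illustration, not a published Azure price):

```python
def annual_ri_cost(hourly_rate: float, promo_discount: float = 0.20) -> float:
    """Annual cost of one reserved node at a given hourly rate,
    after applying the promotional discount (20% by default,
    matching the offer described above)."""
    hours_per_year = 24 * 365
    return hourly_rate * hours_per_year * (1 - promo_discount)

# e.g. at a hypothetical $8/hour reserved rate:
# annual_ri_cost(8.0) ≈ 56,064 instead of 70,080 without the promo.
```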

Conclusion

Choosing Azure VMware Solution over the offerings of other cloud service providers is not just a matter of comparing technical features but also of evaluating economic benefits, security, integration, and global availability. For companies looking to optimize their VMware investment in the cloud, AVS represents a highly advantageous solution, leveraging the Azure ecosystem to provide superior service. With these strengths, Azure positions itself as a leader in the transition towards VMware-based cloud environments.

Business Continuity and Disaster Recovery (BCDR) Strategies for Azure Stack HCI

Azure Stack HCI is a cutting-edge solution in the hyper-converged infrastructure landscape, designed to offer businesses the flexibility to integrate their on-premises infrastructure with the capabilities of Azure cloud. This platform stands out for its ability to optimize resources, enhance operational efficiency, and ensure simplified management through advanced virtualization, storage, and networking technologies. In an increasingly digitalized context, where operational continuity and rapid response capabilities to potential disasters are essential, Azure Stack HCI emerges as the ideal solution to meet these challenges, ensuring organizations remain resilient, operational, and competitive, even in the face of unforeseen events and calamities. This article aims to explore the main Business Continuity and Disaster Recovery (BCDR) strategies that can be implemented with Azure Stack HCI, highlighting how this platform can be a fundamental element for a robust IT infrastructure.

Overview of Azure Stack HCI

Azure Stack HCI is an innovative solution from Microsoft that allows the implementation of a hyper-converged infrastructure (HCI) in an on-premises environment, while simultaneously providing a strategic connection to Azure services. This platform supports Windows and Linux virtual machines, as well as containerized workloads, along with their storage. As a hybrid product par excellence, Azure Stack HCI enhances integration between on-premises systems and Azure, offering access to various cloud services, including monitoring and management.

This hybrid model simplifies the adoption of advanced scenarios like disaster recovery, cloud backup, and file synchronization, facilitating the expansion of business operations into the cloud as needed. The main advantages of Azure Stack HCI include reduced IT complexity, cost optimization through more efficient resource use, and the ability to rapidly adapt to the continuously evolving business needs.

Figure 1 – Overview of Azure Stack HCI

For a detailed exploration of the Microsoft Azure Stack HCI solution, I invite you to read this article or view this video.

The Importance of Business Continuity and Disaster Recovery

The strategies of Business Continuity and Disaster Recovery are crucial in the context of Azure Stack HCI for several reasons.

Having solid BC and DR strategies ensures that, even in the face of hardware failures, natural disasters, cyberattacks, or other forms of disruptions, critical operations can continue without substantial interruptions. This not only protects the reputation and continuity of the business, but also ensures that critical data is protected and recoverable, minimizing the risk of financial and data loss.

Moreover, in an environment increasingly dependent on data and applications for daily operations, IT resilience becomes a competitive factor. Implementing effective BC and DR strategies in Azure Stack HCI allows demonstrating reliability and resilience to stakeholders, including customers, partners, and employees, strengthening confidence in the operational model.

For these reasons, BC and DR are fundamental elements of the IT strategy in Azure Stack HCI, ensuring that business operations can withstand and quickly recover from disruptions, thus protecting the operational integrity of the organization.

Risk Assessment and Business Impact

In the realm of IT infrastructure management, the ability to anticipate and effectively respond to potential risks is crucial for maintaining business continuity. The optimal adoption of Azure Stack HCI requires a thorough analysis and a well-defined mitigation strategy. In this section, we explore the essential steps for identifying risks, assessing business impact, and establishing recovery priorities, key elements for successfully implementing an effective Business Continuity and Disaster Recovery (BCDR) strategy in the Azure Stack HCI environment.

Risk Identification

Risk assessment for the Azure Stack HCI environment must rely on meticulous analysis to identify potential risks that can threaten the integrity and operational continuity of the infrastructure. These risks can vary from natural disasters such as floods and earthquakes to hardware failures, network disruptions, cyberattacks, and software issues. It is essential to perform a targeted assessment to identify and classify risks, thus creating a solid foundation for strategic planning and mitigation.

Business Impact Analysis

Next, it is necessary to proceed with assessing the impact that each identified risk can have on business operations. This process, known as Business Impact Analysis (BIA), focuses on the extent of disruption each risk can cause, evaluating consequences such as loss of critical data, disruption of essential services, financial impact, and loss of reputation. The goal is to quantify the Maximum Tolerable Downtime (MTD) for each critical business function, in order to establish recovery priorities and the most appropriate response strategies.

Recovery Priorities

Based on the Business Impact Analysis, recovery priorities are established to ensure that resources and efforts are focused on restoring the most critical functions for business operations. This approach ensures that recovery time objectives (RTOs) and recovery point objectives (RPOs) are aligned with business needs and expectations.
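The prioritization step can be sketched as a simple sort on the MTD values produced by the BIA (workload names and values below are illustrative):

```python
def recovery_order(workloads):
    """Given (name, mtd_hours) pairs from the Business Impact Analysis,
    return workload names ordered for recovery: the shorter the Maximum
    Tolerable Downtime, the earlier the workload must be restored."""
    return [name for name, mtd in sorted(workloads, key=lambda w: w[1])]

# Example:
# recovery_order([("reporting", 72), ("erp", 4), ("email", 24)])
# -> ["erp", "email", "reporting"]
```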

Business Continuity and Disaster Recovery Strategies

The Business Continuity strategies for Azure Stack HCI aim to create a highly available and resilient environment, thus ensuring the continuity of business activities. Concurrently, the Disaster Recovery (DR) strategies are designed to ensure a quick and efficient resumption of IT operations following critical events. In the following paragraphs, we explore the key aspects to consider for effectively implementing these strategies.

Redundancy and High Availability

Redundancy and high availability are fundamental components of Business Continuity strategies in Azure Stack HCI. Implementing redundancy means duplicating critical system components, such as servers, storage, and network connections, to ensure that in the event of a component failure, another can take its place without interruption. Azure Stack HCI supports high availability configurations through failover clusters, where computing and storage resources are distributed across multiple nodes. In case of a node failure, workloads are automatically shifted to other available nodes in the cluster, thus maintaining operations without downtime. This configuration not only protects against hardware failures but also ensures resilience against operating system-level disruptions.

Backup and Recovery

Regarding backup and recovery, it is essential to implement a strategy that ensures data protection and the ability to quickly restore data after an interruption. Azure Stack HCI integrates with most backup solutions, ensuring security and reducing the risk of data loss. It is recommended to schedule regular backups, adapting them to the frequency of data changes and specific business needs. Additionally, it is advised to regularly test restores to ensure that data can indeed be recovered within the time specified by the Recovery Time Objective (RTO).
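The relationship between backup frequency and the tolerated data-loss window can be expressed as a minimal check (a conceptual sketch, not part of any backup product; values in the example are illustrative):

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """A backup schedule can satisfy a Recovery Point Objective only if
    backups occur at least as often as the maximum tolerated window of
    data loss; otherwise the worst-case loss exceeds the RPO."""
    return backup_interval_hours <= rpo_hours
```

For instance, a 4-hour backup interval satisfies an 8-hour RPO, while a daily backup does not.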

Operational Continuity Testing

To validate the effectiveness of continuity strategies, it is crucial to regularly conduct operational continuity tests. These tests not only include backups and restores but also assess the ability of the infrastructure to function in conditions of partial or total failure. It is important to conduct targeted tests during the initial validation phase of the environment and to repeat them periodically in different scenarios to ensure that redundancy mechanisms function as expected.

Disaster Recovery Sites and Processes

Azure Stack HCI supports various disaster recovery site configurations to increase resilience. On-premises disaster recovery sites can be configured through stretched clusters that distribute the workload across multiple geographic sites, ensuring operational continuity even in the event of a complete failure of one of the sites.

Figure 2 – Comparison of types of stretched clusters

Alternatively, disaster recovery sites on Azure offer the flexibility to utilize cloud capacity for rapid recovery, enabling effective management of Disaster Recovery (DR) with virtual resources that can be quickly scaled.

Figure 3 – Hybrid features of Azure Stack HCI with Azure services

The disaster recovery process in Azure Stack HCI must be designed to ensure a quick and efficient resumption of IT operations after a critical event. This may include configuring failover mechanisms that leverage specific solutions, such as Azure Site Recovery (ASR), to orchestrate the recovery of virtual machines and services. With ASR, recovery can also be tested in a sandbox environment, thus ensuring the integrity of the process without impacting the production environment.

Automation and Documentation

Automation plays a key role in disaster recovery processes for Azure Stack HCI. By using tools such as Azure Site Recovery and Azure Automation, the client can automate the failover and failback process, reducing human error and accelerating recovery times. Automation ensures that each step of the DR plan is executed consistently and in accordance with defined standards.
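As a rough illustration of why automation reduces human error, the sketch below models a DR plan as an ordered list of steps executed the same way every run, with failback as the reverse sequence. The step names are invented for the example; in practice the orchestration is handled by tools such as Azure Site Recovery and Azure Automation.

```python
# Hypothetical sketch of an automated DR runbook: every step of the plan is
# executed in a fixed order, stopping at the first failure, so failover and
# failback are consistent and repeatable. Step names are illustrative.

FAILOVER_STEPS = [
    "quiesce-primary",
    "replicate-final-delta",
    "start-secondary",
    "redirect-dns",
]

def run_plan(steps, execute):
    """Run each step in order; return (completed_steps, failing_step_or_None)."""
    completed = []
    for step in steps:
        if not execute(step):
            return completed, step  # stop and report the step that failed
        completed.append(step)
    return completed, None

def failback_steps(failover_steps):
    """Illustrative simplification: failback reverses the failover sequence."""
    return list(reversed(failover_steps))
```

Because the plan is data rather than a manual checklist, every execution follows the same sequence, and a failed step is reported instead of being silently skipped.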

Concurrently, detailed documentation of all disaster recovery procedures is essential. This should include recovery plans, system configurations, operational instructions, and key contacts. Documentation must be easily accessible and regularly updated to reflect any changes in the infrastructure or procedures. Having comprehensive and up-to-date documentation is crucial for ensuring an effective response during a disaster and for facilitating ongoing reviews and improvements to the DR plan.

Monitoring and Management Tools

The management of Azure Stack HCI is conducted using widely recognized tools such as Windows Admin Center, PowerShell, System Center Virtual Machine Manager, and third-party applications. The integration between Azure Stack HCI and Azure Arc allows for extending cloud management practices to on-premises environments, significantly simplifying use and monitoring. In particular, the Azure Stack HCI Insights solution offers an in-depth view of the health, performance, and utilization of Azure Stack HCI clusters.

Figure 4 – Azure Stack HCI monitoring

These tools provide detailed and simplified management of the platform, including configuration and monitoring of BCDR functions, facilitating daily operations and ensuring a timely response in case of emergencies.

Conclusions

Business Continuity and Disaster Recovery strategies are essential in the context of Azure Stack HCI: they not only protect businesses from interruptions and disasters but also drive innovation and operational efficiency. Integration with Azure services enhances the resilience and risk management capabilities of Azure Stack HCI. The platform offers a solid architecture and integrates with advanced backup and recovery features, supporting businesses in ensuring data continuity and integrity. Azure Stack HCI thus proves to be not only a modern infrastructure solution but also a pillar of corporate IT resilience.

Strategic Integration Between Azure Stack HCI and Azure Virtual Desktop

In the current context of continuous technological evolution, the importance of resilient, scalable, and secure infrastructure solutions has never been more apparent. Microsoft's Azure Stack HCI emerges as a key player in this landscape, offering a powerful hybrid platform that bridges on-premises environments and the cloud. With the integration of Azure Virtual Desktop (AVD), this solution becomes even more strategic for companies looking to navigate the complexities of desktop and application virtualization, extending the capabilities of Microsoft's managed cloud service to the hybrid cloud environment. Through this approach, organizations can now deploy virtual desktops and applications more efficiently, while ensuring low-latency connectivity and access to Azure's managed services for leading-edge management, security, and scalability. This article will explore in detail the features, benefits, and innovations of Azure Virtual Desktop on Azure Stack HCI, providing a comprehensive overview of how these technologies can transform company IT infrastructures to better face the challenges of the modern work world.

Overview of Azure Stack HCI and Azure Virtual Desktop

What is Azure Stack HCI?

Azure Stack HCI is an innovative solution from Microsoft that enables the implementation of a hyper-converged infrastructure (HCI) for running workloads on-premises while maintaining a strategic connection to Azure services. This system eliminates the need for various traditional hardware components, opting instead for a software solution that integrates computing, storage, and networking into a single platform. This marks an evolution from traditional “three-tier” infrastructures, characterized by network switches, appliances, physical systems with hypervisors, storage fabric, and SAN, to a more simplified and efficient solution. Azure Stack HCI offers an infrastructure powered by a hyper-converged model, which supports both Windows and Linux virtual machines as well as containerized workloads, together with their storage. As a quintessential hybrid product, Azure Stack HCI facilitates the integration between on-premises systems and Azure, allowing access to cloud-based services, monitoring, and management. This gives organizations the agility and benefits typical of public cloud infrastructure, while effectively responding to use cases and regulatory requirements of specialized workloads that need to remain on-premises. Azure Stack HCI thus positions itself as a strategic choice for organizations aiming to combine cloud efficiency with the specific needs of the on-premises environment.

What is Azure Virtual Desktop?

Azure Virtual Desktop is a state-of-the-art, cloud-based VDI (Virtual Desktop Infrastructure) solution designed to meet the needs of modern remote and hybrid work. The only solution of its kind, it is fully optimized to leverage the multi-session capabilities of Windows 11 and Windows 10, ensuring tight integration and efficiency. Azure Virtual Desktop also stands out for its robust security features, designed to protect corporate applications and data while ensuring compliance with current regulations. The platform significantly simplifies the deployment and management of the VDI infrastructure, offering complete control over configuration and administration. Thanks to its consumption-based pricing model, it reduces operational costs by building on existing virtualization investments and skills, with payment only for the resources actually used.

What is Azure Virtual Desktop for Azure Stack HCI?

Azure Virtual Desktop for Azure Stack HCI represents an innovative technological solution that integrates the distinctive benefits of Azure Virtual Desktop and Azure Stack HCI. This integration offers organizations the flexibility to run virtualized desktops and applications securely not only in the cloud but also on-premises. Particularly suitable for entities with specific data residency requirements, latency sensitivity, or data proximity needs, Azure Virtual Desktop for Azure Stack HCI extends the capabilities of the Microsoft Cloud to corporate datacenters, promoting an IT environment more adaptive and responsive to business needs.

Key Features and Benefits

The main features and benefits of this solution include:

  • Performance optimization: enhances the user experience of Azure Virtual Desktop in regions with limited connectivity to the Azure public cloud, offering session hosts in physical proximity to users.
  • Compliance with data locality requirements: allows organizations to meet data residency requirements, keeping the data of applications and users on-premises. This aspect is crucial for companies operating in regulated sectors or with specific data privacy and security needs.
  • Access to legacy resources: facilitates access to legacy applications and data sources by keeping them in the same physical location as virtualized desktops and apps.
  • Full and efficient Windows experience: ensures a smooth and complete user experience thanks to compatibility with Windows 11 and Windows 10 Enterprise multi-session, while optimizing operational costs.
  • Unified management: simplifies the deployment and management of the VDI infrastructure compared to traditional on-premises solutions, using the Azure portal for centralized and integrated control.
  • Optimal network performance: ensures the best connection performance with RDP Shortpath, reducing latency and improving user access to virtualized resources.
  • Simple updates: allows for quick and simple deployment of the latest fully updated images through the use of Azure Marketplace images, thus ensuring that the virtual environment remains secure and up-to-date.

Azure Virtual Desktop for Azure Stack HCI is configured as a highly scalable and secure solution that enables companies to effectively address challenges related to data management, latency, and compliance, promoting an optimized and centrally manageable virtual work environment.

Integration Mechanisms

The key mechanisms through which AVD integrates with Azure Stack HCI include:

  • Virtual machines as Session Hosts: the virtual machines (VMs) created on Azure Stack HCI act as session hosts for AVD. These VMs are managed just like any Azure VM but are located on-premises.
  • Azure managed components: AVD on Azure Stack HCI uses Azure managed components, such as brokerage and gateway services, while deploying session host pools directly on Azure Stack HCI clusters.
  • System requirements: to implement this configuration, you need to have Azure Stack HCI version 23H2 or higher. Additionally, you must have a Windows image for the VMs and a logical network that supports DHCP on Azure Stack HCI.
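The requirements listed above lend themselves to a simple pre-flight check before deployment. The Python sketch below is a hypothetical validation with invented field names; it is not an Azure API, only a way to make the version, image, and network prerequisites explicit.

```python
# Illustrative pre-deployment check for the stated requirements: an Azure
# Stack HCI version of at least 23H2, a Windows image for the session host
# VMs, and a DHCP-enabled logical network. Field names are hypothetical.

MIN_VERSION = (23, 2)  # "23H2" read as (year, half)

def parse_hci_version(version: str) -> tuple[int, int]:
    """Parse a version string like '23H2' into a comparable (year, half) pair."""
    year, half = version.upper().split("H")
    return int(year), int(half)

def meets_requirements(config: dict) -> list[str]:
    """Return the list of unmet requirements; an empty list means ready to deploy."""
    problems = []
    if parse_hci_version(config.get("hci_version", "0H0")) < MIN_VERSION:
        problems.append("Azure Stack HCI version 23H2 or higher is required")
    if not config.get("windows_image"):
        problems.append("a Windows image for the session host VMs is required")
    if not config.get("logical_network_dhcp"):
        problems.append("a logical network that supports DHCP is required")
    return problems
```

Tuple comparison makes later releases (for example 24H2) pass the same check without special-casing.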

Deployment and Management

Here is how the deployment and management of AVD in this hybrid context works:

  • Location definition: deploying on Azure Stack HCI requires defining a custom location that represents the Azure Stack HCI cluster during the creation of resources on Azure. This step is crucial to ensure that resources are correctly associated with the desired physical infrastructure.
  • Configuration of Session Host pools: session host pools can be made up of VMs located in the Azure cloud or on a specific Azure Stack HCI cluster. It is important to note that VMs from both origins cannot be combined within a single pool.
  • Consistent management: session hosts and user identities are managed in line with standard Azure Virtual Desktop practices; user identities must be hybrid identities synchronized between on-premises Active Directory and Microsoft Entra ID.
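The pool constraint above, that VMs from the Azure cloud and from an Azure Stack HCI cluster cannot be combined in a single host pool, can be expressed as a small validation routine. The sketch below is purely illustrative; the `origin` labels are invented and this is not how the Azure portal enforces the rule.

```python
# Sketch of the host-pool constraint: a session host pool may contain Azure
# cloud VMs or Azure Stack HCI VMs, but never a mix of the two origins.

ALLOWED_ORIGINS = {"azure", "azure-stack-hci"}

def validate_pool(session_hosts: list[dict]) -> None:
    """Raise ValueError if the pool mixes origins or contains an unknown origin."""
    origins = {host["origin"] for host in session_hosts}
    unknown = origins - ALLOWED_ORIGINS
    if unknown:
        raise ValueError(f"unknown session host origins: {unknown}")
    if len(origins) > 1:
        raise ValueError("a host pool cannot mix Azure and Azure Stack HCI VMs")
```

A pool whose hosts all share one origin passes silently; any mixed pool is rejected before deployment rather than failing later.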

Licensing and Pricing

To implement Azure Virtual Desktop on Azure Stack HCI, it is essential to understand and ensure compliance with the necessary licenses and pricing models. Here are the three main components that influence the cost of Azure Virtual Desktop on Azure Stack HCI:

  1. Infrastructural costs: these costs directly relate to the Azure Stack HCI infrastructure on which Azure Virtual Desktop is run. More information on the Azure Stack HCI cost model can be found in this article.
  2. User access rights: the same licenses that grant access to Azure Virtual Desktop on Azure also apply to Azure Virtual Desktop for Azure Stack HCI. It is important to note that user access pricing for external users is not supported on Azure Virtual Desktop for Azure Stack HCI.
  3. Hybrid service rate: this is an additional rate that applies to each active virtual CPU (vCPU) on Azure Virtual Desktop session hosts operating on Azure Stack HCI. The rate for the hybrid service is $0.01 per vCore per hour of use.
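The hybrid service rate above ($0.01 per active vCPU per hour, as stated in the article) translates into a straightforward calculation. The sketch below works one example through; the session-host sizing is invented for illustration.

```python
# Worked example of the hybrid service rate: $0.01 per active vCPU per hour
# (rate taken from the article; the pool sizing below is a made-up example).

HYBRID_RATE_PER_VCPU_HOUR = 0.01  # USD

def hybrid_service_cost(vcpus_per_host: int, hosts: int, hours: float) -> float:
    """Hybrid service fee for a pool of identical session hosts."""
    return vcpus_per_host * hosts * hours * HYBRID_RATE_PER_VCPU_HOUR

# Example: 10 session hosts with 8 vCPUs each, active for 730 hours (about
# one month): 8 * 10 * 730 * 0.01 = 584.0 USD for the hybrid service alone.
monthly = hybrid_service_cost(vcpus_per_host=8, hosts=10, hours=730)
```

Note that this covers only component 3; the infrastructure costs and user access rights from components 1 and 2 come on top of it.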

Conclusions

The innovative contribution of Azure Stack HCI, further enhanced by the integration with Azure Virtual Desktop, marks a fundamental turning point for organizations aspiring to an advanced, hybrid IT infrastructure. Azure Stack HCI establishes itself as the backbone of this transformation, offering optimized management of on-premises workloads together with the flexibility and efficiency characteristic of the cloud. The implementation of Azure Virtual Desktop on Azure Stack HCI proves ideal for organizations that wish to leverage the potential of the cloud while meeting the specific requirements of on-premises environments. This solution sets a new standard in the sector of hybrid VDI solutions, striking an effective balance between innovation and customization.