Category Archives: Azure Arc

Azure by your side: new solutions for Windows Server 2012/R2 end of support

In the era of Artificial Intelligence and cloud-native services, organizations continue to rely on Windows Server as a secure and reliable platform for their mission-critical workloads. However, support for Windows Server 2012/R2 ends on 10 October 2023. After that date, Windows Server 2012/R2 systems will no longer receive regular security updates and will become increasingly vulnerable if no action is taken. Microsoft has recently announced new Azure solutions to better manage the end of support of Windows Server 2012/R2. This article examines these solutions in detail, after a brief summary to set the context.

The impact of end of support for Windows Server 2012/R2: what does it mean for companies?

Microsoft has announced the end of support for Windows Server 2012 and 2012 R2, set for 10 October 2023. This event represents a turning point for many organizations that rely on these servers to access applications and data. But what exactly does end of support (EOL) mean, and what are the implications for companies?

Understanding end of support

Microsoft has a lifecycle policy that provides support for its products, including Windows Server 2012 and 2012 R2. End of support is the point at which a product is no longer supported by Microsoft, which means no more security updates, patches or technical support will be provided.

Why companies should care

Without regular updates and patches, companies using Windows Server 2012 and 2012 R2 are exposed to security vulnerabilities, such as ransomware attacks and data breaches. Furthermore, using an unsupported product such as Windows Server 2012 or 2012 R2 can lead to non-compliance issues. Finally, outdated software can cause compatibility issues with newer applications and hardware, hampering efficiency and productivity.

An opportunity to review IT strategy

Companies should use the EOL event as an opportunity to review their IT strategy and determine the desired business goals for their technology. In this way, they can align the technology with their long-term goals, leveraging the latest cloud solutions and improving operational efficiency.

The strategies that can be adopted to deal with this situation, thus avoiding exposing your IT infrastructure to security issues, have already been addressed in the article: How the End of Support of Windows Server 2012 can be a great opportunity for CTOs.

In this regard, Microsoft has introduced two new options, delivered through Azure, to help manage this situation:

  • upgrading servers with Azure Migrate;
  • deployment of Extended Security Updates (ESU) on Azure Arc-enabled servers.

The following paragraphs describe the characteristics of these new options.

Upgrading Windows servers approaching end of support (EOS) with Azure Migrate

Azure Migrate is a service offered by Microsoft Azure that allows you to assess and migrate on-premises resources, such as virtual machines, applications and databases, to the Azure cloud infrastructure. Recently, Azure Migrate introduced support for in-place upgrades of Windows Server 2012 and later when moving to Azure. This allows organizations to move their legacy applications and databases to a fully supported, compatible and compliant operating system such as Windows Server 2016, 2019 or 2022.

Key benefits of Azure Migrate's OS update feature

Risk mitigation: Azure Migrate creates a replica of the original server in Azure, allowing the OS to be updated on the replica while the source server remains intact. In case of problems, customers can easily go back to the original operating system.

Compatibility Test: Azure Migrate provides the ability to perform a test migration in an isolated environment in Azure. This is especially useful for OS updates, allowing customers to evaluate the compatibility of their operating system and updated applications without impacting production. This way you can identify and fix any problems in advance.

Reduced effort and downtime: by integrating OS upgrades with cloud migration, customers can save significant time and effort. With only one additional piece of information, the version of the target operating system, Azure Migrate takes care of the rest, simplifying the process. This integration also reduces the downtime of the server and of the applications hosted on it, increasing efficiency.

No separate Windows licenses: with the Azure Migrate OS upgrade, you do not need to purchase a separate operating system license to upgrade. Whether the customer uses Azure Hybrid Benefit (AHB) or pay-as-you-go (PAYG), the upgrade is covered when migrating to an Azure VM using Azure Migrate.

Large-scale server upgrade: Azure Migrate supports large-scale server OS upgrades, allowing customers to upgrade up to 500 servers in parallel when migrating to Azure. Using the Azure portal, you can select up to 10 VMs at a time to set up replication. To replicate more VMs, you can use the portal and add them in multiple batches of 10, or use the Azure Migrate PowerShell interface to configure replication (a simple batching sketch follows).
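
As a rough illustration of how such batching could be scripted, the following Python sketch splits a server list into groups of ten and hands each group to a start_replication_batch helper; the helper is a hypothetical placeholder, not an Azure Migrate API.

```python
# Minimal sketch: process a large server estate in batches of 10,
# mirroring the 10-VM-per-batch portal limit described above.

def start_replication_batch(batch):
    # Hypothetical placeholder: in a real scenario this would drive the
    # Azure Migrate portal or PowerShell tooling for the given batch.
    print(f"Configuring replication for: {', '.join(batch)}")

servers = [f"server-{i:03d}" for i in range(1, 501)]  # up to 500 servers in parallel

batch_size = 10
for start in range(0, len(servers), batch_size):
    start_replication_batch(servers[start:start + batch_size])
```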

Supported OS versions

Azure Migrate supports the following upgrade paths:

  • Windows Server 2012: supports upgrading to Windows Server 2016;
  • Windows Server 2012 R2: supports upgrading to Windows Server 2016, Windows Server 2019;
  • Windows Server 2016: supports upgrading to Windows Server 2019, Windows Server 2022;
  • Windows Server 2019: supports upgrading to Windows Server 2022.

Deployment of ESU-derived updates on Azure Arc-enabled servers

Azure Arc is a set of Microsoft solutions that help businesses manage, govern and protect assets in various environments, including on-premises, edge and multi-cloud, extending the management capabilities of Azure to any infrastructure.

For organizations unable to modernize or migrate before the Windows Server 2012/R2 end of support date, Microsoft has announced Azure Arc-enabled Extended Security Updates (ESU). With Azure Arc, organizations can seamlessly purchase and deploy Extended Security Updates (ESU) in on-premises or multicloud environments, directly from the Azure portal.

To get Azure Arc-enabled Extended Security Updates (ESU) for Windows Server 2012/R2 and SQL Server 2012, you need to follow the steps below (a short enumeration sketch follows the list):

  • Preparing the Azure Arc environment: first of all, you need an Azure environment and a working Azure Arc infrastructure. Azure Arc can be installed on any server running Windows Server 2012/R2 or SQL Server 2012, provided that the connectivity requirements are met.
  • Server registration in Azure Arc: once the Azure Arc environment is set up, you need to register your Windows servers or SQL Server systems in Azure Arc. This process allows systems to become managed resources in Azure, making them eligible for ESUs.
  • Purchase of ESUs: once the servers are registered in Azure Arc, ESUs can be purchased through Azure for each server you want to protect.
  • ESU activation: after the purchase of the ESUs, you need to activate them on the servers. This process involves installing a license key and downloading security updates from Windows Update or your local update distribution infrastructure.
  • Installing updates: finally, once the ESUs are activated, you can install security updates on servers. This process can be managed manually or by automating it through update management tools.
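
As a sketch of how the servers registered in the second step could be enumerated from Azure, the snippet below uses the azure-mgmt-hybridcompute Python SDK to list Arc-enabled machines and flag those reporting a Windows Server 2012/R2 SKU as ESU candidates. The subscription and resource group values are placeholders, and since the property layout can vary between SDK versions, the attribute access is kept defensive.

```python
# Sketch: enumerate Azure Arc-enabled servers and flag Windows Server 2012/R2
# machines as ESU candidates.
# Requires: pip install azure-identity azure-mgmt-hybridcompute
from azure.identity import DefaultAzureCredential
from azure.mgmt.hybridcompute import HybridComputeManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder

client = HybridComputeManagementClient(DefaultAzureCredential(), subscription_id)

for machine in client.machines.list_by_resource_group(resource_group):
    # Depending on the SDK version, OS details are exposed either directly on the
    # machine object or under a nested "properties" attribute.
    props = getattr(machine, "properties", None) or machine
    os_sku = getattr(props, "os_sku", "") or ""
    if "2012" in os_sku:
        status = getattr(props, "status", "unknown")
        print(f"ESU candidate: {machine.name} ({os_sku}), status: {status}")
```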

Note: ESUs only provide critical and important security updates; they do not include new features or performance improvements. Furthermore, ESUs are only available for a limited time after Microsoft's end of support. Therefore, we recommend that you consider migrating to newer versions of Windows Server to have access to all features, in addition to security updates.

Conclusions

This year, Microsoft celebrates the 30th anniversary of Windows Server, a milestone achieved thanks to relentless innovation and customer support. However, customers must commit to keeping their Windows Server systems up to date as end-of-support dates approach. In particular, the end of support for Windows Server 2012 and 2012 R2 poses a significant risk to companies, but it also presents an opportunity to review and improve their IT strategy. By identifying the desired business goals, engaging in strategic planning and, if necessary, using these new solutions offered by Azure, companies can ensure a smooth and successful transition, optimizing their IT infrastructure to achieve their long-term goals.

How to prepare your IT environment for new hybrid and multicloud scenarios

Many companies are working on the rollout and adoption of applications that can run in different environments: on-premises, across multiple public clouds and at the edge. Such an approach requires adequate preparation of the corporate IT environment to ensure compliance and efficient, large-scale management of server systems, applications and data, while maintaining high agility. This article introduces the main aspects to take into consideration when adopting hybrid and multicloud technologies, in order to best meet business needs.

The reasons that lead to the adoption of hybrid and multicloud solutions

There are many reasons why customers choose to deploy their digital assets in hybrid and multicloud environments. Among the main ones we find:

  • Minimize or remove data lock-in from a single cloud provider
  • Presence of business units, subsidiary companies or acquired companies that have already made choices to adopt different cloud platforms
  • Different regulatory and data sovereignty requirements in different countries
  • Need to improve business continuity and disaster recovery by distributing workloads between two different cloud providers
  • Need to maximize performance by allowing applications to run close to where users are

What aspects to consider?

There are several options for preparing an IT environment to host hybrid and multicloud deployments, which is why, before setting up your Azure environment or any other public cloud, it is important to identify how the cloud environment should support your scenario:

Figure 1 – Diagram showing how different customers distribute workloads between cloud providers

In the image above, each dark blue point represents a workload and each blue circle a business process, supported by a separate environment. Depending on the cloud mix, a different configuration of the Azure environment may be required:

  • Hybrid-first customer: most of the workloads remain on-premises, often in a combination of hosting models with traditional and hybrid resources. Some specific workloads are deployed at the edge, in Azure or at other cloud service providers.
  • Azure-first customer: most of the workloads reside in Azure. However, some workloads remain on-premises. Furthermore, certain strategic decisions lead some workloads to reside at the edge or in multicloud environments.
  • Multicloud-first customer: most workloads are hosted on a public cloud other than Azure, such as Amazon Web Services (AWS) or Google Cloud Platform (GCP). However, some strategic decisions have led some workloads to be placed in Azure or at the edge.

The hybrid and multicloud strategy you decide to pursue for applications and data will therefore drive certain choices.

How to prepare the Azure environment

Microsoft Azure is an enterprise-grade cloud platform, well suited to supporting public, hybrid and multicloud environments.

To prepare an IT environment and make it effective for any hybrid and multicloud deployment, the following key aspects should be considered:

  • Network topology and connectivity
  • Governance
  • Security and compliance
  • Automation disciplines, development experiences and DevOps practices

When preparing your IT environment for new hybrid and multicloud scenarios, it is advisable to define the Azure "landing zone", which represents the destination of the cloud adoption journey. It is an architecture designed to support functional cloud environments, covering the following aspects:

  • Scalability
  • Security governance
  • Networking
  • Identity
  • Cost management
  • Monitoring

The architecture of the Landing Zone must be defined based on specific business and technical requirements. It is therefore necessary to evaluate the possible implementation options of the Landing Zone, thanks to which it will be possible to meet the deployment and operational needs of the cloud portfolio.

Figure 2 – Conceptual example of an Azure landing zone

What tools to use?

Cloud Adoption Framework

Microsoft's Cloud Adoption Framework provides a rich set of documentation, implementation guidelines, best practices and helpful tools to accelerate your cloud adoption journey. Among these best practices, which should be adopted and tailored to each customer's specific needs, there is a dedicated section on hybrid and multicloud environments. This section covers the best practices that can support different cloud mixes, ranging from environments entirely in Azure to environments where the Microsoft public cloud footprint is limited or absent.

Azure Arc as an accelerator

Azure Arc consists of a set of different technologies and components that give you a single control plane to manage and govern all your IT resources in a consistent way, wherever they are. Furthermore, with Azure Arc-enabled services, you have the flexibility to deploy fully managed Azure services anywhere, on-premises or in other public clouds.

Figure 3 –  Azure Arc overview

The Azure Arc-enabled servers landing zone, part of the Cloud Adoption Framework, allows customers to more easily improve the security, governance and compliance status of servers deployed outside of Azure. Together with Azure Arc, services such as Microsoft Defender for Cloud, Azure Sentinel, Azure Monitor, Azure Policy and many others can be extended to all environments. For this reason, Azure Arc should be considered an accelerator for your landing zones.

Azure Arc Jumpstart has grown considerably and helps you evaluate Azure Arc, with over 90 automated scenarios, thousands of visitors per month and a very active open source community sharing its knowledge about Azure Arc. As part of Jumpstart, ArcBox was developed: an automated sandbox environment for everything related to Azure Arc, deployable to customers' Azure subscriptions. As an accelerator for the Azure Arc-enabled servers landing zone, ArcBox for IT Pros was developed, which serves as a sandbox automation solution for this scenario, including services such as Azure Policy, Azure Monitor, Microsoft Defender for Cloud, Microsoft Sentinel and more.

Figure 4 – Architecture of ArcBox for IT Pros

Conclusions

The adoption of consistent operating practices across all cloud environments, combined with a common control plane, allows you to effectively address the challenges inherent in hybrid and multicloud strategies. To do this, Microsoft provides various tools and accelerators, among which is Azure Arc, which makes it easier for customers to improve the security, governance and compliance status of IT resources deployed outside of Azure.

How to simplify systems management with Azure Automanage

The adoption of cloud solutions has helped to reduce operating expenses (OpEx) and management costs in numerous areas of IT. In fact, many systems that previously ran on-premises and were complex to maintain are now simple managed services in the cloud. At the same time, however, running systems located in different environments, together with the wide range of new Azure services, can make operational management complex. To better manage the various services and their configuration, Microsoft provides Azure Automanage, which, appropriately integrated with Azure Arc, allows you to automate various operations throughout the entire life cycle of the machines, regardless of where they reside. This article describes the characteristics of the solution, showing how Azure Automanage, together with Azure Arc, can facilitate the day-to-day tasks of system administrators and ensure optimal adherence to Microsoft best practices.

Simplify the configuration and management of systems wherever they reside

Azure Automanage automatically implements best practices in machine management while ensuring security compliance, corporate compliance and business continuity. Furthermore, Azure Arc for servers extends the governance and management capabilities offered by Azure to physical machines and virtual systems that reside in environments other than Azure. To learn more about the implementation guidelines, Microsoft's proven best practices and the tools designed to accelerate your cloud adoption journey, refer to the Microsoft Cloud Adoption Framework.

Quickly configure Windows and Linux servers

By adopting this solution, you can detect, integrate and configure different Azure services throughout the entire life cycle of the machines, distinguishing between production and DevTest environments. The Azure services automatically managed by Azure Automanage and the related specifications are available in this Microsoft documentation:

Figure 1 – Overview of services managed by Azure Automanage

Machines can be onboarded to the service at scale or individually, with the assurance that if the systems drift from the enforced best practices, Azure Automanage will detect and correct them automatically.

The service can be activated directly from the Azure portal and requires a few simple steps.

The choice of configuration profiles

Azure Automanage uses configuration profiles to determine which Azure services should be enabled on the selected systems. Two configuration profiles are currently available by default, one for the DevTest environment and one for the production environment. The two profiles differ in the types of services enabled on the different workloads. Furthermore, in addition to the standard profiles, you can configure custom profiles with a specific subset of preferences for the various services.

After you enable Azure Automanage, the process that brings the machines into line with the best practices specified in the chosen configuration profile is started.

After the service is activated, the VMs can be in one of several states, described here.

Azure Automanage also recently introduced new profile customization options and more supported operating systems, including Windows 10/11, Red Hat Enterprise Linux, Canonical Ubuntu and SUSE Linux Enterprise Server.

Configure Windows and Linux servers in Azure, hybrid or multi-cloud environments through Azure Arc

Azure Automanage can be enabled on both Azure VMs and Azure Arc-enabled servers. Furthermore, Azure Automanage for Windows Server offers new features specific to Windows Server Azure Edition that improve the uptime of Windows Server VMs in Azure and Azure Stack HCI environments. These features include:

  • Hotpatch
  • SMB over QUIC
  • Azure Extended Networking

Advantages of the solution

The adoption of Azure Automanage involves several advantages for the customer that can be summarized in the following points:

  • Cost reduction, by automating machine management
  • Improved workload uptime, by performing tasks in an optimized way
  • Control over the implementation of security best practices

Conclusions

Machine life cycle management, especially in heterogeneous and large environments, can be very expensive in terms of time and costs. Furthermore, activities that are repeated frequently can be prone to errors, leading systems to a non-optimal configuration. Thanks to this integration between Azure Automanage and Azure Arc it is possible to simplify and automate all the operations necessary to ensure that the systems adhere to the desired requirements.

Building modern IT architectures for Machine Learning

For most companies, the ability to continuously deliver and integrate artificial intelligence solutions within their own applications and business workflows is considered a particularly complex evolution. In the rapidly evolving artificial intelligence landscape, machine learning (ML) plays a fundamental role together with data science. Therefore, to increase the success of their artificial intelligence projects, organizations must have modern and efficient IT architectures for machine learning. This article describes how these architectures can be built anywhere thanks to the integration between Kubernetes, Azure Arc and Azure Machine Learning.

Azure Machine Learning

Azure Machine Learning (AzureML) is a cloud service that you can use to accelerate and manage the life cycle of machine learning projects, bringing ML models into a secure and reliable production environment.

Kubernetes as a compute target for Azure Machine Learning

Azure Machine Learning recently introduced a new compute target: AzureML Kubernetes compute. In fact, it is possible to use an existing Azure Kubernetes Service (AKS) cluster or an Azure Arc-enabled Kubernetes cluster as a compute target for Azure Machine Learning and use it to validate and deploy ML models.

Figure 1 - Overview on how to take Azure ML anywhere thanks to K8s and Azure Arc

AzureML Kubernetes compute supports two types of Kubernetes clusters:

  • AKS cluster (in the Azure environment). Using a managed Azure Kubernetes Service (AKS) cluster, you get a flexible, secure environment capable of meeting compliance requirements for ML workloads.
  • Arc-enabled Kubernetes cluster (in environments other than Azure). Thanks to Azure Arc-enabled Kubernetes, it is possible to manage Kubernetes clusters running in environments other than Azure (on-premises or on other clouds) and use them to deploy ML models.

To enable and use a Kubernetes cluster to run AzureML workloads, follow these steps (a minimal attach example follows the list):

  1. Activate and configure an AKS cluster or an Arc-enabled Kubernetes cluster. In this regard, it is worth recalling that AKS can also be activated in an Azure Stack HCI environment.
  2. Deploy the AzureML extension on the cluster.
  3. Connect the Kubernetes cluster to the Azure ML workspace.
  4. Use the Kubernetes compute target from CLI v2, SDK v2 and the Studio UI.
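
As a minimal sketch of step 3 using the Python SDK v2 (azure-ai-ml), the snippet below attaches an Arc-enabled connected cluster to an AzureML workspace as a Kubernetes compute target. All IDs and names are placeholders, and the exact KubernetesCompute parameters may vary slightly between SDK versions.

```python
# Sketch: attach an Azure Arc-enabled Kubernetes cluster as an AzureML compute target.
# Requires: pip install azure-ai-ml azure-identity
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import KubernetesCompute

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",       # placeholder
    resource_group_name="<resource-group>",    # placeholder
    workspace_name="<aml-workspace>",          # placeholder
)

# Resource ID of the Arc-enabled (connected) Kubernetes cluster - placeholder values.
cluster_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>"
)

k8s_compute = KubernetesCompute(
    name="k8s-compute",
    resource_id=cluster_id,
    namespace="azureml",   # Kubernetes namespace where AzureML workloads will run
)

ml_client.compute.begin_create_or_update(k8s_compute).result()
```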

Figure 2 - Steps to enable and use a K8s cluster for AzureML workloads

Infrastructure management for ML workloads can be complex and Microsoft recommends that it be done by the IT-operations team, so that the data science team can focus on the efficiency of the ML models. In light of this consideration, the division of roles can be as follows:

  • The IT operations team is responsible for the first three steps above. Furthermore, it typically performs the following activities for the data science team:
    • Configure networking and security aspects
    • Create and manage instance types for different ML workload scenarios in order to achieve efficient use of compute resources.
    • Troubleshoot the workloads of Kubernetes clusters.
  • The data science team, once the IT operations team has completed the activation activities, can find the list of compute targets and instance types available in the AzureML workspace. These compute resources can be used for training or inference workloads. The team selects the compute target using tools such as the AzureML CLI v2, the Python SDK v2 or the Studio UI.

Usage scenarios

The ability to use Kubernetes as a compute target for Azure Machine Learning, combined with the potential of Azure Arc, allows you to create, train and deploy ML models in any on-premises infrastructure or on different clouds.

This enables new usage scenarios that were previously not possible using the cloud environment alone. The following table summarizes the usage scenarios made possible by Azure ML Kubernetes compute, specifying where the data resides, the motivation behind each usage model and how it is implemented at the infrastructure and Azure ML level.

Table 1 - New usage scenarios made possible by Azure ML Kubernetes compute

Conclusions

Gartner expects that by 2025, due to the rapid spread of AI initiatives, 70% of organizations will have operationalized IT architectures for artificial intelligence. Microsoft, thanks to the integration between different solutions, offers a range of possibilities to activate flexible and cutting-edge architectures for machine learning, an integral part of artificial intelligence.

Datacenter Modernization: a real case with Microsoft solutions

The statistics speak for themselves: more than 90% of companies already have, or plan to adopt in the short term, a hybrid strategy for their IT infrastructure. These figures are confirmed in the field, where several customers include in their investment plans both the maintenance of workloads on on-premises infrastructure and the adoption of public cloud solutions. At the same time, they pursue application modernization with the aim of making the most of the potential and innovation offered by these infrastructures. We therefore live in the era of hybrid cloud, and Microsoft offers several interesting solutions to modernize datacenters and easily manage hybrid infrastructure. This article gives a real example of how a customer embarked on the modernization of their datacenter thanks to Azure Stack HCI and how, via Azure Arc, they were able to extend Azure services and management principles to their on-premises infrastructure as well.

Initial customer request and problems to be solved

The customer in question wanted to deploy a new, modern and integrated virtualization infrastructure in their datacenter, allowing application workloads to be configured quickly, dynamically and flexibly. The infrastructure in use was not adequate and suffered from various problems, including:

  • Non-scalable and inflexible virtualization solution
  • Hardware obsolescence
  • Configurations that did not ensure adequate availability of virtualized systems
  • Performance and stability issues
  • Difficulty in managing the various infrastructure components

Characteristics of the proposed and adopted solutions, and the benefits obtained

The customer decided to adopt a hyper-converged infrastructure (HCI), in which several hardware components are removed and replaced by software that merges the compute, storage and network layers into a single solution. In this way, they transitioned from a traditional "three tier" infrastructure, composed of network switches, appliances, physical systems with onboard hypervisors, storage fabric and SAN, to a hyper-converged infrastructure (HCI).

Figure 1 - Transition from a "Three Tier" infrastructure to a Hyper-Converged Infrastructure (HCI)

Azure Stack HCI: the complete stack of the Hyper-Converged infrastructure

This was all done by adopting Microsoft Azure Stack HCI, which allows workloads to run on the hyper-converged infrastructure (HCI) and connects it easily to Azure. The main characteristics of the solution are described in the following paragraphs.

Choosing and customizing your hardware

The customer was able to customize the hardware solution according to their needs, configuring the processor, memory, storage and network adapter features, while respecting the vendor's compatibility matrices.

Figure 2 - Hardware composition of the Azure Stack HCI solution

Several hardware vendors offer solutions suitable for running Azure Stack HCI, and they can be consulted by accessing this link. The choice is wide, with more than 200 solutions from more than 20 different partners. Azure Stack HCI requires hardware that has been specifically tested and validated by the various vendors.

Dedicated and specific operating system

The Azure Stack HCI operating system is a dedicated operating system with a streamlined composition and more up-to-date components than Windows Server. Roles that are not required by the solution are not included in this operating system, while it ships the latest hypervisor, also used in the Azure environment, with software-defined networking and storage technologies optimized for virtualization.

The local user interface is minimal and is designed to be managed remotely.

Figure 3 - Azure Stack HCI OS interface

Disaster recovery and failover of virtual machines

The customer also took advantage of the possibility of creating a stretched cluster to extend their Azure Stack HCI cluster, in this specific case across two different buildings. This functionality is based on storage replication (synchronous in this scenario) and provides encryption, local site resilience and automatic failover of virtual machines in the event of a disaster.

Figure 4 – Stretched cluster of the Azure Stack HCI hyper-converged architecture

Updates of the entire solution stack (full-stack updates)

To reduce the complexity and operational costs of the update process, in Azure Stack HCI the customer can start a full-stack upgrade (firmware and drivers along with the operating system) directly from Windows Admin Center.

Figure 5 - Solution updates of the Dell EMC branded Azure Stack HCI solution

Azure Hybrid Service: familiarity in management and operation

The customer is able to manage their infrastructure based on Azure Stack HCI in a simple way and without adopting specific software tools, as if it were an extension of the public cloud, thanks to the features mentioned in the following paragraphs.

Native integration in Azure

Azure Stack HCI natively integrates with Azure services and Azure Resource Manager (ARM). No agent is required for this integration, because Azure Arc is integrated directly into the operating system. This allows you to view the on-premises Azure Stack HCI cluster directly from the Azure portal, exactly like an Azure resource.

Figure 6 - Azure Stack HCI integration into Azure

By integrating with Azure Resource Manager, the customer can take advantage of the following benefits of Azure-based management:

  • Adopting standard Azure Resource Manager (ARM) constructs
  • Classifying clusters with tags
  • Organizing clusters in resource groups
  • Viewing all Azure Stack HCI clusters in one centralized view
  • Managing access using Azure Identity and Access Management (IAM)

Furthermore, from the Azure Stack HCI resource you can locate, add, modify or remove extensions, thanks to which you can easily access the management features.

Figure 7 - Azure Stack HCI management capabilities

Arc-enabled VM management

In addition to managing the cluster, the customer can also use Azure Arc to provision and manage virtual machines running on Azure Stack HCI, directly from the Azure portal. Virtual machines and their associated resources (images, disks, and network) are projected into ARM as separate resources using a new multi-platform technology called Arc Resource Bridge.

In this way you can:

  • achieve consistent management between cloud resources and Azure Stack HCI resources;
  • automate virtual machine deployments using ARM templates;
  • guarantee self-service access thanks to Azure RBAC support.

Figure 8 - Features provided by Azure Arc integration for Azure Stack HCI VMs

Azure Backup and Azure Site Recovery

Azure Stack HCI supports Azure Backup and Azure Site Recovery. With Microsoft Azure Backup Server (MABS), the customer backs up the hosts and the virtual machines running on Azure Stack HCI. Furthermore, using Azure Site Recovery it is possible to replicate virtual machines from Azure Stack HCI to Azure, to build specific disaster recovery scenarios.

Infrastructure monitoring with Azure Monitor Insights for Azure Stack HCI

Thanks to Azure Stack HCI Insights, the customer can consult detailed information on the health, performance and usage of the Azure Stack HCI clusters connected to Azure and registered for monitoring. Azure Stack HCI Insights stores its data in a Log Analytics workspace, making it possible to use powerful aggregations and filters to analyze the data collected over time. You can view the monitoring data of a single cluster from the Azure Stack HCI resource page, or use Azure Monitor for an aggregate view of multiple Azure Stack HCI clusters, with an overview of cluster health, the state of nodes and virtual machines (CPU, memory and storage consumption), performance metrics and more. This is the same data provided by Windows Admin Center, but designed to scale up to 500 clusters at the same time.
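
Because the Insights data lands in a Log Analytics workspace, it can also be queried programmatically. The sketch below uses the azure-monitor-query Python package against the standard Heartbeat table to list which monitored machines reported in the last day; the workspace ID is a placeholder, and the Azure Stack HCI Insights-specific tables may differ.

```python
# Sketch: query the Log Analytics workspace that backs Azure Stack HCI Insights.
# Requires: pip install azure-identity azure-monitor-query
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"   # placeholder

# The Heartbeat table is used here as an illustrative example; the tables
# populated by Azure Stack HCI Insights may differ.
query = """
Heartbeat
| summarize LastSeen = max(TimeGenerated) by Computer
| order by Computer asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(f"{row[0]} last reported at {row[1]}")
```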

Figure 9 - Azure Monitor Insights control panel for Azure Stack HCI

Azure benefit for Windows Server

Microsoft offers special benefits when deploying Windows Server in the Azure environment, and the same benefits are also available on Azure Stack HCI.

Figure 10 – Azure benefit for Windows Server

Azure Stack HCI allows you to:

  • Deploy virtual machines with Windows Server 2022 Azure Datacenter edition, which offers specific features not available in the classic Standard and Datacenter editions. To learn more about the features available in this edition, you can consult this article.
  • Get extended security updates for free, just like in Azure. This is true both for Windows Server 2008/R2 and for Windows Server 2012/R2, as well as for the corresponding versions of SQL Server.
  • License and activate Windows Server machines as in Azure. In addition to allowing you to use your own Datacenter license to enable Automatic VM Activation (AVMA), Azure Stack HCI provides the option to pay for the Windows Server license of guest systems through your Azure subscription, just like in the Azure environment.

Dedicated Azure Support Team

Azure Stack HCI is in effect an Azure solution, therefore the customer can take advantage of Azure support with the following characteristics:

  • You can easily request technical support directly from the Azure portal.
  • Support will be provided by a new team of experts dedicated to supporting the solution Azure Stack HCI.
  • You can choose from different support plans, depending on your needs.

Infrastructure innovation and new evolved scenarios

In the Azure Stack HCI environment, in addition to running virtual machines, you can activate Azure Kubernetes Service (AKS) and Azure Virtual Desktop.

Azure Kubernetes Service in Azure Stack HCI

This on-premises AKS implementation scenario allows you to automate the large-scale execution of modern applications based on microservices. Thanks to Azure Stack HCI, these container-based application architectures can be hosted directly in your own datacenter, with the same Kubernetes management experience offered by the managed service in the Azure public cloud.

Figure 11 - AKS overview on Azure Stack HCI

For more information, you can consult the article Azure Kubernetes Service in an Azure Stack HCI environment.

Azure Virtual Desktop for Azure Stack HCI

In situations where applications are sensitive to latency, such as video editing, or in scenarios where users need to access a legacy on-premises system that cannot easily be reached, Azure Virtual Desktop adds a new hybrid option thanks to Azure Stack HCI. Azure Virtual Desktop for Azure Stack HCI uses the same cloud management plane as regular Azure Virtual Desktop, but it allows you to create session host pools using virtual machines running on Azure Stack HCI. These virtual machines can run Windows 10 and/or Windows 11 Enterprise multi-session. By placing desktops closer to users, it is possible to provide low-latency access without round trips to the cloud.

Conclusions

Microsoft operates one of the largest datacenter footprints in the world and is making large investments to bring the experience gained and the innovation of the cloud to Azure Stack HCI. By relying on Azure Stack HCI, this customer benefits from a subscription service that receives regular feature updates, with the important goal of using on-premises the technology that has been tested at scale in the cloud. Furthermore, they are able to manage the resources of their environment in a unified way and continuously innovate their hybrid infrastructure.

How to extend Azure management principles to VMware infrastructures with Azure Arc

A trend frequently found in different business contexts is the adoption of hybrid and multi-cloud strategies for IT environments. This allows companies to embark on a path of digital innovation with great flexibility and agility. To do this in the best possible way, it is appropriate to adopt technologies that make it possible to create new opportunities and, at the same time, to manage the challenges inherent in these new paradigms. Microsoft has designed a specific solution for this, called Azure Arc. One of the crucial benefits of Azure Arc is that it extends Azure management and governance practices to different environments, bringing solutions and techniques that are typically used in the cloud to on-premises environments as well. This article explores how Microsoft has recently improved the integration of VMware vSphere infrastructures with Azure Arc and what opportunities this innovation offers.

Why adopt a hybrid strategy?

Among the main reasons that lead customers to adopt a hybrid strategy we find:

  • Workloads that cannot be moved to the public cloud due to regulatory and data sovereignty requirements. This is usually common in highly regulated industries such as financial services, healthcare and government environments.
  • Some workloads, especially those residing at the edge, require low latency.
  • Many companies have made significant investments in the on-premises environment that they want to maximize, therefore the choice falls on modernizing the traditional applications that reside on-premises and the solutions adopted.
  • Ensure greater resilience.

What questions to ask to better leverage and manage hybrid and multi-cloud environments?

In situations where a hybrid or multi-cloud strategy is being adopted, the key questions you should ask yourself to reap the greatest benefits are:

  • How can I view, govern and protect IT assets, regardless of where they are running?
  • Is it possible to bring cloud innovation to existing infrastructure as well?
  • How can local datacenters be modernized by adopting new cloud solutions?
  • How to extend processing and artificial intelligence to the edge to unlock new business scenarios?

The answer to all these questions can be… "by adopting Azure Arc!".

Figure 1 – Azure Arc overview

Many customers have VMware-based infrastructure and use Azure services at the same time. Azure Arc extends the governance and management capabilities offered by Azure to virtual machines in VMware environments as well. To further improve the control and management experience for these resources, a deep integration between Azure Arc and VMware vSphere has been introduced.

Azure Arc-enabled VMware vSphere: how does it work?

Azure Arc-enabled VMware vSphere is a new Azure Arc feature designed for customers with on-premises VMware vSphere environments or those who adopt Azure VMware Solution.

This direct integration of Azure Arc with VMware vSphere requires you to activate a virtual appliance called "Arc bridge". This resource allows you to establish the connection between the VMware vCenter server and the Azure Arc environment.

Thanks to this integration, it is possible to onboard into Azure some or all of the vSphere resources managed by your vCenter server, such as: resource pools, clusters, hosts, datastores, networks, existing templates and virtual machines.

Figure 2 - VMware vCenter from the Azure portal

Once the onboarding phase is over, new usage scenarios open up that allow you to take advantage of the benefits reported in the following paragraph.

Benefits of Azure Arc-enabled VMware vSphere

Thanks to this new integration it is possible to obtain the following benefits:

  • Provision new virtual machines in VMware environments from Azure. Virtual machines can be deployed on VMware vSphere from the portal or using ARM templates. Being able to describe the infrastructure consistently across Azure and on-premises environments, through Infrastructure as Code processes, is very important. In fact, by adopting ARM templates, DevOps teams can use CI/CD pipelines to provision systems or update VMware virtual machines alongside other application updates (a minimal deployment sketch follows this list).

Figure 3 - Provisioning of a VMware VM from the Azure portal

  • Perform routine maintenance operations on virtual machines directly from the Azure portal, such as: stop, start, reboot, resize, add or update disks and manage network cards.
  • Provide self-service access to vSphere resources via Azure Arc. For administrators managing vSphere environments, this means they can easily delegate self-service access to VMware resources, governing and ensuring compliance through advanced Azure governance controls and Azure RBAC. In fact, it is possible to assign granular permissions on compute resources, storage, networks and templates.
  • Provide an inventory of virtual machines in distributed vSphere environments.
  • Run and manage at scale the onboarding of vSphere environments into Azure management services such as Azure Monitor Log Analytics and Azure Policy guest configuration. This allows you to orchestrate the installation of the specific Azure Arc agent (Connected Machine agent) directly from Azure.
  • Keep changes made directly through vCenter synchronized in Azure, thanks to automatic detection features.
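
As a minimal sketch of the Infrastructure-as-Code approach mentioned above, the snippet below deploys an ARM template from a CI/CD step using the Azure SDK for Python; the template file name and its parameters are hypothetical placeholders.

```python
# Sketch: deploy an ARM template (for example, one describing an Arc-enabled
# VMware virtual machine) as part of a CI/CD pipeline step.
# Requires: pip install azure-identity azure-mgmt-resource
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Hypothetical template file; in practice this would be exported or authored
# for the VMware VM resources projected through Azure Arc.
with open("vmware-vm.template.json") as f:
    template = json.load(f)

deployment = client.deployments.begin_create_or_update(
    resource_group,
    "arc-vmware-vm-deployment",
    {
        "properties": {
            "template": template,
            "parameters": {"vmName": {"value": "demo-vm-01"}},  # hypothetical parameter
            "mode": "Incremental",
        }
    },
).result()

print(f"Deployment state: {deployment.properties.provisioning_state}")
```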

Conclusions

Thanks to this new advanced integration, customers have the flexibility to innovate, even using their existing VMware environment. Furthermore, this approach provides an effective control mechanism to manage and govern all IT resources in a consistent way.

The management of Kubernetes environments with Azure Arc

The principle behind Azure Arc is to extend Azure management and governance practices to different environments and to adopt solutions and techniques, which are typically used in a cloud environment, even for on-premises environments. This article discusses how Azure Arc allows you to deploy and configure Kubernetes applications homogeneously across all environments, adopting modern DevOps techniques.

Thanks to Azure Arc-enabled Kubernetes, it is possible to connect and configure Kubernetes clusters located inside or outside the Azure environment. By connecting a Kubernetes cluster to Azure Arc, the cluster:

  • It appears in the Azure portal with an Azure Resource Manager ID and a managed identity.
  • It is inserted within an Azure subscription and a resource group.
  • It can be associated with tags like any other Azure resource.

To connect a Kubernetes cluster to Azure, agents must be deployed to the cluster. These agents:

  • They run in the Kubernetes namespace "azure-arc".
  • They manage connectivity to Azure.
  • They collect Azure Arc logs and metrics.
  • They check for configuration requests.

Figure 1 - Azure Arc-enabled Kubernetes agent architecture

Azure Arc-enabled Kubernetes uses SSL to protect data in transit. Furthermore, to ensure confidentiality, data at rest is stored in encrypted form in an Azure Cosmos DB database.

The Azure Arc agents on Kubernetes systems do not require inbound ports to be opened on firewalls; they only need outbound access to specific endpoints.
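
As a quick pre-onboarding check of this kind of outbound requirement, a small script can simply attempt TCP 443 connections to the required endpoints. The hostnames below are examples only; the authoritative endpoint list is in the Microsoft documentation.

```python
# Sketch: check outbound HTTPS (TCP 443) reachability to endpoints required by
# the Azure Arc agents. The hostnames are illustrative examples; take the real
# list from the Microsoft documentation for Azure Arc-enabled Kubernetes.
import socket

ENDPOINTS = [
    "management.azure.com",        # Azure Resource Manager (example)
    "login.microsoftonline.com",   # Azure AD authentication (example)
]

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for endpoint in ENDPOINTS:
    status = "reachable" if can_reach(endpoint) else "NOT reachable"
    print(f"{endpoint}:443 is {status}")
```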

For more details on this and for the procedure to follow to connect a Kubernetes cluster to Azure Arc you can consult this official Microsoft documentation.

Supported distributions

Azure Arc-enabled Kubernetes can be enabled on any Cloud Native Computing Foundation (CNCF)-certified Kubernetes cluster. In fact, the Azure Arc team has collaborated with leading industry partners to validate the compliance of their Kubernetes distributions with Azure Arc-enabled Kubernetes.

Supported scenarios

By enabling Azure Arc-enabled Kubernetes, the following scenarios are supported:

  • Connecting Kubernetes clusters running in environments other than Azure, to perform inventory operations, grouping and tagging.
  • Application deployment and configuration management based on GitOps mechanisms. In the context of Kubernetes, GitOps is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repository. This declaration is followed by a poll- and pull-based deployment of these cluster configurations using an operator. The Git repository can contain:
    • YAML manifests describing any valid Kubernetes resources, including Namespaces, ConfigMaps, Deployments, DaemonSets, etc.
    • Helm charts for application deployment.

Flux, a popular open-source GitOps tool, can be deployed on the Kubernetes cluster to facilitate the flow of configurations from a Git repository to the cluster.

For more details on the CI/CD workflow using GitOps for Azure Arc-enabled Kubernetes clusters, you can refer to this Microsoft documentation.

  • View and monitor cluster environments using Azure Monitor for containers.
  • Threat protection using Azure Defender for Kubernetes. The extension components collect the Kubernetes audit logs from all the control plane nodes of the cluster and send them to the Azure Defender for Kubernetes back end in the cloud for further analysis. The extension is registered with a Log Analytics workspace that is used for the data pipeline, but the audit logs are not stored in that workspace. The extension can also protect Kubernetes clusters located at other cloud providers, but it does not cover their managed Kubernetes services.
  • Apply settings via Azure Policy for Kubernetes.
  • Creation of custom locations used as targets for the deployment of Azure Arc-enabled Data Services, App Services on Azure Arc (which includes web, function, and logic apps) and Event Grid on Kubernetes.

Azure Arc-enabled Kubernetes also supports Azure Lighthouse, which allows service providers to manage, from their own tenant, the subscriptions and resource groups delegated by their customers.

Conclusions

Thanks to this technology, companies that need to operate in a hybrid environment will be able to minimize the effort of managing containerized workloads, extending services such as Azure Policy and Azure Monitor to Kubernetes clusters located in on-premises environments. Finally, through the GitOps approach, they will be able to simplify updates to cluster configurations in all environments, minimizing the risks associated with configuration issues.

Azure Arc for the management of server systems: benefits and usage scenarios

Heterogeneous infrastructures, applications based on different technologies and solutions located on different public clouds are increasingly common elements in corporate IT environments. These complexities, combined with the continuous evolution of datacenters, increasingly highlight the need to view, govern and protect IT assets, regardless of where they are running. Microsoft addressed this customer need by designing a solution that makes it possible to manage complex environments while also bringing cloud innovation to existing infrastructure: this solution is called Azure Arc. In particular, Azure Arc for servers extends the governance and management capabilities offered by Azure to physical machines and virtual systems that reside in environments other than Azure. In this article we explore the main benefits and implementation scenarios to consider when adopting Azure Arc for the management of server systems.

Azure Arc-enabled servers allow you to manage physical servers and virtual machines residing outside Azure, on the on-premises corporate network or at other cloud providers. This management experience, available for both Windows and Linux systems, is designed to be consistent with how you manage native virtual machines running in the Azure environment. In fact, a machine connected to Azure through Arc is treated in all respects as an Azure resource: each connected machine has a specific ID, is included in a resource group and benefits from standard Azure constructs.

Figure 1 – Azure Arc Management Overview

Main usage scenarios

Projecting server resources into Azure using Arc is a useful step towards taking advantage of the management and monitoring solutions described below.

Visibility and organization

In hybrid and multicloud environments, it can be particularly challenging to get a centralized view of all available resources. Some of these resources run in Azure, others in a local environment, at branch offices or at other cloud providers. By connecting resources to Azure Resource Manager via Azure Arc, it is possible to organize, centrally inventory and manage a wide range of resources, including Windows and Linux servers, SQL servers, Kubernetes clusters and Azure services running both in and outside Azure. This visibility can be obtained directly from the Azure portal, and specific queries can be performed using Azure Resource Graph.
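
As a minimal sketch of such a query, the snippet below uses the Azure Resource Graph SDK for Python to inventory Arc-enabled servers and Arc-enabled Kubernetes clusters across a subscription; the subscription ID is a placeholder.

```python
# Sketch: inventory Azure Arc resources with Azure Resource Graph.
# Requires: pip install azure-identity azure-mgmt-resourcegraph
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import (
    QueryRequest,
    QueryRequestOptions,
    ResultFormat,
)

subscription_id = "<subscription-id>"   # placeholder

client = ResourceGraphClient(DefaultAzureCredential())

query = QueryRequest(
    subscriptions=[subscription_id],
    query="""
Resources
| where type in~ ('microsoft.hybridcompute/machines',
                  'microsoft.kubernetes/connectedclusters')
| project name, type, location, resourceGroup
| order by type asc, name asc
""",
    options=QueryRequestOptions(result_format=ResultFormat.object_array),
)

result = client.resources(query)
for row in result.data:
    print(f"{row['type']}: {row['name']} ({row['location']}, rg={row['resourceGroup']})")
```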

Figure 2 - Azure Arc and resources in the Azure portal

Access management

With Azure Arc for servers it is possible to provide access to systems through Azure role-based access control (Azure RBAC). Furthermore, in the presence of different environments and tenants, Azure Arc also integrates with Azure Lighthouse. This scenario can be of particular interest to providers that offer managed services to multiple customers.

Monitor

Through VM Insights it is possible to consult the main performance data from the guest operating system. Thanks to powerful data aggregation and filtering functions, it is easy to monitor the performance of a very large number of systems and to identify those with performance problems. Furthermore, it is possible to generate a map of the interconnections between the various components residing on different systems. The maps show how VMs and processes interact with each other and can identify dependencies on third-party services. The solution also allows you to check for connection errors and to track, in real time, connection counts, network bytes sent and received by processes, and latencies observed at the service level.

Figure 3 – Monitoring: Performance

Figure 4 – Monitoring: Map

Azure Policy guest configurations

Guest configuration policies allow you to audit settings inside a system, both for virtual machines running in the Azure environment and for Arc-connected machines. Validation is performed by the client and by the Guest Configuration extension for:

  • Operating system configuration
  • Configuration or presence of applications
  • Environment settings

At the moment, most Azure guest configuration policies only audit settings inside the machine; they do not apply configurations. For more information on this scenario, you can consult the article Azure Governance: how to control system configurations in hybrid and multicloud environments.

Inventory

This feature allows you to retrieve inventory information on: installed software, files, Windows registry keys, Windows services and Linux daemons. All of this can easily be accessed directly from the Azure portal.

Change Tracking

The Change Tracking functionality monitors changes made to systems regarding daemons, files, registry keys, software and Windows services. This feature can be very useful, in particular, for diagnosing specific problems and for raising alerts in the event of unexpected changes.

Figure 5 – Change Tracking and Inventory

Update Management

The Update Management solution provides overall visibility into update compliance for both Windows and Linux systems. The solution is not only useful for reporting purposes; it also allows you to schedule deployments to install updates within specific maintenance windows.

Figure 6 – Update Management

Azure Defender

Projecting server resources into Azure using Arc is a useful step towards ensuring that all the machines in the infrastructure are protected by Azure Defender for Servers. As with an Azure VM, the Log Analytics agent must also be deployed on the target system. To simplify the onboarding process, this agent is deployed using the VM extension mechanism, which is one of the advantages of using Arc.

Once the Log Analytics agent has been installed and connected to a workspace used by ASC, the machine is ready to benefit from the various security features offered in the Azure Defender for Servers plan.

Deployment Tools

Deployments can be simplified thanks to Azure Automation State Configuration and Azure VM extensions. This makes it possible to handle post-deployment configuration or software installation using the Custom Script Extension.

Conclusions

Maintaining control and managing the security of workloads running on-premises, in Azure and on other cloud platforms can be particularly challenging. Thanks to Azure Arc for servers, it is possible to easily extend the typical Azure management and monitoring services to workloads residing outside the Azure environment. Furthermore, Azure Arc allows you to obtain detailed information and to organize the various IT resources in a single centralized console, useful for effectively managing and controlling your entire IT environment.

How to extend Azure Security Center protection to all resources through Azure Arc

Azure Security Center (ASC) was originally developed with the intention of becoming the reference tool for protecting resources in the Azure environment. The strongly felt customer need to protect resources located in environments other than Azure has led to an evolution of the solution which, thanks to integration with Azure Arc, allows you to extend its protection and security management tools to any infrastructure. This article explains how Azure Security Center and Azure Arc allow you to protect non-Azure resources located on-premises or at other cloud providers, such as virtual machines, Kubernetes services and SQL resources.

The adoption of Azure Defender using the principles of Azure Arc

Azure Arc allows you to manage workloads residing outside Azure, on the on-premises corporate network or at another cloud provider. This management experience is designed to provide consistency with native Azure management methodologies.

Since Azure Security Center and Azure Arc can be used together, it is possible to offer advanced protection for three different scenarios:

Figure 1 - Protection scenarios

By enabling Azure Defender workload protection at the subscription level in Azure Security Center, it is also possible to cover the resources and workloads residing in hybrid and multicloud environments, all in an extremely simple way thanks to Azure Arc.

Azure Defender for Arc-enabled server systems

By connecting a server machine to Azure via Arc, it is considered to all intents and purposes as an Azure resource. Each connected machine has a specific ID, is included in a resource group and benefits from standard Azure constructs such as Azure Policies and tagging. This applies to both Windows and Linux systems.

To offer this experience, the specific Azure Arc agent must be installed on each machine that you plan to connect to Azure (the "Azure Connected Machine" agent).

The Azure Arc Connected Machine agent consists of the following logical components:

  • The Hybrid Instance Metadata Service (HIMDS), which manages the connection to Azure and the Azure identity of the connected machine.
  • The Guest Configuration agent, which provides in-guest policy and guest configuration features.
  • The Extension Manager agent, which manages the installation, uninstallation and update of machine extensions.

Figure 2 – Azure Arc Agent Components

The Connected Machine agent requires secure outbound communication to Azure Arc on TCP port 443.
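Because only outbound HTTPS is needed, reachability can be verified with a simple script; the sketch below checks two representative Azure endpoints on TCP 443 (the full, authoritative endpoint list is in the official network requirements documentation).

```python
# Sketch: verify outbound TCP 443 reachability towards representative Azure endpoints.
# The complete endpoint list for Azure Arc is in the official network requirements docs.
import socket

ENDPOINTS = ["management.azure.com", "login.microsoftonline.com"]

for host in ENDPOINTS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}:443 reachable")
    except OSError as exc:
        print(f"{host}:443 NOT reachable ({exc})")
```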

This agent provides no other features and does not replace the Azure Log Analytics agent, which remains necessary when you want to proactively monitor the operating system and workloads running on the machine.

For more information about installing Azure Arc, see this official Microsoft document.

Azure Arc-enabled servers can benefit from several Azure Resource Manager-related features such as Tags, Policies and RBAC, as well as some features related to Azure Management.
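For example, once servers are connected they can be enumerated like any other ARM resource; the sketch below, which assumes the azure-mgmt-hybridcompute package (attribute names may differ slightly between versions), lists the Arc-enabled machines in a subscription with their connection status and tags.

```python
# Sketch: list Azure Arc-enabled servers in a subscription with status and tags
# (assumes azure-mgmt-hybridcompute; attribute names may vary slightly by version).
from azure.identity import DefaultAzureCredential
from azure.mgmt.hybridcompute import HybridComputeManagementClient

subscription_id = "<subscription-id>"
client = HybridComputeManagementClient(DefaultAzureCredential(), subscription_id)

for machine in client.machines.list_by_subscription():
    print(machine.name, machine.location, machine.status, machine.tags)
```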

Activating Azure Defender for Server with Azure Arc

The projection of server resources into Azure using Arc is a useful step towards ensuring that all the machines in the infrastructure are protected by Azure Defender for Servers. As with an Azure VM, the Log Analytics agent must also be deployed on the target system. To simplify the onboarding process, this agent is deployed through the VM extension mechanism, which is one of the advantages of using Arc.

Once the Log Analytics agent has been installed and connected to a workspace used by ASC, the machine will be ready to use and benefit from the various security features offered in the Azure Defender for Servers plan.
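A minimal sketch of this onboarding step follows, using the Azure SDK for Python and the generic ARM resource API to deploy the Log Analytics agent extension to a Windows Arc-enabled server; the names, workspace ID/key, region and api-version are placeholders (Linux machines use the OmsAgentForLinux extension type).

```python
# Sketch: deploy the Log Analytics agent extension to a Windows Arc-enabled server
# (placeholders throughout; on Linux use publisher/type for OmsAgentForLinux).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import GenericResource

subscription_id = "<subscription-id>"
extension_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.HybridCompute/machines/<arc-machine-name>"
    "/extensions/MicrosoftMonitoringAgent"
)

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
poller = client.resources.begin_create_or_update_by_id(
    resource_id=extension_id,
    api_version="2022-03-10",  # check the current Microsoft.HybridCompute api-version
    parameters=GenericResource(
        location="westeurope",  # must match the Arc machine's region
        properties={
            "publisher": "Microsoft.EnterpriseCloud.Monitoring",
            "type": "MicrosoftMonitoringAgent",
            "typeHandlerVersion": "1.0",
            "settings": {"workspaceId": "<log-analytics-workspace-id>"},
            "protectedSettings": {"workspaceKey": "<log-analytics-workspace-key>"},
        },
    ),
)
print(poller.result().id)
```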

For each resource, it is possible to view the status of the agent and its current security recommendations:

Figure 3 – Azure Arc Connected Machine in ASC

If you need to onboard a non-Azure server into Azure Defender and its operating system version is not yet supported by the Azure Arc agent, onboarding can still be performed by installing only the Log Analytics agent on the machine.

The icons in the Azure portal allow you to easily distinguish the different resources:

Figure 4 - Icons of the different resources present in ASC

 

Azure Defender for Arc-enabled Kubernetes resources

Azure Defender for Kubernetes also allows you to protect clusters located on-premises with the same threat detection features offered for Azure Kubernetes Service clusters (AKS).

For all Kubernetes clusters other than AKS, it is necessary to connect the cluster environment to Azure Arc. Once the cluster environment is connected, Azure Defender for Kubernetes can be activated as a cluster extension on the Azure Arc-enabled Kubernetes resource.

Figure 5 - Interaction between Azure Defender for Kubernetes and the Kubernetes cluster enabled for Azure Arc

The extension components collect the Kubernetes audit logs from all the nodes of the cluster control plane and send them to the back-end of Azure Defender for Kubernetes in the cloud for further analysis. The extension is registered with a Log Analytics workspace that is used for the data pipeline, but the audit logs are not stored in the Log Analytics workspace.

The extension also allows you to protect Kubernetes clusters located at other cloud providers, but it does not cover their managed Kubernetes services.
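As a rough sketch of the activation described above, the extension can be created as an ARM child resource of the connected cluster via the Azure SDK for Python; the extension type string, api-version and resource names below are assumptions to verify against the current documentation.

```python
# Sketch: enable the Azure Defender extension on an Azure Arc-enabled Kubernetes cluster
# (extension type and api-version are assumptions to verify in the official docs).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import GenericResource

subscription_id = "<subscription-id>"
extension_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>"
    "/providers/Microsoft.KubernetesConfiguration/extensions/azure-defender"
)

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
poller = client.resources.begin_create_or_update_by_id(
    resource_id=extension_id,
    api_version="2022-03-01",  # assumed Microsoft.KubernetesConfiguration api-version
    parameters=GenericResource(
        properties={"extensionType": "microsoft.azuredefender.kubernetes"},
    ),
)
print(poller.result().id)
```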

Azure Defender for Arc-enabled SQL Server resources

Azure Defender for SQL allows you to constantly monitor SQL Server deployments for known threats and vulnerabilities. These features can be used not only for virtual machines in Azure, but also for SQL Server instances running in an on-premises environment and in multicloud deployments. Azure Arc-enabled SQL Servers are also part of Azure Arc for servers. To enable the Azure services, the SQL Server instance must be registered with Azure Arc using the Azure portal and a dedicated registration script. After registration, the instance is represented in Azure as a SQL Server – Azure Arc resource. The properties of this resource reflect a subset of the SQL Server configuration settings.

Figure 6 - Diagram illustrating the Azure Arc architecture for SQL Server resources
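Once registered, these instances appear as ARM resources and can be inventoried programmatically; the sketch below queries Azure Resource Graph for them (it assumes the azure-mgmt-resourcegraph package, and the resource type string is an assumption to verify).

```python
# Sketch: inventory "SQL Server - Azure Arc" resources with Azure Resource Graph
# (requires azure-mgmt-resourcegraph; resource type string is an assumption to verify).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

subscription_id = "<subscription-id>"
client = ResourceGraphClient(DefaultAzureCredential())

query = QueryRequest(
    subscriptions=[subscription_id],
    query=(
        "resources "
        "| where type == 'microsoft.azurearcdata/sqlserverinstances' "
        "| project name, resourceGroup, location"
    ),
)
# Each row describes one registered SQL Server - Azure Arc instance.
for row in client.resources(query).data:
    print(row)
```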


Conclusions

Managing security and maintaining control of workloads running on-premises, in Azure and on other cloud platforms can be particularly challenging. Thanks to Azure Arc, it is possible to easily extend Azure Defender coverage to workloads residing outside the Azure environment. Furthermore, Azure Security Center allows you to obtain detailed information on the security of your hybrid environment in a single centralized console, useful for effectively controlling the security of your IT infrastructure.

Azure Governance: how to control system configurations in hybrid and multicloud environments

Several companies are investing in hybrid and multicloud technologies to achieve the flexibility they need to innovate and meet changing business needs. In these scenarios, customers face the challenge of using IT resources efficiently, in order to best achieve their business goals, by implementing a structured IT governance process. This can be achieved more easily with solutions that allow you to inventory, organize and enforce control policies on your IT resources in a centralized way, wherever they reside. The Azure Arc solution brings together different technologies with the aim of supporting hybrid and multicloud scenarios, where Azure services and management principles are extended to any infrastructure. In this article we will explore how, thanks to the adoption of Azure Guest Configuration Policies, it is possible to control the configuration of systems running in Azure, in on-premises datacenters or at other cloud providers.

The principle behind Azure Arc

The principle behind Azure Arc is to extend Azure management and governance practices to different environments and to adopt typically cloud-native approaches, such as DevOps techniques (infrastructure as code), for on-premises and multicloud environments as well.

Figure 1 – Azure Arc overview

Enabling systems to Azure Arc

Enabling servers for Azure Arc allows you to manage physical servers and virtual machines residing outside Azure, on the on-premises corporate network or at another cloud provider. This applies to both Windows and Linux systems. This management experience is designed to provide consistency with the native management methodologies used for Azure virtual machines. In fact, a machine connected to Azure through Arc is considered in all respects an Azure resource. Each connected machine has a specific ID, is included in a resource group and benefits from standard Azure constructs such as Azure Policies and tagging.

To offer this experience, the specific Azure Arc agent must be installed on each machine you plan to connect to Azure ("Azure Connected Machine"). The following operating systems are currently supported:

  • Windows Server 2008 R2, Windows Server 2012 R2 or higher (this includes core servers)
  • Ubuntu 16.04 and 18.04 LTS (x64)
  • CentOS Linux 7 (x64)
  • SUSE Linux Enterprise Server (SLES) 15 (x64)
  • Red Hat Enterprise Linux (RHEL) 7 (x64)
  • Amazon Linux 2 (x64)
  • Oracle Linux 7

The Azure Arc Connected Machine agent consists of the following logical components:

  • The Hybrid Instance Metadata service (HIMDS), which manages the connection to Azure and the Azure identity of the connected machine.
  • The Guest Configuration agent, which provides in-guest policy and guest configuration features.
  • The Extension Manager agent, which manages the installation, removal and update of machine extensions.

Figure 2 – Azure Arc Agent Components

The Connected Machine agent requires secure outbound communication to Azure Arc on TCP port 443.

This agent provides no other features and does not replace the Azure Log Analytics agent, which remains necessary when you want to proactively monitor the operating system and workloads running on the machine.

For more information about installing Azure Arc, see this official Microsoft document.

Azure Arc-enabled servers can benefit from several Azure Resource Manager-related features such as Tags, Policies and RBAC, as well as some features related to Azure Management.

Figure 3 – Azure Management for all IT resources

Azure Guest Configuration Policies

Guest Configuration Policies allow you to audit settings inside a machine, both for virtual machines running in Azure and for "Arc Connected" machines. Validation is performed by the Guest Configuration client and extension and covers:

  • Operating system configuration
  • Configuration or presence of applications
  • Environment settings

At the moment, most Azure Guest Configuration Policies only allow you to audit settings inside the machine; they do not apply configurations. The exception is a built-in policy that configures the operating system time zone on Windows machines.

Requirements

Before you can audit the settings inside a machine through Guest Configuration Policies, you must:

  • Enable an extension on the Azure VM, required to download the assigned policy assignments and the corresponding configurations. This extension is not required for "Arc Connected" machines, as it is included in the Arc agent.
  • Make sure that the machine has a system-assigned managed identity, used for authentication when reading from and writing to the guest configuration service (see the sketch after this list).
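For an Azure VM, the system-assigned identity can be enabled from the portal or programmatically; a minimal sketch with the azure-mgmt-compute SDK (assuming a recent track-2 version, with placeholder names) follows. Arc-connected machines instead obtain their Azure identity through the agent's HIMDS component described earlier.

```python
# Sketch: enable a system-assigned managed identity on an Azure VM
# (assumes a recent track-2 azure-mgmt-compute SDK; names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineIdentity, VirtualMachineUpdate

subscription_id = "<subscription-id>"
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.virtual_machines.begin_update(
    "<resource-group>",
    "<vm-name>",
    VirtualMachineUpdate(identity=VirtualMachineIdentity(type="SystemAssigned")),
)
print(poller.result().identity.principal_id)
```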

Operation

Azure provides built-in initiatives and a large number of Guest Configuration Policies, but you can also create custom ones, for both Windows and Linux environments.

Guest Configuration policy assignment works in the same way as standard Azure Policies, so you can group them into initiatives. Specific parameters can also be configured for Guest Configuration Policies, and there is at least one parameter that allows you to include Azure Arc-enabled servers. Once you have the desired policy definition, it can be assigned to a subscription or, more narrowly, to a specific resource group. You also have the option of excluding certain resources from the application of the policy.
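A minimal assignment sketch with the Azure SDK for Python follows; the definition ID is a placeholder for the GUID of the built-in Guest Configuration policy you choose, and the IncludeArcMachines parameter is what brings Azure Arc-enabled servers into scope.

```python
# Sketch: assign a built-in Guest Configuration policy at resource group scope,
# including Azure Arc-enabled servers (the definition GUID is a placeholder).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"
scope = f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"

client = PolicyClient(DefaultAzureCredential(), subscription_id)
assignment = client.policy_assignments.create(
    scope=scope,
    policy_assignment_name="audit-guest-config-example",
    parameters=PolicyAssignment(
        display_name="Audit guest configuration on Azure and Arc machines",
        policy_definition_id=(
            "/providers/Microsoft.Authorization/policyDefinitions/<built-in-definition-guid>"
        ),
        parameters={"IncludeArcMachines": {"value": "true"}},
    ),
)
print(assignment.id)
```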

Following the assignment, it is possible to assess the compliance status in detail directly from the Azure portal.
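Compliance can also be queried programmatically; one option is Azure Resource Graph, as in the sketch below that summarizes compliance states across a subscription (it assumes the azure-mgmt-resourcegraph package, and the policyresources table and field names should be verified against the current query documentation).

```python
# Sketch: summarize policy compliance states with Azure Resource Graph
# (requires azure-mgmt-resourcegraph; table/field names to verify in the docs).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

subscription_id = "<subscription-id>"
client = ResourceGraphClient(DefaultAzureCredential())

query = QueryRequest(
    subscriptions=[subscription_id],
    query=(
        "policyresources "
        "| where type == 'microsoft.policyinsights/policystates' "
        "| summarize count() by tostring(properties.complianceState)"
    ),
)
for row in client.resources(query).data:
    print(row)
```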

Inside the machine, the Guest Configuration agent uses local tools to audit the configurations.

The Guest Configuration agent checks for new or modified guest policy assignments every 5 minutes; once an assignment is received, the settings are checked at 15-minute intervals.

The Cost of the Solution

The cost of Azure Guest Configuration Policies is based on the number of servers registered with the service that have one or more guest configurations assigned. Any other type of Azure Policy that is not based on guest configuration is offered at no additional cost, including the virtual machine extensions that enable services such as Azure Monitor and Azure Security Center, and auto-tagging policies. Billing is prorated on an hourly basis and also includes the change tracking features available through Azure Automation. For more details on costs, please visit Microsoft's official page.

Conclusions

IT environments are constantly evolving and often have to deliver business-critical applications based on different technologies, running on heterogeneous infrastructures and, in some cases, using solutions provided by different public clouds. Adopting a structured IT governance process is made easier by Guest Configuration Policies and by the capabilities of Azure Arc, which allow you to control and support hybrid and multicloud environments more effectively.