The Dlsv5 and Dldsv5 VM series are ideal for workloads that require less RAM per vCPU than the standard general-purpose VM sizes. Target workloads include web servers, gaming, video encoding, AI/ML, batch processing, and more. These VM series can improve price-performance and reduce the cost of running workloads that do not need the additional memory per vCPU. The new VMs are available in sizes with and without local temporary storage.
Networking
Azure Firewall enhancements for troubleshooting network performance and traffic visibility (preview)
Microsoft Azure Firewall now offers new logging and metric enhancements designed to increase visibility and provide more insight into the traffic processed by the firewall. IT security administrators can use a combination of these capabilities (in preview) to identify the root cause of application performance issues.
Application Gateway v2 is introducing a collection of new capabilities that give you further control over network exposure when using Application Gateway v2 SKUs:
private IP-only frontend configuration (elimination of the public IP);
enhanced control over Network Security Groups:
elimination of the GatewayManager service tag requirement;
ability to define a Deny All Outbound rule;
enhanced control over Route Table rules:
forced tunneling support (learning of the 0.0.0.0/0 route via BGP);
route table rule of 0.0.0.0/0 with next hop Virtual Appliance.
Storage
Azure File Sync agent v16
The Azure File Sync agent v16 release has finished flighting and is now available on both Microsoft Update and the Microsoft Download Center.
Improvements and issues that are fixed:
improved Azure File Sync service availability:
Azure File Sync is now a zone-redundant service, which means that an outage in a single zone has limited impact and service resiliency is improved. To fully leverage this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or geo-zone-redundant storage (GZRS) replication. To learn more about the different redundancy options for your storage accounts, see Azure Storage redundancy;
immediately run server change enumeration to detect file changes that were missed on the server:
Azure File Sync uses the Windows USN journal feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If changed files are missed due to journal wrap or other issues, they will not sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. If you don't want to wait until the next server change enumeration job runs, you can now use the Invoke-StorageSyncServerChangeDetection PowerShell cmdlet to immediately run server change enumeration on a server endpoint path (a minimal example follows this list);
bug fix for the PowerShell script FileSyncErrorsReport.ps1;
miscellaneous reliability and telemetry improvements for cloud tiering and sync.
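For example, if you suspect that changes under a server endpoint were missed, a minimal sketch of running on-demand change detection on the server could look like the following; the server endpoint path is a placeholder and the module path reflects the default agent installation folder, which may differ in your environment:

```powershell
# Run on the Windows Server that hosts the server endpoint.
# Import the server cmdlets shipped with the Azure File Sync agent
# (default installation path shown; adjust if the agent was installed elsewhere).
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"

# Immediately run change enumeration on the server endpoint path,
# instead of waiting for the scheduled 24-hour job.
Invoke-StorageSyncServerChangeDetection -ServerEndpointPath "D:\Shares\Finance"
```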
More information about this release:
this release is available for Windows Server 2012 R2, Windows Server 2016, Windows Server 2019 and Windows Server 2022 installations;
the agent version for this release is 16.0.0.0;
installation instructions are documented in KB5013877.
Azure Files NFS: nconnect support
Azure Files NFS v4.1 shares now support the nconnect mount option. nconnect is a client-side Linux mount option that increases performance at scale. With nconnect, the NFS mount uses more TCP connections between the client and the Azure Files service for NFSv4.1. Using nconnect can improve a client's throughput/IOPS by up to 4x and reduce TCO by up to 70%. There is no additional billing cost associated with using this feature, which is available for all existing and new shares.
Azure Premium SSD v2 Disk Storage in new regions
Azure Premium SSD v2 Disk Storage is now available in East US 2, North Europe, and West US 2 regions. This next-generation storage solution offers advanced general-purpose block storage with the best price performance, delivering sub-millisecond disk latencies for demanding IO-intensive workloads at a low cost. It is well-suited for a wide range of enterprise production workloads, including SQL Server, Oracle, MariaDB, SAP, Cassandra, MongoDB, big data analytics, gaming on virtual machines, and stateful containers.
Hyperconverged infrastructure (HCI) is increasingly popular because it simplifies the management of the IT environment, reduces costs, and scales easily when needed. Azure Stack HCI is the Microsoft solution for creating a hyper-converged infrastructure that runs workloads in an on-premises environment and provides a strategic connection to various Azure services to modernize your IT infrastructure. Properly configuring Azure Stack HCI networking is critical to ensuring security, application reliability, and performance. This article explores the fundamentals of configuring Azure Stack HCI networking, covering the available networking options and best practices for network design and configuration.
There are different network models that you can take as a reference to design, deploy, and configure Azure Stack HCI. The following paragraphs cover the main aspects to consider when making implementation choices at the network level.
Number of nodes that make up the Azure Stack HCI cluster
A single Azure Stack HCI cluster can consist of a single node and can scale up to 16 nodes.
If the cluster consists of a single server, at the physical level it is recommended to provide the following network components, also shown in the image:
single TOR switch (L2 or L3) for north-south traffic;
two to four teamed network ports, connected to the switch, to handle management and computational traffic.
Furthermore, it is optionally possible to provide the following components:
two RDMA NICs, useful if you plan to add a second server to the cluster to scale your setup;
a BMC card for remote management of the environment.
Figure 1 – Network architecture for an Azure Stack HCI cluster consisting of a single server
If your Azure Stack HCI cluster consists of two or more nodes, you need to evaluate the following parameters.
Need for Top-of-Rack (TOR) switches and their level of redundancy
For Azure Stack HCI clusters consisting of two or more nodes in a production environment, the presence of two TOR switches is strongly recommended, so that disruptions to north-south traffic can be tolerated in case of failure or maintenance of a single physical switch.
If the Azure Stack HCI cluster is made up of two nodes, you can avoid providing switch connectivity for storage traffic.
Two-node configuration without TOR switch for storage communication
In an Azure Stack HCI cluster that consists of only two nodes, the storage RDMA NICs can be connected in full-mesh mode to reduce switch costs, for example by reusing switches you already own.
In certain scenarios, such as branch offices or labs, the following network model can be adopted, which provides for a single TOR switch. By applying this pattern you get cluster-wide fault tolerance, and it is suitable if interruptions in north-south connectivity can be tolerated when the single physical switch fails or requires maintenance.
Figure 2 – Network architecture for an Azure Stack HCI cluster consisting of two servers, without storage switches and with a single TOR switch
Although SDN L3 services are fully supported with this scheme, routing services such as BGP will need to be configured on the firewall device that sits on top of the TOR switch if the switch does not support L3 services.
If you want greater fault tolerance for all network components, you can adopt the following architecture, which provides two redundant TOR switches:
Figure 3 – Network architecture for an Azure Stack HCI cluster consisting of two servers, without storage switches and redundant TOR switches
SDN L3 services are fully supported by this scheme. Routing services such as BGP can be configured directly on the TOR switches if they support L3 services. Features related to network security do not require additional configuration of the firewall device, since they are implemented at the virtual network adapter level.
At the physical level, it is recommended to provide the following network components for each server:
two to four teamed network ports, connected to the TOR switches, to handle management and computational traffic;
two RDMA NICs in a full-mesh configuration for east-west storage traffic. Each cluster node must have a redundant connection to the other cluster node;
optionally, a BMC card for remote management of the environment.
In both cases the following connectivity is required:
| Networks | Management and computational | Storage | BMC |
| --- | --- | --- | --- |
| Network speed | At least 1 Gbps (10 Gbps recommended) | At least 10 Gbps | TBD |
| Type of interface | RJ45, SFP+ or SFP28 | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two to four ports in teaming | Two standalone ports | One port |
Two or more node configuration using TOR switches also for storage communication
When you plan an Azure Stack HCI cluster with more than two nodes, or if you don't want to preclude the possibility of easily adding more nodes to the cluster later, storage traffic must also pass through the TOR switches. In these scenarios, you can adopt a configuration where dedicated network cards are maintained for storage traffic (non-converged), as shown in the following picture:
Figure 4 – Network architecture for an Azure Stack HCI cluster consisting of two or more servers, redundant TOR switches also used for storage traffic and non-converged configuration
At the physical level, it is recommended to provide the following network components for each server:
two teamed NICs to handle management and computational traffic. Each NIC is connected to a different TOR switch;
two RDMA NICs in standalone configuration. Each NIC is connected to a different TOR switch. SMB Multichannel functionality ensures path aggregation and fault tolerance (see the sketch after this list);
optionally, a BMC card for remote management of the environment.
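If the cluster hosts are managed with Network ATC (available in Azure Stack HCI 22H2 and later), a non-converged layout like this can be expressed as two separate intents. A minimal sketch is shown below; the adapter names are placeholders to adapt to your hardware:

```powershell
# Hypothetical adapter names: adjust to the physical NICs in your servers.
# Intent 1: management + compute traffic on the teamed pNICs connected to the TOR switches.
Add-NetIntent -Name "MgmtCompute" -Management -Compute -AdapterName "pNIC01", "pNIC02"

# Intent 2: storage (RDMA) traffic on the dedicated standalone NICs.
Add-NetIntent -Name "Storage" -Storage -AdapterName "SMB01", "SMB02"
```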
These are the connections provided:
| Networks | Management and computational | Storage | BMC |
| --- | --- | --- | --- |
| Network speed | At least 1 Gbps (10 Gbps recommended) | At least 10 Gbps | TBD |
| Type of interface | RJ45, SFP+ or SFP28 | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two ports in teaming | Two standalone ports | One port |
Another possibility to consider is a "fully-converged" configuration of the network cards, as shown in the following image:
Figure 5 – Network architecture for an Azure Stack HCI cluster consisting of two or more servers, redundant TOR switches also used for storage traffic and fully-converged configuration
The latter solution is preferable when:
bandwidth requirements for north-south traffic do not require dedicated cards;
the switches have a limited number of physical ports;
you want to keep the costs of the solution low.
At the physical level, it is recommended to provide the following network components for each server:
two teamed RDMA NICs to handle management, computational, and storage traffic. Each NIC is connected to a different TOR switch. SMB Multichannel functionality ensures path aggregation and fault tolerance (see the sketch after this list);
optionally, a BMC card for remote management of the environment.
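As with the previous model, if you manage host networking with Network ATC, the fully-converged layout maps to a single intent that carries management, computational, and storage traffic on the same pair of RDMA adapters. A minimal sketch follows; the adapter names are placeholders:

```powershell
# Hypothetical adapter names: adjust to the physical RDMA NICs in your servers.
# A single converged intent: management, compute and storage share the two teamed adapters.
Add-NetIntent -Name "Converged" -Management -Compute -Storage -AdapterName "pNIC01", "pNIC02"
```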
These are the connections provided:
| Networks | Management, computational and storage | BMC |
| --- | --- | --- |
| Network speed | At least 10 Gbps | TBD |
| Type of interface | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two ports in teaming | One port |
SDN L3 services are fully supported by both of the above models. Routing services such as BGP can be configured directly on TOR switches if they support L3 services. Features related to network security do not require additional configuration for the firewall device, since they are implemented at the virtual network adapter level.
Type of traffic that must pass through the TOR switches
To choose the most suitable TOR switches, it is necessary to evaluate the network traffic that will flow through these network devices, which can be divided into:
management traffic;
computational traffic (generated by the workloads hosted by the cluster), which can be divided into two categories:
standard traffic;
SDN traffic;
storage traffic.
Microsoft has recently changed its approach in this area: TOR switches are no longer required to meet every network requirement for every feature, regardless of the type of traffic they carry. Physical switches are now supported according to the type of traffic they carry, which allows you to choose from a larger number of quality network devices at a lower cost.
This document lists the required industry standards for specific network switch roles used in Azure Stack HCI implementations; these standards help ensure reliable communication between nodes in Azure Stack HCI clusters. The same documentation also shows the switch models supported by the various vendors, based on the type of traffic expected.
Conclusions
Properly configuring Azure Stack HCI networking is critical to ensuring that the hyper-converged infrastructure runs smoothly, with security, optimal performance, and reliability. This article covered the basics of configuring Azure Stack HCI networking and analyzed the available network options. The advice is to always carefully plan the networking aspects of Azure Stack HCI, choosing the most appropriate network option for your business needs and following implementation best practices.
In March, Microsoft announced several pieces of news regarding Azure management services. In this series of articles, published on a monthly basis, the major announcements are listed, accompanied by the references needed to investigate them further.
The following diagram shows the different areas related to management, which are covered in this series of articles:
Monitor
Azure Monitor
Ingestion client libraries
Microsoft announces the initial release of the Azure Monitor Ingestion client libraries for .NET, Java, JavaScript, and Python. These libraries allow you to:
Upload custom logs to a Log Analytics workspace.
Modernize security standards by requiring Azure Active Directory token-based authentication.
Complement the Azure Monitor Query libraries, which are used to query logs in a Log Analytics workspace.
Collecting Syslog from AKS nodes using Azure Monitor Container Insights (preview)
Customers can now use Azure Monitor Container Insights to collect Syslog from their Azure Kubernetes Service (AKS) cluster nodes. In combination with SIEM systems (Microsoft Sentinel) and monitoring tools (Azure Monitor), Syslog collection tracks security and health events of IaaS and containerized workloads.
The Azure Monitor managed service for Prometheus now supports PromQL queries
Thanks to Azure Workbooks support for the Azure Monitor managed service for Prometheus, users can use Prometheus workbooks to run PromQL queries in the portal and create custom reports based on them.
Azure Monitor supports Availability Zones in new regions
Azure Monitor continues to expand its availability zone support by adding three regions: Canada Central, France Central and Japan East.
Azure Monitor alerts support cloning
When viewing the details of an alert rule in the Azure portal, a new "Duplicate" option is now available, which allows you to clone the alert rule. When you select this option for an existing alert rule, the rule creation wizard starts, pre-populated with the original alert rule configuration, while still allowing you to make changes.
Configure
Azure Automation
Announced the retirement of the agent-based Hybrid Worker (Windows and Linux) on 31 August 2024
Azure Automation is deprecating the agent-based Hybrid Runbook Worker (Windows and Linux), which will be retired on 31 August 2024. You must migrate to extension-based Hybrid Workers (Windows and Linux) by that date.
The main advantages of the extension-based Hybrid Runbook Worker are:
it uses system-assigned managed identities, so you don't need to manage certificates for authentication;
it offers automatic updating of minor versions;
it simplifies hybrid worker management at scale, with native integration with Azure Resource Manager and governance through Azure Policy.
Migrating authentication from Run As account to Managed Identity in ASR
It is now possible to migrate the authentication type used by Azure Site Recovery from Run As accounts to managed identities, directly from the Azure portal. Authentication of runbooks via Run As accounts will be deprecated on 30 September 2023; before that date, runbooks need to be migrated to use managed identities.
Govern
Azure Cost Management
Updates related to Microsoft Cost Management
Microsoft is constantly looking for new ways to improve Microsoft Cost Management, the solution that provides greater visibility into where costs are accumulating in the cloud, identifies and prevents incorrect spending patterns, and optimizes costs. In this article, the latest improvements and updates concerning this solution are reported.
Azure Arc
Improved Azure Arc integration with Datadog
Microsoft is improving the ability to observe and manage IT infrastructure thanks to the integration of Microsoft Azure Arc with Datadog. Building on the established collaboration, Microsoft is integrating Datadog natively with Azure Arc, providing Datadog customers with rich insights from Azure Arc-enabled resources directly in Datadog dashboards. Customers can monitor real-time data during cloud migrations and the performance of applications running in public cloud, hybrid, or multicloud environments.
Secure
Microsoft Defender for Cloud
New features, bug fixes and deprecated features of Microsoft Defender for Cloud
Microsoft Defender for Cloud development is constantly evolving and improvements are being made on an ongoing basis. To stay up to date on the latest developments, Microsoft updates this page, which provides information about new features, bug fixes, and deprecated features. In particular, this month the main news concerns:
availability of a new Defender for Storage plan, which includes near real-time scanning for malware and detection of threats to sensitive data;
data-aware security posture (preview);
new experience for managing Azure default security policies;
Defender CSPM (Cloud Security Posture Management) is now generally available (GA);
ability to create custom security standards and recommendations in Microsoft Defender for Cloud;
Microsoft Cloud Security Benchmark (MCSB) version 1.0 is now available (GA);
some regulatory compliance standards are now available in government clouds;
new preview recommendation for Azure SQL Servers;
new alert in Defender for Key Vault.
Protect
Azure Backup
Immutable vaults for Azure Backup
Immutable vaults are now also available for production environments and offer greater security for backups, ensuring that recovery points, once created, cannot be deleted before they expire. Azure Backup prevents any operation on immutable vaults that could lead to backup data loss. Furthermore, you can lock immutable vault ownership to make it irreversible. This helps protect your backups from threats such as ransomware attacks and malicious actors, preventing operations such as deleting backups or reducing retention in backup policies.
Backup for Azure Kubernetes Service (preview)
Organizations using Azure Kubernetes Service (AKS) increasingly run stateful applications on their clusters, deploying workloads such as Apache Kafka-based messaging queues and databases such as Postgres and MongoDB. With data stored within the cluster, backup and recovery become a major concern for IT managers. Ensuring that Kubernetes backup capabilities are scalable, flexible, and purpose-built for Kubernetes is central to an overall data protection plan. Azure Backup has now introduced Backup for AKS. This solution simplifies the backup and recovery of containerized applications and data and allows customers to configure a scheduled backup for both cluster state and application data. Backup for AKS is aligned with the Container Storage Interface (CSI) to offer Kubernetes-aware backup capabilities. The solution allows customers to unlock different scenarios, such as data backup for application security and regulatory requirements, cloning of development/test environments, and rollback management.
Azure Backup allows you to keep backups in vaults for Azure Blobs and Azure Files (preview)
Azure Backup now supports transferring Azure Blob and Azure File backups to vaults. A vault is a logical entity that stores backups and recovery points created over time. You can define a backup schedule for creating recovery points and specify retention settings that determine how long backups will be stored in the vault. Backups in the vault are isolated from the source data and allow you to access the data and perform restores even if the source data has been compromised.
Listed below are some of the main features that can be achieved by placing backups in vaults:
Offsite copy of data: allows you to restore mission-critical data from backups, regardless of the state of the source data.
Long-term retention of backup data, which helps you meet compliance requirements, particularly in the financial and healthcare sectors, with strict guidelines on the data retention period.
Recovery in alternate location: allows you to restore data to an alternate account if the source storage account is compromised or create different copies of your data for testing or development purposes.
Centralized management through the backup center: backups in vaults can be monitored and analyzed at scale alongside other protected workloads using Azure Backup.
Secure backups: the built-in security features of Azure Backup, such as multi-user authorization (MUA) for critical backup operations, data encryption, and role-based access control (RBAC), help protect the backups in the vault and meet your backup security needs.
Azure Site Recovery
Improved the ability to rename network interfaces and disks of protected virtual machines
Azure Site Recovery (ASR) introduces a new, easier way to name and rename the network interfaces (NICs) and disks of protected virtual machines in Recovery Services vaults.
Migrate
Azure Migrate
New Azure Migrate releases and features
Azure Migrate is the service in Azure that includes a large portfolio of tools that you can use, through a guided experience, to effectively address the most common migration scenarios. To stay up to date on the latest developments in the solution, please consult this page, which provides information about new releases and features. In particular, this month the biggest news is support for web app discovery and assessment for Azure App Service on Hyper-V and physical servers.
Azure Database Migration
Offline Azure SQL Database migrations with the Azure SQL Migration extension
Offline migrations of SQL Server databases running on-premises, on Azure virtual machines, or on any virtual machine running in a private or public cloud to Azure SQL Database can now be performed through the Azure SQL Migration extension. The new migration feature of the Azure SQL Migration extension for Azure Data Studio provides an end-to-end experience to modernize SQL Server on Azure SQL Database. The extension allows you to prepare for the migration with actions to remediate any blockers and to obtain recommendations for correctly sizing the Azure SQL Database targets, including hardware configuration in the Hyperscale service tier.
Evaluation of Azure
To test and evaluate the services provided by Azure for free, you can access this page.
Azure VMware Solution: Azure Hybrid Benefit for SQL Server
Azure Hybrid Benefit (AHB) for SQL Server is now available in Azure VMware Solution (AVS). With AHB for SQL Server on Azure VMware Solution, you can take advantage of the unlimited virtualization licensing capability included with the SQL Server Software Assurance. To this end, you can configure and enable VM-Host placement policies via the Azure portal and apply Azure Hybrid Benefit.
Networking
Azure Firewall Basic
Azure Firewall Basic is a new SKU for Azure Firewall designed for small and medium-sized businesses. Azure Firewall Basic can be deployed inside a virtual network or a virtual hub. This gives businesses the flexibility to choose the deployment option that best meets their needs.
Seamless integration with other Azure security services
Simple setup and easy to use: setup in just a few minutes; automated deployment (deploy as code); zero maintenance with automatic updates; central management via Azure Firewall Manager
Cost-effective: designed to deliver essential, cost-effective protection of your resources within your virtual network
Pricing and billing for Azure Firewall Basic with secured virtual hub will be effective starting May 1, 2023.
Azure Virtual Network Manager
Azure Virtual Network Manager (AVNM) is now generally available. AVNM is a highly scalable and available network management solution that allows you to simplify network management across subscriptions globally. Using its centralized network management capabilities, you can manage your network resources at scale from a single pane of glass.
Key features of Azure Virtual Network Manager include:
global management of virtual network resources across regions, subscriptions, and tenants;
automated management and deployment of virtual network topologies to create hub-and-spoke architectures*;
high-priority security rule enforcement at scale to protect your network resources*;
safe deployment of network configurations across desired regions.
*The mesh topology and security admin rule features remain in public preview and will become generally available soon
Azure Traffic Manager: reserved namespaces for subdomains
Azure Traffic Manager has added functionality for reserving domain labels for Traffic Manager profiles. Any customer requesting a Traffic Manager profile of the form label1.trafficmanager.net will have the "label1" label reserved for their tenant, and another user will not be able to create a new Traffic Manager profile with this name or with subdomains below it. For example, if a user creates a profile named label1.trafficmanager.net, then "label1" and all labels of the form "<labelN>…<label2>.<label1>.trafficmanager.net" will be reserved for the subscription. With these enhancements, once a namespace is created by a customer under the trafficmanager.net domain, it will not be available to any other tenant. This ensures that customers have full control over the label tree used in their Traffic Manager profiles and enables them to better manage their namespace without having to worry about a specific name or label being in use by other tenants.
Illumio for Azure Firewall (preview)
Microsoft partnered with Illumio, the leader in Zero Trust Segmentation, to build Illumio for Azure Firewall, an integrated solution that brings the benefits of Zero Trust Segmentation to Azure Firewall.
Illumio for Azure Firewall uses the Azure platform to protect your resources across your Azure virtual networks and at your Azure perimeter. It enables organizations to understand application traffic and dependencies and apply consistent protection across your environment – limiting exposure, containing breaches, and improving efficiency. Illumio for Azure Firewall also helps simplify Zero Trust Segmentation by enhancing visibility, streamlining policy management, and providing scalable security.
Key benefits:
Reduce security risks with a single view of your east-west and north-south traffic based on Azure Firewall flow data within your Azure subscriptions.
Gain a holistic view of your application traffic with real-time visibility of interactions and dependencies across your environment.
Easily deploy and configure Azure application-based policies within the Illumio platform.
Deploy Azure Firewall policies confidently with policies that automatically scale along with your applications.
Avoid application downtime by understanding the impact of Azure Firewall policies before they are enforced.
Works with all 3 SKUs of Azure Firewall – Basic, Standard, and Premium – to meet the needs of any organization.
Accelerated Connections for Network Virtual Appliances now in Azure Marketplace (preview)
Accelerated Connections is a new offering that enhances Accelerated Networking-enabled vNICs, giving customers the flexibility to select the connections-per-second (CPS) capability that best matches their Azure implementation. This offering enables you to achieve bare-metal-like performance levels for connections per second (CPS) in Azure for the first time.
Storage
Ephemeral OS disks support encryption at host using customer-managed keys
Ephemeral OS disks can be encrypted at host using platform-managed keys or customer-managed keys; the default is platform-managed keys. This feature enables customers to meet their organization's compliance needs.
Azure Ultra Disk Storage in Brazil Southeast, South Africa North and UAE North
Azure Ultra Disk Storage is now available in one availability zone in the Brazil Southeast, South Africa North, and UAE North regions. Azure Ultra Disk Storage offers high throughput, high IOPS, and consistent low-latency disk storage for Azure Virtual Machines (VMs). Ultra Disk Storage is well suited for data-intensive workloads such as SAP HANA, top-tier databases, and transaction-heavy workloads.
Encryption scopes on hierarchical namespace enabled storage accounts
Encryption scopes introduce the option to provision multiple encryption keys in a storage account with a hierarchical namespace. Using encryption scopes, you can now provision multiple encryption keys and choose to apply an encryption scope either at the container level (as the default scope for blobs in that container) or at the blob level. The capability is available for the REST, HDFS, NFSv3, and SFTP protocols in an Azure Blob / Data Lake Storage Gen2 storage account. The key that protects an encryption scope may be either a Microsoft-managed key or a customer-managed key in Azure Key Vault. You can choose to enable automatic rotation of a customer-managed key that protects an encryption scope: when you generate a new version of the key in your Key Vault, Azure Storage automatically updates the version of the key protecting the encryption scope within a day.
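As an illustration, a minimal PowerShell sketch for creating a Microsoft-managed-key encryption scope and using it as the default scope for a new container might look like the following; resource group, account, scope, and container names are placeholders:

```powershell
# Create an encryption scope protected by Microsoft-managed keys
# on a storage account with hierarchical namespace enabled.
New-AzStorageEncryptionScope -ResourceGroupName "rg-data" `
    -StorageAccountName "contosodatalake" `
    -EncryptionScopeName "project-alpha" `
    -StorageEncryption

# Create a container that uses the scope as its default and
# prevents blobs from overriding it.
New-AzRmStorageContainer -ResourceGroupName "rg-data" `
    -StorageAccountName "contosodatalake" `
    -Name "alpha-data" `
    -DefaultEncryptionScope "project-alpha" `
    -PreventEncryptionScopeOverride $true
```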
Performance Plus for Azure Disk Storage (preview)
Azure Disk Storage now offers a new feature called Performance Plus, which enhances the IOPS and throughput performance of Premium SSD, Standard SSD, and Standard HDD disks that are sized 1TB or larger. Performance Plus is offered for free and can be enabled through deployments with the Azure Command-Line Interface (CLI) and PowerShell.
The adoption of cloud computing is becoming more widespread, but managing and controlling cloud resources can be a daunting challenge for organizations. In this context, Microsoft's Azure Policies represent a fundamental tool for cloud governance, able to help companies define, apply and enforce security and compliance policies in a consistent and automated manner. This article will explore the importance of Azure Policies in managing cloud services, illustrating the benefits of using this solution and some more common use cases. Furthermore, some useful tips for defining effective policies and for integrating Azure Policies into the overall cloud governance strategy will be presented.
The common need and possible approaches
The common requirement is to standardize, and in some cases impose, how resources are configured in the cloud environment. All this is done to obtain environments that meet compliance regulations, to monitor security and resource costs, and to standardize the design of the different architectures.
Achieving this result is not easy, especially in complex environments with multiple Azure subscriptions on which different groups of operators develop and operate.
These goals can be achieved with a traditional approach, which blocks operators from directly accessing cloud resources (through the portal, API, or CLI):
Figure 1 – Traditional approach
However, this type of traditional approach is not very flexible, because it involves a loss of agility in controlling the deployment of resources.
In this regard, it is instead recommended to use a mechanism provided natively by the Azure platform, which allows you to drive governance processes to achieve the desired control without impacting speed, a fundamental element of modern IT operations with cloud resources:
Figure 2 – Modern approach with Azure Policy
What can be achieved thanks to Azure Policies
By activating Azure Policy it is possible to:
activate and carry out real-time evaluation of the criteria present in the policies;
evaluate policy compliance periodically or upon request;
activate real-time remediation operations, also for existing resources.
All this translates into the ability to apply and enforce policy compliance at scale, together with the related remediation actions.
How the Azure Policy mechanism works
The working mechanism of Azure Policy is simple and integrated into the platform. When a request to configure an Azure resource is made through Azure Resource Manager (ARM), it is intercepted by the layer containing the policy evaluation engine. This engine makes an assessment based on the active Azure policies and establishes the legitimacy of the request.
Figure 3 – Working principle of Azure Policy in creating resources
The same mechanism is then repeated periodically or upon specific request to evaluate the compliance status of existing resources.
Figure 4 – Working principle of Azure Policy in resource control
Azure already provides many built-in policies ready to apply, or you can create custom ones to suit your needs. An Azure Policy definition is written in JSON and follows a well-defined structure, described in this Microsoft document. You also have the possibility of creating initiatives, which are collections of multiple policies.
When you have the desired policy definition, you can assign it to a management group, to a subscription or, in a more limited way, to a specific resource group. The same applies to initiatives. You can also exclude certain resources from the policy assignment if necessary.
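To make the flow concrete, here is a minimal, hedged PowerShell sketch that defines a simple custom policy from a JSON rule, assigns it to a resource group, and triggers an on-demand compliance evaluation; the names, scope, and allowed-locations list are illustrative only:

```powershell
# Illustrative policy rule: deny resources created outside the allowed locations.
$rule = @'
{
  "if": {
    "not": {
      "field": "location",
      "in": [ "westeurope", "northeurope" ]
    }
  },
  "then": {
    "effect": "deny"
  }
}
'@

# Create the custom policy definition from the JSON rule.
$definition = New-AzPolicyDefinition -Name "allowed-locations-example" `
    -DisplayName "Allowed locations (example)" `
    -Policy $rule -Mode All

# Assign the definition to a resource group (it could also be a subscription
# or a management group).
$scope = (Get-AzResourceGroup -Name "rg-policy-demo").ResourceId
New-AzPolicyAssignment -Name "allowed-locations-example" `
    -PolicyDefinition $definition -Scope $scope

# Trigger an on-demand compliance evaluation instead of waiting
# for the periodic scan.
Start-AzPolicyComplianceScan -ResourceGroupName "rg-policy-demo"
```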
Following the assignment, you can evaluate the state of compliance in detail and, if necessary, apply remediation actions.
Use cases for Azure policies
The main areas that can be governed by appropriately adopting Azure Policies are the following:
financial: resources deployed in Azure for which a consistent metadata strategy needs to be applied to achieve effective cost mapping;
data location: sovereignty requirements that require data to reside in certain geographic locations;
unnecessary expenses: resources that are no longer used or that have not been properly disposed of resulting in unnecessary expenses for the company;
management inefficiencies: an inconsistent resource naming and tagging strategy can make troubleshooting and routine maintenance demands of existing architectures difficult;
business interruption: SLAs are required to ensure that systems are built in accordance with business requirements. Therefore, architectures must be designed according to SLAs and must be investigated if they do not meet them.
Conclusions
In the context of Cloud Technical Governance, it is essential to define and apply rules that ensure Azure resources always comply with the defined company standards. Thanks to Azure Policies, even as the complexity and number of services increase, you can always ensure advanced control of your Azure environment.
Solutions related to the public cloud have registered considerable interest from many companies in recent years, attracted by the possibilities offered and the related benefits. In fact, among the main characteristics of the public cloud we find dynamism and speed of provisioning, which can be a great vector of innovation for organizations in the IT field. However, if you apply procedures and practices already consolidated in the on-premises world to cloud environments, you risk making serious mistakes. The cloud is by nature different and, by applying the same processes as the on-premises environment, you are likely to get the same results, the same problems, similar implementation times, and even higher costs. It is therefore essential to implement a process of Cloud Technical Governance that ensures effective and efficient use of IT resources in the cloud environment, in order to best achieve business goals. In particular, governance of the Azure environment is made possible by a series of solutions specifically designed to allow management and constant control of the various Azure resources at scale. This article shows some of the main Microsoft solutions to consider in order to better define and manage the governance of services in the Azure environment and beyond.
Public cloud: a double-edged sword
Talking about public clouds today means referring to resources and services that a company can hardly do without, but in some respects it can be a double-edged sword.
The main features and potential strengths can hide pitfalls if not governed properly:
Self-service delegation, that is, the possibility of delegating the creation of resources to several working groups, greatly increases agility and provisioning speed, but at the same time it can lead to a total lack of control if it is not done in a correct and controlled way.
In the public cloud, almost everything is pay-per-consumption. If we combine this characteristic with uncontrolled self-service delegation, where everyone creates resources without appropriate governance, the result can be very high and unnecessary costs.
When we talk about the public cloud, we also know that flexibility and scalability are two great elements of strength and value. However, this flexibility, the ability to adopt hundreds of solutions operating according to self-service logic, combined with hybrid connectivity environments, must also focus our attention on new potential security threats.
Although Azure, like the other major public clouds, has a very large number of certifications, it introduces solutions based on new technologies that may be difficult to reconcile with corporate compliance requirements.
Adopt the cloud with proper Technical Governance
In light of these considerations, the advice is to adopt solutions in the public cloud to remain competitive in this ever-changing digital world, but with the appropriate Cloud Technical Governance practices that help the company mitigate risk and create guardrails. Governance policies within an organization, if properly managed, also act as an early warning system to detect potential problems.
When it comes to cloud governance, several disciplines emerge. Cost management is one of the fundamental subjects that absolutely must be addressed and managed. To this are added equally important topics, such as the definition of security and compliance baselines, identity management, the acceleration of deployment processes, and the standardization of created resources.
Therefore, applying the concept of governance to an ICT system in the cloud means defining, implementing, and continuously verifying all the rules that make it:
with predictable costs;
secure according to the guidelines defined by corporate security at any level, not necessarily technical;
supportable by all working groups involved in the implementations;
subject to audit in terms of compliance with current and company regulations.
The main Microsoft tools for Governance
Cloud governance can be thought of as a journey, for which Microsoft provides several platform tools to make it run smoothly. The following paragraphs show some of the main solutions to take into consideration to implement functional governance.
Cloud Adoption Framework for Azure
From a design point of view, Microsoft provides the Cloud Adoption Framework for Azure, a set of documentation and tools that guide you through the best practices for implementing solutions in the Azure environment. Among these best practices, which are worth adopting broadly and tailoring to the specific needs of each customer, there is also a specific section for governance. This can be seen as a starting point for applying these practices in detail.
Figure 1 – Design and standardization: Cloud Adoption Framework for Azure
Azure Policy
Azure Policy, natively integrated into the platform, is a key element for governance, as it allows you to control the environment and obtain consistency across the deployed Azure resources.
Azure Policies allow you to manage:
compliance:
enable native or custom policies for all resource types;
real-time policy assessment and enforcement;
periodic and on-demand compliance assessment;
large-scale distribution:
application of policies to Management Group with control over the whole organization;
applying multiple policies and aggregating policy states through initiatives;
exclusion scope;
Policy as Code with Azure DevOps;
remedies and automations:
correction of existing resources at scale;
automatic remediation at deployment time;
activation of alerts when a resource is not compliant.
Defender for Cloud
The Microsoft Defender for Cloud solution provides a set of features that cover two important pillars of security for modern architectures that adopt cloud components: Cloud Security Posture Management (CSPM) and Cloud Workload Protection (CWP).
Figure 2 – The security pillars covered by Microsoft Defender for Cloud
Within Cloud Security Posture Management (CSPM), Defender for Cloud provides the following features:
visibility: to assess the current security situation;
hardening guidance: to improve security efficiently and effectively.
Thanks to continuous assessment, Defender for Cloud can continuously discover new resources as they are deployed and evaluate whether they are configured according to security best practices. If not, the resources are flagged and you get a prioritized list of recommendations on what to fix to improve their security. As regards Cloud Workload Protection (CWP), Defender for Cloud delivers security alerts based on Microsoft Threat Intelligence. Furthermore, it includes a wide range of advanced and intelligent workload protections, provided through specific Microsoft Defender plans for the different types of resources present in the subscriptions and in hybrid and multi-cloud environments.
Microsoft Cost Management
To face the important challenge of keeping under control and optimizing the expenses incurred for the resources created in the cloud environment, the main tool is Microsoft Cost Management, which allows you to:
Monitor cloud spending: the solution tracks the use of resources and allows you to manage costs, also on AWS and GCP, with a single, unified view. This gives access to operational and financial information and enables well-informed decisions.
Increase accountability: allows you to increase the accountability of the various company areas through budgets, cost allocation, and chargeback policies.
Optimize costs: through the application of industry best practices.
Microsoft Sustainability Manager
Today, efficient and effective use of IT resources must also take environmental impact and energy consumption into consideration. Microsoft Sustainability Manager is a Microsoft Cloud for Sustainability solution that unifies data to better monitor and manage the environmental impact of resources. Regardless of the stage you are at on the path to the zero-emissions goal, this solution makes it possible to document and support the process of reducing emissions. In fact, the solution allows you to:
gain the visibility needed to promote sustainability;
simplify data collection and emissions calculations;
analyze and report more efficiently the environmental impact and progress of a company in terms of sustainability.
Not just Azure, but a governance for all IT assets
In situations where a hybrid or multi-cloud strategy is being adopted, the question arises: "how can you view, govern, and protect IT assets, regardless of where they are running?"
The answer to this question can be: “adopting Azure Arc”.
In fact, the underlying principle of Azure Arc is to extend Azure management and governance practices to different environments and to adopt typically cloud-based solutions even for on-premises environments.
Figure 3 – Azure Arc overview
To achieve this, Microsoft has extended the Azure Resource Manager model so that it can also support hybrid environments, thus facilitating the application of the control features present in Azure to all infrastructure components.
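As a simple illustration of this extension of the ARM model, a server outside Azure can be projected into a resource group as an Azure Arc-enabled machine. The sketch below assumes the Az.ConnectedMachine PowerShell module is available on the target server and that all names used are placeholders:

```powershell
# Run on the on-premises (or other-cloud) server to be governed through Azure.
# Requires the Az.ConnectedMachine module and an authenticated Azure context.
Install-Module -Name Az.ConnectedMachine -Scope CurrentUser
Connect-AzAccount

# Install the Connected Machine agent and register this server as an
# Azure Arc-enabled machine, making it an ARM resource to which Azure Policy,
# tags and RBAC can be applied.
Connect-AzConnectedMachine -ResourceGroupName "rg-hybrid" `
    -Name "onprem-srv-01" -Location "westeurope"
```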
Conclusions
To ensure effective use of the public cloud, it is important to adopt the right cloud governance practices that help mitigate risks and protect the company from improper use of IT resources. There are many disciplines to consider and the governance of your IT environment needs to extend across all resources, regardless of where they are. Microsoft offers a number of tools and solutions to address the governance challenge, however, a lot of experience is needed to implement established and reliable processes.
Azure VMware Solution in Microsoft Azure Government (preview)
Azure VMware Solution is a fully managed service in Azure that customers can use to extend their on-premises VMware workloads more seamlessly to the cloud, while maintaining their existing skills and operational processes. Azure VMware Solution is already available in Azure commercial for any customer, including public sector organizations. With this launch, Microsoft is extending the same benefits of Azure VMware Solution to Azure Government, where US Government customers and their partners can meet their security and compliance needs.
Spot Priority Mix
Spot Priority Mix is a new feature for Virtual Machine Scale Sets (VMSS) with Flexible Orchestration Mode enabled. With Spot Priority Mix, customers can now mix spot and standard virtual machines in their Flexible scale set, providing the high availability of standard virtual machines and the cost savings of Spot virtual machines. This feature also allows customers to autoscale their scale set with a percentage split of Spot and standard virtual machines, providing even more flexibility and cost optimization. With Spot Priority Mix, customers can specify a base number of standard virtual machines and a percentage split of spot and standard virtual machines to be used when the scale set capacity is above the base number of standard virtual machines. This allows customers to ensure that their critical workloads are always running on standard virtual machines, while taking advantage of the cost savings offered by spot virtual machines for non-critical, interruptible workloads.
Networking
Azure Network Watcher: new enhanced connection troubleshoot
As customers bring sophisticated, high-performance workloads into Azure, there is a critical need for increased visibility and control over the operational state of the complex networks running these workloads. One such common day-to-day scenario is troubleshooting connectivity.
Microsoft Azure Network Watcher already provides numerous specialized standalone tools to diagnose and troubleshoot connectivity issues. These tools include:
IP Flow Verify: helps detect traffic blocked by network security group (NSG) rules;
Next Hop: determines the intended next hop for traffic according to the effective routes;
Port Scanner: helps determine any ports blocking traffic.
With a one-stop solution for these otherwise disjointed operations and actionable insights at your fingertips, the new, more comprehensive Network Watcher connection troubleshoot aims to reduce mean time to resolution and improve your experience (a minimal example follows the feature list below).
New features:
Unified solution for troubleshooting NSGs, user-defined routes, and blocked ports
Actionable insights with a step-by-step guide to resolve issues such as:
Inability to open a socket at the specified source port
No servers listening on designated destination ports
Misconfigured or missing routes
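As a hedged sketch, a connectivity check between a VM and a destination endpoint can already be run from PowerShell with the Network Watcher connectivity cmdlet; the resource names, region, and destination below are placeholders:

```powershell
# Get the Network Watcher instance for the region of the source VM.
$nw = Get-AzNetworkWatcher -Location "westeurope"

# Source VM (placeholder names).
$vm = Get-AzVM -ResourceGroupName "rg-app" -Name "vm-web-01"

# Check connectivity from the VM to a destination address and port;
# the result reports reachability, latency and the hops traversed.
Test-AzNetworkWatcherConnectivity -NetworkWatcher $nw `
    -SourceId $vm.Id `
    -DestinationAddress "10.1.2.4" `
    -DestinationPort 443
```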
Scale improvements and metrics enhancements on Azure’s regional WAF
You can now do more with less using the increased scale limits for Azure's regional Web Application Firewall (WAF) running on Application Gateway. These increased scale limits give you greater flexibility and scale when configuring your WAF to meet the needs of your applications and network. Application Gateway v2 WAF-enabled SKUs running Core Rule Set (CRS) 3.2 or higher now support a higher number of frontend ports, HTTP load-balancing rules, backend HTTP settings, SSL certificates, sites, and redirect configurations. The regional WAF also increased the number of HTTP listeners from 40 to 200. You can leverage the new metrics for Azure's regional v2 WAF when you use CRS 3.2 or higher, or if your WAF has bot protection and geo-filtering enabled. The regional WAF now allows you to filter the metrics total requests, managed rule matches, custom rule matches, and bot protection matches by the dimensions policy name, policy scope, and ruleset name, in addition to the dimensions that the WAF already supports.
Azure Virtual Network Manager (AVNM) event logging is now available for public preview. AVNM is a highly scalable and available network management solution that allows you to simplify network management across subscriptions globally. With this new feature, you can monitor changes in network group membership by accessing event logs. Whenever a virtual network is added to or removed from a network group, a corresponding log is emitted for that specific addition or removal. You can view and interact with these logs using Azure Monitor’s Log Analytics tool in the Azure Portal, or you can store them in your storage account, or send them to an event hub or partner solution.
Storage
More transactions at no additional cost for Azure Standard SSD
Microsoft has made changes to the hourly billable transaction costs that can result in additional cost savings. The total cost of Azure Standard SSD storage depends on the size and number of disks and on the number of transactions. Any transactions that exceed the maximum hourly limit no longer incur additional charges. The new prices took effect on March 6, 2023.
Customer Initiated Storage Account Conversion
Microsoft now supports the self-service ability to convert storage accounts from non-zonal redundancy (LRS/GRS) to zonal redundancy (ZRS/GZRS). You can now save time by initiating a storage account conversion directly through the Azure portal rather than creating a support ticket. Converting your storage account to zonal redundancy allows you to increase intra-regional resiliency and availability.
Online live resize of persistent volumes
Live resizing capability allows you to dynamically scale up your persistent volumes without application downtime. Previously, in order to resize the disk, you had to scale down your deployment to zero pods, wait several minutes for the disk to detach, update your persistent volume claim, and then scale back up the deployment. With Live resize of persistent volumes, you can just modify your persistent volume claim directly, avoiding any application downtime.
Azure Ultra Disk Storage in the China North 3 Azure region
Azure Ultra Disk Storage is now available in the China North 3 Azure region. Azure Ultra Disk Storage offers high throughput, high input/output operations per second (IOPS), and consistent low latency disk storage for Azure Virtual Machines. Ultra Disk Storage is well-suited for data-intensive workloads such as SAP HANA, top-tier databases, and transaction-heavy workloads.
Azure Archive Storage now available in West US 3
Azure Archive Storage provides a secure, low-cost means for retaining rarely accessed data including backup and archival storage. Now, Azure Archive Storage is available in West US 3.
During the month of February, several pieces of news regarding Azure management services were announced. This article provides an overview of the month's top news, so that you can stay up to date on these topics and have the references needed for further investigation.
The following diagram shows the different areas related to management, which are covered in this series of articles:
Govern
Azure Cost Management
Updates related to Microsoft Cost Management
Microsoft is constantly looking for new ways to improve Microsoft Cost Management, the solution that provides greater visibility into where costs are accumulating in the cloud, identifies and prevents incorrect spending patterns, and optimizes costs. In this article, some of the latest improvements and updates regarding this solution are reported.
Secure
Microsoft Defender for Cloud
New features, bug fixes and deprecated features of Microsoft Defender for Cloud
Microsoft Defender for Cloud development is constantly evolving and improvements are being made on an ongoing basis. To stay up to date on the latest developments, Microsoft updates this page, which provides information about new features, bug fixes, and deprecated features, including this month's main news.
Improved experience for creating and managing private endpoints for Recovery Services vaults
Azure Backup allows you to use private endpoints to perform backups and restores securely, using private IPs of virtual networks. Azure Backup recently introduced several enhancements that provide an easier experience for creating and using private endpoints for Recovery Services vaults. The main improvements made as part of this update are as follows:
Ability to create private endpoints without managed identities
Use fewer private IPs per vault
You no longer need to create separate private endpoints for blob and queue services
Azure Site Recovery
New Update Rollup
For Azure Site Recovery, Update Rollup 66 was released, which solves several issues and introduces some improvements. The details and the installation procedure can be found in the specific KB.
Migrate
Azure Migrate
New Azure Migrate releases and features
Azure Migrate is the service in Azure that includes a large portfolio of tools that you can use, through a guided experience, to effectively address the most common migration scenarios. To stay up to date on the latest developments in the solution, please consult this page, which provides information about new releases and features. In particular, this month the main news concerns discovery and assessment support for SQL Server Always On failover cluster instances and Always On availability groups.
Azure Database Migration
Database migrations with login and TDE
The new feature of the Azure SQL Migration extension makes the post-migration experience smoother. In fact, you now have instance-level object migration support, including SQL and Windows logins, permissions, server roles, and updated user mappings for previously migrated databases.
Furthermore, you can now perform TDE-enabled database migrations with a wizard that automates the backup process, copying and reconfiguring database encryption keys for Azure SQL Managed Instance targets.
Evaluation of Azure
To test for free and evaluate the services provided by Azure you can access this page.
Create disks from CMK-encrypted snapshots across subscriptions and in the same tenant
To ease manageability, Microsoft has made disks encrypted with customer-managed keys (CMK) more flexible by allowing the creation of disks and snapshots from a CMK-encrypted source across subscriptions within the same tenant.
Incremental snapshots for Premium SSD v2 Disk Storage (preview)
Incremental snapshots for Premium SSD v2 Disk Storage are available in the East US and West Europe Azure regions. This new capability is particularly important to customers who want to create a backup copy of the data stored on their disks to recover from accidental deletes, to have a last line of defense against ransomware attacks, or to ensure business continuity. You can now create incremental snapshots of Premium SSD v2 disks, which are stored on Standard HDD storage. Snapshot resources can be used to store incremental backups of your disk, to create or recover new disks, or to download snapshots to on-premises locations. This new feature adds an extra layer of data protection and flexibility for users.
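For illustration, creating an incremental snapshot of a Premium SSD v2 data disk with PowerShell could look like the following minimal sketch; the resource group, disk, and snapshot names are placeholders:

```powershell
# Get the source Premium SSD v2 disk.
$disk = Get-AzDisk -ResourceGroupName "rg-data" -DiskName "datadisk-v2-01"

# Build an incremental snapshot configuration from the disk.
$snapConfig = New-AzSnapshotConfig -SourceUri $disk.Id `
    -Location $disk.Location `
    -CreateOption Copy `
    -Incremental

# Create the incremental snapshot; only the changes since the previous
# incremental snapshot of the same disk are stored.
New-AzSnapshot -ResourceGroupName "rg-data" `
    -SnapshotName "datadisk-v2-01-snap-001" `
    -Snapshot $snapConfig
```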
Azure Managed Lustre (preview)
Azure Managed Lustre is a managed, pay-as-you-go file system purpose-built for high-performance computing (HPC) and AI workloads. This high-performance distributed parallel file system delivers hundreds of GB/s of storage bandwidth with solid-state disk latency, and integrates fully with Azure services such as Azure HPC Compute, Azure Kubernetes Service, and Azure Machine Learning.
Use this system to:
Simplify operations
Reduce setup costs
Eliminate complex maintenance
Azure NetApp Files updates (preview)
Azure NetApp Files volume user and group quotas: in some scenarios you may want to limit the storage consumption of users and groups within a volume. With Azure NetApp Files volume user and group quotas you can now do so. User and/or group quotas enable you to restrict the storage space that a user or group can use within a specific Azure NetApp Files volume. You can choose to set default (same for all users) or individual user quotas on all NFS, SMB, and dual-protocol-enabled volumes. On all NFS-enabled volumes, you can set default (same for all groups) or individual group quotas.
You can now create Azure NetApp Files large volumes ranging from 100 TiB to 500 TiB in size.
Azure NetApp Files now supports a smaller 2 TiB capacity pool size, lowered from 4 TiB, when the pool is used with volumes that use standard network features.
Azure NetApp Files volumes now support encryption with customer-managed keys (CMK), using Azure Key Vault for key storage, to enable an extra layer of security for data at rest.
In an era where companies increasingly depend on computer systems for their operations, data protection and business continuity are elements that must necessarily be taken into consideration. Unforeseen events such as natural disasters, hardware failures, cyber attacks and human errors can cause disruption of IT services, resulting in significant financial losses. This is where the Disaster Recovery (DR) plan comes into play, allowing companies to quickly restore IT services and minimize the impact of unexpected events on the business. For large companies with heterogeneous and complex IT environments, it can be particularly challenging to activate a Disaster Recovery plan. This article explains how Azure VMware Solution (AVS), thanks to its characteristics, can be the ideal solution for developing a Disaster Recovery plan quickly and easily.
The importance of a DR plan in the company
The presence of a good Disaster Recovery strategy may seem obvious, but many companies continue to neglect its importance. Among the main factors to consider for DR are:
Business continuity: a DR plan allows companies to quickly restore IT systems, minimizing the impact of unforeseen events and ensuring business continuity.
Minimization of financial losses: IT service outages can cause significant financial losses. A DR plan allows you to minimize these losses by restoring IT systems as quickly as possible.
Regulatory compliance: many regulations require companies to have a DR plan in place to protect data and ensure business continuity.
Customer trust: business continuity is an important factor in customer trust. A DR plan can demonstrate to customers that the company can handle unexpected events and ensure continuity of services.
Challenges to face in the activation of a DR plan
Even when its importance is understood, companies often find themselves facing various challenges when they have to activate a Disaster Recovery (DR) plan. Some of the more common challenges are:
Recovery site availability: Disaster Recovery (DR) is usually activated at a dedicated recovery site separate from the corporate headquarters. This recovery site may be located in a different geographical area to provide greater protection against catastrophic events that could affect the area where the company headquarters is located. The recovery site must be adequately equipped and configured to support critical business operations, so that these can be restored as quickly as possible.
Recovery times: the time it takes to restore IT systems is one of the biggest challenges in the event of a service outage. Businesses must do everything possible to reduce downtime and restore IT services as quickly as possible.
Data access: in the event that the IT service disruption is caused by a natural disaster, a cyber attack or human error, access to data may be compromised. It is important that businesses protect their data and that backups are kept in a safe place, to ensure the recovery of information.
Staff training: company personnel must be adequately trained to be able to manage recovery procedures effectively. This requires an investment in staff training and development.
Introduction to the adoption of Azure
Microsoft Azure was designed from the ground up to help customers reduce the costs and complexity of their IT environment while improving its reliability and efficiency.
Figure 1 – The comprehensive approach to building an infrastructure designed for different workloads
There is no one-size-fits-all way to adopt cloud solutions, but it makes sense to give customers the ability to embrace the cloud at their own pace, in some cases even adopting the same technological solutions they are currently using in their on-premises environment. Providing platform symmetry between on-premises and cloud, where appropriate, is useful for addressing workload migration scenarios, but also for activating Disaster Recovery plans.
This article considers Azure VMware Solution (AVS), the service designed, built and supported by Microsoft, and approved by VMware, which allows customers to use physical VMware vSphere clusters hosted in Azure.
Azure VMware Solution: why use it for Disaster Recovery
Azure VMware Solution is a service that allows you to provision and run a full VMware Cloud Foundation environment on Azure. VMware Cloud Foundation is VMware's hybrid cloud platform for managing virtual machines and orchestrating containers, where the entire stack is based on a hyper-converged infrastructure (HCI).
Figure 2 – Azure VMware Solution overview
This architecture model ensures consistent infrastructure and operations across any private and public cloud, including Microsoft Azure. Azure VMware Solution allows customers to adopt a full set of VMware features, backed by the "VMware Cloud Verified" validation. This solution helps achieve consistency, performance and interoperability for existing VMware workloads, without sacrificing the speed, scalability and availability of Azure's global infrastructure. Disaster Recovery is among the main adoption scenarios for Azure VMware Solution.
Talking to enterprise customers, we see a variety of drivers behind the adoption of a solution such as Azure VMware Solution to activate an effective DR strategy:
Speed: AVS allows you to implement DR plans quickly and efficiently thanks to its hybrid cloud architecture, virtual machine replication and the advanced automation features you can adopt. These elements allow companies to reduce the time required to activate a DR plan and to restore critical operations in the event of a disaster.
Costs and complexity: Azure VMware Solution can help reduce the cost of setting up a Disaster Recovery (DR) site. In fact, AVS enables companies to extend their on-premises VMware solutions to Azure, creating a hybrid cloud DR environment that offers flexibility and scalability. Instead of purchasing expensive hardware and infrastructure for a separate DR site, companies can use Azure as the recovery site and pay only for the cloud resources they actually use when DR is activated. This allows companies to reduce the initial costs of DR activation and to simplify the IT infrastructure, with benefits from a maintenance point of view as well. Furthermore, thanks to AVS it is possible to resize the infrastructure dynamically, based on actual needs, ensuring greater operational efficiency.
People, processes and tools: AVS lets you leverage your existing investments in the skills and tools used to manage your on-premises VMware environments. To implement Disaster Recovery plans using Azure VMware Solution, it is possible to adopt native VMware solutions or third-party solutions. In fact, Microsoft, in order to guarantee its customers the opportunity to make the most of the investments made in skills and technologies, has collaborated with some of the main partners in the sector to ensure integration and support. For more information on this, you can consult the article "Disaster recovery with Azure VMware Solution – Cloud Community".
Conclusions
Azure VMware Solution represents an ideal solution for addressing Disaster Recovery (DR) scenarios in enterprise organizations, thanks to its flexibility, scalability and reliability. Using this solution, companies can create environments in Azure that are compatible and integrated with their on-premises VMware infrastructure, ensuring business continuity and recovery in the event of a disaster. Furthermore, the solution allows you to simplify and automate DR management, reducing costs and increasing recovery speed. Therefore, if you are looking for a way to implement efficient and effective DR plans, Azure VMware Solution is definitely a solution to consider.