Category Archives: Datacenter Management

Azure Management services: what's new in January 2023

The new year started with several announcements from Microsoft about Azure management services. This summary, released monthly, provides an overall view of the main news of the month, so you can stay up to date on these topics and have the references needed for further exploration.

The following diagram shows the different areas related to management, which are covered in this series of articles:

Figure 1 – Management services in Azure overview

Monitor

Azure Monitor

Certification of the IT Service Management Connector (ITSMC) with the ServiceNow Tokyo version (preview)

The IT Service Management Connector (ITSMC) is certified on the Tokyo version of ServiceNow. This connector provides a two-way connection between Azure Monitor and ServiceNow, helping you track and fix problems faster.

Govern

Azure Cost Management

Management of billing accounts for EA customers

For “indirect” Enterprise Agreement (EA) customers, the ability to manage billing accounts directly from Cost Management and Billing has been introduced. All relevant information about departments, accounts and subscriptions is available directly from the Azure portal. From the same place it is also possible to view the properties and manage the policies of indirect EA enrollments.

Updates related to Microsoft Cost Management

Microsoft is constantly looking for new ways to improve Microsoft Cost Management, the solution that provides greater visibility into where costs are accumulating in the cloud, identifies and prevents erroneous spending patterns, and optimizes costs. In this article, some of the latest improvements and updates to this solution are reported.

Azure Arc

Active Directory Connector for Arc-enabled SQL MI

Azure Arc-enabled data services introduced Active Directory (AD) support for Identity and Access Management (IAM). Indeed, an Arc-enabled SQL Managed Instance can use an existing on-premises Active Directory (AD) domain for authentication. To facilitate this, Azure Arc-enabled data services introduce a new Kubernetes-native Custom Resource Definition (CRD) called Active Directory Connector. It provides Azure Arc-enabled SQL Managed Instances running on the same data controller with the ability to perform Active Directory authentication.

View SQL Server databases using Azure Arc (preview)

Today, customers and partners manage a large number of databases. For each of these databases, it is essential to be able to create an accurate mapping of the configurations. This may be for inventory or reporting purposes. Centralizing database inventory in Azure using Azure Arc allows you to create a unified view of all your databases in one place, regardless of the infrastructure in which they are located: in Azure, in the data center, at edge sites or even other clouds.

Secure

Microsoft Defender for Cloud

New features, bug fixes and deprecated features of Microsoft Defender for Cloud

Microsoft Defender for Cloud development is constantly evolving and improvements are being made on an ongoing basis. To stay up to date on the latest developments, Microsoft updates this page, which provides information about new features, bug fixes and deprecated features. In particular, this month the main news concerns:

  • the endpoint protection component (Microsoft Defender for Endpoint) is now accessible on the Settings and monitoring page;
  • new version of the recommendation to find missing system updates;
  • cleanup of deleted Azure Arc machines in linked AWS and GCP accounts.

Protect

Azure Backup

Updates and improvements regarding SAP HANA

The following updates and improvements have recently been made to Azure Backup for SAP HANA, the Backint-certified solution for protecting SAP HANA databases residing in Azure virtual machines:

  • Long-term retention for ad-hoc backups: it is now possible to set customized retention for backups that are taken on demand, outside the scheduled policies.
  • Partial restore-as-files: Azure Backup for HANA allows recovery points to be restored as files. If you have downloaded the entire chain for one recovery point and want to repeat the operation for an adjacent recovery point, you don't need to download the entire chain again. It is also possible to restore only the files you want.
  • Integration with native clients and with other tools: previously, for certain scenarios, it was necessary to disable Backint before the operation and re-enable it afterwards, thereby increasing the RPO. With the improvements introduced, these additional steps are no longer necessary and it is sufficient to trigger the operations from the native clients or from the other tools used.
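
The chain-reuse behavior described in the partial restore-as-files point can be sketched as follows. This is a purely illustrative example: the file names and chain layout are hypothetical and do not reflect the actual Azure Backup format.

```python
# Illustrative sketch (not the Azure Backup API): reusing an already
# downloaded backup chain when restoring an adjacent recovery point.
# File names and chain layout are hypothetical.

def files_to_download(chain_for_target, already_downloaded):
    """Return only the chain files not yet present locally."""
    have = set(already_downloaded)
    return [f for f in chain_for_target if f not in have]

# Chain for recovery point A (full backup + logs), downloaded earlier.
chain_a = ["full_0001.bak", "log_0002.bak", "log_0003.bak"]
# Adjacent recovery point B shares the full backup and most log files.
chain_b = ["full_0001.bak", "log_0002.bak", "log_0003.bak", "log_0004.bak"]

print(files_to_download(chain_b, chain_a))  # only the new log file
```

In this sketch only `log_0004.bak` needs to be fetched, which is the saving the announcement refers to.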

Azure Site Recovery

Ability to use Azure Backup Center to monitor ASR

Azure Backup Center is the point of reference for those who use the native backup features of the Azure platform, allowing them to govern, monitor, manage and analyze backups. Microsoft has extended its capabilities to include monitoring for Azure Site Recovery, which allows:

  • Viewing the inventory of replicated items, from a single view, for all vaults.
  • Consulting all replication jobs through a single dashboard.

Azure Backup Center supports ASR replication scenarios involving Azure virtual machines, VMware and physical machines.

Migrate

Azure Migrate

New Azure Migrate releases and features

Azure Migrate is the service in Azure that includes a large portfolio of tools you can use, through a guided experience, to effectively address the most common migration scenarios. To stay up to date on the latest developments in the solution, please consult this page, which provides information about new releases and features. In particular, this month the main news concerns:

  • The ability to plan savings with the Azure Savings Plan for compute option in the Azure Migrate business case and assessment.
  • Support for exporting the business case report to an .xlsx workbook from the portal.

Evaluation of Azure

To evaluate and test the services provided by Azure for free, you can access this page.

Azure IaaS and Azure Stack: announcements and updates (January 2023 – Weeks: 03 and 04)

This series of blog posts includes the most important announcements and major updates regarding Azure infrastructure as a service (IaaS) and Azure Stack, made official by Microsoft in the last two weeks.

Azure

Compute

Classic VM retirement: extending retirement date to September 1st 2023

Microsoft is providing an extended migration period for IaaS VMs from Azure Service Manager to Azure Resource Manager. To avoid service disruption, plan and migrate your IaaS VMs from Azure Service Manager to Azure Resource Manager by September 1, 2023. There are multiple steps to this transition, so we recommend that you plan your migration promptly to avoid potential system interruption.

Networking

Application security groups support for private endpoints

Private endpoint support for application security groups (ASGs) is now available. This feature enhancement will allow you to add granular controls on top of existing network security group (NSG) rules by attaching an ASG to the private endpoint network interface. This will increase segregation within your subnets without losing security rules. In order to leverage this feature, you will need to set a specific subnet level property, called PrivateEndpointNetworkPolicies, to enabled on the subnet containing private endpoint resources.
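
For reference, the subnet-level switch mentioned above can be represented in an ARM-style fragment like the following sketch. The subnet name and address prefix are hypothetical; only the property name and its value come from the announcement.

```python
import json

# Illustrative ARM-style subnet fragment. The subnet name and address
# prefix are hypothetical; privateEndpointNetworkPolicies is the
# subnet-level property that must be set to "Enabled" for NSG (and
# therefore ASG-based) rules to apply to private endpoints in the subnet.
subnet = {
    "name": "snet-private-endpoints",
    "properties": {
        "addressPrefix": "10.0.1.0/24",
        "privateEndpointNetworkPolicies": "Enabled",
    },
}
print(json.dumps(subnet, indent=2))
```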

Storage

5 GB Put Blob

Azure Storage is announcing the general availability of 5 GB Put Blob. This allows you to upload nearly 20x the previous limit of Put Blob uploads while increasing the maximum size of Put Blob from 256 MiB to 5000 MiB.
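
The new limit can be made concrete with a small sizing sketch. The threshold values come from the announcement above; the helper function itself is illustrative, not part of any SDK.

```python
# Sizing sketch based on the limits stated above: a blob up to 5000 MiB
# can now be written with a single Put Blob call; anything larger still
# needs Put Block / Put Block List. The helper is illustrative only.
OLD_LIMIT_MIB = 256
NEW_LIMIT_MIB = 5000

def upload_strategy(size_mib: float) -> str:
    if size_mib <= NEW_LIMIT_MIB:
        return "single Put Blob"
    return "Put Block + Put Block List"

print(NEW_LIMIT_MIB / OLD_LIMIT_MIB)  # ~19.5x, the "nearly 20x" in the text
print(upload_strategy(1024))          # fits in one call under the new limit
print(upload_strategy(8000))          # still requires block upload
```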

Mount Azure Storage as a local share in App Service Windows Code

Mounting Azure Storage File share as a network share in Windows code (non-container) in App Service is now available.

Incremental snapshots for Ultra Disk Storage (preview)

The preview of incremental snapshots for Ultra Disks is available in the Sweden Central and West US 3 Azure regions. This new capability is particularly important to customers who want to create a backup copy of the data stored on their disks in order to recover from accidental deletions, to have a last line of defense against ransomware attacks, or to ensure business continuity. Incremental snapshots of Ultra Disks are stored on standard HDD storage, regardless of the disk type. Additionally, snapshot resources can be used to store incremental backups of your disk, create or restore to new disks, or download snapshots to on-premises locations.

Azure Stack

Azure Stack HCI

Software Defined Networking (SDN) with WAC v2211

This article covers all the new features and improvements for SDN in Windows Admin Center (WAC) 2211 for Azure Stack HCI.

The calculation of the energy consumption and environmental impact of Microsoft's public cloud

After the Paris Agreement, with increased attention on climate change and measures taken by governments to reduce carbon emissions, the environmental impact of IT systems is increasingly in the spotlight. Several studies have shown that the cloud also offers significant benefits in terms of sustainability and provides companies with the possibility of reducing the environmental impact of IT services, thus contributing to a more sustainable future. To evaluate the real impact, it is advisable to apply measurements and controls. This article describes the methodology designed to calculate the carbon emissions associated with the use of Microsoft Azure resources.

Microsoft provides tools to monitor and manage the environmental impact of carbon emissions, based on the methodology described in this article, which is constantly evolving and improving. These tools, specific to the Azure cloud, allow you to:

  • Get the visibility you need to promote sustainability, taking into account both carbon emissions and energy use.
  • Simplify data collection and emissions calculations.
  • More efficiently analyze and report a company's environmental impact and progress in terms of sustainability.

This methodology used by Microsoft is constantly updated to include science-based approaches as they become available and relevant for assessing the carbon emissions associated with the Azure cloud.

Standards used for calculation

Microsoft divides its greenhouse gas (GHG) emissions into three categories (scopes), following the Greenhouse Gas Protocol, a globally recognized standard for calculating and reporting greenhouse gas emissions.

Scope 1: direct emissions – emissions deriving from combustion and industrial processes

Greenhouse gas emissions in this category include emissions from the combustion of diesel and emissions from the use of refrigerants for cooling data centers.

Scope 2: indirect emissions – emissions resulting from electricity consumption, heat or steam

Greenhouse gas emissions in this category include emissions from the consumption of electricity used to power Microsoft data centers.

Scope 3: other indirect emissions – the emissions generated during the production phase and at the end of the product life cycle

Greenhouse gas emissions include emissions from the extraction of raw materials, from component assembly and end-of-life management of hardware devices (for example: recycling, landfill or compost), such as servers and network equipment, used in Microsoft data centers.

Figure 1 – Examples of Scope 1, 2 and 3 carbon emission types in the Microsoft cloud

In this context, it should be borne in mind that in 2020 Microsoft reaffirmed its commitment to integrating sustainability into all of its businesses. Indeed, it announced an ambitious goal and plan to reduce and ultimately eliminate its carbon emissions. Under this plan, Microsoft has set itself the goal of becoming carbon negative by 2030, and is adopting various strategies to reduce its carbon emissions, including purchasing renewable energy, optimizing the energy efficiency of its data centers and supporting the transition to a low-carbon economy.

Reference standards

Microsoft also bases its calculation methodology on ISO standards that are widely accepted in the industry:

  • Carbon emissions related to materials are based on ISO standard 14067:2018 (Greenhouse gases – Product carbon footprint – Quantification requirements and guidelines).
  • Operational emissions are based on ISO standard 14064-1:2006 (Greenhouse gases – Part 1: Organization-wide specifications and guidelines for quantifying and reporting GHG emissions and removals).
  • Verification and validation are based on the ISO standard 14064-3:2006 (Greenhouse gases – Part 3: Specifications with guidance on validating and verifying greenhouse gas claims).

Calculation methodologies

Scope 1 and 2

Greenhouse gas emissions related to the use of electricity for Scopes 1 and 2 are usually divided into categories such as Storage, Compute and Network, and are quantified based on the usage time of each category. The methodology used to calculate Scope 1 and 2 emissions is based on a lifecycle analysis from a Microsoft study, available at this address. For Scope 2, this methodology calculates the energy impact and carbon emissions for each specific data center, considering factors such as data center and server efficiency, emission factors, renewable energy purchases and infrastructure energy usage over time.
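
As a purely illustrative sketch of the Scope 2 logic just described (all factors and figures below are hypothetical, not Microsoft's actual values): the energy drawn by a workload is scaled by the data center overhead (PUE), multiplied by the grid emission factor, with renewable energy purchases netted out.

```python
# Purely illustrative — not Microsoft's actual factors or formula.
def scope2_emissions_kg(it_energy_kwh, pue, grid_factor_kg_per_kwh, renewable_share):
    total_energy = it_energy_kwh * pue                   # include cooling/infrastructure overhead
    grid_energy = total_energy * (1 - renewable_share)   # energy not covered by renewable purchases
    return grid_energy * grid_factor_kg_per_kwh

# Hypothetical workload: 1000 kWh of server energy in a PUE 1.2 facility,
# grid factor 0.4 kg CO2e/kWh, 60% covered by renewable purchases.
print(scope2_emissions_kg(1000, 1.2, 0.4, 0.6))  # approximately 192 kg CO2e
```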

Scope 3

The calculation of Scope 3 emissions is summarized in the following figure:

Figure 2 – Methodology for calculating Scope 3 emissions

The process starts with an assessment of the life cycle of the materials used in the data center infrastructure, from which the related carbon emissions are calculated. This total is then apportioned based on each customer's usage of each data center.

For Scope 3 emissions, this methodology calculates the energy and carbon footprint of each data center over time, taking into consideration the following:

  • The most common materials used for the construction of the IT infrastructure used in data centers.
  • The main components that make up the cloud infrastructure.
  • The complete list of all assets in Microsoft data centers.
  • Carbon factors for cloud infrastructure at all stages of the lifecycle (extraction of raw materials, component assembly, use and disposal at the end of the life cycle).
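
The allocation step described above (embodied carbon apportioned by customer usage share) can be sketched with hypothetical figures as follows:

```python
# Illustrative sketch of the Scope 3 allocation step: the total embodied
# carbon of a data center is apportioned to customers in proportion to
# their usage share. All figures are hypothetical.
def allocate_scope3(total_embodied_mtco2e, usage_by_customer):
    total_usage = sum(usage_by_customer.values())
    return {customer: total_embodied_mtco2e * usage / total_usage
            for customer, usage in usage_by_customer.items()}

usage = {"customer-a": 300, "customer-b": 100}  # e.g. VM-hours in the period
print(allocate_scope3(10.0, usage))  # {'customer-a': 7.5, 'customer-b': 2.5}
```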

Validation of the Microsoft methodology for scope 3 is published at this link.

Common definitions

This section contains definitions of the most frequently used terms relating to the impact of emissions:

  • mtCO2e: the unit of measurement (metric tons of CO2 equivalent) used to express the impact of greenhouse gas emissions on the global greenhouse effect. It takes into account not only carbon dioxide (CO2) emissions, but also other greenhouse gases such as methane (CH4), nitrous oxide (N2O) and fluorinated gases (F-gases). mtCO2e is used to measure global greenhouse gas emissions and to set emission reduction targets.
  • Carbon emissions (mtCO2e) from Azure: the amount of greenhouse gases, mainly carbon dioxide (CO2), emitted into the atmosphere due to the use of Microsoft Azure cloud computing services. This value includes Microsoft's Scopes 1, 2 and 3.
  • Carbon intensity (mtCO2e/usage): the carbon intensity index provides a ratio between carbon dioxide emissions and another variable. For Green SKUs, this is the total carbon dioxide equivalent emissions per hour of use, measured in mtCO2e/hour. The purpose of this index is to provide visibility into the carbon emissions related to the use of Azure services.
  • Carbon emissions expected at the end of the year (mtCO2e): Projected end-of-year cumulative carbon emissions allocation based on current year's cloud resource usage projection and previous year's trends.
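
The last definition can be illustrated with a simple linear projection. This is a simplification: the actual calculation also weighs in the previous year's trends, and the figures below are hypothetical.

```python
# Minimal sketch of a year-end emissions projection from year-to-date
# data. Simplified: linear extrapolation only; figures are hypothetical.
def projected_year_end_mtco2e(ytd_emissions, months_elapsed):
    monthly_rate = ytd_emissions / months_elapsed
    return monthly_rate * 12

print(projected_year_end_mtco2e(4.5, 6))  # 9.0 mtCO2e projected for the year
```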

Conclusions

To identify the benefits to the IT environment of deploying applications on Azure, it is important to educate customers about the environmental impact of their IT assets and provide them with the tools to govern that impact. This must be done with the intention of improving, setting specific and realistic sustainability objectives. Such an approach benefits both the business and society.

Azure IaaS and Azure Stack: announcements and updates (January 2023 – Weeks: 01 and 02)

This series of blog posts includes the most important announcements and major updates regarding Azure infrastructure as a service (IaaS) and Azure Stack, made official by Microsoft in the last two weeks.

Azure

Storage

Azure Ultra Disk Storage in Switzerland North and Korea South

Azure Ultra Disk Storage is now available in one zone in Switzerland North and with Regional VMs in Korea South. Azure Ultra Disk Storage offers high throughput, high IOPS, and consistent low latency disk storage for Azure Virtual Machines (VMs). Ultra Disk Storage is well-suited for data-intensive workloads such as SAP HANA, top-tier databases, and transaction-heavy workloads.

Azure Active Directory authentication for exporting and importing Managed Disks

Azure already supports locking disk import and export so that they are allowed only from a trusted Azure Virtual Network (VNet) using Azure Private Link. For greater security, integration with Azure Active Directory (AD) for exporting and importing data to Azure Managed Disks is now available. This feature enables the system to validate the identity of the requesting user in Azure AD and verify that the user has the permissions required to export and import that disk.

Migrating to Azure: from motivations to a successful business case

Moving to the cloud can definitely lead to cost savings, more effective use of resources and improved performance. However, the question to ask before tackling a migration path is: “why move to the cloud?”. The answer is not trivial and often coincides with: “our board of directors (or the CIO) told us to move to the cloud”. A response of this type should set off an alarm bell, as it could make the expected results more difficult to achieve. This article discusses some of the motivations behind migrating to the cloud that can help drive more successful business outcomes, and which elements and tools to consider to support building a complete business case.

Motivations for moving to the cloud

Different motivations can drive the business transformations supported by cloud adoption. To help generate ideas about which motivations may be relevant, the following table groups them into the main classifications:

Critical Business Events:

  • Data center exit
  • Mergers, acquisitions or divestments
  • Reduction of capital expenses
  • End of support for mission-critical technologies
  • Responding to regulatory compliance changes
  • New data sovereignty requirements
  • Reducing outages and improving the stability of your IT environment
  • Reporting and managing the environmental impact of your business

Migration:

  • Cost savings
  • Reduction of technical or vendor complexity
  • Optimization of internal operations
  • Increased business agility
  • Preparation for new technical capabilities
  • Scalability to meet market demands
  • Scalability to meet geographic needs
  • Integration of a complex IT portfolio

Innovation:

  • Preparation for new technical capabilities
  • Creation of new technical capabilities
  • Scalability to meet market demands
  • Scalability to meet geographic needs
  • Improved customer experience and engagement
  • Transformation of products or services
  • Market disruption with new products or services
  • Democratization and/or self-service environments

Table 1 – Top reasons for adopting the cloud

It is likely that several motivations for cloud adoption will apply at the same time and fall into different classifications.

To guide the development of your cloud adoption strategy, it is recommended to identify the predominant classification among critical business events, migration and innovation. These motivations must then be shared and discussed with stakeholders, corporate executives and leaders. In this way it is possible to favor the successful adoption of cloud solutions within the company.

How to accelerate migration

Migration is often the first step that leads to the adoption of cloud solutions. In this regard it is useful to follow the "Migrate" methodology defined in the Cloud Adoption Framework, which outlines the strategy for performing a cloud migration.

This guide, after aligning stakeholders on motivations and expected business outcomes, advises clients to establish the right partnerships to get the necessary support throughout the entire migration process.

The next step involves data collection and an analysis of the assets and workloads to be migrated. This step must lead to the development of a business case for cloud migration, with the aim of ensuring that all stakeholders are aligned on a simple question: “based on the available data, is cloud adoption a wise business decision?”.

If so, you can continue with the next steps detailed in the guide, which include:

  • Creating a migration plan
  • The preparation of the necessary skills
  • The activation and configuration of the Landing Zone
  • The migration of the first workloads to Azure
  • The implementation of cloud governance and operations management

Creating Business Cases: key elements, tools and calculators

A business case provides an overall view of the technical and financial timeline of the analyzed environment. Developing a business case must include creating a financial plan that takes the technical aspects into account and is in line with business outcomes.

There are several key components to consider when building a business case, including:

  • Scope of the environment
  • Basic financial data
  • On-premises cost scenario: a forecast of what on-premises costs will be if you don't migrate to the cloud.
  • Azure cost scenario: cost forecast in case of cloud migration.
  • Migration Timeline

A business case is not just a snapshot in time; it must be a plan covering a defined time period. As a last step, it is useful to compare the cloud environment with the on-premises scenario or with the status quo, so you can evaluate the actual benefits of migrating to the cloud.

To support the preparation of a business case for cloud migration, it is useful to rely on tools and calculators. Microsoft provides several, described in the following paragraphs.

Azure Migrate

Azure Migrate is the service in Azure that includes a large portfolio of tools you can use, through a guided experience, to effectively address the most common migration scenarios.

Azure Migrate recently introduced a business case feature that helps build proposals to understand how Azure can drive the most value. Indeed, this solution allows you to evaluate the return on investment of migrating to Azure server systems, SQL Server deployments and ASP.NET web applications running in a VMware environment. The business case can be created easily and provides useful elements to evaluate:

  • Your on-premises total cost of ownership compared to Azure.
  • Information based on resource usage, to identify ideal servers and workloads for the cloud and recommendations for right sizing in Azure.
  • The benefits for migration and modernization, including the end of support for Windows and SQL versions.
  • The long-term savings of moving from a capital expenditure model to an operating expenditure model, paying only for what you use.

Azure Total Cost of Ownership (TCO) Calculator

The Azure TCO calculator can be used to estimate the cost savings that can be achieved by migrating workloads to Azure. By entering the details of the on-premises infrastructure (servers, databases, storage and networking, as well as licensing assumptions and costs), the calculator matches them to Azure services and shows a high-level TCO comparison. However, the results of the Azure TCO calculator should be interpreted carefully: further optimization measures can be taken once on Azure, so the comparison may not be exhaustive.
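
The comparison such calculators perform can be reduced to a minimal sketch like the following. The figures are hypothetical and deliberately simplified: the real calculator models many more cost components (hardware refresh, licensing, power, labor, and so on).

```python
# Deliberately simplified TCO comparison sketch; all figures hypothetical.
def tco_comparison(onprem_annual, azure_annual, years):
    onprem_total = onprem_annual * years
    azure_total = azure_annual * years
    return {
        "on_premises": onprem_total,
        "azure": azure_total,
        "savings": onprem_total - azure_total,
    }

# Hypothetical: 120k/year on-premises vs 90k/year on Azure over 3 years.
print(tco_comparison(120_000, 90_000, 3))
```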

Azure Pricing Calculator

The Azure Pricing Calculator can be used to estimate monthly costs for Azure solutions.

Azure VM cost estimator

This is a Power BI model that helps you estimate the savings achievable on Azure, compared to the pay-as-you-go rate, by adopting Azure offers and benefits for virtual machines such as Azure Hybrid Benefit and reserved instances.

Conclusions

Identifying the motivations, conducting an assessment and building a business case are essential elements to build a functional cloud adoption strategy and to adopt a successful migration plan.

Azure IaaS and Azure Stack: announcements and updates (December 2022 – Weeks: 51 and 52)

This series of blog posts includes the most important announcements and major updates regarding Azure infrastructure as a service (IaaS) and Azure Stack, made official by Microsoft in the last two weeks.

During these two weeks of holidays, there was no notable news related to these areas.

We look forward to 2023 for lots of news!

I wish everyone a happy 2023!

Azure Management services: what's new in December 2022

In December, Microsoft announced several pieces of news regarding Azure management services. This summary, released monthly, aims to provide an overview of the main news of the month, so you can stay updated on these topics and have the references needed for further investigation.

The following diagram shows the different areas related to management, which are covered in this series of articles:

Figure 1 – Management services in Azure overview

Monitor

Azure Monitor

Azure Monitor Agent: IIS logs and custom logs

The Azure Monitor agent allows you to collect text files and IIS logs and route them to a Log Analytics workspace. In this regard, a new feature has been introduced that allows collecting text logs generated in the application environment, exactly as happens for Internet Information Services (IIS) logs.

Azure Monitor Logs: custom log API and ingestion-time transformation

A new set of features is now available in Azure Monitor that allows you to fully customize the shape of the data that flows into your workspace, plus a new API for ingesting custom data. Thanks to these new features, it is possible to apply custom transformations to the data at ingestion time. These transformations can be used to extract fields during ingestion, obfuscate sensitive data, remove unnecessary fields, or drop entire events (useful, for example, to contain costs). Furthermore, it is possible to completely customize the data sent to the new custom logs API. As well as being able to specify a transformation on the data sent to the new API, you can also explicitly define the schema of your custom table (including dynamic data structures) and leverage AAD authentication and ARM RBAC management.
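
The transformations themselves are written in KQL inside a data collection rule; the following Python sketch only mimics their effect on a single record, to make the three operations (drop events, extract fields, remove fields) concrete. The field names are hypothetical.

```python
import re

# Illustrative only: mimics the effect of an ingestion-time transformation
# on one record. Real transformations are KQL in a data collection rule.
def transform(record):
    # Drop entire events (e.g. verbose entries, to contain costs).
    if record.get("Level") == "Verbose":
        return None
    # Extract a field from the raw message during ingestion.
    m = re.search(r"user=(\w+)", record.get("RawData", ""))
    if m:
        record["UserName"] = m.group(1)
    # Remove a sensitive/unnecessary field (obfuscation would be similar).
    record.pop("ClientIp", None)
    return record

rec = {"Level": "Info", "RawData": "login user=alice ok", "ClientIp": "10.0.0.5"}
print(transform(rec))  # UserName extracted, ClientIp removed
```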

Configure

Azure Automation

Extension for the Hybrid Runbook Worker

The User Hybrid Worker extension was announced in Azure Automation; it is based on the virtual machine extensions framework and offers an integrated installation experience. There is no dependency on the Log Analytics agent and workspace, and authentication is via system-assigned managed identities, eliminating the need to manage certificates. Furthermore, it ensures automatic minor-version upgrades by default and simplifies at-scale management of Hybrid Workers through the Azure portal, PowerShell cmdlets, Azure CLI, Bicep, ARM templates and the REST API.

Govern

Azure Cost Management

Use tag inheritance for cost management (preview)

Tag inheritance was announced in public preview; it allows you to automatically apply subscription and resource group tags to child resources. This mechanism simplifies cost management pipelines.
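
The inheritance rule can be sketched as follows, under the assumption (configurable in Cost Management) that a tag set directly on a resource wins over an inherited subscription or resource group tag:

```python
# Sketch of tag inheritance precedence. Assumption: resource tags win
# over inherited resource group tags, which win over subscription tags.
def effective_tags(subscription_tags, resource_group_tags, resource_tags):
    merged = dict(subscription_tags)    # lowest precedence
    merged.update(resource_group_tags)  # resource group overrides subscription
    merged.update(resource_tags)        # tags on the resource itself win
    return merged

print(effective_tags(
    {"costCenter": "1000"},
    {"env": "prod"},
    {"costCenter": "2000"},
))  # {'costCenter': '2000', 'env': 'prod'}
```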

Updates related to Microsoft Cost Management

Microsoft is constantly looking for new ways to improve Microsoft Cost Management, the solution that provides greater visibility into where costs are accumulating in the cloud, identifies and prevents erroneous spending patterns, and optimizes costs. In this article, the main improvements and updates to this solution for 2022 are reported.

Azure Arc

Azure Arc enabled Azure Container Apps (preview)

Azure Container Apps enables developers to quickly build and deploy microservices and containerized applications. By deploying an Arc extension on an Azure Arc-enabled Kubernetes cluster, IT administrators gain control of the underlying hardware and environment, enabling high productivity with Azure PaaS services in a hybrid environment. The cluster can be on-premises or hosted in a third-party cloud. This approach allows developers to leverage the functionality and productivity of Azure Container Apps anywhere, not only in the Azure environment, while IT administrators can maintain corporate compliance by hosting applications in hybrid environments.

Azure Arc-enabled servers in Azure China

Azure Arc-enabled servers are now also operable in two regions of Azure China: China East 2 and China North 2.

Secure

Microsoft Defender for Cloud

New features, bug fixes and deprecated features of Microsoft Defender for Cloud

Microsoft Defender for Cloud development is constantly evolving and improvements are being made on an ongoing basis. To stay up to date on the latest developments, Microsoft updates this page, which provides information about new features, bug fixes and deprecated features.

Protect

Azure Backup

Cross-zonal recovery of Azure virtual machines

Azure Backup exploits the potential of Zone-Redundant Storage (ZRS), which synchronously stores three replicas of backup data across different availability zones. This allows recovery points stored in a Recovery Services vault with ZRS storage to be used even if the backup data in one of the availability zones is unavailable, ensuring data availability within a region.

The Cross Zonal Restore option can be considered when:

  • Zone-wide availability of backup data is critical, and backup data downtime is unacceptable. This allows you to restore Azure virtual machines and disks to any zone of your choice in the same region.
  • Backup data resilience is needed along with data residency.

Azure Kubernetes Service (AKS) Backup (private preview)

For the Azure Backup service, the private preview of AKS Backup was announced. Using this feature it is possible to:

  • Back up and restore containerized applications, both stateless and stateful, running on AKS clusters
  • Back up and restore data stored on persistent volumes attached to clusters.
  • Perform backup orchestration and management from the Backup Center.

Azure Site Recovery

Increased churn limit (preview)

Azure Site Recovery (ASR) increased the data churn limit by approximately 2.5 times, bringing it to 50 MB/s per disk. This way you can configure disaster recovery (DR) for Azure VMs with a data churn of up to 100 MB/s per VM, enabling DR for I/O-intensive workloads. This feature is only available for Azure-to-Azure replication scenarios.
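
The limits stated above can be summarized in a small sketch (the helper function is illustrative, not part of any SDK):

```python
# Sketch of the churn limits stated above: 50 MB/s per disk (preview)
# and 100 MB/s aggregate per VM for Azure-to-Azure replication.
PER_DISK_LIMIT_MBPS = 50
PER_VM_LIMIT_MBPS = 100

def within_churn_limits(disk_churn_mbps):
    """Check a VM's per-disk churn rates against both limits."""
    return (all(c <= PER_DISK_LIMIT_MBPS for c in disk_churn_mbps)
            and sum(disk_churn_mbps) <= PER_VM_LIMIT_MBPS)

print(within_churn_limits([40, 45]))  # True: both disks under 50, total 85
print(within_churn_limits([60, 10]))  # False: one disk exceeds 50 MB/s
```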

New Update Rollup

Update Rollup 65 was released for Azure Site Recovery; it solves several issues and introduces some improvements. The details and the procedure to follow for the installation can be found in the specific KB.

Migrate

Azure Migrate

New Azure Migrate releases and features

Azure Migrate is the service in Azure that includes a large portfolio of tools you can use, through a guided experience, to effectively address the most common migration scenarios. To stay up to date on the latest developments in the solution, please consult this page, which provides information about new releases and features. The main news of this month is described in detail in the following paragraphs.

Software inventory and agentless dependency analysis

Azure Migrate agentless software inventory and dependency analysis are now available for Hyper-V VMs, bare-metal servers, and servers running on other public clouds such as AWS and GCP. It is therefore possible to inventory the applications, roles and features installed on those systems. Furthermore, you can run dependency analysis on discovered Windows and Linux servers without installing any agents. These features make it possible to build more effective migration plans to Azure by grouping interdependent servers together.

Building a business case with Azure Migrate (preview)

Azure Migrate's business case feature helps you build business propositions to understand how Azure can deliver the most value. It allows you to assess the return on investment of migrating server systems, SQL Server deployments and ASP.NET web applications running in a VMware environment to Azure. The business case can be created in just a few clicks and helps you understand:

  • Total cost of ownership on-premises vs Azure and annual cash flow.
  • Resource utilization-based insights to identify ideal servers and workloads for the cloud and recommendations for right sizing in Azure.
  • Benefits of migration and modernization, including the end of support for Windows and SQL Server versions.
  • Long-term savings by moving from a capital expenditure model to an operating expenditure model, paying only for what you use.

Evaluation of Azure

To test for free and evaluate the services provided by Azure you can access this page.

The cost model for Azure Stack HCI (12/2022)

Technologies from different vendors are available on the market for building hyper-converged infrastructures (HCI). In this space Microsoft offers an innovative solution called Azure Stack HCI, delivered as an Azure service, that provides high performance, advanced security features and native integration with various Azure services. This article describes how much you need to invest to adopt the Azure Stack HCI solution and which aspects to consider in order to structure the cost model as you prefer.

Premise: OPEX vs CAPEX

The term CAPEX (CAPital EXpenditure) refers to the cost of developing or acquiring durable assets for a product or system.

Its counterpart, OPEX (OPerational EXpenditure), is the cost of operating a product, solution or system. These are also called O&M (Operation and Maintenance) costs.

CAPEX costs usually require an upfront budget and a spending plan. Partly for these reasons, companies generally prefer to incur OPEX costs, as they are easier to plan and manage.

Having clarified these concepts, let's now look at the Azure Stack HCI cost model and how to achieve a fully OPEX model.

Hardware costs

To activate the Azure Stack HCI solution, you need on-premises hardware to run the solution's dedicated operating system and the various workloads. There are two possibilities:

  • Azure Stack HCI Integrated Systems: vendors offer systems specifically built and integrated for this solution, providing an appliance-like experience. These offerings also include integrated support, provided jointly by the vendor and Microsoft.
  • Azure Stack HCI validated nodes: implementation uses hardware specifically tested and validated by a vendor. This lets you customize the hardware to your needs, configuring the processor, memory, storage and network adapter features, while respecting the vendor's compatibility matrices. Several hardware vendors offer solutions suitable for running Azure Stack HCI; they can be consulted at this link. Most implementations are done in this way.

Figure 1 - Hardware deployment scenarios

The hardware, too, can be acquired under a rental-based cost model. Major vendors such as HPE, Dell and Lenovo can provide the necessary hardware in "infrastructure as-a-service" mode, through a usage-based payment model.

Azure costs

Although it runs on-premises, Azure Stack HCI is billed through an Azure subscription, just like any other service in Microsoft's public cloud.

Azure Stack HCI offers a free trial period that allows you to evaluate the solution in detail. The trial lasts 60 days and starts when you complete the registration of the cluster environment in Azure.

At the end of the trial period, the model is simple: it costs €10 / physical core / month*. The cost is therefore determined by the total number of physical cores in the processors of the Azure Stack HCI cluster. This model has no minimum or maximum on the number of licensed physical cores, nor any limit on activation duration.
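The host fee is simple enough to compute directly. The sketch below assumes the €10 / physical core / month list price quoted above (West Europe estimate, subject to change); the function and its parameters are illustrative, not an Azure API.

```python
# Illustrative sketch: Azure Stack HCI host fee, assuming the
# €10 / physical core / month list price quoted above (subject to change).
PRICE_PER_CORE_EUR = 10

def hci_monthly_fee(nodes, cores_per_node, price_per_core=PRICE_PER_CORE_EUR):
    """Total monthly host fee for a cluster: every physical core is billed,
    with no minimum or maximum on the core count."""
    return nodes * cores_per_node * price_per_core

# Example: a 4-node cluster, each node with dual 16-core CPUs (32 cores/node).
print(hci_monthly_fee(4, 32))  # 4 * 32 * 10 = 1280 EUR/month
```

Because billing is purely per physical core, right-sizing the CPUs of validated nodes directly controls the recurring Azure fee.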

Financial benefits for customers with a Software Assurance agreement

Customers with Windows Server Datacenter licenses and active Software Assurance can activate Azure Hybrid Benefit for an Azure Stack HCI cluster. To activate this benefit, at no additional cost, you exchange one Windows Server Datacenter core license with Software Assurance for one Azure Stack HCI physical core. This reduces the Azure cost of the Azure Stack HCI host fee to zero and grants the right to run an unlimited number of Windows Server guest virtual machines on the Azure Stack HCI cluster.
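Since the benefit exchanges one Windows Server Datacenter core license (with Software Assurance) for one physical core, working out whether an existing license pool covers a cluster is simple arithmetic. The sketch below assumes the 1:1 exchange and the €10 / physical core / month rate mentioned above; the function itself is hypothetical, for illustration only.

```python
# Illustrative sketch: residual Azure Stack HCI host fee after applying
# Azure Hybrid Benefit (1 Datacenter core license with SA per physical core).
PRICE_PER_CORE_EUR = 10  # assumed list price, see above (subject to change)

def ahb_residual_fee(physical_cores, datacenter_core_licenses):
    """Monthly host fee for the physical cores not covered by
    exchanged Windows Server Datacenter core licenses."""
    uncovered = max(0, physical_cores - datacenter_core_licenses)
    return uncovered * PRICE_PER_CORE_EUR

print(ahb_residual_fee(128, 128))  # fully covered -> 0 EUR/month
print(ahb_residual_fee(128, 96))   # 32 uncovered cores -> 320 EUR/month
```

A pool of Datacenter core licenses at least equal to the cluster's physical core count therefore zeroes the host fee entirely.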

Furthermore, Azure Hybrid Benefit can also be activated for Azure Kubernetes Service (AKS). In this case, Windows Server Standard or Datacenter licenses with active Software Assurance are required, or a Cloud Solution Provider (CSP) subscription. Each Windows Server core license entitles you to use one AKS virtual core.

The following image summarizes how customers with Software Assurance can use Azure Hybrid Benefit to further reduce costs in the cloud, in on-premises datacenters and in branch offices.

Figure 2 – What is included in the Azure Hybrid Benefit for customers in Software Assurance

For customers with a Software Assurance agreement in particular, adopting Azure Stack HCI translates into a drastic reduction in the cost of modernizing the virtualization environment, making the solution even more cost-competitive compared with alternatives on the market. To review the licensing requirements in detail, refer to this document.

Costs for guest VMs

The Azure costs listed in the previous paragraph do not include the operating system costs for guest machines running in the Azure Stack HCI environment. This aspect is common to other HCI platforms as well, such as Nutanix and VMware vSAN.

The following image shows how the licensing of guest operating systems can take place:

Figure 3 – Licensing of guest operating systems

Costs for Windows Server virtual machines

There are mainly two options for licensing Windows Server guest machines in Azure Stack HCI:

  • Buy Windows Server licenses (CAPEX mode), Standard or Datacenter, which include the right to activate the OS of guest virtual machines. The Standard edition may be suitable when the number of Windows Server guests is limited; with many Windows Server guest systems, it is advisable to evaluate the Datacenter edition, which allows activating an unlimited number of virtualized Windows Server systems.
  • Pay for the Windows Server license for guest systems through your Azure subscription, just as in the Azure environment. This option incurs an OPEX cost of €22.40 / physical core / month* and allows you to activate an unlimited number of Windows Server guest systems in the Azure Stack HCI environment.

*Costs estimated for the West Europe region and subject to change. For more details on Azure Stack HCI costs, consult Microsoft's official page.
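A quick way to compare the two options is to accumulate the subscription rate over the planned lifetime and compare it with an upfront license purchase. The €22.40 / physical core / month figure is the rate quoted above; the CAPEX total in the example is a placeholder you would replace with your own licensing quote.

```python
# Illustrative sketch: OPEX subscription vs. CAPEX license purchase for
# Windows Server guests on Azure Stack HCI. Prices are assumptions from
# the article (West Europe estimate, subject to change).
OPEX_PER_CORE_MONTH_EUR = 22.4

def guest_os_opex_cost(physical_cores, months):
    """Cumulative cost of the per-core subscription option over a period."""
    return physical_cores * OPEX_PER_CORE_MONTH_EUR * months

def cheaper_option(physical_cores, months, capex_total_eur):
    """Name the cheaper option given a hypothetical upfront license price."""
    opex = guest_os_opex_cost(physical_cores, months)
    return "OPEX" if opex < capex_total_eur else "CAPEX"

# Example: 64 physical cores over 3 years vs. a hypothetical EUR 60,000 purchase.
print(round(guest_os_opex_cost(64, 36), 1))  # 64 * 22.4 * 36 = 51609.6 EUR
print(cheaper_option(64, 36, 60000.0))       # OPEX
```

The break-even point depends heavily on cluster lifetime: the longer the hardware stays in service, the more attractive the CAPEX purchase becomes.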

Charges for other workloads running on Azure Stack HCI

The goal of an Azure Stack HCI infrastructure is to run on-premises not only virtual machines, but also the same workloads available in Microsoft's public cloud. To achieve this, Microsoft is bringing the most popular Azure workloads to Azure Stack HCI, and the following cost considerations apply to each of them:

  • Azure Kubernetes Service: configuring an Arc-enabled Kubernetes (K8s) cluster is free**.
  • Azure Arc-enabled data services:
    • For SQL Server, customers can purchase SQL Server licenses in CAPEX mode or, if they already own SQL licenses, use Azure Hybrid Benefit for Azure Arc-enabled SQL Managed Instance without paying for the SQL license again.
    • If you prefer an OPEX model, you can obtain Microsoft SQL Server licenses through Azure Arc-enabled data services**.
  • Azure Virtual Desktop:
    • User access rights for Azure Virtual Desktop. The same licenses that grant access to Azure virtual desktops in the cloud also apply to Azure Virtual Desktop in Azure Stack HCI.
    • Azure Virtual Desktop Hybrid Service Fee. This fee is charged for each virtual CPU (vCPU) used by Azure Virtual Desktop session hosts running in Azure Stack HCI environment.

**For more details on Azure Arc costs you can consult this page.

Support costs

Azure Stack HCI, being a full-fledged Azure solution, is covered by Azure support with the following characteristics:

  • You can choose between several Azure support plans, depending on your needs. Basic support is free, but in certain scenarios it is advisable to consider at least Standard support, which has a fixed monthly cost.
  • Technical support is provided by a team of experts dedicated to supporting the Azure Stack HCI solution and can be easily requested directly from the Azure portal.

Conclusions

Azure Stack HCI allows you to bring cloud innovation into your datacenter while creating a strategic link to Azure. In the era of hybrid datacenters, a solution like Azure Stack HCI lets you structure the cost model as you prefer and gives you maximum flexibility. Several vendors on the market offer hybrid hyper-converged infrastructure (HCI) solutions, and Azure Stack HCI can be very competitive, not only in terms of functionality but also in terms of cost.

Azure IaaS and Azure Stack: announcements and updates (December 2022 – Weeks: 49 and 50)

This series of blog posts covers the most important announcements and major updates regarding Azure infrastructure as a service (IaaS) and Azure Stack, announced by Microsoft in the last two weeks.

Azure

Compute

Azure Dedicated Host: Restart

Azure Dedicated Host gives you more control over the hosts you have deployed by letting you restart any host. During a restart, the host and its associated VMs restart while staying on the same underlying physical hardware. With this new capability, now generally available, you can take troubleshooting steps at the host level.

New Memory Optimized VM sizes (preview)

The new E96bsv5 and E112ibsv5 VM sizes, part of the Azure Ebsv5 VM series, offer the highest remote storage performance of any Azure VM to date. The new VMs can achieve even higher VM-to-disk throughput and IOPS, with up to 8,000 MBps and 260,000 IOPS. This enables you to run data-intensive workloads more efficiently and process more data on fewer vCPUs, potentially optimizing infrastructure and licensing costs.
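When sizing against these limits, it helps to check a workload's aggregate disk demand against the VM-level caps. The 260,000 IOPS and 8,000 MBps figures are the VM limits quoted above; the function is an illustrative sketch that ignores per-disk limits, caching and bursting.

```python
# Illustrative sketch: does a workload fit under the VM-level remote storage
# caps announced for the new Ebsv5 sizes? (Per-disk limits, host caching and
# bursting are deliberately ignored here.)
VM_MAX_IOPS = 260_000
VM_MAX_THROUGHPUT_MBPS = 8_000

def fits_ebsv5_limits(required_iops, required_mbps):
    """True if the workload's aggregate disk demand fits under both VM caps."""
    return (required_iops <= VM_MAX_IOPS
            and required_mbps <= VM_MAX_THROUGHPUT_MBPS)

print(fits_ebsv5_limits(200_000, 6_000))  # True: under both caps
print(fits_ebsv5_limits(300_000, 4_000))  # False: IOPS above the VM cap
```

Note that the attached disks must also be provisioned with enough combined IOPS and throughput for the VM to actually reach these ceilings.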

Networking

Feature enhancements to Azure Web Application Firewall (WAF)

Azure’s global Web Application Firewall (WAF) running on Azure Front Door, and Azure’s regional WAF running on Application Gateway, now support additional features that help organizations improve their security posture and make it easier to manage logging across resources:

  • SQL injection (SQLi) and cross site scripting (XSS) detection queries: new Azure WAF analytics SQLi and XSS detection rule templates simplify the process of setting up automated detection and response with Microsoft’s security incident & event management (SIEM) service: Microsoft Sentinel.
  • Azure policies for WAF logging: the regional WAF on Application Gateway and the global WAF running on Azure Front Door now have built-in Azure policies requiring resource logs and metrics. This allows organizations to enforce standards for WAF deployments to collect logs and metrics for further analysis and insights related to security events.

In addition, Azure regional WAF on Application Gateway now has:

  • Increased exclusion limit: CRS 3.2 or greater ruleset now supports exclusions limit up to 200, a 5x increase from older versions; allowing for greater customization on how the WAF handles managed rulesets.
  • Bot Manager ruleset exclusion rules: exclusions are extended to Bot Manager Rule Set 1.0. Learn more: WAF exclusions.
  • Uppercase transform on custom rules: you can now handle case sensitivity when creating custom WAF rules using uppercase transform in addition to the lowercase transform.

Storage

Azure NetApp Files cross-zone replication (preview)

The cross-zone replication feature allows you to replicate your Azure NetApp Files volumes asynchronously from one Azure availability zone (AZ) to another in the same region. It combines the SnapMirror® technology used for cross-region replication with the new availability zone volume placement feature to replicate data in-region; only changed blocks are sent over the network in a compressed, efficient format. It helps you protect your data from unforeseeable zone failures, without the need for host-based data replication. This feature minimizes the amount of data that must be replicated across zones, limiting the required data transfers and shortening replication time, so you can achieve a smaller recovery point objective (RPO). Cross-zone replication does not involve any network transfer costs, and hence it is highly cost-effective.

How to simplify systems management with Azure Automanage

The adoption of cloud solutions has helped reduce operating expenses (OPEX) and management costs in numerous areas of IT. Many systems that previously ran on-premises and were complex to maintain are now simple managed services in the cloud. At the same time, though, running systems across different environments, together with the wide range of new Azure services, can make operational management complex. To better manage the various services and their configuration, Microsoft provides Azure Automanage, which, properly integrated with Azure Arc, allows you to automate various operations throughout the entire life cycle of your machines, regardless of where they reside. This article describes the characteristics of the solution, showing how Azure Automanage, together with Azure Arc, can ease the day-to-day tasks of system administrators and ensure optimal adherence to Microsoft best practices.

Simplify the configuration and management of systems wherever they reside

Azure Automanage automatically implements best practices in machine management while ensuring security compliance, corporate compliance and business continuity. Furthermore, Azure Arc for servers extends the governance and management capabilities of Azure to physical machines and virtual systems residing in environments other than Azure. For implementation guidelines, proven best practices and tools designed to accelerate your cloud adoption journey, refer to the Microsoft Cloud Adoption Framework.

Quickly configure Windows and Linux servers

By adopting this solution, you can detect, onboard and configure various Azure services throughout the entire life cycle of your machines, distinguishing between Production and DevTest environments. The Azure services automatically managed by Azure Automanage, and their specifications, are listed in this Microsoft documentation:

Figure 1 – Overview of services managed by Azure Automanage

Machines can be onboarded to the service at scale or individually, with the assurance that if systems drift from the required best practices, Azure Automanage will detect and correct them automatically.

The service can be activated directly from the Azure portal and requires a few simple steps.

The choice of configuration profiles

Azure Automanage uses configuration profiles to determine which Azure services should be enabled on the selected systems. Two configuration profiles are currently available by default, one for the DevTest environment and one for the Production environment; they differ in the types of services enabled on the different workloads. In addition to the standard profiles, custom profiles can be configured with a specific subset of preferences for the various services.

Once the Azure Automanage service is enabled, a process starts that brings the machines into line with the best practices specified in the chosen configuration profile.

After activation of the service, the VMs can be in one of several states, as described here.

Azure Automanage also recently introduced new profile customization options and more supported operating systems, including Windows 10/11, Red Hat Enterprise Linux, Canonical Ubuntu and SUSE Linux Enterprise Server.

Configure Windows and Linux servers in Azure, hybrid or multi-cloud environments through Azure Arc

Azure Automanage can be enabled on both Azure VMs and Azure Arc-enabled servers. Furthermore, Azure Automanage for Windows Server offers new features specific to Windows Server Azure Edition that improve the uptime of Windows Server VMs in Azure and Azure Stack HCI environments. These features include:

  • Hotpatch
  • SMB over QUIC
  • Azure Extended Networking

Advantages of the solution

Adopting Azure Automanage brings several advantages for the customer, which can be summarized as follows:

  • Reduced costs through automated machine management
  • Optimized workload uptime thanks to tasks performed in an optimized way
  • Control over the implementation of security best practices

Conclusions

Machine life cycle management, especially in heterogeneous and large environments, can be very expensive in terms of time and cost. Furthermore, frequently repeated activities are prone to errors, leading systems toward a non-optimal configuration. Thanks to the integration between Azure Automanage and Azure Arc, you can simplify and automate all the operations needed to ensure that systems adhere to the desired requirements.