Category Archives: Azure Governance

The new solution to manage and govern system updates

Common cybersecurity practice includes, among its many measures, the timely application of software updates. This activity is of fundamental importance in eliminating the vulnerabilities that enable specific cyber attacks against company systems. To simplify the application of operating system patches to the machines in your infrastructure, Microsoft recently announced the availability of a new solution called "Update management center". This article describes the characteristics and peculiarities of this solution, which helps simplify update management and achieve compliance on these security-related aspects.

What does this solution allow you to do?

Update management center is the new solution that helps to centrally manage and govern the updates of all the machines in your infrastructure. Using this solution it is possible to:

  • Check update compliance for your entire fleet of machines.
  • Instantly distribute critical updates to protect your systems or plan installation within a defined maintenance window.
  • Take advantage of the different patching options, like Automatic VM guest patching in Azure, hot patching, and maintenance schedules defined by the customer.

To date, Update management center is able to manage and govern updates on:

  • Windows and Linux operating systems.
  • Machines residing in Azure, on-premises, or on other cloud platforms, thanks to Azure Arc.

The following diagram illustrates how Update management center performs the evaluation and application of updates on all Azure systems and Arc-enabled servers, both Windows and Linux.

Figure 1 - Update management center overview

Update management center is based on a new Azure extension designed to provide all the features necessary to interact with the operating system for the assessment and application of updates. This extension is automatically installed the first time any Update management center operation runs. Distribution of the extension is supported on Azure virtual machines and on Arc-enabled servers, and it is installed and managed using:

  • The Windows agent or the Linux agent for Azure virtual machines.
  • The Azure Arc agent for non-Azure physical computers or servers (both Linux and Windows).

The installation and configuration of the extension is managed by the solution and no manual intervention is required, as long as the Azure VM agents or the agents for Azure Arc are functional. The Update management center extension runs code locally on the computer to interact with the operating system and allows you to:

  • Retrieve assessment information about the status of system updates, as reported by the Windows Update agent or by the Linux package manager*.
  • Start the download and installation of approved updates through the Windows Update client or the Linux package manager.
  • Get all the information on the results of installing updates, which are reported in Update management center by the extension and are available for analysis via Azure Resource Graph (a sample query follows the note below). Assessment data can be consulted for the last seven days, while the results of update installations are available for the last thirty days.

* The machines receive update notifications based on the source they are configured to synchronize with. The Windows Update Agent (WUA) on Windows machines can be configured to reference Windows Server Update Services (WSUS) or Microsoft Update. Linux machines can be configured to reference a local or public YUM or APT package repository.
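Since assessment and installation results are exposed through Azure Resource Graph, they can also be retrieved programmatically. The following is a minimal sketch using the Azure SDK for Python; the "patchassessmentresources" table name reflects the Update management center documentation at the time of writing and, like the placeholder subscription ID, should be treated as an assumption to verify in your own environment.

```python
# Minimal sketch: query Update management center assessment data from Azure Resource Graph.
# Assumptions: azure-identity and azure-mgmt-resourcegraph are installed, the identity used
# has Reader access, and the "patchassessmentresources" table name may change over time.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

query = """
patchassessmentresources
| where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults'
| project id, properties
"""

# Replace with your own subscription ID (placeholder).
request = QueryRequest(subscriptions=["<subscription-id>"], query=query)
response = client.resources(request)

for row in response.data:
    print(row)
```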

Benefits of the solution

Update management center works without any onboarding, as it is natively based on the Azure compute platform and Azure Arc-enabled servers. This solution will soon replace Azure Automation Update Management, removing any dependency on Azure Automation and Log Analytics.

The main strengths of the new solution are summarized in the following paragraphs.

Centralized visibility of updates

Thanks to this solution it is possible to check centrally, directly from the Azure portal, the compliance status of the updates required and deployed on the various systems.

Native integration and zero onboarding

Being a solution created as a native feature of the Azure platform, there is no dependency on Log Analytics and Azure Automation. Furthermore, the solution supports full integration with Azure Policy.

Integration with Azure roles and identities

The solution allows granular access control at the resource level. Everything is based on Azure Resource Manager, which allows the use of Azure RBAC roles.

High flexibility in managing updates

The ability to check for missing updates automatically or on demand, together with the ability to install updates immediately or schedule them for a later date, guarantees high flexibility. Furthermore, you can keep systems up to date by adopting new techniques, such as automatic VM guest patching in Azure and hotpatching.

Integration with other solutions

In this context it is worth noting that, in addition to this solution, Microsoft offers other features to manage updates for Azure virtual machines. These features should be considered an integral part of your overall update management strategy. Among them we find:

  • Automatic OS image upgrade
  • Automatic VM guest patching
  • Automatic extension upgrade
  • Hotpatch
  • Maintenance control
  • Scheduled events

To learn more about all these solutions, you can consult the official Microsoft documentation.

Conclusions

This new feature, fully integrated into the Azure platform and able to exploit the potential of Azure Arc, allows you to keep all the systems in your infrastructure up to date in a simple, direct way and with very little administrative effort. Furthermore, it guarantees full visibility into update compliance for both Windows and Linux systems, a fundamental element in increasing the security posture of your infrastructure.

Principles and techniques for governing and optimizing Azure costs

The main features and strengths of the cloud can, in some circumstances, hide pitfalls if not properly governed. Cost management is one of the fundamental disciplines of cloud governance. This article summarizes the principles and techniques that can be adopted to better manage and minimize the costs incurred for the resources created in the Azure environment.

Cloud cost optimization is certainly a topic felt strongly by many customers, to the point that, for the sixth consecutive year, it appears as the top cloud initiative in the Flexera report:

Figure 1 – Top cloud initiatives for the year 2022

Principles to better manage costs

To better manage Azure costs, it is important to take into consideration the principles described in the following paragraphs.

Design

Only through a structured design process, which includes a careful analysis of business requirements, can you tailor the use of solutions in the cloud environment. It is therefore important to define the infrastructure to be implemented and how it will be used, through a design process that maximizes the efficiency of the resources deployed in the Azure environment.

Visibility

Having tools that provide global visibility and send notifications about Azure costs is an important aspect to consider.

Responsibility

A good practice is to attribute the costs of cloud resources within your own organization, to ensure that the people responsible are aware of the costs attributable to their working group. This allows you to fully understand the organization's Azure spending. To do this, you should organize your Azure resources in a way that maximizes the understanding of cost allocation.

Optimization

Periodic review processes should act on Azure resources with the goal of reducing spending where possible. Thanks to the set of information available, it is possible to easily identify underutilized resources, remove waste and maximize cost savings opportunities.

Iteration

IT staff should be continuously involved in the iterative cost optimization processes of Azure resources, as it appears to be a key principle for a responsible governance process of the cloud environment.

Techniques to optimize costs

Regardless of the tools and solutions used, to optimize Azure costs you can adopt the following policies:

  • Turn off unused resources: the cost of the various Azure services is calculated based on resource usage. For resources that do not need to run continuously and that can be shut down or suspended without losing configurations or data, you can use automation that, based on a defined schedule, optimizes usage and consequently costs (a minimal sketch follows this list).
  • Size resources appropriately, consolidating workloads and addressing underutilized resources.
  • For resources in the Azure environment that run continuously, you can evaluate the activation of Reservations. Azure Reservations allow you to achieve cost savings of up to 72% compared to pay-as-you-go prices, simply by committing to one or three years of use of Azure resources. Payment can be made up front or monthly, at no additional cost. Reservations can be purchased directly from the Azure portal and are available to customers with the following subscription types: Enterprise Agreement, Pay-As-You-Go and Cloud Solution Provider (CSP). Azure Reserved Instances not only provide an important cost benefit, they also reserve computational capacity for your workloads. One might think that resources in Azure are nearly infinite, but in reality this is not the case: you can usually draw on new capacity, but in special situations (for example, during the initial period of COVID-19) it may be in short supply, and only by having purchased resources in this mode can you be sure that the capacity needed to run your workloads is reserved.
  • To reduce Azure costs, you can also evaluate the adoption of Azure Hybrid Benefit. The savings come from the fact that Microsoft allows you to pay only the cost of the Azure infrastructure, while the licensing for Windows Server or SQL Server is covered by an existing Software Assurance or subscription agreement. This benefit applies to both the Standard and Datacenter editions of Windows Server.
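As an illustration of the first point, the sketch below deallocates virtual machines carrying a specific tag, the kind of routine that could be scheduled outside working hours (for example with Azure Automation or a cron job). It is a minimal example using the Azure SDK for Python; the "autoshutdown" tag name and the subscription ID are assumptions chosen for the example, not an established convention.

```python
# Minimal sketch: deallocate all VMs carrying an "autoshutdown" tag to stop compute billing.
# Assumptions: azure-identity and azure-mgmt-compute are installed, the identity has rights to
# deallocate VMs, and the "autoshutdown" tag is a convention of your own.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for vm in client.virtual_machines.list_all():
    if (vm.tags or {}).get("autoshutdown") == "true":
        # Resource ID format: /subscriptions/<sub>/resourceGroups/<rg>/providers/...
        resource_group = vm.id.split("/")[4]
        print(f"Deallocating {vm.name} in {resource_group}...")
        # Deallocation releases the compute allocation, so the VM stops incurring compute costs.
        client.virtual_machines.begin_deallocate(resource_group, vm.name).result()
```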

Figure 2 – Cost structure for a Windows VM

The Azure Hybrid Benefit can also be used for Azure SQL Database, for SQL Server installed on Azure virtual machines and for SQL Managed Instance. These benefits facilitate migration to cloud solutions (180 days of dual-use rights) and help maximize the investments already made in SQL Server licenses. For more information on how to use the Azure Hybrid Benefit for SQL Server, you can consult the FAQ in this document. This benefit also applies to Red Hat and SUSE Linux subscriptions.

The Azure Hybrid Benefit can be used in conjunction with Azure Reserved VM Instances, allowing overall savings of up to 80% (when purchasing a 3-year Azure Reserved Instance).

Figure 3 – Percentages of savings by adopting RIs and Azure Hybrid Benefit

  • Evaluate the adoption of new serverless technologies and apply optimizations in the architectures by selecting the Azure service most suitable for the specific application.
  • Dynamically allocate and de-allocate resources to meet performance needs. This is known as "autoscaling", the process of dynamically allocating resources to meet performance requirements. As the volume of work increases, an application may require additional resources to maintain the desired performance levels and meet its SLAs (Service Level Agreements). When demand decreases and the additional resources are no longer needed, they can be deallocated to minimize costs. The autoscaling process takes advantage of the elasticity of cloud-hosted environments while reducing management overhead.
  • For test and development environments it is possible to evaluate the adoption of DevTest subscriptions, which allow you to get considerable discounts on Azure rates. These subscriptions can be activated as part of an Enterprise Agreement.

Conclusions

The use of a methodological approach to cloud cost management and the adoption of the right techniques are fundamental aspects that allow you to better address the cost governance challenge. Taking into consideration the principles and techniques given in this article, you have the ability to optimize expenses and maximize your investment in the cloud.

How to deal with the migration of the datacenter to Azure

The adoption of solutions and services in the public cloud is growing rapidly and steadily, mainly because many organizations have realized that moving existing workloads to Azure can bring significant benefits. These include the ability to deploy applications rapidly on a globally available infrastructure, reduced maintenance requirements and costs, and performance optimization. In this article we examine the main aspects to consider in order to adopt a strategic approach to migrating your IT infrastructure to Azure and how, thanks to this approach, you can take advantage of all the benefits of Microsoft's public cloud.

Main triggers for migration

Among the main aspects that lead customers to face a migration of their workloads to cloud solutions we find:

  • the deadlines of the data center contracts in use;
  • the need to quickly integrate new acquisitions;
  • the urgent need for skills and resources;
  • the need to keep the software and hardware in use updated;
  • the willingness to respond effectively to potential security threats;
  • compliance needs (e.g., GDPR);
  • the need to innovate their applications and make them available faster;
  • the end of support for certain products and the need to obtain extended security updates free of charge, both for Windows Server 2008/R2 and for Windows Server 2012/R2, in addition to the corresponding versions of SQL Server.

Figure 1 – Main triggers for migration

Being affected by at least one of these triggers that could start a migration process is very common. To undertake the migration in the best possible way, it is necessary to take into consideration what is described in the following paragraphs.

The path of adopting cloud solutions

The cloud adoption path defined in the Microsoft Cloud Adoption Framework for Azure identifies six main actions that should be considered:

  • Strategy definition: definition of the business justification and the expected results.
  • Plan: aligning the cloud adoption plan to business results, through:
    • Inventory of digital assets: cataloging of workloads, applications, data sources, virtual machines and other IT resources and assessments to determine the best way to host them in the cloud.
    • Create a cloud adoption plan by prioritizing workloads based on their business impact and technical complexity.
    • Definition of skills and support needs, to ensure that the company is prepared for change and new technologies.
  • Ready: preparation of the cloud environment.
  • Adopt: implementation of desired changes in IT and business processes. Adoption can take place through:
    • Migration: focuses on moving existing on-premises applications to the cloud based on an incremental process.
    • Innovation: focuses on the modernization of digital assets to drive business and product innovation. Modern approaches to implementation, operations and infrastructure governance make it possible to quickly bridge the gap between development and operations.
  • Govern: evaluation and implementation of best practices in governance.
  • Manage: implementation of operational guidelines and best practices.

Azure Landing Zone

Regardless of the migration strategy that you decide to adopt, it is advisable to prepare the Landing Zone, which represents, in the cloud adoption journey, the destination in the Azure environment. It is a horizontally scalable architecture designed to allow the customer to manage functional cloud environments, while maintaining best practices for security and governance. The architecture of the Landing Zone must be defined on the basis of business requirements and the necessary technical requirements.

Figure 2 – Conceptual example of an Azure landing zone

There are several options to implement the Landing Zone, thanks to which it will be possible to meet the deployment and operational needs of the cloud portfolio.

A structured and methodological approach and migration strategies

There are several paths for adopting solutions in Azure. To best address each of these paths, it is recommended to develop a complete business case and project plan in advance, containing information on benefits and costs (TCO) of moving workloads to the Microsoft Azure cloud, as well as recommendations on how to optimize the use model of Microsoft Azure services.

Figure 3 – Paths of adoption of cloud solutions

Based on the company's cloud strategy and general business objectives, it is advisable to examine the distribution and use of the workloads in use and evaluate their "cloud-ready" status to determine the best options and the most appropriate methods (lift&shift, refactor, rearchitect and rebuild) with a view to consolidation and migration to Microsoft Azure cloud services.

Figure 4 – Possible migration strategies

* These migration strategies are reported by Gartner research. Gartner also defines a fifth strategy called "Replace".

The following paragraphs describe the main migration strategies that can be useful.

Rehost application (i.e., lift & shift)

It involves redeploying an existing application on a cloud platform without modifying its code. The application is migrated "as is", which provides basic cloud benefits without taking on high risk and without incurring the costs of changing the application code.

This migration technique is usually used when:

  • You need to quickly move applications from on-premises to the cloud
  • The application is necessary, but the evolution of its capabilities is not a corporate priority
  • For applications that have already been designed to take advantage of Azure IaaS scalability
  • In the presence of specific application or database requirements, that can only be satisfied using IaaS virtual machines in an Azure environment.
Example

Moving a line-of-business application onto virtual machines residing in the Azure environment.

Refactor application (i.e., repackaging)

This migration strategy involves minimal changes to the application code or configuration changes, necessary to optimize the application for Azure PaaS and make the most of the cloud.

This migration technique is usually used when:

  • You want to leverage an existing code base
  • Code portability is an important element
  • The application can be easily packaged to run in an Azure environment
  • The application must be more scalable and rapidly deployable
  • We want to promote business agility through continuous innovation (DevOps)
Example

An existing application is refactored by adopting services such as App Service or Container Services. Furthermore, SQL Server is refactored to Azure SQL Database.

Rearchitect application

The re-design technique consists of modifying or extending the architecture and code base of the existing application, optimizing it for the cloud platform and thus ensuring better scalability.

This migration technique is usually used when:

  • The application needs major revisions to incorporate new features or to work more effectively on a cloud platform
  • You want to leverage the investments made in existing applications
  • There is a need to minimize the management of VMs in the cloud
  • You intend to meet your scalability requirements in a cost-effective way
  • We want to promote business agility through continuous innovation (DevOps)
Example

Breaking down a monolithic application into microservices that run in an Azure environment, interact with each other and scale easily.

Rebuild application

The rebuild of the application from scratch involves the use of cloud-native technologies on Azure PaaS.

This migration technique is usually used when:

  • Rapid development is needed and the existing application is too limiting in terms of functionality or lifespan
  • We want to take advantage of new innovations in the cloud, such as serverless, artificial intelligence (AI) and IoT
  • You have the skills to build new cloud-native applications
  • We want to promote business agility through continuous innovation (DevOps)
Example

Creation of green-field applications with innovative cloud-native technologies, such as Azure Functions, Logic Apps, Cognitive Services, Azure Cosmos DB and more.

Azure Cloud Governance

Successfully adopting public cloud services requires not only a structured and methodological approach, but also a precise strategy to migrate not just applications but also governance and management practices, adapting them appropriately.

For this reason it becomes essential to put in place a cloud technical governance process through which it is possible to guarantee effective and efficient use of IT resources in the Microsoft Azure environment, in order to achieve the customer's goals. To do this, it is necessary to apply controls and measurements that help mitigate risks and create boundaries. Governance policies in the customer's environment will also act as an early warning system to detect potential problems.

Conclusions

The digital transformation process that affects companies often involves migrating workloads hosted in their data centers to the cloud to obtain better results in terms of governance, security and cost efficiency. The innovation brought by cloud migration frequently becomes a business priority; for this reason it is advisable to adopt a structured approach to address the various migration scenarios, which helps reduce complexity and costs.

Azure Governance: how to control system configurations in hybrid and multicloud environments

Several companies are investing in hybrid and multicloud technologies to achieve the high flexibility that enables them to innovate and meet changing business needs. In these scenarios, customers face the challenge of using IT resources efficiently, in order to best achieve their business goals, by implementing a structured IT governance process. This is easier to accomplish if you have solutions that, in a centralized way, allow you to inventory, organize and enforce control policies on your IT resources wherever they reside. Azure Arc encompasses different technologies with the aim of supporting hybrid and multicloud scenarios, where Azure services and management principles are extended to any infrastructure. In this article we explore how, thanks to the adoption of Azure Guest Configuration Policies, it is possible to control the configuration of systems running in Azure, in on-premises datacenters or at other cloud providers.

The principle behind Azure Arc

The principle behind Azure Arc is to extend Azure management and governance practices to different environments and to adopt typically cloud-based approaches, such as DevOps techniques (infrastructure as code), for on-premises and multicloud environments as well.

Figure 1 – Azure Arc overview

Enabling systems to Azure Arc

Enabling servers for Azure Arc allows you to manage physical servers and virtual machines residing outside Azure, on the on-premises corporate network or at another cloud provider. This applies to both Windows and Linux systems. The management experience is designed to be consistent with the management of native Azure virtual machines: a machine connected to Azure through Arc is treated in all respects as an Azure resource. Each connected machine has a specific ID, is included in a resource group and benefits from standard Azure constructs such as Azure Policy and tagging.

To offer this experience, the installation of the specific Azure Arc agent is required on each machine that is planned to connect to Azure ("Azure Connected Machine"). The following operating systems are currently supported:

  • Windows Server 2008 R2, Windows Server 2012 R2 or higher (this includes core servers)
  • Ubuntu 16.04 and 18.04 LTS (x64)
  • CentOS Linux 7 (x64)
  • SUSE Linux Enterprise Server (SLES) 15 (x64)
  • Red Hat Enterprise Linux (RHEL) 7 (x64)
  • Amazon Linux 2 (x64)
  • Oracle Linux 7

The Azure Arc Connected Machine agent consists of the following logical components:

  • The Hybrid Instance Metadata Service (HIMDS), which manages the connection to Azure and the Azure identity of the connected machine.
  • The Guest Configuration agent, which provides in-guest policy and guest configuration features.
  • The Extension Manager agent, which manages the installation, removal and update of machine extensions.

Figure 2 – Azure Arc Agent Components

The Connected Machine agent requires secure outbound communication to Azure Arc on TCP port 443.

This agent provides no other features and does not replace the Azure Log Analytics agent, which remains necessary when you want to proactively monitor the operating system and workloads running on the machine.

For more information about installing Azure Arc, see this official Microsoft document.

Azure Arc-enabled servers can benefit from several Azure Resource Manager-related features such as Tags, Policies and RBAC, as well as some features related to Azure Management.

Figure 3 – Azure Management for all IT resources
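Since each connected machine is a regular ARM resource, it can be enumerated and inspected with the same tooling used for native Azure resources. The following is a minimal sketch using the Azure SDK for Python; the subscription ID is a placeholder and the package and operation names should be verified against the SDK version in use.

```python
# Minimal sketch: list Azure Arc-enabled servers and their tags, just like any other ARM resource.
# Assumptions: azure-identity and azure-mgmt-hybridcompute are installed and the identity has
# Reader access on the subscription; verify operation names against your SDK version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.hybridcompute import HybridComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = HybridComputeManagementClient(DefaultAzureCredential(), subscription_id)

for machine in client.machines.list_by_subscription():
    print(machine.name, machine.location, machine.tags)
```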

Azure Guest Configuration Policies

Guest Configuration Policies allow you to control settings within a machine, both for virtual machines running in the Azure environment and for "Arc Connected" machines. Validation is performed by the Guest Configuration client and extension for:

  • Operating system configuration
  • Configuration or presence of applications
  • Environment settings

At the moment, most Azure Guest Configuration Policies only audit settings inside the machine; they do not apply configurations. The exception is a built-in policy that configures the operating system time zone on Windows machines.

Requirements

Before you can check the settings inside a machine, through guest configuration policies, you must:

  • Enable an extension on the Azure VM, required to download the assigned policy assignments and the corresponding configurations. This extension is not required for "Arc Connected" machines because it is included in the Arc agent.
  • Make sure that the machine has a system-assigned managed identity, used for the authentication process when reading and writing to the guest configuration service.

Operation

Azure provides built-in initiatives and a large number of Guest Configuration Policies, but you can also create custom ones, for both Windows and Linux environments.

Assigning Guest Configuration Policies works the same way as assigning standard Azure Policies, so you can group them into initiatives. Specific parameters can also be configured for Guest Configuration Policies, and there is at least one parameter that allows you to include Azure Arc-enabled servers. Once you have the desired policy definition, you can assign it to a subscription or, in a more limited way, to a specific resource group. You also have the option of excluding certain resources from the application of the policy. A minimal assignment sketch is shown below.
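The following is a minimal sketch of such an assignment using the Azure SDK for Python. The policy definition GUID is a placeholder to be replaced with the built-in Guest Configuration definition you want to assign, and the "IncludeArcMachines" parameter name is an assumption to check against that definition.

```python
# Minimal sketch: assign a built-in Guest Configuration policy definition to a subscription.
# Assumptions: azure-identity and azure-mgmt-resource are installed; the definition GUID is a
# placeholder and the "IncludeArcMachines" parameter name should be verified for the chosen policy.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "<subscription-id>"  # placeholder
client = PolicyClient(DefaultAzureCredential(), subscription_id)

scope = f"/subscriptions/{subscription_id}"
assignment = client.policy_assignments.create(
    scope,
    "audit-guest-configuration",  # assignment name, chosen freely
    {
        "display_name": "Audit an in-guest setting on Azure and Arc-enabled servers",
        # Resource ID of the built-in Guest Configuration policy definition (placeholder GUID).
        "policy_definition_id": "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>",
        # Many Guest Configuration definitions expose a parameter to include Arc-enabled servers.
        "parameters": {"IncludeArcMachines": {"value": "true"}},
    },
)
print(assignment.id)
```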

Following the assignment, it is possible to assess the compliance status in detail directly from the Azure portal.

Inside the machine, the Guest Configuration agent uses local tools to audit the configurations.

The Guest Configuration agent checks for new or modified guest policy assignments every 5 minutes and, once an assignment is received, the settings are checked at intervals of 15 minutes.

The Cost of the Solution

The cost of Azure Guest Configuration Policies is based on the number of servers registered to the service that have one or more guest configurations assigned. Any other type of Azure Policy that is not based on guest configuration is offered at no additional cost, including virtual machine extensions that enable services such as Azure Monitor and Azure Security Center, or auto-tagging policies. Billing is prorated hourly and also includes the change tracking features available through Azure Automation. For more details on costs, please visit the official Microsoft page.

Conclusions

IT environments are constantly evolving and often have to deliver business-critical applications based on different technologies, running on heterogeneous infrastructures and, in some cases, using solutions provided in different public clouds. The adoption of a structured IT governance process is made easier by Guest Configuration Policies and the capabilities of Azure Arc, which allow you to control and support hybrid and multicloud environments more easily.

Cloud Governance: how to control cloud costs through budgets

In the public cloud, the ease of delegation and the consumption-based cost model expose companies to the risk of losing control over spending. Maintaining oversight of the expenses incurred for resources created in the cloud therefore becomes fundamentally important for an effective governance process. The Azure Cost Management solution provides a comprehensive set of cloud cost management features, including the ability to set up budgets and spending alerts. This article describes how to best use budgets to proactively control and manage cloud service costs.

Budgets are spending thresholds that can be set in the Azure Cost Management + Billing solution, capable of generating notifications when they are reached. Cost and resource utilization data are generally available within 20 hours, and budgets are evaluated against these costs every 12 to 14 hours.

The procedure for setting budgets from the Azure portal involves the following steps.

Figure 1 – Add a budget from Cost Management

Figure 2 – Parameters required when creating budgets

During the budget configuration phase, you must first assign the scope. Depending on the type of Azure account, you can select the following scopes:

  • Azure role-based access control (Azure RBAC)
    • Management groups
    • Subscription
  • Enterprise Agreement
    • Billing account
    • Department
    • Enrollment account
  • Individual agreements
    • Billing account
  • Microsoft Customer Agreement
    • Billing account
    • Billing profile
    • Invoice section
    • Customer
  • AWS scopes
    • External account
    • External subscription

For more information about the use of scopes, see this Microsoft document.

To create a budget aligned with the billing period, you can select a billing month, billing quarter or billing year reset period. If, on the other hand, you intend to create a budget aligned with the calendar, you must select a monthly, quarterly or yearly reset period.

Next, you can set the expiration date after which the budget is no longer valid and its cost evaluation stops.

Based on the fields you choose when defining the budget, a chart is shown to help you set the spending threshold to use. By default, the suggested budget is based on the highest forecasted cost that could be incurred in future periods, but the budget amount can be changed to suit your needs.

After you set up your budget, you are prompted to configure your alerts. Budgets require at least one cost threshold (% budget) and an email address to use for notifications.

Figure 3 – Configure alerts and e-mail addresses to use for notifications

For a single budget, you can include up to five thresholds and five email addresses. When a budget threshold is reached, email notifications are normally sent within an hour of the evaluation.
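The same configuration can also be automated. The following is a minimal sketch using the Azure SDK for Python that creates a monthly budget with a single 80% alert on a subscription; the subscription ID, amount, dates and e-mail address are placeholders to adapt to your environment.

```python
# Minimal sketch: create a monthly budget with an 80% spending alert on a subscription.
# Assumptions: azure-identity and azure-mgmt-consumption are installed; all values shown are
# placeholders (the start date must be the first day of a current or future month).
import datetime
from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = ConsumptionManagementClient(DefaultAzureCredential(), subscription_id)

scope = f"/subscriptions/{subscription_id}"
budget = client.budgets.create_or_update(
    scope,
    "monthly-infrastructure-budget",  # budget name, chosen freely
    {
        "category": "Cost",
        "amount": 1000,           # spending threshold in the billing currency
        "time_grain": "Monthly",  # reset period
        "time_period": {
            "start_date": datetime.datetime(2022, 9, 1, tzinfo=datetime.timezone.utc),
            "end_date": datetime.datetime(2024, 8, 31, tzinfo=datetime.timezone.utc),
        },
        "notifications": {
            "actual-cost-over-80-percent": {
                "enabled": True,
                "operator": "GreaterThan",
                "threshold": 80,  # percentage of the budget amount
                "contact_emails": ["cloudops@contoso.com"],  # placeholder address
            }
        },
    },
)
print(budget.id)
```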

When creating or editing a budget, but only if its scope is a subscription or a resource group, you can configure it to invoke an action group. The action group allows you to customize notifications to suit your needs and can perform various actions when the budget threshold is reached, including:

  • Voice call or text message (for enabled countries)
  • Sending an email
  • Calling a webhook
  • Sending data to ITSM
  • Calling a Logic App
  • Sending a push notification to the Azure mobile app
  • Running an Azure Automation runbook

Figure 4 – Associating an Action Group when a threshold is reached

After you finish creating a budget, you can view it in the respective section.

Figure 5 – Budget created and its percentage of usage

Viewing the budget against the spending trend is generally one of the first actions taken during cost analysis.

Figure 6 – View budget in cost analysis

When a certain threshold is reached in a budget, in addition to the notifications you set, an alert is also generated in the Azure portal.

Figure 7 – Alert generated when a certain threshold is reached

When the budget thresholds that you create are exceeded, notifications are triggered, but none of the cloud resources are changed and as a result consumption is not interrupted.

Thanks to the integration with the Amazon Web Services (AWS) Cost and Usage Report (CUR), you can monitor and control AWS costs in Azure Cost Management and define budgets for AWS resources too.

The Cost of the Solution

You can use Azure Cost Management for free, with all its features, for the Azure environment. For the management of AWS costs, a charge equal to 1% of the total spend managed for AWS is expected in the final release. For more details on the cost of the solution, you can consult the Cost Management pricing page.

Conclusions

Cost control is a key component to maximize the value of your cloud investment. By using budgets, you can easily activate an effective mechanism to proactively control and manage the costs of cloud services located on both Microsoft Azure and Amazon Web Services (AWS).

Azure Arc: new features to manage systems in hybrid environments

The complexity of IT environments keeps growing, to the point where organizations run applications based on different technologies, active on heterogeneous infrastructures and sometimes using solutions in different public clouds. The need felt most strongly by customers is to adopt a solution that, in a centralized way, allows them to inventory, organize and enforce control policies on their IT resources wherever they are. Microsoft's response to this need is Azure Arc, a solution involving different technologies with the aim of enabling new hybrid scenarios, where Azure services and management principles are extended to any infrastructure. This article lists new features that were recently introduced to extend the ability to manage hybrid environments.

The servers enabled for the Azure Arc solution can already benefit from various features related to Azure Resource Manager such as Tags, Policies and RBAC, as well as some features related to Azure Management.

Figure 1 – Azure Management for all IT resources

Thanks to a recently announced update, you can use new extensions, called Azure Arc Extensions, to expand functionality and further extend Azure management and governance practices to different environments. This makes it possible to increasingly adopt typically cloud-based approaches, such as DevOps techniques (infrastructure as code), for on-premises environments as well.

Azure Arc Extensions

The Azure Arc Extensions are applications that allow you to make configurations and perform post-deployment automation tasks. These extensions can be run directly from the Azure command line, PowerShell or Azure portal.

The following Azure Arc Extensions are currently available and can be deployed on Azure Arc-enabled servers.

Custom Script Extensions for Windows and Linux Systems

With this extension, you can perform post-provisioning tasks on the machine to customize your environment. By adding this extension, you can download custom scripts, for example from Azure Storage, and run them directly on the machine.

Figure 2 – Custom Script Extensions, for Windows systems enabled for Azure Arc, from the Azure Portal

When deploying the Custom Script Extension, you can add the file that contains the script to run and, optionally, its parameters. For Linux systems this is a shell script (.sh), while for Windows it is a PowerShell script (.ps1).
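The same deployment can be automated. The sketch below adds the Custom Script Extension to an Azure Arc-enabled Windows server by calling the ARM REST API from Python; the api-version, publisher and type values, the script URL and the resource names are assumptions to verify against the current Azure Arc documentation.

```python
# Minimal sketch: deploy the Custom Script Extension to an Azure Arc-enabled Windows server
# through the ARM REST API. Assumptions: azure-identity and requests are installed, the identity
# can manage the machine, and the api-version, publisher and type values should be checked
# against the current Microsoft.HybridCompute documentation.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
resource_group = "<resource-group>"    # placeholder
machine_name = "<arc-machine-name>"    # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.HybridCompute/machines/{machine_name}"
    "/extensions/CustomScriptExtension"
    "?api-version=2021-05-20"          # assumed api-version, check the current one
)

body = {
    "location": "westeurope",          # must match the Arc machine's region
    "properties": {
        "publisher": "Microsoft.Compute",  # Windows Custom Script Extension publisher
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.10",
        "settings": {
            # Placeholder script location and command; the script could also live in Azure Storage.
            "fileUris": ["https://example.com/scripts/configure.ps1"],
            "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File configure.ps1",
        },
    },
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json().get("properties", {}).get("provisioningState"))
```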

Desired State Configuration extension on Windows and Ubuntu systems (DSCForLinux)

Desired State Configuration (DSC) is a management platform that you can use to manage your IT and development infrastructure with a view to "configuration as code".

DSC for Windows provides new Windows PowerShell cmdlets and resources that you can use to declaratively specify how you want to configure your software environment. It also provides a useful tool for maintaining and managing existing configurations. This extension works like the extension for virtual machines in Azure, but it is designed to be deployed on Azure Arc-enabled servers.

Figure 3 – Powershell Desired State Configuration, for Windows systems enabled for Azure Arc, from the Azure Portal

The extension DSCForLinux allows you to install the OMI agent and DSC agent on Azure Arc-enabled Ubuntu systems. The DSC extension allows you to perform the following actions:

  • Register the VM with an Azure Automation account to extract (Pull) configurations (Register ExtensionAction).
  • Deploy MOF configurations (Push ExtensionAction).
  • Apply the MOF meta configuration to the VM to configure a pull server to extract the node configuration (Pull ExtensionAction).
  • Install custom DSC modules (Install ExtensionAction).
  • Remove custom DSC modules (Remove ExtensionAction).

 

OMS Agent for Linux – Microsoft Monitoring Agent

Installing this agent allows you to collect monitoring data from the guest operating system and from the application workloads of the systems, and send them to a Log Analytics workspace. This agent is used by several Azure management solutions, including Azure Monitor, Azure Security Center and Azure Sentinel. Although today it is possible to monitor non-Azure VMs even without Azure Arc, using this extension allows you to automatically detect and manage the agents on the VMs. Once integrated, Azure Arc-enabled servers fit seamlessly into the existing Azure portal views, alongside Azure virtual machines and Azure scale sets.

After you deploy the Azure Arc agent on the systems, you can install the Microsoft Monitoring Agent (MMA) using this extension, simply by adding the Log Analytics workspace ID and its key.

Figure 4 – Microsoft Monitoring Agent extension for Azure Arc from the Azure portal

Thanks to the availability of these new extensions, Azure Arc-enabled servers also have features such as Update Management, Inventory, Change Tracking and Monitor.

Update Management

The Update Management solution gives you overall visibility into update compliance for both Windows and Linux systems. The search panel quickly identifies missing updates and provides the ability to schedule deployments to install updates within a specific maintenance window.

Inventory

This feature allows you to retrieve inventory information relating to: installed software, files, Windows Registry keys, Windows Services and Linux Daemons.

Change Tracking

The Change Tracking feature allows you to track changes to daemons, files, registry keys, software and Windows services. This feature can be very useful for diagnosing specific problems and for enabling alerts on unexpected changes.

Conclusions

Thanks to the availability of these new extensions, you can take advantage of more of the governance and management functionality typical of Azure for hybrid cloud environments as well. This is an important evolution of the solution, currently still in preview, which is destined to be further enriched with important new features soon.

Azure Monitor: how to enable the monitor service for virtual machine through Azure Policy

Azure Monitor now includes a service for monitoring virtual machines, called Azure Monitor for VMs. This service allows you to analyze system performance data and builds a map that identifies all the dependencies of virtual machines and their processes. The recommended way to enable this solution across different systems is through Azure Policy. This article describes the steps to activate it using this method, revisiting various concepts related to Azure governance.

Key Features of Azure Monitor for VMs

Azure Monitor for VMs can be used on Windows and Linux virtual machines, regardless of the environment in which they reside (Azure, on-premises or other cloud providers), and includes the following areas:

  • Performance: shows a summary of performance details from the guest operating system. The solution has powerful data aggregation and filtering capabilities that enable you to meet the challenge of monitoring performance for a very large number of systems. This allows you to easily monitor the resource usage of all VMs and quickly identify those that have performance issues.
  • Maps: generates a map with the interconnections between the various components that reside on different systems. Maps show how VMs and processes interact with each other and can identify dependencies on third-party services. The solution also allows you to check for connection errors, count connections in real time, network bytes sent and received by processes and latencies encountered at the service level.

Enabling through Azure Policy

Azure Policy allows you to apply and enforce compliance criteria and related remediation actions at scale. To enable this feature automatically on the virtual machines in your Azure environment and achieve a high level of compliance, it is recommended that you use Azure Policy. Using Azure Policy, you can:

  • Deploy the Log Analytics agent and Dependency agent.
  • Get a report on compliance status.
  • Start remediation actions for non-compliant VMs.

One requirement to check before activation is the presence of the VMInsights solution in the Log Analytics workspace that will be used to store monitoring data.

Figure 1 – Configuring the Log Analytics workspace

Selecting the desired workspace triggers the installation of the VM Insights solution, which allows you to collect performance counters and metrics for all virtual machines connected to that workspace.

To activate the Azure Monitor for VMs policies, just select the relevant onboarding tile on the main screen of the solution.

Figure 2 – Selecting Azure Policy as the enablement method

The following blade will show the coverage status of the service and provide the ability to assign policies for its activation.

Figure 3 – Assigning the Initiative at the Management Group level

Azure Management Groups organize different subscriptions into logical containers on which the necessary governance policies can be defined, implemented and verified.

Initiatives, which are sets of multiple Azure Policies, can be assigned at the resource group, subscription or management group level. It is also possible to exclude certain resources from the application of the policies.

In this regard, the policies for enabling Azure Monitor for VMs are grouped into a single initiative, "Enable Azure Monitor for VMs", which includes the following policies:

  • Audit Dependency agent deployment – VM image (OS) unlisted
  • Audit Log Analytics agent deployment – VM image (OS) unlisted
  • Deploy Dependency agent for Linux VMs
  • Deploy Dependency agent for Windows VMs
  • Deploy Log Analytics agent for Linux VMs
  • Deploy Log Analytics agent for Windows VMs

It is recommended to assign this initiative at the management group level.

Figure 4 – Configuring the association

Among the parameters, you are prompted to specify the Log Analytics workspace, and you can optionally configure remediation tasks.
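A minimal assignment sketch using the Azure SDK for Python is shown below. The management group name, the Log Analytics workspace resource ID, the initiative GUID and the parameter name are placeholders or assumptions: the built-in initiative ID and its exact parameter names can be looked up in the portal before assigning.

```python
# Minimal sketch: assign the "Enable Azure Monitor for VMs" initiative at management group scope,
# passing the Log Analytics workspace as a parameter and enabling a system-assigned identity so
# that the deployIfNotExists policies can remediate. Assumptions: azure-identity and
# azure-mgmt-resource are installed; GUIDs, names and the parameter name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "<subscription-id>"  # used only to build the client; placeholder
client = PolicyClient(DefaultAzureCredential(), subscription_id)

scope = "/providers/Microsoft.Management/managementGroups/<management-group-name>"
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

assignment = client.policy_assignments.create(
    scope,
    "enable-azure-monitor-for-vms",  # assignment name, chosen freely
    {
        "display_name": "Enable Azure Monitor for VMs",
        # Resource ID of the built-in initiative (policy set definition); placeholder GUID.
        "policy_definition_id": "/providers/Microsoft.Authorization/policySetDefinitions/<initiative-guid>",
        "parameters": {"logAnalytics_1": {"value": workspace_id}},  # parameter name is an assumption
        # A managed identity and a location are required so remediation tasks can deploy the agents.
        "identity": {"type": "SystemAssigned"},
        "location": "westeurope",
    },
)
print(assignment.id)
```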

Following the assignment, you can evaluate the compliance state in detail and, if necessary, apply remediation actions.

Figure 5 – Verification of Initiative compliance status

Once the enable process is complete, you can analyze the system performance data and the maps created to identify all the dependencies of the virtual machines and their processes.

Figure 6 – Performance collected for systems

Figure 7 – Map with the interconnections between various systems

Figure 8 – Map showing connection details

An effective way to make this data easily accessible and analyze it simply is to use Workbooks, interactive documents that allow you to better interpret the information and perform in-depth analysis. In this Microsoft document you can consult the list of Workbooks included in Azure Monitor for VMs and learn how to create your own custom ones.
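As a complement to the Workbooks, the performance data collected by Azure Monitor for VMs lands in the InsightsMetrics table of the Log Analytics workspace and can be queried directly. The following is a minimal sketch using the azure-monitor-query package for Python; the workspace ID is a placeholder.

```python
# Minimal sketch: query the processor utilization collected by Azure Monitor for VMs from the
# InsightsMetrics table of the Log Analytics workspace. Assumptions: azure-identity and
# azure-monitor-query are installed; the workspace ID (GUID) is a placeholder.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
InsightsMetrics
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize avg(Val) by Computer, bin(TimeGenerated, 15m)
| order by Computer asc
"""

result = client.query_workspace(
    workspace_id="<workspace-id>",  # placeholder Log Analytics workspace GUID
    query=query,
    timespan=timedelta(hours=24),
)

for table in result.tables:
    for row in table.rows:
        print(row)
```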

Conclusions

This article shows how you can enable the Azure Monitor for VMs solution through Azure Policy in a simple, fast and effective way. The solution provides very useful information that would otherwise have to be collected separately from the different systems in your environment. The growing complexity and number of services on Azure make it essential to adopt tools like Azure Policy to enforce effective governance policies. Furthermore, with the introduction of Azure Arc, it will be possible to extend these Azure management and governance practices to different environments, making it easier to bring features available in Azure to all infrastructure components.

Azure Governance: Azure Blueprints overview

IT governance creates a process through which you can ensure that companies use their IT resources efficiently, with the aim of reaching their goals effectively. Governance in the Azure environment is made possible by a set of services specifically designed to enable large-scale management and control of the various Azure resources. These tools include Azure Blueprints, which allows you to design and create new components in Azure in full compliance with company specifications and standards. This article provides an overview of the solution and the elements necessary to use it.

Azure Blueprints allows cloud architects and cloud engineers, responsible for building architectures in Azure, to define and implement a set of Azure resources in a repeatable way, with the certainty of adhering to company-defined standards, models and requirements. Azure Blueprints also allows you to quickly release new environments, adopting integrated components and accelerating development and delivery times.

The main strengths of the solution Azure Blueprints can be summarized as follows.

Simplify the creation of Azure environments

  • Centralize the creation of new Azure environments using templates.
  • Allows you to add resources, policies and roles.
  • Allows you to track project updates through versioning.

Through a declarative model, Azure Blueprints allows you to orchestrate the deployment of various resource templates and other Azure artifacts. The Azure Blueprints service is backed by Azure Cosmos DB: blueprint objects are replicated to multiple Azure regions, providing low-latency, highly available and consistent access to them, regardless of the region in which the resources are deployed.

It allows you to enforce compliance

  • Enables developers to create fully governed environments through self-service methodologies.
  • Provides the ability to centrally create multiple Azure environments and subscriptions.
  • Leverages integration with Azure Policy and the DevOps lifecycle.

Allows you to control locks on resources

  • It ensures that the base resources can not be modified.
  • It manages lock centrally.
  • It allows you to update locked resources through changes to the definition of the blueprint.

How to use Azure BluePrints

This section shows the steps to follow in order to adopt the Azure Blueprints solution.

Figure 1 - How Azure Blueprint works

The first step is the creation of a blueprint, which can be done via the Azure portal, PowerShell or the REST API.

Figure 2 - Initial screen of Blueprints in the Azure portal

By starting the creation process from the Azure portal it is possible to start from a blank blueprint or use some available examples.

Figure 3 - Creation of the blueprint by the Azure portal

The blueprint consists of different artifacts, such as role assignments, policy assignments, Azure Resource Manager templates and resource groups. After creation, you must publish the blueprint (at the end of creation it is in the draft state), specifying a version. Azure Blueprints is very useful for companies that use the infrastructure-as-code model, as it fits into continuous integration and continuous deployment processes. A publishing sketch via the REST API is shown below.
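As a rough illustration of the publishing step, the sketch below publishes a draft blueprint as a version by calling the Azure Blueprints REST API from Python. The subscription, blueprint name, version label and api-version are placeholders or assumptions to verify against the Microsoft.Blueprint documentation.

```python
# Minimal sketch: publish a draft blueprint as a new version through the Azure Blueprints REST API.
# Assumptions: azure-identity and requests are installed; the scope, blueprint name, version label
# and api-version are placeholders to check against the Microsoft.Blueprint documentation.
import requests
from azure.identity import DefaultAzureCredential

scope = "/subscriptions/<subscription-id>"  # blueprints can also be defined at management group scope
blueprint_name = "corp-baseline"            # placeholder blueprint created earlier in draft state
version = "v1.0"                            # version label assigned at publish time

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Blueprint/blueprints/{blueprint_name}/versions/{version}"
    "?api-version=2018-11-01-preview"       # assumed api-version, check the current one
)

body = {"properties": {"changeNotes": "First published version"}}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json()["name"])
```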

Only after publishing a blueprint can you assign it to one or more Azure subscriptions, specifying the lock type from the following options:

  • Don’t Lock: means that resources created by Blueprints will not be protected.
  • Do Not Delete: means that resources can be changed, but not removed.
  • Read Only: the assignment results in a lock, and the resulting resources cannot be modified or removed, even by subscription owners. Note that not all Azure resources support the lock and that applying the lock can take up to 30 minutes to become effective.

Figure 4 – Blueprint Assignment

During the Blueprint assignment, you will also be prompted for the parameters to deploy the resources.

Figure 5 – Blueprint parameter request example

An interesting aspect of Azure Blueprints is that the blueprints you create maintain a relationship with the assigned resources and can be monitored and audited; this is not possible using simple ARM templates and policies.

Conclusions

Azure Blueprints is a service that gives those who build Azure architectures the ability to define a set of resources in an easily repeatable manner, in compliance with corporate standards and the organization's requirements. By adopting blueprints you can rapidly build and deploy new environments with a series of integrated components. This allows you not only to deploy consistent environments, but to do so in an agile way, enabling organizations to accelerate the development and delivery of solutions in Azure.

Azure Arc: a new approach to hybrid environments

The use of hybrid architectures in enterprises is increasingly widespread: they allow you to keep benefiting from investments made in your on-premises environment while, at the same time, taking advantage of the innovation introduced by the cloud. The adoption of hybrid solutions succeeds when it is based on a shared policy for deployment, component management and security. Without consistency in the management of the different environments, costs and complexity are likely to grow exponentially. Microsoft has decided to respond to this need with Azure Arc, a solution involving a range of technologies with the aim of enabling new hybrid scenarios, where Azure services and management principles are extended to any infrastructure. This article presents the approach adopted by Azure Arc for hybrid environments.

The complexity of IT environments keeps growing, to the point where organizations run applications based on different technologies, active on heterogeneous infrastructures and sometimes adopting solutions in different public clouds. Customers need a solution that centrally allows them to inventory, organize and enforce control policies on their IT resources wherever they are.

The principle behind Azure Arc is to extend Azure management and governance practices to different environments and to adopt typically cloud-based approaches, such as DevOps techniques (infrastructure as code), for on-premises environments as well.

Figure 1 – Azure Arc overview

To achieve this, Microsoft has decided to extend the Azure Resource Manager model to also support hybrid environments; this makes it easier to apply Azure security features to all infrastructure components.

Figure 2 – Azure Management for all resources

Azure Arc consists of a set of different technologies and components that allow you to:

  • Manage applications in Kubernetes environments: it provides the ability to deploy and configure Kubernetes applications in a consistent manner across all environments, adopting modern DevOps techniques.
  • Allow Azure data services to run on any infrastructure: everything is based on the adoption of Kubernetes and makes it easier to meet compliance criteria, improve data security and gain considerable flexibility in deployment. At the moment the services covered are Azure SQL Database and Azure Database for PostgreSQL.
  • Organize, manage and govern all server systems: Azure Arc extends Azure governance and management capabilities to physical machines and virtual systems in different environments. This solution is specifically called Azure Arc for servers.

Figure 3 – Azure Arc Technologies

Azure Arc relies on specific resource providers for Azure Resource Manager and requires the installation of Azure Arc agents.

By logging in to the portal, you can see that Azure Arc for servers is already available in public preview, while you need to register to preview the management of Kubernetes environments and data services.

Figure 4 – Azure Arc in the Azure portal

Thanks to the overall view introduced by Azure Arc, you can achieve the following objectives for hybrid architectures, which would otherwise be difficult to reach:

  • Standardization of operations
  • Organization of resources
  • Security
  • Cost Control
  • Business Continuity
  • Regulatory and corporate compliance

Figure 5 – Cloud-native governance with Azure Arc

Conclusions

Azure Arc was recently announced and, although still in an embryonic phase, I think it will evolve significantly, enough to revolutionize the management and development of hybrid environments. To stay up to date on how this solution develops, you can register on this page.

How to control the execution of applications using Azure Security Center

Azure Security Center provides several mechanisms to prevent security threats and reduce the attack surface of your environment. One of these mechanisms is Adaptive Application Controls, a solution that controls which applications can run on your systems. Azure Security Center uses its machine learning engine to analyze the applications running on virtual machines and leverages artificial intelligence to propose a list of allowed applications. This article lists the benefits of adopting this solution and shows how to configure it.

By adopting this solution, available with the Standard tier of Azure Security Center, you can do the following:

  • Be alerted to attempts to run malicious applications that might not be detected by antimalware solutions. For Windows systems in Azure, you can also block execution.
  • Respect corporate compliance, allowing the execution of only licensed software.
  • Avoid using unwanted or obsolete software in your infrastructure.
  • Control access to sensitive data that takes place using specific applications.

Figure 1 – Azure Security Center Free vs Standard Tier

Adaptive application controls can be used on systems regardless of their geographic location. Currently for systems not located in Azure and Linux VMs, only audit mode is supported.

This feature can be activated directly from the portal by accessing the Azure Security Center.

Figure 2 – Adaptive application controls in the "Advanced cloud defense" of Security Center

Security Center uses a proprietary algorithm to automatically create groups of machines with similar characteristics, to help enforce Application Control policies.

From the management interface, the groups are divided into three types:

  • Configured: list groups containing VMs where this feature is configured.
  • Recommended: there are groups of systems where enabling application control is recommended. Security Center uses machine learning mechanisms to identify VMs on which the same applications are always regularly running, and therefore are good candidates to enable application control.
  • Unconfigured: list of groups that contain the VMs for which there are no specific recommendations regarding the application control. For example, VMs that systematically run different applications.

Figure 3 – Types of groups

By clicking on a group of virtual machines, you can manage the application control rules, which evaluate the execution of applications.

Figure 4 – Configuring Application control rules

For each rule, you select the machines to which it applies and the applications you want to allow. Detailed information is provided for each application; in particular, the "Exploitable" column indicates whether the application could potentially be used maliciously to bypass the list of allowed applications. You should pay close attention before allowing this type of application.

For Windows systems, this configuration involves creating specific rules in AppLocker, which governs the execution of applications.

By default, Security Center enables application control in Audit mode, which only monitors activity on the protected virtual machines without blocking application execution. For each group, after verifying that the configuration does not cause any malfunctions in the workloads running on the systems, you can switch application control to Enforce mode, provided the machines are Windows virtual machines in the Azure environment, to block the execution of applications that are not expressly allowed. You can also change the name of the group from the same interface.

Figure 5 – Change the name and protection mode

Once this configuration is complete, the main Security Center panel will show notifications about potential violations, that is, applications executed outside the allowed list.

Figure 6 - Application violation notifications in Security Center

Figure 7 – Full list of the violations found

Figure 8 - Example of a violation

Conclusions

Adaptive Application Controls allows you, in a few easy steps, to quickly enable thorough control over the applications that run on your systems. The configuration is simple and intuitive, especially thanks to the feature that groups systems with similar characteristics in terms of the applications they run. It is therefore an important mechanism that helps prevent potential security threats and minimize the attack surface of the environment. Together with the other features, Adaptive Application Controls helps make Security Center a complete solution for workload protection.