
Microsoft Copilot for Azure: how Artificial Intelligence is transforming Azure infrastructure design and management

In an era marked by relentless technological evolution, artificial intelligence (AI) is emerging as a revolutionary force in the cloud computing landscape. At the heart of this transformation is Microsoft, which has recently unveiled Microsoft Copilot for Azure. This innovative solution marks the beginning of a new era in the design, management, and optimization of Azure infrastructure and services. This article provides an overview of Microsoft Copilot for Azure, an ally for businesses designed to fully exploit the potential of the cloud through advanced, AI-driven features and an intuitive experience.

Premise: Copilot’s experience in Microsoft’s Cloud

Microsoft Copilot is a cutting-edge solution in the field of AI-based assistants. It stands out for its use of sophisticated large language models (LLMs) and its deep integration with Microsoft’s Cloud. The tool aims to enhance productivity by facilitating access to critical data while ensuring high standards of security and privacy. At its core is an intuitive conversational interface that simplifies interaction with data and automation, making application creation simpler and more approachable.

Copilot adapts to different needs: from basic usage that requires minimal effort and customization, to highly customized solutions that require substantial investment in development and data integration.

Figure 1 – Copilot’s Experience in Microsoft’s Cloud

The main ways to take advantage of Microsoft Copilot are:

  • Adopting Copilot: Microsoft offers various Copilot assistants to increase productivity and creativity. Integrated into various Microsoft products and platforms, Copilot transforms the digital workspace into a more interactive and efficient environment. Among these, Copilot for Azure stands out, which will be examined in detail in this article.
  • Extending Copilot: Developers have the opportunity to incorporate external data, simplifying user operations and reducing the need to switch contexts. This not only improves productivity but also fosters greater collaboration. Through Copilot, it’s easy to integrate these data into the Microsoft products used daily. For example, both companies and ISVs can develop plugins to bring their own APIs and business data directly into Copilot. By adding these plugins, connectors, or message extensions, users can maximize the AI capabilities offered by Copilot.
  • Building your own Copilot: Beyond adoption and extension, it’s possible to create a customized Copilot for a unique conversational experience, using Azure OpenAI, Cognitive Search, Microsoft Copilot Studio, and other Microsoft Cloud technologies. A customized Copilot can integrate business data, access external data in real-time via APIs, and integrate into business applications.

Microsoft Copilot for Azure: the assistant revolutionizing the design, management, and optimization of Azure infrastructure and services via AI

Microsoft Copilot for Azure is an innovative AI-based tool designed to maximize the potential of Azure. Using LLMs (Large Language Models), Azure’s control plane, and detailed analysis of the Azure environment, Copilot makes work more effective and productive.

This assistant helps users navigate Azure’s numerous offerings, which include hundreds of services and thousands of resource types. It combines data and insights to increase productivity, minimize costs, and provide specific insights. Its ability to interpret natural language greatly simplifies managing Azure, responding to questions and providing personalized information about the user’s Azure environment.

Available directly through the Azure portal, Microsoft Copilot for Azure facilitates user interaction, responding to questions, generating queries, and performing tasks. Moreover, Copilot for Azure provides personalized, high-quality recommendations, respecting the organization’s policies and privacy.

The following sections describe the main scenarios in which Microsoft Copilot for Azure can be used.

Performing tasks with improved efficiency

Copilot for Azure is designed to handle a wide range of basic operations that make up the daily routine of managing Azure environments. These operations, essential for the maintenance and efficiency of Azure architectures, can often be repetitive and time-consuming. With Copilot, they can be carried out more quickly, saving valuable time and reducing the likelihood of human error.

Interpreting and assessing the Azure environment:

  • Obtain information about resources through Azure Resource Graph queries.
  • Understand events and the health status of services.
  • Analyze, estimate, and optimize costs.
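As an illustration of the Azure Resource Graph scenario, the following sketch shows the kind of Kusto Query Language (KQL) query Copilot can generate from a natural-language request such as “how many VMs do I have per region?”. The query below is a hand-written assumption of typical output, not Copilot’s literal response:

```kusto
// Count virtual machines grouped by Azure region
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| summarize vmCount = count() by location
| order by vmCount desc
```

A query like this can be run in Azure Resource Graph Explorer in the portal, or via the Azure CLI’s resource-graph extension.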

Working smarter with Azure services:

  • Deploy virtual machines effectively.
  • Build infrastructures and deploy workloads.
  • Obtain information about Azure Monitor metrics and logs.
  • Work more productively using Azure Stack HCI.
  • Secure and protect storage accounts.

Writing and optimizing code:

  • Generate Azure CLI scripts.
  • Discover performance recommendations.
  • Create API Management policies.
  • Generate YAML files for Kubernetes.
  • Resolve app issues more quickly with App Service.
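As an example of the “generate YAML files for Kubernetes” scenario, below is a minimal Deployment manifest of the kind Copilot can produce from a prompt like “create a deployment running three nginx replicas”. The resource names and image tag are illustrative assumptions, not Copilot’s literal output:

```yaml
# Minimal Kubernetes Deployment: three replicas of an nginx container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend        # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Copilot can then be asked follow-up questions to refine the manifest, for example adding resource limits or a Service definition.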

Obtaining specific and detailed information and advice

Within the Azure portal, Copilot emerges as a useful tool for delving into a wide range of Azure concepts, services, or offerings. Its ability to provide answers is based on constantly updated documentation, ensuring users get up-to-date advice and valuable help in solving problems. This not only improves efficiency but also ensures that decisions are based on the most recent and relevant information.

Navigating the portal with greater ease

Navigating the Azure portal, often perceived as complex due to the vastness of services offered, is made simple and intuitive with Copilot’s assistance. Instead of manually searching among the numerous services, users can simply ask Copilot to guide them. Copilot not only responds by opening the requested service but also offers suggestions on service names and provides detailed explanations, making the navigation process smoother.

Simplified management of portal settings

Another notable aspect is Copilot’s ability to simplify the management of Azure portal settings. Users can now confirm or change settings directly through Copilot, without the need to access the control panel. For example, it’s possible to select and customize Azure themes directly through Copilot, making interaction with the portal not only more efficient but also more personalized.

Limitations as of December 2023

As of December 2023, Microsoft Copilot for Azure is in preview and has the following limitations:

  • Each user has a limit of ten questions per conversation and a maximum of five conversations per day.
  • Responses that include lists are limited to the first five items.
  • For some requests and queries, using the name of a resource may not be sufficient; it may be necessary to provide the Azure resource ID.
  • Available only in English.

Conclusions

Microsoft Copilot for Azure represents a revolutionary turn in cloud computing, leveraging artificial intelligence to significantly transform the management and optimization of Azure architectures. This tool elevates productivity and security, simplifying daily operations, providing detailed analysis, and assisting users in managing the Azure environment. Although we are still at the dawn of this technology, Copilot for Azure represents a significant advancement. This tool not only provides an intuitive and efficient user experience but also lays the groundwork for a future where artificial intelligence and cloud computing will be increasingly interconnected and synergistic.

Azure Stack HCI: the continuously evolving Hyper-Converged solution – December 2023 Edition

In today’s rapidly evolving technological landscape, the need for flexible and scalable IT infrastructures has never been more pressing. Azure Stack HCI emerges as a response to this need, offering a hyperconverged infrastructure (HCI) solution that enables workloads to run in on-premises environments while maintaining a strategic connection with the services offered by Azure. Azure Stack HCI is not just a hyperconverged solution: it is also a strategic component of the Azure services ecosystem, designed to integrate with and amplify the capabilities of existing IT infrastructure.

As part of Azure’s hybrid offering, Azure Stack HCI is constantly evolving, adapting to the changing needs of the market and user expectations. The recent wave of innovations announced by Microsoft testifies to the company’s commitment not only to maintaining but also improving its position as a leader in the HCI solutions sector. These new features, which will be explored in detail in this article, promise to open new paths for the adoption of Azure Stack HCI, significantly improving the management of hybrid infrastructures and offering new opportunities to optimize the on-premises environment.

The lifecycle of updates and upgrades of Azure Stack HCI

A fundamental aspect of Azure Stack HCI is its predictable and manageable upgrade and update experience. Microsoft’s strategy for Azure Stack HCI updates is designed to ensure both security and continuous innovation of the solution. Here’s how it works:

  • Monthly quality and security updates: Microsoft regularly releases monthly updates focused on quality and security. These updates are essential to maintain the integrity and reliability of the Azure Stack HCI environment.
  • Annual feature updates: in addition to monthly updates, an annual feature update is released. These annual updates aim to improve and enrich the capabilities of Azure Stack HCI with new features and optimizations.
  • Timing for installing updates: to keep the Azure Stack HCI service in a supported state, users have up to six months to install updates. However, it is recommended to install updates as soon as they are released to ensure maximum efficiency and security of the system.
  • Support from Microsoft’s Hardware Partners: Microsoft’s hardware solution partners support Azure Stack HCI’s “Integrated Systems” and “Validated Nodes” with hardware support services, security updates, and assistance, for at least five years.

In addition to these established practices, during Microsoft Ignite 2023, a significant new development was announced: the public preview of Azure Stack HCI version 23H2. This latest version represents an important step in the evolution of Azure Stack HCI. The final version of this updated solution will be released in early 2024, slightly behind the planned release cycle. This delay is attributable to significant changes made to the solution, aimed at further improving the capabilities and performance of Azure Stack HCI. Initially, Azure Stack HCI version 23H2 will be available exclusively for new installations. Over the course of the year, it is expected that most users currently on Azure Stack HCI version 22H2 will have the opportunity to upgrade their clusters to the new version 23H2.

Figure 1 – Azure Stack HCI update release cycles

Activation and management of different workloads

Modern organizations often find themselves managing a wide range of applications: some based on containers, others on virtual machines (VMs), some running in the cloud, others in edge environments. Thanks to Azure Arc and an adaptive approach to the cloud, it’s possible to use common tools and implement uniform operational practices for all workloads, regardless of where they are executed. The 23H2 version of Azure Stack HCI provides all the necessary Azure Arc infrastructure, automatically configured as part of the cluster deployment, including the Arc Resource Bridge and other management agents and components. This means that, from the start, it’s possible to begin deploying Arc-enabled virtual machines, Azure Kubernetes Service clusters, and Azure Virtual Desktop session hosts.

Virtual Machines

The 23H2 version of Azure Stack HCI offers the ability to activate general-purpose VMs with flexible sizing and configuration options to meet the needs of different applications. Users can use their own custom Linux or Windows images or conveniently access those available in the Azure Marketplace. When creating a new virtual machine (VM) using the Azure portal, the Command Line Interface (CLI), or an ARM template, it is automatically equipped with the Connected Machine Agent. This includes the integration of extensions like Microsoft Defender, Azure Monitor, and Custom Script, thus ensuring uniform and integrated management of all machines, both in the cloud and at the edge.
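The extension wiring described above can be sketched as an ARM resource fragment that attaches the Azure Monitor agent to an Arc-enabled machine. The machine name and API version below are illustrative assumptions, and the exact resource schema should be checked against the current Azure documentation:

```json
{
  "type": "Microsoft.HybridCompute/machines/extensions",
  "apiVersion": "2022-12-27",
  "name": "myArcVm/AzureMonitorWindowsAgent",
  "location": "eastus",
  "properties": {
    "publisher": "Microsoft.Azure.Monitor",
    "type": "AzureMonitorWindowsAgent",
    "autoUpgradeMinorVersion": true
  }
}
```

The same pattern applies to the Microsoft Defender and Custom Script extensions mentioned above, each with its own publisher and type values.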

Azure Kubernetes Service

The 23H2 version of Azure Stack HCI offers the Azure Kubernetes Service, a managed Kubernetes solution that operates in a local environment. The Azure Kubernetes Service is automatically configured as part of the Azure Stack HCI deployment and includes everything needed to start deploying container-based workloads. The Azure Kubernetes Service runs its control plane in the same Arc Resource Bridge as the general-purpose VMs and uses the same storage paths and logical networks. Each new Kubernetes cluster deployed via the Azure portal, CLI, or an ARM template is automatically configured with Azure Arc Kubernetes agents inside to enable extensions such as Microsoft Defender, Azure Monitor, and GitOps for application deployment and CI/CD.

Azure Virtual Desktop for Azure Stack HCI (Preview)

The 23H2 version of Azure Stack HCI has been optimized to support the deployment of virtualized desktops and applications. Azure Virtual Desktop, a Microsoft-managed desktop virtualization service with centralized control in the cloud, offers the experience and compatibility of Windows 11 and Windows 10. This service is distinguished by its multi-session capability, which increases efficiency and reduces costs. With Azure Virtual Desktop integrated into Azure Stack HCI, it is possible to position desktops and apps (session hosts) closer to end-users to reduce latency, and there is also the option for GPU acceleration. The 23H2 version introduces an updated public preview that offers provisioning of host pools directly from the Azure portal, simpler guest operating system activation, and updated Marketplace images with pre-installed Microsoft 365 apps. Microsoft will soon share more information on timings and pricing for general availability.

Advanced security

The increase in applications and infrastructures in edge environments requires organizations to adopt advanced security measures to keep pace with increasingly sophisticated threats from attackers. The 23H2 version of Azure Stack HCI facilitates this process with advanced security settings enabled by default, such as native integration with Microsoft Defender for Cloud and the option to protect virtual machines with Trusted Launch.

Integrated and Default-Enabled Security

The new 23H2 version of Azure Stack HCI presents a significantly strengthened security posture. Building on the foundations of Secured-core Server, over 300 settings in the hypervisor, storage system, and network stack are pre-configured following Microsoft’s recommendations. This covers 100% of the applicable settings in the Azure security baseline, doubling the number of enforced security settings compared to the previous version 22H2. Any drift from these settings is detected and automatically corrected to maintain the desired security posture over time. For enhanced protection against malware and ransomware, application control is enabled by default, using a base policy provided by Microsoft.

Integration with Microsoft Defender for Cloud

In Microsoft Defender for Cloud, in addition to workload protection for Kubernetes clusters and VMs, new integrated security recommendations provide coverage for the Azure Stack HCI infrastructure as part of the Cloud Security Posture Management plan. For example, if the hardware is not set up for Secure Boot, if clustered storage volumes are not encrypted, or if application control is not activated, these issues will be highlighted in the Microsoft Defender for Cloud portal. Furthermore, it is possible to easily view the security status of host clusters, nodes, and workloads in a unified view. This greatly improves the ability to control and correct the security posture efficiently on a large scale, making it suitable for environments ranging from a limited number to hundreds of locations.

Trusted launch for Azure Arc-Enabled Virtual Machines

Trusted launch is a security feature designed to protect virtual machines (VMs) from direct attacks on firmware and bootloaders. Initially available only in Azure’s cloud, it has now been extended to the edge with Azure Stack HCI version 23H2. When creating an Azure Arc-enabled VM, this security option can be selected using the Azure portal, the Command Line Interface (CLI), or an ARM template. Trusted launch provides VMs with a virtual Trusted Platform Module (TPM), useful for the secure storage of keys, certificates, and secrets. Additionally, Secure Boot is enabled by default. VMs using Trusted launch also support automatic failover and live migration, transparently maintaining the state of the vTPM when moving the VM between cluster nodes. This implementation represents a significant step towards introducing confidential computing into edge computing.
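As a sketch of how Trusted launch is expressed declaratively, the fragment below follows the ARM `securityProfile` convention used for Azure virtual machines; the exact schema for Azure Arc-enabled VMs on Azure Stack HCI may differ, so treat this as illustrative only:

```json
{
  "securityProfile": {
    "securityType": "TrustedLaunch",
    "uefiSettings": {
      "secureBootEnabled": true,
      "vTpmEnabled": true
    }
  }
}
```

The two UEFI settings correspond directly to the Secure Boot and virtual TPM capabilities described above.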

Innovations in edge management

Sectors like retail, manufacturing, and healthcare often face the challenge of managing physical operations across multiple locations. In fact, integrating new technologies in places such as stores, factories, or clinics can become a complex and costly process. In this context, an edge infrastructure that can be rapidly deployed and centrally managed becomes a decisive competitive advantage. Tools enhanced with artificial intelligence, capable of scaling to thousands of resources, offer unprecedented operational efficiency.

With the 23H2 version of Azure Stack HCI, fundamental lifecycle operations such as deployment, patching, configuration, and monitoring are entirely managed from the cloud. This significantly reduces the need for on-site tools and personnel, making it easier to manage edge infrastructures.

Cloud-based Deployment

The 23H2 version of Azure Stack HCI simplifies large-scale deployment. At edge sites, once new machines arrive with the operating system pre-installed, local staff can simply connect them and establish the initial connection with Azure Arc. From that point on, the entire infrastructure, including clusters, storage, and network configuration, is deployed from the cloud. This minimizes the time and effort required on-site. Using the Azure portal, it’s possible to create an Azure Stack HCI cluster or scale it with a reusable Azure Resource Manager (ARM) template, with unique parameters for each location. This infrastructure-as-code approach ensures consistent configuration of Azure Stack HCI on a large scale.
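The infrastructure-as-code approach described above can be sketched with one standard ARM parameter file per edge site, each fed to the same reusable deployment template. All parameter names and values below are hypothetical:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName": { "value": "hci-store-042" },
    "location": { "value": "eastus" },
    "nodeCount": { "value": 2 }
  }
}
```

Keeping the template fixed and varying only these per-site parameter files is what makes the configuration consistent and auditable at scale.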

Cloud-based update management

Keeping the system up to date is now simpler. The 23H2 version introduces the new Lifecycle Manager, which organizes all applicable updates into a single monthly package, covering the operating system, agents, services, and even drivers and firmware for participating hardware solutions. Lifecycle Manager ensures that the cluster always runs a combination of software validated by Microsoft and its partners, reducing the risk of problems or incompatibility. Update management for Azure Stack HCI clusters is integrated with Azure Update Manager, providing a unified tool for all machines across the cloud and edge.

Cloud-based monitoring

Azure Monitor provides an integrated and comprehensive view for applications and infrastructure, covering both cloud and on-premises environments. This now includes logs, metrics, and alert coverage for Azure Stack HCI version 23H2. Over 60 standard metrics are available, including CPU and memory usage, storage performance, network bandwidth, and more. Azure Stack HCI health issues, such as a failed disk or a misconfigured network port, are reported as new platform alerts, customizable to trigger notifications or actions. Additionally, Azure Monitor Insights, powered by Data Collection Rules and Workbooks, provides pre-configured views to help administrators monitor specific features, such as storage deduplication and compression.

Useful references

For all the details regarding the 23H2 version of Azure Stack HCI, you can consult the official Microsoft documentation.

Conclusions

Azure Stack HCI represents a milestone in the landscape of IT infrastructures, offering a robust, scalable, and secure solution for organizations navigating today’s complex technological ecosystem. With its approach, Azure Stack HCI effectively adapts to the needs of hybrid infrastructures, enabling seamless integration between on-premises environments and the Azure cloud. Its advanced features, such as optimized workload management, cutting-edge security, and ease of edge system management, not only meet current challenges but also open new possibilities for future innovation. The constant updating of its capabilities, highlighted by the 23H2 version, demonstrates Microsoft’s commitment to keeping pace with the evolving market needs and user expectations. Azure Stack HCI is not just a solution for current needs but a strategic investment to bring cloud innovation into one’s on-premises environment.

Unveiling the future: key insights from Microsoft Ignite on Azure IaaS and Azure Stack HCI

In this article, I take you through the latest technological advancements and updates announced at the recent Microsoft Ignite event. With a focus on Azure Infrastructure as a Service (IaaS) and Azure Stack, my aim is to provide a thorough and insightful overview of the innovative solutions and strategic initiatives unveiled by Microsoft. This pivotal event, renowned for its groundbreaking revelations in the tech sphere, has introduced a range of new features, enhancements, and visionary developments within the Microsoft ecosystem. I invite you to join me in exploring these developments in detail, as I offer my personal insights and analysis on how they are set to shape the future of cloud infrastructure and services.

Azure

General

Microsoft recently unveiled Copilot for Azure, an AI companion designed to enhance the design, operation, optimization, and troubleshooting of applications and infrastructure, from cloud to edge. Leveraging large language models and insights from Azure and Arc-enabled assets, Copilot offers new insights and functionality while prioritizing data security and privacy.

In AI infrastructure updates, Microsoft is optimizing its hardware and software stack, collaborating with industry leaders to offer diverse AI inferencing, training, and compute options. Key developments include:

  • Custom silicon chips, Azure Maia and Azure Cobalt, for AI and enterprise workloads, enhancing performance and cost-effectiveness.
  • Azure Boost, enhancing network and storage performance, is now generally available.
  • ND MI300X v5 virtual machines with AMD Instinct MI300X accelerators, optimized for generative AI workloads.
  • NC H100 v5 virtual machines with NVIDIA GPUs, improving mid-range AI training and inferencing efficiency.

Additionally, Microsoft and Oracle have announced the general availability of Oracle Database@Azure, integrating Oracle database services with Microsoft Azure’s security and services, starting in the US East Azure region in December 2023 and expanding further in early 2024.

Compute

Azure is introducing new AMD-based virtual machines (VMs), now in preview, featuring the 4th Generation AMD EPYC™ Genoa processor. These VMs offer enhanced performance and reliability across various series, each with different memory-to-core ratios catering to general purpose, memory-optimized, and compute-optimized needs.

For SAP HANA workloads, the Azure M-series Mv3 family, powered by 4th-generation Intel® Xeon® Scalable processors and Azure Boost, provides faster insights and improved price-performance. These VMs also offer improved resilience, faster data-load times for SAP HANA OLAP workloads, and higher performance per core for SAP OLTP workloads. Azure Boost further enhances them with improved network and storage performance and security.

Azure also introduces new confidential VMs with Intel processors, featuring Intel® Trust Domain Extensions (TDX) for secure processing of confidential workloads in the cloud. These VMs support a range of new features, including RHEL 9.3 for AMD SEV-SNP confidential VMs, Disk Integrity Tool for disk security, temporary disk encryption for AMD-based VMs, and expanded regional availability. The NCCv5 series confidential VMs, equipped with NVIDIA H100 Tensor Core GPUs, are unique in the cloud sphere. They offer AI developers the ability to deploy GPU-powered applications confidentially, ensuring data encryption in both CPU and GPU memory and providing attestation reports for data privacy.

Also, Azure has introduced two new features in public preview:

  • Azure VMSS Zonal Expansion: this feature allows users to transition their VMs from a regional to a zonal configuration across Azure availability zones, significantly enhancing business continuity and resilience.
  • VM Hibernation: Azure now offers a VM hibernation feature, allowing users to save on compute costs. When a VM is hibernated, its in-memory state is preserved in the OS disk, and the VM is deallocated, incurring charges only for storage and networking resources. Upon reactivation, the VM resumes its applications and processes from the saved state, allowing for quick continuation of work.

These updates reflect Azure’s commitment to offering advanced, secure, and versatile cloud computing options.

Storage

Azure has announced several updates to its storage services to enhance data management, performance, and cloud migration:

  • Azure Ultra Disk Storage: the IOPS and throughput for Azure Ultra Disk Storage have been increased, now supporting up to 400,000 IOPS and 10,000 MB/s per disk. This enhancement allows a single disk to support the largest VMs, reducing the need for multiple disks and enabling shared disk configurations.
  • Azure Storage Mover: this service, now generally available, facilitates the migration of on-premises file shares to Azure file shares and Azure Blob Storage. It includes new support for SMB share migration and a VMware agent image.
  • Azure Native Qumulo Scalable File Service: Azure Native Qumulo (ANQ) V2 offers improved economics and scalability, separating performance from capacity. It simplifies cloud file services, enabling rapid deployment and management through a unified namespace.
  • Amazon S3 Shortcuts: now generally available, these shortcuts integrate data stored in Amazon S3 with OneLake in Microsoft Fabric, enabling a unified virtualized data lake without data duplication.
  • Azure Data Lake Storage Gen2 Shortcuts: these shortcuts, also generally available, enable connection to external data lakes in ADLS Gen2 into OneLake. This allows data reuse without duplication and enhances interoperability with Azure Databricks and Power BI.

Networking

Azure introduces several updates aimed at enhancing network security, flexibility, and performance:

  • Private Subnet: a new feature allowing the disabling of default outbound access for new subnets, enhancing security and aligning with Azure’s “secure by default” model.
  • Customer-controlled maintenance: this public preview feature allows scheduling gateway maintenance during convenient times across various gateway resources.
  • Azure Virtual Network Manager Security Admin Rule: now generally available in select regions, it enforces standardized security policies globally across virtual networks, enhancing security management and reducing operational complexities.
  • ExpressRoute Direct and Circuit in different subscriptions: this general availability feature allows ExpressRoute Direct customers to manage network costs and connect circuits from multiple subscriptions, improving resource management.
  • ExpressRoute as a Trusted Service: customers can now store MACsec secrets in Azure Key Vault with firewall policies in place, restricting public access while enabling trusted service access.
  • ExpressRoute seamless gateway migration: this feature enables a smooth migration from a non-availability-zone gateway SKU to an availability-zone (AZ) enabled gateway SKU, eliminating the need to dismantle existing gateways.
  • Rate Limiting on ExpressRoute Direct Circuits: this public preview feature allows rate-limiting on circuits, optimizing bandwidth usage and improving network performance.
  • ExpressRoute Scalable Gateway: The new ErGwScale Virtual Network Gateway SKU offers up to 40 Gbps connectivity and features auto-scaling based on bandwidth usage, enhancing flexibility and efficiency in network connectivity.

Azure Stack

Azure Stack HCI

Azure Stack HCI version 23H2

At Microsoft Ignite 2023, the company announced the public preview of Azure Stack HCI version 23H2, introducing several advancements. Key features include cloud-based deployment, update management, and monitoring, enhancing the ease and efficiency of managing infrastructure at scale. With version 23H2, deployment from the cloud is now possible, simplifying the setup process and minimizing on-site expertise requirements. The new Lifecycle Manager consolidates updates into a monthly package, streamlining update management and reducing compatibility issues. Azure Stack HCI now offers comprehensive monitoring with Azure Monitor, providing detailed insights into system performance and health.

The update also emphasizes central management of diverse workloads, whether container-based, VM-based, cloud, or edge-run, through Azure Arc and an adaptive cloud approach. Version 23H2 supports a variety of virtual machines and introduces Azure Kubernetes Service for edge-based container management. Additionally, Azure Virtual Desktop for Azure Stack HCI is in preview, offering enhanced virtualized desktops and apps with improved latency and optional GPU acceleration.

Significant attention is given to security with Azure Stack HCI version 23H2. It ensures a secure deployment by default and integrates with Microsoft Defender for Cloud for comprehensive security management. The Trusted launch feature for Azure Arc-enabled virtual machines, previously exclusive to the Azure cloud, is now available at the edge, providing additional protection against firmware and bootloader attacks.

While the 23H2 version is currently available for preview, it is not yet recommended for production use, with general availability (GA) expected in early 2024. Microsoft advises customers to continue using version 22H2 for production environments, with an update path from 22H2 to 23H2 to be detailed later. For more detailed information on Azure Stack HCI version 23H2, readers are encouraged to visit this article.

Conclusion

As we wrap up our exploration of the latest updates from Microsoft Ignite, it’s clear that the advancements in Azure IaaS and Azure Stack are not just incremental; they are transformative. Microsoft’s commitment to innovation and its vision for a more integrated, efficient, and scalable cloud infrastructure is evident in every announcement and feature update. These developments promise to redefine how businesses and developers leverage cloud computing, enhancing agility, security, and sustainability.

The implications of these updates extend beyond mere technical enhancements; they signal a shift towards a future where cloud infrastructure is more accessible, resilient, and adaptive to evolving business needs. As I conclude this article, I am left with a sense of excitement and anticipation for what these changes mean for the industry. The journey of cloud computing is ever-evolving, and with Microsoft’s recent announcements at Ignite, we are witnessing a significant leap forward in that journey.

Thank you for joining me in this deep dive into Microsoft’s latest innovations. I look forward to continuing this discussion and exploring how these advancements will unfold and impact our digital world in the days to come.

The evolution of Azure Stack HCI with Premier Solutions

As businesses worldwide seek more efficient, scalable, and customizable solutions for their IT infrastructure needs, Microsoft unveils the “Premier Solutions for Azure Stack HCI.” This launch gives companies a range of new opportunities, complementing the existing ways of acquiring Azure Stack HCI systems and expanding the possibilities for businesses of all sizes. In this article, we will explore the features of this new offering, how it integrates with existing solutions, and how it might redefine the future of Azure Stack HCI.

Previous Context

To activate the Azure Stack HCI solution, on-premises hardware is required. Until now, companies could rely on:

  • Azure Stack HCI Integrated Systems: Some hardware providers offer systems specifically designed and optimized for Azure Stack HCI, providing an experience reminiscent of a dedicated appliance. These solutions also include unified support, provided in collaboration between the provider and Microsoft.
  • Azure Stack HCI Validated Nodes: This method relies on the use of hardware carefully verified and validated by a specific hardware provider. This strategy allows advanced hardware customization based on customer needs, offering the possibility to select specific details related to the processor, memory, storage, and network card features, always respecting the provider’s compatibility specifications. Several hardware manufacturers offer solutions compatible with Azure Stack HCI, and most Azure Stack HCI configurations are currently made following this approach.

What’s New: Premier Solutions for Azure Stack HCI

“Premier Solutions” represent a new category in the Azure Stack HCI product landscape, created to offer users a better operational experience. These solutions promise faster achievement of tangible results and unprecedented flexibility thanks to “as-a-service” provisioning options. This significant advancement is the result of collaboration with tech giants like Dell Technologies and Lenovo. The essence of this initiative is the fusion of the best available technologies into a deeply integrated, complete infrastructure solution, providing a seamless experience between hardware, software, and cloud services.

Key strengths of the Premier Solutions include:

  • Advanced Integration: An unparalleled combination of hardware, software, and cloud services, allowing companies to reduce time spent on infrastructure management and focus more on innovation.
  • Guaranteed Reliability: Microsoft and its partners are dedicated to continuous testing to ensure maximum reliability and minimal downtime.
  • Simplified Implementation: Comprehensive deployment workflows, making the implementation of Azure Stack HCI clusters a simple and repeatable process.
  • Facilitated Updates: Jointly tested and automated full-stack updates, allowing for continuous, easy updates.
  • Flexible Purchase Models: Various purchase options and additional services to facilitate the start of Azure Stack HCI solutions.
  • Global Availability: the same solution offered everywhere, ensuring consistency worldwide.

Figure 1 – Azure Stack HCI Solution Categories

Visually, we can imagine the Azure Stack HCI solution categories as overlapping layers: at the top, we find the Premier Solutions, ready for immediate use after deployment; followed by the Integrated Systems, targeted configurations with pre-installed software for specific tasks; and finally, the Validated Nodes, boasting the broadest variety of hardware components.

For a detailed comparison between the different categories of Azure Stack HCI solutions, you can refer to this document.

A Case in Point: Dell APEX Cloud Platform for Microsoft Azure

A shining example of this collaboration is the new Dell APEX Cloud Platform for Microsoft Azure. This platform goes beyond the capabilities of the Validated Node and Integrated System categories, offering a turnkey Azure Stack HCI experience.

Born from close collaboration between Dell and Microsoft, its native integration with Azure aims to realize a shared goal: to simplify the customer experience and provide the flexibility needed for modern IT infrastructure.

Dell APEX Cloud Platform for Microsoft Azure is the result of meticulous engineering collaboration between Dell and Microsoft. It offers deep integration and automation between the technological layers of the two companies, accelerating the value achieved by customers and amplifying IT agility and productivity. With a wide range of configuration options and form factors, optimized for both main data center infrastructures and edge deployments, this platform can address a wide range of use scenarios, allowing organizations to drive innovation in every context.

A Look to the Future

In the coming months, Microsoft plans to expand the Premier Solutions portfolio with innovative edge platforms from Lenovo, consolidating its industry leadership and offering solutions increasingly suited to customer challenges. To learn more about the available Azure Stack HCI solutions, you can visit the relevant catalog.

Conclusions

Hybrid solutions represent the future of IT infrastructure, offering flexibility, scalability, and unprecedented integration between on-premises and cloud. The recent introduction of “Premier Solutions for Azure Stack HCI” is clear evidence of this, demonstrating Microsoft’s commitment to the constant evolution of its ecosystem. Collaboration with giants like Dell and Lenovo highlights a strategic synergy aimed at providing companies with cutting-edge, efficient, and optimized solutions. In particular, the Dell APEX Cloud Platform for Microsoft Azure symbolizes the pinnacle of this collaboration, presenting a solution that perfectly meets the modern needs of IT infrastructure management and evolution. As the IT landscape continues to evolve, it’s clear that solutions like Azure Stack HCI will be at the heart of digital transformation, guiding organizations towards a more connected, integrated, and innovative future.

Embracing the future: why Azure Stack HCI is the optimal choice for modernizing On-Premises infrastructure

As the digital landscape evolves, businesses are constantly seeking ways to harness the power of technology to stay competitive and efficient. While cloud computing has emerged as a game-changer, offering unparalleled flexibility and scalability, many enterprises still grapple with the challenge of integrating their on-premises infrastructure with the cloud. Microsoft’s Azure Stack HCI presents a compelling solution to this dilemma, bridging the gap between traditional data centers and the innovative world of the cloud. In this article, we delve into the unique advantages of Azure Stack HCI and why it stands out as the preferred choice for businesses aiming to modernize their IT infrastructure.

Azure Stack HCI is Microsoft’s solution that allows you to create a hyper-converged infrastructure (HCI) for running workloads in an on-premises environment, with a strategic connection to various Azure services. Azure Stack HCI has been specifically designed by Microsoft to help customers modernize their hybrid data center, offering a complete and familiar Azure experience on-premises. If you need more information about the Microsoft Azure Stack HCI solution, I invite you to watch this video.

Figure 1 – Overview of Azure Stack HCI

In my daily interactions with customers, I am often asked why they should choose Azure Stack HCI over other well-known solutions that have been on the market for a long time. In the following paragraphs, I will outline what I believe are the main reasons to opt for Azure Stack HCI.

Modernize your on-premises infrastructure by bringing innovation

Azure Stack HCI is not merely a virtualization environment; it allows you to achieve much more. It is ideal if you want to modernize your infrastructure by adopting a hyper-converged architecture that allows you to:

    • Run virtual machines based on proven technologies that make the environment stable and highly available, especially suitable for workloads that require high performance and scalability.
    • Deploy and manage modern applications based on microservices, alongside virtual machines, in the same cluster environment, using Azure Kubernetes Service (AKS). In addition to running Windows and Linux apps in containers, AKS provides the infrastructure to run selected Azure PaaS services on-premises, thanks to Azure Arc.
    • Activate virtual machines with Windows Server 2022 Datacenter: Azure Edition, which offers specific features not available in the classic Standard and Datacenter editions. To learn more about the features available in this edition, you can consult this article.
    • Create Azure Virtual Desktop session host pools using virtual machines running on-premises. This hybrid scenario is interesting for latency-sensitive applications, such as video editing, or where users need to reach legacy on-premises systems that cannot easily be accessed from the cloud.
    • Extend the features of the on-premises solution by connecting to various Azure services such as Azure Site Recovery, Azure Backup, Azure Monitor, and Defender for Cloud. This ensures constant innovation, given the continuous evolution of cloud services.

Optimize costs

The Azure Stack HCI cost model, detailed in this article, is straightforward. Specifically, for customers with a Software Assurance contract, adopting Azure Stack HCI drastically reduces the cost of modernizing the virtualization environment, making this solution even more cost-competitive compared to alternatives on the market. A recent comparison of the costs of Azure Stack HCI and VMware vSphere + vSAN over a 3-year projection showed that Azure Stack HCI allows savings of up to 40%.
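As a rough illustration of how such a multi-year comparison can be computed, the sketch below totals 36 months of software fees plus hardware and support costs. Every figure in it is an invented placeholder, not real Azure Stack HCI or VMware pricing.

```python
# Rough 3-year TCO comparison sketch. Every figure below is an invented
# placeholder; real pricing depends on region, hardware, and licensing.

def three_year_cost(monthly_software_fee, hardware_capex, yearly_support):
    """Total cost of ownership over 36 months."""
    return monthly_software_fee * 36 + hardware_capex + yearly_support * 3

# Invented example figures for a small 4-node cluster.
hci = three_year_cost(monthly_software_fee=320, hardware_capex=60000, yearly_support=2000)
competitor = three_year_cost(monthly_software_fee=900, hardware_capex=60000, yearly_support=4000)

savings_pct = (competitor - hci) / competitor * 100
print(f"Projected savings: {savings_pct:.0f}%")  # with these made-up numbers: 26%
```

A real assessment would of course plug in quoted prices, licensing entitlements, and operational costs for the specific environment.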

Increase the level of security

Azure Stack HCI offers cross-cutting security across hardware and firmware, integrated into the operating system’s features, capable of helping protect servers from advanced threats. Azure Stack HCI systems can adopt Secured-core security features through an easy configuration experience in Windows Admin Center. Additionally, Azure Stack HCI allows you to obtain important security patches for legacy Microsoft products that have reached the end of support, through the Extended Security Update (ESU) program. Considering that October 10, 2023, marks the end of extended support for Windows Server 2012 and Windows Server 2012 R2, Azure Stack HCI allows more time to embark on an application modernization path without neglecting security.

Maximize existing investments

Azure Stack HCI can integrate with the existing environment and the most popular third-party solutions. Therefore, adopting this solution does not require new investments to introduce or adapt management, identity, security, and protection solutions. Specifically, the administrative management of Azure Stack HCI does not require specific software: existing management tools such as Windows Admin Center, PowerShell, System Center Virtual Machine Manager, and even third-party tools can be used. Furthermore, by adopting Azure Stack HCI and Azure Arc, it is possible to apply cloud management models to the on-premises environment, greatly simplifying the user experience. Azure Stack HCI allows you to fully exploit not only the investments already made in tools, but also the skills of IT staff.

Conclusions

In today’s fast-paced technological era, the choice of IT infrastructure can significantly influence a business’s agility, security, and overall growth. While there are numerous solutions available, Azure Stack HCI emerges as a frontrunner, seamlessly merging the reliability of on-premises systems with the innovation of the cloud. Its unique features, cost-effectiveness, and robust security measures make it an invaluable asset for companies aiming to stay ahead of the curve. By choosing Azure Stack HCI, businesses not only safeguard their current investments but also pave the way for a future-ready, integrated, and efficient IT environment.

Microsoft Defender for Cloud: a summer of innovations to reshape corporate security

In an era where data security and efficient management of cloud resources have become essential priorities, Microsoft Defender for Cloud emerges as a strategic tool for modern businesses. This solution, integrated into the Azure environment, offers advanced protection, facilitating enterprise-wide security and compliance management. This article explores the main innovations introduced in Defender for Cloud during summer 2023, outlining how they can represent added value for companies.

The benefits of adopting Defender for Cloud

Adopting Defender for Cloud in a business context is not just a strategic choice but a growing need. This solution allows you to centralize and simplify security management, offering a holistic view that facilitates continuous monitoring and rapid response to security threats. Furthermore, it helps optimize the security posture of hybrid and multi-cloud environments, while ensuring advanced protection and compliance with different regulatory requirements.

Summer news 2023

Ability to include Defender for Cloud in business cases made with Azure Migrate

For companies intending to move their resources to cloud platforms such as Azure, migration planning is key. With the integration of Defender for Cloud in Azure Migrate, it is now possible to guarantee advanced protection right from the initial migration phase. This integration ensures that security strategies are well integrated into the migration plan, providing a more secure and seamless transition to the cloud.

Defender for Cloud, increasingly agentless

Many Defender for Cloud features are now available without the need to install an agent:

  • Container protection in Defender CSPM: agentless discovery. The transition from agent-based to agentless discovery for protecting containers in Defender CSPM represents a notable qualitative leap towards more streamlined and effective security management. This new feature eliminates the need to install agents on each container, simplifying the discovery process and reducing resource usage.
  • Defender for Containers: agentless discovery for Kubernetes. Defender for Containers has launched agentless discovery for Kubernetes, a notable step forward in container security. This feature provides a detailed view and comprehensive inventory of Kubernetes environments, ensuring a high level of security and compliance.
  • Defender for Servers P2 & Defender CSPM: agentless secret scanning for virtual machines. Agentless secret scanning, included in Defender for Servers P2 and Defender CSPM, allows you to discover unmanaged and vulnerable secrets stored on virtual machines. This capability is essential for preventing lateral movement in the network and reducing the related risks.

Data Aware Security Posture

Adopting a data-aware security posture is essential, and Microsoft Defender for Cloud can now satisfy this need too. This feature allows companies to minimize data risks, providing tools that automatically identify sensitive information and assess the related threats, improving the response to data breaches. In particular, sensitive data discovery for PaaS databases is currently in preview. This allows users to catalog critical data and recognize the types of information within their databases, which is fundamental for the effective management and protection of sensitive data.

GCP support in Defender CSPM

The introduction of support for Google Cloud Platform (GCP) in Defender CSPM, currently in preview, marks a significant step towards more integrated and versatile security management, extending Defender CSPM capabilities to a wide range of services in Google's public cloud.

Malware scanning in Defender for Storage

Defender for Storage introduces malware scanning functionality, overcoming traditional malware protection challenges and providing an ideal solution for highly regulated industries. This function, available as an add-on, represents a significant enhancement of the Microsoft Defender for Storage security solution. Malware scanning provides the following benefits:

  • Agentless protection in near real time: the ability to intercept advanced malware, such as polymorphic and metamorphic variants.
  • Cost optimization: thanks to flexible pricing, you can control costs based on the amount of data examined, with resource-level granularity.
  • Enablement at scale: requires no maintenance, supports automated responses at scale, and offers several activation options via tools and platforms such as Azure Policy, Bicep, ARM templates, Terraform, the REST API, and the Azure portal.
  • Application versatility: based on feedback from beta users over the last two years, malware scanning has proven useful in a variety of scenarios, such as web applications, content protection, compliance, third-party integrations, collaborative platforms, data streams, and datasets for machine learning (ML).

Express Configuration for Vulnerability Assessments in Defender for SQL

The 'express' configuration option for vulnerability assessments in Defender for SQL, now generally available, facilitates the recognition and management of vulnerabilities, ensuring a timely response and more effective protection.

GitHub Advanced Security for Azure DevOps

It is now possible to view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies directly in Defender for Cloud. The results appear in the DevOps section and in Recommendations. To see these results, you need to onboard your GHAzDO-enabled repositories to Defender for Cloud.

New auto-provisioning process for the SQL Server plan (preview)

The Microsoft Monitoring Agent (MMA) will be deprecated in August 2024. Defender for Cloud has updated its strategy accordingly, replacing MMA with an Azure Monitor Agent auto-provisioning process targeted at SQL Server.

Revisiting the business model and pricing structure

Microsoft has revised the business model and pricing structure of Defender for Cloud plans. These changes, aimed at offering greater clarity in expenses and making the cost structure more intuitive, were made in response to customer feedback to improve the overall user experience.

Conclusion

Summer 2023 marked a period of significant innovation for Microsoft Defender for Cloud. These innovations, oriented towards more integrated and simplified security management, promise to bring tangible benefits to companies, facilitating data protection and compliance in increasingly complex cloud environments.

Learn about foolproof strategies to optimize costs on Azure

The peculiarities and undeniable advantages of cloud computing can, in certain situations, hide pitfalls if not handled with due attention. Wise cost management is one of the crucial aspects of cloud governance. This article explores the principles and techniques that can be used to optimize and minimize spending on the resources deployed in the Azure environment.

The issue of optimizing cloud costs is attracting ever greater interest among customers. Indeed, for the seventh year in a row, cost optimization emerges as the leading initiative in the cloud industry, as reported in Flexera's 2023 annual report.

Figure 1 – Initiatives reported in the Flexera report of 2023

Principles to better manage costs

For effective management of the costs associated with Azure, it is essential to adopt the principles outlined in the following paragraphs.

Design

A well-structured design process, including a meticulous analysis of business needs, is essential for tailoring the adoption of cloud solutions. It is therefore crucial to outline the infrastructure to be implemented and how it will be used, through a design plan that aims to optimize the efficiency of the resources allocated in the Azure environment.

Visibility

It is vital to equip yourself with tools that offer a global view and allow you to receive notifications regarding Azure costs, thus facilitating constant and proactive monitoring of expenses.

Responsibility

Assigning cloud resource costs to the respective organizational units within the company is a smart practice. This ensures that managers are fully aware of the expenses attributable to their team, promoting an in-depth understanding of Azure spending at an organizational level. For this purpose, it is advisable to structure Azure resources in such a way as to facilitate the identification and attribution of costs.

Optimization

It is advisable to undertake periodic reviews of Azure resources with the aim of minimizing expenses where possible. Using the available information, you can easily identify underutilized resources, eliminate waste, and capitalize on cost-saving opportunities.

Iteration

It is essential that IT staff are continuously engaged in the iterative processes of optimizing the costs of Azure resources. This represents a key element for responsible and effective management of the cloud environment.

Techniques to optimize costs

Regardless of the specific tools and solutions used, to refine cost management in Azure, you can adhere to the following strategies:

  • Turn off unused resources, given that pricing for the various Azure services is based on actual resource usage. For resources that do not require uninterrupted operation and that can be deactivated or suspended without any loss of configuration or data, it is possible to implement an automation system. Driven by a predefined schedule, this system optimizes usage and, consequently, makes management of the resources more economical.
  • Adequately size resources: consolidating workloads and proactively intervening on underutilized resources avoids waste and guarantees more efficient and targeted use of the available capacity.
  • For resources used continuously in the Azure environment, evaluating Reservations can prove an advantageous strategy. Azure Reservations offer a significant cost reduction, up to 72% compared to pay-as-you-go rates, obtained by committing to pay for the use of Azure resources for a period of one or three years. Payment can be made in advance or on a monthly basis, at no additional cost. Reservations can be purchased directly from the Azure portal and are available to customers with the following subscription types: Enterprise Agreement, Pay-As-You-Go, and Cloud Solution Provider (CSP).
  • To further mitigate costs associated with Azure, it is worth considering the Azure Hybrid Benefit. This benefit allows significant savings, as you pay Microsoft only for the Azure infrastructure, while the licenses for Windows Server or SQL Server are covered by a Software Assurance contract or an existing subscription.

The Azure Hybrid Benefit can also be extended to Azure SQL Database, to SQL Server installed on Azure virtual machines, and to SQL Managed Instances. These benefits facilitate the transition to cloud solutions, offering up to 180 days of dual-use rights, and help leverage pre-existing investments in SQL Server licenses. To learn more about how to use the Azure Hybrid Benefit for SQL Server, please consult the FAQs in this document. Note that this benefit is also applicable to Red Hat and SUSE Linux subscriptions, further expanding the opportunities for savings and cost optimization.

The Azure Hybrid Benefit can be combined with Azure Reserved VM Instances, creating an opportunity for significant savings, up to 80% of the total, especially when you opt for a 3-year Azure Reserved Instance purchase. This synergy not only makes the investment more affordable but also maximizes operational efficiency.

  • Considering the integration of new technologies and the application of architectural optimizations is crucial. This process involves selecting the most appropriate Azure service for the specific needs of the application in question, ensuring not only optimal technological alignment but also more efficient cost management.
  • Allocating and de-allocating resources dynamically is critical to meeting fluctuating performance needs. This approach is known as “autoscaling”: a process that flexibly allocates resources to meet specific performance needs at any time. As the workload intensifies, an application may require additional resources to maintain the desired performance levels and meet its SLAs (Service Level Agreements). Conversely, when demand drops and the additional resources are no longer essential, they can be de-allocated to minimize costs. Autoscaling capitalizes on the elasticity of cloud environments, allowing not only more effective cost management but also a reduced administrative burden, as resources can be managed more smoothly and with less manual intervention.
  • For test and development environments, it is advisable to consider Dev/Test subscriptions, which offer significant discounts on Azure fees. These subscriptions can be activated under an Enterprise Agreement, facilitating more advantageous cost management and more agile, economical experimentation during the development and testing phases.
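The threshold-based autoscaling idea described in the strategies above can be sketched in a few lines. The CPU thresholds, instance limits, and the `autoscale_decision` function are illustrative assumptions, not the actual Azure autoscale engine.

```python
# Minimal threshold-based autoscaling sketch. The thresholds, limits, and
# function name are illustrative assumptions, not the Azure autoscale engine.

def autoscale_decision(cpu_percent, instance_count, min_instances=2,
                       max_instances=10, scale_out_above=75, scale_in_below=25):
    """Return the new instance count for a simple threshold policy."""
    if cpu_percent > scale_out_above and instance_count < max_instances:
        return instance_count + 1   # add capacity to protect performance/SLAs
    if cpu_percent < scale_in_below and instance_count > min_instances:
        return instance_count - 1   # release capacity to minimize costs
    return instance_count           # within the band: no change

print(autoscale_decision(cpu_percent=90, instance_count=3))  # 4: scale out
print(autoscale_decision(cpu_percent=10, instance_count=3))  # 2: scale in
print(autoscale_decision(cpu_percent=50, instance_count=3))  # 3: steady state
```

Real autoscale rules typically also average metrics over a time window and apply cooldown periods to avoid oscillation; this sketch omits both for brevity.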

Conclusions

The adoption of a methodological approach in managing cloud costs, together with the use of appropriate strategies, represents a fundamental pillar for successfully navigating the complex challenge of cloud economic management. Drawing from the principles and techniques outlined in this article, users can not only optimize expenses, but also make the most of their investment in the cloud, ensuring a balance between costs and benefits.

Hotpatching in Windows Server: a revolution in virtual machine management

In the digital age, ensuring business continuity is essential, no longer just an added value. For many companies, frequent interruptions, even of short duration, are unacceptable for their critical workloads. However, ensuring that continuity can be challenging, since managing virtual machines (VMs) running the Windows Server operating system is complex in some respects, especially when it comes to applying security patches and updates. With the advent of Microsoft's hotpatching feature, a new chapter in VM management has opened: a more efficient approach that minimizes disruption, guaranteeing servers that are always up to date and protected. This article looks at the features and benefits of this innovative solution.

What is Hotpatching?

Hotpatching, introduced by Microsoft, is an advanced technique that allows you to update Windows Server operating systems without the need to restart. Imagine being able to “change the tires” on your car while it is moving, without having to stop. This is the “magic” of hotpatching.

Where you can use Hotpatching

Hotpatch functionality is supported on “Windows Server 2022 Datacenter: Azure Edition”, which you can use for VMs running in Azure and in Azure Stack HCI environments.

The Azure images available for this feature are:

  • Windows Server 2022 Datacenter: Azure Edition Hotpatch (Desktop Experience)
  • Windows Server 2022 Datacenter: Azure Edition Core

Note that Hotpatch is enabled by default on Server Core images and that Microsoft recently extended hotpatching support to include Windows Server with Desktop Experience, further expanding the scope of this feature.

Updates supported

Hotpatch covers Windows security updates and stays aligned with the content of the security updates issued in the regular (non-hotpatch) Windows update channel.

There are some important considerations for running a Windows Server Azure Edition VM with hotpatch enabled:

  • reboots are still required to install updates that are not included in the hotpatch program;
  • reboots are also required periodically after a new baseline has been installed;
  • reboots keep the VM in sync with non-security patches included in the latest cumulative update.

Patches not currently included in the hotpatch program include non-security updates released for Windows, .NET updates, and non-Windows updates (such as driver and firmware updates). These types of patches may require a reboot even during hotpatch months.
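The rules above can be summarized in a small decision helper. The category names and the `needs_reboot` function are simplifications invented for illustration; they are not part of any Microsoft API.

```python
# Simplified decision helper for the hotpatch rules described above. The
# category names and the needs_reboot function are invented for illustration;
# they are not a Microsoft API.

HOTPATCH_ELIGIBLE = {"windows-security"}  # in scope for the hotpatch program

def needs_reboot(update_type, is_baseline_month=False):
    """A new baseline, or any out-of-scope update type, requires a restart."""
    if is_baseline_month:
        return True  # installing a new baseline always requires a reboot
    return update_type not in HOTPATCH_ELIGIBLE

print(needs_reboot("windows-security"))                          # False
print(needs_reboot("driver"))                                    # True
print(needs_reboot("windows-security", is_baseline_month=True))  # True
```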

Benefits of Hotpatching

The benefits of this technology are many:

  • Better security: with hotpatching, security patches are applied quickly and efficiently. This reduces the window of vulnerability between the release of a patch and its application, offering fast protection against threats.
  • Minimization of downtime: one of the main benefits of hotpatching is the ability to apply updates without the need to restart the server. This means fewer outages and higher availability for applications and services.
  • More flexible management: system administrators have the freedom to decide when to apply patches, without the worry of having to do careful planning to ensure that running processes are not interrupted while applying updates.

How hotpatching works

During a hotpatching process, the security patch is injected into the operating system's running code in memory, updating the system while it is still running.

Hotpatch works by first establishing a baseline with the current Cumulative Update for Windows Server. Periodically (on a quarterly basis), the baseline is updated with the latest Cumulative Update, after which hotpatches are released for the following two months. For example, if a Cumulative Update baseline is released in January, hotpatches are released in February and March. For the hotpatch release schedule, you can consult the Release Notes for Hotpatch in Azure Automanage for Windows Server 2022.
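The quarterly cadence just described can be modeled in a few lines. Anchoring the planned baselines to January, April, July, and October is an assumption for illustration; the actual months are set by Microsoft's release schedule.

```python
# Sketch of the quarterly cadence: one planned baseline per quarter, then
# hotpatch releases for the following two months. Anchoring the baselines
# to January, April, July, and October is an assumption for illustration.

def release_type(month, baseline_months=(1, 4, 7, 10)):
    """Classify a month (1-12) as a 'baseline' or 'hotpatch' release."""
    return "baseline" if month in baseline_months else "hotpatch"

schedule = {month: release_type(month) for month in range(1, 13)}
print(schedule[1], schedule[2], schedule[3])  # baseline hotpatch hotpatch
```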

Hotpatches contain updates that do not require a restart. Because hotpatching fixes the in-memory code of running processes without restarting them, applications hosted on the operating system are not affected by the patching process. This is separate from any performance or functionality implications of the patch itself.

The following image shows an example of an annual update release schedule (including examples of unplanned baselines due to zero-day corrections).

Figure 1 – Outline of a sample yearly schedule for releasing Hotpatch updates

There are two types of baselines:

  • Planned Baselines: released on a regular basis, with hotpatch releases in between. Planned Baselines include all the updates in a newer Cumulative Update and require a restart.
  • Unplanned Baselines: released when a major update (such as a zero-day fix) cannot be shipped as a hotpatch. When an unplanned baseline is released, it replaces the hotpatch release for that month. Unplanned Baselines also include all the updates in a newer Cumulative Update and require a restart.

The programming shown in the example image illustrates:

  • four baseline releases planned in a calendar year (five total in the diagram) and eight hotpatch releases;
  • two unplanned baselines that would replace the hotpatch releases for those months.

Patch orchestration process

Hotpatch should be considered an extension of Windows Update, and the patch orchestration tools vary depending on the platform in use.

Hotpatch orchestration on Azure

Virtual machines created in Azure are enabled by default for automatic patching when using a supported image of “Windows Server Datacenter: Azure Edition”:

  • patches classified as Critical or Security are automatically downloaded and applied to the VM;
  • patches are applied during off-peak hours, based on the VM's time zone;
  • Azure handles patch orchestration, and patches are applied following availability-first principles;
  • the health of the virtual machine, determined through Azure platform health signals, is monitored to detect patching failures.
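The off-peak scheduling in the list above can be illustrated with a small sketch. The 01:00-05:00 window below is an assumption for the example (the actual window is chosen by the Azure platform); the point is that the check is made in the VM's own time zone.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def in_offpeak_window(vm_timezone, now_utc, start_hour=1, end_hour=5):
    """Return True if 'now_utc' falls inside the off-peak window,
    evaluated in the VM's local time zone."""
    local = now_utc.astimezone(ZoneInfo(vm_timezone))
    return start_hour <= local.hour < end_hour

# 02:00 UTC on a VM whose time zone is UTC falls inside the assumed window.
print(in_offpeak_window("UTC", datetime(2023, 1, 10, 2, tzinfo=timezone.utc)))  # True
```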

Hotpatch orchestration on Azure Stack HCI

Hotpatch updates for VMs running in an Azure Stack HCI environment can be orchestrated using:

  • Group Policy to configure Windows Update client settings;
  • Windows Update client settings or SCONFIG for Server Core;
  • a third-party patch management solution.

Considerations and Limitations

However, like any technology, hotpatching has its nuances. Not all patches are suitable for hotpatching; some may still require a traditional restart. Furthermore, before applying any patches, it remains crucial to test them in a controlled environment to avoid potential problems.

Installing Hotpatch updates does not support automatic rollback. If a VM experiences a problem during or after an update, you need to uninstall the update and install the latest known-good baseline update. After the rollback, you must restart the VM.
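The manual rollback flow just described can be sketched as follows. The `ManagedVM` object is a hypothetical stand-in that only records actions, not a real Azure API; the sketch shows the required ordering: uninstall the failed update, reinstall the latest known-good baseline, then restart.

```python
class ManagedVM:
    """Minimal stand-in that records the actions taken on the VM."""
    def __init__(self):
        self.actions = []
    def uninstall(self, update):
        self.actions.append(("uninstall", update))
    def install(self, update):
        self.actions.append(("install", update))
    def restart(self):
        self.actions.append(("restart", None))

def recover_from_failed_update(vm, failed_update, last_good_baseline):
    vm.uninstall(failed_update)     # remove the problematic update
    vm.install(last_good_baseline)  # return to the latest known-good baseline
    vm.restart()                    # a restart is required after rollback
    return vm.actions

actions = recover_from_failed_update(ManagedVM(), "KB-failed", "KB-baseline")
```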

Conclusion

The introduction of hotpatching by Microsoft represents a significant step forward in the management of VMs running Windows Server operating system. With the ability to apply security patches and updates non-disruptively, administrators can ensure that their servers are protected and updated in a faster and more effective way. In a world where safety is paramount and where every second counts, hotpatching is positioned as a valuable solution for any company that uses Windows Server in an Azure environment or in an Azure Stack HCI environment.

Revolutionize cloud cost management with AI: discover the new Microsoft Cost Management Copilot!

In the digital age, cloud computing has become an essential component for many companies, offering flexibility, scalability, and agility. However, as cloud adoption becomes ever more widespread, managing the associated costs has become an increasingly complex challenge, and companies are looking for innovative solutions to optimize their cloud spending. In this context, Microsoft introduced “Copilot” in Cost Management, a new feature based on artificial intelligence, designed to help businesses navigate this complex landscape. This article presents the main features of this integration, which promises to revolutionize the way businesses manage and optimize their spending on cloud resources.

A clear view of costs with Microsoft Cost Management

Microsoft Cost Management, available directly from the Azure portal, offers a detailed view of operating costs, allowing businesses to better understand how their funds are being spent. This tool provides detailed information about your expenses, highlighting any anomalies and spending patterns. Furthermore, it allows you to set budgets, allocate costs among different teams, and identify opportunities for optimization.

AI at the service of cost management

With the introduction of AI in Microsoft Cost Management, users can now ask questions in natural language to quickly get the information they need. For example, to understand a recent invoice, it is possible to request a detailed breakdown of expenses. The AI will provide an overview of the different spending categories and their impact on the total.

As well as providing an overview of costs, the AI offers suggestions on how to analyze expenses further. Users can compare monthly bills, examine specific expenses, or investigate any anomalies. The AI also provides detailed information on any changes in costs and suggests corrective actions.

The AI integrated into Microsoft Cost Management interprets user intentions and retrieves the necessary data from various sources. This information is then presented to an advanced language model, which generates a response. It is important to note that the retrieved data is not used to train the model, but only to provide the context needed to generate a relevant response.
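The flow just described is a retrieval-plus-context pattern, which can be sketched as follows. This is an illustrative sketch only: the record format and prompt wording are assumptions for the example, not Microsoft's actual implementation. The key point is that retrieved cost data appears only as context in the prompt, never as training data.

```python
def build_prompt(question, cost_records):
    """Assemble a prompt that grounds the model's answer in retrieved data."""
    context_lines = [
        f"- {r['category']}: {r['amount']:.2f} {r['currency']}"
        for r in cost_records  # retrieved on demand, used only as context
    ]
    return ("Answer using only the context below.\n"
            "Context:\n" + "\n".join(context_lines) + "\n"
            "Question: " + question)

prompt = build_prompt(
    "Why did my bill increase this month?",
    [{"category": "Storage", "amount": 120.50, "currency": "EUR"},
     {"category": "Compute", "amount": 310.00, "currency": "EUR"}],
)
print(prompt)
```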

Future perspectives

The capabilities of AI in Microsoft Cost Management are constantly evolving. In the future, users will be able to take advantage of “what-if” simulations and modeling to make informed decisions. For example, they will be able to explore how storage costs will vary as the business grows, or evaluate the impact of moving resources from one region to another.

Figure 1 – Example of simulation and modeling “what-if”

Benefits

The introduction of AI in Microsoft Cost Management delivers the following benefits:

  • Greater visibility and cost control: with greater visibility and understanding of cloud resource costs, organizations can make more informed decisions and better manage their budgets.
  • Operational efficiency: using AI to analyze and interpret data reduces the time and effort needed to gain valuable insights. Furthermore, users can ask specific questions in natural language and receive detailed answers, customized to their needs.

Figure 2 – Examples of questions

  • Optimization: with AI-driven tips and recommendations, organizations can identify and implement optimization opportunities to further reduce costs.

Conclusion

The integration of Copilot into Microsoft Cost Management represents a significant step forward in cloud cost management. With the help of artificial intelligence, businesses now have a powerful tool to optimize their spending and ensure they operate at peak efficiency. With the constant evolution of artificial intelligence, further and interesting innovations are expected in the field of cloud cost management and beyond.

Azure by your side: new solutions for Windows Server 2012/R2 end of support

In the era of Artificial Intelligence and native services for the cloud, organizations continue to rely on Windows Server as a secure and reliable platform for their mission-critical workloads. However, it is important to note that support for Windows Server 2012/R2 will end on 10 October 2023. After that date, Windows Server 2012/R2 systems will become vulnerable if action is not taken, as they will no longer receive regular security updates. Recently, Microsoft has announced that Azure offers new solutions to better manage the end of support of Windows Server 2012/R2. These solutions will be examined in detail in this article, after a brief summary to set the context.

The impact of end of support for Windows Server 2012/R2: what does it mean for companies?

Microsoft has announced the end of support for Windows Server 2012 and 2012 R2, set for 10 October 2023. This event represents a turning point for many organizations that rely on these servers to access applications and data. But what exactly does end of support (EOL) mean, and what are the implications for companies?

Understanding end of support

Microsoft has a lifecycle policy that provides support for its products, including Windows Server 2012 and 2012 R2. End of support refers to the point at which a product is no longer supported by Microsoft, which means no further security updates, patches, or technical support will be provided.

Why companies should care

Without regular updates and patches, companies using Windows Server 2012 and 2012 R2 are exposed to security risks, such as ransomware attacks and data breaches. Furthermore, using an unsupported product such as Windows Server 2012 or 2012 R2 can lead to non-compliance issues. Finally, outdated software can cause compatibility issues with newer applications and hardware, hampering efficiency and productivity.

An opportunity to review IT strategy

Companies should use the EOL event as an opportunity to review their IT strategy and determine the desired business goals for their technology. In this way, they can align the technology with their long-term goals, leveraging the latest cloud solutions and improving operational efficiency.

The strategies that can be adopted to deal with this situation, thus avoiding exposing your IT infrastructure to security issues, have already been addressed in the article: How the End of Support of Windows Server 2012 can be a great opportunity for CTOs.

To address this, Microsoft has introduced two new options, delivered through Azure, to help manage this situation:

  • updating servers with Azure Migrate;
  • deployment of Extended Security Updates (ESU) on Azure Arc-enabled servers.

The following paragraphs describe the characteristics of these new options.

Updating Windows servers at end of support (EOS) with Azure Migrate

Azure Migrate is a service offered by Microsoft Azure that allows you to assess and migrate on-premises resources, such as virtual machines, applications, and databases, to the Azure cloud infrastructure. Recently, Azure Migrate introduced support for in-place upgrades of Windows Server 2012 and later when moving to Azure. This allows organizations to move their legacy applications and databases to a fully supported, compatible, and compliant operating system such as Windows Server 2016, 2019, or 2022.

Key benefits of Azure Migrate's OS update feature

Risk mitigation: Azure Migrate creates a replica of the original server in Azure, allowing the OS to be updated on the replica while the source server remains intact. In case of problems, customers can easily revert to the original operating system.

Compatibility Test: Azure Migrate provides the ability to perform a test migration in an isolated environment in Azure. This is especially useful for OS updates, allowing customers to evaluate the compatibility of their operating system and updated applications without impacting production. This way you can identify and fix any problems in advance.

Reduced effort and downtime: by integrating OS updates with cloud migration, customers can significantly save time and effort. With only one additional input, the target operating system version, Azure Migrate takes care of the rest, simplifying the process. This integration also reduces downtime for the server and the applications hosted on it, increasing efficiency.

No separate Windows licenses: with the Azure Migrate OS update, you do not need to purchase a separate operating system license to upgrade. Whether the customer uses Azure Hybrid Benefit (AHB) or pay-as-you-go (PAYG), the upgrade is covered when migrating to an Azure VM using Azure Migrate.

Large-scale server upgrade: Azure Migrate supports large-scale server OS upgrades, allowing customers to upgrade up to 500 servers in parallel when migrating to Azure. Using the Azure portal, you can select up to 10 VMs at a time to set up replication; to replicate more VMs, you can use the portal and add VMs in multiple batches of 10, or use the Azure Migrate PowerShell interface to configure replication.
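Since the portal accepts at most 10 VMs per batch, replicating a large estate means working through the list in chunks. A minimal sketch of that batching in plain Python (not an Azure SDK call):

```python
def replication_batches(vm_names, batch_size=10, max_parallel=500):
    """Split a VM list into portal-sized batches, capped at the
    maximum number of servers that can be upgraded in parallel."""
    vms = vm_names[:max_parallel]
    return [vms[i:i + batch_size] for i in range(0, len(vms), batch_size)]

batches = replication_batches([f"vm-{n:03d}" for n in range(25)])
print(len(batches))  # 3 batches: 10 + 10 + 5
```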

Supported OS versions

Azure Migrate can handle:

  • Windows Server 2012: supports upgrading to Windows Server 2016;
  • Windows Server 2012 R2: supports upgrading to Windows Server 2016, Windows Server 2019;
  • Windows Server 2016: supports upgrading to Windows Server 2019, Windows Server 2022;
  • Windows Server 2019: supports upgrading to Windows Server 2022.
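The supported paths above can be encoded as a small lookup for a quick pre-flight check before configuring replication. This is an illustrative sketch, not part of the Azure Migrate API.

```python
# Upgrade paths supported by Azure Migrate, as listed above.
UPGRADE_PATHS = {
    "Windows Server 2012":    ["Windows Server 2016"],
    "Windows Server 2012 R2": ["Windows Server 2016", "Windows Server 2019"],
    "Windows Server 2016":    ["Windows Server 2019", "Windows Server 2022"],
    "Windows Server 2019":    ["Windows Server 2022"],
}

def can_upgrade(source, target):
    """Return True if Azure Migrate supports this source-to-target upgrade."""
    return target in UPGRADE_PATHS.get(source, [])

print(can_upgrade("Windows Server 2012 R2", "Windows Server 2019"))  # True
print(can_upgrade("Windows Server 2012", "Windows Server 2022"))     # False
```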

Deployment of ESU-derived updates on Azure Arc-enabled servers

Azure Arc is a set of Microsoft solutions that help businesses manage, govern, and protect assets in various environments, including on-premises, edge, and multi-cloud, extending Azure's management capabilities to any infrastructure.

For organizations unable to modernize or migrate before the Windows Server 2012/R2 end of support date, Microsoft has announced Extended Security Updates (ESU) enabled by Azure Arc. With Azure Arc, organizations can purchase and seamlessly deploy Extended Security Updates (ESU) in on-premises or multi-cloud environments, directly from the Azure portal.

To get Extended Security Updates (ESU) for Windows Server 2012/R2 and SQL Server 2012 enabled by Azure Arc, you need to follow the steps below:

  • Preparing the Azure Arc environment: first of all, you need an Azure environment and a working Azure Arc infrastructure. Azure Arc can be installed on any server running Windows Server 2012/R2 or SQL Server 2012, provided that the connectivity requirements are met.
  • Server registration in Azure Arc: once the Azure Arc environment is set up, you need to register your Windows servers or SQL Server systems in Azure Arc. This process allows systems to become managed resources in Azure, making them eligible for ESUs.
  • Purchase of ESUs: once the servers are registered in Azure Arc, ESUs can be purchased through Azure for each server you want to protect.
  • ESU activation: after the purchase of the ESUs, you need to activate them on the servers. This process involves installing a license key and downloading security updates from Windows Update or your local update distribution infrastructure.
  • Installing updates: finally, once the ESUs are activated, you can install security updates on the servers. This process can be managed manually or automated through update management tools.
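The five steps above are strictly ordered: each one requires the previous ones to be complete. A minimal sketch of that ordering as a simple progress check (the step names mirror the list; the tracking structure is an assumption for the example):

```python
# ESU rollout steps, in the order described above.
ESU_STEPS = ["arc_prepared", "server_registered", "esu_purchased",
             "esu_activated", "updates_installed"]

def next_step(completed):
    """Return the first step not yet completed, or None when done."""
    for step in ESU_STEPS:
        if step not in completed:
            return step
    return None

print(next_step(set()))                                   # arc_prepared
print(next_step({"arc_prepared", "server_registered"}))   # esu_purchased
```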

Note: ESUs provide only critical and important security updates; they do not include new features or performance improvements. Furthermore, ESUs are only available for a limited time after Microsoft's end of support. Therefore, we recommend considering migration to newer server versions to gain access to all features, in addition to security updates.

Conclusions

This year, Microsoft celebrates the 30th anniversary of Windows Server, a milestone achieved thanks to relentless innovation and customer support. However, customers must commit to keeping their Windows Server systems up to date as they approach end of support. In particular, the end of support for Windows Server 2012 and 2012 R2 poses a significant risk to companies, but it also presents an opportunity to review and improve their IT strategy. By identifying the desired business goals, engaging in strategic planning and, if necessary, using these new solutions offered by Azure, companies can ensure a smooth and successful transition, optimizing their IT infrastructure to achieve their long-term goals.