Category Archives: Cloud & Datacenter Management

Leveraging Azure infrastructure in Italy: opportunities and strategies for businesses

In the digital era, the evolution of cloud computing represents a crucial turning point for businesses, radically changing the way data, applications, and IT infrastructures are managed. At the heart of this revolution stands Microsoft Azure, a leading cloud platform that offers an extensive assortment of services and solutions designed to increase the efficiency, security, and resilience of workloads. Azure’s availability in Italy gives businesses of all sizes an opportunity to optimize their IT resources and expand their digital footprint. This article explores the potential of Azure’s infrastructure in Italy, highlighting how companies can benefit from its services and the practical advantages and concrete opportunities that follow.

Azure: a flexible ecosystem for innovation

Azure’s philosophy is clear: simplify IT management without sacrificing reliability and efficiency. Microsoft has structured Azure as a versatile platform that reduces costs and complexity for customers. This flexibility shows in Azure’s ability to integrate with existing environments, whether in hybrid clouds built on VMware and Nutanix or in the implementation of IaaS and PaaS services for business applications. Microsoft’s approach puts the customer at the center, offering customized solutions adaptable to every specific need.

Figure 1 – Microsoft Azure: The Infrastructure Designed for Various Workloads

The launch of Azure in Italy: a customer-oriented process

Azure’s service rollout follows a well-defined path. Initially, in the pre-launch phase, Microsoft establishes the ‘Azure Foundational Services’, which include the essential basic infrastructure: computing, storage, and networking. These components constitute the fundamental core of a cloud environment. With the official start of a region, Microsoft enters a new phase, introducing the ‘Azure Mainstream Services’. These services are expanded and adapted to directly meet customer needs, marking a crucial step in tailoring Microsoft’s cloud offering to the specificities of the local market. As the region matures, Microsoft launches the ‘Azure Strategic Services’, designed to meet more complex requirements and cover advanced usage scenarios. At this stage, the focus is on close collaboration with customers to optimize workloads and performance, reflecting a continuous commitment to listening and responding to customer needs. Currently, the Azure Italy North region is in a phase of dynamic development, marked by a constant commitment to evolving its services. Microsoft aims to grow in step with its customers, not only meeting current needs but also driving future innovation.

Figure 2 – Phases of Implementing Azure Services in a New Region

Reliability and resilience: a shared commitment and a framework for excellence

Reliability in the cloud is a goal shared by providers and users alike. Microsoft commits to providing a resilient foundation for the cloud, but it is up to companies to build robust systems on that foundation. Through Azure, customers can implement resilience solutions, including high availability and disaster recovery, that integrate with their infrastructures to ensure operational continuity and security. The “Azure Well-Architected Framework” plays a crucial role in an effective cloud strategy: it guides companies through fundamental practices such as design, testing, and monitoring, emphasizing the need for conscious design, rigorous testing, and constant control. In this way, companies can ensure they operate in a reliable cloud environment.

Resilience and availability: Multi-Region vs. Single-Region

It is important to clarify a fundamental aspect. Historically, Microsoft Azure has adopted a multi-region design approach to ensure high cloud availability. By implementing multi-region architectures, Azure has allowed customers to distribute workloads across different regions, creating an effective failover architecture in case of interruptions. For example, European businesses have been able to distribute their workloads between West Europe and North Europe. In case of problems in one region, the other can automatically intervene, reducing the risk of downtime and ensuring operational continuity.

Figure 3 – Multi-Region Model

The ‘Data Residency Boundary’ ensures that, despite geographic distribution, data remains confined within a designated area, in compliance with local regulations. Azure thus meets not only the technical requirements for availability but also the legal and compliance requirements of global customers.
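
To make the multi-region pattern concrete, here is a minimal sketch, using the Azure SDK for Python, of provisioning the same baseline resource group in West Europe and North Europe so that a workload can be replicated across the pair. The subscription ID and naming convention are placeholders, and the actual workload resources would be deployed on top of these groups.

```python
# Minimal sketch: provisioning the same baseline resource group in two paired
# regions (West Europe and North Europe) so that workloads can be replicated
# and fail over between them. Requires the azure-identity and
# azure-mgmt-resource packages; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

for region in ("westeurope", "northeurope"):
    rg = client.resource_groups.create_or_update(
        f"rg-workload-{region}",  # hypothetical naming convention
        {"location": region},
    )
    print(f"Provisioned {rg.name} in {rg.location}")
```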

However, with Microsoft’s global expansion, the design of cloud infrastructures has evolved. The Azure Italy North region is an example: it adopts a single-region approach with three availability zones and no paired region, while still ensuring excellent service availability and resilience.

Figure 4 – Single-Region Model

The Italy North region, created after careful risk analysis, ensures optimal security and performance. With inter-zone latencies below 2 milliseconds, it supports synchronous replication of applications and data, maintaining operational continuity and data integrity. Each availability zone has independent data centers with autonomous resources, highlighting Microsoft’s commitment to high operability and service reliability.
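
A simple way to see the single-region, zone-redundant model in practice is the sketch below (Azure SDK for Python), which creates a zone-redundant (ZRS) storage account in Italy North so that data is replicated synchronously across the three availability zones. The resource group and account names are placeholders, and a storage account is only one example of a zone-redundant service.

```python
# Minimal sketch: creating a zone-redundant (ZRS) storage account in Italy North,
# so that data is replicated synchronously across the region's availability zones.
# Requires the azure-identity and azure-mgmt-storage packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<subscription-id>"  # placeholder
storage = StorageManagementClient(DefaultAzureCredential(), subscription_id)

poller = storage.storage_accounts.begin_create(
    "rg-italynorth",     # hypothetical resource group (must already exist)
    "stworkloaditn001",  # hypothetical, globally unique account name
    {
        "location": "italynorth",
        "kind": "StorageV2",
        "sku": {"name": "Standard_ZRS"},  # zone-redundant replication
    },
)
account = poller.result()
print(account.primary_endpoints.blob)
```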

Respecting the ‘Data Residency Boundary’ is crucial in Europe, where data protection regulations are stringent. The Italy North region is a model of adaptation to these needs, ensuring that data remains within regional borders and compliant with local laws.

The Azure Italy North Region: an opportunity for Italian businesses

For Italian companies considering the cloud to expand or migrate their IT infrastructure, the Azure Italy North region emerges as a promising choice. For businesses with operational headquarters exclusively in Italy, adopting this region offers tangible benefits, such as reduced latency and high performance, critical aspects for those operating predominantly at the national level. This choice also aligns with EU and Italian data residency regulations.

For customers currently using Azure services in other regions, such as West Europe, the transition to Italy North requires a more in-depth analysis. Key elements to consider include the impact on existing IT infrastructures, operational costs, and application performance. It is also crucial to evaluate latency in interactions between services located in different regions.

Another relevant factor is the need to serve users in geographically distant areas. In such cases, it might be more effective to maintain some services in a region closer to end-users or consider solutions that involve the distribution of Azure services across multiple regions.

The decision on the most suitable Azure region depends on the specific needs of the company and the geographical distribution of its users. The advantage of the cloud lies in its flexibility and ability to adapt to various scenarios. Therefore, for Italian companies, both in the initial phase of cloud adoption and in the expansion phase, the Azure Italy North region represents an option to be carefully considered.
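
As a small practical aid when evaluating regions, the sketch below (Azure SDK for Python) lists the Azure locations available to a subscription, which makes it easy to confirm that Italy North (“italynorth”) is selectable before planning a deployment or migration. The subscription ID is a placeholder.

```python
# Minimal sketch: listing the Azure regions available to a subscription,
# useful to verify that Italy North ("italynorth") is an option.
# Requires the azure-identity and azure-mgmt-resource packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient

subscription_id = "<subscription-id>"  # placeholder
subscriptions = SubscriptionClient(DefaultAzureCredential())

for location in subscriptions.subscriptions.list_locations(subscription_id):
    print(location.name, "-", location.display_name)
```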

Why choose Azure in northern Italy

The main reasons for adopting the Azure region located in Italy include:

  • Data residency in Italy: opting for a data center located in Italy means that data is physically stored within the country, in compliance with Italian data residency regulations. This choice not only minimizes risks related to data sovereignty but also ensures compliance with national and European Union regulations.
  • EU data boundary and confidential computing: companies can be sure of operating within the EU data boundary and can take advantage of advanced confidential computing capabilities, which provide additional levels of protection for sensitive data.
  • Optimized application performance: whether it’s Internet of Things (IoT) applications, Virtual Desktop solutions, or hybrid infrastructures, the Italy North region has been designed to support intensive usage scenarios and to ensure the performance these advanced technologies require.
  • Energy cost savings and reduced environmental impact: the Italy North region stands out for its energy efficiency, with a Power Usage Effectiveness (PUE) index of 1.12 and a Water Usage Effectiveness (WUE) of 0.023 l/kWh. These figures reflect Microsoft’s commitment to sustainability and energy efficiency.

Conclusions

The presence of Microsoft Azure in Italy represents a valuable resource for local businesses. Azure stands out for its versatility and adaptability, offering an ecosystem that integrates easily with various operational contexts and can meet specific business needs while ensuring efficiency and reliability. The availability of this cloud platform in Italy allows companies to benefit from greater scalability, thanks to reduced latency, high performance, and full adherence to local regulations. A determining factor is compliance and sovereign data management: the Azure Italy North region strictly respects European data protection laws, ensuring that data remains within regional borders and compliant with current regulations. The region is also distinguished by its energy efficiency and lower environmental impact, resulting in advantageous energy consumption. With Azure, Italian companies have the opportunity to embark on a path of digital innovation, availing themselves of customized solutions, strong security, and regulatory compliance.

Microsoft Cloud for Sovereignty: the solution to meet sovereignty requirements in cloud and hybrid environments

Microsoft has recently announced the availability of Microsoft Cloud for Sovereignty across all Azure regions. This solution offers reliable options for the public sector, designed to support the migration, development, and transformation of workloads in Microsoft’s cloud while complying with regulatory, security, and control requirements. In this article, we delve into the distinctive features of Microsoft Cloud for Sovereignty, exploring how it can ensure rapid digital transformation for government entities in compliance with regulations.

Sovereignty in the Hyperscale Cloud

Governments worldwide must meet a wide range of national and regional compliance requirements for applications and workloads, including governance, security controls, privacy, and in some cases, data residency and sovereign protections. Until now, most solutions to meet these regulatory requirements relied on private clouds and on-premises environments, slowing the adoption of scalable, secure, and resilient cloud solutions.

What is Data Sovereignty and Microsoft’s Stance on ‘Sovereignty’?

Data sovereignty is the concept that data is under the customer’s control and governed by local laws. While data residency ensures data remains in a specific geographic location, data sovereignty ensures adherence to the regulations of the country where the public sector customer is located. Each jurisdiction has its own requirements, vision, and unique needs when it comes to addressing sovereignty. Although Microsoft believes many of these needs are met through standard cloud solutions, it has introduced Microsoft Cloud for Sovereignty to provide an additional layer of capabilities that address the individual needs of public sector and government clients. It is then up to partners and clients to determine what is appropriate for their specific needs. For the most sensitive workloads that cannot be hosted in the public cloud, Microsoft offers hybrid options, such as Azure Stack HCI, allowing customers to keep data in their own on-premises environments.

The following paragraphs outline the most common requests for achieving data sovereignty in the cloud.

Residency, Security, and Compliance in the Hyperscale Cloud

Microsoft Cloud for Sovereignty is rooted in over 60 global Azure cloud regions, ensuring unmatched security and a wide range of regulatory compliance. This positions Microsoft as the cloud provider with the most regions worldwide, and this infrastructure allows customers to implement specific policies to ensure their data and applications remain within their preferred geographic boundary, fully respecting national or regional data residency requirements.

Controls for Data Access

Microsoft Cloud for Sovereignty provides controls to ensure sovereignty, protection, and encryption of sensitive data and to control access, enabled by:

  • Sovereign Landing Zone: A specific Azure landing zone designed for entities requiring privacy, security, and sovereign controls in compliance with governmental regulations. These zones provide a repeatable and secure approach for developing and deploying cloud services. Governments facing complex, multilevel regulatory contexts find in Sovereign Landing Zones an effective way to design, implement, and manage solutions while adhering to established policies. They allow Azure resources to be deployed and configured in alignment with the best practices of the Cloud Adoption Framework (CAF), helping organizations meet data sovereignty requirements. For more information on Sovereign Landing Zones (SLZ) and their features, it is recommended to consult the documentation on GitHub.
  • Azure Confidential Computing: A technology developed by Microsoft to enhance the security of data while it is processed in the cloud. Traditionally, data can be protected at rest (when stored) or in transit (during transmission), but it becomes vulnerable while in use, that is, while being processed on a server. Confidential Computing bridges this gap by protecting data even during execution. This is achieved through a technology called “Trusted Execution Environment” (TEE), essentially a secure area of the processor. TEEs isolate the data and code being executed from other processes, including those of the operating system, so that only authorized code can access the data. This means that even if an attacker manages to penetrate the operating system or the network, they would not be able to access the protected data within the TEE. Azure Confidential Computing is particularly useful for use cases requiring a high level of data security, such as financial transactions, healthcare information management, or handling sensitive data for businesses or governments. A minimal sketch of creating a confidential VM follows this list.
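
The sketch below illustrates the confidential computing option described above: creating a VM whose security profile enables the Confidential VM security type, Secure Boot, and a virtual TPM, using the Azure SDK for Python. The resource group, network interface, image, size, and credentials are illustrative assumptions rather than a prescribed configuration, and this is not the Sovereign Landing Zone deployment itself.

```python
# Minimal sketch: creating a confidential VM whose security profile enables the
# "ConfidentialVM" security type, Secure Boot, and a virtual TPM.
# Requires azure-identity and azure-mgmt-compute; all names/IDs are placeholders,
# and the VM size and image are examples of confidential-computing-capable choices.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"       # placeholder
nic_id = "<existing-network-interface-id>"  # placeholder: NIC created beforehand

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

poller = compute.virtual_machines.begin_create_or_update(
    "rg-sovereign",        # hypothetical resource group
    "vm-confidential-01",  # hypothetical VM name
    {
        "location": "italynorth",
        "hardware_profile": {"vm_size": "Standard_DC2as_v5"},  # AMD SEV-SNP size (example)
        "storage_profile": {
            "image_reference": {  # example confidential-VM-capable image; replace as needed
                "publisher": "Canonical",
                "offer": "0001-com-ubuntu-confidential-vm-jammy",
                "sku": "22_04-lts-cvm",
                "version": "latest",
            },
            "os_disk": {
                "create_option": "FromImage",
                "managed_disk": {
                    "security_profile": {"security_encryption_type": "VMGuestStateOnly"},
                },
            },
        },
        "os_profile": {
            "computer_name": "vmconf01",
            "admin_username": "azureuser",
            "admin_password": "<strong-password>",  # placeholder
        },
        "network_profile": {"network_interfaces": [{"id": nic_id}]},
        "security_profile": {
            "security_type": "ConfidentialVM",
            "uefi_settings": {"secure_boot_enabled": True, "v_tpm_enabled": True},
        },
    },
)
print(poller.result().provisioning_state)
```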

The Complexity of Addressing Regulations that Vary from Country to Country

Digital sovereignty is a complex issue, varying significantly from one nation to another. To address this challenge, Microsoft has adopted a collaborative and customized approach with its Microsoft Cloud for Sovereignty. By working closely with local partners in different countries, Microsoft is able to tailor its cloud solutions to the specific needs of each client, maximizing efficiency and ensuring secure implementations.

In this context, Microsoft offers its clients the ability to adopt specific sovereignty-related policies through Azure, simplifying the process of complying with national and regional regulations. These initiatives (sets of policies) help clients establish cloud security parameters, facilitating compliance with regulations.

A concrete example is the adoption of the Azure Cloud Security Benchmark. Clients can start with this benchmark and then add the new Sovereignty Policy Baseline to strengthen digital sovereignty practices. Additionally, they can integrate region-specific layers, such as the cloud migration guidelines from the Italian National Cybersecurity Agency (ACN) for clients in Italy.
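
As an example of how such an initiative can be applied in practice, the sketch below (Azure SDK for Python) assigns a policy initiative at subscription scope. The policy set definition ID, assignment name, and display name are placeholders to be replaced with the initiative actually adopted (for instance, the security benchmark or the Sovereignty Policy Baseline).

```python
# Minimal sketch: assigning a policy initiative (policy set definition) at
# subscription scope so that its controls are evaluated against all resources.
# Requires the azure-identity and azure-mgmt-resource packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "<subscription-id>"                  # placeholder
initiative_id = "<policy-set-definition-resource-id>"  # placeholder: ID of the chosen initiative

policy = PolicyClient(DefaultAzureCredential(), subscription_id)

assignment = policy.policy_assignments.create(
    scope=f"/subscriptions/{subscription_id}",
    policy_assignment_name="sovereignty-baseline",  # hypothetical assignment name
    parameters={
        "policy_definition_id": initiative_id,
        "display_name": "Sovereignty baseline assignment",  # hypothetical display name
    },
)
print(assignment.id)
```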

Furthermore, the new Cloud Security Alliance Cloud Controls Matrix (CSA CCM v4) policy initiative offers a global benchmark that informs and guides many regional standards, further consolidating Microsoft’s commitment to secure, compliant, and sovereign cloud solutions.

How Does Microsoft Ensure Data Remains in a Specific Country and Support the Sovereignty Needs of Governments Without Azure Regions in Their Territory?

Microsoft provides detailed information about data residency in the Microsoft Cloud through its documentation and the Microsoft Service Trust Portal. Additional measures to maximize data residency have been announced as part of the EU Data Boundary. Governments worldwide have different preferences regarding sovereignty and data residency. For some clients, data residency in their own country is not a prerequisite for sovereignty. Moreover, the sovereignty controls that Microsoft provides can be used anywhere, even in the absence of an Azure region in the customer’s own country.

Microsoft Cloud for Sovereignty for Italian Clients

A significant step towards digital sovereignty in Italy is represented by the introduction of the new Azure Italy North region. This region opens new possibilities for public and private clients, offering them access to Sovereign Landing Zones. Additionally, Azure Italy North stands out for adopting cutting-edge technologies like Azure Confidential Computing. With the addition of Azure Italy North, Microsoft demonstrates its commitment to supporting the specific needs of Italian clients, providing advanced technological solutions that meet the challenges of digital sovereignty and data security.

Capabilities of Microsoft Cloud for Sovereignty

The capabilities of Microsoft Cloud for Sovereignty extend across several levels:

Figure 1 – The Various Layers that Compose Microsoft Cloud for Sovereignty

New Capabilities for Sovereignty

The following new solutions highlight Microsoft’s ongoing investment in improving sovereignty in the hyperscale cloud:

  • Drift Analysis Capability: Continuous administration and maintenance can potentially introduce changes that are not compliant with established policies, causing the deployment to deviate from compliance over time. The new drift analysis tool inspects the deployment and generates a list of non-compliant settings, along with a severity assessment, facilitating the identification of discrepancies to be remedied and the verification of compliance in specific environments.
  • Transparency Logs: Provide eligible customers with visibility into instances where Microsoft engineers have accessed customer resources through Just-In-Time (JIT) access, most commonly in response to a customer support request.
  • New Configuration Tools in the Azure Portal: Allow customers to create a new custom Sovereign Landing Zone in two simple steps using a guided experience.

Conclusions

In conclusion, Microsoft Cloud for Sovereignty represents a significant turning point in data management and digital sovereignty in the cloud and hybrid environments. With its ability to meet complex compliance requirements and ensure data security, this solution stands as a fundamental pillar for the public and governmental sector. The availability across all Azure regions, coupled with innovative Azure Confidential Computing and Sovereign Landing Zones, offers customers unprecedented flexibility to keep data within national or regional boundaries, respecting local regulations. Microsoft’s personalized and collaborative approach in responding to the specific needs of each country demonstrates a clear commitment to digital sovereignty, offering secure, scalable, and reliable solutions. Particularly for Italian clients, the opening of the Azure Italy North region is a significant step forward, highlighting Microsoft’s investment in supporting local needs and strengthening data security. Overall, Microsoft Cloud for Sovereignty emerges as an important innovation in the cloud computing landscape, advancing the mission of a safer, compliant, and sovereign digital future.

Microsoft Copilot for Azure: how Artificial Intelligence is transforming Azure infrastructure design and management

In an era marked by relentless technological evolution, artificial intelligence (AI) is emerging as a revolutionary force in the cloud computing landscape. At the heart of this transformation is Microsoft, which has recently unveiled Microsoft Copilot for Azure. This innovative solution marks the beginning of a new era in the design, management, and optimization of Azure infrastructure and services. This article provides an overview of Microsoft Copilot for Azure, a true ally for businesses, designed to fully exploit the potential of the cloud through advanced features and AI-guided intuitiveness.

Premise: Copilot’s experience in Microsoft’s Cloud

Microsoft Copilot is a cutting-edge solution in the field of AI-based assistants. It stands out for its use of sophisticated large language models (LLMs) and its seamless integration with Microsoft’s Cloud. This tool aims to enhance productivity by facilitating access to critical data while ensuring high standards of security and privacy. Its core is an intuitive conversational interface that simplifies interaction with data and automation, making application creation simpler and more intuitive.

Copilot adapts to different needs: from basic usage that requires minimal effort and customization, to highly customized solutions that require substantial investment in development and data integration.

Figure 1 – Copilot’s Experience in Microsoft’s Cloud

The main ways to take advantage of Microsoft Copilot are:

  • Adopting Copilot: Microsoft offers various Copilot assistants to increase productivity and creativity. Integrated into various Microsoft products and platforms, Copilot transforms the digital workspace into a more interactive and efficient environment. Among these, Copilot for Azure stands out, which will be examined in detail in this article.
  • Extending Copilot: Developers have the opportunity to incorporate external data, simplifying user operations and reducing the need to switch contexts. This not only improves productivity but also fosters greater collaboration. Through Copilot, it’s easy to integrate this data into the Microsoft products used daily. For example, both companies and ISVs can develop plugins to bring their own APIs and business data directly into Copilot. By adding these plugins, connectors, or message extensions, users can maximize the AI capabilities offered by Copilot.
  • Building your own Copilot: Beyond adoption and extension, it’s possible to create a customized Copilot for a unique conversational experience, using Azure OpenAI, Cognitive Search, Microsoft Copilot Studio, and other Microsoft Cloud technologies. A customized Copilot can integrate business data, access external data in real-time via APIs, and integrate into business applications.

Microsoft Copilot for Azure: the assistant revolutionizing the design, management, and optimization of Azure infrastructure and services via AI

Microsoft Copilot for Azure is an innovative AI-based tool designed to maximize the potential of Azure. Using LLMs (Large Language Models), Azure’s control plane, and detailed analysis of the Azure environment, Copilot makes work more effective and productive.

This assistant helps users navigate Azure’s numerous offerings, which include hundreds of services and thousands of resource types. It combines data and context to increase productivity, minimize costs, and provide targeted insights. Its ability to interpret natural language greatly simplifies managing Azure, answering questions and providing personalized information about the user’s Azure environment.

Available directly through the Azure portal, Microsoft Copilot for Azure facilitates user interaction, responding to questions, generating queries, and performing tasks. Moreover, Copilot for Azure provides personalized, high-quality recommendations, respecting the organization’s policies and privacy.

The following paragraphs describe the main scenarios in which Microsoft Copilot for Azure can be used.

Performing tasks with improved efficiency

Copilot for Azure is designed to handle a wide range of basic operations that make up the daily routine of managing Azure environments. These operations, essential for the maintenance and efficiency of architectures in Azure, can often be repetitive and time-consuming. With Copilot, they can be handled more efficiently, saving valuable time and reducing the likelihood of human error.

Interpreting and assessing the Azure environment:

  • Obtain information about resources through Azure Resource Graph queries (see the sketch after this list).
  • Understand events and the health status of services.
  • Analyze, estimate, and optimize costs.
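
To illustrate the first item in this list, the sketch below runs the kind of Azure Resource Graph query that Copilot for Azure can generate from a natural-language question, in this case counting virtual machines per region. It uses the azure-mgmt-resourcegraph package; the subscription ID is a placeholder and the query itself is only an example.

```python
# Minimal sketch: an Azure Resource Graph query of the kind Copilot for Azure
# can generate, counting virtual machines per region.
# Requires the azure-identity and azure-mgmt-resourcegraph packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

subscription_id = "<subscription-id>"  # placeholder

graph = ResourceGraphClient(DefaultAzureCredential())
request = QueryRequest(
    subscriptions=[subscription_id],
    query=(
        "Resources"
        " | where type =~ 'microsoft.compute/virtualmachines'"
        " | summarize count() by location"
    ),
)
for row in graph.resources(request).data:
    print(row)
```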

Working smarter with Azure services:

  • Deploy virtual machines effectively.
  • Build infrastructures and deploy workloads.
  • Obtain information about Azure Monitor metrics and logs.
  • Work more productively using Azure Stack HCI.
  • Secure and protect storage accounts.

Writing and optimizing code:

  • Generate Azure CLI scripts.
  • Discover performance recommendations.
  • Create API Management policies.
  • Generate YAML files for Kubernetes.
  • Resolve app issues more quickly with App Service.

Obtaining specific and detailed information and advice

Within the Azure portal, Copilot emerges as a useful tool for delving into a wide range of Azure concepts, services, or offerings. Its ability to provide answers is based on constantly updated documentation, ensuring users get up-to-date advice and valuable help in solving problems. This not only improves efficiency but also ensures that decisions are based on the most recent and relevant information.

Navigating the portal with greater ease

Navigating the Azure portal, often perceived as complex due to the vastness of services offered, is made simple and intuitive with Copilot’s assistance. Instead of manually searching among the numerous services, users can simply ask Copilot to guide them. Copilot not only responds by opening the requested service but also offers suggestions on service names and provides detailed explanations, making the navigation process smoother.

Simplified management of portal settings

Another notable aspect is Copilot’s ability to simplify the management of Azure portal settings. Users can now confirm or change settings directly through Copilot, without the need to access the control panel. For example, it’s possible to select and customize Azure themes directly through Copilot, making interaction with the portal not only more efficient but also more personalized.

Limitations as of December 2023

As of December 2023, Microsoft Copilot for Azure is in preview and has the following limitations:

  • Each user has a limit of ten questions per conversation and a maximum of five conversations per day.
  • Responses that include lists are limited to the first five items.
  • For some requests and queries, using the name of a resource may not be sufficient; it may be necessary to provide the Azure resource ID.
  • Available only in English.

Conclusions

Microsoft Copilot for Azure represents a revolutionary turn in cloud computing, leveraging artificial intelligence to significantly transform the management and optimization of Azure architectures. This tool elevates productivity and security, simplifying daily operations, providing detailed analysis, and assisting users in managing the Azure environment. Although we are still at the dawn of this technology, Copilot for Azure represents a significant advancement. This tool not only provides an intuitive and efficient user experience but also lays the groundwork for a future where artificial intelligence and cloud computing will be increasingly interconnected and synergistic.

Azure Stack HCI: the continuously evolving Hyper-Converged solution – December 2023 Edition

In today’s rapidly evolving technological landscape, the need for flexible and scalable IT infrastructures has never been more pressing. Azure Stack HCI emerges as a response to this need, offering a hyper-converged infrastructure (HCI) solution that enables workloads to run in on-premises environments while maintaining a strategic connection with various Azure services. Azure Stack HCI is not just a hyper-converged solution; it is also a strategic component of the Azure services ecosystem, designed to integrate with and amplify the capabilities of existing IT infrastructure.

As part of Azure’s hybrid offering, Azure Stack HCI is constantly evolving, adapting to the changing needs of the market and user expectations. The recent wave of innovations announced by Microsoft testifies to the company’s commitment not only to maintaining but also improving its position as a leader in the HCI solutions sector. These new features, which will be explored in detail in this article, promise to open new paths for the adoption of Azure Stack HCI, significantly improving the management of hybrid infrastructures and offering new opportunities to optimize the on-premises environment.

The lifecycle of updates and upgrades of Azure Stack HCI

A fundamental aspect of Azure Stack HCI is its predictable and manageable upgrade and update experience. Microsoft’s strategy for Azure Stack HCI updates is designed to ensure both security and continuous innovation of the solution. Here’s how it works:

  • Monthly quality and security updates: Microsoft regularly releases monthly updates focused on quality and security. These updates are essential to maintain the integrity and reliability of the Azure Stack HCI environment.
  • Annual feature updates: in addition to monthly updates, an annual feature update is released. These annual updates aim to improve and enrich the capabilities of Azure Stack HCI with new features and optimizations.
  • Timing for installing updates: to keep the Azure Stack HCI service in a supported state, users have up to six months to install updates. However, it is recommended to install updates as soon as they are released to ensure maximum efficiency and security of the system.
  • Support from Microsoft’s Hardware Partners: Microsoft’s hardware solution partners support Azure Stack HCI’s “Integrated Systems” and “Validated Nodes” with hardware support services, security updates, and assistance, for at least five years.

In addition to these established practices, during Microsoft Ignite 2023, a significant new development was announced: the public preview of Azure Stack HCI version 23H2. This latest version represents an important step in the evolution of Azure Stack HCI. The final version of this updated solution will be released in early 2024, slightly behind the planned release cycle. This delay is attributable to significant changes made to the solution, aimed at further improving the capabilities and performance of Azure Stack HCI. Initially, Azure Stack HCI version 23H2 will be available exclusively for new installations. Over the course of the year, it is expected that most users currently on Azure Stack HCI version 22H2 will have the opportunity to upgrade their clusters to the new version 23H2.

Figure 1 – Azure Stack HCI update release cycles

Activation and management of different workloads

Modern organizations often find themselves managing a wide range of applications: some based on containers, others on virtual machines (VMs), some running in the cloud, others in edge environments. Thanks to Azure Arc and an adaptive approach to the cloud, it’s possible to use common tools and implement uniform operational practices for all workloads, regardless of where they are executed. The 23H2 version of Azure Stack HCI provides all the necessary Azure Arc infrastructure, automatically configured as part of the cluster deployment, including the Arc Resource Bridge and other management agents and components. This means that, from the start, it’s possible to begin deploying Arc-enabled virtual machines, Azure Kubernetes Service clusters, and Azure Virtual Desktop session hosts.

Virtual Machines

The 23H2 version of Azure Stack HCI offers the ability to activate general-purpose VMs with flexible sizing and configuration options to meet the needs of different applications. Users can bring their own custom Linux or Windows images or conveniently use those available in the Azure Marketplace. When a new virtual machine (VM) is created using the Azure portal, the Command Line Interface (CLI), or an ARM template, it is automatically equipped with the Connected Machine Agent. This includes the integration of extensions like Microsoft Defender, Azure Monitor, and Custom Script, thus ensuring uniform and integrated management of all machines, both in the cloud and at the edge.

Azure Kubernetes Service

The 23H2 version of Azure Stack HCI offers the Azure Kubernetes Service, a managed Kubernetes solution that operates in a local environment. The Azure Kubernetes Service is automatically configured as part of the Azure Stack HCI deployment and includes everything needed to start deploying container-based workloads. The Azure Kubernetes Service runs its control plane in the same Arc Resource Bridge as the general-purpose VMs and uses the same storage paths and logical networks. Each new Kubernetes cluster deployed via the Azure portal, CLI, or an ARM template is automatically configured with Azure Arc Kubernetes agents inside to enable extensions such as Microsoft Defender, Azure Monitor, and GitOps for application deployment and CI/CD.

Azure Virtual Desktop for Azure Stack HCI (Preview)

The 23H2 version of Azure Stack HCI has been optimized to support the deployment of virtualized desktops and applications. Azure Virtual Desktop, a Microsoft-managed desktop virtualization service with centralized control in the cloud, offers the experience and compatibility of Windows 11 and Windows 10. This service is distinguished by its multi-session capability, which increases efficiency and reduces costs. With Azure Virtual Desktop integrated into Azure Stack HCI, it is possible to position desktops and apps (session hosts) closer to end-users to reduce latency, and there is also the option for GPU acceleration. The 23H2 version introduces an updated public preview that offers provisioning of host pools directly from the Azure portal, simpler guest operating system activation, and updated Marketplace images with pre-installed Microsoft 365 apps. Microsoft will soon share more information on timings and pricing for general availability.

Advanced security

The increase in applications and infrastructures in edge environments requires organizations to adopt advanced security measures to keep pace with increasingly sophisticated threats from attackers. The 23H2 version of Azure Stack HCI facilitates this process with advanced security settings enabled by default, such as native integration with Microsoft Defender for Cloud and the option to protect virtual machines with Trusted Launch.

Integrated and Default-Enabled Security

The new 23H2 version of Azure Stack HCI presents a significantly strengthened security posture. Leveraging the foundations of Secured Core Server, over 300 settings in the hypervisor, storage system, and network stack are pre-configured following Microsoft’s recommendations. This covers 100% of the applicable settings in the Azure security baseline, doubling the security measures compared to the previous version 22H2. Any deviations from the settings are detected and automatically corrected to maintain the desired security posture over time. For enhanced protection against malware and ransomware, application control is activated by default, using a base policy provided by Microsoft.

Integration with Microsoft Defender for Cloud

In Microsoft Defender for Cloud, in addition to workload protection for Kubernetes clusters and VMs, new integrated security recommendations provide coverage for the Azure Stack HCI infrastructure as part of the Cloud Security Posture Management plan. For example, if the hardware is not set up for Secure Boot, if clustered storage volumes are not encrypted, or if application control is not activated, these issues will be highlighted in the Microsoft Defender for Cloud portal. Furthermore, it is possible to easily view the security status of host clusters, nodes, and workloads in a unified view. This greatly improves the ability to control and correct the security posture efficiently on a large scale, making it suitable for environments ranging from a limited number to hundreds of locations.

Trusted launch for Azure Arc-Enabled Virtual Machines

Trusted launch is a security feature designed to protect virtual machines (VMs) from direct attacks on firmware and bootloaders. Initially available only in Azure’s cloud, it has now been extended to the edge with Azure Stack HCI version 23H2. When creating an Azure Arc-enabled VM, this security option can be selected using the Azure portal, the Command Line Interface (CLI), or an ARM template. Trusted launch provides VMs with a virtual Trusted Platform Module (TPM), useful for the secure storage of keys, certificates, and secrets. Additionally, Secure Boot is enabled by default. VMs using Trusted launch also support automatic failover and live migration, transparently maintaining the state of the vTPM when moving the VM between cluster nodes. This implementation represents a significant step towards introducing confidential computing into edge computing.

Innovations in edge management

Sectors like retail, manufacturing, and healthcare often face the challenge of managing physical operations across multiple locations. In fact, integrating new technologies in places such as stores, factories, or clinics can become a complex and costly process. In this context, an edge infrastructure that can be rapidly deployed and centrally managed becomes a decisive competitive advantage. Tools enhanced with artificial intelligence, capable of scaling to thousands of resources, offer unprecedented operational efficiency.

With the 23H2 version of Azure Stack HCI, fundamental lifecycle operations such as deployment, patching, configuration, and monitoring are entirely managed from the cloud. This significantly reduces the need for on-site tools and personnel, making it easier to manage edge infrastructures.

Cloud-based Deployment

The 23H2 version of Azure Stack HCI simplifies large-scale deployment. At edge sites, once new machines arrive with the operating system pre-installed, local staff can simply connect them and establish the initial connection with Azure Arc. From that point on, the entire infrastructure, including clusters, storage, and network configuration, is deployed from the cloud. This minimizes the time and effort required on-site. Using the Azure portal, it’s possible to create an Azure Stack HCI cluster or scale it with a reusable Azure Resource Manager (ARM) template, with unique parameters for each location. This infrastructure-as-code approach ensures consistent configuration of Azure Stack HCI on a large scale.
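
The sketch below illustrates this infrastructure-as-code flow with the Azure SDK for Python: the same ARM template is deployed once per site with site-specific parameters. The template file, site names, resource groups, and the “siteName” parameter are hypothetical placeholders rather than the actual Azure Stack HCI deployment template.

```python
# Minimal sketch: deploying the same ARM template with per-site parameters,
# one deployment per edge location. Requires azure-identity and azure-mgmt-resource.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

with open("azurestackhci-cluster.json") as f:  # hypothetical ARM template file
    template = json.load(f)

sites = ["milan-store-01", "turin-store-02"]  # hypothetical edge sites

for site in sites:
    poller = client.deployments.begin_create_or_update(
        f"rg-{site}",          # hypothetical, pre-existing resource group per site
        f"hci-deploy-{site}",  # deployment name
        {
            "properties": {
                "mode": "Incremental",
                "template": template,
                "parameters": {"siteName": {"value": site}},  # hypothetical template parameter
            }
        },
    )
    print(site, poller.result().properties.provisioning_state)
```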

Cloud-based update management

Keeping the system up to date is now simpler. The 23H2 version introduces the new Lifecycle Manager, which organizes all applicable updates into a single monthly package, covering the operating system, agents, services, and even drivers and firmware for participating hardware solutions. Lifecycle Manager ensures that the cluster always runs a combination of software validated by Microsoft and its partners, reducing the risk of problems or incompatibility. Update management for Azure Stack HCI clusters is integrated with Azure Update Manager, providing a unified tool for all machines across the cloud and edge.

Cloud-based monitoring

Azure Monitor provides an integrated and comprehensive view for applications and infrastructure, covering both cloud and on-premises environments. This now includes logs, metrics, and alert coverage for Azure Stack HCI version 23H2. Over 60 standard metrics are available, including CPU and memory usage, storage performance, network bandwidth, and more. Azure Stack HCI health issues, such as a failed disk or a misconfigured network port, are reported as new platform alerts, customizable to trigger notifications or actions. Additionally, Azure Monitor Insights, powered by Data Collection Rules and Workbooks, provides pre-configured views to help administrators monitor specific features, such as storage deduplication and compression.
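
As a small example of consuming these metrics programmatically, the sketch below queries a platform metric for a resource through Azure Monitor using the azure-monitor-query library. The resource ID and metric name are placeholders to be replaced with an Azure Stack HCI cluster resource and one of the metrics it actually exposes.

```python
# Minimal sketch: reading a platform metric for a resource via Azure Monitor.
# Requires the azure-identity and azure-monitor-query packages; the resource ID
# and metric name are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

resource_id = "<azure-stack-hci-cluster-resource-id>"  # placeholder

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    resource_id,
    metric_names=["<metric-name>"],  # placeholder, e.g. a CPU or memory usage metric
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
)
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```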

Useful references

For all the details regarding the 23H2 version of Azure Stack HCI, you can consult the official Microsoft documentation.

Conclusions

Azure Stack HCI represents a milestone in the landscape of IT infrastructures, offering a robust, scalable, and secure solution for organizations navigating today’s complex technological ecosystem. With its approach, Azure Stack HCI effectively adapts to the needs of hybrid infrastructures, enabling seamless integration between on-premises environments and the Azure cloud. Its advanced features, such as optimized workload management, cutting-edge security, and ease of edge system management, not only meet current challenges but also open new possibilities for future innovation. The constant updating of its capabilities, highlighted by the 23H2 version, demonstrates Microsoft’s commitment to keeping pace with the evolving market needs and user expectations. Azure Stack HCI is not just a solution for current needs but a strategic investment to bring cloud innovation into one’s on-premises environment.

Unveiling the future: key insights from Microsoft Ignite 2023 on Azure IaaS and Azure Stack HCI

In this article, I take you through the latest technological advancements and updates announced at the recent Microsoft Ignite event. With a focus on Azure Infrastructure as a Service (IaaS) and Azure Stack, my aim is to provide a thorough and insightful overview of the innovative solutions and strategic initiatives unveiled by Microsoft. This pivotal event, renowned for its groundbreaking revelations in the tech sphere, has introduced a range of new features, enhancements, and visionary developments within the Microsoft ecosystem. I invite you to join me in exploring these developments in detail, as I offer my personal insights and analysis on how they are set to shape the future of cloud infrastructure and services.

Azure

General

Microsoft recently unveiled Copilot for Azure, an AI companion designed to enhance the design, operation, optimization, and troubleshooting of applications and infrastructure, from cloud to edge. Leveraging large language models and insights from Azure and Arc-enabled assets, Copilot offers new insights and functionality while prioritizing data security and privacy.

In AI infrastructure updates, Microsoft is optimizing its hardware and software stack, collaborating with industry leaders to offer diverse AI inferencing, training, and compute options. Key developments include:

  • Custom silicon chips, Azure Maia and Azure Cobalt, for AI and enterprise workloads, enhancing performance and cost-effectiveness.
  • Azure Boost, enhancing network and storage performance, is now generally available.
  • ND MI300 v5 virtual machines with AMD chips, optimized for generative AI workloads.
  • NC H100 v5 virtual machines with NVIDIA GPUs, improving mid-range AI training and inferencing efficiency.

Additionally, Microsoft and Oracle have announced the general availability of Oracle Database@Azure, integrating Oracle database services with Microsoft Azure’s security and services, starting in the US East Azure region in December 2023 and expanding further in early 2024.

Compute

Azure is introducing new AMD-based virtual machines (VMs), now in preview, featuring the 4th Generation AMD EPYC™ Genoa processor. These VMs offer enhanced performance and reliability across various series, each with different memory-to-core ratios catering to general purpose, memory-optimized, and compute-optimized needs.

For SAP HANA workloads, the Azure M-series Mv3 family, powered by 4th-generation Intel® Xeon® Scalable processors and Azure Boost, provides faster insights and improved price-performance. They also offer improved resilience, faster data load times for SAP HANA OLAP workloads, and higher performance per core for SAP OLTP workloads. Azure Boost enhances these VMs with improved network and storage performance and security.

Azure also introduces new confidential VMs with Intel processors, featuring Intel® Trust Domain Extensions (TDX) for secure processing of confidential workloads in the cloud. These VMs support a range of new features, including RHEL 9.3 for AMD SEV-SNP confidential VMs, Disk Integrity Tool for disk security, temporary disk encryption for AMD-based VMs, and expanded regional availability. The NCCv5 series confidential VMs, equipped with NVIDIA H100 Tensor Core GPUs, are unique in the cloud sphere. They offer AI developers the ability to deploy GPU-powered applications confidentially, ensuring data encryption in both CPU and GPU memory and providing attestation reports for data privacy.

Also, Azure has introduced two new features in public preview:

  • Azure VMSS Zonal Expansion: this feature allows users to transition their VMs from a regional to a zonal configuration across Azure availability zones, significantly enhancing business continuity and resilience.
  • VM Hibernation: Azure now offers a VM hibernation feature, allowing users to save on compute costs. When a VM is hibernated, its in-memory state is preserved in the OS disk, and the VM is deallocated, incurring charges only for storage and networking resources. Upon reactivation, the VM resumes its applications and processes from the saved state, allowing for quick continuation of work.

These updates reflect Azure’s commitment to offering advanced, secure, and versatile cloud computing options.

Storage

Azure has announced several updates to its storage services to enhance data management, performance, and cloud migration:

  • Azure Ultra Disk Storage: the IOPS and throughput limits for Azure Ultra Disk Storage have been increased, now supporting up to 400,000 IOPS and 10,000 MB/s per disk. This enhancement allows a single disk to support the largest VMs, reducing the need for multiple disks and enabling shared disk configurations (a minimal provisioning sketch follows this list).
  • Azure Storage Mover: this service, now generally available, facilitates the migration of on-premises file shares to Azure file shares and Azure Blob Storage. It includes new support for SMB share migration and a VMware agent image.
  • Azure Native Qumulo Scalable File Service: the ANQ V2 offers improved economics and scalability, separating performance from capacity. It simplifies cloud file services, enabling rapid deployment and management through a unified namespace.
  • Amazon S3 Shortcuts: now generally available, these shortcuts allow the integration of data in Amazon S3 with OneLake, enabling a unified virtualized data lake without data duplication.
  • Azure Data Lake Storage Gen2 Shortcuts: these shortcuts, also generally available, enable connection to external data lakes in ADLS Gen2 into OneLake. This allows data reuse without duplication and enhances interoperability with Azure Databricks and Power BI.
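
For the Ultra Disk item above, here is a minimal provisioning sketch with the Azure SDK for Python. The resource group, disk name, region, zone, and size are placeholders; once created, the disk’s IOPS and throughput can be tuned up to the new per-disk limits.

```python
# Minimal sketch: creating a zonal Ultra Disk. IOPS and throughput are
# configurable on Ultra Disks up to the new limits (400,000 IOPS, 10,000 MB/s).
# Requires the azure-identity and azure-mgmt-compute packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

poller = compute.disks.begin_create_or_update(
    "rg-storage",          # hypothetical resource group
    "ultra-data-disk-01",  # hypothetical disk name
    {
        "location": "westeurope",  # placeholder region
        "zones": ["1"],            # Ultra Disks are zonal resources
        "sku": {"name": "UltraSSD_LRS"},
        "creation_data": {"create_option": "Empty"},
        "disk_size_gb": 1024,
    },
)
print(poller.result().provisioning_state)
```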

Networking

Azure introduces several updates aimed at enhancing network security, flexibility, and performance:

  • Private Subnet: a new feature allowing the disabling of default outbound access for new subnets, enhancing security and aligning with Azure’s “secure by default” model.
  • Customer-controlled maintenance: this public preview feature allows scheduling gateway maintenance during convenient times across various gateway resources.
  • Azure Virtual Network Manager Security Admin Rule: now generally available in select regions, it enforces standardized security policies globally across virtual networks, enhancing security management and reducing operational complexities.
  • ExpressRoute Direct and Circuit in different subscriptions: this general availability feature allows ExpressRoute Direct customers to manage network costs and connect circuits from multiple subscriptions, improving resource management.
  • ExpressRoute as a Trusted Service: now customers can store MACsec secrets in Azure Key Vault with Firewall Policies, restricting public access while enabling trusted service access.
  • ExpressRoute seamless gateway migration: this feature enables a smooth migration from a non-availability zone to an Availability-zone (AZ) enabled Gateway SKU, eliminating the need to dismantle existing gateways.
  • Rate Limiting on ExpressRoute Direct Circuits: this public preview feature allows rate-limiting on circuits, optimizing bandwidth usage and improving network performance.
  • ExpressRoute Scalable Gateway: The new ErGwScale Virtual Network Gateway SKU offers up to 40 Gbps connectivity and features auto-scaling based on bandwidth usage, enhancing flexibility and efficiency in network connectivity.

Azure Stack

Azure Stack HCI

Azure Stack HCI version 23H2

At Microsoft Ignite 2023, the company announced the public preview of Azure Stack HCI version 23H2, introducing several advancements. Key features include cloud-based deployment, update management, and monitoring, enhancing the ease and efficiency of managing infrastructure at scale. With version 23H2, deployment from the cloud is now possible, simplifying the setup process and minimizing on-site expertise requirements. The new Lifecycle Manager consolidates updates into a monthly package, streamlining update management and reducing compatibility issues. Azure Stack HCI now offers comprehensive monitoring with Azure Monitor, providing detailed insights into system performance and health.

The update also emphasizes central management of diverse workloads, whether container-based, VM-based, cloud, or edge-run, through Azure Arc and an adaptive cloud approach. Version 23H2 supports a variety of virtual machines and introduces Azure Kubernetes Service for edge-based container management. Additionally, Azure Virtual Desktop for Azure Stack HCI is in preview, offering enhanced virtualized desktops and apps with improved latency and optional GPU acceleration.

Significant attention is given to security with Azure Stack HCI version 23H2. It ensures a secure deployment by default and integrates with Microsoft Defender for Cloud for comprehensive security management. The Trusted launch feature for Azure Arc-enabled virtual machines, previously exclusive to the Azure cloud, is now available at the edge, providing additional protection against firmware and bootloader attacks.

While the 23H2 version is currently available for preview, it is not yet recommended for production use, with general availability (GA) expected in early 2024. Microsoft advises customers to continue using version 22H2 for production environments, with an update path from 22H2 to 23H2 to be detailed later. For more detailed information on Azure Stack HCI version 23H2, readers are encouraged to visit this article.

Conclusion

As we wrap up our exploration of the latest updates from Microsoft Ignite, it’s clear that the advancements in Azure IaaS and Azure Stack are not just incremental; they are transformative. Microsoft’s commitment to innovation and its vision for a more integrated, efficient, and scalable cloud infrastructure is evident in every announcement and feature update. These developments promise to redefine how businesses and developers leverage cloud computing, enhancing agility, security, and sustainability.

The implications of these updates extend beyond mere technical enhancements; they signal a shift towards a future where cloud infrastructure is more accessible, resilient, and adaptive to evolving business needs. As I conclude this article, I am left with a sense of excitement and anticipation for what these changes mean for the industry. The journey of cloud computing is ever-evolving, and with Microsoft’s recent announcements at Ignite, we are witnessing a significant leap forward in that journey.

Thank you for joining me in this deep dive into Microsoft’s latest innovations. I look forward to continuing this discussion and exploring how these advancements will unfold and impact our digital world in the days to come.

The evolution of Azure Stack HCI with Premier Solutions

As businesses worldwide seek more efficient, scalable, and customizable solutions for their IT infrastructure needs, Microsoft unveils the “Premier Solutions for Azure Stack HCI.” This launch provides companies with a range of new opportunities, seamlessly integrating with existing solutions to achieve Azure Stack HCI systems and enhancing possibilities for businesses of all sizes. In this article, we will explore the features of this new offering, how it integrates with existing solutions, and how it might redefine the future of Azure Stack HCI.

Previous Context

To activate the Azure Stack HCI solution, on-premises hardware is required. Until now, companies could rely on:

  • Azure Stack HCI Integrated Systems: Some hardware providers offer systems specifically designed and optimized for Azure Stack HCI, providing an experience reminiscent of a dedicated appliance. These solutions also include unified support, provided in collaboration between the provider and Microsoft.
  • Azure Stack HCI Validated Nodes: This method relies on the use of hardware carefully verified and validated by a specific hardware provider. This strategy allows advanced hardware customization based on customer needs, offering the possibility to select specific details related to the processor, memory, storage, and network card features, always respecting the provider’s compatibility specifications. Several hardware manufacturers offer solutions compatible with Azure Stack HCI, and most Azure Stack HCI configurations are currently made following this approach.

What’s New: Premier Solutions for Azure Stack HCI

“Premier Solutions” represent a new category in the Azure Stack HCI product landscape, created to offer users a better operational experience. These solutions promise faster achievement of tangible results and unprecedented flexibility thanks to “as-a-service” provisioning options. This significant advancement is the result of collaboration with tech giants like Dell Technologies and Lenovo. The essence of this initiative is the fusion of the best available technologies into a deeply integrated, complete infrastructure solution, providing a seamless experience across hardware, software, and cloud services.

Key strengths of the Premier Solutions include:

  • Advanced Integration: An unparalleled combination of hardware, software, and cloud services, allowing companies to reduce time spent on infrastructure management and focus more on innovation.
  • Guaranteed Reliability: Microsoft and its partners are dedicated to continuous testing to ensure maximum reliability and minimal downtime.
  • Simplified Implementation: Comprehensive deployment workflows, making the implementation of Azure Stack HCI clusters a simple and repeatable process.
  • Facilitated Updates: Jointly tested and automated full-stack updates, allowing for continuous, easy updates.
  • Flexible Purchase Models: Various purchase options and additional services to facilitate the start of Azure Stack HCI solutions.
  • Global Availability: A consistent solution available everywhere, ensuring consistency worldwide.

Figure 1 – Azure Stack HCI Solution Categories

Visually, we can imagine the Azure Stack HCI solution categories as overlapping layers: at the top, we find the Premier Solutions, ready for immediate use after deployment; followed by the Integrated Systems, targeted configurations with pre-installed software for specific tasks; and finally, the Validated Nodes, boasting the broadest variety of hardware components.

For a detailed comparison between the different categories of Azure Stack HCI solutions, you can refer to this document.

A Case in Point: Dell APEX Cloud Platform for Microsoft Azure

A shining example of this collaboration is the new Dell APEX Cloud Platform for Microsoft Azure. This platform goes beyond the capabilities of the Validated Node and Integrated System categories, offering a turnkey Azure Stack HCI experience.

Born from close collaboration between Dell and Microsoft, its native integration with Azure aims to realize a shared goal: to simplify the customer experience and provide the flexibility needed for modern IT infrastructure.

Dell APEX Cloud Platform for Microsoft Azure is the result of meticulous engineering collaboration between Dell and Microsoft. It offers deep integration and automation between the technological layers of the two companies, accelerating the value achieved by customers and amplifying IT agility and productivity. With a wide range of configuration options and form factors, optimized for both main data center infrastructures and edge deployments, this platform can address a wide range of use scenarios, allowing organizations to drive innovation in every context.

A Look to the Future

In the coming months, Microsoft plans to expand the Premier Solutions portfolio with innovative edge platforms from Lenovo, consolidating its industry leadership and offering solutions increasingly suited to customer challenges. To learn more about the available Azure Stack HCI solutions, you can visit the relevant catalog.

Conclusions

Hybrid solutions represent the future of IT infrastructure, offering flexibility, scalability, and unprecedented integration between on-premises environments and the cloud. The recent introduction of “Premier Solutions for Azure Stack HCI” is clear evidence of this, demonstrating Microsoft’s commitment to the constant evolution of its ecosystem. Collaboration with giants like Dell and Lenovo highlights a strategic synergy aimed at providing companies with cutting-edge, efficient, and optimized solutions. In particular, the Dell APEX Cloud Platform for Microsoft Azure symbolizes the pinnacle of this collaboration, presenting a solution that perfectly meets the modern needs of IT infrastructure management and evolution. As the IT landscape continues to evolve, it’s clear that solutions like Azure Stack HCI will be at the heart of digital transformation, guiding organizations towards a more connected, integrated, and innovative future.

Embracing the future: why Azure Stack HCI is the optimal choice for modernizing On-Premises infrastructure

As the digital landscape evolves, businesses are constantly seeking ways to harness the power of technology to stay competitive and efficient. While cloud computing has emerged as a game-changer, offering unparalleled flexibility and scalability, many enterprises still grapple with the challenge of integrating their on-premises infrastructure with the cloud. Microsoft’s Azure Stack HCI presents a compelling solution to this dilemma, bridging the gap between traditional data centers and the innovative world of the cloud. In this article, we delve into the unique advantages of Azure Stack HCI and why it stands out as the preferred choice for businesses aiming to modernize their IT infrastructure.

Azure Stack HCI is Microsoft’s solution that allows you to create a hyper-converged infrastructure (HCI) for running workloads in an on-premises environment, with a strategic connection to various Azure services. Azure Stack HCI has been specifically designed by Microsoft to help customers modernize their hybrid data center, offering a complete and familiar Azure experience on-premises. If you need more information about the Microsoft Azure Stack HCI solution, I invite you to watch this video.

Figure 1 – Overview of Azure Stack HCI

In my daily interactions with customers, I am often asked why they should choose Azure Stack HCI over other well-known solutions that have been on the market for a long time. In the following paragraphs, I will outline what I believe are the main reasons to opt for Azure Stack HCI.

Modernize your on-premises infrastructure by bringing innovation

Azure Stack HCI is not just another virtualization environment: it allows you to achieve much more. It is ideal if you want to modernize your infrastructure by adopting a hyper-converged architecture that allows you to:

    • Activate virtual machines based on proven technologies that make the environment stable and highly available, especially suitable for workloads that require high performance and scalability.
    • Deploy and manage modern applications based on microservices, alongside virtual machines, in the same cluster environment, using Azure Kubernetes Service (AKS). In addition to running Windows and Linux apps in containers, AKS provides the infrastructure to run selected Azure PaaS services on-premises, thanks to Azure Arc.
    • Activate virtual machines with Windows Server 2022 Datacenter: Azure Edition, which offers specific features not available in the classic Standard and Datacenter editions. To learn more about the features available in this edition, you can consult this article.
    • Create Azure Virtual Desktop session host pools using virtual machines running on-premises. This hybrid scenario is interesting where applications are latency-sensitive, such as video editing, or where users need to access a legacy on-premises system that cannot easily be accessed remotely.
    • Extend the features of the on-premises solution by connecting to various Azure services such as Azure Site Recovery, Azure Backup, Azure Monitor, and Defender for Cloud. This aspect ensures constant innovation, given the continuous evolution of cloud services.

Optimize costs

The Azure Stack HCI cost model, detailed in this article, is straightforward. Specifically, for customers with a Software Assurance agreement, adopting Azure Stack HCI results in a drastic reduction in the cost of modernizing the virtualization environment, making the solution highly cost-competitive in the market. A recent comparison of costs between Azure Stack HCI and VMware vSphere + vSAN over a 3-year projection showed that Azure Stack HCI can deliver savings of up to 40%.

Increase the level of security

Azure Stack HCI offers security across hardware and firmware, integrated with the operating system’s features, helping protect servers from advanced threats. Azure Stack HCI systems can adopt Secured-core security features, all through an easy configuration experience in Windows Admin Center. Additionally, Azure Stack HCI allows you to obtain important security patches for legacy Microsoft products that have reached the end of support, through the Extended Security Update (ESU) program. Considering that October 10, 2023, marks the end of extended support for Windows Server 2012 and Windows Server 2012 R2, Azure Stack HCI provides more time to embark on an application modernization path without neglecting security.

Maximize existing investments

Azure Stack HCI can integrate with the existing environment and with the most popular third-party solutions. Adopting this solution therefore does not require new investments to introduce or adapt management, identity, security, and protection solutions. Specifically, the administrative management of Azure Stack HCI does not require specific software: existing management tools such as Windows Admin Center, PowerShell, System Center Virtual Machine Manager, and even third-party tools can be used. Furthermore, by adopting Azure Stack HCI and Azure Arc, it is possible to apply cloud management models to the on-premises environment, greatly simplifying the user experience. Azure Stack HCI therefore allows you to fully exploit not only the investments already made in tooling, but also the skills of IT staff.
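
As a simple illustration of applying cloud management models to on-premises resources, the following Python sketch queries Azure Resource Graph for Arc-enabled machines and Azure Stack HCI clusters projected into Azure. It assumes the azure-identity and azure-mgmt-resourcegraph packages and a subscription ID supplied by the reader; the resource type names come from the public resource providers, and the default object-array result shape is an assumption to verify against your SDK version.

```python
# Minimal sketch: inventory Azure Arc-enabled machines and Azure Stack HCI
# clusters with Azure Resource Graph. Assumes azure-identity and
# azure-mgmt-resourcegraph are installed and you are already logged in.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

credential = DefaultAzureCredential()
client = ResourceGraphClient(credential)

# Kusto query over the Resources table: Arc-enabled servers and HCI clusters.
query = """
Resources
| where type in~ ('microsoft.hybridcompute/machines', 'microsoft.azurestackhci/clusters')
| project name, type, resourceGroup, location
"""

result = client.resources(QueryRequest(subscriptions=[SUBSCRIPTION_ID], query=query))

# Assumption: with recent API versions the result is returned as a list of dicts.
for row in result.data:
    print(row.get("name"), row.get("type"), row.get("location"))
```

The same query can also be run interactively from Resource Graph Explorer in the Azure portal.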

Conclusions

In today’s fast-paced technological era, the choice of IT infrastructure can significantly influence a business’s agility, security, and overall growth. While there are numerous solutions available, Azure Stack HCI emerges as a frontrunner, seamlessly merging the reliability of on-premises systems with the innovation of the cloud. Its unique features, cost-effectiveness, and robust security measures make it an invaluable asset for companies aiming to stay ahead of the curve. By choosing Azure Stack HCI, businesses not only safeguard their current investments but also pave the way for a future-ready, integrated, and efficient IT environment.

Microsoft Defender for Cloud: a summer of innovations reshaping enterprise security

In an era in which data security and efficient management of cloud resources have become essential priorities, Microsoft Defender for Cloud emerges as a strategic tool for modern businesses. This solution, integrated into the Azure environment, offers advanced protection, simplifying security and compliance management at the enterprise level. This article explores the main new features introduced in Defender for Cloud during the summer of 2023, outlining how these innovations can bring added value to businesses.

The benefits of adopting Defender for Cloud

Adopting Defender for Cloud in an enterprise context is not only a strategic choice, but a growing necessity. This solution centralizes and simplifies security management, offering a holistic view that facilitates continuous monitoring and rapid response to security threats. It also helps optimize the security posture of hybrid and multi-cloud environments, while ensuring advanced protection and compliance with different regulatory requirements.

What’s new in Summer 2023

Ability to include Defender for Cloud in business cases built with Azure Migrate

For companies planning to move their resources to cloud platforms such as Azure, migration planning is essential. With the integration of Defender for Cloud into Azure Migrate, it is now possible to ensure advanced protection from the very first phase of migration. This integration ensures that security strategies are built into the migration plan, providing a safer and smoother transition to the cloud.

Defender for Cloud, increasingly agentless

Several Defender for Cloud capabilities are now available without the need to install an agent:

  • Container protection in Defender CSPM: agentless discovery. The transition from agent-based to agentless discovery for container protection in Defender CSPM is a significant step towards leaner and more effective security management. This new capability removes the need to install agents on every container, simplifying the discovery process and reducing resource consumption.
  • Defender for Containers: agentless discovery for Kubernetes. Defender for Containers has introduced agentless discovery for Kubernetes, a notable step forward in container security. This capability provides detailed visibility and a full inventory of Kubernetes environments, ensuring a high level of security and compliance.
  • Defender for Servers P2 & Defender CSPM: agentless secret scanning for Virtual Machines. The agentless secret scanning capability, available in Defender for Servers P2 and Defender CSPM, detects unmanaged and vulnerable secrets stored on virtual machines. This tool is essential for preventing lateral movement across the network and reducing the associated risks.

Data Aware Security Posture 

Adopting a data-aware security posture is essential, and Microsoft Defender for Cloud is now able to meet this need as well. This capability allows companies to minimize data-related risks by automatically discovering sensitive information and assessing the related threats, improving the response to data breaches. In particular, the sensitive data discovery feature for PaaS databases is currently in preview. It allows users to catalog critical data and recognize the types of information stored in their databases, which is essential for effective management and protection of sensitive data.

GCP support in Defender CSPM

The introduction of support for Google Cloud Platform (GCP) in Defender CSPM, currently in preview, marks a significant step towards more integrated and versatile security management, extending Defender CSPM capabilities to a wide range of services in Google’s public cloud.

Malware scanning in Defender for Storage

Defender for Storage introduces a malware scanning capability, overcoming the traditional challenges of malware protection and providing an ideal solution for heavily regulated industries. This feature, available as an add-on, significantly strengthens the security capabilities of Microsoft Defender for Storage. Malware scanning provides the following benefits:

  • Near real-time, agentless protection: the ability to intercept advanced malware, such as polymorphic and metamorphic variants.
  • Cost optimization: flexible pricing makes it possible to control costs based on the amount of data scanned, with per-resource granularity.
  • Enablement at scale: with no maintenance required, it supports automated responses at scale and offers several activation options through tools and platforms such as Azure Policy, Bicep, ARM templates, Terraform, the REST API, and the Azure portal (a minimal REST sketch follows this list).
  • Application versatility: based on feedback from preview users over the last two years, malware scanning has proved useful in a variety of scenarios, such as web applications, content protection, compliance, third-party integrations, collaboration platforms, data pipelines, and machine learning (ML) datasets.
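
As an illustration of the REST activation path, the sketch below enables Defender for Storage with on-upload malware scanning on a single storage account. Treat it strictly as a sketch: the `defenderForStorageSettings` resource type, the API version, and the property names used here are assumptions based on the public Microsoft.Security REST surface and should be checked against the current documentation before use.

```python
# Minimal sketch (assumptions noted below): enable Defender for Storage with
# on-upload malware scanning on one storage account via the ARM REST API.
# ASSUMPTIONS: resource type 'defenderForStorageSettings', the API version and
# the property names may differ; verify against the Microsoft.Security docs.
import requests
from azure.identity import DefaultAzureCredential

STORAGE_ACCOUNT_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>"
)  # placeholder resource ID
API_VERSION = "2022-12-01-preview"  # assumed API version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com{STORAGE_ACCOUNT_ID}"
    f"/providers/Microsoft.Security/defenderForStorageSettings/current"
    f"?api-version={API_VERSION}"
)
body = {
    "properties": {
        "isEnabled": True,
        "malwareScanning": {"onUpload": {"isEnabled": True, "capGBPerMonth": 5000}},
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())
```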

Express Configuration for Vulnerability Assessment in Defender for SQL

The ‘express’ configuration option for vulnerability assessments in Defender for SQL, now generally available, makes it easier to identify and manage vulnerabilities, ensuring a timely response and more effective protection.

GitHub Advanced Security per Azure DevOps

It is now possible to view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies directly in Defender for Cloud. The results are shown in the DevOps section and in the Recommendations. To see these results, you need to onboard your GHAzDO-enabled repositories to Defender for Cloud.

New auto-provisioning process for the SQL Server plan (preview)

The Microsoft Monitoring Agent (MMA) will be deprecated in August 2024. Defender for Cloud has updated its strategy accordingly, replacing MMA with a new auto-provisioning process for the Azure Monitor Agent targeted at SQL Server.

Revised business model and pricing structure

Microsoft has revised the business model and pricing structure of the Defender for Cloud plans. These changes, aimed at providing greater clarity on expenses and a more intuitive cost structure, were made in response to customer feedback to improve the overall experience.

Conclusion

Summer 2023 marked a period of significant innovation for Microsoft Defender for Cloud. These new capabilities, oriented towards more integrated and simplified security management, promise tangible benefits for businesses, making it easier to protect data and maintain compliance in increasingly complex cloud environments.

Discover proven strategies for optimizing costs on Azure

The distinctive characteristics and undeniable advantages of cloud computing can, in certain situations, hide pitfalls if not managed with due care. Careful cost management is one of the crucial aspects of cloud governance. This article explores and outlines the principles and techniques that can be used to optimize and minimize the expenses related to resources deployed in the Azure environment.

Cloud cost optimization is a topic of growing interest among many customers. So much so that, for the seventh year in a row, it emerges as the top cloud initiative according to Flexera’s 2023 annual report.

Figure 1 – Initiatives reported in the 2023 Flexera report

Principles for effective cost management

For effective management of Azure-related costs, it is essential to adopt the principles outlined in the following paragraphs.

Design

A well-structured design process, including a thorough analysis of business needs, is essential to tailor the adoption of cloud solutions. It therefore becomes crucial to define the infrastructure to be deployed and how it will be used, through a design plan aimed at optimizing the efficiency of the resources allocated in the Azure environment.

Visibility

It is vital to adopt tools that provide a global view and allow you to receive notifications about Azure costs, enabling constant and proactive monitoring of spending, for example by defining budgets with alert thresholds, as sketched below.
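
As one possible way to get this visibility programmatically, the following sketch creates a monthly budget with an e-mail alert at 80% of spend using the azure-mgmt-consumption SDK. The scope, amount, dates, and e-mail address are placeholders, and the exact model and operation names should be verified against the installed SDK version.

```python
# Minimal sketch: create a monthly subscription budget with an 80% alert,
# assuming azure-identity and azure-mgmt-consumption are installed.
# Values are placeholders; verify model names against your SDK version.
from datetime import datetime
from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient
from azure.mgmt.consumption.models import Budget, BudgetTimePeriod, Notification

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
scope = f"/subscriptions/{SUBSCRIPTION_ID}"

client = ConsumptionManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

budget = Budget(
    category="Cost",
    amount=1000,                      # monthly budget in the billing currency
    time_grain="Monthly",
    time_period=BudgetTimePeriod(
        start_date=datetime(2023, 10, 1),   # must be the first day of a month
        end_date=datetime(2025, 9, 30),
    ),
    notifications={
        "Actual_GreaterThan_80_Percent": Notification(
            enabled=True,
            operator="GreaterThan",
            threshold=80,
            contact_emails=["finops@contoso.com"],  # placeholder address
        )
    },
)

client.budgets.create_or_update(scope, "monthly-azure-budget", budget)
```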

Accountability

Assigning the costs of cloud resources to the respective organizational units within the company is a wise practice. This ensures that the people in charge are fully aware of the expenses attributable to their team, promoting a thorough understanding of Azure spending at the organizational level. To this end, it is advisable to structure Azure resources in a way that makes it easy to identify and attribute costs.

Optimization

It is advisable to carry out periodic reviews of Azure resources with the aim of minimizing expenses wherever possible. Using the available information, it is easy to identify underused resources, eliminate waste, and capitalize on cost-saving opportunities.

Iteration

It is essential that IT staff are continuously engaged in iterative processes to optimize the costs of Azure resources. This is a key element of responsible and effective management of the cloud environment.

Techniques for optimizing costs

Regardless of the specific tools and solutions used, the following strategies can be adopted to refine cost management in Azure:

  • Shut down unused resources, since the pricing of many Azure services is based on actual resource usage. For resources that do not need to run continuously and that can be deactivated or suspended without any loss of configuration or data, an automation system can be put in place. Driven by a predefined schedule, this system helps optimize utilization and, as a result, manage those resources more economically (a minimal automation sketch follows this list).
  • Right-size resources, consolidating workloads and proactively addressing underused resources, to avoid waste and ensure a more efficient and targeted use of the available capacity.
  • For resources used continuously in the Azure environment, evaluating the Reservations option can be an advantageous strategy. Azure Reservations offer a significant cost reduction, up to 72% compared to pay-as-you-go rates. This benefit is obtained by committing to pay for the use of Azure resources for a period of one or three years. The payment can be made up front or on a monthly basis, at no additional cost. Reservations can be purchased directly from the Azure portal and are available to customers with the following subscription types: Enterprise Agreement, Pay-As-You-Go, and Cloud Solution Provider (CSP).
  • To further reduce Azure costs, it is worth considering the Azure Hybrid Benefit. This benefit can deliver considerable savings, since Microsoft charges only for the Azure infrastructure, while the licenses for Windows Server or SQL Server are covered by an existing Software Assurance agreement or subscription.

The Azure Hybrid Benefit can also be extended to Azure SQL Database, to SQL Server installed on Azure virtual machines, and to SQL Managed Instances. These benefits make the transition to cloud solutions easier, offering up to 180 days of dual-use rights, and help maximize the value of existing SQL Server license investments. To learn more about how to use the Azure Hybrid Benefit for SQL Server, please consult the FAQ in this document. It is worth noting that this benefit also applies to RedHat and SUSE Linux subscriptions, further expanding the opportunities for savings and cost optimization.

The Azure Hybrid Benefit can be combined with Azure Reserved VM Instances, creating a significant savings opportunity that can reach 80% of the total, especially when opting for a 3-year Azure Reserved Instance purchase. This combination not only makes the investment more economical, but also maximizes operational efficiency.

  • Considering the adoption of new technologies and architectural optimizations is crucial. This process involves selecting the most appropriate Azure service for the specific needs of the application in question, ensuring not only an optimal technological fit but also more efficient cost management.
  • Allocating and de-allocating resources dynamically is essential to respond to fluctuating performance requirements. This approach is known as “autoscaling”, a process that enables flexible resource allocation to meet performance needs at any given time. As the workload intensifies, an application may require additional resources to maintain the desired performance levels and meet its SLAs (Service Level Agreements). Conversely, when demand decreases and the additional resources are no longer needed, they can be de-allocated to minimize costs. Autoscaling capitalizes on the elasticity of cloud environments, enabling not only more effective cost management but also a lower administrative burden, since resources can be managed more fluidly and with fewer manual interventions.
  • For environments dedicated to test and development, it is advisable to consider Dev/Test subscriptions, which provide access to significant discounts on Azure rates. These subscriptions can be activated under an Enterprise Agreement, enabling more advantageous cost management and more agile, economical experimentation during development and test phases.
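
As a concrete illustration of the first technique, the following Python sketch deallocates all VMs carrying a hypothetical `auto-shutdown=true` tag, so that it can be scheduled (for example from Azure Automation or a cron job) to run outside business hours. It assumes the azure-identity and azure-mgmt-compute packages; the tag name and the subscription ID are placeholders, not an established convention.

```python
# Minimal sketch: deallocate all VMs carrying a (hypothetical) auto-shutdown tag,
# intended to be run on a schedule outside business hours.
# Assumes azure-identity and azure-mgmt-compute are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"      # placeholder
TAG_NAME, TAG_VALUE = "auto-shutdown", "true"   # hypothetical tagging convention

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for vm in client.virtual_machines.list_all():
    tags = vm.tags or {}
    if tags.get(TAG_NAME, "").lower() != TAG_VALUE:
        continue
    # The resource group is embedded in the VM resource ID:
    # /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/...
    resource_group = vm.id.split("/")[4]
    print(f"Deallocating {vm.name} in {resource_group} ...")
    # begin_deallocate releases the compute resources, so the VM stops incurring
    # compute charges (disks and reserved public IPs are still billed).
    client.virtual_machines.begin_deallocate(resource_group, vm.name).result()
```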

Conclusions

Adopting a methodical approach to cloud cost management, together with the use of appropriate strategies, is a fundamental pillar for successfully navigating the complex challenge of running the cloud economically. By drawing on the principles and techniques outlined in this article, users can not only optimize their spending but also get the most out of their cloud investment, ensuring a balance between costs and benefits.

Windows Server Hotpatching: a revolution in virtual machine management

In the digital era, ensuring operational continuity is essential, no longer just an added value. For many companies, frequent interruptions, even short ones, are unacceptable for their critical workloads. However, guaranteeing this continuity can be complex, given that managing virtual machines (VMs) running Windows Server is challenging in some respects, especially when it comes to applying security patches and updates. With the introduction of hotpatching by Microsoft, a new chapter has opened in VM management: a more efficient approach that minimizes interruptions while keeping servers up to date and protected. This article examines the characteristics and advantages of this innovative solution.

What is Hotpatching?

Hotpatching, introduced by Microsoft, is an advanced technique that makes it possible to update Windows Server operating systems without requiring a reboot. Imagine being able to “change the tires” of your car while it is moving, without having to stop it. That is the “magic” of hotpatching.

Where Hotpatching can be used

The Hotpatch feature is supported on the “Windows Server 2022 Datacenter: Azure Edition” operating system, which can be used for VMs running in Azure and on Azure Stack HCI.

The Azure images available for this feature are:

  • Windows Server 2022 Datacenter: Azure Edition Hotpatch (Desktop Experience)
  • Windows Server 2022 Datacenter: Azure Edition Core

Note that Hotpatch is enabled by default on the Server Core images, and that Microsoft has recently extended hotpatching support to include Windows Server with Desktop Experience, further broadening the scope of this feature.
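
To give an idea of how this is expressed in practice, the sketch below updates an existing Azure VM so that it uses platform-orchestrated patching with hotpatching enabled, via the azure-mgmt-compute SDK. The resource group and VM name are placeholders, the VM must already run a supported “Azure Edition” image, and the model names should be checked against the installed SDK version.

```python
# Minimal sketch: enable hotpatching on an existing VM that already runs a
# supported "Windows Server 2022 Datacenter: Azure Edition" image.
# Verify model names against your azure-mgmt-compute version; values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import (
    VirtualMachineUpdate,
    OSProfile,
    WindowsConfiguration,
    PatchSettings,
)

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"          # placeholder
VM_NAME = "<vm-name>"                        # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

update = VirtualMachineUpdate(
    os_profile=OSProfile(
        windows_configuration=WindowsConfiguration(
            patch_settings=PatchSettings(
                patch_mode="AutomaticByPlatform",  # platform-orchestrated patching
                enable_hotpatching=True,
            )
        )
    )
)

client.virtual_machines.begin_update(RESOURCE_GROUP, VM_NAME, update).result()
```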

Supported updates

Hotpatch covers Windows security updates and stays aligned with the content of the security updates released in the regular (non-hotpatch) Windows update channel.

There are some important considerations when running a Windows Server Azure Edition VM with hotpatch enabled:

  • reboots are still required to install updates that are not included in the hotpatch program;
  • reboots are also required periodically after a new baseline has been installed;
  • reboots keep the VM in sync with the non-security patches included in the latest cumulative update.

Patches currently not included in the hotpatch program include non-security updates released for Windows, .NET updates, and non-Windows updates (such as drivers, firmware updates, and so on). These types of patches may require a reboot during Hotpatch months.

Benefits of Hotpatching

The benefits of this technology are numerous:

  • Better security: with hotpatching, security patches are applied quickly and efficiently. This reduces the window of vulnerability between the release of a patch and its application, providing rapid protection against threats.
  • Minimized downtime: one of the main advantages of hotpatching is the ability to apply updates without having to reboot the server. This means fewer interruptions and greater availability for applications and services.
  • More flexible management: system administrators are free to decide when to apply patches, without having to plan carefully to ensure that running processes are not interrupted while updates are applied.

How Hotpatching works

During a hotpatching process, the security patch is injected into the running code of the operating system in memory, updating the system while it is still in operation.

Hotpatch works by first establishing a baseline with the current Cumulative Update for Windows Server. Periodically (on a quarterly cadence), the baseline is refreshed with the latest Cumulative Update, after which hotpatches are released for the following two months. For example, if a Cumulative Update is released in January, February and March would see hotpatch releases. For the hotpatch release schedule, you can consult the release notes for Hotpatch in Azure Automanage for Windows Server 2022.

Hotpatches contain updates that do not require a reboot. Because Hotpatch patches the in-memory code of running processes without needing to restart them, the applications hosted on the operating system are not affected by the patching process. This is separate from any performance or functional implications of the patch itself.

The following image shows an example of an annual update release schedule (including examples of unplanned baselines due to zero-day fixes).

Figure 1 – Example annual schedule for Hotpatch update releases

There are two types of baselines:

  • Planned baselines: released on a regular cadence, with hotpatch releases in between. Planned baselines include all the updates in the most recent Cumulative Update and require a reboot.
  • Unplanned baselines: released when an important update (such as a zero-day fix) cannot be shipped as a hotpatch. When an unplanned baseline is released, it replaces the hotpatch release for that month. Unplanned baselines also include all the updates in the most recent Cumulative Update and require a reboot.

The schedule shown in the example image illustrates:

  • four planned baseline releases in a calendar year (five in total in the diagram) and eight hotpatch releases;
  • two unplanned baselines that would replace the hotpatch releases for those months.

Patch orchestration process

Hotpatch should be considered an extension of Windows Update, and the patch orchestration tools vary depending on the platform in use.

Hotpatch orchestration in Azure

Virtual machines created in Azure are enabled by default for automatic patching when a supported “Windows Server Datacenter: Azure Edition” image is used (an on-demand assessment can also be triggered programmatically, as shown in the sketch after this list):

  • patches classified as Critical or Security are automatically downloaded and applied to the VM;
  • patches are applied during off-peak hours based on the VM’s time zone;
  • Azure manages the patch orchestration, and patches are applied following availability-first principles;
  • the health of the virtual machine, determined through the Azure platform health signals, is monitored to detect patching failures.
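
For visibility into what the platform would apply, an on-demand patch assessment can also be triggered from the SDK. The sketch below uses the begin_assess_patches operation in azure-mgmt-compute; the resource group and VM name are placeholders, and the exact shape of the returned result should be checked against the installed SDK version.

```python
# Minimal sketch: trigger an on-demand patch assessment on an Azure VM and
# print a summary of what the platform reports as pending.
# Assumes azure-identity and azure-mgmt-compute are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
RESOURCE_GROUP = "<resource-group>"         # placeholder
VM_NAME = "<vm-name>"                       # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Long-running operation: waits until the in-guest assessment completes.
result = client.virtual_machines.begin_assess_patches(RESOURCE_GROUP, VM_NAME).result()

print("Assessment status:", result.status)
print("Critical/security patches pending:", result.critical_and_security_patch_count)
print("Other patches pending:", result.other_patch_count)
```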

Hotpatch orchestration in Azure Stack HCI

Hotpatch updates for virtual machines running in an Azure Stack HCI environment can be orchestrated using:

  • Group Policy to configure the Windows Update client settings;
  • the Windows Update client settings, or SCONFIG for Server Core;
  • a third-party patch management solution.

Considerations and limitations

However, like any technology, hotpatching has its nuances. Not all patches are suitable for hotpatching; some may still require a traditional reboot. Moreover, before applying any patch, it remains essential to test it in a controlled environment to avoid potential issues.

Installing Hotpatch updates does not support automatic rollback. If a VM encounters a problem during or after an update, you must uninstall the update and install the latest known-good baseline update. After the rollback, the VM must be rebooted.

Conclusion

The introduction of hotpatching by Microsoft represents a significant step forward in managing VMs running Windows Server. With the ability to apply security patches and updates without interruption, administrators can keep their servers protected and up to date more quickly and effectively. In a world where security is paramount and every second counts, hotpatching stands out as a valuable solution for any company running Windows Server in Azure or in an Azure Stack HCI environment.