Category Archives: Microsoft Azure

Azure Security: how to secure the Azure Deployment and Resource Management service

To achieve a high level of security in your public cloud environment, you need to protect the individual resources that are activated, but it is also appropriate to monitor the service that enables the deployment and management of those resources. In the Microsoft public cloud, the deployment and management service is Azure Resource Manager, a crucial service connected to all Azure resources and therefore an attractive target for attackers. Aware of this, Microsoft recently announced Azure Defender for Resource Manager. This article describes the features of this solution, which allows you to carry out advanced security analysis in order to detect potential threats and be alerted to suspicious activity affecting Azure Resource Manager.

In Azure Defender there are protections designed specifically for individual Azure services, such as Azure SQL DB, Azure Storage, and Azure VMs, as well as protections that cut across the components used by the various Azure resources. These include Azure Defender for Azure Network and for Key Vault, and the availability of Azure Defender for Azure DNS and for Azure Resource Manager was also recently announced. These tools give you an additional level of protection and control in your Azure environment.

Figure 1 – Azure Defender Threat Protection for Azure Workloads

Azure Resource Manager provides the management layer that allows you to create, update, and delete resources in the Azure environment. It also provides specific features for the governance of the Azure environment, such as access control, locks, and tags, that help protect and organize resources after they are deployed.

Azure Defender for Resource Manager automatically monitors the organization's Azure resource management operations, whether they are performed through the Azure portal, Azure REST APIs, the command-line interface, or other Azure programming clients.

Figure 2 – Protection of Azure Defender for Resource Manager

To activate this type of protection, simply enable the specific Azure Defender plan in the Azure Security Center settings:

Figure 3 - Activation of Azure Defender for Resource Manager
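Besides the portal, the same plan can be enabled programmatically. Below is a minimal sketch that calls the Security Center pricing REST API using the azure-identity package; the pricing name "Arm" and the api-version are assumptions to verify against the current Microsoft documentation.

```python
import requests
from azure.identity import DefaultAzureCredential

# Assumptions: subscription ID placeholder; the pricing name "Arm" and the
# api-version shown here should be verified against current Microsoft docs.
SUBSCRIPTION_ID = "<subscription-id>"
API_VERSION = "2018-06-01"

# Acquire an ARM token with the default credential chain (CLI, managed identity, ...).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Security/pricings/Arm?api-version={API_VERSION}"
)

# Setting the pricing tier to "Standard" turns the Azure Defender plan on;
# "Free" would turn it back off.
response = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"pricingTier": "Standard"}},
)
response.raise_for_status()
print(response.json())
```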

Azure Defender for Resource Manager can detect and alert on conditions such as:

  • Suspicious resource management operations, such as operations from suspicious IP addresses, disabling of antimalware components, and suspicious scripts running in VM extensions.
  • Use of exploitation toolkits such as MicroBurst or PowerZure.
  • Lateral movement from the Azure management layer to the Azure resources data plane.

A complete list of the alerts that Azure Defender for Resource Manager can generate is available in this Microsoft document.

Security alerts generated by Azure Defender for Resource Manager are based on potential threats that are detected by monitoring Azure Resource Manager operations using the following sources:

  • Azure Activity Log, the Azure platform log providing information about subscription-level events.
  • Azure Resource Manager internal logs, which are not accessible to customers but only to Microsoft personnel.

In order to obtain a better and more in-depth investigation experience, it is advisable to send the Azure Activity Logs to Azure Sentinel, following the steps in this Microsoft document.
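Whether or not Sentinel is in place, the Activity Log data can also be queried directly from the Log Analytics workspace it is connected to. Below is a minimal sketch using the azure-monitor-query Python package; the workspace ID is a placeholder and the AzureActivity column names should be verified against your workspace schema.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

# Summarize who performed Azure Resource Manager operations, and from which IPs,
# over the last day - a useful starting point for an investigation.
QUERY = """
AzureActivity
| where TimeGenerated > ago(1d)
| summarize operations = count() by Caller, CallerIpAddress
| order by operations desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

for table in result.tables:
    print(table.columns)
    for row in table.rows:
        print(list(row))
```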

When simulating an attack on the Azure Resource Manager layer using the PowerZure exploitation toolkit, Azure Defender for Resource Manager generates a high-severity alert, as shown in the following image:

Figure 4 – Alert generated by Azure Defender for Resource Manager

For such an alert you can also receive a notification by setting up an appropriate action group in Azure Monitor. Furthermore, if the integration between Azure Security Center and Azure Sentinel has been activated, the same alert will also appear in Azure Sentinel, with the relevant information needed to start the investigation process and provide a prompt response to a problem of this type.
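As a hedged example, the sketch below creates such an action group with an e-mail receiver using the azure-mgmt-monitor Python SDK; the resource names and the recipient address are hypothetical and should be adapted to your environment.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import ActionGroupResource, EmailReceiver

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-security"          # hypothetical resource group

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# An action group that e-mails the security team; it can then be referenced
# by Security Center / Azure Monitor alert rules to send notifications.
action_group = ActionGroupResource(
    location="Global",
    group_short_name="secalerts",
    enabled=True,
    email_receivers=[
        EmailReceiver(name="SecurityTeam", email_address="secops@contoso.com")
    ],
)

result = client.action_groups.create_or_update(
    RESOURCE_GROUP, "ag-security-alerts", action_group
)
print(result.id)
```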

Conclusions

Protecting resources effectively in the Azure environment also means adopting the appropriate tools to deal with potential attacks that can exploit the deployment and management mechanisms of the resources themselves. Thanks to the new Azure Defender for Resource Manager, you can benefit from effective protection that is fully integrated into the Azure platform, without having to install specific software or enable additional agents.

Azure IaaS and Azure Stack: announcements and updates (January 2021 – Weeks: 01 and 02)

This series of blog posts includes the most important announcements and major updates regarding Azure infrastructure as a service (IaaS) and Azure Stack, made official by Microsoft in the last two weeks.

Azure

Compute

New datacenter region in Chile

Microsoft has announced plans for a new datacenter region in Chile, as part of its “Transforma Chile” initiative. A skilling program as well as an Advisory Board are also part of the initiative, targeted at reaching 180,000 Chileans.

NCas_T4_v3-Series VMs are now generally available

NCas_T4_v3 virtual machines feature 4 NVIDIA T4 GPUs with 16 GB of memory each, up to 64 non-multithreaded AMD EPYC 7V12 (Rome) processor cores, and 448 GiB of system memory. These virtual machines are ideal for running ML and AI workloads using CUDA, TensorFlow, PyTorch, Caffe, and other frameworks, or graphics workloads using NVIDIA GRID technology. NCas_T4_v3 VMs are now generally available in the West US 2, West Europe, and Korea Central regions.

Networking

Public IP SKU upgrade

Azure public IP addresses can now be upgraded from the Basic to the Standard SKU. Additionally, any Basic public load balancer can now be upgraded to a Standard public load balancer while retaining the same public IP address. This is supported via PowerShell, CLI, templates, and API, and is available across all Azure regions.
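A possible sketch of this upgrade with the azure-mgmt-network Python SDK is shown below; it assumes the Basic public IP has already been dissociated from any resource, the resource names are placeholders, and the exact requirements should be verified in the official documentation before applying it.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-network"           # hypothetical values
PUBLIC_IP_NAME = "pip-frontend"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Read the existing Basic public IP (it must not be attached to a resource
# during the upgrade), switch the SKU and make the allocation static,
# which is required by the Standard SKU.
pip = client.public_ip_addresses.get(RESOURCE_GROUP, PUBLIC_IP_NAME)
pip.sku.name = "Standard"
pip.public_ip_allocation_method = "Static"

poller = client.public_ip_addresses.begin_create_or_update(
    RESOURCE_GROUP, PUBLIC_IP_NAME, pip
)
upgraded = poller.result()
print(upgraded.sku.name, upgraded.ip_address)
```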

Azure Hybrid Cloud: Azure Stack Edge solution overview

To better meet the need for solutions that can extend your environment, from the main datacenter to peripheral sites, with innovative Azure services, Microsoft makes the Azure Stack portfolio available to its customers. It is a set of hybrid cloud solutions that allow you to deploy and run your application workloads consistently, without restrictions imposed by geographical location. This article provides an overview of the Azure Stack Edge (ASE) platform and its characteristics, examining its use cases and main features.

Before going into the specifics of Azure Stack Edge, it is worth noting that the solutions included in the Azure Stack portfolio are the following:

  • Azure Stack Edge: the Azure managed appliance that brings compute power, cloud storage, and intelligence to a customer's remote edge locations.
  • Azure Stack HCI: the solution that allows the execution of virtual machines and an easy connection to Azure thanks to a hyper-converged infrastructure (HCI).
  • Azure Stack Hub: the offering for enterprise and public sector customers that need a cloud environment disconnected from the Internet, or that need to meet specific regulatory and compliance requirements.

Figure 1 – Azure Stack Product Family

To get an overview of these solutions I invite you to read this article.

Azure Stack Edge value proposition

The results that can be obtained by adopting the Azure Stack Edge solution are the following:

  • Possibility of adopting an on-premises infrastructure-as-a-service (IaaS) model for workloads at peripheral (edge) sites, where both hardware and software are provided by Microsoft.
  • Ability to run applications at customer sites, in order to keep them close to the data sources. Furthermore, it allows you not only to run proprietary and third-party applications at the edge, but also to take advantage of various Azure services.
  • Availability of built-in hardware accelerators that allow you to run machine learning and AI scenarios at the edge, right where the data is, without having to send data to the cloud for further analysis.
  • Possibility of having an integrated cloud storage gateway that allows easy data transfer from the edge to the cloud environment.

Usage scenarios

The main scenarios for using Azure Stack Edge are the following:

  • Machine learning at peripheral sites: thanks to the integrated hardware accelerators and the processing capabilities offered by the solution, you can handle these scenarios right where the data resides, processing it in real time, without having to send it to Azure.
  • Computational capacity at edge: customers can run their business applications and IoT solutions at peripheral sites, without necessarily having to rely on constant connectivity to the cloud environment.
  • Network transfer of data from the edge to the cloud: used in scenarios where you want to periodically transfer data from the edge to the cloud, for further analysis or storage purposes.

Form factors

To support the different usage scenarios described above, across industry verticals, Azure Stack Edge is available in three separate form factors:

  • Azure Stack Edge Pro, a 1U blade server with one or two GPUs.
  • Azure Stack Edge Pro R, a rugged server with GPU, in a sturdy carrying case, complete with UPS and backup battery.
  • Azure Stack Edge Mini R, a machine with a reduced form factor, a battery, and a low weight (less than 3.5 kg).

Figure 2 – Azure Stack Edge Form Factors

Azure Stack Edge "rugged" versions allow resistance to extreme environmental conditions, and battery-powered versions allow easy transport.

Azure Stack Edge stack software

Customers can order and provision Azure Stack Edge directly from the Azure portal, and then use the classic Azure management tools to monitor it and apply updates. Hardware support is provided directly by Microsoft, which will replace the components in case of problems. There is no upfront cost to obtain this appliance; the cost is instead included monthly in the billing of Azure services. Since, once configured, any application running on Azure Stack Edge can be configured and deployed from the Azure portal, there is no need for IT staff at the edge location.

Azure Stack Edge Computational Capacity

The ability to offer compute capacity at the edge is one of the key features of Azure Stack Edge, and it can be provided in one of the following ways:

  • IoT Edge: the execution of containerized workloads deployed through IoT Hub has been supported since the launch of Azure Stack Edge and continues to be so.
  • Kubernetes: recently, support was introduced for the execution of containerized workloads in Kubernetes clusters running on Azure Stack Edge.
  • Virtual machines: another way to run applications is by activating workloads inside virtual machines.

Kubernetes environment in Azure Stack Edge

Kubernetes is becoming the de facto standard for running and orchestrating containerized workloads, but anyone who knows these environments is aware of the operational challenges that can arise from managing a Kubernetes cluster. In this context, the goal of Azure Stack Edge is to simplify the deployment and management of Kubernetes clusters. With a simple configuration, you can activate a Kubernetes cluster on Azure Stack Edge.

Once the Kubernetes cluster has been configured, additional management steps are required, which are simplified in ASE with simple add-ons. These operations include:

  • The ability to easily enable hardware accelerators.
  • The provisioning of the storage system to create persistent volumes.
  • Keeping the cluster up to date with Kubernetes releases by applying the latest available updates.
  • The ability to apply security and governance mechanisms from your own infrastructure.

Once the cluster environment configuration is complete, simple mechanisms are provided for deploying and managing workloads on the Kubernetes cluster, using the following modes:

  • Azure Arc: ASE comes with native integration with Azure Arc. With just a few steps you can enable Azure Arc, allowing applications to be deployed to the Kubernetes cluster directly from the Azure portal.
  • IoT Hub: by enabling the IoT Hub add-on, it can be used for the deployment of containers.
  • Kubectl: finally, the native kubectl approach is supported, typically used in disconnected environments or when an existing infrastructure already integrates with this mode (a minimal sketch follows the figure below).

Figure 3 – Kubernetes deployment in Azure Stack Edge
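To illustrate the kubectl-based mode, the sketch below drives the same kind of deployment through the official Kubernetes Python client, assuming a kubeconfig exported from the appliance is available locally and using a generic nginx image as the workload.

```python
from kubernetes import client, config

# Load the kubeconfig exported from the Azure Stack Edge Kubernetes cluster
# (the file path is an assumption for this example).
config.load_kube_config(config_file="ase-kubeconfig.yaml")

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-edge"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-edge"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-edge"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.19",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Create the Deployment in the default namespace, exactly as kubectl apply would.
apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created on the edge cluster")
```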

Virtual machines in Azure Stack Edge

Another way to offer compute capacity at the edge is to run virtual machines. Azure Stack Edge allows you to host virtual machines, both Windows and Linux, offering the ability to deploy and manage these virtual machines directly from Azure or by acting locally.

Figure 4 – Virtual Machines in Azure Stack Edge

One thing to consider is that Azure Stack Edge allows you to set up simpler network topologies than Azure or Azure Stack Hub.

Regarding the hardware acceleration features in Azure Stack Edge, these two variants are supported:

  • GPU NVIDIA T4, fully integrated with the GPU stack
  • Intel Movidius Visual Processing Unit (VPU), for AI and ML scenarios

Azure services that can be deployed in Azure Stack Edge

The number of services that can be activated in Azure Stack Edge is large; those recently introduced include:

  • Live Video Analytics: a platform for creating video solutions and applications based on artificial intelligence, to carry out real-time insights using video streams.
  • Spatial Analysis: a real-time computer vision module to analyze videos and understand people's movements in physical spaces. For example, during the Covid period, many retail stores want to implement social distancing policies and can use the Spatial Analysis module to understand certain behaviors based on videos shot in the store.
  • Azure Monitor: helps increase application performance and availability by collecting logs from containers and analyzing them.

Figure 5 – Azure Solutions in Azure Stack Edge

Conclusions

In real-world businesses, the adoption of fully cloud-based solutions is not always a viable or optimal choice; hybrid solutions often have to be adopted, which still allow you to use the innovations introduced by the cloud. Azure Stack Edge is a flexible and modern solution that allows you to meet even the most challenging needs emerging at edge sites, without neglecting the potential offered by the public cloud.

Azure IaaS and Azure Stack: announcements and updates (December 2020 – Week: 53)

This series of blog posts includes the most important announcements and major updates regarding Azure infrastructure as a service (IaaS) and Azure Stack, made official by Microsoft in the last two weeks.

In the last week of the year there was little news, due to the holiday period. This series of blog posts will continue into 2021. I take this opportunity to wish you a Happy New Year!

Azure

Azure NetApp Files: Application Consistent Snapshot tool (preview)

The Azure Application Consistent Snapshot tool (AzAcSnap) is in public preview. It is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL).

Azure Management services: what's new in December 2020

In December, Microsoft announced several pieces of news regarding Azure management services. Our community releases this monthly summary to give you a comprehensive overview of the main news of the month, so that you can stay up to date on these topics and have the references needed for further study.

The following diagram shows the different areas related to management, which are covered in this series of articles, in order to stay up to date on these topics and to better deploy and maintain applications and resources.

Figure 1 – Management services in Azure overview

Monitor

Azure Monitor

New Azure Monitor agent and new Data Collection Rules features (preview)

Azure Monitor introduces (in preview) a new unified agent (Azure Monitor Agent – AMA) and a new concept to make data collection more efficient (Data Collection Rules – DCR).

Among the various key features added in this new agent we find:

  • Support for Azure Arc servers (Windows and Linux)
  • Virtual Machine Scale Set (VMSS) support
  • Installation via ARM template

With regard to data collection, the following innovations have been made:

  • Better control in defining the scope of data collection (e.g., the ability to collect from a subset of VMs into a single workspace)
  • Single collection and sending to both Log Analytics and Azure Monitor Metrics
  • Send to multiple workspaces (multi-homing for Linux)
  • Ability to better filter Windows events
  • Better extension management
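As an example of rolling out the new agent outside the portal, the sketch below installs it as a VM extension with the azure-mgmt-compute Python SDK; the publisher and extension type names reflect the Azure Monitor agent documentation at the time of writing and, like the resource names, are assumptions to verify.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineExtension

SUBSCRIPTION_ID = "<subscription-id>"  # placeholders / hypothetical names
RESOURCE_GROUP = "rg-monitoring"
VM_NAME = "vm-linux-01"

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Install the Azure Monitor agent (Linux flavour) as a VM extension.
extension = VirtualMachineExtension(
    location="westeurope",
    publisher="Microsoft.Azure.Monitor",
    type_properties_type="AzureMonitorLinuxAgent",
    type_handler_version="1.0",
    auto_upgrade_minor_version=True,
)

poller = client.virtual_machine_extensions.begin_create_or_update(
    RESOURCE_GROUP, VM_NAME, "AzureMonitorLinuxAgent", extension
)
print(poller.result().provisioning_state)
```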

Azure Monitor for Windows Virtual Desktop (preview)

Azure Monitor now allows you to perform the following operations related to Windows Virtual Desktop environments:

  • View a summary of the status and health of host pools
  • Find and resolve any deployment issues
  • Evaluate resource usage and make decisions about scalability and cost management
  • Understanding and addressing user feedback

Azure Monitor for containers: tab reports and deployment logs

In Azure Monitor for containers, a new Reports tab has been made available that gives customers full access to all the advanced monitoring workbooks for Kubernetes, for example Node Disk, Node Network, Workloads, and Persistent Volume monitoring.

Furthermore, you can now view real-time logs of Azure Kubernetes Service (AKS) deployments by accessing the live logs of the pods directly. Log Analytics also allows you to search historical pod deployment logs by applying filters, which is useful for diagnosing any issues.

Azure Monitor for containers: support for Private Cluster live logs (preview)

In Azure Monitor for containers, support for private cluster live logs has been introduced; this allows you to view container logs, pod events, and metrics in real time. For more details, please see the specific Microsoft documentation.

Infrastructure Encryption for Azure Monitor data 

Starting from 1 November 2020, data that flows into Azure Monitor is encrypted twice: at the service level and now also at the infrastructure level, thanks to the double encryption available for Azure Storage.

Configure

Azure Automation

Support for Azure Private Link available

Microsoft has introduced support for Azure Private Link, which is needed to securely connect virtual networks to Azure Automation through the use of private endpoints. This feature is useful to:

  • Establish a private connection with Azure Automation, without opening access from the public network.
  • Ensure that Azure Automation data is accessible only through authorized private networks.
  • Protect yourself from data extraction by allowing granular access to specific resources.
  • Keep all traffic within the Microsoft Azure backbone network.

Availability in new regions

Azure Automation is now available in the “Norway East” and “Germany West Central” regions. To check the availability of the service in all Azure regions, you can consult this document.

Support for Python3 runbooks (preview)

In Azure Automation, you can now import, create, and run Python 3 runbooks in Azure or on a Hybrid Runbook Worker.
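A Python 3 runbook is simply a Python script; the minimal, purely illustrative example below reads an input parameter passed by Azure Automation from sys.argv and writes its output to the job stream.

```python
#!/usr/bin/env python3
"""Minimal Python 3 runbook example for Azure Automation (illustrative)."""
import sys


def main() -> None:
    # Runbook input parameters arrive as command-line arguments.
    target = sys.argv[1] if len(sys.argv) > 1 else "world"

    # Anything printed to stdout shows up in the Automation job output stream.
    print(f"Hello, {target}! This runbook is running on Python 3.")


if __name__ == "__main__":
    main()
```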

Evaluation of Azure

To test for free and evaluate the services provided by Azure you can access this page.

Azure IaaS and Azure Stack: announcements and updates (December 2020 – Weeks: 51 and 52)

This series of blog posts includes the most important announcements and major updates regarding Azure infrastructure as a service (IaaS) and Azure Stack, made official by Microsoft in the last two weeks.

Azure

Compute

Azure VMware Solution: now available in UK South and Japan East Azure regions

The new Azure VMware Solution empowers customers to seamlessly extend or migrate their existing on-premises VMware applications to Azure without the cost, effort or risk of re-architecting applications or retooling operations. General Availability of the new Azure VMware Solution was announced at Microsoft Ignite, Sept 2020, with initial availability in US East, US West, West Europe and Australia. Microsoft has now expanded availability to two more Azure regions: Japan East and UK South. For updates on more upcoming region availability please visit the product by region page here.

HBv2-series VMs for HPC now available in the UAE North region

HBv2 VMs are now Generally Available in the Azure UAE North region.

Storage

Azure File Sync agent v11.1

Azure File Sync agent v11.1 is now on Microsoft Update and Microsoft Download Center.

Improvements and issues that are fixed:

  • New cloud tiering modes to control initial download and proactive recall
    • Initial download mode: you can now choose how you want your files to be initially downloaded onto your new server endpoint. Want all your files tiered or as many files as possible downloaded onto your server by last modified timestamp? You can do that! Can’t use cloud tiering? You can now opt to avoid tiered files on your system. To learn more, see Create a server endpoint section in the Deploy Azure File Sync documentation.
    • Proactive recall mode: whenever a file is created or modified, you can proactively recall it to servers that you specify within the same sync group. This makes the file readily available for consumption in each server you specified. Have teams across the globe working on the same data? Enable proactive recalling so that when the team arrives the next morning, all the files updated by a team in a different time zone are downloaded and ready to go! To learn more, see Proactively recall new and changed files from an Azure file share section in the Deploy Azure File Sync documentation.
  • Exclude applications from cloud tiering last access time tracking
    • You can now exclude applications from last access time tracking. When an application accesses a file, the last access time for the file is updated in the cloud tiering database. Applications that scan the file system like anti-virus cause all files to have the same last access time which impacts when files are tiered. For more details, see the release notes.
  • Miscellaneous performance and reliability improvements
    • Improved change detection performance to detect files that have changed in the Azure file share.
    • Improved sync upload performance.
    • Initial upload is now performed from a VSS snapshot which reduces per-item errors and sync session failures.
    • Sync reliability improvements for certain I/O patterns.
    • Fixed a bug to prevent the sync database from going back-in-time on failover clusters when a failover occurs.
    • Improved recall performance when accessing a tiered file.

More information about this release:

  • This update is available for Windows Server 2012 R2, Windows Server 2016 and Windows Server 2019 installations that have Azure File Sync agent version 4.0.1.0 or later installed.
  • The agent version for this release is 11.1.0.0.
  • A restart may be required if files are in use during the agent installation.
  • Installation instructions are documented in KB4539951.

The possibilities offered by Azure for container execution

The strong trend in application development toward microservice-based architectures makes containers perfect for efficiently deploying software and operating at scale. Containers can run on Windows, Linux, and macOS, on virtual machines or bare metal, in on-premises data centers and, obviously, in the public cloud. Microsoft is certainly a leading provider that enables enterprise-level container execution in the public cloud. This article provides an overview of the main solutions that can be adopted to run containers in a Microsoft Azure environment.

Virtual machines

IaaS virtual machines in the Azure environment provide maximum flexibility for running Docker containers. In fact, on Windows and Linux virtual machines it is possible to install the Docker runtime, and thanks to the availability of different combinations of CPU and RAM you can have the resources necessary to run one or more containers. This approach is typically recommended only in dev/test environments, as the cost of configuring and maintaining the virtual machine is not negligible.

Serverless approaches

Azure Container Instances (ACI)

Azure Container Instances (ACI) is the easiest and fastest way in Azure to run on-demand containers in a managed serverless environment. All this without having to provision specific virtual machines, and the required maintenance is almost negligible. Azure Container Instances is suitable for scenarios that require isolated containers, without the need to adopt a complex orchestration system. In fact, ACI provides only some of the basic scheduling features offered by orchestration platforms and, although it does not cover the valuable services provided by such platforms, it can be seen as a complementary solution.

The top-level resource in Azure Container Instances is the container group, a collection of containers that are scheduled on the same host machine. Containers within a container group share their lifecycle, resources, local network, and storage volumes. The container group concept is similar to the pod concept in Kubernetes.

Figure 1 – Container group sample in Azure Container Instances
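As a minimal sketch, a single-container group like the one described above can be created with the azure-mgmt-containerinstance Python SDK; the resource names are placeholders and the model/method names should be verified against the SDK version in use.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container,
    ContainerGroup,
    ContainerPort,
    IpAddress,
    Port,
    ResourceRequests,
    ResourceRequirements,
)

SUBSCRIPTION_ID = "<subscription-id>"  # placeholders / hypothetical names
RESOURCE_GROUP = "rg-containers"

client = ContainerInstanceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# One Linux container exposed on a public IP: the container group is the
# unit that gets scheduled, similarly to a Kubernetes pod.
container = Container(
    name="web",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)
    ),
    ports=[ContainerPort(port=80)],
)

group = ContainerGroup(
    location="westeurope",
    containers=[container],
    os_type="Linux",
    ip_address=IpAddress(type="Public", ports=[Port(protocol="TCP", port=80)]),
)

poller = client.container_groups.begin_create_or_update(
    RESOURCE_GROUP, "aci-demo", group
)
print(poller.result().ip_address.ip)
```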

The Azure Container Instances service involves costs that depend on the number of vCPUs and GBs of memory allocated per second. For more details on costs, please visit the official Microsoft page.

Azure Web App for Containers

For web-based workloads, there is the ability to run containers from Azure App Service, the Azure web hosting platform, using the Azure Web App for Containers service, with the advantage of being able to leverage the deployment methodologies, scalability, and monitoring built into the solution.

Azure Batch and Containers

If workloads require you to scale with multiple job batches, you can put them in containers and manage scaling through Azure Batch. In this scenario, the combination of Azure Batch and containers turns out to be a winner. Azure Batch allows you to run and scale a large number of batch processing jobs in Azure, while containers provide an easy way to run Batch tasks without having to manage the environment and the dependencies required to run the applications. In these scenarios, you can also consider adopting low-priority VMs with Azure Batch to reduce costs.

Container orchestration

Automating and managing a large number of containers and the way they interact with each other is known as orchestration. If you need to orchestrate many containers, it is therefore necessary to adopt more sophisticated solutions such as Azure Kubernetes Service (AKS) or Azure Service Fabric.

Azure Kubernetes Service (AKS)

Azure Kubernetes Service (AKS) is the fully managed Azure service that allows the activation of a Kubernetes cluster.

Kubernetes, also known as “k8s”, provides automated orchestration of containers, improving their reliability and reducing the time and resources required for DevOps. Kubernetes tends to simplify deployments, allowing you to automatically perform rollouts and rollbacks. Furthermore, it improves application management and monitors the status of services to avoid errors during deployment. Among its various functions are service health checks, with the ability to restart containers that are not running or that are stuck, advertising to clients only the services that have started correctly. Kubernetes also allows you to scale automatically based on usage and, just like containers, lets you manage the cluster environment declaratively, allowing version-controlled and easily replicable configuration.

Figure 2 - Example of microservices architecture based on Azure Kubernetes Service (AKS)
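Once credentials for an AKS cluster have been merged into the local kubeconfig (for example with az aks get-credentials), this declarative approach can be scripted with the official Kubernetes Python client; the sketch below simply scales a hypothetical deployment and lists the resulting pods.

```python
from kubernetes import client, config

# Assumes AKS credentials have already been merged into the local kubeconfig
# (e.g. via "az aks get-credentials").
config.load_kube_config()

apps = client.AppsV1Api()

# Scale a (hypothetical) deployment declaratively by patching its replica count.
apps.patch_namespaced_deployment_scale(
    name="orders-api",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# List the pods in the namespace to verify the rollout.
for pod in client.CoreV1Api().list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)
```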

Azure Service Fabric

Another possibility for orchestrating containers is the adoption of the reliable and flexible Azure Service Fabric platform. This is Microsoft's container orchestrator, which allows the deployment and management of microservices in highly intensive cluster environments with very fast deployment times. With this solution you can, within the same application, combine services running in processes and services running in containers. The unique and scalable architecture of Service Fabric allows near-real-time data analysis, in-memory computation, parallel transactions, and event processing in applications. Service Fabric provides a sophisticated and lightweight runtime that supports stateless and stateful microservices. A key differentiator of Service Fabric is its robust support for building stateful services, adopting either Service Fabric's built-in programming models or stateful containerized services. For more information on the application scenarios that can take advantage of Service Fabric stateful services, you can consult this document.

Figure 3 - Azure Service Fabric overview

Azure Service Fabric hosts many Microsoft services, including Azure SQL Database, Azure Cosmos DB, Cortana, Microsoft Power BI, Microsoft Intune, Azure Event Hubs, Azure IoT Hub, Dynamics 365, Skype for Business, and many core Azure services.

Conclusions

Microsoft offers a range of options for running containers in its public cloud. Choosing the solution that best suits your needs among all those offered requires careful evaluation, but provides great flexibility. From serverless approaches, to managed cluster environments for orchestration, up to building your own infrastructure based on virtual machines, you can find the ideal solution for running containers in the Microsoft Azure environment.

Azure IaaS and Azure Stack: announcements and updates (December 2020 – Weeks: 49 and 50)

This series of blog posts includes the most important announcements and major updates regarding Azure infrastructure as a service (IaaS) and Azure Stack, made official by Microsoft in the last two weeks.

Azure

Compute

Azure Dedicated Host: automatic VM placement and Azure Virtual Machine Scale Sets available

You can simplify the deployment and increase the scalability of your Azure Dedicated Hosts environments with two new features now generally available:

  • You can accelerate the deployment of Azure VMs on Dedicated Hosts by letting the platform automatically select the host, within the host group, to which the VM will be deployed.
  • You can also use Virtual Machine Scale Sets in conjunction with Dedicated Hosts. This new capability allows IT organizations to use scale sets across multiple dedicated hosts that are part of a dedicated host group.

New datacenter region in Denmark

Microsoft has announced the most significant investment in its 30-year history in Denmark, introducing Denmark as the location for its next sustainable datacenter region and a comprehensive skilling commitment for an estimated 200,000 Danes by 2024. Powered by 100 percent renewable energy, the datacenter region will provide Danish customers of all sizes faster access to the Microsoft Cloud, world-class security and the ability to store data at rest in the country.

HBv2-series VMs for HPC are now available in UAE North

HBv2 VMs for supercomputing-class HPC are now generally available in the Azure UAE North region.

Storage

Azure Storage blob inventory (preview)

A lot of valuable data is stored in Azure Blob Storage. Customers frequently want to have an overview of their data for business and compliance reasons. The Azure Storage blob inventory feature provides an overview of your blob data within a storage account. Use the inventory report to understand your total data size, age, encryption status, and so on. Enable blob inventory reports by adding a policy to your storage account. Add, edit, or remove a policy by using the Azure portal. Once enabled, an inventory report is automatically created daily.

Azure Storage account recovery available via portal

Azure Storage uses a storage account to contain all of your Azure Storage data, including blobs, files, tables, queues, and disks. Accidentally deleting a storage account deletes all data in the account, and previously this data could not be recovered. Microsoft is announcing that storage account recovery is now available, with some restrictions, via the Azure portal.

 For a storage account to be recoverable:

  • A new storage account with the same name has not been recreated since deletion
  • The storage account was deleted in the last 14 days
  • It is not a classic storage account

Azure Blob Storage NFS 3.0 preview supports general purpose v2 (GPV2) storage accounts

Azure Blob Storage NFS 3.0 preview supports general purpose v2 (GPV2) storage accounts with standard tier performance in the following regions: Australia East, Korea Central, and South Central US. In addition, the NFS 3.0 preview is expanded to support block blob with premium performance tier in all available regions.

Azure Stack

Azure Stack Edge

Virtual Machine Support (public preview)

Azure Stack Edge hosts Azure virtual machines so you can run your VM based IoT, AI, and business applications on an Azure appliance at your location. The system includes deployment and management from the Azure portal, meaning you use the Azure Portal to deploy a VM Image and a VM to your Edge device at your location. Because Azure Stack Edge supports Azure VMs, you can build and test your VM image in Azure before deploying straight to the edge. For local control, ARM compatible APIs and templates can deploy and manage VMs, even when the device is disconnected from Azure.

Kubernetes support is available

Azure Stack Edge includes a managed Kubernetes environment so you can deploy your containerized apps to the edge using this industry standard technology. Just click a button in the Azure Portal and Azure Stack Edge will create a Kubernetes cluster and keep it running for you. Then deploy your Kubernetes apps from the cloud via IoT Edge or Arc enabled Kubernetes. Or use native kubectl tools for local deployment. This makes it simple to have an on-premises Kubernetes environment for your AI, IoT, and modern business applications.

Azure Stack HCI

The new Azure Stack HCI is now generally available

Azure Stack HCI is the new subscription service for hyperconverged infrastructure from Microsoft Azure. Azure Stack HCI brings together the familiarity and flexibility of on-premises virtualization with powerful new hybrid capabilities. With Azure Stack HCI, you can run virtual machines, containers, and select Azure services on-premises with management, billing, and support through the Azure cloud.

Azure Stack Hub

Event Hubs is available

Event Hubs is a reliable and scalable event streaming engine that backs thousands of applications across every kind of industry in Microsoft Azure. Microsoft is announcing the general availability of Event Hubs on Azure Stack Hub. Event Hubs on Azure Stack Hub will allow you to realize cloud and on-premises scenarios that use streaming architectures.

Azure Application delivery: which load balancing service to choose?

The transition to cloud solutions for delivering applications is a trend that is proceeding at a very fast pace, and ensuring fast, secure, and reliable access to such applications is a challenging task that must be addressed by adopting the right solutions. Microsoft Azure provides a wide range of services to ensure optimal application delivery, but in assessing which load-balancing solution to adopt there are several aspects to consider. This article clarifies what you should consider in order to adopt the most suitable Azure solution in this area.

The need to distribute workloads over multiple computing resources may be driven by the need to optimize resource usage, maximize throughput, minimize response times, and avoid overloading any single resource. It can also be aimed at improving application availability by sharing a workload across redundant computing resources.

Azure load balancing services

Azure provides the following load-balancing components.

Azure Load Balancer and cross-region Azure Load Balancer: these are components that enable Layer-4 load balancing for all TCP and UDP protocols, ensuring high performance and very low latency. Azure Load Balancer is a zone-redundant component and therefore provides high availability across availability zones.

Figure 1 – Azure Load Balancer and cross-region Azure Load Balancer overview

Azure Application Gateway: a service managed by the Azure platform, with built-in high availability and scalability. The Application Gateway is an application load balancer (OSI layer 7) for web traffic that allows you to govern HTTP and HTTPS application traffic (URL path, host-based routing, round robin, session affinity, redirection). The Application Gateway can centrally manage certificates for application publishing, using SSL and SSL offload policies when necessary. The Application Gateway can be assigned a private IP address or a public IP address, if the application must be published on the Internet. In the latter case, it is recommended to turn on the Web Application Firewall (WAF), which provides application protection based on OWASP core rule sets. The WAF protects the application from vulnerabilities and common attacks, such as cross-site scripting and SQL injection.

Figure 2 – Azure Application Gateway Overview

Front Door: an application delivery network that provides global load balancing and site acceleration for web applications. It offers Layer-7 capabilities for application publishing, such as SSL offload, path-based routing, fast failover, and caching, in order to improve the performance and high availability of applications.

Figure 3 – Azure Front Door Overview

Traffic Manager: a DNS-based load balancer that enables optimal distribution of traffic to services deployed in different Azure regions, while providing high availability and responsiveness. Different routing methods are available to determine which endpoint to direct traffic to. Because it is DNS-based, failover may not be immediate, due to common challenges related to DNS caching and to systems not honoring DNS TTLs.

Figure 4 – Azure Traffic Manager Overview (performance traffic-routing method)

Things to consider when choosing Azure load balancing services

Each service has its own characteristics, and to choose the most appropriate one it is useful to classify them with respect to the following aspects.

Load-balancing services: global vs regional

  • Global load-balancing: used to distribute traffic to globally distributed backends across multiple regions, which can be deployed in cloud or hybrid environments. Azure Traffic Manager, Azure Front Door, and the cross-region Azure Load Balancer fall into this category.
  • Regional load-balancing: allows you to distribute traffic to virtual machines connected to a specific virtual network or to endpoints in a specific region. This category includes Azure Load Balancer and Azure Application Gateway.

Type of traffic: HTTP(S) vs non-HTTP(S)

Another important differentiating factor in the choice of the load-balancing solution to be adopted is the type of traffic that must be managed:

  • HTTP(S): Layer-7 load-balancing services that accept only HTTP(S) traffic are recommended. Azure Front Door and Azure Application Gateway are suitable for this type of traffic. They are typically used for web applications or other HTTP(S) endpoints and include features such as SSL offload, web application firewall, path-based load balancing, and session affinity.
  • Non-HTTP(S): load-balancing services that can handle non-HTTP(S) traffic are required, such as Azure Traffic Manager, the cross-region Azure Load Balancer, and Azure Load Balancer.

In evaluating the Azure load-balancing service to adopt, it is also appropriate to factor in any additional service-specific requirements of your application.

To facilitate the choice of the load-balancing solution, the following flowchart can be used as a starting point, guiding the choice through a series of key questions:

Figure 5 – Flowchart for choosing the Azure load-balancing solution

Note: this flowchart does not cover the cross-region Azure Load Balancer because, at the moment (11/2020), it is in preview.

This flowchart is a great starting point for your evaluations, but since each application has unique requirements, it is good to carry out a more detailed, application-specific analysis.

If the application consists of multiple workloads, it is appropriate to evaluate each of these separately, as it may be necessary to adopt one or more load balancing solutions.

The various load-balancing services can be used in combination with each other to ensure reliable and secure application access to services provided in IaaS, PaaS, or on-premises environments.

Figure 6 – Possible example of how to combine the various Azure load-balancing services

Conclusions

Thanks to a wide range of global and regional services, Azure can guarantee performance, security, and high availability in application access. There are several elements to evaluate in order to establish the architecture that best meets your needs, but the right combination of Azure application delivery solutions can deliver significant value to IT organizations, ensuring fast, secure, and reliable delivery of applications and user data.

Azure Networking: how to monitor and analyze Azure Firewall logs

In Azure network architectures that include Azure Firewall, the firewall-as-a-service (FWaaS) solution that secures the resources in your virtual networks and governs the related network flows, it becomes strategic to adopt tools to effectively monitor the relevant logs. This article explores how to best interpret Azure Firewall logs and how you can perform in-depth analysis of this component, which often plays a fundamental role in Azure network architectures.

An important aspect to check is that the diagnostic settings of Azure Firewall are correctly configured, so that log data and metrics flow to an Azure Monitor Log Analytics workspace.

Figure 1 – Azure Firewall diagnostic settings

To get an overview of the diagnostic logs and metrics available for Azure Firewall, you can consult the specific Microsoft documentation.
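The diagnostic settings themselves can also be managed programmatically; the sketch below uses the azure-mgmt-monitor Python SDK with placeholder resource IDs and only a subset of the log categories, and both the model and parameter names should be checked against the SDK version in use.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    DiagnosticSettingsResource,
    LogSettings,
    MetricSettings,
)

SUBSCRIPTION_ID = "<subscription-id>"  # placeholders / hypothetical resource IDs
FIREWALL_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-network"
    "/providers/Microsoft.Network/azureFirewalls/fw-hub"
)
WORKSPACE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-monitoring"
    "/providers/Microsoft.OperationalInsights/workspaces/law-central"
)

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Send the application and network rule logs, plus all metrics, to Log Analytics.
settings = DiagnosticSettingsResource(
    workspace_id=WORKSPACE_ID,
    logs=[
        LogSettings(category="AzureFirewallApplicationRule", enabled=True),
        LogSettings(category="AzureFirewallNetworkRule", enabled=True),
    ],
    metrics=[MetricSettings(category="AllMetrics", enabled=True)],
)

client.diagnostic_settings.create_or_update(
    resource_uri=FIREWALL_ID,
    name="toLogAnalytics",
    parameters=settings,
)
```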

One of the most effective ways to view and analyze Azure Firewall logs is to use workbooks, which allow you to combine text, Log Analytics queries, Azure metrics, and parameters, thus creating interactive and easily searchable reports.

For Azure Firewall there is a specific workbook provided by Microsoft that lets you obtain detailed information on events, see which application and network rules were triggered, and view statistics on firewall activity by URL, port, and address.

This workbook can be imported via ARM template or Gallery template, following the instructions in this article.

Figure 2 – Azure Firewall Workbook Import

After completing the import process, you can consult an overview of the different events and log types present (application, network, threat intel, DNS proxy), with the possibility of applying specific filters for workspace, time range, and firewall.

Figure 3 – Azure Firewall Workbook overview

There is a specific Application rule section in the workbook that shows sources by IP address, the use of application rules, and denied and allowed FQDNs. Furthermore, you can apply search filters to the application rule data.

Figure 4 – Azure Firewall Workbook – Application rule log statistics

Furthermore, in the Network rule section you can view information based on rule actions (allow/deny), target ports, and DNAT actions.

Figure 5 – Azure Firewall Workbook – Network rule log statistics
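Outside the workbook, the same application and network rule data can also be queried directly from the Log Analytics workspace; the sketch below uses the azure-monitor-query Python package, and the category and field names reflect how Azure Firewall logs appeared in the AzureDiagnostics table at the time of writing (verify them against your workspace).

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

# Count application/network rule hits in the last 24 hours, split by whether
# the log line reports a Deny or an Allow.
QUERY = """
AzureDiagnostics
| where Category in ("AzureFirewallApplicationRule", "AzureFirewallNetworkRule")
| where TimeGenerated > ago(24h)
| summarize hits = count() by Category, action = iif(msg_s has "Deny", "Deny", "Allow")
| order by hits desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

for table in result.tables:
    print(table.columns)
    for row in table.rows:
        print(list(row))
```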

If Azure Firewall has also been configured to act as a DNS proxy, the “Azure Firewall – DNS Proxy” tab of the workbook also shows information about the DNS traffic and requests handled.

If you need to dig deeper and obtain more information about the communications of specific resources, you can use the Investigation section, acting on the available filters.

Figure 6 – Azure Firewall Workbook – Investigation

To view and analyze activity logs, you can connect Azure Firewall logs to Azure Sentinel, the service that expands the capabilities of traditional SIEM (Security Information and Event Management) products by using the potential of the cloud and artificial intelligence. In this way, through specific workbooks available in Azure Sentinel, you can expand your analytics capabilities and create specific alerts to quickly identify and manage security threats affecting this infrastructure component. To connect Azure Firewall logs to Azure Sentinel, you can follow the procedure in this Microsoft document.

Conclusions

Azure Firewall is a widely used service and is often the centerpiece of a network architecture in Azure, where all network communications transit and are controlled. It therefore becomes important to equip yourself with a tool to analyze the collected metrics and information, capable of providing valid support in resolving any problems and incidents. Thanks to these workbooks you can easily consult the data collected by Azure Firewall, using visually appealing reports, with advanced features that enrich the analysis experience directly from the Azure portal.