Category Archives: Containers

How to increase the security of Azure Kubernetes-based microservices architectures

The spread of new application architectures based on microservices requires the adoption of cutting-edge solutions that ensure a high level of protection and allow you to detect and respond to security threats. Azure Defender offers advanced, targeted protection for resources and workloads both in hybrid environments and in Azure. This article describes how Azure Defender can protect Azure Kubernetes Service (AKS) instances and scan the images in Azure Container Registry to detect vulnerabilities.

Azure Kubernetes Service (AKS) is the fully managed Azure service that allows the activation of a Kubernetes cluster, ideal for simplifying the deployment and management of microservices-based architectures. Thanks to the features offered by AKS, it is possible to scale automatically according to usage, use health checks to ensure the integrity of the services, implement load balancing policies and manage secrets. In microservices-based architectures it is also common to adopt Azure Container Registry, which allows you to create, store and manage container images and artifacts in a private registry. This managed service integrates with container development and deployment pipelines.

Figure 1 – Example of an Azure Kubernetes-based microservices architecture

Azure Defender for Kubernetes

Through continuous analysis of the AKS environment, Azure Security Center (ASC) provides real-time threat protection for containerized environments and generates alerts if threats or malicious activity are detected, both at the host level and at the AKS cluster level.

Protection from security threats for Azure Kubernetes Service takes place at different levels:

  • Host level (provided by Azure Defender for servers): the Linux nodes of the AKS cluster are monitored through the Log Analytics agent. In this way the solution can detect suspicious activities, such as connections from suspicious IP addresses and the presence of web shells. The agent also monitors container-specific activities, such as the creation of privileged containers, access to API servers and the presence of SSH servers running inside a Docker container. The complete list of alerts that can be obtained by enabling host-level protection can be consulted in this document.
  • AKS cluster level (provided by Azure Defender for Kubernetes): at the cluster level, threat protection is based on the analysis of Kubernetes audit logs. This monitoring does not require specific agents and generates alerts by analyzing the AKS managed services, detecting, for example, exposed Kubernetes dashboards and the creation of roles with elevated privileges. To see the complete list of alerts generated by this protection, you can access this link.

In an AKS environment, best practices recommend enabling the Azure Policy add-on for Kubernetes in addition to the Azure Defender threat protection services. In this way, thanks to the interaction between the various platform components, Azure Security Center can analyze the following (a sketch of how the add-on could be enabled is provided after the figure below):

  • Audit logs from API servers
  • Raw security events collected by the Log Analytics agent
  • Information on AKS cluster configuration
  • Workload configurations

Figure 2 – High-level architecture showing the interaction between ASC, AKS and Azure Policy
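
As a reference, the following is a minimal sketch of how the Azure Policy add-on could be enabled on an existing AKS cluster by wrapping the Azure CLI from Python; the resource group and cluster names are hypothetical, and the Azure CLI must already be installed and authenticated.

```python
# Sketch: enable the Azure Policy add-on on an existing AKS cluster.
# Assumes the Azure CLI ("az") is installed and logged in; resource names are hypothetical.
import subprocess

RESOURCE_GROUP = "rg-aks-demo"   # hypothetical resource group
CLUSTER_NAME = "aks-demo"        # hypothetical AKS cluster

def run(cmd):
    """Run an Azure CLI command and fail loudly on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# The Azure Policy add-on deploys the policy components into the cluster and
# reports workload configuration data back to Azure Security Center.
run([
    "az", "aks", "enable-addons",
    "--addons", "azure-policy",
    "--name", CLUSTER_NAME,
    "--resource-group", RESOURCE_GROUP,
])
```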

Azure Defender for container registry

The protection service Azure Defender for container registries allows you to evaluate and manage vulnerabilities in the images stored in Azure Container Registry (ACR). The integrated Qualys scanning tool performs an in-depth scan of images, triggered in three cases:

  • On push: each time an image is pushed to the registry, a scan is performed automatically.
  • On recent pull: because new vulnerabilities are discovered every day, any image pulled in the last 30 days is also re-analyzed.
  • On import: Azure Container Registry provides import tools to bring images into it from Docker Hub, Microsoft Container Registry or other ACR instances. All imported images are promptly analyzed, as shown in the sketch below.
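
As an example of the import scenario, the sketch below wraps the az acr import command from Python to copy a public image into a registry; the registry name is hypothetical, and once the import completes the image is picked up by the scanner.

```python
# Sketch: import a public image into Azure Container Registry, which then
# triggers a vulnerability scan by Azure Defender for container registries.
# Assumes the Azure CLI is installed and logged in; the registry name is hypothetical.
import subprocess

REGISTRY = "acrdemo"  # hypothetical ACR name

subprocess.run([
    "az", "acr", "import",
    "--name", REGISTRY,
    # Source image on Docker Hub; the imported copy will be scanned promptly.
    "--source", "docker.io/library/nginx:latest",
    "--image", "nginx:latest",
], check=True)
```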

During the scan, Qualys pulls the image and runs it in an isolated sandbox to track down any known vulnerabilities.

If any vulnerabilities are found, a notification is generated in the Security Center dashboard. This alert is accompanied by a severity classification and practical guidance on how to remediate the specific vulnerabilities found in each image. To verify which images are supported by the solution, you can access this link.

Figure 3 – High-level diagram showing ACR security using ASC

Activation and costs

The activation of these Azure Defender threat protection services can be done directly from the Azure portal:

Figure 4 – Enabling Kubernetes and ACR Azure Defender Security Services
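
As a rough command-line equivalent of the portal toggle shown above, the two Defender plans could also be enabled at the subscription level with the Azure CLI security commands; in the following Python sketch the plan names follow the az security pricing documentation, but verify them against your environment.

```python
# Sketch: enable the Azure Defender plans for Kubernetes and container registries
# on the current subscription (rough equivalent of the portal toggle).
# Assumes the Azure CLI is installed and logged in; plan names are an assumption
# based on the az security pricing documentation.
import subprocess

for plan in ("KubernetesService", "ContainerRegistry"):
    subprocess.run([
        "az", "security", "pricing", "create",
        "--name", plan,
        "--tier", "standard",
    ], check=True)
```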

The Azure Defender modules in Azure Security Center are subject to specific costs, which can be calculated using the Azure Pricing calculator. In particular, the cost of Azure Defender for Kubernetes is calculated on the number of cores of the VMs that make up the AKS cluster, while the cost of Azure Defender for container registries is calculated based on the number of scanned images.

Conclusions

Thanks to the coverage offered by ASC's Azure Defender services, it is possible to obtain a high degree of protection for microservices-based application architectures that use Azure Kubernetes Service (AKS) and Azure Container Registry. Microsoft proves to be a provider capable of offering effective services for running containers in the cloud, flanked by modern and advanced security tools, useful both to quickly solve any problems in this area and to improve the security posture of your environment.

The possibilities offered by Azure for container execution

The strong trend in application development towards microservices-based architectures makes containers perfect for efficiently deploying software and operating at scale. Containers can run on Windows, Linux and macOS operating systems, on virtual machines or bare metal, in on-premises data centers and, of course, in the public cloud. Microsoft is certainly a leading provider that enables enterprise-level container execution in the public cloud. This article provides an overview of the main solutions that can be adopted to run containers in Microsoft Azure.

Virtual machines

IaaS virtual machines in Azure can provide maximum flexibility to run Docker containers. On Windows and Linux virtual machines it is possible to install the Docker runtime and, thanks to the availability of different combinations of CPU and RAM, you can provision the resources needed to run one or more containers. This approach is typically recommended for dev/test environments, as the cost of configuring and maintaining the virtual machine is not negligible.

Serverless approaches

Azure Container Instances (ACI)

Azure Container Instances (ACI) is the easiest and fastest way in Azure to run on-demand containers in a managed, serverless environment. All this is possible without having to activate specific virtual machines, and the required maintenance is almost negligible. Azure Container Instances is suitable for scenarios that require isolated containers, without the need to adopt a complex orchestration system. ACI in fact provides only some of the basic scheduling features offered by orchestration platforms and, although it does not cover all the valuable services provided by such platforms, it can be seen as a complementary solution.

The top-level resource in Azure Container Instances is the container group, a collection of containers that are scheduled on the same host machine. Containers within a container group share their lifecycle, resources, local network and storage volumes. The container group concept is similar to the pod concept in Kubernetes.

Figure 1 – Container group sample in Azure Container Instances
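
For illustration, here is a minimal sketch that creates a single-container container group with the Azure SDK for Python (azure-mgmt-containerinstance); the subscription ID, resource group, names and region are hypothetical, and the exact model fields may differ slightly between SDK versions.

```python
# Sketch: create a container group with one container in Azure Container Instances.
# Requires: pip install azure-identity azure-mgmt-containerinstance
# Subscription ID, resource group, names and region below are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    ContainerGroup, Container, ResourceRequirements, ResourceRequests,
    ContainerPort, IpAddress, Port, OperatingSystemTypes,
)

client = ContainerInstanceManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="00000000-0000-0000-0000-000000000000",  # hypothetical
)

container = Container(
    name="hello",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)),
    ports=[ContainerPort(port=80)],
)

group = ContainerGroup(
    location="westeurope",
    os_type=OperatingSystemTypes.LINUX,
    containers=[container],
    ip_address=IpAddress(type="Public", ports=[Port(protocol="TCP", port=80)]),
)

# The long-running operation completes when the container group is provisioned.
poller = client.container_groups.begin_create_or_update("rg-aci-demo", "aci-hello", group)
print(poller.result().provisioning_state)
```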

The Azure Container Instances service involves costs that depend on the number of vCPUs and GB of memory allocated per second. For more details on costs, please visit the official Microsoft page.

Azure Web App for Containers

For web-based workloads, there is the ability to run containers on Azure App Service, the Azure web hosting platform, using the Azure Web App for Containers service, with the advantage of being able to exploit the deployment methodologies, scalability and monitoring inherent in the solution.
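
A possible sketch of creating such a web app from Python by wrapping the Azure CLI follows; the plan, app and image names are hypothetical and the container-related flag names may vary between Azure CLI versions.

```python
# Sketch: create a Linux App Service plan and a Web App for Containers
# running a public container image. Assumes the Azure CLI is installed and logged in.
# Resource names are hypothetical.
import subprocess

RESOURCE_GROUP = "rg-webapp-demo"   # hypothetical
PLAN = "plan-linux-demo"            # hypothetical
APP = "webapp-containers-demo"      # hypothetical (must be globally unique)

def az(*args):
    subprocess.run(["az", *args], check=True)

# Linux App Service plan (required for container workloads).
az("appservice", "plan", "create",
   "--name", PLAN, "--resource-group", RESOURCE_GROUP,
   "--is-linux", "--sku", "B1")

# Web App bound to a container image from Docker Hub.
az("webapp", "create",
   "--name", APP, "--resource-group", RESOURCE_GROUP, "--plan", PLAN,
   "--deployment-container-image-name", "nginx:latest")
```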

Azure Batch and Containers

If workloads require you to scale across multiple batch jobs, you can put them in containers and manage scaling through Azure Batch. In this scenario, the combination of Azure Batch and containers turns out to be a winning one. Azure Batch allows the execution and scaling of a large number of batch processing jobs in Azure, while containers provide an easy way to run Batch tasks without having to manage the environment and the dependencies required to run the applications. In these scenarios, it is possible to adopt low-priority VMs with Azure Batch to reduce costs.

Containers orchestration

The automation and management of a large number of containers, and of the ways in which they interact with each other, is known as orchestration. When there is a need to orchestrate multiple containers, it is therefore necessary to adopt more sophisticated solutions, such as Azure Kubernetes Service (AKS) or Azure Service Fabric.

Azure Kubernetes Service (AKS)

Azure Kubernetes Service (AKS) is the fully managed Azure service that allows the activation of a Kubernetes cluster.

Kubernetes, also known as “k8s”, provides automated orchestration of containers, improving reliability and reducing the time and resources required for DevOps activities. Kubernetes simplifies deployments, allowing you to perform rollouts and rollbacks automatically. Furthermore, it improves the management of applications and monitors the status of services to avoid errors during rollout. Among the various functions there are service health checks, with the ability to restart containers that are not running or that are stuck, advertising to clients only the services that have started correctly. Kubernetes also allows you to scale automatically based on usage and, exactly like containers, allows you to manage the cluster environment in a declarative way, enabling version-controlled and easily replicable configuration. A minimal example of this declarative model follows the figure below.

Figure 2 - Example of microservices architecture based on Azure Kubernetes Service (AKS)
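
To make the declarative model and the health checks concrete, the following sketch defines a Deployment with a liveness probe using the official Kubernetes Python client; the image, probe path and replica count are chosen only for illustration.

```python
# Sketch: declarative Deployment with a liveness probe, using the Kubernetes Python client.
# Requires: pip install kubernetes; assumes a kubeconfig pointing at the AKS cluster
# (for example obtained with "az aks get-credentials").
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context

container = client.V1Container(
    name="web",
    image="mcr.microsoft.com/azuredocs/aks-helloworld:v1",  # illustrative workload image
    ports=[client.V1ContainerPort(container_port=80)],
    # Liveness probe: Kubernetes restarts the container if this check keeps failing.
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/", port=80),
        initial_delay_seconds=10,
        period_seconds=15,
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Applying the desired state: the control plane converges the cluster towards it.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```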

Azure Service Fabric

Another possibility for orchestrating containers is the adoption of the reliable and flexible Azure Service Fabric platform. This is Microsoft's container orchestrator, which allows the deployment and management of microservices in high-density cluster environments with very fast deployment times. With this solution you have the opportunity, within the same application, to combine services running in processes and services running inside containers. The unique and scalable architecture of Service Fabric allows you to perform near real-time data analysis, in-memory computation, parallel transactions and event processing in applications. Service Fabric provides a sophisticated and lightweight runtime that supports both stateless and stateful microservices. A key differentiator of Service Fabric is its robust support for building stateful services, either with the built-in Service Fabric programming models or with containerized stateful services. For more information on the application scenarios that can take advantage of Service Fabric stateful services, you can consult this document.

Figure 3 - Azure Service Fabric overview

Azure Service Fabric hosts many Microsoft services, including Azure SQL Database, Azure Cosmos DB, Cortana, Microsoft Power BI, Microsoft Intune, Azure Event Hubs, Azure IoT Hub, Dynamics 365, Skype for Business, and many core Azure services.

Conclusions

Microsoft offers a range of options for running containers in its public cloud. Choosing the solution that best suits your needs among all those offered requires careful evaluation, but allows for a high degree of flexibility. From serverless approaches, to managed cluster environments for orchestration, up to building your own infrastructure based on virtual machines, you can find the ideal solution to run containers in Microsoft Azure.

How to create a Docker environment in Azure using VM Extension

Docker is a software platform that allows you to create, manage and run isolated applications in containers. A container is nothing more than a way of packaging software in a format that allows it to run independently and in isolation on a shared operating system. Unlike virtual machines, containers do not include a complete operating system, but only the libraries and settings needed to run the software. This brings a series of advantages in terms of size, speed, portability and resource management.

Figure 1 – Diagram of containers

 

In Microsoft Azure there are several options for configuring and using Docker containers, which I list briefly:

  • VM Extension: through a specific extension you can implement Docker inside a virtual machine.
  • Azure Container Service: quickly deploys a production-ready Docker Swarm, DC/OS or Kubernetes cluster environment in Azure. This is the most complete solution for container orchestration.
  • Docker EE for Azure: a template available on the Azure Marketplace, born from the collaboration between Microsoft and Docker, which makes it possible to provision a Docker Enterprise Edition cluster integrated with Azure VM Scale Sets, Azure Load Balancers and Azure Storage.
  • RancherOS: a Linux distribution designed to run Docker containers, available as a template within the Azure Marketplace.
  • Web App for Containers: you have the option of using containers, deploying them to the Azure App Service managed platform as a Web App running in a Linux environment.
  • Azure Container Instances (currently in preview): definitely the easiest and quickest way to run a Docker container in the Azure platform, without the need to create virtual machines, ideal in scenarios that require isolated containers.
  • Azure Service Fabric: supports containers in both Windows and Linux environments. The platform natively supports Docker Compose (currently in preview), allowing you to orchestrate container-based applications in Azure Service Fabric.
  • DC/OS on Azure: a managed cloud service that provides an environment for deploying clustered workloads using the DC/OS (Datacenter Operating System) platform.

All these options make it possible to choose, according to your needs and the scenario to be implemented, the most appropriate deployment methodology for running Docker containers in Azure.

In this article we will create a Docker environment in a virtual machine using the Docker extension. Starting from a virtual machine in Azure, you can add the Docker extension, which installs and configures the Docker daemon, the Docker client and Docker Compose.

This extension is supported for the following Linux distributions:

  • Ubuntu 13 or higher.
  • CentOS 7.1 or higher.
  • Red Hat Enterprise Linux (RHEL) 7.1 or higher.
  • CoreOS 899 or higher.
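
Besides the portal procedure described below, the extension could in principle also be added from the command line. The following Python sketch wraps the Azure CLI; both the publisher/extension names and the settings schema are an assumption based on the classic Docker VM extension, so verify them before use (VM and resource group names are hypothetical).

```python
# Sketch: add the Docker VM extension to an existing Linux VM via the Azure CLI.
# Publisher/extension names and settings schema reflect the classic Docker extension
# and are an assumption; VM and resource group names are hypothetical.
import json
import subprocess

RESOURCE_GROUP = "rg-docker-demo"   # hypothetical
VM_NAME = "vm-docker01"             # hypothetical

# Public settings: the port the Docker daemon will listen on (2376 is the default).
settings = {"docker": {"port": "2376"}}

subprocess.run([
    "az", "vm", "extension", "set",
    "--resource-group", RESOURCE_GROUP,
    "--vm-name", VM_NAME,
    "--publisher", "Microsoft.Azure.Extensions",
    "--name", "DockerExtension",
    "--settings", json.dumps(settings),
], check=True)
```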

Adding the extension from the Azure Portal can be done via the following steps. In the Extensions section of the virtual machine, select the Add button:

Figure 2 – Adding Extensions to the VM from the Azure Portal

 

The list of available extensions is then shown; select the Docker extension and press the Create button.

Figure 3 – Selecting the Docker extension

 

To enable secure communication with the Docker system implemented in your Azure environment, you should use certificates and keys issued by a trusted CA. If you do not have a CA to generate these certificates, you can follow the instructions in the section Create a CA, server and client keys with OpenSSL in the official Docker documentation.

 

Figure 4 – Docker communication scheme over the encrypted TLS protocol

 

The extension wizard first requires you to enter the communication port of the Docker Engine (2376 is the default port). The CA certificate, the server certificate and the server key are also requested, in base64-encoded format:

Figure 5 – Parameters required by the wizard to add the Docker VM Extension
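
The base64-encoded values requested by the wizard can be produced from the PEM files generated with the CA; a minimal Python sketch follows, assuming the file names used in the Docker TLS documentation.

```python
# Sketch: base64-encode the CA certificate, server certificate and server key
# so they can be pasted into the Docker VM extension wizard.
# File names follow the convention used in the Docker TLS documentation.
import base64

for pem in ("ca.pem", "server-cert.pem", "server-key.pem"):
    with open(pem, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    print(f"--- {pem} ---")
    print(encoded)
```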

 

Adding the Docker extension takes several minutes, at the end of which the virtual machine will have the latest stable version of Docker Engine installed and the Docker daemon will be listening on the specified port, using the certificates entered in the wizard.

Figure 6 – Details of the Docker extension

 

If you need to allow Docker communication from outside the vNet where the VM running Docker resides, you must configure appropriate rules in the Network Security Group used:

Figure 7 – Example NSG configuration to allow Docker communication (port 2376)
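
A sketch of such a rule created from Python via the Azure CLI is shown below; the NSG and resource group names are hypothetical, and in a real environment you would typically restrict the allowed source addresses.

```python
# Sketch: allow inbound TCP 2376 (Docker daemon TLS port) on the Network Security Group.
# NSG and resource group names are hypothetical; consider restricting the source addresses.
import subprocess

subprocess.run([
    "az", "network", "nsg", "rule", "create",
    "--resource-group", "rg-docker-demo",
    "--nsg-name", "nsg-docker01",
    "--name", "Allow-Docker-TLS",
    "--priority", "1000",
    "--direction", "Inbound",
    "--access", "Allow",
    "--protocol", "Tcp",
    "--destination-port-ranges", "2376",
], check=True)
```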

 

At this point the Docker environment is ready to be used, and communication can be started from a remote client:

Figure 8 – Docker command run from a remote client to retrieve information
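
The same information shown in the figure can also be retrieved programmatically with the Docker SDK for Python, connecting over TLS with the client certificates issued by the CA; the host name and certificate paths below are hypothetical.

```python
# Sketch: query a remote Docker daemon over TLS using the Docker SDK for Python.
# Requires: pip install docker; host name and certificate paths are hypothetical.
import docker

tls_config = docker.tls.TLSConfig(
    ca_cert="ca.pem",
    client_cert=("cert.pem", "key.pem"),
    verify=True,
)

client = docker.DockerClient(
    base_url="tcp://mydockervm.westeurope.cloudapp.azure.com:2376",  # hypothetical FQDN
    tls=tls_config,
)

info = client.info()  # same data returned by "docker info"
print(info["ServerVersion"], info["OperatingSystem"])
```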

 

Conclusions

The Azure Docker VM extension is ideal for easily implementing, in a reliable and secure way, a dev or production Docker environment on a single virtual machine. Microsoft Azure offers a wide range of implementation options related to the Docker platform, with great flexibility in choosing the most appropriate solution for your needs.