Category Archives: Networking

Azure Networking: security services overview

In the modern era of cloud computing, the tendency is to move workloads to the public cloud more frequently and to adopt hybrid cloud scenarios. Security is often perceived as an inhibitor to the use of cloud environments. Can you extend the datacenter to the cloud while maintaining a high level of network security? How do you ensure safe access to services in the cloud, and with which tools? One of the main reasons to use Azure for your own applications and services is the possibility to take advantage of a rich set of security features and tools integrated into the platform. This article provides an overview of the network security services in Azure, reporting guidelines and useful tips to make the best use of the platform and to structure the network in Azure in accordance with security principles.

In the Azure networking area, different services are available to enable connectivity to distinct environments in different modes, to protect the network, and to configure application delivery. All these services are integrated with the monitoring systems offered by Azure, creating a complete ecosystem for the delivery of network services.

Figure 1 – Azure Networking Services

To configure network protection in Azure, the following services are available natively in the platform.

Network Security Group (NSG)

Network Security Groups (NSGs) are the main tool to control network traffic in Azure. Through deny and permit rules you can filter communications between different workloads on an Azure virtual network. Furthermore, you can filter communications with systems that reside on-premises and are connected to the Azure VNet, as well as communications to and from the Internet. Network Security Groups can be applied to a specific subnet of an Azure VNet or directly to the individual network adapters of Azure virtual machines. The advice is to apply them, whenever possible, directly to the subnet, to have comprehensive and more flexible control of the ACLs. NSG rules can contain Service Tags, which let you reference predefined groups of IP addresses, including those assigned to specific Azure services (e.g. AzureMonitor, AppService, Storage).

The rules of Network Security Groups can also reference Application Security Groups (ASGs). These are groups that contain network adapters of virtual machines in Azure. ASGs allow you to group multiple servers under mnemonic names, which is particularly useful for dynamic workloads. Application Security Groups also free you from managing the IP addresses of Azure virtual machines in NSG rules, as long as those IPs belong to VMs attested on the same VNet.

Figure 2 – Example of an NSG rule that contains a Service Tag and an ASG

Figure 3 - Graphical display of network traffic segregation via NSG
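
As a practical complement to the description above, here is a minimal PowerShell sketch (AzureRM module) that creates an NSG rule using the Storage Service Tag and applies the NSG to a subnet. The resource group, VNet, subnet names and address prefixes are illustrative assumptions, not values from this article.

    # Rule that allows outbound HTTPS traffic towards the "Storage" Service Tag
    $rule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-Storage-Https" `
        -Description "Allow outbound traffic to Azure Storage via Service Tag" `
        -Access Allow -Protocol Tcp -Direction Outbound -Priority 100 `
        -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
        -DestinationAddressPrefix "Storage" -DestinationPortRange 443

    # Create the NSG and associate it directly to the subnet, as recommended above
    $nsg  = New-AzureRmNetworkSecurityGroup -ResourceGroupName "rg-network" -Location "westeurope" `
        -Name "nsg-app" -SecurityRules $rule
    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-network" -Name "vnet-hub"
    Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-app" `
        -AddressPrefix "10.0.1.0/24" -NetworkSecurityGroup $nsg | Set-AzureRmVirtualNetwork

Application Security Groups can be referenced in rules in a similar way, when supported by the module version in use.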

Service Endpoints

Through Virtual Network (VNet) service endpoints, you can increase the level of security for Azure services, preventing unauthorized access. VNet service endpoints allow you to isolate Azure services, permitting access to them only from one or more subnets defined in the Virtual Network. This feature also ensures that all traffic generated from the VNet towards the Azure services always remains within the Azure backbone network. For the list of supported services and more details, you can see the Microsoft documentation.

Figure 4 – Summary of Service Endpoints
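
To give an idea of how this is enabled, here is a hedged PowerShell sketch (AzureRM module) that turns on the Microsoft.Storage service endpoint on a subnet; the VNet, subnet and address prefix names are illustrative assumptions.

    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-network" -Name "vnet-hub"

    # Enable the Microsoft.Storage service endpoint on the subnet and persist the change
    Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-app" `
        -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Storage" | Set-AzureRmVirtualNetwork

The target service (for example a storage account) must then be configured to accept traffic only from that subnet, as described in the Microsoft documentation.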

Azure Firewall

Azure Firewall is a stateful firewall, fully integrated into the Microsoft public cloud, that makes it possible to centrally control network communication flows through policy enforcement, across subscriptions and across virtual networks. Azure Firewall also allows you to filter traffic between Azure virtual networks and on-premises networks, interacting with connectivity established through the Azure VPN Gateway and the ExpressRoute Gateway. For more details you can see the article Introduction to Azure Firewall.

Figure 5 – Placement of Azure Firewall

 

Web Application Firewall

Application delivery can be handled with Azure Application Gateway, a service managed by the Azure platform with built-in high availability and scalability. The Application Gateway is an application load balancer (OSI layer 7) for web traffic that allows you to govern HTTP and HTTPS application traffic (URL path, host-based routing, round robin, session affinity, redirection). The Application Gateway can centrally manage the certificates used for application publishing, applying SSL and SSL offload policies where necessary. The Application Gateway can be assigned a private IP address or a public IP address, if the application must be published to the Internet. In the latter case in particular, it is recommended to turn on the Web Application Firewall (WAF), which provides application protection based on the OWASP core rule sets. The WAF protects the application from vulnerabilities and common attacks, such as cross-site scripting and SQL injection.

Figure 6 – Overview of Application Gateway with WAF
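
As a hedged example of enabling the WAF on an existing Application Gateway with the AzureRM PowerShell module (the gateway and resource group names are assumptions; the gateway must use a WAF-tier SKU):

    $appGw = Get-AzureRmApplicationGateway -ResourceGroupName "rg-network" -Name "appgw-web"

    # Enable the WAF in Prevention mode with the OWASP 3.0 rule set
    $appGw = Set-AzureRmApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $appGw `
        -Enabled $true -FirewallMode "Prevention" -RuleSetType "OWASP" -RuleSetVersion "3.0"

    # Persist the configuration on the Application Gateway
    Set-AzureRmApplicationGateway -ApplicationGateway $appGw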

DDoS protection

In Azure, DDoS protection is available in two different tiers: Basic or Standard.

Basic protection is enabled by default in the Azure platform, which constantly monitors traffic and applies mitigations to the most common network attacks in real time. This tier provides the same level of protection adopted and proven by Microsoft's online services and covers Azure public IP addresses (IPv4 and IPv6). No configuration is required for the Basic tier.

Azure DDoS Protection Standard provides additional mitigation features over the Basic tier that are specifically optimized for resources located in Azure virtual networks. The protection policies are self-configured and tuned by performing dedicated monitoring of network traffic and applying machine learning algorithms, which profile the application in the most appropriate and flexible way by studying the traffic it generates. When the thresholds set in the DDoS policy are exceeded, the DDoS mitigation process starts automatically and is suspended when traffic falls back below the established thresholds. These policies are applied to all Azure public IP addresses (IPv4) associated with resources present in the virtual network, such as virtual machines, Azure Load Balancer, Azure Application Gateway, Azure Firewall, VPN Gateway and Azure Service Fabric instances.

For more details about it you can see the article Protection from DDoS attacks in Azure.

Synergies and recommendations for the use of the various protection services

To achieve effective network security and to guide the use of the various components, the main recommendations to consider are the following:

  • Network Security Groups (NSGs) and Azure Firewall are complementary, and using them together provides a higher degree of defense. NSGs are recommended for filtering traffic between resources that reside within a VNet, while Azure Firewall is useful for providing network and application protection between different Virtual Networks.
  • To increase the security of Azure PaaS services, it is advisable to use Service Endpoints, which can be used in conjunction with Azure Firewall to consolidate and centralize access logs. To do this, you can enable the service endpoint on the Azure Firewall subnet and disable it on the subnets present in the spoke VNets.
  • Azure Firewall provides Layer 3 network protection for all ports and protocols and a degree of application protection (Layer 7) for outbound HTTP/S traffic. For this reason, if you want secure application publishing (inbound HTTP/S), you should use the Web Application Firewall included in the Application Gateway, placing it alongside Azure Firewall.
  • Azure Firewall can also be complemented by third-party WAF / DDoS solutions.

All these protection services, suitably configured in a Hub-Spoke network architecture, allow you to segregate network traffic and achieve a high level of control and security.

Figure 7 – Security services in a Hub-and-Spoke architecture

Conclusions

Azure provides a wide range of services that allow you to achieve high levels of security, acting on different fronts. The security model you decide to adopt can be resized and adapted flexibly, depending on the type of application workloads to protect. A winning strategy can be obtained by mixing and matching different network security services, to obtain protection on multiple layers.

Protection from DDoS attacks in Azure

A distributed denial-of-service (DDoS) attack deliberately aims to exhaust the resources of a system that provides a service to clients, such as a website hosted on web servers, to the point where it is no longer able to serve legitimate requests. This article shows the protection features available in Azure for this type of attack, in order to best protect applications in the cloud and ensure their availability against DDoS attacks.

DDoS attacks are becoming more common and sophisticated, and can reach ever larger sizes in terms of bandwidth, which makes protection harder and increases the chances of downtime for published services, with a direct impact on the company business.

Figure 1 – DDoS Attack Trends

This type of attack is also often used by hackers to distract companies and mask other types of cyber attacks (cyber smokescreen).

 

Features of the solution

In Azure, DDoS protection is available in two different tiers: Basic or Standard.

Figure 2 - Comparison of the features available in different tiers for DDoS Protection

Basic protection is enabled by default in the Azure platform, which constantly monitors traffic and applies mitigations to the most common network attacks in real time. This tier provides the same level of protection adopted and proven by Microsoft's online services and covers Azure public IP addresses (IPv4 and IPv6). No configuration is required for the Basic tier.

Azure DDoS Protection Standard provides additional mitigation features over the Basic tier that are specifically optimized for resources located in Azure virtual networks. The protection policies are self-configured and tuned by performing dedicated monitoring of network traffic and applying machine learning algorithms, which profile the application in the most appropriate and flexible way by studying the traffic it generates. When the thresholds set in the DDoS policy are exceeded, the DDoS mitigation process starts automatically and is suspended when traffic falls back below the established thresholds. These policies are applied to all Azure public IP addresses (IPv4) associated with resources present in the virtual network, such as virtual machines, Azure Load Balancer, Azure Application Gateway, Azure Firewall, VPN Gateway and Azure Service Fabric instances. This protection does not apply to App Service Environments.

Figure 3 – Overview of Azure DDoS Protection Standard

The Azure DDoS Protection Standard is able to cope with the following attacks:

  • Volumetric attacks: the goal of these attacks is to flood the network with a considerable amount of seemingly legitimate traffic (UDP floods, amplification floods, and other spoofed-packet floods).
  • Protocol attacks: these attacks aim to make a specific destination unreachable by exploiting weaknesses in layer 3 and layer 4 of the stack (for example SYN flood attacks and reflection attacks).
  • Resource (application) layer attacks: these attacks target web application packets in order to stop the transmission of data between systems. Attacks of this type include HTTP protocol violations, SQL injection, cross-site scripting and other layer 7 attacks. DDoS Protection Standard alone is not sufficient against attacks of this type; it must be used in conjunction with the Web Application Firewall (WAF) available in Azure Application Gateway, or with a third-party web application firewall solution available in the Azure Marketplace.

 

Enabling DDoS protection Standard

DDoS Protection Standard is enabled on the virtual network and covers all resources that reside in it. Activating Azure DDoS Protection Standard requires you to create a DDoS Protection Plan, which collects the virtual networks with DDoS Protection Standard active, across subscriptions.

Figure 4 – Creating a DDoS Protection Plan

The protection plan is created in a specific subscription, which is charged the cost of the solution.

Figure 5 – Enabling DDoS protection Standard on an existing Virtual Network
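
The same operations shown in the screenshots can be scripted. A hedged AzureRM PowerShell sketch follows; the resource names are illustrative and the property names reflect the module of the time, so they should be verified against the current documentation.

    # Create the DDoS Protection Plan (Standard)
    $plan = New-AzureRmDdosProtectionPlan -ResourceGroupName "rg-network" -Name "ddos-plan" -Location "westeurope"

    # Enable DDoS Protection Standard on an existing virtual network
    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-network" -Name "vnet-hub"
    $vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
    $vnet.DdosProtectionPlan.Id = $plan.Id
    $vnet.EnableDdosProtection = $true
    Set-AzureRmVirtualNetwork -VirtualNetwork $vnet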

The Standard tier provides real-time telemetry that can be consulted through views in Azure Monitor.

Figure 6 – DDoS Metrics available in Azure Monitor

Any of the DDoS protection metrics can be used to generate alerts. Using the metric "Under DDoS attack" you can be notified when an attack is detected and a DDoS mitigation action is applied.

DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP and UDP) for each public IP address associated with a protected resource, that is, one residing in a virtual network with the DDoS Standard service active.

Figure 7 – Monitor mitigation metrics available in Azure

To generate reports on the actions taken to mitigate DDoS attacks, you must configure the diagnostic settings.

Figure 8 – Diagnostics Settings in Azure Monitor

Figure 9 - Enable diagnostics of Public IP to collect logs DDoSMitigationReports

In the diagnostic settings it is also possible to collect other logs relating to mitigation activities and notifications. For more information you can see Configure DDoS attack analytics in the Microsoft documentation. The metrics for DDoS Protection Standard are retained in Azure Monitor for 30 days.

Figure 10 – Attack flow logs in Azure Log Analytics

How to test the effectiveness of the solution

Microsoft has partnered with BreakingPoint Cloud which, thanks to a very intuitive interface, allows you to generate traffic towards Azure public IPs to simulate a DDoS attack. In this way you can:

  • Validate the effectiveness of the solution.
  • Simulate and optimize responses to incidents related to DDoS attacks.
  • Document the compliance level for attacks of this type.
  • Train the network security team.

Costs of the solution

The Basic tier has no cost, while enabling DDoS Protection Standard requires a fixed monthly price (not negligible) plus a charge for the data processed. The fixed monthly price includes protection for 100 resources; above that, there is an additional unit cost for each protected resource. For more details on Azure DDoS Protection Standard costs you can see Microsoft's official pricing page.

Conclusions

DDoS protection in Azure means a basic level of protection against such attacks is always active. Depending on the criticality of the application, the Standard protection can be evaluated, which, in conjunction with a web application firewall solution, provides full functionality to mitigate distributed denial-of-service attacks.

Azure Virtual WAN: introduction to the solution

Azure Virtual WAN is a new network service that allows you to optimize and automate branch-to-branch connectivity through Azure. Thanks to this service you can connect and configure branch network devices to communicate with Azure (branch-to-Azure). This article examines the components involved in Azure Virtual WAN and shows the procedure to follow for its configuration.

 

Figure 1 – Azure Virtual WAN overview

The Azure Virtual WAN configuration includes the creation of the following resources.

 

Virtual WAN

The Virtual WAN resource represents a virtual overlay of the Azure network and collects several components. It is an umbrella resource that contains links to all the virtual hubs that you want to have inside the Virtual WAN. Virtual WAN resources are isolated from each other and cannot contain a common hub.

Figure 2 – Start the process of creating Azure Virtual WAN

Figure 3 – Creating Azure Virtual WAN

When creating the Virtual WAN resource you are prompted to specify a location. In reality it is a global resource that does not reside in a particular region; you are prompted to specify one only to be able to manage and locate it more easily.

Enabling the option Network traffic allowed between branches associated with the same hub allows traffic between the various sites (VPN or ExpressRoute) associated with the same hub (branch-to-branch).

Figure 4 – Branch-to-branch connectivity option

 

Site

The site represents the on-premises environment. You will need to create as many sites as there are physical locations. For example, if you have a branch office in Milan, one in New York and one in London, you will need to create three separate sites, each containing the endpoints of the network devices used to establish communication. If you use network equipment from a Virtual WAN partner, the partner solution can natively export this information into the Azure environment.

Figure 5 – Creating a site

In the advanced settings you can enable BGP, which, once activated, becomes valid for all connections created for the specific site. Among the optional fields you can specify device information, which may help the Azure team with any future enhancements or support requests.

 

Virtual Hub

A Virtual Hub is a Microsoft-managed virtual network. The hub is the core network component in a given region and there can be only one hub per Azure region. The hub contains different service endpoints to establish connectivity with the on-premises environment. Creating a Virtual Hub involves the generation of a new VNet and, optionally, a new VPN Gateway. The hub gateway is not the classic virtual network gateway used for ExpressRoute and VPN connectivity; it is used to create a site-to-site connection between the on-premises environment and the hub.

Figure 6 – Creating a Hub

Figure 7 -Association of the site with a Hub

Hubs should be associated with sites residing in the same region as the VNets.

 

Hub virtual network connection

The Hub virtual network connection resource is used to connect the hub with a virtual network. Currently you can create connections (peering) only with virtual networks that reside in the same region as the hub.

Figure 8 – Connection of the VNet to a hub

Configuring the VPN device on-prem

To configure the on-premises VPN device you can proceed manually, or, if you are using a Virtual WAN partner solution, the configuration of the VPN devices can happen automatically. In the latter case the device controller gets the configuration file from Azure and applies the configuration to the devices, avoiding manual configuration and saving time. The Virtual WAN partners currently include: Citrix, Riverbed, 128 Technology, Barracuda, Check Point, NetFoundry and Palo Alto Networks. This list is expected to expand soon with more partners.

Selecting Download VPN configuration creates a storage account in the resource group 'microsoft-network-[location]', from which you can download the configuration for the on-premises VPN device. That storage account can be removed after retrieving the configuration file.

Figure 9 - Download the VPN configuration

Figure 10 – Download the configuration file on the storage account

After configuration of the on-prem device, the site will be connected, as shown in the following figure:

Figure 11 - State of the connected site

It is also possible to establish ExpressRoute connections with Virtual WAN, by associating the ExpressRoute circuit with the hub, and to have Point-to-Site (P2S) connections towards the virtual hub. These features are currently in preview.

The Health section contains useful information to check the connectivity for each Hub.

Figure 12 – Check Hub health

 

Conclusions

Virtual WAN is the new Azure service that enables centralized, simple and fast connection of several branches, both with each other and with the Microsoft public cloud. This service provides a great connectivity experience by taking advantage of the Microsoft global network, which reaches more regions around the world than any other public cloud provider.

Azure Networking: characteristics of Global VNet peering

Virtual Networks in Azure are logically isolated, to allow you to securely connect different Azure resources. Global VNet peering in Azure makes it possible to connect virtual networks residing in different Azure regions. This article discusses the benefits and current constraints of Global VNet peering and shows the procedure for activating it.

Figure 1 - Sample connection of two Azure VNet in different regions

When you configure peering between two virtual networks that reside in different Azure regions, you expand the logical boundary, and virtual machines attested on these VNets can communicate with each other using their private IP addresses, without gateways or public IP addresses. Furthermore, you can use the hub-and-spoke network model to share resources such as firewalls or virtual appliances, even connecting virtual networks in different Azure regions through Global VNet peering.

Benefits of Global VNet peering

The main benefits that can be obtained using the Global VNet peering to connect virtual networks are:

Figure 2 – Microsoft's backbone network

  • You get low-latency, high-bandwidth connectivity.
  • Setup is simple and does not require gateways to establish VPN tunnels between the networks.

 

Current constraints of Global VNet peering

Currently there are some constraints to take into account when configuring a Global VNet peering:

  • With a Global VNet peering you cannot use remote gateways (option "Use remote gateways") and you cannot allow gateway transit (option "Allow gateway transit") on the virtual network. These options are currently usable only when peering virtual networks that reside within the same Azure region. It follows that Global VNet peerings are not transitive: a downstream VNet in one region cannot communicate, through this mechanism, with a VNet peered to another region. For example, assume a scenario where there is a Global VNet peering between vNet1 and vNet2 and another Global VNet peering between vNet2 and vNet3. In this case there will be no communication between vNet1 and vNet3. If needed, you can create an additional Global VNet peering to connect them.
  • A resource that resides on a virtual network is not allowed to communicate using the IP address of an internal Azure load balancer that resides on the peered virtual network. This type of communication is allowed only if the source IP and the IP address of the load balancer are in the same VNet.
  • Global VNet peering can be created in all Azure public regions, but not with VNets residing in Azure national clouds.
  • Global VNet peering can be created between VNets residing in different subscriptions, as long as they are associated with the same Azure Active Directory tenant.
  • Virtual networks can be peered only if their address spaces do not overlap. Furthermore, once the peering has been created, you must remove it before you can modify the address space.

 

Global VNet peering configuration

Configuring Global VNet peering is extremely simple. The following images document the steps to connect two virtual networks created in different regions, in this case West Europe and Australia Southeast.

Figure 3 - Adding peering from VNet settings

Figure 4 - Configure the peering parameters

Selecting "Allow virtual network access", allows communication between two virtual networks. With this setting the address space of the VNet in peer is added to the tag Virtual_Network.

Figure 5 – Peering added, in state Initiated

The same operations, documented in Figure 3 and Figure 4, must be repeated on the virtual network that resides in the other region and with which you want to configure the Global VNet peering. Communication is activated when the peering status is "Connected" on both VNets.

Figure 6 – Peering in state Connected
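
The same configuration can also be scripted. A minimal PowerShell sketch (AzureRM module) follows; the resource group and VNet names are illustrative assumptions.

    # VNets in West Europe and Australia Southeast (illustrative names, non-overlapping address spaces)
    $vnetWE  = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-we"  -Name "vnet-we"
    $vnetAUS = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-aus" -Name "vnet-aus"

    # First direction: the peering appears in the "Initiated" state
    Add-AzureRmVirtualNetworkPeering -Name "we-to-aus" -VirtualNetwork $vnetWE -RemoteVirtualNetworkId $vnetAUS.Id

    # Reverse direction: once created, both peerings move to the "Connected" state
    Add-AzureRmVirtualNetworkPeering -Name "aus-to-we" -VirtualNetwork $vnetAUS -RemoteVirtualNetworkId $vnetWE.Id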

Selecting a virtual machine attested on a virtual network configured with Global VNet peering, you will see a specific route for the associated VNet, as shown in the following figure:

Figure 7 – Effective route of the VNet in Global Peering

Global VNet peering incurs charges for inbound and outbound network traffic transiting the peering, and the cost varies depending on the regions involved. For more details you can refer to the official pricing page.

 

Conclusions

Global VNet peering gives you great flexibility in connecting different workloads in a simple and efficient way, expanding the implementation scenarios possible in Azure without having to treat geographical boundaries as a limit. Significant benefits can be obtained in particular in data replication and disaster recovery architectures.

Azure Networking: introduction to the Hub-Spoke model

A network topology increasingly adopted by Microsoft Azure customers is the so-called Hub-Spoke topology. This article lists the main features of this network architecture, examines the most common use cases, and shows the main advantages that can be obtained thanks to this architecture.

The Hub-Spoke topology

In a Hub-Spoke network architecture, the Hub is a virtual network in Azure that serves as the point of connectivity to the on-premises network. This connectivity can be established through a site-to-site VPN or through ExpressRoute. The Spokes are virtual networks peered with the Hub and can be used to isolate workloads.

The basic scheme of the architecture:

Figure 1 – Hub-Spoke basic network architecture

This architecture is also designed to host a network virtual appliance (NVA) in the Hub network, to control the flow of network traffic in a centralized way.

Figure 2 - Possible architecture of Hub vNet in the presence of NVA

In this regard, it should be noted that Microsoft recently announced the availability of Azure Firewall, a new managed service fully integrated into the Microsoft public cloud, that allows you to secure the resources present on Azure Virtual Networks. At the moment the service is in preview, but soon it will be possible to consider adopting Azure Firewall to centrally control network communication flows, through policy enforcement, across subscriptions and across virtual networks. In Hub-Spoke network architectures, this service lends itself to being placed in the Hub network, in order to obtain complete control of network traffic.

Figure 3 - Positioning Azure Firewall in the Hub Network

For additional details on Azure Firewall you can see Introduction to Azure Firewall.

When you can use the Hub-Spoke topology

The network architecture Hub-Spoke is typically used in scenarios where these characteristics are required in terms of connectivity:

  • Workloads deployed in different environments (development, testing and production) that require access to shared services such as DNS, IDS, or Active Directory Domain Services (AD DS). The shared services are placed in the Hub virtual network, while the various environments (development, testing and production) are deployed in spoke networks to maintain a high level of isolation.
  • Workloads that must not communicate with all other workloads, but only with the shared services.
  • Organizations that require a high level of control over network security aspects and need to segregate network traffic.

Figure 4 – Hub-Spoke architecture design with its components

The advantages of the Hub-Spoke topology

The advantages of this Azure network topology can be summarized as:

  • Cost savings, because shared services, such as DNS servers and any virtual appliances, can be centralized in one place and used by multiple workloads. It also reduces the number of VPN gateways needed to provide connectivity to the on-premises environment, with consequent cost savings in Azure.
  • Granular separation of duties between IT (SecOps, InfraOps) and workloads (DevOps).
  • Greater flexibility in terms of management and security of the Azure environment.

Useful references for further reading

The following references to the Microsoft technical documentation are useful for further investigation of this topic:

Conclusions

One of the first aspects to consider when implementing solutions in the cloud is the network architecture to adopt. Establishing the most appropriate network topology from the beginning gives you a winning strategy and avoids having to migrate workloads later in order to adopt a different network architecture, with all the complications that would ensue.

Each implementation requires careful analysis in order to take all aspects into account and make the appropriate assessments. It is therefore not possible to assert that the Hub-Spoke network architecture is suitable for all scenarios, but it certainly introduces several benefits that make it effective for obtaining certain characteristics and a high level of flexibility.

What's new in Virtual Machine Manager 1807

Following the first Semi-Annual Channel release of System Center, which took place in February with version 1801, the new update release System Center 1807 was released in June. This article explores the new features introduced in System Center Virtual Machine Manager (SCVMM) with the update release 1807.

Networking

Information related to the physical network

SCVMM 1807 introduces an important improvement in the networking area. Using the Link Layer Discovery Protocol (LLDP), SCVMM can provide information regarding the connectivity of Hyper-V hosts to the physical network. These are the details that SCVMM 1807 can retrieve:

  • Chassis ID: Switch chassis ID
  • Port ID: The Switch port to which the NIC is connected
  • Port Description: Details on the port, such as the Type
  • System Name Manufacturer: Manufacturer and Software version details
  • System Description
  • Available Capabilities: Available features of the system (such as switching, Routing)
  • Enabled Capabilities: Features that are enabled on the system (such as switching, Routing)
  • VLAN ID: Virtual LAN identifier
  • Management Address: management IP address

This information can be consulted via PowerShell or through the SCVMM console: View > Host > Properties > Hardware Configuration > Network adapter.

Figure 1 – Information provided by SCVMM 1807 regarding the physical connectivity of Hyper-V hosts

These details are very useful for gaining visibility into the physical network and for facilitating troubleshooting. This information will be made available for Hyper-V hosts that meet the following requirements:

  • Windows Server 2016 operating system or later.
  • The DataCenterBridging and DataCenterBridging-LLDP-Tools features enabled.

Conversion SET in Logical Switch

SCVMM 1807 can convert a Hyper-V virtual switch created in Switch Embedded Teaming (SET) mode into a logical switch, directly from the SCVMM console. In previous versions of SCVMM this operation was feasible only through PowerShell commands. This conversion can be useful to generate a logical switch that can be used as a template on the different Hyper-V hosts managed by SCVMM. For more information on Switch Embedded Teaming (SET) you can consult the article Windows Server 2016: the new Virtual Switch in Hyper-V.

Support for host VMware ESXi v6.5

SCVMM 1807 introduces support for VMware ESXi 6.5 hosts within its fabric. In my experience, even in environments consisting of multiple hypervisors, SCVMM is rarely used to manage VMware hosts. This support is nevertheless important because it introduces the ability to convert VMs hosted on VMware ESXi 6.5 hosts into Hyper-V VMs.

 

Storage

Support for selecting the CSV to use when adding a new disk

SCVMM 1807 allows you to specify, when adding a new virtual disk to an existing VM, on which cluster shared volume (CSV) to place it. In previous releases of VMM this possibility was not provided, and by default new virtual disks were placed on the same CSV that hosted the disks already associated with the virtual machine. In some circumstances, such as when a CSV has little free space available, this behavior could be inadequate and inflexible.

Figure 2 – Adding a new disk to a virtual machine by selecting which CSV place it

Support for the update of cluster Storage Spaces Direct (S2D)

Virtual Machine Manager 2016 already supports the deployment of Storage Spaces Direct (S2D) clusters. SCVMM 1807 also introduces the possibility of patching and updating Storage Spaces Direct cluster nodes, orchestrating the entire update process, which uses the baseline configured in Windows Server Update Services (WSUS). This feature allows you to manage the Storage Spaces Direct environment more effectively, a cornerstone of Microsoft's Software Defined Storage, which leads to the achievement of the Software Defined Data Center.

 

Statement of support

Support for SQL Server 2017

SCVMM 1807 introduces support for SQL Server 2017 to host its database. This allows you to upgrade from SQL Server 2016 to SQL Server 2017.

 

Conclusions

The update release 1807 introduces several innovations in Virtual Machine Manager that greatly enrich it in terms of functionality. Furthermore, this update also addresses a number of issues listed in Microsoft's official documentation. It is therefore advisable to evaluate updating existing Virtual Machine Manager deployments, for greater stability and to take advantage of the new features introduced. Remember that releases belonging to the Semi-Annual Channel have a support period of 18 months.

To try out System Center Virtual Machine Manager, you can access the Evaluation Center and, after registration, start the trial period.

Introduction to Azure Firewall

Microsoft recently announced the availability of a long-awaited service requested by users of systems in the Azure environment: Azure Firewall. Azure Firewall is a new managed service, fully integrated into the Microsoft public cloud, that allows you to secure the resources present on Azure Virtual Networks. This article looks at the main features of this new service, currently in preview, and shows the procedure to follow for its activation and configuration.

Figure 1 – Positioning of Azure Firewall in network architecture

Azure Firewall is a stateful firewall that makes it possible to centrally control network communication flows through policy enforcement, across subscriptions and across virtual networks. In hub-and-spoke network architectures, this service lends itself to being placed in the hub network, in order to obtain complete control of the traffic.

The Azure Firewall features, currently available in this phase of public preview, are the following:

  • Built-in high availability (HA): high availability is integrated into the service and no specific configurations or add-ons are required to make it effective. This clearly distinguishes it from third-party solutions, where configuring Network Virtual Appliances (NVAs) in HA typically requires additional load balancers.
  • Unrestricted cloud scalability: Azure Firewall can scale easily to adapt to any change in network flows.
  • FQDN filtering: you have the option to restrict outbound HTTP/S traffic to a specific list of fully qualified domain names (FQDNs), with the ability to use wildcard characters when creating rules.
  • Network traffic filtering rules: you can create allow or deny rules to filter network traffic based on source IP address, destination IP address, ports and protocols.
  • Outbound SNAT support: Azure Firewall is assigned a static public IP address, which is used for outbound traffic (Source Network Address Translation) generated by the resources of the Azure virtual network, allowing easy identification by remote Internet destinations.
  • Azure Monitor logging: all Azure Firewall events can be integrated with Azure Monitor. In the diagnostic log settings you can archive the logs in a storage account, stream them to an Event Hub, or send them to an OMS Log Analytics workspace.

Azure Firewall is currently in a managed public preview, which means that to deploy it you must explicitly enable it via the PowerShell cmdlet Register-AzureRmProviderFeature.

Figure 02 – PowerShell commands for enabling the public preview of Azure Firewall

Feature registration can take up to 30 minutes and you can monitor the status of registration with the following PowerShell commands:

Figure 03 – PowerShell commands to verify the status of enabling Azure Firewall

After registration, you must run the following PowerShell command:

Figure 04 – Registration command of Network Provider
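
Since the commands above appear only as screenshots, here is a hedged transcription of the enable sequence in PowerShell; the feature name is the one I recall being used during the public preview, so verify it against the figures and the current documentation.

    # Request enrollment in the Azure Firewall public preview (feature name to be verified)
    Register-AzureRmProviderFeature -FeatureName "AllowAzureFirewall" -ProviderNamespace "Microsoft.Network"

    # Registration can take up to 30 minutes; repeat until the state is "Registered"
    Get-AzureRmProviderFeature -FeatureName "AllowAzureFirewall" -ProviderNamespace "Microsoft.Network"

    # Finally, re-register the network resource provider
    Register-AzureRmResourceProvider -ProviderNamespace "Microsoft.Network"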

Deploying Azure Firewall in a specific virtual network requires the presence of a subnet called AzureFirewallSubnet, which must be configured with a subnet mask of at least /25.

Figure 05 – Creation of the subnet AzureFirewallSubnet

To deploy Azure Firewall from the Azure portal, select Create a resource, then Networking, and then See all:

Figure 06 - Search Azure Firewall in Azure resources

Filtering for "Firewall" will also show the new Azure Firewall resource:

Figure 07 – Microsoft Firewall resource selection

By starting the creation process you will see the following screen that prompts you to enter the necessary parameters for the deployment:

Figure 08 – Parameters required for the deployment of the Firewall

Figure 09 – Review of selected parameters and confirmation of creation

To send the outbound traffic of a given subnet to the firewall, you must create a route table that contains a route with the following characteristics:

Figure 10 - Creation of the Rule of traffic forwarding to the Firewall Service

Although Azure Firewall is a managed service, you must specify Virtual appliance as next hop. The address of the next hop will be the private IP of Azure Firewall.

The route table must be associated with the virtual network that you want to control with Azure Firewall.

Figure 11 - Association of the route table to the subnet
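
A hedged PowerShell sketch of the route table configuration described above follows; the names, address prefixes and the firewall private IP are illustrative assumptions.

    # Route all outbound traffic of the subnet to the private IP of Azure Firewall
    $route = New-AzureRmRouteConfig -Name "DefaultToAzureFirewall" -AddressPrefix "0.0.0.0/0" `
        -NextHopType VirtualAppliance -NextHopIpAddress "10.0.0.4"

    $routeTable = New-AzureRmRouteTable -ResourceGroupName "rg-network" -Location "westeurope" `
        -Name "rt-to-firewall" -Route $route

    # Associate the route table with the subnet to be controlled
    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-network" -Name "vnet-hub"
    Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-app" `
        -AddressPrefix "10.0.1.0/24" -RouteTable $routeTable | Set-AzureRmVirtualNetwork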

At this point, outbound traffic is not allowed for the systems on the subnet that forwards traffic to the firewall, until it is explicitly enabled:

Figure 12 – Attempt to access a website blocked by Azure Firewall

Azure Firewall provides the following types of rules to control outbound traffic.

Figure 13 – The available rule Types

  • Application rules: to configure access to specific fully qualified domain names (FQDNs) from a given subnet.

Figure 14 - Creating Application rule to allow access to a specific website

  • Network rules: allow you to configure rules that specify the source address, the protocol, and the destination address and port.

Figure 15 – Creating Network rule to allow traffic on port 53 (DNS) towards a specific DNS Server
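
The rules shown in the figures can also be created with PowerShell. The following is a hedged sketch based on the AzureRM cmdlets available during the preview; the firewall name, source ranges, FQDN and DNS server address are illustrative assumptions.

    $azFw = Get-AzureRmFirewall -ResourceGroupName "rg-network" -Name "azure-firewall"

    # Application rule: allow HTTPS towards a specific FQDN from a given subnet
    $appRule = New-AzureRmFirewallApplicationRule -Name "Allow-ContosoSite" `
        -SourceAddress "10.0.1.0/24" -TargetFqdn "www.contoso.com" -Protocol "https:443"
    $appColl = New-AzureRmFirewallApplicationRuleCollection -Name "App-Allow" -Priority 200 `
        -Rule $appRule -ActionType "Allow"

    # Network rule: allow DNS (UDP 53) towards a specific DNS server
    $netRule = New-AzureRmFirewallNetworkRule -Name "Allow-DNS" -Protocol "UDP" `
        -SourceAddress "10.0.1.0/24" -DestinationAddress "8.8.8.8" -DestinationPort 53
    $netColl = New-AzureRmFirewallNetworkRuleCollection -Name "Net-Allow" -Priority 200 `
        -Rule $netRule -ActionType "Allow"

    # Attach the collections to the firewall and persist the configuration
    $azFw.AddApplicationRuleCollection($appColl)
    $azFw.AddNetworkRuleCollection($netColl)
    Set-AzureRmFirewall -AzureFirewall $azFw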

Conclusions

The availability of a firewall fully integrated into the Azure fabric is certainly an important advantage that enriches the capabilities provided natively by Azure. At the moment only basic operations can be configured, but the feature set is definitely destined to grow quickly. Please note that this service is currently in preview: no service level agreement is guaranteed and its use in production environments is not recommended.

Azure Application Gateway: monitoring with Log Analytics

Azure Application Gateway is an application load balancer (OSI layer 7) for web traffic, available in the Azure environment, that manages the HTTP and HTTPS traffic of applications. This article discusses how to monitor Azure Application Gateway using the features that Log Analytics provides.

Figure 1 - Azure Application Gateway basic schema

Using the Azure Application Gateway you can take advantage of the following features:

  • URL-based routing
  • Redirection
  • Multiple-site hosting
  • Session affinity
  • Secure Sockets Layer (SSL) termination
  • Web application firewall (WAF)
  • Native support for WebSocket and HTTP/2 protocols

More details on Azure Application Gateway can be found in Microsoft's official documentation.

Configuring Diagnostics logs for the Application Gateway

Azure Application Gateway can send diagnostic logs to a Log Analytics workspace. This feature is very useful for checking performance and detecting errors, and is essential for troubleshooting, in particular when the WAF module is present. To enable diagnostics from the Azure portal, select the Application Gateway resource and go to the "Diagnostics logs" section:

Figure 2 – Starting configuration of Diagnostics logs

Figure 3 – Configuring Diagnostics logs

After choosing the Log Analytics workspace where diagnostics data will be sent, in the Log section you can select which types of log to collect among the following:

  • Access log (ApplicationGatewayAccessLog)
  • Performance log (ApplicationGatewayPerformanceLog)
  • Firewall log (ApplicationGatewayFirewallLog): these logs are generated only if the Web Application Firewall is configured on the Application Gateway.

In addition to these logs, the Activity Log generated by Azure is also collected by default. These logs are retained for 90 days in the Azure event log store. For more details you can refer to this specific document.
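
Enabling the diagnostic settings can also be scripted. Here is a hedged AzureRM PowerShell sketch, with illustrative resource names and the log categories listed above.

    $appGw     = Get-AzureRmApplicationGateway -ResourceGroupName "rg-network" -Name "appgw-web"
    $workspace = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName "rg-monitor" -Name "la-workspace"

    # Send the Application Gateway diagnostic logs to the Log Analytics workspace
    Set-AzureRmDiagnosticSetting -ResourceId $appGw.Id -WorkspaceId $workspace.ResourceId -Enabled $true `
        -Categories "ApplicationGatewayAccessLog","ApplicationGatewayPerformanceLog","ApplicationGatewayFirewallLog"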

Azure Application Gateway analytics solution of Log Analytics

Microsoft offers the Azure Application Gateway analytics solution, which can be added to the Log Analytics workspace by following these simple steps:

Figure 4 - Launching the procedure of adding the solution to the OMS workspace

Figure 5 – Selection of the Azure Application Gateway analytics solution

Figure 6 - Addition of the solution in the selected workspace

After enabling the sending of diagnostic logs to the Log Analytics workspace and adding the solution to it, by selecting the Azure Application Gateway analytics tile on the Overview page you can see an overview of the log data collected from the Application Gateway:

Figure 7 – Screen overview of the Azure Application Gateway analytics solution

You can also view the details for the following categories.

  • Application Gateway Access logs:
    • Client and server errors for Application Gateway access logs
    • Requests per hour for each Application Gateway
    • Failed requests per hour for each Application Gateway
    • Errors by user agent for Application Gateways

Figure 8 - Screenshot of the Application Gateway Access logs

  • Application Gateway performance:
    • Host health for Application Gateway
    • Maximum and 95th percentile for Application Gateway failed requests

Figure 9 – Screenshot of the Application Gateway performance

Customized dashboard of Log Analytics for the Application Gateway monitor

In addition to this solution, it can also be convenient to use a dedicated Log Analytics dashboard, specifically for monitoring the Application Gateway, available at this link. The dashboard is deployed via an ARM template and also requires the Application Gateway diagnostic logs to be enabled, as described above. The various Log Analytics queries used by the dashboard are documented in this blog. Thanks to these queries, the dashboard shows additional information exposed by the Application Gateway diagnostics.

Figure 10 – Custom dashboard of Log Analytics for Application Gateway monitoring

Query of Log Analytics to monitor the Firewall Log

Neither the Azure Application Gateway analytics solution of Log Analytics nor the custom dashboard (mentioned in the previous paragraph) currently covers the Firewall log, generated when the Web Application Firewall (WAF) is active on the Application Gateway. The WAF is based on the OWASP Core Rule Set 3.0 or 2.2.9 rules to intercept attacks against web applications that exploit known vulnerabilities, such as SQL injection and cross-site scripting.

In this case, if you want to check the Firewall log, you must query Log Analytics directly, for example:

Figure 11 – The Query to retrieve blocked requests by the WAF module, over the past 7 days, for a specific URI, divided by RuleID
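
For reference, the query in the figure can be reproduced roughly as follows; the table and field names (AzureDiagnostics, action_s, ruleId_s, requestUri_s) come from the Application Gateway diagnostics schema, the URI and workspace ID are placeholders, and the PowerShell cmdlet used to run it is an assumption to be verified against the module version in use. The query can equally be run from the Log Analytics portal.

    # Requests blocked by the WAF over the past 7 days, for a given URI, grouped by rule ID
    $query = 'AzureDiagnostics ' +
             '| where ResourceProvider == "MICROSOFT.NETWORK" and Category == "ApplicationGatewayFirewallLog" ' +
             '| where TimeGenerated > ago(7d) and action_s == "Blocked" ' +
             '| where requestUri_s == "/example-uri" ' +
             '| summarize count() by ruleId_s, Message'

    Invoke-AzureRmOperationalInsightsQuery -WorkspaceId "<workspace GUID>" -Query $query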

To see the list of WAF rules and associate each RuleId with its description, you can consult this document.

The descriptive message of the rule is also listed within the results returned by the query:

Figure 12 – The Query to retrieve blocked requests by the WAF module, over the past 7 days, for a specific URI and for a specific RuleId

Conclusions

In my experience, Azure architectures that require secure publishing of web services to the Internet often use the Azure Application Gateway service with the WAF module active. With the ability to send the diagnostic logs of this component to Log Analytics, you get qualified monitoring, which is fundamental to analyze any error conditions and to assess the state of the component in all its facets.

Microsoft Azure: network monitoring solutions overview

Microsoft Azure provides several solutions that allow you to monitor network resources, not only for cloud environments, but also for hybrid architectures. These are cloud-based features for checking the health of your network and the connectivity to your applications; furthermore, they give detailed information about network performance. This article gives an overview of the various solutions and their main features, to help you choose the network monitoring tools most appropriate for your needs.

Network Performance Monitor (NPM) is a suite that includes the following solutions:

  • Performance Monitor
  • ExpressRoute Monitor
  • Service Endpoint Monitor

In addition to the tools included in the Network Performance Monitor (NPM) you can use Traffic Analytics and DNS Analytics.

Performance Monitor

The most common approach today is a hybrid environment with heterogeneous networking, connecting your on-premises infrastructure with the environment implemented in the public cloud. In some cases there may also be multiple cloud providers, which makes the network infrastructure even more complicated. These scenarios require flexible monitoring tools that can work across on-premises, cloud (IaaS) and hybrid environments. Performance Monitor has all of these characteristics and, thanks to the use of synthetic transactions, provides the ability to monitor network parameters almost in real time to obtain performance information such as packet loss and latency. Furthermore, this solution makes it easy to locate the source of a problem in a specific network segment or on a particular device. The solution requires the presence of the OMS agent and, by keeping track of retransmitted packets and round-trip time, is able to return a graph that is easy and immediate to interpret.

Figure 1 - Hop-by-hop chart provided by Performance Monitor

Where to install the agents

The Operations Management Suite (OMS) agent must be installed on at least one node connected to each subnet from which you intend to monitor connectivity to other subnets. If you plan to monitor a specific network link, you must install agents on both endpoints of the link. If you do not know the exact network topology, one possible approach is to install agents on all servers that host critical workloads and for which you need to monitor network performance.

The Cost of the Solution

The cost of the feature Performance Monitor in NPM is calculated on the basis of the combination of these two elements:

  • Monitored subnet links. For the cost of monitoring a single subnet link for one month, see Ping Mesh.
  • Data volume.

For more details please visit Microsoft's official page.

ExpressRoute Monitor

Using ExpressRoute Monitor you can monitor end-to-end connectivity and verify performance between the on-premises environment and Azure, over ExpressRoute connectivity with Azure private peering and Microsoft peering connections. The key features of this solution are:

  • Auto-detection of the ExpressRoute circuits associated with your Azure subscription.
  • Detection of the network topology.
  • Capacity planning and bandwidth usage analysis.
  • Monitoring and alerting on both the primary and the secondary path of the ExpressRoute circuit.
  • Monitoring of connectivity towards Azure services such as Office 365 and Dynamics 365 that use ExpressRoute as connectivity.
  • Detection of possible degradation of connectivity with the various virtual networks.

Figure 2 – Topology view of a VM on Azure (left) connected to a VM on-prem (right), via ExpressRoute

Figure 3 - Trend on the use of the bandwidth and latency on the ExpressRoute circuit

Where to install the agents

To use ExpressRoute Monitor you need to install an Operations Management Suite agent on a system that resides in the Azure virtual network and at least one agent on a machine attested on the on-premises subnet, connected via the ExpressRoute private peering.

The Cost of the Solution

The cost of the ExpressRoute Monitor solution is calculated based on the volume of data generated during the monitoring operations. For more details please visit the specific section of the NPM pricing page.

Service Endpoint Monitor

Using this solution, you can monitor and test the reachability of your services and applications, almost in real time, simulating user access. You can also detect network-side performance problems and identify the problematic network segment.

The main features of the solution are:

  • It monitors end-to-end the network connections to your applications. The monitoring can target any "TCP-capable" endpoint (HTTP, HTTPS, TCP and ICMP), such as websites, SaaS applications, PaaS applications and SQL databases.
  • It correlates application availability with network performance, to precisely locate the degradation point on the network, from the user's request all the way to the application.
  • It tests application reachability from different geographical locations.
  • It determines the network latency and packet loss encountered in reaching the applications.
  • It detects hot spots on the network that can cause performance problems.
  • It monitors the availability of Office 365 applications, through specific built-in tests for Microsoft Office 365, Dynamics 365, Skype for Business and other Microsoft services.

Figure 4 – Creation of a Service Connectivity Monitor test

Figure 5 – Diagram showing the topology of the network, generated by different nodes, for a Service Endpoint

Where to install the agents

To use Service Endpoint Monitor you must install the Operations Management Suite agent on each node where you want to monitor network connectivity to a specific service endpoint.

The Cost of the Solution

The cost for using Service Endpoint Monitor is based on these two items:

  • Number of connections, where a connection is understood as the reachability test of a single endpoint, from a single agent, for the entire month. In this regard you can see Connection Monitoring on the pricing page.
  • Volume of data generated by the monitoring. The cost is obtained from the Log Analytics pricing page, in the Data Ingestion section.

Traffic Analytics

Traffic Analytics is a fully cloud-based solution that gives you overall visibility into the network activity taking place in the cloud environment. In Azure, network communication to resources connected to Azure Virtual Networks (VNets) is allowed or denied using Network Security Groups (NSGs), which contain a list of access rules. NSGs are applied to network interfaces connected to virtual machines, or directly to subnets. The platform uses NSG flow logs to maintain visibility of the inbound and outbound network traffic of the Network Security Group. Traffic Analytics analyzes the NSG flow logs and, after appropriate aggregation of the data, enriching it with intelligence about security, topology and geographic mapping, can provide detailed information about the network traffic of your Azure cloud environment.
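
Since Traffic Analytics builds on NSG flow logs, here is a hedged PowerShell sketch (AzureRM module) of enabling the flow log through Network Watcher; resource names are illustrative, and the Traffic Analytics-specific parameters are omitted because that part can also be enabled from the portal.

    # Enable the NSG flow log through Network Watcher (illustrative names)
    $nw  = Get-AzureRmNetworkWatcher -ResourceGroupName "NetworkWatcherRG" -Name "NetworkWatcher_westeurope"
    $nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "rg-network" -Name "nsg-app"
    $sa  = Get-AzureRmStorageAccount -ResourceGroupName "rg-monitor" -Name "flowlogstorage"

    Set-AzureRmNetworkWatcherConfigFlowLog -NetworkWatcher $nw -TargetResourceId $nsg.Id `
        -StorageAccountId $sa.Id -EnableFlowLog $true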

Using Traffic Analytics you can do the following:

  • View network activity across Azure subscriptions and identify hotspots.
  • Intercept potential network security threats, in order to take the right remedial actions. This is made possible by the information provided by the solution: which ports are open, which applications attempt to access the Internet, and which virtual machines connect to unauthorized networks.
  • Understand network flows between different Azure regions and the Internet, in order to optimize your deployment for network performance and capacity.
  • Identify incorrect network configurations that lead to failed communication attempts.
  • Analyze the capacity of VPN gateways or other services, to detect problems caused by over-provisioning or underutilization.

Figure 6 – Traffic Analytics overview

Figure 7 - Map of Active Azure Regions on the subscription

DNS Analytics

The DNS Analytics solution collects, analyzes and correlates DNS logs and provides administrators with the following capabilities:

  • It identifies clients that try to resolve domains considered malicious.
  • It finds records that belong to obsolete resources.
  • It highlights frequently queried domain names.
  • It shows the request load received by the DNS servers.
  • It monitors failed dynamic DNS registrations.

Figure 8 – Overview of DNS Analytics solution

Where to install the agents

The solution requires the presence of the OMS agent or the Operations Manager agent installed on each DNS server to be monitored.

Conclusions

As network architectures in hybrid environments grow more complex, the need increases for tools able to cover different network topologies. Azure provides several cloud-based tools integrated into the fabric, such as those described in this article, that allow you to fully and effectively monitor the networking of these environments. To test and evaluate Operations Management Suite (OMS) for free, you can access this page and select the mode that is most appropriate for your needs.

Everything you need to know about new Azure Load Balancer

Microsoft recently announced the availability in Azure of the Standard Load Balancer. These are Layer-4 load balancers, for the TCP and UDP protocols, that, compared to the Basic Load Balancer, introduce improvements and give you more granular control over certain features. This article describes the main features of the Standard Azure Load Balancer, to give you the elements needed to choose the type of balancer most suitable for your needs.

Any scenario where you can use the Basic SKU of the Azure Load Balancer can also be addressed with the Standard SKU, but the two types of load balancer have important differences in terms of scalability, functionality, guaranteed service levels and cost.

Scalability

Standard Load Balancers have higher scalability than Basic Load Balancers in terms of the maximum number of instances (IP configurations) that can be configured in the backend pool. The Basic SKU allows up to 100 instances, while with the Standard SKU the maximum number of instances is 1000.

Functionality

Backend pool

With the Basic Load Balancer, the backend pool can contain exclusively:

  • Virtual machines that are located within an availability set.
  • A single standalone VM.
  • A Virtual Machine Scale Set.

Figure 1 – Possible associations in the Basic Load Balancer backend pool

With the Standard Load Balancer, instead, any virtual machine attached to a given virtual network can be placed in the backend pool. The integration scope, in this case, is not the availability set, as with the Basic Load Balancer, but the virtual network and all of its associated concepts. One requirement to consider when inserting virtual machines into the backend pool of a Standard Load Balancer is that they must either have no public IP associated, or have a public IP with the Standard SKU.

Figure 2 – Standard Load Balancer backend pool association

Availability Zones

Standard Load Balancers provide integration scenarios with Availability Zones, in the regions where this feature is available. For more details you can refer to this specific Microsoft document, which covers the main concepts and implementation guidelines.

High Availability Ports (HA Ports)

Standard SKU load balancers of type "Internal" allow you to balance TCP and UDP flows on all ports simultaneously. To do so, the load-balancing rule can be configured with the "HA Ports" option enabled:

Figure 3 - Configuring the load balancing rule with "HA Ports" option enabled

Balancing is performed per flow, where a flow is identified by the following elements: source IP address, source port, destination IP address, destination port, and protocol. This is particularly useful in scenarios that use Network Virtual Appliances (NVAs) requiring scalability, and it simplifies the tasks required for NVA implementations.

Figure 4 - Network architecture which provides the use of LB with "HA Ports" option enabled

Another possible use for this feature is when you need to balance a large number of ports.

For more details on the option "HA Ports" you can see the official documentation.
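
As an illustrative sketch of the configuration shown in Figure 3, an HA Ports rule corresponds to a load-balancing rule with protocol "All" and port 0 on both the frontend and the backend. The snippet below uses the azure-mgmt-network Python SDK; the resource group, load balancer name and subscription ID are placeholders, and model or method names may differ slightly between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

subscription_id = "<subscription-id>"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Hypothetical internal Standard Load Balancer.
lb = client.load_balancers.get("my-rg", "internal-std-lb")

# HA Ports rule: protocol "All" with frontend and backend port 0
# balances every TCP/UDP port through the same rule.
ha_rule = LoadBalancingRule(
    name="ha-ports-rule",
    protocol="All",
    frontend_port=0,
    backend_port=0,
    frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
    backend_address_pool=SubResource(id=lb.backend_address_pools[0].id),
    probe=SubResource(id=lb.probes[0].id),
)

# Append the rule and push the updated configuration
# (older SDK versions expose create_or_update instead of begin_*).
lb.load_balancing_rules = (lb.load_balancing_rules or []) + [ha_rule]
client.load_balancers.begin_create_or_update("my-rg", "internal-std-lb", lb).result()
```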

Diagnostics

The Standard Load Balancer introduces the following diagnostic capabilities:

  • Multi-dimensional metrics: you can retrieve various metrics that let you see, in near real time, the usage status of internal and public load balancers. This information is particularly useful for troubleshooting (a query sketch follows this section).

Figure 5 – Load Balancer metrics from the Azure Portal

  • Resource Health: in Azure Monitor you can consult the health status of the Standard Load Balancer (currently available only for the public Standard Load Balancer).

Figure 6 – Resource health of Load Balancer in Azure Monitor

You can also consult the history of the health state:

Figure 7 – Health history of Load Balancer

All details related to the diagnostics of the Standard Load Balancer can be found in the official documentation.
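
As a hedged illustration of retrieving the multi-dimensional metrics mentioned above programmatically, the sketch below uses the azure-mgmt-monitor Python SDK to read the VipAvailability (data path availability) metric of a Standard Load Balancer. The subscription ID, resource group and load balancer name are placeholders, and signatures may vary slightly between SDK versions:

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Hypothetical resource ID of a public Standard Load Balancer.
lb_resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.Network/loadBalancers/std-lb"
)

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# Query the data path availability averaged over 5-minute intervals.
metrics = client.metrics.list(
    lb_resource_id,
    timespan=f"{start.isoformat()}/{end.isoformat()}",
    interval="PT5M",
    metricnames="VipAvailability",
    aggregation="Average",
)

for metric in metrics.value:
    print(metric.name.localized_value)
    for series in metric.timeseries:
        for point in series.data:
            print(" ", point.time_stamp, point.average)
```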

Security

Load Balancers with the Standard SKU are secure by default: in order to operate, a Network Security Group (NSG) that explicitly allows the traffic flow must be in place. As previously mentioned, Standard Load Balancers are fully integrated into the virtual network, which is private and therefore closed by nature. The Standard Load Balancer and the public Standard IP are used to allow access to the virtual network from outside, and by default you must configure a Network Security Group (closed by default) to allow the desired traffic. If there is no NSG on the subnet or on the NIC of the virtual machine, traffic coming from the Standard Load Balancer is not allowed to reach it.

Basic Load Balancers are open by default, and the configuration of a Network Security Group is optional.
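
To make the "secure by default" behavior concrete, the sketch below adds an explicit allow rule to an NSG using the azure-mgmt-network Python SDK, so that inbound HTTPS traffic arriving through the Standard Load Balancer can reach the backend. Resource names and the subscription ID are placeholders, and older SDK versions expose create_or_update instead of the begin_* method:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow inbound HTTPS from the Internet to the backend subnet; without a
# rule like this, flows arriving via the Standard Load Balancer are dropped.
rule = SecurityRule(
    name="allow-https-inbound",
    priority=100,
    direction="Inbound",
    access="Allow",
    protocol="Tcp",
    source_address_prefix="Internet",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="443",
)

client.security_rules.begin_create_or_update(
    "my-rg", "backend-subnet-nsg", "allow-https-inbound", rule
).result()
```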

Outbound connections

Azure Load Balancers support both inbound and outbound connectivity scenarios. The Standard Load Balancer behaves differently from the Basic Load Balancer with regard to outbound connections.

Source Network Address Translation (SNAT) is used to map the private IP addresses of the virtual network to the public IP address of the Load Balancer. The Standard Load Balancer introduces a new algorithm that provides stronger, more scalable and more accurate SNAT policies, allowing greater flexibility and new features.

When using the Standard Load Balancer, you should consider the following aspects with regard to outbound scenarios:

  • Outbound connections must be explicitly created to allow outgoing connectivity for the virtual machines, and they are defined on the basis of the inbound balancing rules.
  • The load balancing rules define how the SNAT policies are programmed.
  • If there are multiple frontends, all of them are used, and each frontend multiplies the number of preallocated SNAT ports available (see the sketch after this section).
  • You can choose and control whether a specific frontend should not be used for outbound connections.

With Basic Load Balancers, when there are multiple public frontend IPs, a single frontend is selected to be used for outgoing flows. This selection cannot be configured and occurs randomly.

To designate a specific IP address, you can follow the steps in this section of the Microsoft documentation.
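
As a rough illustration of the SNAT port preallocation mentioned above, the sketch below encodes the default allocation table published in the Microsoft documentation at the time of writing (values may have changed since) and multiplies it by the number of frontend IPs, as described for the Standard Load Balancer:

```python
def preallocated_snat_ports(pool_size: int) -> int:
    """Default SNAT ports preallocated per instance, per the documented table."""
    table = [(50, 1024), (100, 512), (200, 256), (400, 128), (800, 64), (1000, 32)]
    for max_instances, ports in table:
        if pool_size <= max_instances:
            return ports
    raise ValueError("backend pool larger than 1,000 instances is not supported")

def total_snat_ports(pool_size: int, frontend_ips: int) -> int:
    # Each additional frontend IP multiplies the SNAT ports available per instance.
    return preallocated_snat_ports(pool_size) * frontend_ips

print(preallocated_snat_ports(60))   # 512 ports per instance with 60 backend instances
print(total_snat_ports(60, 2))       # 1024 ports per instance with two frontend IPs
```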

Management operations

Standard Load Balancers allow faster management operations, bringing the execution time of these operations under 30 seconds (compared to 60-90 seconds for Load Balancers with the Basic SKU). The time needed to edit backend pools also depends on their size.

Other differences

At the moment, a public Standard Load Balancer cannot be configured with a public IPv6 address:

Figure 8 – Public IPv6 for Public Load Balancer

Service-Level Agreements (SLA)

An important aspect to consider when choosing the most appropriate SKU for different architectures is the service level you have to guarantee (SLA). The Standard Load Balancer ensures that a load balancer endpoint serving two or more healthy virtual machine instances will be available with an SLA of 99.99%.

The Basic Load Balancer does not guarantee this SLA.

For more details you can refer to the specific article SLA for Load Balancer.


Cost

While the Basic Load Balancer has no cost, the Standard Load Balancer is charged on the basis of the following elements:

  • Number of load balancing rules configured.
  • Amount of inbound and outbound data processed.

There are no specific costs for NAT rules.

Details can be found on the Load Balancer pricing page.


Migration between SKUs

There is no in-place upgrade path for the Load Balancer from the Basic SKU to the Standard SKU, or vice versa. A side-by-side migration is required instead, taking into account the functional differences described above.

Conclusions

The introduction of the Azure Standard Load Balancer brings new features and greater scalability. These characteristics may allow you to avoid using, in specific scenarios, balancing solutions offered by third-party vendors. Compared to traditional Load Balancers (Basic SKU), the operating principles change and there are distinct characteristics in terms of cost and SLA; this should be taken into account in order to choose the most suitable type of Load Balancer for the architecture you need to build.