Category Archives: Microsoft Azure

Azure Site Recovery: disaster recovery of Virtual Machines in Azure

In Azure, you can use Azure Site Recovery (ASR) to easily implement an efficient disaster recovery strategy by enabling replication of virtual machines between different Azure regions. Although Azure has built-in mechanisms to deal with localized hardware failures, it may be appropriate to implement a solution that protects the applications running on Azure virtual machines against both catastrophic events, such as earthquakes or hurricanes, and software issues that could affect the functioning of an entire Azure region. This article will show you how to configure virtual machine replication and how to enable a disaster recovery scenario.

This feature has been called one-click replication because of its simplicity; it is currently in public preview and can be used in all the Azure regions where ASR is available.

Before enabling this functionality, it is essential to ensure that the necessary requirements are met; to do so, consult the support matrix for the scenario of replicating virtual machines between Azure regions.

From the Azure Portal, select the virtual machine that you intend to replicate and perform the configuration in the Disaster recovery section:

Figure 1 – Disaster Recovery Section of the VM

Selecting Disaster Recovery shows the following configuration panel:

Figure 2 – VM replication configuration panel

The first required parameter is the target region where you want to replicate the virtual machine. The replication activation process also creates the necessary Azure artifacts (resource group, availability set if used by the selected VM, virtual network and storage accounts), or you can select existing ones if they were created earlier.

Figure 3 – The resources needed in the target region

The replication process also requires a cache storage account in the source region, used as a temporary repository to store changes before they are written to the storage account defined in the target region. This minimizes the impact on the production applications that reside on the replicated VM.

Figure 4 - Cache Storage Account in the replication process

The configuration panel also requires you to indicate which Recovery Services vault to use (one is proposed), creating a replication policy that defines the recovery point retention and the frequency at which application-consistent snapshots are taken.

Selecting Enable Replication starts the creation of the required Azure resources; the VM is registered in the selected Recovery Services vault and the replication process is activated.

The Disaster recovery section lists details about the replication, and from it you can perform a failover or a test failover:

Figure 5 - Details relating to the replication process of the VM and activation of the failover process

The Test Failover procedure asks which recovery point to use among: latest, latest processed, latest app-consistent or custom. In addition, it is possible to select the virtual network to which the virtual machine is attached during the test failover, in order to perform the test without generating any impact on the production systems.

Figure 6 – Test Failover of a VM

The Failover panel is similar, but it only asks which recovery point to use, since the network to which the machine is attached was already defined in the configuration phase.

Figure 7 – Failover of a VM

Only when the failover process is started are the affected virtual machines created in the target resource group, attached to the target vNet and, when used, configured in the appropriate availability set.

Figure 8 – Failover process

Conclusions

Thanks to this new feature introduced in Azure Site Recovery, it is possible to easily activate the replication of virtual machines across different Azure regions, without the need for an expensive secondary infrastructure to activate a disaster recovery plan.

What's New in Azure Automation: Inventory, Change Tracking and Update Management

New features were recently introduced in Azure Automation, currently in preview, which make it possible to manage the distribution of updates, collect inventory information about the applications installed on the systems and keep track of the changes made on the machines. This article will show how to configure the Azure Automation account to take advantage of these new features and will present their main characteristics.

In order to use each of these features, the Automation account must be associated with a Log Analytics workspace.

If the Automation account where you want to enable these new features is not linked to any Log Analytics workspace, during the activation process you are asked to bind it to an existing workspace, or the creation of a new workspace is proposed:

Figure 1 - Association of Automation Account to Log Analytics Workspace

The Change Tracking and Inventory capabilities are enabled together from the Azure Portal, and at the end of the activation the following notification will appear:

Figure 2 – Notification after enabling Change Tracking and Inventory features

To enable Update Management you will need to perform the same operation:

Figure 3 – Enabling the Update Management feature

At the end of these activities the following solutions will be present in the Log Analytics workspace:

Figure 4 – Solution added in Log Analytics

After the activation completes, the solutions begin to show the data of the machines already connected to the OMS workspace associated with the Automation account. You can also onboard further machines directly from the relevant sections of the Azure Portal:

Figure 5 - Adding additional systems

This process requires the installation of the OMS agent on the systems and can be done on both Windows and Linux. If the machines run on the Azure fabric, the OMS agent installation process is integrated and can happen quickly with a simple click from the Azure Portal. Otherwise, you can still associate the systems by manually installing the OMS agent, regardless of their location (on-premises or other clouds).

For the Inventory and Change Tracking features you can access the settings (shared between the two solutions) to customize which registry keys and which Windows and Linux files you plan to inventory and monitor:

Figure 6 – Edit your settings

Figure 7 - Personalization of the configuration

 

Inventory

This feature allows you to retrieve inventory information about installed software, files, Windows registry keys, Windows services and Linux daemons. All of this can be consulted easily, directly from the Azure Portal, and search filters can be applied:

Figure 8 - Search the inventory data

 

Change Tracking

The Change Tracking functionality monitors the changes made to systems regarding daemons, files, the registry, software and Windows services. This feature can be very useful to diagnose specific problems and to raise alerts on unexpected changes.

Figure 9 - Consultation of changes

By accessing the Log Analytics console you can also carry out more targeted searches:

Figure 10 – Log Search Analytics
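For example, a targeted search for recent software changes can be expressed against the ConfigurationChange records that the solution writes to the workspace. The sketch below uses the upgraded Log Analytics query language; on a workspace still using the legacy language, the same filter would be written in the Type=ConfigurationChange ConfigChangeType=Software form:

```
ConfigurationChange
| where ConfigChangeType == "Software"
| where TimeGenerated > ago(1d)
| order by TimeGenerated desc
```

Adjust the time window and the ConfigChangeType filter (Software, Files, Registry, Services, Daemons) to the kind of change you are investigating.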

Also in Change Tracking there is the possibility to connect the Azure Activity Log of an Azure subscription, so as to also collect the changes made on the Azure side.

Figure 11 – Azure Activity Log connection

 

Update Management

The Update Management solution provides full visibility into update compliance for both Windows and Linux systems:

Figure 12 - Global status of compliance of the updates on managed systems

Using the search panel you can quickly identify missing updates:

Figure 13 – Identify missing updates

The solution is not only useful for consultation: it also allows you to schedule deployments to install the updates within a specific maintenance window.

Figure 14 – Deployment schedule

The ability to schedule deployments on Linux systems will arrive very soon. Among the features offered is the ability to exclude specific updates from a deployment.

Figure 15 - Deployment Settings

Scheduled deployments and their execution status can be monitored in real time directly from the Azure Portal:

Figure 16 – List of scheduled update deployments

Figure 17 – Update Deployment in progress

Figure 18 – Update Deployment successfully completed

By selecting a completed deployment you are taken to a well-structured and easy-to-use dashboard that allows you to check the details of the deployment:

Figure 19 – Deployment dashboard

Also useful is the ability to retrieve the logs related to a deployment for troubleshooting purposes.

Conclusions

These features give you the ability to control and manage, easily and efficiently, environments ranging from a few cloud systems up to hybrid scenarios with a large number of machines. They are currently in preview and are therefore destined to further expand their potential. In particular, the Update Management functionality will have to evolve to manage and orchestrate update deployments in complex environments in an efficient and flexible way, but it is definitely at a good point in its development. For more details on Azure Automation I invite you to consult the official documentation.

How to create a Docker environment in Azure using VM Extension

Docker is a software platform that allows you to create, manage and run applications isolated in containers. A container is nothing more than a way of packaging software in a format that allows it to run independently and in isolation on a shared operating system. Unlike virtual machines, containers do not include a complete operating system, but only the libraries and settings needed to run the software; this brings a series of advantages in terms of size, speed, portability and resource management.

Figure 1 – Diagram of containers

 

In the world of Microsoft Azure there are several possibilities for configuring and using Docker containers, which I list briefly:

  • VM Extension: through a specific extension you can implement Docker inside a virtual machine.
  • Azure Container Service: quickly deploys a production-ready Docker Swarm, DC/OS or Kubernetes cluster in Azure. This is the most complete solution for container orchestration.
  • Docker EE for Azure: a template available in the Azure Marketplace, born from a collaboration between Microsoft and Docker, which makes it possible to provision a Docker Enterprise Edition cluster integrated with Azure VM Scale Sets, Azure Load Balancers and Azure Storage.
  • RancherOS: a Linux distribution designed to run Docker containers, available as a template in the Azure Marketplace.
  • Web App for Containers: you have the option of using containers by deploying them to the Azure App Service managed platform as a Web App running in a Linux environment.
  • Azure Container Instances (currently in preview): definitely the easiest and quickest way to run a Docker container in the Azure platform, without the need to create virtual machines; ideal for simple and short-lived workloads.
  • Azure Service Fabric: supports containers on both Windows and Linux. The platform natively supports Docker Compose (currently in preview), allowing you to orchestrate container-based applications in Azure Service Fabric.
  • DC/OS on Azure: a managed cloud service that provides an environment for deploying clustered workloads using the DC/OS (Datacenter Operating System) platform.

All these possibilities make it possible to choose, according to your needs and the scenario you must implement, the most appropriate deployment methodology for running Docker containers in Azure.

In this article we will create a Docker environment in a virtual machine using the Docker VM extension. Starting from a virtual machine in Azure, you can add the Docker extension, which installs and configures the Docker daemon, the Docker client and Docker Compose.

This extension is supported for the following Linux distributions:

  • Ubuntu 13 or higher.
  • CentOS 7.1 or higher.
  • Red Hat Enterprise Linux (RHEL) 7.1 or higher.
  • CoreOS 899 or higher.

Adding the extension from the Azure Portal can be done via the following steps: in the Extensions section of the virtual machine, select the Add button:

Figure 2 – Adding Extensions to the VM from the Azure Portal

 

The list of available extensions is then shown; select Docker Extension and press the Create button.

Figure 3 – Selection of Extension Docker

 

To enable secure communication with the Docker environment implemented in Azure, you should use certificates and keys issued by a trusted CA. If you do not have a CA available to generate these certificates, you can follow the instructions in the section Create a CA, server and client keys with OpenSSL in the official Docker documentation.
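As a minimal sketch of that procedure (the host name below is a hypothetical placeholder, and the official Docker guide also covers client certificates and extra options such as subjectAltName):

```shell
# Hypothetical DNS name of the Docker VM; replace with your own.
HOST=mydockervm.westeurope.cloudapp.azure.com

# CA key and self-signed CA certificate
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 \
  -subj "/CN=docker-ca" -out ca.pem

# Server key and certificate signing request for the VM's DNS name
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -subj "/CN=$HOST" -out server.csr

# Sign the server certificate with the CA
openssl x509 -req -days 365 -sha256 -in server.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem

# Verify the resulting chain
openssl verify -CAfile ca.pem server-cert.pem
```

The resulting ca.pem, server-cert.pem and server-key.pem are the files the extension wizard will ask for.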

 

Figure 4 – Docker communication scheme over the encrypted TLS protocol

 

The extension wizard first requires the communication port of the Docker Engine (2376 is the default port). The CA certificate, the server certificate and the server key are also requested, in base64-encoded format:

Figure 5 – Parameters required by the wizard to add the Docker VM Extension
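Since the wizard wants the certificate material base64-encoded on a single line, a quick way to prepare the values is the following (the file names are assumptions matching a typical PEM layout; the placeholder creation only makes the loop runnable anywhere, in practice these are the files issued by your CA):

```shell
for f in ca.pem server-cert.pem server-key.pem; do
  [ -f "$f" ] || echo "placeholder" > "$f"
  # -w0 disables line wrapping (GNU coreutils), producing a
  # single-line value ready to paste into the wizard field
  base64 -w0 "$f" > "$f.b64"
done
ls *.b64
```

Each .b64 file then contains the value to paste into the corresponding wizard field.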

 

Adding the Docker extension takes several minutes, at the end of which the virtual machine will have the latest stable version of Docker Engine installed and the Docker daemon will listen on the specified port using the certificates entered in the wizard.

Figure 6 – Details of the Extension Docker
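As an alternative to the portal wizard, the extension can also be deployed from an ARM template. The fragment below follows the settings format documented for the Azure Docker VM extension, but treat the exact property names as an assumption to verify against the current extension documentation; the certificate values are the base64-encoded PEM contents:

```json
{
  "settings": {
    "docker": { "port": "2376" }
  },
  "protectedSettings": {
    "certs": {
      "ca": "<base64-encoded ca.pem>",
      "cert": "<base64-encoded server-cert.pem>",
      "key": "<base64-encoded server-key.pem>"
    }
  }
}
```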

 

If you need to allow Docker communication from outside the vNet where the VM with Docker is attached, you must configure appropriate rules in the Network Security Group used:

Figure 7 – Example NSG configuration to allow Docker communication (port 2376)

 

At this point the Docker environment is ready to be used, and the communication can be started from a remote client:

Figure 8 – Docker command run from a remote client to retrieve information
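On the client side, the standard Docker environment variables can point a local client at the remote engine over TLS. The host name here is a hypothetical placeholder, and cert.pem/key.pem are the client certificate and key issued by the same CA used in the wizard:

```shell
# Standard Docker client variables for talking TLS to a remote engine.
export DOCKER_HOST="tcp://mydockervm.westeurope.cloudapp.azure.com:2376"
export DOCKER_TLS_VERIFY=1
# Directory expected to contain ca.pem, cert.pem and key.pem
export DOCKER_CERT_PATH="$HOME/.docker-azure"

# With these variables set, ordinary commands target the remote
# engine, for example:
#   docker info
#   docker ps
echo "Client configured for $DOCKER_HOST"
```

This avoids repeating the --tlsverify/--tlscacert/-H flags on every invocation.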

 

Conclusions

The Azure Docker VM extension is ideal for implementing, easily and in a reliable and secure way, a dev or production Docker environment on a single virtual machine. Microsoft Azure offers a wide range of implementation possibilities for the Docker platform, with a lot of flexibility in choosing the most appropriate solution for your needs.

Log Analytics: a major update evolves the solution

Last week Microsoft began releasing what can be considered the most significant update to Log Analytics since its initial release. Among the main changes introduced in the new version of Log Analytics are a powerful new query language, the new Advanced Analytics portal and a closer integration with Power BI. In this article we will see how to perform the upgrade and the main characteristics of the new features.

How to update Log Analytics

The upgrade process is very simple and is gradually reaching the workspaces present in all Azure regions. When the update is available for your workspace, a banner notifies you in the OMS portal or directly in the Log Analytics section of the Azure Portal:

Figure 1 – Banner that notifies the availability of the Log Analytics upgrade

A simple click on the banner leads to the following screen, which summarizes the changes introduced by the update and lets you start the upgrade process by selecting the appropriate button:

Figure 2 – Upgrade of Log Analytics

The upgrade must be performed by an administrator of the workspace and takes a few minutes, at the end of which all artifacts such as saved searches, alert rules, computer groups and views created with the View Designer are automatically converted to the new Log Analytics language. The searches included in the solutions are not converted automatically during the upgrade, but are converted on the fly, transparently to the user, when they are opened.

During the upgrade process a full backup of the workspace is created, useful in case there is a need to revert to the previous version. The restore can be done directly from the OMS portal:

Figure 3 – Restore the legacy Log Analytics workspace

At the moment this update is optional, but in the future it will be enforced by Microsoft, which will communicate the workspace conversion date in advance.

New query building language

After upgrading you can take advantage of the potential of the new query language. Here are its main characteristics:

  • It is a simple and easy-to-understand language that uses constructs close to natural language.
  • The output of a query can be piped to other commands in order to create queries more complex than was possible with the previous language.
  • It supports extended fields, calculated in real time, which can be used to compose complex queries.
  • Improved join capabilities allow you to join tables on multiple fields, with inner joins, outer joins and joins on extended fields.
  • More functions are made available for operations based on date and time.
  • Advanced algorithms can be used to evaluate patterns in datasets and to compare different sets of data.
  • It supports comments in queries, always useful when troubleshooting and to facilitate the understanding of queries written by others.

Those listed above are just some of the many new features introduced; for more details about the new Log Analytics query language I invite you to consult the dedicated official site, which contains a complete guide, tutorials and examples.

Figure 4 – Example of a query written in the new language that creates a chart of daily alerts by severity
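As an illustration, a query of that kind could be written along these lines; table and column names such as Alert and AlertSeverity depend on the data actually present in your workspace, so treat this as a sketch rather than a ready-made query:

```
Alert
| summarize AggregatedValue = count() by AlertSeverity, bin(TimeGenerated, 1d)
| render barchart
```

The bin() function groups the alerts into daily buckets, while render turns the aggregated result directly into a chart.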

For those who already have good familiarity with the previous query language, the upgrade adds a converter to the workspace that translates queries written in the legacy language into the new one:

Figure 5 – Example of converting a query

Also useful is the Legacy to new Azure Log Analytics Query Language cheat sheet, which offers a quick comparison between the two languages for some of the most widely used statements.

Advanced Analytics Portal

With the introduction of the new Advanced Analytics portal you can perform useful tasks when writing queries that cannot be done directly from the Log Analytics portal. The Advanced Analytics portal can be accessed by selecting one of the following icons in the Log Analytics portal:

Figure 6 – Advanced Analytics Portal login

Thanks to this portal you get a better interactive query-writing experience, with multi-line editing, context-aware syntax highlighting and a powerful integrated viewer. All of this is very useful for troubleshooting, diagnostics, trend analysis and generating reports quickly.

Figure 7 – Query that computes and graphically displays the result of the CPU usage of a specific machine

You can also easily create a quick visualization in the Advanced Analytics portal and pin it to a shared Azure dashboard.

Integration with Power BI

Following this update you also get a closer integration with Power BI, similar to what is available for Application Insights:

Figure 8 – Log Analytics integration scheme with Power BI

Through this integration you can use Log Analytics data in Power BI reports, publish and share them on PowerBI.com and enable automatic refreshes. For more details I invite you to read the document Export Log Analytics data to Power BI.

 

Conclusions

This major upgrade of Log Analytics increases the potential of the tool, allowing you to perform complex searches in a targeted and easy way thanks to the new language, and enhances the solution through better integration with Power BI. The new language and Advanced Analytics are already used in Application Insights, and this allows a homogeneous and consistent monitoring experience across different Azure services.

Azure Multi-Factor Authentication: Introduction to Solution

To help secure access to critical data and applications, it may be necessary to provide multi-factor authentication, which generally requires the use of at least two of the following verification methods:

 

  • Something you know (typically a password).
  • Something you own (a unique, not easily duplicable device, such as a phone).
  • A biometric recognition system that aims to identify a person based on one or more biological or behavioral characteristics (biometrics).

 

Microsoft allows you to adopt a two-factor authentication solution using Azure Multi-Factor Authentication (MFA), which adds a second verification method during the authentication process. With this solution, the following additional authentication factors can be configured:

 

  • Phone call: a call is made to the user's registered phone. The user is prompted to answer the call and confirm access by pressing the # key or entering a PIN code.
  • Text message (SMS): an SMS containing a 6-digit PIN code is sent to the user's mobile phone; the code must be entered during the authentication process.
  • Notification via mobile app: a challenge is sent to the user's smartphone through the mobile app and must be approved by the user to complete the authentication process.
  • Verification code via mobile app: the mobile app on the user's smartphone generates a 6-digit code every 30 seconds; the user enters the latest code at the time of authentication.
  • Third-party OATH token: Azure Multi-Factor Authentication can be configured to accept verification methods provided by third-party solutions.

 

Azure Multi-Factor Authentication (MFA) provides two possible deployment models:

 

  • MFA as a solution entirely in the cloud.
  • MFA Server installed and configured on on-premises systems.

 

To identify the most appropriate deployment model you need to consider several aspects: what am I trying to secure, where are the users who need access to the solution, and what features do I really need?

 

What are you trying to protect?

This is the first question you should ask yourself, and its answer can already point to a specific deployment model. If there is a need to enable two-factor authentication for IIS applications that are not published through the Azure App Proxy, or for remote access solutions (VPN or Remote Desktop Gateway), you must use the Azure MFA Server deployed on on-premises systems.

 

Figure 1 – What is secured by MFA

 

Where are the users located?

Another important aspect to consider is where the users are located, on the basis of the identity model adopted, as shown in Figure 2.

 

Figure 2 – Location of users

 

What features are needed?

Depending on the type of deployment selected (MFA in the cloud or MFA on-premises), different capabilities are available that may steer the choice toward one model rather than the other, as shown in Figure 3.

 

Figure 3 – Features available in the two MFA models

 

Requirements for the use of MFA

In order to use Azure Multi-Factor Authentication (MFA) you must have access to an Azure subscription. If you want to test the service you can use a trial subscription of Azure.

 

The hardware requirements of the Azure Multi-Factor Authentication Server are minimal (200 MB of disk space and 1 GB of RAM), while the software requirements are the following:

 

  • Operating System: Windows Server 2008 R2 or higher
  • Microsoft .NET 4.0 Framework
  • IIS 7.0 or higher if you want to install the User Portal or the Web Service SDK

 

Each MFA server must be able to communicate outbound on port 443 to the following web addresses:

 

  • https://pfd.phonefactor.net
  • https://pfd2.phonefactor.net
  • https://css.phonefactor.net
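A quick way to verify that outbound port 443 is open toward those endpoints is a connectivity probe run from the machine that will host the MFA server; this is a generic sketch, not part of the product:

```shell
# Probe each MFA endpoint over HTTPS; a failure suggests a firewall
# or proxy is blocking outbound traffic on port 443.
for url in https://pfd.phonefactor.net https://pfd2.phonefactor.net https://css.phonefactor.net; do
  if curl -s -o /dev/null --connect-timeout 5 "$url"; then
    echo "$url: reachable"
  else
    echo "$url: NOT reachable"
  fi
done
```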

 

Also, if firewall policies block outbound traffic on port 443, you must open the IP address ranges documented in the section "Azure Multi-Factor Authentication Server firewall requirements" of Microsoft's official documentation.

 

Azure Multi-Factor Authentication in the cloud

Enabling MFA in the cloud scenario is very simple and is done per user. To do so, access the Azure Active Directory service (Figure 4) from the Azure Portal:

 

Figure 4 – Step 1: enabling MFA to the cloud

 

After selecting the directory, in the "Users and groups" section select "Multi-Factor Authentication":

 

Figure 5 – Step 2: enabling MFA to the cloud

 

You will be redirected to another website where, by selecting the specific user (Figure 6), you can enable MFA:

 

Figure 6 – Step 3: enabling MFA to the cloud

 

At this point the user is enabled for MFA. The same thing can also be done by selecting multiple users simultaneously, and from the same portal you can configure various settings of Azure Multi-Factor Authentication. For more details I invite you to consult Microsoft's official documentation.

 

The same thing can be accomplished using the Azure PowerShell cmdlets (from the MSOnline module), which allow us to easily enable MFA for multiple users with just a few lines of code, as shown in the following example:

 

$users = "user1@ugisystemcenter.org","user2@ugisystemcenter.org","user3@ugisystemcenter.org"

foreach ($user in $users)
{
    $st = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
    $st.RelyingParty = "*"
    $st.State = "Enabled"
    $sta = @($st)
    Set-MsolUser -UserPrincipalName $user -StrongAuthenticationRequirements $sta
}

 

Azure Multi-Factor Authentication on-premises

The on-premises deployment of the Azure Multi-Factor Authentication Server requires you to download the setup installer directly from the Azure Portal. If you want to use Azure Multi-Factor Authentication as a standalone service, with per-user or per-authentication billing options, you need to create a new Multi-Factor Auth Provider from the Classic Azure Portal (this feature will soon be available on the new Azure Portal).

 

Figure 7 – Creating a new Multi-Factor Auth Provider

 

By selecting the Manage button you are redirected to the Azure Multi-Factor Authentication portal (Figure 8), from where you can download the setup and generate the service activation credentials.

 

Figure 8 – Multi-Factor Authentication Server downloads and generation credentials

 

If you want to use the license bundled with Enterprise Mobility Suite, Azure AD Premium or Enterprise Cloud Suite, it is not necessary to create a Multi-Factor Auth Provider: simply log into the Azure Multi-Factor Authentication portal to download the setup directly.

 

After obtaining the setup you can install the Azure MFA Server. During setup you will be asked only for the installation path (Figure 9).

 

Figure 9 – Setup Azure MFA Server

 

Figure 10 – Setup Azure MFA Server

 

At this point you must run the Multi-Factor Authentication Server you just installed, which will guide us through the activation process.

 

Figure 11 – Starting Multi-Factor Authentication Server

 

Figure 12 – Step 1: How to activate Multi-Factor Authentication Server

 

On the following screen you must enter the activation credentials generated from the Azure Multi-Factor Authentication portal (see Figure 8).

 

Figure 13 – Step 2: How to activate Multi-Factor Authentication Server

 

After completing the configuration of the first server, you can start the Azure MFA Multi-Server Configuration Wizard (Figure 14) to enable replication across multiple servers, making the Azure MFA service highly available.

 

Figure 14 – Azure MFA Multi-Server Configuration Wizard

 

In the scenario where the Multi-Factor Authentication Server is enabled on multiple systems, the Azure MFA servers communicate with each other via RPC calls, and to make sure that everything happens securely they must authenticate with each other. This authentication process can occur either through membership of a specific security group in Active Directory (named Phone Factor Admins) or through the use of SSL certificates.

 

Now that the Azure MFA server is configured, you can easily import users from Active Directory (Figure 15) and enable two-factor authentication where desired.

 

Figure 15 – Import users from Active Directory

 

In the scenario of use of the Azure Multi-Factor Authentication (MFA) Server, it is worth specifying that user data is saved on the on-premises systems and no data is stored permanently in the cloud. In fact, when a user performs the multi-factor authentication process, the Azure MFA server sends the following data to the Azure MFA cloud service for verification and reporting purposes:

 

  • Unique ID of the user (username or internal MFA server ID)
  • First and last name (optional)
  • Email address (optional)
  • Phone number (in the case of a phone call or SMS)
  • Device token (when using authentication via mobile app)
  • Authentication method
  • Authentication result
  • Name and IP address of the Azure MFA server
  • Client IP (if available)
  • Verification result (success or denied) and the reason if denied

 

In addition to targeted one-off imports of the Active Directory users on which you want to enable two-factor authentication, you can integrate the Azure MFA server with the Active Directory directory service and set up a targeted, scheduled import of users according to certain criteria. For details please visit the official documentation on directory integration between Active Directory and the Azure MFA server.

 

Solution Licensing models

Azure Multi-Factor Authentication is available as a standalone service, with per-user and per-authentication billing options, or bundled with Azure AD Premium, Enterprise Mobility Suite and Enterprise Cloud Suite. Azure Multi-Factor Authentication is available through a Microsoft Enterprise Agreement, the Open Volume License program, the Cloud Solution Provider program and a direct contract, with an annual per-user model. The service is also available on a consumption-based model, per user or per authentication, billed monthly against the Azure monetary commitment.

 

For more information on the costs of the solution you can consult the following document: Prices of Multi-Factor Authentication.

 

Conclusions

Azure Multi-Factor Authentication is a simple-to-use, scalable and reliable solution that offers the possibility of introducing a second validation method, so that users can access data and applications more securely, whether they reside on-premises or in cloud environments. Those interested in trying out the service can easily activate a free Azure subscription by going to Free Trial of Azure.

OMS Log Analytics: How to monitor Azure networking

Within Log Analytics there are specific solutions that make it possible to monitor some components of the network infrastructure present in Microsoft Azure.

Among these solutions is Network Performance Monitor (NPM), which was explored in the article Monitor network performance with the new solution of OMS and which lends itself very well to monitoring the health, availability and reachability of Azure networking. The following solutions, which further enrich the OMS monitoring potential, are also currently available in the Operations Management Suite gallery:

  • Azure Application Gateway Analytics
  • Azure Network Security Group Analytics

Enabling the solutions

By accessing the OMS portal you can easily add these solutions from the gallery by following the steps documented in the following article: Add Azure Log Analytics management solutions to your workspace (OMS).

Figure 1 – Azure Application Gateway Analytics solution

Figure 2 – Azure Network Security Group Analytics solution

Azure Application Gateway Analytics

The Azure Application Gateway is a service that can be configured in Azure to provide application delivery functionality, ensuring layer-7 application load balancing. For more information regarding the Azure Application Gateway you can consult the official documentation.

In order to collect the diagnostic logs in Log Analytics, you need to open, in the Azure portal, the Application Gateway resource that you want to monitor, and then, under Diagnostics logs, select the desired Log Analytics workspace as the destination for the logs:

Figure 3 – Application Gateway Diagnostics settings

For the Application Gateway you can select the collection of the following logs:

  • Access logs
  • Performance logs
  • Firewall logs (if the Application Gateway has the Web Application Firewall enabled)
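The same diagnostics configuration can also be scripted with the AzureRM PowerShell modules instead of the portal. A minimal sketch, assuming an active Azure session (Login-AzureRmAccount) and using placeholder resource names:

```powershell
# Placeholders: replace the names with your Application Gateway, workspace, and resource groups
$gw = Get-AzureRmApplicationGateway -Name "MyAppGateway" -ResourceGroupName "MyRG"
$ws = Get-AzureRmOperationalInsightsWorkspace -Name "MyOmsWorkspace" -ResourceGroupName "MyOmsRG"

# Send access, performance, and firewall logs directly to the Log Analytics workspace
Set-AzureRmDiagnosticSetting -ResourceId $gw.Id `
    -WorkspaceId $ws.ResourceId `
    -Enabled $true `
    -Categories "ApplicationGatewayAccessLog", "ApplicationGatewayPerformanceLog", "ApplicationGatewayFirewallLog"
```

Note that no storage account is involved: as described later in this article, the logs flow straight to Log Analytics.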

After you complete these simple steps, the solution installed in OMS processes the data sent by the platform:

Figure 4 – Azure Application Gateway Analytics overview in the OMS Portal

Within the solution you can view a summary of the collected information, and by selecting the individual charts you can access details about the following categories:

  • Application Gateway access log
    • Client and server errors in the Application Gateway access log
    • Requests received by the Application Gateway per hour
    • Failed requests per hour
    • Errors detected per user agent

Figure 5 – Application Gateway Access log

  • Application Gateway performance
    • Health status of the hosts that serve the requests of the Application Gateway
    • Failed requests of the Application Gateway, expressed as a maximum number and as the 95th percentile

Figure 6 – Application Gateway performance

  • Application Gateway Firewall log

 

Azure Network Security Group Analytics

In Azure, you can control network communication via Network Security Groups (NSGs), each of which aggregates a set of rules (ACLs) to allow or deny network traffic based on direction (inbound or outbound), protocol, source address and port, and destination address and port. NSGs are used to control and protect virtual networks or network interfaces. For all the details about NSGs, please see Microsoft's official documentation.

In order to collect the Network Security Group diagnostic logs in Log Analytics, you need to open, in the Azure Portal, the Network Security Group resource that you want to monitor, and then, under Diagnostics logs, select the desired Log Analytics workspace as the destination for the logs:

Figure 7 – Enabling NSG Diagnostics

Figure 8 – Diagnostic configuration NSG

On the Network Security Group you can collect the following types of logs:

  • Events
  • Rule counters

At this point, on the OMS portal home page, you can select the Overview tile of the Azure Network Security Group Analytics solution to access the NSG data collected by the platform:

Figure 9 – Azure Network Security Group Analytics overview in the OMS Portal

The solution provides a summary of the logs collected by splitting them into the following categories:

  • Network security group blocked flows
    • Network Security Group rules with blocked traffic
    • Network routes with blocked traffic

Figure 10 – Network security group blocked flows

  • Network security group allowed flows
    • Network Security Group rules with allowed traffic
    • Network routes with allowed traffic

Figure 11 – Network security group allowed flows

The method of sending Application Gateway and Network Security Group diagnostic logs from Azure to Log Analytics has recently changed, introducing the following advantages:

  • Logs are written directly to Log Analytics without having to use a storage account as a repository. You can still choose to save the diagnostic logs to a storage account, but it is not necessary for sending the data to OMS.
  • The latency between log generation and its availability in Log Analytics has been reduced.
  • The steps required for configuration have been greatly simplified.
  • All Azure diagnostics now share a harmonized format.

Conclusions

Thanks to a more complete integration between Azure and Operations Management Suite (OMS), you can monitor and control the status of the components of the network infrastructure built on Azure comprehensively and effectively, all with simple, intuitive steps. This integration of the Azure platform with OMS is surely destined to be enriched with new solutions specific to other components. For those interested in exploring this and other OMS features further, remember that you can try OMS for free.

Windows Server 2016: Configuring the Failover Cluster Witness in the Cloud

In the article Windows Server 2016: What's New in Failover Clustering, the main innovations introduced with Windows Server 2016 in failover clustering were examined in depth. In this article we will detail the configuration of the cluster witness in the Microsoft Azure cloud, analyzing the possible scenarios and the benefits of this new feature.

 

Possible scenarios supported by Witness Cloud

Among the supported scenarios, those that lend themselves best to this type of configuration are:

  • Multi-site stretched clusters.
  • Failover Clusters that do not require shared storage (SQL Server AlwaysOn, Exchange DAGs, etc.).
  • Failover Clusters composed of nodes hosted in Microsoft Azure or in other public or private clouds.
  • Scale-Out File Server clusters.
  • Clusters deployed in small branch offices.

 

Cloud Witness configuration

We begin by specifying that a requirement for configuring the cluster to use the Cloud Witness is that all the nodes that make up the cluster have Internet access towards Azure. Cloud Witness in fact uses the HTTPS protocol (port 443) to establish a connection with the Azure Blob storage service of the Storage Account.

 

Configuring the Cloud Witness requires an Azure subscription in which to create a Storage Account that will be used as the Cloud Witness, and to which the blob files used for cluster arbitration are written.

 

From the Azure portal you must create a storage account of type General Purpose. For this purpose it is appropriate to create it with the Standard performance tier, as the high performance provided by the use of SSDs is not necessary. After selecting the most suitable location and replication policy, you can proceed with the creation process.
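As an alternative to the portal, the storage account can be created with the AzureRM PowerShell modules. A minimal sketch, assuming an active Azure session; resource group, account name, and location are placeholders:

```powershell
# Placeholder resource group and location
New-AzureRmResourceGroup -Name "CloudWitnessRG" -Location "West Europe"

# General Purpose storage account with Standard (HDD-based) performance: SSD speed is not needed
New-AzureRmStorageAccount -ResourceGroupName "CloudWitnessRG" `
    -Name "mycloudwitnesssa" `
    -SkuName Standard_LRS `
    -Kind Storage `
    -Location "West Europe"

# Retrieve the access keys, needed later by the cluster quorum configuration
Get-AzureRmStorageAccountKey -ResourceGroupName "CloudWitnessRG" -Name "mycloudwitnesssa"
```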

 

Figure 1 – Storage Account creation

 

After you create your storage account, you must retrieve one of its access keys, which will be required for authentication during the configuration steps.

 

Figure 2 – Account Storage access keys

 

At this point you can change the cluster quorum settings from Failover Cluster Manager by following the steps below:

 

Figure 3 – Quorum settings configuration in Failover Cluster Manager

 

Figure 4 – Witness Quorum selection

 

Figure 5 – Selection of Cloud Witness

 

Figure 6 – Storage Account name and access key
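The same quorum configuration shown in the wizard can also be applied with a single PowerShell command. A minimal sketch, assuming the FailoverClusters module is available on a cluster node; the account name and key are placeholders for the values retrieved earlier:

```powershell
# Configure the Cloud Witness using the storage account name and one of its access keys
Set-ClusterQuorum -CloudWitness `
    -AccountName "mycloudwitnesssa" `
    -AccessKey "<primary or secondary access key>"
```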

 

After successful configuration, the Cloud Witness will also be present among the various cluster resources:

 

Figure 7 – Cloud Resource Witness

 

In the Azure Storage Account a container named msft-cloud-witness is created, within which there will be a single blob file whose name is the unique ID of the cluster. This means that you can use the same Microsoft Azure Storage Account to configure the Cloud Witness of different clusters: there will be one blob file for each cluster.

 

Figure 8 – Container inside the Storage Account and its contents
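The contents of the container can also be inspected from PowerShell; a minimal sketch using the classic Azure.Storage module, with placeholder account name and key:

```powershell
# Build a storage context from the account name and key (placeholders)
$ctx = New-AzureStorageContext -StorageAccountName "mycloudwitnesssa" `
                               -StorageAccountKey "<storage account key>"

# One blob per cluster, named after the cluster's unique ID
Get-AzureStorageBlob -Container "msft-cloud-witness" -Context $ctx |
    Select-Object Name, Length, LastModified
```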

 

Advantages of using Cloud Witness

The use of the Cloud Witness brings the following benefits:

  • Eliminates the need for an additional separate data center for certain cluster configurations, by using Microsoft Azure.
  • Removes the administrative effort otherwise required to maintain an additional virtual machine hosting the witness role.
  • Given the small amount of data written to the Storage Account, the cost of the service is negligible.
  • The same Microsoft Azure Storage Account can be used as a witness for different clusters.

 

Conclusions

With Windows Server 2016, the failover cluster proves ready for integration with the cloud. The introduction of the Cloud Witness makes it possible to configure cluster systems more easily, substantially reducing overall implementation costs and management effort while increasing the flexibility of cluster architectures.

How to migrate to Microsoft Azure systems using OMS Azure Site Recovery

In the article OMS Azure Site Recovery: solution overview, the characteristics of Azure Site Recovery were presented, examining the aspects that make it an effective and flexible solution for creating business continuity and disaster recovery strategies for your data center. In this article we will see how to use Azure Site Recovery to migrate even potentially heterogeneous environments to Microsoft Azure. Increasingly, we face the need not only to create new virtual machines in the Microsoft public cloud, but also to migrate existing systems. To perform these migrations you can adopt different strategies, among which is Azure Site Recovery (ASR), which allows us to easily migrate virtual machines on Hyper-V and VMware, physical systems, and Amazon Web Services (AWS) workloads to Microsoft Azure.

The following table shows the migration scenarios that can be addressed with ASR:

Source | Destination | Supported guest OS type
Hyper-V 2012 R2 | Microsoft Azure | All guest operating systems supported in Azure
Hyper-V 2008 R2 SP1 and 2012 | Microsoft Azure | Windows and Linux *
VMware and physical servers | Microsoft Azure | Windows and Linux *
Amazon Web Services (Windows AMIs) | Microsoft Azure | Windows Server 2008 R2 SP1+

* Support limited to Windows Server 2008 R2 SP1+, CentOS 6.4/6.5/6.6, Oracle Enterprise Linux 6.4/6.5, SUSE Linux Enterprise Server 11 SP3

When you need to perform a migration task, it is usually critically important to respect the following points:

  • Minimize downtime of production workloads during the migration process.
  • Have the opportunity to test and validate the solution works in the target environment (Azure in the specific case) before the migration.
  • Perform a single replication of the data, useful both for the validation process and for the actual migration.

With ASR this is possible by following this simple flow of operations:

Figure 1 – Migration flow with ASR

 

Let us now see in detail what are the operations to be carried out in a migration scenario of virtual machines on a Hyper-V host 2012 R2 to Microsoft Azure.

First, from the Azure Portal you have to create a Recovery Services vault in the subscription to which you want to migrate the virtual machines:

Figure 2 – Creating Recovery Service Vault
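The vault can also be created with the AzureRM PowerShell modules instead of the portal. A minimal sketch, assuming an active Azure session; resource group, vault name, and location are placeholders:

```powershell
# Placeholder resource group and location for the migration project
New-AzureRmResourceGroup -Name "MigrationRG" -Location "West Europe"

# Create the Recovery Services vault that will orchestrate the replication
New-AzureRmRecoveryServicesVault -Name "MigrationVault" `
    -ResourceGroupName "MigrationRG" `
    -Location "West Europe"
```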

Afterwards you must prepare the infrastructure in order to use Azure Site Recovery. You can do so by following the wizard proposed by the Azure Portal:

Figure 3 – Infrastructure preparation

After declaring your migration scenario (virtual machines on Hyper-V, not managed by SCVMM, to Azure), assign a name to the Hyper-V site and add to it the Hyper-V host that holds the virtual machines:

Figure 4 – Preparing Source: step 1.1

Figure 5 – Preparing Source: step 1.2

At this point you need to install the Microsoft Azure Site Recovery Provider on the Hyper-V host. During the installation you can specify a proxy server and the vault registration key, which you can download directly from the Azure Portal:

Figure 6 – Provider installation ASR

Figure 7 – Configuring access to the vault

Figure 8 – Proxy settings

Figure 9 – Registration vault ASR

After waiting a few moments, the Hyper-V server registered with the Azure Site Recovery vault will appear in the Azure Portal:

Figure 10 – Preparing Source: step 2

The next step requires you to specify in which Azure subscription the virtual machines will be created and the deployment model (Azure Resource Manager, ARM, in the following case). At this point it is important to verify that there are a storage account and a virtual network on which to attach the virtual machines:

Figure 11 – Target Preparation

The next step is where you specify which replication policy to associate with the site. If there are no previously created policies, you can configure a new one by setting the parameters best suited to your environment:

Figure 12 – Setting replication policy

Figure 13 – Replication policy Association

The last steps of the infrastructure preparation involve running the Capacity Planner, a very useful tool for estimating bandwidth and storage usage. It also allows you to evaluate a series of other aspects that you need to take into account in replication scenarios to avoid problems. The tool can be downloaded directly from the Azure Portal:

Figure 14 – Capacity planning

At this point the configuration and preparation of the infrastructure is complete, and you can continue by selecting which machines you want to replicate from the previously configured site:

Figure 15 – Enabling replication

In the next step you can select the configuration of the replicated machines in terms of Resource Group, Storage Account, and Virtual Network/Subnet:

Figure 16 – Target recovery settings

Among all the machines hosted on the Hyper-V host, select those for which you want to enable replication to Azure:

Figure 17 – VMs selection to be replicated

For each selected virtual machine, you must specify the guest operating system (Windows or Linux), which disk holds the operating system, and which data disks you want to replicate:

Figure 18 – Properties of VMs in replica

After completing all the configuration steps, the replication process will begin according to the settings configured in the specified policy:

Figure 19 – Replication steps

After the initial replication, it is recommended to verify that the virtual machine works correctly in the Microsoft Azure environment by performing a “Test Failover” (point 1 of the image below); after the appropriate checks, perform a “Planned Failover” (point 2) to have the virtual machine available and ready to be used in the production environment. When this is done, the migration of your system to Azure can be considered complete, and you can remove the replication configuration within the Recovery Services vault (point 3).

Figure 20 – Finalization of the migration process

Conclusions

With simple guided steps, Azure Site Recovery allows us to migrate systems located in our datacenter, or workloads hosted in Amazon Web Services (AWS), to Microsoft Azure easily, safely, and with minimal downtime. Remember that Azure Site Recovery functionality can be tested by activating a trial of Operations Management Suite or of Microsoft Azure.

Windows Server 2016: What's New in Failover Clustering

Very frequently, in order to ensure high availability and business continuity for critical applications and services, you need to implement a Microsoft Failover Cluster. In this article we'll delve into the main innovations introduced with Windows Server 2016 in failover clustering and analyse the advantages of adopting the latest technology.

Cluster Operating System Rolling Upgrade

Windows Server 2016 introduces an important feature that allows you to upgrade the nodes of a Hyper-V or Scale-Out File Server cluster from Windows Server 2012 R2 to Windows Server 2016 without any disruption and without stopping the hosted workloads.

The upgrade process involves these steps:

  • Pause the node that you want to update and move all the virtual machines, or the other workloads, to the other nodes of the cluster
  • Remove the node from the cluster and perform a clean installation of Windows Server 2016
  • Add the Windows Server 2016 node to the existing cluster. From this moment the cluster runs in mixed mode, with both Windows Server 2012 R2 nodes and Windows Server 2016 nodes. In this regard it is worth specifying that the cluster will continue to provide services at the Windows Server 2012 R2 level, and the features introduced in Windows Server 2016 will not yet be available. At this stage you can add and remove both Windows Server 2012 R2 nodes and Windows Server 2016 nodes
  • Upgrade all the remaining cluster nodes in the same way as described above
  • Only when all the cluster nodes have been upgraded to Windows Server 2016 can you raise the cluster functional level to Windows Server 2016. This operation is not reversible, and to complete it you must use the PowerShell cmdlet Update-ClusterFunctionalLevel. After you run this command you can reap all the benefits introduced in Windows Server 2016 described below
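Assuming the FailoverClusters PowerShell module is available on the nodes, the per-node flow above can be sketched with the following commands (node and cluster names are placeholders):

```powershell
# 1. Drain the node to be upgraded: its roles are moved to the other nodes
Suspend-ClusterNode -Name "Node1" -Drain

# 2. Evict the node from the cluster, then perform a clean installation
#    of Windows Server 2016 on it
Remove-ClusterNode -Name "Node1"

# 3. Join the freshly installed node back to the cluster (mixed mode)
Add-ClusterNode -Name "Node1" -Cluster "Cluster01"

# Check the current functional level of the cluster
Get-Cluster | Format-List ClusterFunctionalLevel

# 4. Only after ALL nodes run Windows Server 2016: raise the functional
#    level (this operation is irreversible)
Update-ClusterFunctionalLevel
```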

Cloud Witness

Windows Server 2016 introduces the ability to configure the cluster witness directly in the Microsoft Azure cloud. The Cloud Witness, just like the other types of witness, provides a vote and participates in the quorum arbitration calculation.


Figure 1 – Cloud Witness in Failover Cluster Manager

Configuring the Cloud Witness involves two simple steps:

  • Creating, in an Azure subscription, a Storage Account that will be used by the Cloud Witness
  • Configuring the Cloud Witness in one of the following ways:
    • PowerShell (the Set-ClusterQuorum cmdlet)
    • Failover Cluster Manager


Figure 2 – Cloud Witness Configuration Step 1


Figure 3 – Cloud Witness Configuration Step 2

 


Figure 4 – Cloud Witness Configuration Step 3

The use of the Cloud Witness brings the following benefits:

  • Leverages Microsoft Azure, eliminating the need for an additional separate data center for certain cluster configurations
  • Works directly with Microsoft Azure Blob Storage, removing the administrative effort otherwise required to maintain a virtual machine in a public cloud
  • The same Microsoft Azure Storage Account can be used for multiple clusters
  • Given the small amount of data written to the Storage Account, the cost of the service is negligible

Site-Aware Failover Clusters

Windows Server 2016 introduces the concept of site-aware failover clusters, with the ability to group the nodes of a stretched cluster based on their geographical location (site). During the lifetime of a site-aware cluster, placement policies, the heartbeat between nodes, failover operations, and the quorum calculation are designed and improved for this particular cluster configuration. For more details, I invite you to consult the article Site-aware Failover Clusters in Windows Server 2016.

Multi-domain and workgroup Cluster

In Windows Server 2012 R2 and in previous versions of Windows, all the nodes of a cluster must belong to the same Active Directory domain. Windows Server 2016 removes these barriers and provides the ability to create a failover cluster without Active Directory dependencies.

Windows Server 2016 supports the following configurations:

  • Single-domain Cluster: clusters where all nodes are in the same domain
  • Multi-domain Cluster: cluster composed of nodes joined to different Active Directory domains
  • Workgroup Cluster: cluster with nodes in workgroup (not joined to a domain)
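A workgroup cluster can be created with the FailoverClusters PowerShell module; a minimal sketch, assuming a matching local administrative account exists on every node and that each node already has a primary DNS suffix configured (all names, passwords, and addresses below are placeholders):

```powershell
# On EVERY node: create the same local administrative user...
net user clustadm "P@ssw0rd!" /add
net localgroup administrators clustadm /add

# ...and allow remote management with local accounts
New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System `
    -Name LocalAccountTokenFilterPolicy -Value 1 -PropertyType DWord -Force

# Create the workgroup cluster with a DNS administrative access point
New-Cluster -Name "WgCluster" -Node "node1", "node2" `
    -AdministrativeAccessPoint DNS -StaticAddress 192.168.1.50
```

The -AdministrativeAccessPoint DNS option is what removes the Active Directory dependency: the cluster name is registered only in DNS, not as a computer object in a domain.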

In this regard, it is worth specifying the supported workloads and their limitations for multi-domain and workgroup clusters:

Cluster workload | Support | Details/Reason
SQL Server | Supported | SQL Server authentication is recommended.
File Server | Supported, but not recommended | Kerberos authentication (not available in these environments) is the recommended authentication protocol for Server Message Block (SMB) traffic.
Hyper-V | Supported, but not recommended | Live Migration is not supported; only Quick Migration is available.
Message Queuing (MSMQ) | Not supported | Message Queuing stores properties in AD DS.

Diagnostic in Failover Clustering

In Windows Server 2016, several innovations have been introduced to facilitate troubleshooting when problems arise in a cluster environment.

SMB Multichannel and Multi-NIC Cluster Network

Windows Server 2016 includes several networking features for the clustered environment that ease configuration and provide better performance.

The main benefits introduced in Windows Server 2016 can be summarised in the following points:

  • SMB Multichannel is enabled by default
  • The failover cluster automatically recognizes the NICs attached to the same subnet and the same switch
  • A single IP Address resource is configured for each Cluster Access Point (CAP) Network Name (NN)
  • Networks with only IPv6 link-local addresses (fe80) are recognized as private (cluster-only) networks
  • Cluster validation no longer reports warning messages when multiple NICs are attached to the same subnet
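These behaviors can be observed from PowerShell on a cluster node; a small sketch using standard SmbShare and FailoverClusters cmdlets:

```powershell
# Verify that SMB Multichannel connections are active for cluster traffic
Get-SmbMultichannelConnection

# List the cluster networks with their role and address range
Get-ClusterNetwork | Format-Table Name, Role, Address

# Show how the NICs of each node map to the cluster networks
Get-ClusterNetworkInterface | Format-Table Node, Network, Name
```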

For more information I refer you to the Microsoft documentation: Simplified SMB Multichannel and Multi-NIC Cluster Networks.

Conclusions

Windows Server 2016 introduces major changes in the Failover Clustering making the solution more flexible and opening up new configuration scenarios. Furthermore the upgrade process allows us to easily update existing clusters to take advantage of all the benefits introduced by Windows Server 2016 for different workloads.

OMS Azure Site Recovery: solution overview

Having an adequate business continuity and disaster recovery strategy, one that helps keep applications running and restores normal working conditions both during planned maintenance activities and after unplanned outages, is crucial.

Azure Site Recovery facilitates the implementation of these strategies by orchestrating the replicas of the virtual machines and physical servers present in your data center. You have the option of replicating servers and virtual machines that reside in a local primary data center to the cloud (Microsoft Azure) or to a secondary data center.

If you experience interruptions in the primary data center, you can initiate a failover process to keep workloads accessible and available. When it becomes possible to use the resources in the primary data center again, you can handle the failback process.

Replication scenarios

Azure Site Recovery covers the following replication scenarios:

  • Hyper-V virtual machine replication

In this scenario, if the Hyper-V virtual machines are managed by System Center Virtual Machine Manager (VMM), replication is possible both to a secondary data center and to Microsoft Azure. If the virtual machines are not managed through VMM, replication is possible only to Microsoft Azure.

  • Replication of VMware virtual machines

The virtual machines on VMware can be replicated both to a secondary data center (using the InMage Scout data channel) and to Microsoft Azure.

  • Replication of physical servers Windows and Linux

The physical servers can be replicated both to a secondary data center (using the InMage Scout data channel) and to Microsoft Azure.

Figure 1 – Replication scenarios of ASR

Azure Site Recovery configuration

The following table lists the documents with the specifications to follow in order to configure Azure Site Recovery in the different scenarios:

Type of systems to be replicated | Replication target
VMware virtual machines | Microsoft Azure or a secondary data center
Hyper-V virtual machines managed in VMM clouds | Microsoft Azure or a secondary data center
Hyper-V virtual machines managed in VMM clouds, with storage on SAN | Secondary data center
Hyper-V virtual machines without VMM | Microsoft Azure
On-premises Windows/Linux physical servers | Microsoft Azure or a secondary data center

 

The main advantages in adopting Azure Site Recovery

After reviewing what you can do with Azure Site Recovery and which steps to follow to implement recovery plans, these are some of the major benefits that you can obtain by adopting this solution:

  • The tools of Azure Site Recovery simplify the process of creating business continuity and disaster recovery plans. Recovery plans can include runbooks and scripts from Azure Automation, so you can shape and customize the DR procedures even for complex application architectures.
  • You can have a high degree of flexibility thanks to the potential of the solution, which enables you to orchestrate replicas of physical servers and of virtual machines running on Hyper-V and VMware.
  • With the ability to replicate workloads directly to Azure, in some cases you may be able to completely eliminate a secondary data center maintained just for business continuity and disaster recovery.
  • You have the option to periodically perform failover tests to validate the effectiveness of the recovery plans implemented, without any impact on the production application environment.
  • It is possible to integrate ASR with other BCDR technologies already present in the company (for example SQL Server AlwaysOn or SAN replication).

 

Types of Failover on Azure Site Recovery

After creating a recovery plan, you can perform different types of failover. The following table lists the various types of failover, and for each one specifies its purpose and the action that triggers the execution process.

Conclusions

Azure Site Recovery is a powerful and flexible solution for creating business continuity and disaster recovery strategies for your data center, able to orchestrate and manage complex and heterogeneous infrastructures. All this makes ASR an appropriate tool for most environments. Those wishing to explore Azure Site Recovery features in the field can activate a trial of Operations Management Suite or of Microsoft Azure.