Category Archives: Operations Management Suite

How to connect third-party security solutions to OMS

Among the various features of Operations Management Suite (OMS) is the ability to collect events generated in the standard Common Event Format (CEF), as well as events generated by Cisco ASA devices. Many security vendors produce events and log files that follow the CEF syntax to ensure interoperability with other solutions. By configuring these products to send data in this format to OMS and adopting the OMS Security and Audit solution, you can correlate the information collected, leverage the powerful OMS search engine to monitor your infrastructure, retrieve audit information, detect problems, and use Threat Intelligence.
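As a quick illustration of the format, a CEF event is a single syslog line whose header fields are pipe-separated, followed by key=value extensions. The sample event below is invented for illustration and does not come from a real Cisco ASA:

```shell
# Split the pipe-separated CEF header of a sample event (illustrative data).
# Header layout: CEF:Version|Device Vendor|Device Product|Device Version|
#                Signature ID|Name|Severity|Extension
line='CEF:0|Cisco|ASA|9.1|106023|Deny tcp|4|src=10.0.0.1 dst=10.0.0.2 dpt=443'
echo "$line" | awk -F'|' '{printf "Vendor: %s  Product: %s  Severity: %s\n", $2, $3, $7}'
```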

This article describes the steps required to integrate the logs generated by a Cisco Adaptive Security Appliance (ASA) into OMS. Before configuring this integration you need a Linux machine with the OMS agent installed (version 1.2.0-25 or later), configured to forward the logs it receives to the OMS workspace. For the installation and onboarding of the Linux agent, refer to the official Microsoft documentation: Steps to install the OMS Agent for Linux.

Figure 1 – Architecture for collecting logs from Cisco ASA in OMS

The Cisco ASA appliance must be configured to forward events to the Linux machine designated as the collector. To do this you can use Cisco ASA management tools such as Cisco Adaptive Security Device Manager:

Figure 2 – Example syslog server configuration on a Cisco ASA

The Linux machine must run a syslog daemon that forwards events to local UDP port 25226, where the OMS agent listens for all incoming events.

For this configuration you must create the file security-config-omsagent.conf, following the specifications below depending on the syslog daemon running on the Linux machine. For example, a sample configuration that sends all events with facility local4 to the OMS agent is as follows:

  • If the daemon is rsyslog, the file must be placed in the directory /etc/rsyslog.d/ with the following content:
#OMS_facility = local4

local4.* @127.0.0.1:25226
  • If the daemon is syslog-ng, the file must be placed in the directory /etc/syslog-ng/ with the following content:
#OMS_facility = local4

filter f_local4_oms { facility(local4); };

destination security_oms { tcp("127.0.0.1" port(25226)); };

log { source(src); filter(f_local4_oms); destination(security_oms); };
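The rsyslog variant above can be put in place with a short script like the following; it is only a sketch. CONF_DIR defaults to a local directory so the snippet can be tried without root, while on a real collector it would be /etc/rsyslog.d (followed by a restart of the daemon):

```shell
# Create the rsyslog drop-in that forwards facility local4 to the OMS agent.
# CONF_DIR is overridable so the sketch can run without root privileges.
CONF_DIR="${CONF_DIR:-./rsyslog.d}"
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/security-config-omsagent.conf" <<'EOF'
#OMS_facility = local4
local4.* @127.0.0.1:25226
EOF
cat "$CONF_DIR/security-config-omsagent.conf"
```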

The next step is to create the Fluentd configuration file named security_events.conf, which collects and parses the events received by the OMS agent. The file can be downloaded from the GitHub repository and must be copied into the directory /etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/.

Figure 3 – Fluentd configuration file of the OMS agent

At this point, for the changes to take effect, you must restart the syslog daemon and the OMS agent with the following commands:

  • Restart the syslog daemon:
sudo service rsyslog restart (or sudo /etc/init.d/syslog-ng restart)
  • Restart the OMS agent:
sudo /opt/microsoft/omsagent/bin/service_control restart

Once these steps are complete, you should check the OMS agent log for errors using the command:

tail /var/opt/microsoft/omsagent/<workspace id>/log/omsagent.log
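A quick way to check the agent log only for recent problems (the error strings to look for are typical examples, not an exhaustive list):

```
# Show only warnings/errors from the last 100 lines of the agent log.
# Replace <workspace id> with your workspace GUID.
tail -n 100 "/var/opt/microsoft/omsagent/<workspace id>/log/omsagent.log" | grep -iE 'error|warn'
```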

After finishing the configuration, from the OMS portal you can run the Log Search query Type=CommonSecurityLog to analyze the data collected from the Cisco ASA:

Figure 4 – Query to see the Cisco ASA events collected in OMS
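The query can be refined with the usual Log Search syntax; for example (the field names below are standard CommonSecurityLog fields, shown here as illustrative examples):

```
Type=CommonSecurityLog
Type=CommonSecurityLog DeviceVendor=Cisco
Type=CommonSecurityLog | measure count() by DeviceAddress
```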

Log collection is enriched by the Threat Intelligence included in the Security & Compliance solution. Thanks to near-real-time correlation of the data collected in the OMS repository with information from leading Threat Intelligence vendors and with data provided by the Microsoft security centers, it allows you to identify the nature and outcome of any attacks involving your systems, including network equipment.

By accessing the Security and Audit solution from the OMS portal, the Threat Intelligence section appears:

Figure 5 – Threat Intelligence information

By selecting the Detected threat types tile you can see details about intrusion attempts, which in the following case involve the Cisco ASA:

Figure 6 – Detected threat on Cisco ASA

This article covered the configuration details for Cisco ASA, but similar configurations can be made for all solutions that support generating events in the standard Common Event Format (CEF). To configure the integration of Check Point Security Gateway with OMS, refer to the document Configuring your Check Point Security Gateways to send logs to Microsoft OMS.

Conclusions

Operations Management Suite lets you consolidate and correlate events from different security products, giving you a complete overview of your infrastructure and allowing you to respond quickly and accurately to any security incident.

OMS Azure Backup: new reporting functionality via Power BI

Azure Backup recently introduced the ability to generate reports to easily check the protection status of resources, the details of the configured backup jobs, actual storage utilization, and the status of alerts. Particularly interesting is the ability to generate reports across subscriptions and across Azure vaults. All this is made possible by Power BI, which offers a high degree of flexibility in generating and customizing reports. To see the benefits of this feature and evaluate how to analyze the data through Power BI, see the post "Gain business insights using Power BI reports for Azure Backup". In this article we will analyze the steps required to configure Azure Backup reporting.

Logging in from the Azure portal to the Recovery Services vault that contains the protected resources, a message announces the availability of this new feature:

Figure 1 – Notice of availability of the new reporting capabilities in Recovery Services vault

To enable the functionality, go to the settings of the Recovery Services vault and, under "Monitoring and Reports", select "Backup Reports":

Figure 2 – Configuring reports

Selecting the "Configure" button starts a configuration process that consists of two distinct steps. In the first step you must select the Storage Account on which the information needed to generate the reports will be stored. There is also the option to send this information to a Log Analytics workspace. For each type of log you can select the retention period, which applies only to the information stored on the Storage Account and not to the data sent to the OMS workspace. Setting the retention period to 0 days means the data is never removed from the Storage Account.

Figure 3 – Step 1: Diagnostics configuration

In the storage account, a dedicated container named insights-logs-azurebackupreport is created to save the Azure Backup logs:

Figure 4 – Storage container for the Azure Backup logs

The second step of the setup requires access to the Power BI portal and the addition of the Azure Backup content pack, performing the following steps:

Figure 5 – Adding Azure Backup Content Pack from the portal Power BI

At this point you must enter the name of the Storage Account on which, during step 1 of the configuration, you chose to save the backup information.

Figure 6 – Storage Account name

In the next step, authentication is required using the storage account access key:

Figure 7 – Insert key to authenticate the Storage Account

After you complete these steps, from the Power BI portal you can use the dashboard to consult all the information relating to Azure Backup and, if necessary, customize the reports to suit your needs.

Figure 8 – Azure Backup dashboard in the Power BI portal

If you chose to send the diagnostic information to Log Analytics, you can access the OMS portal and query the repository to retrieve information about Azure Backup:

Figure 9 – Search for events of Azure Backup in Log Analytics
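A Log Search query of the kind shown in the figure should look like the following; the category name matches the storage container created in step 1, but treat the exact type and category names as assumptions to verify against your own workspace:

```
Type=AzureDiagnostics Category=AzureBackupReport
```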

Using the View Designer, you can also build a custom view of OMS using data from Azure Backup collected in Log Analytics.

 

Conclusions

This new feature gives you complete control of your backup infrastructure and its status, making it easy to keep tabs on SLAs and corporate compliance. All in a simple way, using a fully managed cloud solution: all you need is an Azure subscription, a storage account, and Power BI. Analyzing the collected data through Power BI is also extremely flexible thanks to the extensive customization capabilities of the tool.

OMS Log Analytics: the Update Management solution for Linux systems

Using the Update Management solution of Operations Management Suite (OMS) you can centrally manage and control the update status of systems in heterogeneous environments, both Windows and Linux, regardless of whether they are located on-premises or in the cloud. In this article we explore the aspects of the solution that concern Linux systems.

The Update Management solution allows you to quickly assess the status of available updates on all servers with the OMS agent installed and can start the process of installing the missing updates. Linux systems configured to use this solution require, in addition to the OMS agent, PowerShell Desired State Configuration (DSC) for Linux and the Hybrid Runbook Worker (installed automatically).

The solution currently supports the following Linux distributions:

  • CentOS 6 (x86/x64) and CentOS 7 (x64).
  • Red Hat Enterprise Linux 6 (x86/x64) and Red Hat Enterprise Linux 7 (x64).
  • SUSE Linux Enterprise Server 11 (x86/x64) and SUSE Linux Enterprise Server 12 (x64).
  • Ubuntu 12.04 LTS and later (x86/x64).

In addition, to work correctly the Linux system needs access to an update repository. In this regard it is worth noting that at the moment it is not possible from OMS to select which updates to apply: all the updates available from the update repository configured on the machine are applied. To have more control over the updates to apply, you may consider using a custom update repository that contains only the updates you want to approve.

The following diagram shows the flow of operations carried out by the solution to assess the compliance status of the systems against the OMS workspace and to apply the missing updates:

Figure 1 – Flow of operations performed on Linux systems

  1. The OMS agent for Linux scans every 3 hours to detect missing updates and reports the outcome of the scan to the OMS workspace.

Figure 2 – OMS Dashboard Update Management solution

  2. The operator, using the OMS dashboard, can review the update assessments and define the schedule for the deployment of updates:

Figure 3 – Management of Update Deployment

Figure 4 – OMS Dashboard Update Management solution

When creating an Update Deployment you define a name, the list of systems involved, which can be provided explicitly or through a Log Analytics query, and the schedule.

  3. The Hybrid Runbook Worker component running on the Linux systems checks the maintenance windows and the availability of any deployments to apply. In this regard, note that when you enable the Update Management solution, every Linux system connected to the OMS workspace is automatically configured as a Hybrid Runbook Worker in order to run the runbooks created to deploy updates. Each system managed by the solution also appears as a Hybrid Runbook Worker Group within the OMS Automation Account, following the naming convention Hostname_GUID:

Figure 5 – Hybrid Worker Groups

  4. If a machine has an Update Deployment (as a direct member or because it belongs to a specific computer group), the package manager (Yum, Apt, Zypper) is started on it to install the updates. The installation of updates is driven by OMS through specific runbooks within Azure Automation. These runbooks are not visible in Azure Automation and require no configuration by the administrator.

Figure 6 – Azure Automation Account used by the solution of Update Management
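For reference, these are the kinds of package-manager invocations involved at this step; the exact commands and flags used by the solution's runbooks are not documented here, so treat them as illustrative:

```
apt-get -y install <package>                  # Ubuntu
yum -y install <package>                      # CentOS / Red Hat Enterprise Linux
zypper --non-interactive install <package>    # SUSE Linux Enterprise Server
```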

  5. After applying the updates, the OMS agent for Linux reports the updated status of the Update Deployment and the compliance status back to the OMS workspace.

Conclusions

Microsoft Operations Management Suite is a tool that lets you manage and monitor heterogeneous environments. Even today, unfortunately, there is debate about the real need to keep Linux systems regularly updated, but considering some recent security incidents caused by outdated systems, it is clear that it is good to have a solution for managing updates on Linux machines. The Update Management solution of OMS is constantly evolving, but already today it allows us to control and manage the distribution of updates to Linux systems in a simple and efficient way.

For more details, see Microsoft's official documentation for the OMS Update Management solution.

To further explore this and other features, you can activate OMS for free.

 

OMS Security: introducing the Antimalware Assessment solution

Microsoft Operations Management Suite (OMS) offers an interesting solution named Antimalware Assessment, with which you can monitor the status of anti-malware protection across the entire infrastructure and easily detect potential threats.

In order to use the Antimalware Assessment solution you must subscribe to the "Security & Compliance" offer of OMS. The solution can be installed by following the procedure described at the beginning of the article OMS Security: Threat Intelligence or by going directly to the Azure Marketplace. After activation, OMS requires no further configuration and the solution is ready to use.

Thanks to an easy-to-navigate dashboard, the solution shows in near real time the systems without active antimalware protection and can report an antimalware status in OMS for the following products:

  • Windows Defender on Windows 8, Windows 8.1, Windows 10 and Windows Server 2016.
  • Windows Security Center (WSC) on Windows 8, Windows 8.1, Windows 10 and Windows Server 2016.
  • System Center Endpoint Protection (version 4.5.216 or later).
  • The Antimalware extension and the Windows Malicious Software Removal Tool (MSRT) enabled on VMs in Azure.
  • Symantec Endpoint Protection 12.x and 14.x.
  • Trend Micro Deep Security 9.6.

At the moment the solution detects installations of only some third-party products, such as Symantec and Trend Micro, but this list is likely to grow.

On the systems monitored by OMS, a security assessment is performed by checking the status of the antimalware product, verifying that scans are performed on a regular basis, and checking that the signatures in use are no more than seven days old.

The OMS portal home page shows the Antimalware Assessment tile, which summarizes the antimalware status of the infrastructure:

Figure 1 – Antimalware Assessment tile

Selecting this tile leads to the Antimalware Assessment solution dashboard, which categorizes the information collected into 4 different tiles:

  • Threat Status
  • Detected Threats
  • Protection Status
  • Type of Protection

Figure 2 – Antimalware Assessment dashboard

The first two tiles focus on detected infections, showing the type of malware intercepted and the infected systems, and highlighting situations where the antimalware was not able to clean the system from the infection.

Selecting the infected machine or the name of the malware takes you to the Log Search page, where you can see the details of the detected threat:

Figure 3 – Details of the threat detected
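The same data can also be reached directly from Log Search; the records of this solution are exposed as a dedicated type. The type and field names below are a best-effort recollection, so verify them in your own workspace:

```
Type=ProtectionStatus
Type=ProtectionStatus Computer="<machine name>"
```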

Selecting the View link next to the name of the threat takes you to the Microsoft malware encyclopedia:

Figure 4 – Search inside the Microsoft malware encyclopedia

By selecting the name of the malware you can consult the entry with all the details about the infection:

Figure 5 – Entry with malware information

The remaining tiles show useful information on the state of the infrastructure's security:

  • Which machines are not protected and why (agent disabled, signatures not updated, or no recent scan performed), so you can take corrective action.
  • The list of antimalware solutions detected on the machines.

From these tiles you can easily drill down to see the list of affected machines, such as the list of machines without real-time protection enabled:

Figure 6 – Machines with no real time protection

Conclusions

Being able to count on a tool that can quickly identify systems with insufficient antimalware protection or machines compromised by malware is crucial to mitigate attempts to compromise corporate data and avoid major security incidents. In addition to these features, Microsoft Operations Management Suite (OMS) includes other important solutions in this area, making it a great tool to ensure the security and compliance of your infrastructure. To further explore this and other features, you can try OMS for free.

OMS Log Analytics: How to monitor Azure networking

Log Analytics includes specific solutions that allow you to monitor some components of the network infrastructure in Microsoft Azure.

Among these solutions is Network Performance Monitor (NPM), which was covered in the article Monitor network performance with the new solution of OMS and which lends itself very well to monitoring the health, availability and reachability of Azure networking. The following solutions, which enrich the monitoring capabilities on the OMS side, are also currently available in the Operations Management Suite gallery:

  • Azure Application Gateway Analytics
  • Azure Network Security Group Analytics

Enabling Solution

By accessing the OMS portal you can easily add these solutions from the gallery by following the steps documented in the following article: Add Azure Log Analytics management solutions to your workspace (OMS).

Figure 1 – Azure Application Gateway Analytics solution

Figure 2 – Azure Network Security Group Analytics solution

Azure Application Gateway Analytics

The Azure Application Gateway is a service that you can configure in an Azure environment to provide application delivery functionality, ensuring load balancing at application layer 7. For more information about the Azure Application Gateway, see the official documentation.

In order to collect the diagnostic logs in Log Analytics, in the Azure portal go to the Application Gateway resource that you want to monitor and, under Diagnostics logs, configure the sending of the logs to the desired Log Analytics workspace:

Figure 3 – Application Gateway Diagnostics settings

For the Application Gateway you can enable the collection of the following logs:

  • Access logs
  • Performance data
  • Firewall logs (if the Application Gateway has the Web Application Firewall enabled)
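In the Azure diagnostics settings, these three options correspond to the following log categories (the names as exposed by the platform at the time of writing; verify them against your subscription):

```
ApplicationGatewayAccessLog
ApplicationGatewayPerformanceLog
ApplicationGatewayFirewallLog
```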

After completing these simple steps, the installed solution processes the data sent by the platform:

Figure 4 – Azure Application Gateway Analytics overview in the OMS portal

Within the solution you can view a summary of the collected information and, by selecting the individual charts, access details about the following categories:

  • Application Gateway access log
    • Client and server errors in the Application Gateway access log
    • Requests received by the Application Gateway per hour
    • Failed requests per hour
    • Errors detected per user agent

Figure 5 – Application Gateway Access log

  • Application Gateway performance
    • Health status of the hosts behind the Application Gateway
    • Failed requests of the Application Gateway, expressed as a maximum and as the 95th percentile

Figure 6 – Application Gateway performance

  • Application Gateway Firewall log

 

Azure Network Security Group Analytics

In Azure you can control network communication via Network Security Groups (NSGs), each of which aggregates a set of rules (ACLs) to allow or deny network traffic based on direction (inbound or outbound), protocol, source address and port, and destination address and port. NSGs are used to control and protect virtual networks or network interfaces. For all the details about NSGs, please see Microsoft's official documentation.
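To make the rule model concrete, an NSG rule in its ARM representation looks roughly like the following: an illustrative allow-inbound-HTTPS rule with the field set abbreviated. Priority decides the evaluation order, and the lowest value wins:

```
{
  "name": "Allow-HTTPS-Inbound",
  "properties": {
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "*",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRange": "443"
  }
}
```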

In order to collect the diagnostic logs of a Network Security Group in Log Analytics, in the Azure portal go to the Network Security Group resource that you want to monitor and, under Diagnostics logs, configure the sending of the logs to the desired Log Analytics workspace:

Figure 7 – Enabling NSG Diagnostics

Figure 8 – Diagnostic configuration NSG

For a Network Security Group you can collect the following types of logs:

  • Events
  • Counters related to the rules

At this point, on the OMS portal home page, you can select the overview tile of the Azure Network Security Group Analytics solution to access the NSG data collected by the platform:

Figure 9 – Azure Network Security Group Analytics Overview OMS Portal

The solution provides a summary of the collected logs, splitting them into the following categories:

  • Network security group blocked flows
    • Network security group rules with blocked traffic
    • MAC addresses with blocked traffic

Figure 10 – Network security group blocked flows

  • Network security group allowed flows
    • Network security group rules with allowed traffic
    • MAC addresses with allowed traffic

Figure 11 – Network security group allowed flows

The method of sending the diagnostic logs of the Azure Application Gateway and Network Security Group to Log Analytics has changed recently, introducing the following advantages:

  • Logs are written to Log Analytics directly, without using a storage account as a repository. You can still choose to save the diagnostic logs to a storage account, but it is not necessary in order to send the data to OMS.
  • The latency between the generation of the logs and their availability in Log Analytics has been reduced.
  • The configuration steps have been greatly simplified.
  • All Azure diagnostics now use a consistent format.

Conclusions

Thanks to a more complete integration between Azure and Operations Management Suite (OMS), you can monitor and control the status of the network infrastructure components built on Azure comprehensively and effectively, all with simple, intuitive steps. This integration of the Azure platform with OMS is surely destined to be enriched with new solutions for other components. For those interested in exploring this and other OMS features further, remember that you can try OMS for free.

Monitor network performance with the new solution of OMS

In this article we will see how the new OMS solution called Network Performance Monitor (NPM) works and what its main features are. This solution can check the status of your network even in hybrid architectures, allowing you to quickly identify which network segment or device is causing, or has caused, outages or network performance problems at any given time. This new service makes network monitoring application-centric, which sets it apart from conventional monitoring solutions on the market, which tend to focus on the control of network devices.

Figure 1 – Overview of solution NPM

Using the Network Performance Monitor solution of OMS you can have total visibility into the availability, latency and performance of your network infrastructure. The activation process and operation are as follows:

  • From the OMS portal, add the "Network Performance Monitor (NPM)" solution from the OMS solution gallery. To do so you can follow the steps documented in the following article: Add Azure Log Analytics management solutions to your workspace (OMS)
  • The solution requires the OMS agent installed on machines on each subnet that you want to monitor. This is the traditional OMS agent; no additional component needs to be installed.
  • The machines running the OMS agent download from OMS the Network Monitoring Intelligence Pack, which is used to detect the subnet on which the machine resides and upload this information to the OMS workspace.
  • The agent in turn retrieves the network configurations from OMS and sends probes to detect packet loss and measure network performance. Network Performance Monitor (NPM) uses synthetic transactions to calculate how many packets are lost and the latency measured on the various network links. The probe packets exchanged between the various OMS agents to assess and monitor the status of the network can be TCP (TCP SYN packets followed by a TCP handshake) or ICMP (ICMP ECHO messages like those generated by the traditional ping utility). Using the ICMP protocol for the probes is useful in environments where, because of certain restrictions, network devices cannot respond to TCP probes.
  • All data is sent to the OMS workspace and aggregated to show the network status in clear and understandable terms. The Topology Map provides a graphical view of all the network paths that exist between the various endpoints, helping you quickly locate network problems. The topology map is interactive and lets you drill down on the various network links to see hop-by-hop topology details. You can also set filters based on the health status of the links, zoom in on network segments, and customize the topology.

Figure 2 – Network Topology
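The TCP probe described above can be mimicked crudely from the shell by timing a TCP connect to a target. This is only an analogy: real NPM probes run between OMS agents and also measure packet loss, and the host and port used here are examples:

```shell
# Time a single TCP connect, a rough stand-in for an NPM TCP probe.
host=127.0.0.1; port=22
start=$(date +%s%N)
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  end=$(date +%s%N)
  msg="connect latency: $(( (end - start) / 1000000 )) ms"
else
  msg="probe failed (no TCP handshake)"
fi
echo "$msg"
```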

The main features of the solution are as follows:

  • The solution is agnostic with respect to network devices and their vendors and can monitor any IP network.
  • The solution can monitor connectivity between:
    • Data centers located at different sites and connected via public or private networks.
    • Public clouds such as Azure and AWS, on-premises networks, and user workstations.
    • Virtual networks in public clouds and on-premises.

      Figure 3 – Components monitored by solution NPM

  • NPM helps identify accurately and in detail the network path that is causing a malfunction or performance degradation, regardless of the complexity of the network or the monitoring model adopted:

    Figure 4 – Monitoring model

  • Thanks to a feature called Network State Recorder, you can not only see the current health status of the network, but also evaluate it at a given time in the past, which is useful for investigating reports of transient issues.

    Figure 5 – Network State Recorder

  • Using the alerting functionality included in OMS, you can configure e-mail alerts for the problems detected by the NPM solution. You can also trigger remediation actions through runbooks or set up webhooks to integrate with an existing service management solution.

    Figure 6 – NPM alerting

  • The solution supports not only Windows Server: the agent also works on client operating systems (Windows 10, Windows 8.1, Windows 8 and Windows 7), and there is also support for Linux operating systems (servers and workstations).

Regarding the cost and licensing model, the Network Performance Monitor (NPM) solution is part of OMS Insight & Analytics. On the page Prices for Microsoft Operations Management Suite you will find all the details related to OMS pricing.

 

Conclusions

In IT environments with increasingly complex architectures, it is useful to have a tool that effectively monitors the status of your network and allows you to isolate the source of any problem with precision. Using the Network Performance Monitor (NPM) solution of OMS you have full visibility of the network even in hybrid architectures and you can act proactively to identify potential problems. NPM is a suitable tool not only for network administrators: thanks to its features, it can be very useful and easy to use even for those who manage infrastructure and applications. For those interested in exploring this and other OMS features further, remember that you can try OMS for free. For more information about the Network Performance Monitor (NPM) solution, you can see the official documentation.

OMS Security: Threat Intelligence

Among the various features offered by Operations Management Suite (OMS) is the ability to activate the solution called Security & Compliance, which identifies, evaluates and mitigates potential security risks on our systems. The solution can be turned on easily with just a few steps:

  1. Log into the OMS portal and select the "Solutions Gallery" tile

Figure 1 – Step 1: activating solution Security & Compliance

  2. Among the various solutions offered, add "Security & Compliance", which currently includes the solutions "Antimalware Assessment" and "Security and Audit"

Figure 2 – Step 2: activating solution Security & Compliance

  3. Select the OMS workspace and press the Create button; the solution is added and made available for use

Figure 3 – Step 3: activating solution Security & Compliance

After the activation of the solution, OMS will connect to the systems with the agent installed to perform a security assessment, which may initially take up to several hours, and then return the processed data in the portal. The solution can examine both Windows and Linux machines and helps protect the infrastructure whether it is on-premises or in the cloud. In this article we'll delve into how the Threat Intelligence mechanism works.

Figure 4 – Architecture Threat Intelligence

Threat Intelligence plays a vital role in the security scope of the OMS solution thanks to near-real-time correlation of the data collected in the OMS repository with information from leading Threat Intelligence vendors and with data provided by the Microsoft security centers. Let us not forget that Microsoft works constantly to protect its cloud services and therefore has unique and widespread visibility into the threats that can potentially affect our systems. By providing this functionality, Microsoft enables its customers to easily benefit from this knowledge to protect resources, detect attacks and respond to them quickly, without having to resort to complex integration scenarios.

Threat Intelligence can provide the following information, enabling security teams to take the necessary actions and understand the possible level of compromise of their systems:

  • Detects the nature of the attack
  • Determines the intent of the attack, useful to understand whether it is an attack targeted at your organization to acquire specific information or a random, massive attack
  • Identifies where the attack comes from
  • Intercepts any compromised systems and reports the servers generating traffic considered malicious toward the outside
  • Reports which files may have been accessed

To access the Threat Intelligence information, in the main OMS portal dashboard select the "Security and Audit" tile:

Figure 5 – Tile Security and Audit

On the "Security and Audit" dashboard you will find the Threat Intelligence section:

Figure 6 – Information of Threat Intelligence

The Servers with outbound malicious traffic tile reports the monitored server systems that are generating malicious traffic toward the Internet. If systems are reported in this tile, remediation actions should be undertaken immediately.

The Detected threat types tile shows a summary of the threats detected recently:

Figure 7 – Tile Detected threat types

By selecting the tile you can also obtain more details about the detected threats:

Figure 8 – Details about the threat detected

Threat Intelligence also provides a map of the attacks, which enables you to quickly identify from which parts of the globe they originate. Orange arrows indicate incoming malicious traffic, while red arrows indicate malicious traffic outbound to certain locations. By selecting a specific arrow you will get more details about the source of the attack:

Figure 9 – Threat Intelligence map

Conclusions

Detecting potential attacks and responding quickly and effectively to security incidents that occur in your environment is crucial. By activating the "Security & Compliance" solution of Microsoft Operations Management Suite (OMS), you can use Threat Intelligence to enhance the effectiveness of your security strategies and have a powerful tool that can minimize the number of potential security incidents. For those interested in further exploring this and other OMS features, remember that you can try OMS for free.

How to migrate systems to Microsoft Azure using OMS Azure Site Recovery

In the article OMS Azure Site Recovery: solution overview, the characteristics of Azure Site Recovery were presented, examining the aspects that make it an effective and flexible solution for creating business continuity and disaster recovery strategies for your data center. In this article we will see how to use Azure Site Recovery to migrate even potentially heterogeneous environments to Microsoft Azure. Increasingly we face the need not only to create new virtual machines in the Microsoft public cloud, but also to migrate existing systems. To perform these migrations you can adopt different strategies, among which is Azure Site Recovery (ASR), which allows us to easily migrate virtual machines on Hyper-V and VMware, physical systems, and Amazon Web Services (AWS) workloads to Microsoft Azure.

The following table shows the migration scenarios you can handle with ASR:

Source Destination Supported Guest OS type
Hyper-V 2012 R2 Microsoft Azure All guest OSs supported in Azure
Hyper-V 2008 R2 SP1 and 2012 Microsoft Azure Windows and Linux *
VMware and physical servers Microsoft Azure Windows and Linux *
Amazon Web Services (Windows AMIs) Microsoft Azure Windows Server 2008 R2 SP1+

* Support limited to Windows Server 2008 R2 SP1+, CentOS 6.4, 6.5, 6.6, Oracle Enterprise Linux 6.4, 6.5, SUSE Linux Enterprise Server 11 SP3

When you need to perform a migration task, it is usually critically important to respect the following points:

  • Minimize downtime of production workloads during the migration process.
  • Have the opportunity to test and validate that the solution works in the target environment (Azure in this specific case) before the migration.
  • Perform a single data replication, useful both for the validation process and for the actual migration.

With ASR this is possible by following this simple flow of operations:

Figure 1 – Migration flow with ASR

 

Let us now see in detail the operations to carry out in a scenario that migrates virtual machines hosted on a Hyper-V 2012 R2 host to Microsoft Azure.

First, you have to create a Recovery Services Vault in the Azure Portal, in the subscription to which you want to migrate the virtual machines:

Figure 2 – Creating Recovery Service Vault

Afterwards you must prepare the infrastructure in order to use Azure Site Recovery. You can do all of this by following the wizard proposed by the Azure Portal:

Figure 3 – Infrastructure preparation

After declaring your migration scenario (virtual machines on Hyper-V not managed by SCVMM, to Azure), assign a name to the Hyper-V site and associate with it the Hyper-V host that holds the virtual machines:

Figure 4 – Preparing Source: step 1.1

Figure 5 – Preparing Source: step 1.2

At this point you need to install the Microsoft Azure Site Recovery Provider on the Hyper-V host. During the installation you can specify a proxy server and the vault registration key, which you need to download directly from the Azure Portal:

Figure 6 – ASR Provider installation

Figure 7 – Configuring access to the vault

Figure 8 – Proxy settings

Figure 9 – ASR vault registration

After waiting a few moments, the Hyper-V server registered with the Azure Site Recovery vault will appear in the Azure Portal:

Figure 10 – Preparing Source: step 2

The next step requires you to specify in which Azure subscription the virtual machines will be created and the deployment model (Azure Resource Manager – ARM in this case). At this point it is important to verify that there are a storage account and a virtual network to which the virtual machines can be attached:

Figure 11 – Target Preparation

The next step is where you specify which replication policy to associate with the site. If no previously created policies exist, you should configure a new policy, setting the parameters best suited to your environment:

Figure 12 – Setting replication policy

Figure 13 – Replication policy Association

The last steps of infrastructure preparation involve running the Capacity Planner, a very useful tool to estimate bandwidth and storage usage. It also allows you to evaluate a series of other aspects that you need to take into account in replication scenarios to avoid problems. The tool can be downloaded directly from the Azure Portal:

Figure 14 – Capacity planning

At this point you have completed all the configuration and preparation of the infrastructure, and you can continue by selecting which machines you want to replicate from the previously configured site:

Figure 15 – Enabling replication

In the next step you can select the configuration of the replicated machines in terms of Resource Group, Storage Account and Virtual Network – Subnet:

Figure 16 – Target recovery settings

Among all the machines hosted on the Hyper-V host, select those for which you want to enable replication to Azure:

Figure 17 – Selection of VMs to be replicated

For each selected virtual machine, you must specify the guest operating system (Windows or Linux), which disk holds the operating system and which data disks you want to replicate:

Figure 18 – Properties of VMs in replica

After completing all the configuration steps, the replication process will begin according to the settings configured in the specified policy:

Figure 19 – Replication steps

After the initial replication completes, it is recommended to verify that the virtual machine works correctly in the Microsoft Azure environment by performing a "Test Failover" (point 1 of the image below); after the appropriate checks you should perform a "Planned Failover" (point 2) to have the virtual machine available and ready to be used in the production environment. When this is done, the migration of your system to Azure can be considered complete and you can remove the replication configuration from within the Recovery Services Vault (point 3).

Figure 20 – Finalization of the migration process

Conclusions

Azure Site Recovery, with simple guided steps, allows us to migrate systems located in our datacenter, or workloads found in Amazon Web Services (AWS), to Microsoft Azure easily, safely and with minimum downtime. I remind you that the functionality of Azure Site Recovery can be tested by activating a trial of Operations Management Suite or of Microsoft Azure.

OMS Azure Site Recovery: solution overview

Having an adequate business continuity and disaster recovery strategy, which helps keep applications running and restore normal working conditions when planned maintenance activities are necessary or unplanned stoppages occur, is crucial.

Azure Site Recovery promotes the implementation of these strategies by orchestrating the replication of virtual machines and physical servers present in your data center. You have the option of replicating servers and virtual machines that reside in an on-premises primary data center to the cloud (Microsoft Azure) or to a secondary data center.

If you experience interruptions in the primary data center, you can initiate a failover process to keep workloads accessible and available. When it becomes possible again to use the resources in the primary data center, you can handle the failback process.

Replication scenarios

The following replication scenarios are covered by Azure Site Recovery:

  • Hyper-V virtual machine replication

In this scenario, if the Hyper-V virtual machines are managed by System Center Virtual Machine Manager (VMM), you can replicate both to a secondary data center and to Microsoft Azure. If the virtual machines are not managed through VMM, replication will be possible only to Microsoft Azure.

  • Replication of VMware virtual machines

Virtual machines on VMware can be replicated both to a secondary data center (using the InMage Scout data channel) and to Microsoft Azure.

  • Replication of physical servers Windows and Linux

Physical servers can be replicated both to a secondary data center (using the InMage Scout data channel) and to Microsoft Azure.

Figure 1 – Replication scenarios of ASR

Azure Site Recovery configuration

The following table lists the documents with the specifications to follow to configure Azure Site Recovery in the different scenarios:

Typology of the systems to be replicated Replication target
VMware virtual machines Microsoft Azure or secondary data center
Hyper-V virtual machines managed in VMM clouds Microsoft Azure or secondary data center
Hyper-V virtual machines managed in VMM clouds, with storage on SAN Secondary data center
Hyper-V virtual machines without VMM Microsoft Azure
On-premises Windows/Linux physical servers Microsoft Azure or secondary data center

 

The main advantages in adopting Azure Site Recovery

After reviewing what you can do with Azure Site Recovery and which steps to follow to implement recovery plans, these are some of the major benefits you can obtain by adopting this solution:

  • Using the tools of Azure Site Recovery simplifies the process of creating business continuity and disaster recovery plans. Recovery plans can include runbooks and scripts present in Azure Automation, so you can shape and customize the DR procedures even for applications with complex architectures.
  • You can have a high degree of flexibility thanks to the potential of the solution, which enables you to orchestrate replicas of physical servers and of virtual machines running on Hyper-V and VMware.
  • With the ability to replicate workloads directly to Azure, in some cases you can completely eliminate a secondary data center maintained just for business continuity and disaster recovery.
  • You have the option to periodically perform failover tests to validate the effectiveness of the recovery plans implemented, without any impact on the production application environment.
  • It is possible to integrate ASR with other BCDR technologies already existing in your company (for example SQL Server AlwaysOn or SAN replication).

 

Types of Failover on Azure Site Recovery

After creating a recovery plan you can perform different types of failover. The following table lists the various types of failover and, for each, specifies its purpose and which action triggers the execution process.

Conclusions

Azure Site Recovery is a powerful and flexible solution for creating business continuity and disaster recovery strategies for your data center, able to orchestrate and manage complex and heterogeneous infrastructures. All this makes ASR an appropriate tool for most environments. Those wishing to explore the Azure Site Recovery features in the field can activate a trial of Operations Management Suite or of Microsoft Azure.

OMS Log Analytics: Collect Custom logs

In some scenarios there may be a need to collect logs from applications that do not use traditional methods, such as the Windows Event Log or Syslog on Linux systems, to write information and any errors. Log Analytics allows us to collect these events from text files, on both Windows and the various supported Linux distributions.


Figure 1 – The custom log collection process

New entries written to the custom log are collected by Log Analytics every 5 minutes. The agent also stores which was the last entry collected, so that even if the agent stops for some time no data is lost: when it runs again, it resumes processing from the point where it left off.
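The checkpointing behavior described above can be sketched in Python (a hypothetical illustration of the idea, not the actual agent code): the collector persists the byte offset of the last entry read, so a restart resumes exactly where it left off.

```python
import os

def collect_new_entries(log_path, checkpoint_path):
    """Read only the lines appended since the last run, then save the new offset."""
    offset = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as cp:
            offset = int(cp.read().strip() or 0)

    with open(log_path) as log:
        log.seek(offset)                # resume from the last checkpoint
        new_entries = log.readlines()   # everything written since then
        offset = log.tell()             # remember where we stopped

    with open(checkpoint_path, "w") as cp:
        cp.write(str(offset))           # persist the checkpoint for the next run
    return new_entries
```

Calling the function again after more lines are appended returns only the new lines, which is why no data is lost across agent restarts.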

In order to collect log files using Log Analytics, the following requirements must be met:

  • The log must have a single entry per line of the file, or each entry must begin with a timestamp matching one of the following formats:
  • YYYY-MM-DD HH:MM:SS
  • M/D/YYYY HH:MM:SS AM/PM
  • Mon DD,YYYY HH:MM:SS
  • yyMMdd HH:mm:ss
  • ddMMyy HH:mm:ss
  • MMM d hh:mm:ss
  • dd/MMM/yyyy:HH:mm:ss zzz
  • The log file must not use circular logging, overwriting old entries.
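As a quick sanity check for your own log files, some of the timestamp formats above can be expressed as Python strptime patterns (an illustrative mapping, not the agent's internal configuration):

```python
from datetime import datetime

# strptime equivalents of a few of the supported formats listed above
SUPPORTED_FORMATS = {
    "YYYY-MM-DD HH:MM:SS":      "%Y-%m-%d %H:%M:%S",
    "M/D/YYYY HH:MM:SS AM/PM":  "%m/%d/%Y %I:%M:%S %p",
    "Mon DD,YYYY HH:MM:SS":     "%b %d,%Y %H:%M:%S",
    "dd/MMM/yyyy:HH:mm:ss zzz": "%d/%b/%Y:%H:%M:%S %z",
}

def matches(timestamp, fmt):
    """Return True if the string parses with the given strptime format."""
    try:
        datetime.strptime(timestamp, fmt)
        return True
    except ValueError:
        return False
```

For example, `matches("2016-11-09 13:05:22", SUPPORTED_FORMATS["YYYY-MM-DD HH:MM:SS"])` confirms that an entry would satisfy the first format.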

Defining a custom log

In order to collect the information of a custom log, you must follow these simple steps.

  1. Open the Custom Log wizard:
    1. Log into OMS
    2. Settings – Data
    3. Custom Logs
    4. Add +

Figure 2 – Custom Log Wizard

By default, all changes made in the Custom Logs section are automatically sent to all OMS agents. For Linux, a configuration file is sent to the Fluentd data collector. If you want to manually edit this file on Linux, you need to remove the flag "Apply below configuration to my Linux machines".

  2. Upload and parse a sample log:

Figure 3 – Upload a sample log file

Select the method to be used to delimit each record of the file. By default, delimiting the file by lines (New Line) is proposed. This method can be used when the log file contains a single entry per line. Alternatively, you can select Timestamp to delimit each record of the log file, provided each record starts with a timestamp in a supported format. If Timestamp is used to delimit the records, the "TimeGenerated" property of each record stored in OMS is populated with the date and time specified in the log file. If the New Line method is used, "TimeGenerated" is populated with the date and time at which Log Analytics collected the entry.
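The difference between the two delimiting methods can be illustrated with a short Python sketch (hypothetical, for illustration only): with New Line every line becomes a record, while with Timestamp a new record starts only at lines beginning with a timestamp, so multi-line entries such as stack traces stay together in one record.

```python
import re

# a record starts at a line beginning with e.g. "2016-11-09 13:05:22"
TS = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def split_records(text):
    """Timestamp method: group lines so a new record starts at each timestamped line."""
    records = []
    for line in text.splitlines():
        if TS.match(line) or not records:
            records.append(line)            # start a new record
        else:
            records[-1] += "\n" + line      # continuation of the previous record
    return records
```

With the New Line method the same text would simply be `text.splitlines()`, one record per line, and the continuation line of a stack trace would become a record of its own.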


Figure 4 – Parsing the log with the New Line method


Figure 4A – Parsing the log with the Timestamp method

  3. Add the log paths to collect:
    1. Select Windows or Linux to specify the format of the paths
    2. Specify the path and add it with the + button
    3. Repeat the process for each path to add

Figure 5 – Paths from which to collect logs

When you insert a path you can also specify a value containing a wildcard in the file name, useful to support applications that create new log files each day or when a certain size is reached.
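On the agent side this behaves much like shell-style globbing. A hypothetical Python equivalent, using made-up daily NGINX-style file names and a wildcard pattern like the one you would enter in the wizard:

```python
import fnmatch

# hypothetical daily log files and a wildcard pattern
files = ["error-20161108.log", "error-20161109.log", "access-20161109.log"]
pattern = "error-*.log"

# only the error logs match the wildcard, whatever date suffix they carry
matched = [f for f in files if fnmatch.fnmatch(f, pattern)]
```

A single configured path with a wildcard therefore keeps collecting new files as the application rotates them, with no configuration changes.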

  4. Assign a name and description to the configured log.

Figure 6 – Name and description of the custom log

By default, the suffix _CL is added to the name.

  5. Validate the configuration.

When Log Analytics has started collecting the custom log (you may have to wait up to 1 hour from activation for the first data to appear), you can consult the entries by accessing Log Search in the OMS Portal. As Type, you must specify the name assigned to the custom log (for example Type=nginx_error_CL).


Figure 7 – Log search

After configuring the collection of the custom log (each entry is saved as RawData), you can parse each record of the log into individual fields using the Custom Fields feature of Log Analytics. This allows us to analyze the records and search them more effectively.
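Custom Fields extraction is configured in the portal, but the underlying idea can be sketched in Python: each RawData record is split into named fields, here with a regular expression over a hypothetical nginx-style error line (the field names are illustrative, not the ones Log Analytics would generate):

```python
import re

# hypothetical pattern for a record like:
# "2016/11/09 13:05:22 [error] 1234#0: open() failed"
PATTERN = re.compile(
    r"^(?P<timestamp>\S+ \S+) \[(?P<level>\w+)\] (?P<message>.*)$"
)

def parse_rawdata(rawdata):
    """Split a RawData record into named fields; unmatched records pass through."""
    m = PATTERN.match(rawdata)
    if not m:
        return {"RawData": rawdata}      # keep unparsed records intact
    return m.groupdict()
```

Once split into fields like `level` and `message`, records can be filtered and aggregated rather than searched as opaque strings, which is exactly what Custom Fields enables in Log Analytics.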

Conclusions

Log Analytics is a powerful and flexible solution which allows us to collect data directly from custom logs, for both Windows and Linux machines, all by following simple and intuitive guided steps. For those who wish to learn more about this and other OMS features, I remind you that you can try OMS for free.