Category Archives: Linux

Azure Backup: the protection of Linux on Azure

Azure Backup is a Microsoft cloud-based data protection solution that, through its various components, allows you to back up your data to a Recovery Services vault in Azure, regardless of where that data resides (on-premises or in the cloud). This article examines the main aspects of protecting Linux virtual machines running in Microsoft Azure using Azure Backup.

When protecting Azure IaaS (Infrastructure as a Service) virtual machines, no backup server is needed: the solution is fully integrated into the Azure fabric. All Linux distributions endorsed to run in Azure are supported, with the exception of CoreOS. Other Linux distributions can also be protected, provided the Azure VM agent can be installed and Python is supported.

How Azure backs up Linux VMs

During the first backup job, a specific extension called VMSnapshotLinux is installed on the Linux system; through it, Azure Backup drives the creation of snapshots during job execution, which are then transferred to the Recovery Services vault.

Figure 1 – Principles of backing up Azure IaaS VM with Azure Backup

For effective data protection you should be able to take consistent backups at the application layer. By default, Azure Backup creates file-system-consistent backups of Linux virtual machines, but it can also be configured to create application-consistent backups. On Windows systems this is achieved using the VSS component, while for Linux VMs a scripting framework is available through which you can run pre- and post-scripts to control the backup execution.

Figure 2 – Application-consistent backups in Linux VM on Azure

Before starting the snapshot creation process on the virtual machine, Azure Backup invokes the pre-script; if it completes successfully, the snapshot is created, after which the post-script runs. The scripts are fully customizable by the user and must be written according to the specific characteristics of the applications present on the virtual machine. For more details please see Microsoft's official documentation.
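As an illustration, a minimal pre-script could look like the sketch below. The quiesce step shown (a filesystem `sync` plus a log entry) is a placeholder assumption, not the framework's real API: a production pre-script would run the application's own freeze command (for example a database flush/lock). Azure Backup treats a non-zero exit code from the pre-script as a failure, which aborts the application-consistent snapshot.

```shell
#!/bin/sh
# Hypothetical pre-script sketch for an application-consistent Linux backup.
# The quiesce logic is a placeholder: replace it with the actual freeze
# command of the application running on the VM.

LOGFILE=$(mktemp)   # placeholder log location for this sketch

quiesce_app() {
    sync                                              # flush filesystem buffers to disk
    echo "$(date '+%F %T') application quiesced" >> "$LOGFILE"
}

if quiesce_app; then
    echo "pre-script completed"     # a zero exit code lets the snapshot proceed
else
    echo "pre-script failed" >&2    # a non-zero exit aborts the app-consistent snapshot
    false
fi
```

A matching post-script would then release the freeze (for example, unlock the database) after the snapshot completes.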

How to enable the backup of Linux virtual machines running on Azure

Recently, the Azure portal introduced the ability to enable the protection of virtual machines directly at creation time:

Figure 3 - Enabling backup when creating the VM

Alternatively, you can enable protection after creating the virtual machine, either by selecting it from the Recovery Services vault or by opening the VM blade and going to the Operations > Backup section. From the same panel you can view the status of backups.

File Recovery of Linux virtual machines on Azure

Besides restoring an entire virtual machine, Azure Backup also allows, for Linux systems, restoring individual files using the File Recovery feature. To do so, follow the steps below.

From the Azure portal, select the virtual machine whose files you need to restore and, in the Backup section, start the File Recovery task:

Figure 4 - Starting the process of File Recovery

At this point a panel appears where you must select the recovery point to use for the restore operation. Then press the Download Script button, which generates a script with the .sh extension, along with a password; the script is used to mount the recovery point as a local disk on the system.

Figure 5 – Recovery Point selection and script download

The script must be copied to the Linux machine; to do that you can use WinSCP:

Figure 6 – Copy of the script on the Linux machine

Access the Linux system from a terminal, assign execute permission to the copied script using the command chmod +x, and then run it:

Figure 7 – Script for File Recovery

When executed, the script asks for the password shown in the Azure portal, then connects to the recovery point over an iSCSI channel and mounts it as a file system.

Now you can access the mount point path, which exposes the selected recovery point, and browse or restore the necessary files:

Figure 8 – Access to the path of the mount point

After completing the restore, it is advisable to unmount the disks using the appropriate button in the Azure portal (in any case, the connection to the mount point is forcibly closed after 12 hours) and to run the script with the -clean parameter to remove the recovery point path from the machine.

Figure 9 – Unmount disks and removing mount points from the machine

If the VM whose files you want to restore contains LVM partitions or RAID arrays, you must perform the same procedure, but on a different Linux machine, to avoid disk conflicts.

Conclusions

Azure Backup is a solution fully integrated into the Azure fabric that allows you to protect Linux virtual machines on Azure easily and very effectively. All this happens without the need to deploy a complex data protection infrastructure. Azure Backup also helps protect systems at large scale and maintain centralized control of the data protection architecture.

OMS and System Center: What's New in January 2018

The new year has begun with several announcements from Microsoft regarding what's new in Operations Management Suite (OMS) and System Center. This article summarizes them briefly, with the references needed to learn more.

Operations Management Suite (OMS)

Log Analytics

The release of the IT Service Management Connector (ITSMC) for Azure provides bi-directional integration between Azure monitoring tools and ITSM solutions such as ServiceNow, Provance, Cherwell, and System Center Service Manager. With this integration you can:

  • Create or update work items (event, alert, incident) in ITSM solutions based on alerts raised in Azure (Activity Log alerts, near-real-time metric alerts and Log Analytics alerts).
  • Consolidate Incident and Change Request data in Azure Log Analytics.

To configure this integration, consult Microsoft's official documentation.

Figure 1 – ITSM Connector dashboard of the Log Analytics solution

Agent

This month the new version of the OMS agent for Linux fixes important bugs and introduces an updated version of the SCX and OMI components. Given the large number of bug fixes included in this release, the advice is to consider adopting this upgrade. To obtain the updated version of the OMS agent, see the official GitHub page OMS Agent for Linux Patch v1.4.3-174.

Figure 2 – Bug fixes and what's new for the OMS agent for Linux

Azure Backup

The Azure portal now offers the ability to enable protection via Azure Backup during the process of creating a virtual machine:

Figure 3 – Enabling backup while creating a VM

This capability considerably improves the VM creation experience in the Azure portal.

Azure Site Recovery

Azure Site Recovery lets you handle different scenarios when implementing disaster recovery plans, including the replication of VMware virtual machines to Azure. In this context the following important changes have been introduced:

  • Release of a template in Open Virtualization Format (OVF) to deploy the Configuration Server. This allows you to deploy, in your virtualization infrastructure, a system with all the necessary software already preinstalled (with the exception of MySQL Server 5.7.20 and VMware PowerCLI 6.0), speeding up the deployment and the registration of the Configuration Server to the Recovery Services vault.
  • A web portal was introduced in the Configuration Server to drive the main configuration actions, such as proxy server settings, the details and credentials to access the vCenter server, and the management of the credentials used to install or update the Mobility Service on the virtual machines involved in replication.
  • Improved experience for deploying the Mobility Service on virtual machines. Starting with version 9.13.xxxx.x of the Configuration Server, VMware Tools are used to install and update the Mobility Service on all protected VMware virtual machines. This means you no longer need to open firewall ports for WMI and for the File and Printer Sharing services on Windows systems, which were previously used to perform the push installation of the Mobility Service.

The monitoring features natively included in Azure Site Recovery have been greatly enriched to give complete and immediate visibility. The Overview panel of the Recovery Services vault is now structured, for the Site Recovery section, as follows:

Figure 4 – Azure Site Recovery dashboard

These are the various sections, which refresh automatically every 10 minutes:

  1. Switch between Azure Backup and Azure Site Recovery dashboards
  2. Replicated Items
  3. Failover test success
  4. Configuration issues
  5. Error Summary
  6. Infrastructure view
  7. Recovery Plans
  8. Jobs

For more details on the various sections, see the official documentation or watch this short video.

Known Issues

Please note the following possible problem when backing up Linux VMs on Azure. The error code returned is UserErrorGuestAgentStatusUnavailable; you can follow this workaround to resolve the error condition.

System Center

System Center Configuration Manager

Version 1801 of the Technical Preview branch of System Center Configuration Manager has been released: Update 1801 for Configuration Manager Technical Preview Branch.

Among the new features in this release:

  • Ability to import and run signed scripts and monitor the execution result.
  • The distribution point can be moved between different primary sites and from a secondary site to a primary site.
  • Improvements in the client settings for the Software Center, with the ability to preview them before deployment.
  • New settings for Windows Defender Application Guard (starting with Windows 10 version 1709).
  • Ability to view a dashboard with information about the co-management.
  • Phased Deployments.
  • Support for hardware inventory strings longer than 255 characters.
  • Improvements in the scheduling of Automatic Deployment Rules.

Please note that releases in the Technical Preview branch help you evaluate the new features of SCCM; it is recommended to apply these updates only in test environments.

In addition, for System Center Configuration Manager current branch version 1710, an update rollup containing a large number of bug fixes was issued.

Evaluation of OMS and System Center

Remember that to test and evaluate Operations Management Suite (OMS) for free, you can access this page and select the mode most appropriate for your needs.

To test the various components of System Center 2016 you can access the Evaluation Center and, after registration, start the trial period.

Integration between Service Map and System Center Operations Manager

Service Map is a solution you can enable in Operations Management Suite (OMS) that automatically discovers application components, on both Windows and Linux systems, and creates a map showing, in near real time, the communications between the various services. All this lets you view your servers as interconnected systems delivering services.

In System Center Operations Manager (SCOM) you can define a Distributed Application to provide an overall view of the health status of an application consisting of different objects. The Distributed Application does not add monitoring functionality; it merely relates the states of objects already monitored by the system in order to provide the overall health status of the application.

Through the integration between Service Map and System Center Operations Manager, you can automatically create in SCOM the diagrams representing Distributed Applications, based on the dependencies detected by the Service Map solution.

This article examines the procedure to activate this integration, covering its main features.

Prerequisites

This kind of integration is possible if the following requirements are met:

  • A System Center Operations Manager 2012 R2 or later environment.
  • An OMS workspace with the Service Map solution enabled.
  • The presence of a Service Principal with access to the Azure subscription that contains the OMS workspace.
  • Servers managed by Operations Manager that also send data to Service Map.

Both Windows and Linux systems are supported, but with one important distinction.

For Windows systems you can evaluate the integration scenario between SCOM and OMS, as described in the article Integration between System Center Operations Manager and OMS Log Analytics, and simply add the Service Map Dependency Agent on the various servers.

For Linux systems you cannot collect data directly in Log Analytics from agents managed by Operations Manager. The presence of both the SCOM agent and the OMS agent is therefore always required. At the moment, in a Linux environment, the two agents share some binaries, but they are distinct agents that can coexist on the same machine, as long as the SCOM agent is at least version 2012 R2. Installing the OMS agent on a Linux system managed by Operations Manager updates the OMI and SCX components. We recommend always installing the SCOM agent first and then the OMS agent; otherwise you need to edit the OMI configuration file (/etc/opt/omi/conf/omiserver.conf), adding the parameter httpsport=1270. After the edit you must restart the OMI server component using the following command: sudo /opt/omi/bin/service_control restart.
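For reference, the edit amounts to the following fragment of /etc/opt/omi/conf/omiserver.conf (the rest of the file stays as your installation shipped it):

```
# /etc/opt/omi/conf/omiserver.conf -- fragment
# Keep OMI listening on the SCOM agent port (1270) in addition to its own:
httpsport=1270
```

After saving the file, restart OMI with sudo /opt/omi/bin/service_control restart for the change to take effect.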

The process for activating the integration

The first step is to import, using the System Center Operations Manager console, the following management packs (now in public preview), contained in the bundle you can download at this link:

  • Microsoft Service Map Application Views.
  • Microsoft System Center Service Map Internal.
  • Microsoft System Center Service Map Override.
  • Microsoft System Center Service Map.

Figure 1 – Start importing the Management Pack

Figure 2 – Install the Management Pack for the integration with Service Map

After completing the installation of the management packs, the new Service Map node appears in the Administration workspace, within the Operations Management Suite section. From this node you can start the integration configuration wizard:

Figure 3 – Configuration of the OMS workspace where there is the Service Map solution

At the moment you can configure the integration with a single OMS workspace.

The wizard prompts you to specify a Service Principal with read access to the Azure subscription that contains the OMS workspace with the Service Map solution enabled. To create the Service Principal you can follow the procedure in Microsoft's official documentation.

Figure 4 – OMS workspace connection parameters

Based on the permissions assigned to the Service Principal, the wizard shows the Azure subscriptions and their associated OMS workspaces:

Figure 5 - Selection of the Azure subscription, OMS Resource Group and OMS workspace

At this point you are prompted to select which Service Map machine groups you want to synchronize into Operations Manager:

Figure 6 – Selection of the Service Map Machine Group to synchronize in SCOM

On the next screen you are prompted to select which SCOM-managed servers to synchronize with information retrieved from Service Map.

Figure 7 – Selection of items of SCOM

In this regard, for the integration to create the Distributed Application diagram for a server, that server must be managed by SCOM, be managed by Service Map, and belong to the previously selected Service Map group.

You are then prompted to select an optional Management Server Resource Pool for communication with OMS and, if necessary, a proxy server:

Figure 8 - Optional configuration of a Management Server Resource Pool and a proxy server

The registration takes a few seconds, after which the following screen appears and Operations Manager performs the first synchronization with Service Map, taking data from the OMS workspace.

Figure 9 – Addition of the OMS workspace successfully completed

Service Map data is synchronized by default every 60 minutes, but you can change this frequency with an override on the rule named Microsoft.SystemCenter.ServiceMapImport.Rule.

Result of the integration between Service Map and SCOM

The result of this integration is visible from the Monitoring view of the Operations Manager console. A new Service Map folder is created, containing:

  • Active Alerts: any active alerts concerning communication between SCOM and Service Map.
  • Servers: the list of monitored servers whose information is synchronized from Service Map.

Figure 10 - Servers with synchronized information from Service Map

  • Machine Group Dependency Views: displays a Distributed Application for each Service Map group selected for synchronization.

Figure 11 – Machine Group Dependency View

  • Server Dependency Views: shows a Distributed Application for each server that synchronizes information from Service Map.

Figure 12 – Server Dependency View

 

Conclusions

Many organizations that are going to adopt, or have already implemented, the Service Map solution also have an on-premises environment with System Center Operations Manager (SCOM). This integration enriches the information in SCOM, giving you full visibility of applications and of the dependencies among the various systems. It is an example of how the power of OMS can be used even alongside SCOM, without giving up the investments already made in that tool, such as its possible integration with IT service management (ITSM) solutions.

How to create a Docker environment in Azure using VM Extension

Docker is a software platform that allows you to create, manage and run applications isolated in containers. A container is nothing more than a way of packaging software in a format that allows it to run independently and in isolation on a shared operating system. Unlike virtual machines, containers do not include a complete operating system, but only the libraries and settings needed to run the software. This brings a series of advantages in terms of size, speed, portability and resource management.

Figure 1 – Diagram of containers

 

In the Microsoft Azure world there are different options for configuring and using Docker containers, which I list briefly:

  • VM Extension: through a specific Extension you can implement Docker inside a virtual machine.
  • Azure Container Service: quickly deploys a production-ready Docker Swarm, DC/OS or Kubernetes cluster in Azure. This is the most complete solution for container orchestration.
  • Docker EE for Azure: a template available in the Azure Marketplace, born from a collaboration between Microsoft and Docker, which provisions a Docker Enterprise Edition cluster integrated with Azure VM Scale Sets, Azure Load Balancers and Azure Storage.
  • RancherOS: a Linux distribution designed to run Docker containers, available as a template in the Azure Marketplace.
  • Web App for Containers: lets you use containers by deploying them to the managed Azure App Service platform, as a Web App running in a Linux environment.
  • Azure Container Instances (currently in preview): definitely the easiest and quickest way to run a Docker container on the Azure platform, without the need to create virtual machines; ideal in scenarios that use containers as simple building blocks.
  • Azure Service Fabric: supports containers on both Windows and Linux. The platform natively includes support for Docker Compose (currently in preview), allowing you to orchestrate container-based applications in Azure Service Fabric.
  • DC/OS on Azure: a managed cloud service that provides an environment for deploying clustered workloads using the DC/OS (Datacenter Operating System) platform.

All these options let you choose, according to your needs and the scenario you must implement, the most appropriate methodology for running Docker containers in Azure.

In this article we will create a Docker environment inside a virtual machine using the Docker VM Extension. Starting from a virtual machine in Azure, you can add the Docker Extension, which installs and configures the Docker daemon, the Docker client and Docker Compose.

This extension is supported for the following Linux distributions:

  • Ubuntu 13 or higher.
  • CentOS 7.1 or higher.
  • Red Hat Enterprise Linux (RHEL) 7.1 or higher.
  • CoreOS 899 or higher.

You can add the extension from the Azure portal via the following steps. Select the virtual machine and, in the Extensions section, press the Add button:

Figure 2 – Adding Extensions to the VM from the Azure Portal

 

The list of available extensions is then shown; select the Docker Extension and press the Create button.

Figure 3 – Selection of Extension Docker

 

To enable secure communication with the Docker environment implemented in Azure, you should use certificates and keys issued by a trusted CA. If you do not have a CA available to generate these certificates, you can follow the instructions in the section Create a CA, server and client keys with OpenSSL in the official Docker documentation.
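The essential steps can be sketched as follows. This is a simplified, hedged version of the procedure in the Docker documentation: the CA and host names are placeholder assumptions, and the official guide additionally sets subjectAltName and extendedKeyUsage extensions and generates a client certificate as well.

```shell
# Sketch: create a private CA and a server certificate for the Docker host.
# Names (example-ca, docker-host.example.com) are placeholders.
set -e
DIR=$(mktemp -d)
cd "$DIR"

# 1. Private CA: key plus self-signed certificate.
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem \
    -subj "/CN=example-ca" -out ca.pem

# 2. Server key and certificate signing request for the Docker host.
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem \
    -subj "/CN=docker-host.example.com" -out server.csr

# 3. Sign the server certificate with the CA.
openssl x509 -req -days 365 -in server.csr \
    -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem

# Sanity check: the server certificate must verify against the CA.
openssl verify -CAfile ca.pem server-cert.pem
```

The extension wizard then expects ca.pem, server-cert.pem and server-key.pem, encoded in base64.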

 

Figure 4 – Docker communication scheme over the encrypted TLS protocol

 

The extension wizard first asks for the communication port of the Docker Engine (2376 is the default). The CA certificate, the server certificate and the server key are also requested, in base64-encoded format:

Figure 5 – Parameters required by the wizard to add the Docker VM Extension

 

Adding the Docker Extension takes several minutes, at the end of which the virtual machine will have the latest stable version of Docker Engine installed, and the Docker daemon will listen on the specified port using the certificates entered in the wizard.

Figure 6 – Details of the Extension Docker

 

If you need to allow Docker communication from outside the vNet where the Docker VM resides, you must configure appropriate rules in the Network Security Group used:
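As an illustration, an inbound rule opening port 2376 could look like the following ARM-template-style fragment of an NSG securityRules entry. The rule name, priority and open source prefix are placeholder assumptions to adapt (and preferably restrict) to your environment:

```json
{
  "name": "Allow-Docker-TLS",
  "properties": {
    "priority": 310,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "*",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRange": "2376"
  }
}
```

In production you would normally replace the sourceAddressPrefix wildcard with the address range of the clients allowed to reach the Docker daemon.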

Figure 7 – Example NSG configuration to allow Docker communication (port 2376)

 

At this point the Docker environment is ready to be used, and from a remote client you can start communicating with it:

Figure 8 – Docker command run from a remote client to retrieve information

 

Conclusions

The Azure Docker VM extension is ideal to easily implement, in a reliable and secure way, a dev or production Docker environment on a single virtual machine. Microsoft Azure offers a wide range of implementation choices related to the Docker platform, with great flexibility in selecting the most appropriate solution for your needs.

OMS Log Analytics: the Update Management solution for Linux systems

Using the Update Management solution of Operations Management Suite (OMS), you can centrally manage and control the update status of systems in heterogeneous environments, both Windows and Linux, regardless of their placement, whether on-premises or in the cloud. In this article we explore the aspects of the solution concerning Linux systems.

The Update Management solution allows you to quickly assess the status of available updates on all servers with the OMS agent installed and can start the process of installing the missing updates. Linux systems configured to use this solution require, in addition to the OMS agent, PowerShell Desired State Configuration (DSC) for Linux and the Hybrid Runbook Worker of Azure Automation (installed automatically).

The solution currently supports the following Linux distributions:

  • CentOS 6 (x86/x64) and CentOS 7 (x64).
  • Red Hat Enterprise Linux 6 (x86/x64) and Red Hat Enterprise Linux 7 (x64).
  • SUSE Linux Enterprise Server 11 (x86/x64) and SUSE Linux Enterprise Server 12 (x64).
  • Ubuntu 12.04 LTS and later (x86/x64).

In addition, to work correctly the Linux system needs access to an update repository. In this regard it is worth noting that at the moment OMS does not let you select which updates to apply: all available updates are taken from the update repository configured on the machine. To have more control over which updates are applied, you may evaluate the use of a custom update repository, created so that it contains only the updates you want to approve.

The following diagram shows the flow of operations carried out by the solution to report compliance status to the OMS workspace and to apply the missing updates:

Figure 1 – Flow of operations performed on Linux systems

  1. The OMS agent for Linux scans every 3 hours to detect missing updates and reports the outcome of the scan to the OMS workspace.

Figure 2 – OMS Dashboard Update Management solution

  2. Using the OMS dashboard, the operator can review the update assessment and define the schedule for the deployment of updates:

Figure 3 – Management of Update Deployment

Figure 4 – OMS Dashboard Update Management solution

When creating an Update Deployment you define a name, the list of systems to be involved (which can be provided explicitly or through a Log Analytics query) and the schedule.

  3. The Hybrid Runbook Worker component running on the Linux systems checks the maintenance windows and the availability of any deployment to apply. In this regard, note that when the Update Management solution is enabled, every Linux system connected to the OMS workspace is automatically configured as a Hybrid Runbook Worker in order to run the runbooks created to deploy updates. Each system managed by the solution also appears as a Hybrid Runbook Worker group within the OMS Automation account, following the naming convention Hostname_GUID:

Figure 5 – Hybrid Worker Groups

  4. If a machine has an Update Deployment (as a direct member or because it belongs to a specific computer group), the package manager (Yum, Apt or Zypper) is started on it to install the updates. The installation of updates is driven by OMS through specific Automation runbooks inside Azure. These runbooks are not visible in Azure Automation and require no configuration by the administrator.

Figure 6 – Azure Automation Account used by the solution of Update Management

  5. Finally, the OMS agent for Linux reports the outcome of the Update Deployment and the updated compliance status to the OMS workspace.

Conclusions

Microsoft Operations Management Suite is a tool that lets you manage and monitor heterogeneous environments. Even today, unfortunately, there is still debate about the real need to keep Linux systems regularly updated, but considering some recent security incidents caused by outdated systems, it is evident that a solution for managing updates on Linux machines is valuable. The Update Management solution of OMS is constantly evolving, but it already lets you control and manage the distribution of updates on Linux systems in a simple and efficient way.

For more details, I invite you to consult Microsoft's official documentation on the Update Management solution of OMS.

To further explore this and other features you can activate OMS for free.

 

OMS Log Analytics: Collect Custom logs

In some scenarios you may need to collect logs from applications that do not use traditional methods, such as the Windows Event Log or Syslog on Linux, to record information and errors. Log Analytics allows us to collect these events from text files, on both Windows and the various supported Linux distributions.


Figure 1 – The collection process custom log

New entries written to the custom log are collected by Log Analytics every 5 minutes. The agent also keeps track of the last entry collected, so that even if the agent stops for some time no data is lost: when it starts running again, it resumes processing from the point where it left off.

In order for Log Analytics to collect the log files, the following requirements must be met:

  • The log must have a single entry per line, or each entry must begin with a timestamp in one of the following formats:
  • YYYY-MM-DD HH:MM:SS
  • M/D/YYYY HH:MM:SS AM/PM
  • Mon DD,YYYY HH:MM:SS
  • yyMMdd HH:mm:ss
  • ddMMyy HH:mm:ss
  • MMM d hh:mm:ss
  • dd/MMM/yyyy:HH:mm:ss zzz
  • The log file must not be configured to be overwritten with circular updates.
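To make the requirements concrete, the sketch below writes a small hypothetical log file in one of the supported formats (YYYY-MM-DD HH:MM:SS, one entry per line; the nginx-style messages are invented) and checks the layout locally with a regular expression before pointing Log Analytics at the file:

```shell
# Hypothetical custom log sample -- two entries, each on its own line,
# each starting with a timestamp in the supported YYYY-MM-DD HH:MM:SS format.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2016-11-09 14:30:00 nginx: upstream timed out
2016-11-09 14:31:12 nginx: connection refused
EOF

# Quick local sanity check: count the lines that start with a supported
# timestamp; both entries should match.
grep -Ec '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} ' "$LOG"   # prints 2
```

A file laid out this way can be delimited either by New Line or by Timestamp in the wizard described below.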

Defining a custom log

To collect the information of a custom log you must follow these simple steps.

  1. Open the Custom Log wizard:
    1. Log into OMS
    2. Settings – Data
    3. Custom Logs
    4. Add +

Figure 2 – Custom Log Wizard

By default, all changes made in the Custom Logs section are automatically sent to all OMS agents. For Linux, a configuration file is sent to the Fluentd data collector. If you want to edit this file manually on Linux, you need to remove the flag "Apply below configuration to my Linux machines".

  2. Upload and parse a sample log:

Figure 3 – Upload a sample log file

Select the method used to delimit each record of the file. By default, New Line is proposed, which delimits the file by rows; this method can be used when the log contains a single entry per line. Alternatively, you can select Timestamp to delimit each record, if every record starts with a timestamp in a supported format. If Timestamp is used, the TimeGenerated property of each record stored in OMS is populated with the date and time specified in the log file. If the New Line method is used instead, TimeGenerated is set to the date and time at which Log Analytics collected the entry.


Figure 4 – Parsing of the log with New Line method


Figure 4A – Parsing the log with the Timestamp method

  3. Add the log paths to collect:
    1. Select Windows or Linux to specify the path format
    2. Specify the path and add it with the + button
    3. Repeat the process for each path to add

Figure 5 – Paths from which logs are collected

When you enter a path you can also use a wildcard in the file name, which is useful for applications that create new log files each day or once a certain size is reached.

  4. Assign a name and a description to the configured log.

Figure 6 – Name and description of the custom log

The suffix _CL is added by default.

  5. Validate the configuration.

When Log Analytics begins collecting the custom log (you may have to wait up to one hour from activation for the first data), you can consult it from the OMS portal using Log Search. As Type, you must specify the name assigned to the custom log (for example Type=nginx_error_CL).


Figure 7 – Log search

After configuring the collection of the custom log (each entry is saved as RawData), you can parse each record of the log into individual fields using the Custom Fields feature of Log Analytics. This allows us to analyze and search the data more effectively.

Conclusions

Log Analytics is a powerful and flexible solution that allows us to collect data directly from custom logs, for both Windows and Linux machines, all through simple and intuitive guided steps. For those who wish to learn more about this and other OMS features, remember that you can try OMS for free.

OMS Log Analytics: Linux systems management

In an increasingly heterogeneous and constantly evolving IT world, it is essential to have a tool able to manage hybrid IT architectures, distributed between on-premises systems and public cloud providers. Operations Management Suite is the Microsoft solution, delivered directly from the cloud, able to manage not only Windows systems but also the major Linux distributions. Continue reading