Author Archives: Francesco Molfese

About Francesco Molfese

Francesco is a consultant, trainer, technical writer and Microsoft MVP focusing on public cloud, hybrid cloud, virtualization and datacenter management. Francesco has over 10 years of experience in architecting, implementing and managing IT solutions, and he is currently employed as a Senior Consultant at Progel Spa, an IT consulting company and Microsoft Certified Partner. Francesco is the Community Lead of the Italian User Group of System Center and Operations Management Suite, and he is a frequent speaker at leading IT Pro conferences in Italy.

Azure Backup: the protection of SQL Server in Azure Virtual Machines

Azure Backup now includes an important new feature that lets you natively protect SQL Server workloads running in IaaS virtual machines in Azure. In this article we will explore the benefits and characteristics of this new feature.

Figure 1 – Protection of SQL Server in Azure VMs with Azure Backup

Azure Backup has always taken a cloud-first approach, allowing you to protect your systems quickly, safely and effectively. SQL Server protection in Azure IaaS virtual machines is a solution unique of its kind, characterized by the following elements:

  • Zero-backup infrastructure: you do not need to maintain a classic backup infrastructure, composed of a backup server, agents installed on the various systems, and storage hosting the backups. Nor are backup scripts required, which other backup solutions often need to protect SQL Server.
  • Backup monitoring through the Recovery Services vault: using the dashboard, you can easily and intuitively monitor the backup jobs for all types of protected workloads: Azure IaaS VMs, Azure Files and SQL Server databases. You can also set up email notifications for failed backup or restore operations.
  • Centralized management: you can configure a common protection policy, usable for databases residing on separate servers, that defines the schedule and the retention for short-term and long-term backups.
  • Restore a DB to a precise date and time: an intuitive graphical interface lets the operator restore to the most appropriate recovery point for the selected date. Azure Backup takes care of restoring the chain of full, differential and log backups needed to bring the database to the selected point in time.
  • Recovery Point Objective (RPO) of 15 minutes: you can back up the transaction log every 15 minutes.
  • Pay-as-you-go (PAYG) service: billing takes place monthly on the basis of consumption, and there are no upfront costs for the service.
  • Native integration with SQL Server APIs: Azure Backup invokes the native APIs of the solution to ensure high efficiency and reliability of the operations performed. Backup jobs can be viewed using SQL Server Management Studio (SSMS).
  • Support for Always On Availability Groups: the solution can back up databases that reside within an Availability Group (AG), ensuring protection in case of failover events and honoring the backup preference set at the AG level.
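To illustrate the idea behind point-in-time restore, the following sketch (purely illustrative, not part of any Azure Backup API) shows how a restore chain could be assembled from full, differential and log recovery points, in the spirit of what the service does for you:

```python
from datetime import datetime

def build_restore_chain(points, target):
    """Pick the backups needed to restore a DB to 'target'.

    points: list of (timestamp, kind) tuples, kind in {"full", "diff", "log"}.
    Returns the ordered chain: the latest full backup before the target,
    the latest differential after that full (if any), then every log
    backup up to the target time.
    """
    full = max(t for t, k in points if k == "full" and t <= target)
    diffs = [t for t, k in points if k == "diff" and full < t <= target]
    base = max(diffs) if diffs else full
    logs = sorted(t for t, k in points if k == "log" and base < t <= target)
    chain = [(full, "full")]
    if diffs:
        chain.append((base, "diff"))
    chain += [(t, "log") for t in logs]
    return chain

# Example: daily full, diff at noon, logs every 15 minutes (the 15-minute RPO)
pts = [
    (datetime(2018, 6, 1, 0, 0), "full"),
    (datetime(2018, 6, 1, 12, 0), "diff"),
    (datetime(2018, 6, 1, 12, 15), "log"),
    (datetime(2018, 6, 1, 12, 30), "log"),
    (datetime(2018, 6, 1, 12, 45), "log"),
]
chain = build_restore_chain(pts, datetime(2018, 6, 1, 12, 40))
# Restores the full, then the diff, then the 12:15 and 12:30 log backups
```

The point of the sketch is that the operator only picks the target time; the selection of the right full/differential/log sequence is handled by the service.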

This new feature supports the following operating system and SQL Server versions, regardless of whether the VMs were created from a Marketplace image or SQL Server was installed manually.

Supported operating systems

  • Windows Server 2012
  • Windows Server 2012 R2
  • Windows Server 2016

Linux is not currently supported.

Supported SQL Server versions and editions

  • SQL 2012 Enterprise, Standard, Web, Developer, Express
  • SQL 2014 Enterprise, Standard, Web, Developer, Express
  • SQL 2016 Enterprise, Standard, Web, Developer, Express
  • SQL 2017 Enterprise, Standard, Web, Developer, Express

To take advantage of this feature, the following requirements must be met:

  • A Recovery Services vault must exist in the same region as the Azure virtual machine hosting the SQL databases to be protected.
  • The virtual machine running SQL Server needs connectivity to Azure public IP addresses.
  • Specific settings must be present on the virtual machine that holds the databases to be protected. Azure Backup requires the VM extension AzureBackupWindowsWorkload, which is installed in the virtual machine during the discovery process and enables communication with Azure Backup. Installing the extension causes Azure Backup to create, inside the VM, the Windows virtual service account NT Service\AzureWLBackupPluginSvc. This virtual service account needs login and sysadmin permissions on the SQL side in order to protect your databases.

To enable backup of SQL workloads in Azure virtual machines, you must first carry out a discovery process; you can then configure protection.

Discovery process

This section shows the procedure to follow in the Azure portal to enable database discovery:

Figure 2 – Initiation of the discovery process

Figure 3 – Discovery in progress

Figure 4 – Discovery of DBs on selected systems


Configuring SQL backup

After the database discovery phase, you can proceed with configuring SQL Server backup.

Figure 5 – Starting the backup configuration after discovering the DBs inside the VMs

Figure 6 – Selection of DBs to be protected

Figure 7 – Creation of the policy that defines the type of SQL backup and data retention

Figure 8 – Enabling backup


Backup monitoring and restore process

Figure 9 – Dashboard of the Recovery Service vault

Figure 10 - Number of backup items of SQL in Azure VMs

Figure 11 – SQL backup status

By selecting a single DB you can start the restore process.

Figure 12 - Starting the restore process of the DB

Figure 13 – Selecting the destination to restore the DB to

Figure 14 – Selecting the restore point to use

Figure 15 – Restore settings and the directories in which to place the files

Figure 16 – Starting the restore job


The Cost of the Solution

The cost of protecting SQL Server with Azure Backup is calculated on the number of protected instances (individual Azure VMs or Availability Groups). The cost of a single protected instance depends on its size, determined by the overall size of the protected DBs (before compression and encryption). To this you must add the cost of the Azure storage actually consumed: Block Blob Storage, with either locally redundant storage (LRS) or geo-redundant storage (GRS). For more details on costs, please visit Microsoft's official pricing page.
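As a back-of-the-envelope illustration of this billing model, the sketch below assumes (for illustration only; the increment size and rates are hypothetical, check Microsoft's pricing page for the real figures) that the protected-instance fee is billed per 500 GB increment of total protected DB size, plus the storage actually consumed:

```python
import math

def monthly_backup_cost(db_sizes_gb, rate_per_500gb, storage_gb, storage_rate_gb):
    """Illustrative estimate of the monthly Azure Backup cost for one
    protected instance (a VM or an Availability Group).

    Assumes, for illustration only, that the protected-instance fee is
    billed per 500 GB increment of the total protected DB size; actual
    rates and increments are on Microsoft's pricing page.
    """
    total_gb = sum(db_sizes_gb)                      # size before compression/encryption
    increments = max(1, math.ceil(total_gb / 500))   # at least one billable unit
    instance_fee = increments * rate_per_500gb
    storage_fee = storage_gb * storage_rate_gb       # LRS or GRS block blob storage
    return instance_fee + storage_fee

# Example: three DBs totalling 1.2 TB, with hypothetical rates
cost = monthly_backup_cost([400, 500, 300], rate_per_500gb=30.0,
                           storage_gb=900, storage_rate_gb=0.03)
# 1200 GB -> 3 increments -> 3 * 30 + 900 * 0.03 = 117.0
```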



With this important new feature, Azure Backup confirms itself as a great enterprise solution for protecting systems wherever they are. With this feature Azure stands apart from any other public cloud, providing a solution for protecting SQL Server in IaaS virtual machines that is totally integrated into the platform. For more information on the Azure Backup solution you can consult the official documentation.

Windows Server Summit: the online conference about the future of Windows Server

Microsoft has announced the Windows Server Summit, a virtual event exploring various topics related to Windows Server, with a particular focus on the new features to be introduced in the coming months. The event will provide a useful platform to explore themes related to Microsoft's new operating system, and you will be able to ask questions and receive answers interactively.

The launch of Windows Server 2019, planned for this year, reinforces the importance of Windows Server within Microsoft's strategy, which is strongly oriented toward hybrid architectures.

When and where

The half-day event will take place on 26 June 2018 at 9:00 AM Pacific Time (6:00 PM in Italy). It is a virtual event, accessible exclusively online; the sessions will also be available on demand afterwards.

Topics covered

The agenda is rich in content, accompanied by several demos that contextualize the topics covered. Four tracks will be available, covering:

  • Hybrid
  • Security
  • Hyper-converged Infrastructure (HCI)
  • Application Platform

Figure 1 – Tracks of the event


The sessions will be given by prominent speakers, among whom we find:

Figure 2 – Main speakers at Windows Server Summit


This event will be very helpful in guiding and educating IT professionals around the world on the opportunities to evolve datacenter services and infrastructure through integration with cloud services. To prepare for the event you can join the Windows Insider program and start evaluating the preview of Windows Server 2019. It will also be useful to become familiar with Windows Admin Center, which allows you to manage your infrastructure from a central location through an innovative HTML5-based web console.

Everything you need to know about the new Azure Load Balancer

Microsoft recently announced the availability of the Standard Load Balancer in Azure. These are Layer-4 load balancers, for the TCP and UDP protocols, that introduce improvements over the Basic Load Balancer and give you more granular control of certain features. This article describes the main features of the Azure Standard Load Balancer, giving you the elements needed to choose the most suitable type of balancer for your needs.

Any scenario that can use the Basic SKU of the Azure Load Balancer can also be satisfied using the Standard SKU, but the two types of load balancer differ significantly in scalability, functionality, guaranteed service levels and cost.


Standard Load Balancers offer higher scalability than the Basic Load Balancer in terms of the maximum number of instances (IP configurations) that can be placed in the backend pool. The Basic SKU allows up to 100 instances, while with the Standard SKU the maximum number of instances is 1000.


Backend pool

With the Basic Load Balancer, the backend pool can contain exclusively:

  • Virtual machines located within an availability set.
  • A single standalone VM.
  • A Virtual Machine Scale Set.

Figure 1 – Possible associations in the Basic Load Balancer backend pool

With the Standard Load Balancer, instead, any virtual machine attached to a given virtual network can be placed in the backend pool. The integration scope in this case is not the availability set, as for the Basic Load Balancer, but the virtual network and all its associated concepts. One requirement to keep in mind when placing virtual machines in the backend pool of a Standard Load Balancer: they must either have no public IP associated, or have a public IP with the Standard SKU.

Figure 2 – Standard Load Balancer backend pool association

Availability Zones

Standard Load Balancers provide integration scenarios with Availability Zones, in the regions where this feature is available. For more details you can refer to this specific Microsoft document, which covers the main concepts and implementation guidelines.

High availability ports

Internal load balancers with the Standard SKU can balance TCP and UDP flows on all ports simultaneously. To do this, you can enable the "HA Ports" option in the load-balancing rule:

Figure 3 - Configuring the load balancing rule with "HA Ports" option enabled

Balancing is done per flow, where a flow is identified by five elements: source IP address, source port, destination IP address, destination port, and protocol. This is particularly useful in scenarios that use Network Virtual Appliances (NVAs) requiring scalability, and it simplifies the tasks required for NVA deployments.
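The essential property of per-flow balancing is that the same 5-tuple always maps to the same backend. The following sketch mimics that idea with an ordinary hash (it is not Azure's actual hashing algorithm, and the backend names are invented):

```python
import hashlib

def pick_backend(flow, backends):
    """Illustrative per-flow distribution: hash the 5-tuple identifying a
    flow and map it onto the backend pool.  The same flow therefore
    always lands on the same backend instance.

    flow: (src_ip, src_port, dst_ip, dst_port, protocol)
    """
    digest = hashlib.sha256("|".join(map(str, flow)).encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["nva-1", "nva-2", "nva-3"]      # hypothetical NVA instances
flow = ("10.0.0.4", 50123, "10.0.1.10", 443, "TCP")

# Repeating the lookup for the same 5-tuple is deterministic:
assert pick_backend(flow, backends) == pick_backend(flow, backends)
```

This determinism is what allows an NVA behind an HA-Ports rule to see all packets of a given connection, whichever port the connection uses.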

Figure 4 - Network architecture which provides the use of LB with "HA Ports" option enabled

Another possible use for this feature is when you need to balance a large number of ports.

For more details on the option "HA Ports" you can see the official documentation.


The Standard Load Balancer introduces the following diagnostic capabilities:

  • Multi-dimensional metrics: you can retrieve various metrics that show, in real time, the usage status of internal and public load balancers. This information is particularly useful for troubleshooting.

Figure 5 – Load Balancer metrics from the Azure Portal

  • Resource Health: in Azure Monitor you can consult the health status of the Standard Load Balancer (currently available only for the Public type).

Figure 6 – Resource health of Load Balancer in Azure Monitor

You can also consult the health state history:

Figure 7 – Health history of Load Balancer

All details on Standard Load Balancer diagnostics can be found in the official documentation.


Load Balancers with the Standard SKU are secure by default: for traffic to flow, a Network Security Group (NSG) must explicitly allow it. As noted earlier, Standard Load Balancers are fully integrated into the virtual network, which is private and therefore closed. The Standard Load Balancer and Standard public IPs are used to allow access to the virtual network from outside, and you must now configure a Network Security Group (closed by default) to allow the desired traffic. If no NSG is present on the subnet or on the NIC of the virtual machine, traffic from the Standard Load Balancer is not allowed to reach it.

Basic Load Balancers are open by default, and configuring a Network Security Group is optional.

Outbound connections

Azure Load Balancers support both inbound and outbound connectivity scenarios. The Standard Load Balancer behaves differently from the Basic Load Balancer with regard to outbound connections.

To map private IP addresses in the virtual network to the public IP address of the Load Balancer, Source Network Address Translation (SNAT) is used. The Standard Load Balancer introduces a new algorithm with stronger, more scalable and more accurate SNAT policies, which gives you more flexibility and new features.

When using the Standard Load Balancer, consider the following aspects of outbound scenarios:

  • Rules must be explicitly created to allow outbound connectivity for the virtual machines, and they are defined on the basis of the inbound balancing rules.
  • The balancing rules define how SNAT is applied.
  • If there are multiple frontends, all of them are used, and each additional frontend multiplies the preallocated SNAT ports available.
  • You can choose and control whether a specific frontend should not be used for outbound connections.

With Basic Load Balancers, when multiple public frontend IPs are present, a single frontend is selected for outgoing flows. This selection cannot be configured and occurs randomly.
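To make the frontend-multiplication point concrete, here is a sketch of SNAT port preallocation. The tier table mirrors the defaults published for the Standard Load Balancer around the time of writing, but treat it as illustrative and verify the current values against the official documentation:

```python
def preallocated_snat_ports(pool_size, frontend_count):
    """Illustrative sketch of SNAT port preallocation per backend VM.

    The tier table below reflects the published defaults for the Standard
    Load Balancer at the time of writing (verify against current docs);
    each additional frontend IP multiplies the ports available to a VM.
    """
    tiers = [(50, 1024), (100, 512), (200, 256),
             (400, 128), (800, 64), (1000, 32)]
    for max_size, ports in tiers:
        if pool_size <= max_size:
            return ports * frontend_count
    raise ValueError("backend pool larger than 1000 instances")

# A 60-VM pool behind a single frontend gets 512 ports per VM;
# adding a second frontend doubles that to 1024.
assert preallocated_snat_ports(60, 1) == 512
assert preallocated_snat_ports(60, 2) == 1024
```

The takeaway: larger backend pools shrink the per-VM SNAT allocation, and adding frontends is the lever for scaling outbound connectivity back up.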

To designate a specific IP address, you can follow the steps in this section of the Microsoft documentation.

Management operations

Standard Load Balancers complete management operations more quickly, bringing execution times under 30 seconds (versus the 60–90 seconds of the Basic SKU). Editing times for backend pools also depend on their size.

Other differences

At the moment, a public Standard Load Balancer cannot be configured with a public IPv6 address:

Figure 8 – Public IPv6 for Public Load Balancer

Service-Level Agreements (SLA)

An important aspect to consider in choosing the most appropriate SKU for different architectures is the service level you must ensure (SLA). With the Standard Load Balancer, a Load Balancer endpoint serving two or more healthy virtual machine instances is guaranteed to be available with an SLA of 99.99%.

The Basic Load Balancer does not offer this SLA.
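To put the 99.99% figure in practical terms, a quick calculation of the downtime such an SLA allows over a month:

```python
def max_monthly_downtime_minutes(sla, days=30):
    """Downtime allowed by an availability SLA over one month."""
    return days * 24 * 60 * (1 - sla)

# A 99.99% SLA allows roughly 4.32 minutes of downtime in a 30-day month.
downtime = max_monthly_downtime_minutes(0.9999)
```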

For more details you can refer to the specific article SLA for Load Balancer.



While there are no charges for the Basic Load Balancer, the Standard Load Balancer is billed on the basis of the following elements:

  • Number of load-balancing rules configured.
  • Amount of inbound and outbound data processed.

There are no specific costs for NAT rules.

Details can be found on the Load Balancer pricing page.


Migration between SKUs

There is no in-place migration path between the Basic and Standard SKUs in either direction. Instead, a side-by-side migration must be planned, taking into account the functional differences described above.


The introduction of the Azure Standard Load Balancer brings new features and greater scalability. These characteristics may, in specific scenarios, spare you from using balancing solutions offered by third-party vendors. Compared to the traditional Basic SKU, Standard Load Balancers follow different operating principles and have distinct characteristics in terms of cost and SLA, which is worth considering when choosing the most suitable type of Load Balancer for the architecture you must build.

OMS and System Center: What's New in May 2018

Compared to recent months, in May Microsoft announced only a few news items about Operations Management Suite (OMS) and System Center. This article summarizes them, with the references needed for further study.

Operations Management Suite (OMS)

Log Analytics

Microsoft announced the retirement, starting from 8 June 2018, of the following solutions:

This means that, as of that date, you can no longer add these solutions to Log Analytics workspaces. If you are currently using them, note that they will continue to work, but they will no longer be supported and no new updates will be released.

This article reports some important recommendations to follow when using the "summarize" and "join" operators in Log Analytics and Application Insights queries. It is recommended to adjust the syntax of any existing queries that use these operators to comply with the specifications given in the article.

Security and Audit

Worth noting is this interesting article, which shows how to detect and investigate unusual and potentially malicious activity using Azure Log Analytics and Security Center.

Azure Site Recovery

Microsoft has announced that the following versions of the Azure Site Recovery REST API will be deprecated as of 31 July 2018:

  • 2014-10-27
  • 2015-02-10
  • 2015-04-10
  • 2015-06-10
  • 2015-08-10

You will need to use at least API version 2016-08-10 to interface with Azure Site Recovery. This change has no impact on the Azure Site Recovery portal or on access to the solution via PowerShell.

System Center

System Center Orchestrator

Version 7.3 of the Orchestrator Integration Packs for System Center 2016 has been released.
The download is available at this address and includes the following components:

  • System Center 2016 Integration Pack for System Center 2016 Configuration Manager.
  • System Center 2016 Integration Pack for System Center 2016 Data Protection Manager.
  • System Center 2016 Integration Pack for System Center 2016 Operations Manager.
  • System Center 2016 Integration Pack for System Center 2016 Service Manager.
  • System Center 2016 Integration Pack for System Center 2016 Virtual Machine Manager.

These Integration Packs allow you to develop automation that interfaces directly with the other System Center components. The Integration Pack for System Center 2016 Operations Manager has been revised so that it no longer requires the Operations Manager console to function correctly.

System Center Operations Manager

The following updates have been released for Operations Manager management packs:

  • Active Directory Federation Services version
  • Active Directory Federation Services 2012 R2 version 7.1.10100.1

System Center Service Management Automation

Service Management Automation sees the release of Update Rollup 5. Among the issues addressed:

  • Runbooks that, using cmdlets of System Center 2016 Service Manager, fail with the error "MissingMethodException".
  • Runbooks that fail with the exception "unauthorized access".

Improvements have also been made in the debug logging.

For the complete list of issues and the details on how to upgrade, you can consult the specific knowledge base article.


Evaluation of OMS and System Center

Remember that you can test and evaluate Operations Management Suite (OMS) for free by accessing this page and selecting the option most appropriate to your needs.

To test the various components of System Center 2016, you can access the Evaluation Center and, after registering, start the trial period.

Azure Backup: how the solution evolves

Microsoft recently announced important news regarding the protection of virtual machines with Azure Backup. Thanks to an update of the backup stack, you get substantial improvements that make the solution more powerful and extend its potential. This article investigates the benefits of the update and examines the steps to switch to the new backup stack.

Features introduced by the new backup stack

Instant Recovery points and performance improvements

The Azure Backup job for the protection of virtual machines can be divided into two distinct phases:

  1. Creating a snapshot of the VM.
  2. Transfer of the snapshot to a Recovery Services vault.

Figure 1 - Steps of the backup job

With the updated backup stack, the recovery point is made available as soon as the virtual machine snapshot is created (phase 1), and it can be used for restore operations in the usual ways; previously it could only be used at the end of phase 2. From the Azure portal you can distinguish the type of recovery point: at the end of phase 1 the recovery point type is shown as "snapshot", while once the snapshot has been transferred to the backup vault it is marked as "snapshot and vault".

The snapshots created during the backup process are retained for 7 days. This change considerably reduces restore execution times when restoring from snapshots, which can be used much like checkpoints created by Hyper-V or VMware.
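The two-phase behavior described above can be sketched as follows. This is a conceptual model only (the function names are invented, not an Azure Backup API): the recovery point type depends on how far the job has progressed, and local snapshots stay usable for instant restore for 7 days:

```python
from datetime import datetime, timedelta

SNAPSHOT_RETENTION = timedelta(days=7)

def recovery_point_type(snapshot_done, transferred_to_vault):
    """Illustrative mapping of backup-job phases to the recovery point
    types shown in the Azure portal."""
    if transferred_to_vault:
        return "snapshot and vault"   # phase 2 complete
    if snapshot_done:
        return "snapshot"             # phase 1 complete, already restorable
    return None

def usable_local_snapshots(snapshot_times, now):
    """Local snapshots remain usable for instant restore for 7 days."""
    return [t for t in snapshot_times if now - t <= SNAPSHOT_RETENTION]

now = datetime(2018, 5, 10)
snaps = [datetime(2018, 5, d) for d in (1, 4, 8)]
usable = usable_local_snapshots(snaps, now)
# Only the snapshots from 4 and 8 May are still within the 7-day window
```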

Support for large disks

The new backup stack also allows you to protect disks up to 4 TB in size, both managed and unmanaged. Previously the maximum size of a protected disk was 1 TB.

Distribution of disks during the recovery of virtual machines

After upgrading the backup stack, you can choose where to place the unmanaged disks of the virtual machines during the restore process. This reduces the post-restore configuration that would otherwise be needed if all disks were placed in the same storage account.

The Upgrade process

To enjoy the benefits introduced by the new backup stack, you must manually upgrade the subscription that owns the Recovery Services vault, as described below.

Pre-upgrade considerations

Before upgrading the stack, you should consider the following aspects:

  • Since the upgrade is enabled at the Azure subscription level, the way backups are performed changes for all protected virtual machines in that subscription. More granular control of the upgrade process will be possible in the future.
  • Snapshots are saved locally to speed up recovery point creation and to increase restore speed. This means there are storage costs for the snapshots retained for 7 days.
  • Incremental snapshots are saved as page blobs. Users of managed disks incur no additional costs, while users of unmanaged disks must also consider the cost of the snapshots kept (for 7 days) in the local storage account.
  • When restoring a premium VM from a snapshot recovery point, a temporary storage location is used while the restore process creates the VM.
  • For premium storage accounts, you need to allow for an allocation of 10 TB for the snapshots created for instant recovery purposes.

How to upgrade

The upgrade can be performed directly from the Azure portal or through PowerShell commands.

When you access the Recovery Services vault from the Azure portal, a notification appears indicating that the backup stack upgrade can be performed.

Figure 2 – Backup stack upgrade notification

Selecting the notification displays the following message, which allows you to start the upgrade process.

Figure 3 - Launch of the backup stack upgrade process

The same operation can be performed using the following Powershell commands:

Figure 4 – Powershell commands to register the subscription to the upgrade process

The backup stack upgrade generally takes several minutes (two hours at most) and has no impact on scheduled backups.


This major update of the Azure Backup stack shows that the solution is evolving to expand its capabilities and to ensure higher performance levels. To contribute new ideas, or to vote for the features you consider most important for Azure Backup, you can access this page. For more details on Azure Backup, see Microsoft's official documentation.

Windows Server 2019: the new storage migration service (Storage Migration Service)

A long-standing issue around Windows Server is the lack of an effective methodology for migrating data and storage off older operating systems. Because in-place upgrades of the operating system are not always feasible, and manual migrations are often slow and require significant service interruptions, the tendency is to keep using older versions of Windows Server. This article presents the features of the new Storage Migration Service (SMS), included in Windows Server 2019, and examines how this service can migrate storage from older Windows Server platforms to facilitate their decommissioning.

Figure 1 – An overview of Storage Migration Service

Storage Migration Service, in this first version, can transfer content using the SMB protocol (any version) to various targets: traditional on-premises hardware and virtual machines, IaaS VMs running in Azure or on Azure Stack, and Azure File Sync.

The migration source may run one of the following operating systems:

  • Windows Server 2003
  • Windows Server 2008
  • Windows Server 2008 R2
  • Windows Server 2012
  • Windows Server 2012 R2
  • Windows Server 2016
  • Windows Server 2019 Preview

The new Storage Migration Service (SMS) role can be activated in both the Standard and Datacenter editions of Windows Server 2019, through Windows Admin Center, PowerShell or Server Manager.

Figure 2 – Installation, from Windows Admin Center, of SMS functionality

The SMS feature consists of a service called the Orchestrator (Storage Migration Service Orchestrator Node) and one or more services called proxies. The Orchestrator manages migrations and keeps the results in a repository, while the proxy systems enrich the migration process with additional functionality and allow higher performance.

The migration workflow enabled by the SMS role can be fully orchestrated through Windows Admin Center (also known as Project Honolulu). With this management tool, you can simultaneously migrate storage residing on multiple systems to new targets on-premises or in Azure.

Storage Migration Service can handle the most common problems you face in storage migrations, including files in use, share settings, security settings, network names and addresses, local security principals and encrypted data. All these operations are easily managed from an intuitive graphical interface that hides the robust PowerShell-based automation underneath.

To manage Storage Migration Service from Windows Admin Center, you need to install a specific extension, currently in preview.

Figure 3 - Installation in Windows Admin Center of the SMS Extension

After adding the extension, you can create new migration jobs.

Figure 4 - Adding a SMS job

Storage Migration Service approaches the storage migration procedure in three phases:

  1. Inventory of the existing (source) servers, gathering information about the data, its security, SMB shares and network settings.

Figure 5 – Inventory phase


  2. Migration of the data, security settings and network settings, over the SMB protocol, to a new (target) system.

Figure 6 – Transfer phase

  3. Identity management: decommissioning the old source so that the migration is transparent to users and applications, without causing inefficiencies. In this phase the identity is transferred to the new server, handling its network settings, the domain join and the renaming of the source server. This phase, called cutover, is not yet publicly available as of May 2018.


Storage Migration Service is a new tool in Windows Server 2019, still under active development, that will be enhanced with innovative features in future releases. Its potential is really interesting, and it will certainly become a widely used service for easily migrating content off obsolete platforms, allowing their decommissioning. Those wishing to test the latest Windows Server 2019 features can join the Windows Insider program. Please remember that the preview of Windows Server 2019 and Storage Migration Service are not officially supported in production environments.

OMS and System Center: What's New in April 2018

Microsoft constantly announces news about Operations Management Suite (OMS) and System Center. Our community releases this monthly summary to give you a general overview of the month's main new features, so that you can stay up to date and have the references needed for further investigation.

Operations Management Suite (OMS)

Log Analytics

Microsoft has decided to extend Alerts in Log Analytics from the OMS portal to the Azure portal, centralizing them in Azure Monitor. This process will be carried out automatically starting from 14 May 2018 (the date was postponed; initially it was planned for 23 April). It will not result in any change to the configuration of alerts and related queries, and no downtime is foreseen for its implementation. For further details please consult the specific article "The extension of Log Analytics Alerts in Azure Monitor".

Figure 1 – Notification of alerts extension in the OMS portal

To avoid situations where resources managed in Log Analytics unexpectedly send a high volume of data to the OMS workspace, the ability to set a daily volume cap has been introduced. This allows you to limit data ingestion for your workspace. You can configure the daily volume cap in all regions, from the Usage and estimated costs section:

Figure 2 – Setting the Daily volume cap
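The behavior of a daily cap can be sketched as follows. This only models the concept (Log Analytics enforces its cap service-side, per workspace; the class and its names are invented for illustration):

```python
class DailyCapGate:
    """Illustrative sketch of a daily volume cap on data ingestion:
    once the day's running total reaches the cap, further data is
    rejected; the counter resets when a new day begins."""

    def __init__(self, cap_gb):
        self.cap_gb = cap_gb
        self.day = None
        self.ingested_gb = 0.0

    def ingest(self, day, size_gb):
        if day != self.day:           # new day: the counter resets
            self.day, self.ingested_gb = day, 0.0
        if self.ingested_gb + size_gb > self.cap_gb:
            return False              # over the cap: data is dropped
        self.ingested_gb += size_gb
        return True

gate = DailyCapGate(cap_gb=5.0)
accepted = [gate.ingest("2018-04-20", 2.0) for _ in range(3)]
# The third 2 GB batch would exceed the 5 GB cap and is rejected
```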

The portal also shows the trend of the data volume over the last 31 days and the total data volume, grouped by solution:

Figure 3 – Data ingestion per solution (last 31 days and total)

The Log Search API, used by the old Log Analytics query language, is deprecated as of 30 April 2018. It has been replaced by the Azure Log Analytics REST API, which supports the new query language and introduces greater scalability in the results it can return. For more details you can consult the official announcement.


This month the new version of the OMS agent for Linux fixes a significant number of bugs and updates several of its components. It also introduces support for Debian 9, Amazon Linux 2017 and OpenSSL 1.1. To obtain the updated version of the OMS agent, see the official GitHub page OMS Agent for Linux Patch v1.6.0-42.

Figure 4 – Bug fixes and what's new for the OMS agent for Linux

Azure Backup

As for Azure Backup, the following improvements in service scalability have been announced:

  • The ability to create up to 500 Recovery Services vaults per region in every subscription (the previous limit was 25).
  • The number of virtual machines that can be registered in each vault has been increased to 1000 (previously 200).
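With these raised limits, vaults can be created per region as needed. A minimal sketch using the AzureRM PowerShell module available at the time of writing (the resource group, vault name and location are example values):

```powershell
# Sign in to the subscription (opens an interactive prompt)
Login-AzureRmAccount

# Example resource group and location
New-AzureRmResourceGroup -Name "RG-Backup" -Location "West Europe"

# Create a Recovery Services vault in that region
$vault = New-AzureRmRecoveryServicesVault `
    -Name "Vault-Demo" `
    -ResourceGroupName "RG-Backup" `
    -Location "West Europe"

# Set the vault context for subsequent backup operations
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
```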

Azure Backup, for the protection of Azure IaaS VMs, now supports storage accounts secured with storage firewalls and Virtual Networks. More details about this can be found on Microsoft's official blog.

Figure 5 - Protection of Azure IaaS VMs in secured storage scenarios

The rules for enabling long-term backup for Azure SQL Database have changed. Previously, keeping Azure SQL DB backups for up to 10 years required saving them in an Azure Recovery Services vault. With this new feature, you can keep long-term backups directly in Azure Blob Storage, removing the need for a Recovery Services vault. All this gives you more flexibility and greater control of costs. For more details, see the article SQL Database: Long-term backup retention preview includes major updates.

System Center

System Center Configuration Manager

For System Center Configuration Manager, version 1804 of the Technical Preview branch has been released. In addition to general improvements, this update introduces new features concerning OSD, the Software Center and the Configuration Manager infrastructure. All the new features included in this update are described in the article Update 1804 for Configuration Manager Technical Preview Branch. Please note that Technical Preview branch releases help you evaluate the new features of SCCM and should be applied only in test environments.

System Center Operations Manager

Microsoft has released the Update Rollup 5 (UR5) for System Center 2016 Long-Term Servicing Channel (LTSC). This update does not introduce new features, but fixes several bugs.

The references for this update, for each System Center product, are as follows:

There are no updates regarding Service Provider Foundation.

System Center Operations Manager 1801 introduces support for Kerberos authentication when the WS-Management protocol is used by the management server to communicate with UNIX and Linux systems. This provides a higher level of security, eliminating the need to enable basic authentication for Windows Remote Management (WinRM).

System Center Operations Manager 1801 also introduces the following improvements to Linux log file monitoring:

  • Support for wildcard characters in the log file name and path.
  • Support for new match patterns that allow customized log searches.
  • Support for the Fluentd plugins published by the Fluentd community.

Below are the news concerning the SCOM Management Packs:

  • MP for Windows Server Operating System 2016 and 1709 Plus
  • MP for SQL Server 2008-2012
  • MP for SQL Server 2014
  • MP for SQL Server 2016
  • MP for Microsoft SQL Azure Database
  • MP for SQL Server Dashboards
  • MP for UNIX and Linux 7.6.1085.0

Evaluation of OMS and System Center

Please remember that you can test and evaluate Operations Management Suite (OMS) for free by accessing this page and selecting the mode most appropriate for your needs.

To test the various components of System Center 2016, you can access the Evaluation Center and, after registration, start the trial period.

Storage Replica: What's new in Windows Server 2019 and the management with Windows Admin Center

Storage Replica is a Microsoft technology, introduced in Windows Server 2016, that replicates volumes between servers or clusters, synchronously or asynchronously, for disaster recovery purposes. This technology also allows you to create a stretch failover cluster, with nodes spread over two different sites, keeping the storage in sync. This article presents the Storage Replica news that will be introduced in Windows Server 2019 and shows how to enable Storage Replica using the new management tool Windows Admin Center.


What's new in Storage Replica in Windows Server 2019

In Windows Server 2016, Storage Replica can be used only with the Datacenter edition of the operating system, while Windows Server 2019 will also allow you to enable Storage Replica on the Standard edition, though currently with the following limitations:

  • You can replicate a single volume instead of an unlimited number of volumes.
  • The maximum size of the replicated volume must not exceed 2 TB.
  • The replicated volume can have only one partnership, instead of an unlimited number of partners.
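Before enabling replication on either edition, the topology can be validated with the Test-SRTopology cmdlet. The following is a sketch; server names, drive letters and the result path are example values:

```powershell
# Validate the source/destination servers, volumes and network connectivity
# before creating a partnership. All names and paths are example values.
Test-SRTopology `
    -SourceComputerName "SRV1" `
    -SourceVolumeName "D:" `
    -SourceLogVolumeName "L:" `
    -DestinationComputerName "SRV2" `
    -DestinationVolumeName "D:" `
    -DestinationLogVolumeName "L:" `
    -DurationInMinutes 30 `
    -ResultPath "C:\Temp"

# The cmdlet produces an HTML report in the result path with the
# requirements check and performance estimates for the partnership.
```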

By adopting a new log format (Log v1.1), Storage Replica introduces important performance improvements in throughput and latency. You benefit from these improvements when all systems involved in the replication process run Windows Server 2019, and they are especially noticeable on all-flash arrays and on Storage Spaces Direct (S2D) clusters.

To validate the effectiveness of the replication process, the ability to perform a Test Failover has been introduced. This new feature lets you mount a writable snapshot of the replicated storage. To perform this operation, for testing or backup purposes, you must have a volume not involved in replication on the destination server. The test failover has no impact on the replication process, which continues to protect the data, and the changes made to the snapshot remain confined to the test volume. Upon completion of testing, the snapshot should be discarded.
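A sketch of the test failover flow, assuming the Mount-SRDestination and Dismount-SRDestination cmdlets of the Windows Server 2019 preview; the server name, replication group name and temporary volume are example values:

```powershell
# Mount a writable snapshot of the destination replication group "RG02"
# on an unreplicated volume (T:) of the destination server.
Mount-SRDestination -ComputerName "SRV2" -Name "RG02" -TemporaryPath "T:\"

# ... run tests or backups against the mounted snapshot here ...

# Discard the snapshot once testing is complete; replication was never
# interrupted during the test.
Dismount-SRDestination -ComputerName "SRV2" -Name "RG02"
```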

Storage Replica in Windows Admin Center

Windows Admin Center, also known as Project Honolulu, is an HTML5-based web console that enables you to manage your infrastructure in a centralized way.

Through Windows Admin Center, you can install the Storage Replica feature and the related PowerShell module on the servers.

Figure 1 - Add the Storage Replica feature from Windows Admin Center

Figure 2 - Confirm the installation of Storage Replica and its dependencies

Figure 3 - Notification that the installation was successful

After the installation, the server requires a restart.

At this point you can configure a new replication partnership through Windows Admin Center. The same thing can be accomplished using the Windows PowerShell cmdlet New-SRPartnership.
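The partnership created by the Windows Admin Center wizard can be sketched in PowerShell as follows; server names, replication group names and drive letters are example values:

```powershell
# Create a synchronous replication partnership between two servers.
# All names and drive letters are example values; each server needs a
# data volume (D:) and a dedicated log volume (L:).
New-SRPartnership `
    -SourceComputerName "SRV1" `
    -SourceRGName "RG01" `
    -SourceVolumeName "D:" `
    -SourceLogVolumeName "L:" `
    -DestinationComputerName "SRV2" `
    -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" `
    -DestinationLogVolumeName "L:" `
    -ReplicationMode Synchronous
```

Switching -ReplicationMode to Asynchronous trades the zero-data-loss guarantee of synchronous replication for tolerance of higher-latency links between the two sites.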

Figure 4 - Adding new Storage Replica partnership between two replication Groups

Figure 5 - Settings required for the configuration of the Partnership

Windows Admin Center reports, at the end of the configuration, the details of the partnership.

Figure 6 - Details about the replication partnership

In addition, you can manage the replication status (suspend/resume), switch the direction of synchronization and modify the configuration (add or remove replica volumes and change the partnership settings).

Figure 7 - Switch the replication direction

Figure 8 - Changing the partnership settings


Windows Server 2019 will introduce significant changes to the Storage Replica service that, in addition to evolving it in terms of performance and effectiveness, will make it even more accessible. This is enriched by the possibilities offered by Windows Admin Center to manage Storage Replica easily, quickly and completely. Microsoft is making significant investments in storage, and the results are obvious and tangible. Those wishing to test the latest Windows Server 2019 features can join the Windows Insider program.



Azure Monitor: how to check the health of Azure Services

Azure Monitor, through the service called Azure Service Health, provides detailed information when conditions affect the functioning of your services in the Microsoft cloud. In this article we will examine how Azure Service Health helps you identify the impact of problems, sends notifications and keeps administrators up to date as an issue is resolved. It will also show how this service helps you prepare for planned maintenance and understand how it might affect the availability of your resources.

To get an overall view of Azure health, Microsoft offers its status page, which shows in real time the situation of the various products and services, divided by geographical area. This page shows all problems, even those that have no direct impact on the status of your services.

To obtain a customized view, covering only your own resources, you can use Azure Service Health. This facilitates early detection of information concerning the following aspects:

  • Service issues: lists the Azure service issues that impact your own resources.
  • Planned maintenance: lists future maintenance affecting the availability of your own services.
  • Health advisories: changes in Azure services that require attention. Possible examples include reports that certain usage quotas have been exceeded or that certain Azure features have been deprecated.

Figure 1 – Azure Service Health sections present in the Azure Portal

By accessing the Azure Service Health – Service issues section in Azure Monitor, you can create custom dashboards. To receive notifications only for the resources of interest, you are prompted to select the subscriptions, the regions and the appropriate services. At the end of this selection, you can save the filters by assigning them a name.

Figure 2 – Selection of regions

Figure 3 – Selection of Azure services

Figure 4 – Saving and naming

By selecting the button "Pin filtered world map to dashboard", you can see the custom map in the Azure Portal dashboard, so you instantly have a visual indication of the health status of the subscription, services and regions you selected.

Figure 5 – Map, with filters applied, shown in the dashboard

If issues arise that impact your resources on Azure, when you access the portal you will receive a notification similar to the following:

Figure 6 - Reporting an ongoing issue that impacts your services

Selecting the custom map takes you to Azure Service Health in Azure Monitor. This dashboard shows the relevant details and the list of your resources that could potentially be impacted by the issue, in addition to its status updates.

Figure 7 - Summary of the issue

From this page you can also download PDF documentation (in some cases also in CSV format) describing the problem, which can be sent to those who have no direct access to the Azure Portal. There are also useful links to contact Microsoft support if the error condition persists after the issue is reported as solved.

Figure 8 – Resources potentially impacted by the issue

The Health history section shows past problems encountered on Azure services that had an impact on the health of your own resources.

Figure 9 - List of problems reported in the Health history

Azure Service Health, in the Resource health section, also displays the health state of resources by type.

Figure 10 – Resources Health by type

Selecting an individual Azure service, you can consult both the current state of health and any problems that occurred in the past on a given resource.

Figure 11 – Current state of health and past events of a specific Virtual Machine

Thanks to the full integration of Service Health with Azure Monitor, which holds the alerting engine of Azure, you can configure specific Alerts for issues on the Azure side that impact the operation of the resources in your subscription. Notification occurs through Action Groups, which currently include these possible actions:

  • Voice call (currently only in US) or sending SMS (for enabled countries).
  • Sending an email.
  • Calling a webhook.
  • Sending data to ITSM.
  • Calling a Logic App.
  • Sending a push notification to the Azure mobile app.
  • Running an Azure Automation runbook.
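An Action Group like the one a Service Health Alert notifies can also be defined from PowerShell. A sketch assuming the AzureRM.Insights cmdlets available at the time of writing; all names, the email address and the phone number are example values:

```powershell
# Define an email receiver and an SMS receiver (example contact details)
$email = New-AzureRmActionGroupReceiver -Name "notify-admins" `
    -EmailReceiver -EmailAddress "admin@contoso.com"

$sms = New-AzureRmActionGroupReceiver -Name "oncall-sms" `
    -SmsReceiver -CountryCode "39" -PhoneNumber "3331234567"

# Create (or update) the Action Group holding both receivers
Set-AzureRmActionGroup `
    -Name "ServiceHealthActions" `
    -ResourceGroupName "RG-Monitor" `
    -ShortName "SvcHealth" `
    -Receiver $email, $sms
```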

Figure 12 – Adding a Service Health Alert

Figure 13 – Configuring a Service Health Alert


With the recent availability of Azure Service Health, you can receive customized and targeted information on the health of your own resources in Azure, without having to search the global Azure status page for potential problems. This saves time and makes it easy to understand, in the face of problems or scheduled maintenance, what the impact on your own services is.

The extension of Log Analytics Alerts in Azure Monitor

Being able to take advantage of a centralized and effective service for managing the Alerts of your infrastructure is definitely an important and fundamental part of a monitoring strategy. For this purpose, Microsoft has introduced a new Alert management experience in Azure Monitor. This article presents how Alert management in Log Analytics is evolving and what benefits this change introduces.

Log Analytics can generate Alerts when a search, run at a scheduled frequency against the OMS repository, returns results that match the established criteria. When an Alert is generated in Log Analytics, you can configure the following actions:

  • Email notification.
  • Invocation of a webhook.
  • Running a runbook of Azure Automation.
  • IT Service Management activities (requires the presence of the connector for the ITSM solution).

Figure 1 – Alerts in Log Analytics

Until now, this type of configuration has been managed from the OMS portal.

Azure Monitor is a service that allows you to monitor all Azure resources, and it holds the "alerting" engine for the entire cloud platform. By accessing the service from the Azure portal, you have available, in a single location, all the Alerts of your infrastructure, coming from Azure Monitor, Log Analytics and Application Insights. You can then take advantage of a unified experience both for consulting the Alerts and for authoring them.

At present, Alerts created in Log Analytics are already listed in the Azure Monitor dashboard, but any change requires accessing the OMS portal. To facilitate this management, Microsoft has therefore decided to extend Log Analytics Alerts from the OMS portal to the Azure Portal. This process will run automatically starting from 23 April 2018, will not change the configuration of the Alerts or their queries, and requires no downtime.

It follows that, after this operation, any actions associated with the Alerts will be handled through Action Groups, which will be created automatically by the extension process.

The extension of Log Analytics Alerts into the Azure Portal, besides the advantage of managing them from a single portal, brings the following benefits:

  • The limit of 250 Alerts no longer applies.
  • You can manage, enumerate and display not only Log Analytics Alerts, but also those from other sources.
  • You have greater flexibility in the actions that can be taken for an Alert, thanks to Action Groups, such as the ability to send an SMS or make a voice call.

If you don't want to wait for the automatic process, you can force the migration via API or from the OMS portal, following the steps documented below:

Figure 2 - Starting the "Extend into Azure" process from the OMS portal

Figure 3 – Step 1: view the details of the extension process.

Figure 4 – Step 2: summary of the proposed changes

Figure 5 – Step 3: confirmation of the extension process

By specifying an email address, you can be notified at the end of the migration process with a message containing the summary report.

Figure 6 - Notification of the planned extension of the Alerts

During the extension of Log Analytics Alerts to Azure, you will not be able to change existing Alerts, and new Alerts must be created from Azure Monitor.

At the end of the extension process, the Alerts will still be visible from the OMS portal, and you will receive a notification via email at the address specified during the migration wizard:

Figure 7 – Email notification at the end of the extension process

From the Azure portal, in the “Monitor – Alerts” section, you have full management of Log Analytics Alerts:

Figure 8 - Example of modifying an Alert Rule from the Azure Monitor

The extension of Log Analytics Alerts into Azure Monitor involves no costs, but you should be aware that the use of Azure Alerts generated by Log Analytics queries is free of charge only if it falls within the limits and conditions reported on the Azure Monitor pricing page.


Thanks to this extension of Log Analytics Alerts, Azure Monitor is confirmed as the new management engine for all Alerts, providing administrators with a simple and intuitive interface and enriching the possible notification actions of an alert.