Much has been written about software-defined networking (Software Defined Networking, SDN) over the past year and a half. Hyper-V Network Virtualization (HNV), first introduced in Windows Server 2012, is one implementation of the SDN approach. I covered HNV architecture and configuration using System Center 2012 Virtual Machine Manager (VMM) in previous posts. HNV has undergone several changes and improvements in Windows Server 2012 R2, which I will briefly discuss today.
HNV architecture changes
The fundamental change in the HNV architecture is the relocation of the Windows Network Virtualization (WNV) filter inside the Hyper-V Extensible Switch. Why does this change matter?
Recall that HNV uses packet encapsulation to forward traffic between virtual machines (VMs) of a virtualized network located on different physical hosts. The source packet, with IP addresses belonging to the virtualized network (the so-called CA addresses), leaves the VM, is intercepted by the WNV filter and is packed into an NVGRE structure, which in turn is placed in a packet with IP addresses corresponding to the physical network segment (PA addresses). Such a packet can then travel freely between data center hosts. In the Windows Server 2012 network stack, the filter sits between the Hyper-V switch and the driver of the host's physical network adapter. As a consequence, all Hyper-V Extensible Switch extensions, if any, "see" only source packets with CA addresses; they know nothing about PA addresses or about what happens to the packet before it leaves the host.
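The CA-to-PA mappings that drive this encapsulation make up the HNV policy of each host and can be inspected with the NetWNV PowerShell module. A minimal sketch, assuming it runs on a Hyper-V host that already has HNV policy records configured:

# Inspect the CA-to-PA lookup records that make up the host's HNV policy
Get-NetVirtualizationLookupRecord |
    Format-Table CustomerAddress, ProviderAddress, VirtualSubnetID, Rule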

In Windows Server 2012 R2, the filter is located inside the Hyper-V switch. Any switch extensions, including filtering rules, access control lists and antivirus modules, can now be applied both to the original IP packet and to the NVGRE structure. The path of an incoming packet, for example, looks like this:

It is also worth noting that while in Windows Server 2012 the WNV filter had to be explicitly enabled in the properties of the physical network adapter, in Windows Server 2012 R2 the filter is enabled by default, and HNV is ready for use as soon as the OS boots.
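For reference, on Windows Server 2012 the binding was enabled per adapter. A quick sketch, assuming the physical adapter is named "Ethernet 2" (adjust the name for your host):

# Windows Server 2012 only: bind the WNV filter to the physical adapter
Enable-NetAdapterBinding -Name "Ethernet 2" -ComponentID ms_netwnv

# Verify the binding state (on 2012 R2 this step is no longer required)
Get-NetAdapterBinding -Name "Ethernet 2" -ComponentID ms_netwnv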
IP address change tracking
Another important innovation in Windows Server 2012 R2 is that Hyper-V has become a "learning" system: it can now track changes to CA addresses inside a VM.
Both before and now, HNV can be configured either with PowerShell scripts or by using VMM. In the first case, the corresponding cmdlets are used to edit the HNV policy so that it reflects the CA addresses configured inside the VMs. With VMM, as shown here, you create a pool of IP addresses for the CA space; when the next virtual machine is created, VMM takes a free address from this pool, assigns it to the new VM and updates the HNV policy.
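In the PowerShell case, a single policy record looks roughly like this. A minimal sketch with illustrative values (the CA, PA, virtual subnet ID and MAC address below are placeholders, not from a real deployment):

# Map the VM's CA address to the PA address of the host where it runs,
# in virtual subnet 5001, using NVGRE encapsulation
New-NetVirtualizationLookupRecord -CustomerAddress 10.30.1.104 `
    -ProviderAddress 192.168.100.104 `
    -VirtualSubnetID 5001 `
    -MACAddress "00155D010104" `
    -Rule TranslationMethodEncap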
What happens if the VM owner manually changes the IP address inside his virtual machine? The VM can no longer communicate with other VMs of the virtualized network, because Windows Server 2012 has no mechanism that would automatically update the HNV policies to reflect the new CA address. Of course, the administrator of the physical hosts can edit the HNV policy via PowerShell, but at data center scale this approach is hardly practical. If the VM was deployed with VMM, then VMM is supposed to control address allocation, and it does so as long as the IP address it issued from the pool remains unchanged. But manually changing the IP inside the VM means that VMM loses control over the VM's network settings, and with it the centralized management of the HNV policy for which HNV support was built into VMM in the first place.
In Windows Server 2012 R2 the situation is different. A change of CA address inside a VM is immediately reflected in the HNV policy of the host where the VM runs; the host passes the change to VMM, which in turn synchronizes it with the other hosts running VMs of the same virtualized network.
The ability to track CA address changes now makes several important scenarios possible:
- Clustering support. In a virtualized network, VMs can be combined into a highly available guest cluster using the Failover Clustering service and the Shared VHDX mechanism (see the sketch after this list). In addition, the HNV gateway itself can also be clustered.
- Using a DHCP server in a virtualized network. Inside a VM you can configure a DHCP server, which other VMs on this network will use.
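For the guest-cluster scenario, the shared disk is attached to each node as a shared VHDX. A minimal sketch, assuming the VM names and CSV path are placeholders:

# Attach the same VHDX to both guest-cluster nodes with persistent
# reservations enabled, which is what makes the disk shareable
Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "GuestNode2" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations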
Broadcast and multicast traffic support
The possibility of using DHCP in a virtualized network calls for additional explanation, since besides "tracking" dynamic addresses it is necessary to deliver broadcast traffic within the virtualized segment. Within the virtualized segment specifically, because if broadcasts from every virtual network were released onto the physical network, the flood level in the data center would go off the scale.
If IP multicast is configured on the physical host adapters, it is used first to carry broadcast/multicast traffic. However, configuring IP multicast is hardly common practice in data centers. Accordingly, if multicast is not configured on the physical network, broadcast packets from a virtual network are transmitted as unicast, and only to those PA addresses behind which sit VMs of the same virtual subnet.
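You can see which PA addresses would receive such unicast copies by querying the host's lookup records. A sketch, assuming virtual subnet ID 5001 as a placeholder:

# List the distinct PA addresses hosting VMs of virtual subnet 5001;
# a broadcast in that subnet is replicated as unicast to each of them
Get-NetVirtualizationLookupRecord |
    Where-Object { $_.VirtualSubnetID -eq 5001 } |
    Select-Object -ExpandProperty ProviderAddress -Unique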
As an illustration, here is a somewhat simplified view of a DHCP client requesting and obtaining an IP address. PA addresses are highlighted in blue, CA addresses in green.


Support for broadcast/multicast traffic also covers Duplicate Address Detection (DAD) and Neighbor Unreachability Detection (NUD).
Performance improvements
From the standpoint of network performance when using HNV, two points are worth noting.
First, NIC teaming with dynamic traffic balancing (the new Dynamic mode of the in-box NIC Teaming technology) now gives VMs in a virtualized network not only fault tolerance at the network adapter level, as in Windows Server 2012, but also actual balancing of traffic across the adapters of the team, and therefore higher network bandwidth for the VMs.
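Creating such a team takes one cmdlet. A sketch, assuming the team members are physical adapters named "NIC1" and "NIC2" (placeholders):

# Create a switch-independent team with the new Dynamic load-balancing mode
New-NetLbfoTeam -Name "HNVTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic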
Second, network adapters with support for NVGRE Encapsulated Task Offload have begun to appear on the market. The performance of network operations depends to a large extent on whether the hardware capabilities of the network adapter are used, such as Large Send Offload (LSO), Receive Side Scaling (RSS) or Virtual Machine Queue (VMQ). For non-virtualized network traffic these technologies simply work, provided the adapter supports them. Applying the same mechanisms to encapsulated NVGRE packets requires that the adapter be able to parse the CA packets inside GRE.
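Windows Server 2012 R2 includes cmdlets for checking and toggling this capability. A sketch, again assuming a placeholder adapter name:

# Check whether the adapter reports NVGRE Encapsulated Task Offload support
Get-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 2"

# Enable it if the hardware allows
Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 2"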
Mellanox and Emulex have announced support for NVGRE Encapsulated Task Offload in some models of their adapters. Below are the test results for one of these adapters.

Diagnostics and Testing
The more complex the technology, the harder it is to pin down the causes of problems. That certainly applies to HNV in full measure, so the set of diagnostic tools for HNV in Windows Server 2012 R2 has been expanded.
Microsoft Message Analyzer. This free utility is a network (and not only network) analyzer that replaces Network Monitor. Microsoft Message Analyzer recognizes the NVGRE structure and makes it easy to analyze encapsulated packets, including CA addresses. Note that the analyzer can capture not only network traffic but also, via Event Tracing for Windows (ETW), system and other messages on remote hosts running Windows 8.1 and Windows Server 2012 R2.
The figure shows a capture of pings between two VMs with CA addresses 10.30.1.104 and 10.30.1.101 and PA addresses 192.168.100.104 and 192.168.100.104.

By selecting a GRE packet, you can see the field containing the virtual subnet ID (Virtual Subnet ID, VSID).
Ping. The ping utility has a new switch, "-p", which lets you ping a host by its PA address. Typically, the PA space is defined as a separate VMM logical network, distinct from the addresses used to manage the virtualization hosts and from any other addresses.

In this case, PA addresses are not shown in the output of ipconfig /all and do not respond to a "normal" ping. The "-p" switch solves this problem and lets you check connectivity between hosts in the PA space.
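Usage is the same as for an ordinary ping; the PA address below is a placeholder:

# From one Hyper-V host, ping another host's provider (PA) address
ping -p 192.168.100.104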
Test-VMNetworkAdapter. In a large data center it is typical for the data center administrator to have no access to the guest OS inside the VMs. If a problem arises at the network level, the administrator can still check the flow of packets in the PA space, but the CA space settings may be out of reach. Here the Test-VMNetworkAdapter cmdlet can help. In essence, this cmdlet implements a ping between two VMs, from one CA address to another. (The administrator may not be able to change the CA addresses inside a VM, but he can see them in the VMM or Hyper-V console without "climbing" inside the VM.) When using the cmdlet, you specify which VM receives and which sends the packets, the sender and receiver addresses, a sequence number and the receiver's MAC address.
# On the host of the receiving VM: put Fabrikam_SRV02 into listening mode
Test-VMNetworkAdapter -VMName Fabrikam_SRV02 -Receiver -ReceiverIPAddress 10.30.1.107 -SenderIPAddress 10.30.1.108 -SequenceNumber 102

# On the host of the sending VM: send test packets from Fabrikam_SRV03
Test-VMNetworkAdapter -VMName Fabrikam_SRV03 -Sender -ReceiverIPAddress 10.30.1.107 -SenderIPAddress 10.30.1.108 -SequenceNumber 102 -NextHopMacAddress 00-1d-d8-b7-1c-0c
A response like the one shown below indicates that network communication between the VMs is fine, including the packing and unpacking of CA packets into PA packets and back. In that case the problem should most likely be sought inside the VMs (firewall rules, whether services are running, and so on).

HNV Gateway
For the VMs of a virtualized network to communicate with the outside world, an HNV gateway must be configured. Windows Server 2012 R2 can act as such a gateway out of the box: all services and components needed to implement the Network Virtualization Gateway function are included in the new server OS, and you only need to configure them. How this is done and what such a gateway can do is the topic of a separate post, which I hope to publish in the near future.
So, the implementation of the SDN concept in Windows Server 2012 R2 has been developed further. The changes aim to increase the flexibility and performance of the solution. The additional diagnostic tools will help you detect and fix potential problems more effectively, and the built-in gateway, together with its support in VMM 2012 R2, will speed up the deployment of virtualized networks across the data center.
Additional information can be found in the second module of the course "All about System Center 2012 R2 (Jump Start)" on the MVA training portal.
I hope the material was helpful. Thank you!