Networking in System Center Virtual Machine Manager is always a difficult subject for students, so hopefully this post will make it a little easier to understand.
When we talk about networking in Hyper-V and VMM, it is important that you understand some of the key concepts of virtualized networking, such as VM Networks, Logical Switches and Logical Networks, and how these relate to the underlying infrastructure and Hyper-V’s networking capabilities. This forms the basic knowledge required when configuring advanced networking features such as Port Classifications and Virtual Port Profiles.
Networking in Hyper-V
In Windows Server 2012 Hyper-V Microsoft changed the networking functionality of both Windows Server and Hyper-V. In previous versions of Hyper-V you required at least four physical network adapters in a Cluster configuration to achieve a networking configuration that isolated the various types of network traffic:
Management: This network is used for general management including managing the Hyper-V host, Active Directory communication, deploying virtual machine files and backup operations.
Cluster: This type of network typically has two responsibilities: the first is Cluster heartbeats between Hyper-V Cluster nodes, and the second is Redirected Input/Output (IO) operations when Cluster Shared Volume (CSV) traffic is redirected to the CSV owner node, either due to a failure in a host’s connection to the underlying storage or during a backup operation. In Windows Server 2012 and above, CSV redirection no longer occurs during backup operations.
Live Migration: This type of network is for migration of virtual machines between Hyper-V Cluster nodes. This network can remain idle for periods of time as Live Migrations are not a continuous operation.
Virtual Machines: This is normally the network responsible for carrying traffic to and from virtual machines.
In some cases this meant a host required a considerable number of physical network adapters. If you were using iSCSI-based storage you would ideally require an additional two network adapters, configured with Multipath IO (MPIO) for resilience, to connect the Hyper-V hosts to the shared storage array.
The following diagram shows a typical configuration of Windows Server 2008 R2 Hyper-V:
Prior to Windows Server 2012, physical network adapter teaming was not supported by Microsoft; it had to be enabled on the physical NICs using software from the network adapter vendor. To do this the NICs had to be from the same hardware vendor, which had the potential to cause problems. For example, if all the NICs were the same model and vendor and a driver update introduced an issue, every member of the team was affected at once.
In Windows Server 2012 Microsoft included NIC teaming within the base operating system and fully supported its use with Hyper-V. This allowed for a supported Hyper-V implementation when using NIC teaming. By using teaming you are able to create load-balanced failover teams of NICs with the common capabilities of the underlying NICs. Crucially, it also allowed for teaming network adapters from multiple vendors into a single team. This can assist in mitigating the driver/firmware update problems mentioned above.
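As a minimal sketch (the adapter and team names are examples only, and the Dynamic load balancing algorithm assumes Windows Server 2012 R2), a team could be created with the in-box teaming cmdlets:
# Create a switch-independent team from two physical adapters (names are examples)
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Confirm the team and its members are healthy
Get-NetLbfoTeam -Name "HostTeam"
Get-NetLbfoTeamMember -Team "HostTeam"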
Rather than requiring the use of dedicated NICs for the different configurations (iSCSI, Management, Live Migration & Cluster) Microsoft introduced the concept of converged networks. In a nutshell, the Hyper-V switch in Windows Server 2012 onwards was re-architected and all network cards could now be presented as one to Hyper-V & SCVMM. The following diagram shows how this could be implemented:
This converged networking model can allow for a much more efficient use of the available bandwidth. With server hardware now shipping with 10Gb Ethernet, having a physical 10Gb network adapter dedicated to Live Migration would not be the best use of the available bandwidth.
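A minimal sketch of this model, assuming the "HostTeam" team from the previous example and using illustrative vNIC names, VLAN IDs and weights:
# Create the Hyper-V switch on the team, using weight-based bandwidth management
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "HostTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight
# Create host vNICs for the different traffic types
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
# Tag a vNIC with its VLAN and give it a relative bandwidth weight (example values)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30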
Receive Side Scaling (RSS)
Receive side scaling (RSS) is a network driver technology that enables the efficient distribution of network receive processing across multiple CPUs in multiprocessor systems.
Note: Because hyper-threaded CPUs on the same core processor share the same execution engine, the effect is not the same as having multiple core processors. For this reason, RSS does not use hyper-threaded processors.
When a Windows Server receives network traffic on any network adapter the network traffic is processed on physical CPU core 0. A typical CPU core can process anywhere between ~4-6Gbps of network traffic. While this is not a problem where you only have 1Gbps networking, most modern networks are implemented with a minimum of 10Gbps. When you combine 10Gbps networking with NIC teaming the aggregated bandwidth can increase drastically. To combat this bottleneck you can employ two networking offloads typically available on most enterprise level network adapters, Receive Side Scaling (RSS) and Virtual Machine Queue (VMQ).
RSS is used for physical network traffic, for example when copying large files, such as virtual machine hard disk files (VHDX), to a Hyper-V host or when a virtual machine is being live migrated between hosts. This is network traffic that is destined for the Hyper-V host, not a virtual machine. RSS allows the NIC to use multiple CPU cores to process the incoming network traffic, consequently allowing for a higher network throughput. It is possible to configure specific physical network adapters to use specific CPU cores through PowerShell using the Set-NetAdapterRss cmdlet. By using the Get-NetAdapterRss cmdlet you can see if it is enabled on a NIC:
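For example (the adapter name and processor numbers are illustrative only):
# Check whether RSS is enabled and which processors it is allowed to use
Get-NetAdapterRss -Name "NIC1"
# Enable RSS and restrict it to a specific range of physical cores
Enable-NetAdapterRss -Name "NIC1"
Set-NetAdapterRss -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 4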
We can also open the NIC properties in Device Manager to make sure “Receive Side Scaling” is supported by the NIC.
If you create a NIC team using the in-box Windows NIC teaming functionality, then the RSS functionality of the NICs can be surfaced in the NIC team. This ensures that you have RSS capabilities even if one of the NICs in the team fails.
In Windows Server 2012 R2 Microsoft introduced virtual RSS (vRSS) which can be enabled within a Virtual Machine running Windows Server 2012 R2 on a Hyper-V host running Hyper-V 2012 R2. This allows virtual machines to take advantage of RSS as a physical host would, as network traffic inside a Virtual Machine is still processed by its vCPU core 0. To use vRSS inside a Virtual Machine you require a physical network card that has Virtual Machine Queues (VMQ) enabled.
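Inside the guest operating system vRSS is then enabled on the virtual network adapter in the same way as on a physical NIC; a minimal sketch, where "Ethernet" is an example adapter name inside the guest:
# Run inside the virtual machine
Get-NetAdapterRss -Name "Ethernet"
Enable-NetAdapterRss -Name "Ethernet"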
Virtual Machine Queue (VMQ)
VMQ is similar to RSS, however it works on network traffic that is destined for virtual machines. Just as RSS is capable of spreading incoming network traffic over multiple physical CPU cores, so can VMQ. They are mutually exclusive technologies: if you have VMQ enabled on a physical NIC you cannot have RSS enabled on it at the same time.
Note: If you have network adapters below 10Gbps, then VMQ is disabled by default. To enable VMQ on these adapters you will need to create a new Registry entry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\VMSMP\Parameters\BelowTenGigVmqEnabled = 1
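If you do need to create this value, a quick way is with PowerShell (run as administrator):
# Allow VMQ on adapters below 10Gbps - generally not recommended, see below
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\services\VMSMP\Parameters" -Name "BelowTenGigVmqEnabled" -PropertyType DWord -Value 1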
It is strongly recommended that you do not enable VMQ on adapters that are running at less than 10Gbps. In some circumstances using VMQ on adapters running at less than 10Gbps can introduce problems with Virtual Machine networking, and it is typically not required.
If you create a NIC team using the in-box Windows NIC teaming functionality then the VMQ functionality of the NICs can be enabled at the NIC team level. This then ensures that you have VMQ capabilities even if one of the NICs in the team fails.
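As with RSS, VMQ support and processor assignment can be inspected and tuned per adapter; the adapter name and processor numbers below are examples only:
# Check VMQ support and the queues currently allocated to virtual machines
Get-NetAdapterVmq -Name "NIC1"
Get-NetAdapterVmqQueue -Name "NIC1"
# Give VMQ its own range of physical cores so it does not overlap with the cores used for RSS
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 8 -MaxProcessors 8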
When a Hyper-V switch is attached to a physical NIC it will utilize VMQ and disable RSS, as such if you are using the converged networking model then the vNICs attached to the virtual switch are unable to take advantage of RSS.
Virtual Network Adapter Capabilities
In addition to improving the physical network capabilities, Microsoft improved the capabilities of virtual network adapters. For virtual machines you can enable the following capabilities on a per Virtual Network Adapter basis:
VMQ: VMQ can be enabled or disabled for a virtual machine if required.
MAC Address Spoofing: This allows you to specify whether a Virtual Machine is permitted to change its source MAC address for outgoing packets. This is required when you are using Windows Network Load Balancing inside a Virtual Machine.
Router Guard: This prevents a Virtual Machine from performing router advertisement and redirection messages.
DHCP Guard: This prevents a Virtual Machine from sending DHCP server messages. Do not enable this setting on Virtual Machines that are authorized DHCP servers.
IPsec Task Offload (IPsecTO): This allows virtual machines to use the IPsecTO capabilities of a physical network adapter to offload the encryption operations from the CPU to the physical NIC. Not all physical adapters are IPsecTO capable so consult your NIC vendor for specifications.
Single Root – Input/Output Virtualization (SR-IOV): SR-IOV enables network traffic to bypass the Hyper-V switch. It uses a feature of an SR-IOV capable NIC called a Physical Function (PF); the Virtual Machine is given a Virtual Function (VF) that is mapped to the host’s PF. As SR-IOV enabled traffic bypasses the Hyper-V switch you cannot apply most other capabilities to the virtual network adapter.
IEEE Priority Tagging: This allows for traffic originating from a Virtual Machine to have IEEE 802.1p QoS tagging applied and for those tags to remain after the traffic leaves the Hyper-V switch.
Guest Teaming: This allows for virtual network adapters to be teamed inside a guest virtual machine. This is typically used in conjunction with SR-IOV enabled virtual network adapters.
Bandwidth Management: A virtual network adapter can have one of two types of bandwidth management applied. You can define hard limits for an adapter, where you specify the minimum and maximum amount of bandwidth, in Mbps, the adapter can utilize. The other option is to use bandwidth weighting, where each adapter is given a relative weight and receives a proportional share of the available bandwidth when the physical link is under contention; both approaches appear in the sketch below.
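Several of these capabilities can also be set per virtual network adapter with the Hyper-V PowerShell module; the VM name and values below are examples only, and the bandwidth settings assume the switch was created with the matching bandwidth mode:
# Enable MAC address spoofing and protect against rogue DHCP/router traffic
Set-VMNetworkAdapter -VMName "NLB01" -MacAddressSpoofing On -DhcpGuard On -RouterGuard On
# Allow this adapter to be teamed inside the guest
Set-VMNetworkAdapter -VMName "NLB01" -AllowTeaming On
# Weight-based bandwidth management (relative weight between 0 and 100)
Set-VMNetworkAdapter -VMName "NLB01" -MinimumBandwidthWeight 20
# Or cap the adapter with an absolute maximum, specified in bits per second (roughly 1Gbps here)
Set-VMNetworkAdapter -VMName "NLB01" -MaximumBandwidth 1GB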
Logical Networks
The Logical Networks piece of VMM is vital to ensure a successful deployment. Logical Networks are the first step in defining a model for your physical networking (also referred to as Fabric networks).
Logical Networks in VMM should be used to define a purpose. Each Logical Network should contain at least one Logical Network Site and should contain the subnets and VLANs you define.
Each Logical Network Site is associated with one or more Host Groups in VMM.
For example, you should have at least four Logical Networks defined: Host Management, Host Clustering, Host Live Migration and Virtual Machines, as mentioned above.
The diagram below shows how a Logical Network and its associated Logical Network Sites could be configured. Each Logical Network Site has more than one VLAN available, with the Contoso-New York site having three VLANs available.
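A minimal sketch of modelling this with the VMM PowerShell module (the names, VLAN ID and subnet are examples only):
# Create the Logical Network and find the host group its site will be scoped to
$ln = New-SCLogicalNetwork -Name "Host Management"
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"
# Define the subnet/VLAN pair available in this site
$subnetVlan = New-SCSubnetVLan -Subnet "192.168.0.0/24" -VLanID 10
# Create the Logical Network Site (Logical Network Definition) for that host group
New-SCLogicalNetworkDefinition -Name "Contoso - New York" -LogicalNetwork $ln -VMHostGroup $hostGroup -SubnetVLan $subnetVlan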
IP Pools
Each Logical Network Site can have an IP Pool associated with it, which allows VMM to assign IP addresses from the pool to resources. It is possible to have multiple IP Pools per Logical Network Site, however care should be taken when managing these. You cannot have overlapping IP Pools; for example, if you had a pool that contained the IP addresses between 10.0.0.100 and 10.0.0.200, you could not have another pool associated with the same Logical Network Site that contained IP addresses between 10.0.0.190 and 10.0.0.220. If you need to change the parameters of the IP Pools, such as changing the DNS servers for a Logical Network Site, then you must manually change all of the IP Pools for that site.
As the IP Pool is attached to a Logical Network Site VMM already knows the IP subnet for the IP Pool but is unaware of other required information. When creating the IP Pool you need to specify the starting IP address within a pool and an ending IP address. It may be that you only have 30 IP addresses within the subnet for use by VMM as the remainder of the subnet is in use by other services such as Active Directory Domain Controllers. It is recommended that you provision new VLANs and subnets for use by Hyper-V and VMM to ensure there is no overlap or configuration errors.
Additional items that can be specified within an IP Pool are:
- IP addresses to be reserved for use as Virtual IPs by Load Balancers.
- IP addresses to be excluded from allocation within the given range. These IP addresses could already be in use or could be used for other configuration items such as Cluster IP addresses for guest clusters.
- Gateway IP addresses and metrics.
- DNS server(s) and DNS suffixes.
- WINS server(s).
IP Pools are very similar to DHCP Scopes in their capability however one of the key differentiating factors is that an IP address issued by VMM from an IP Pool is statically assigned to a virtual machine when it is deployed. This is not a reservation like a DHCP reservation, it is a statically configured IP address within the virtual machine, be it a Microsoft operating system or Linux.
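A sketch of creating an IP Pool for the site defined earlier; the names and addresses are examples, and the parameter names assume the VMM 2012 R2 PowerShell module:
# Look up the Logical Network Site the pool will belong to
$site = Get-SCLogicalNetworkDefinition -Name "Contoso - New York"
# Default gateway handed out with each address (automatic metric)
$gateway = New-SCDefaultGateway -IPAddress "192.168.0.1" -Automatic
# Create the pool, restricting the range to the addresses VMM is allowed to use
New-SCStaticIPAddressPool -Name "New York Management Pool" -LogicalNetworkDefinition $site -Subnet "192.168.0.0/24" -IPAddressRangeStart "192.168.0.100" -IPAddressRangeEnd "192.168.0.200" -DefaultGateway $gateway -DNSServer "192.168.0.10","192.168.0.11" -DNSSuffix "contoso.com"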
Hyper-V Network Virtualization
If you are going to use Hyper-V Network Virtualization, referred to as HNV or NVGRE, you will need an additional dedicated Logical Network for your Provider Address layer. This is the network that NVGRE packets will be encapsulated within when they travel across the physical wire between Hyper-V hosts. VMM must have an IP Pool for the Provider Address Logical Network Site(s). The topic Overview of Hyper-V Network Virtualization gives more information on HNV and the optional lesson, Software Defined Networking, gives more detail.
VLANs
VMM networking supports two types of VLAN:
- Traditional VLAN: This is the standard IEEE 802.1Q implementation. Each network is tagged with a VLAN ID between 0 and 4095.
- Private VLAN (PVLAN): This is an extension of the traditional VLAN where an additional layer of Layer 2 isolation is available.
VLANs
When you create a Logical Network Site within VMM you only need to declare the VLAN ID, however without also declaring a subnet you cannot create an IP Pool for the Logical Network Site. When creating a Logical Network Site it is recommended that both parameters are completed:
- VLAN ID: This is between 0 and 4095. If you declare a VLAN of 0 then VMM will not apply a VLAN to that subnet.
- Subnet: This is the subnet that is within the declared VLAN. This is given in CIDR notation; for example, if your Host Management network has a network ID of 192.168.0.0 and a subnet mask of 255.255.255.0 then this would be entered as 192.168.0.0/24.
PVLANs
Private virtual LANs (PVLANs) are regularly used by service providers to circumvent the scale limitations of VLANs, as you can only have a theoretical maximum of 4096 VLANs per network. PVLANs allow network administrators to split a single VLAN into a number of discrete and isolated sub-networks, which can then be assigned to separate customers, or tenants. PVLANs share the IP subnet that was assigned to the parent VLAN, but they require a router to communicate with other PVLANs and with resources on any other network.
A PVLAN consists of a primary VLAN ID and a secondary VLAN ID and each port, whether that be a physical or virtual port, can be configured in one of these modes:
- Promiscuous: Any network port, physical or virtual, that is set to be Promiscuous can communicate with all interfaces, physical or virtual, including isolated and community within a PVLAN.
- Isolated: Any virtual machine that is set to be Isolated has a complete Layer 2 separation from all other interfaces, physical or virtual. PVLANs prevent all traffic to an isolated port, except to and from Promiscuous ports.
- Community: Any virtual machine that uses a Community PVLAN can communicate with virtual machines that are in the same PVLAN (i.e. have the same Primary and Secondary VLAN IDs) and are set to community or promiscuous mode. Each Community is isolated from other communities.
When a port is in Promiscuous mode it is on the primary VLAN and communicates with resources on the primary and secondary VLANs. A Promiscuous port would typically be found on a router, firewall or
other gateway device.
You can only have one Isolated PVLAN per primary VLAN, so consideration should be given to ensure that a different primary VLAN ID is used in each Logical Network Site; however, it is recommended that you use the same secondary VLAN ID for each Isolated PVLAN. For example, if you have two Logical Network Sites where site A has a primary VLAN ID of 100 and site B has a primary VLAN ID of 200, you should use the same secondary VLAN ID in both sites for consistency across your network infrastructure. A typical use for an Isolated mode PVLAN would be for a webserver where isolation is required. If the webserver were compromised then it would not be able to communicate with other servers in the PVLAN.
When you configure a PVLAN in Community mode each secondary VLAN ID is a discrete community. Each discrete community is allowed to communicate within itself and with Promiscuous ports that have the same primary VLAN ID.
Hyper-V supports all three types of PVLAN, however VMM can only create Logical Networks and Logical Network Sites based on the Isolated PVLAN mode. If your virtual machines require PVLANs in either Promiscuous or Community mode then these would need to be configured outside of VMM.
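For reference, the three PVLAN modes map to the Hyper-V PowerShell module as shown below; the VM names and VLAN IDs are examples only, and only the Isolated form can be modelled as a VMM Logical Network Site:
# Isolated: complete Layer 2 separation within primary VLAN 100
Set-VMNetworkAdapterVlan -VMName "Web01" -Isolated -PrimaryVlanId 100 -SecondaryVlanId 200
# Community: members of secondary VLAN 201 can talk to each other
Set-VMNetworkAdapterVlan -VMName "App01" -Community -PrimaryVlanId 100 -SecondaryVlanId 201
# Promiscuous: can talk to every secondary VLAN in the list
Set-VMNetworkAdapterVlan -VMName "Router01" -Promiscuous -PrimaryVlanId 100 -SecondaryVlanIdList "200-400"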
Logical Switches
In Hyper-V there is only one switch, the Hyper-V Extensible Switch. Unlike other hypervisors the switch is not replaceable, it is extensible. The switch can have extra capabilities added to it by deploying switch extensions. There are a variety of vendors offering switch extensions including Cisco with their Nexus 1000v and 5nine with their Cloud Security extension. By being extensible you can create an extremely capable virtual switch that implements a variety of technologies without having to compromise on functionality.
A Logical Switch is VMM’s management component that combines the physical characteristics you have defined in an Uplink Port Profile with the characteristics you have defined in Virtual Port Profiles. It can also have virtual switch extensions defined. By combining these into a Logical Switch it is possible to create a reusable configuration that offers consistency between deployments. When you deploy a Logical Switch you choose which Uplink Port Profile to use with that deployment; as VMM knows how you have configured the Logical Switch it will create NIC teams if required and apply the required configuration.
Standard Switches
VMM can create standard Hyper-V switches that are not Logical Switch implementations, however there are several limitations:
- You can only use a single physical NIC with a Standard Switch deployment directly through VMM. You can attach a Standard Switch to a NIC team through VMM, however that NIC team would have to be created outside of VMM.
- You cannot create a completely Converged Network implementation on a Standard Switch through VMM. The most you can achieve from this type of deployment is a management NIC. Any additional host level NICs you require will have to be created manually on the Hyper-V host.
- Logical Network Sites have to be manually assigned to the network interface connected to the Standard Switch.
For example, if you had ten Hyper-V hosts in a cluster you could create a single Logical Switch that characterizes all of the networking available in the cluster; you could then deploy the Logical Switch to each host knowing that the configuration of the switch will be identical across each host in the cluster. Using a Standard Switch would require considerable administrative effort to ensure all of the hosts are configured in the same way.
As the Logical Switches contain all of the information from the Uplink Port Profile and Virtual Port Profiles, VMM can determine if a Logical Switch is capable of servicing a virtual machine. For example, if you had a Virtual Port Profile called “VPP-NLB” that was assigned to a Virtual Machine Template, then any host you wish to deploy a virtual machine to using that Virtual Port Profile must have “VPP-NLB” declared in its Logical Switch implementation. VMM will prevent the deployment of a virtual machine where the virtual network adapter’s profile is absent from the target Logical Switch.
VM Networks
In terms of the overall network architecture in VMM the VM Network is the final component. The VM Network is the connection between any given virtual network adapter and a Logical Network. Every virtual network adapter must be connected to a VM Network in VMM, including virtual network adapters that are part of a Converged Network. The characteristics of a VM Network are defined by the Logical Network they are hosted in.
The Logical Network and VM Network Relationship
It is possible to have multiple VM Networks connected to a single Logical Network as the Logical Network dictates the level of Isolation required. When you create the Logical Network you decide on the Isolation level that is required for virtual machines connected to that Logical Network.
Hyper-V Network Virtualization and VM Networks
Until now you have learned how VMM represents connections between virtual machines and physical networks in the traditional networking sense. Hyper-V Network Virtualization (HNV) is a way of creating isolated networks within a Logical Network. These isolated networks are known as Tenant networks. The isolated networks are totally contained within the physical Fabric network using the Network Virtualization using Generic Routing Encapsulation (NVGRE) protocol.
Tenant networks are VM Networks where HNV has been enabled in the hosting Logical Network; the lesson Logical Networks discusses Logical Networks that can be used for HNV. A Tenant network can have multiple subnets defined with whichever address spaces they require. The physical networking layer is unaware of the Tenant networks defined within. The next topic Overview of Hyper-V Network Virtualization gives a more complete high-level view of HNV with the optional lesson, Software Defined Networking, going into detail about HNV.
Overview of Hyper-V Network Virtualization
Hyper-V Network Virtualization is the concept of Software Defined Networks within Hyper-V and VMM. It is referred to as HNV or Network Virtualization using Generic Routing Encapsulation (NVGRE).
The protocol Microsoft has created is called NVGRE, which involves encapsulating Tenant network traffic within a physical Provider Network. The Provider Network contains a unique physical IP address space that is used by NVGRE.
It is possible for Provider Networks to span multiple Hyper-V clusters and even datacenters. As the Tenant traffic is encapsulated, all that is seen on the physical network is the Provider Network, seen as regular network traffic to networking infrastructure such as switches and routers. Provider Networks are Logical Networks within a company’s network infrastructure and therefore can have multiple Logical Network Sites defined within a single Logical Network. If a Provider Network spans multiple datacenters it is possible for Tenant networks to span multiple datacenters.
Routing Domains
A Routing Domain typically consists of a collection of IP networks where IP addresses do not overlap. Typically they are administered by a single source, for example by an enterprise. Routing Domains are usually terminated at a boundary of responsibility or security. For example, your company’s network is its own Routing Domain, however within that overarching Routing Domain there could be several others. If you have a perimeter network, that will be its own Routing Domain, and your Production network will be another. If you have Development and Testing networks these are usually their own Routing Domains.
The diagram below shows a high-level view of Routing Domains. Each area within this network is its own Routing Domain. Each of these Routing Domains could be combined and represent a Routing Domain for the organization.
Isolation
Within HNV a Tenant Network forms the isolation boundary (or Routing Domain) and subnets within the Tenant Network are broadcast boundaries. Each Tenant Network, which is created in VMM through the VM Networks components, is a single routing domain and is given a Routing Domain ID (RDID) which is a unique GUID issued by the network management software, in this case VMM. Each Tenant Network can have one or more subnets and each subnet is given a unique Virtual Subnet ID (VSID). Within a Logical Network, which can span datacenters, each VSID must be unique; the range of possible VSIDs is 4096 to (2^24) -2 which is over 16.7 million possible unique VSIDs within a single Logical Network. This gives HNV the ability to scale far beyond traditional VLANs and Private VLANs. It is possible to have more than one Logical Network configured for HNV within an organization, but this is not typical.
The diagram below shows a possible configuration for HNV. The Tenant Networks, Contoso – Tenant A and Contoso – Tenant B, are contained within the Provider Address network:
When you assign an IP Pool to the Provider Address Logical Network Sites you only need to include a start and end IP address. If you need to route your Provider Address network then you will need to include a Gateway IP Address. This Gateway IP Address is not used by normal Windows traffic, such as Management or Backup traffic, and is for the sole use of the Provider Address network. When viewing the routing table on a Hyper-V host where HNV is in use, this routing information does not appear. To view it, you will need to use the Get-NetVirtualizationProviderRoute cmdlet:
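For example, run on a Hyper-V host where HNV is active:
# Provider Address routes are held by the network virtualization module, not the normal IP routing table
Get-NetVirtualizationProviderRoute
# The Provider Addresses assigned to this host can be listed in the same way
Get-NetVirtualizationProviderAddress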
Routing inside a HNV enabled VM Network
When you create multiple subnets in a HNV enabled VM Network, any device attached to the VM Network can route between the subnets contained within it. For example, if you have a subnet with the IP subnet of 10.0.0.0/24 and another subnet 192.168.1.0/24, then devices in either subnet can communicate without restriction.
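A sketch of creating such a VM Network through the VMM PowerShell module, assuming an HNV-enabled Logical Network named "Provider Network" (all names are examples only):
# Create the isolated Tenant network on the HNV-enabled Logical Network
$logicalNet = Get-SCLogicalNetwork -Name "Provider Network"
$vmNetwork = New-SCVMNetwork -Name "Contoso - Tenant A" -LogicalNetwork $logicalNet -IsolationType "WindowsNetworkVirtualization"
# Add the two routable subnets from the example above
$subnet1 = New-SCSubnetVLan -Subnet "10.0.0.0/24"
$subnet2 = New-SCSubnetVLan -Subnet "192.168.1.0/24"
New-SCVMSubnet -Name "Tenant A - Subnet 1" -VMNetwork $vmNetwork -SubnetVLan $subnet1
New-SCVMSubnet -Name "Tenant A - Subnet 2" -VMNetwork $vmNetwork -SubnetVLan $subnet2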
When a VM Network subnet is created, the first IP address in the subnet is allocated as the default gateway IP address, in this case 10.0.0.1 and 192.168.1.1. These default gateway IP addresses will not allow you to break out of the VM Network without configuring a connection to a HNV Gateway, but they will allow you to route between subnets within the VM Network. The Software Defined Networking topic later in this module goes into more detail about HNV Gateways.
Routing outside of a HNV enabled VM Network
When a Tenant wants to leave its isolated network the traffic must be de-capsulated from the Provider Network. This task is undertaken by specially configured Windows Server Gateways, or networking appliances. Windows Server 2012 R2 ships in-box with the Remote Access role and part of this role is a multi-tenant HNV Gateway.
You can add a Windows Server with this role to VMM and configure it for use by HNV. Once you have added the server to VMM it is configured to be a multi-tenant gateway, which prohibits you from changing the configuration through the Routing and Remote Access MMC. While it is possible to change the configuration of the gateway through PowerShell, it is not recommended. VMM will make all of the required configuration changes and keep them up to date.
Routing options
There are two routing options available for HNV gateways:
- Network Address Translation.
- Direct Routing.
Network Address Translation (NAT)
Each VM Network is assigned an IP address from an IP Pool that is associated with the externally facing NIC on the HNV Gateway and this IP address is used as the public IP address for the VM Network. It is possible to configure port forwarding rules to allow traffic into the VM Network via this IP address. As NAT is in use, the internal IP addresses and subnets of the VM Network are not known outside of the VM Network. This creates its own Routing Domain with total isolation for the virtual machines within the VM network.
It is possible to configure the Gateway to provide site-to-site VPN support, and as such you can create a secure connection from a tenant network to another network using industry standard VPNs. If you connect a VM Network behind a NAT Gateway to another network you create a single Routing Domain which incorporates both networks, and as such the network address spaces within each network should be unique.
When using NAT-based routing for your VM Network it is possible to connect, through the site-to-site VPN, a Tenant VM Network in your datacenter to a Virtual Network in Azure using an Azure Virtual Network Gateway.
Direct Routing
This allows the internal IP addresses of the VM Network to be seen on the networks on the external side of the HNV Gateway. For example, if you have a subnet of 10.10.10.0/24 within your VM Network and a subnet of 192.168.1.0/24 on the external side, then traffic on the external side would see the originating IP address within the 10.10.10.0/24 subnet.
When a Gateway is configured in this mode it has a one-to-one relationship with a VM Network and as such is exclusively used by a single VM Network; however, a Tenant can still have as many subnets within their VM Network as required. In addition, the networks inside the VM Network and the networks outside the gateway create a single routing domain which incorporates both networks.
Conclusion
I hope this makes networking in Hyper-V and SCVMM a little easier to understand.