Free Up Disk Space on Your Mac Hard Drive

Your Disk is Almost Full
Even in 2018, MacBooks still have tiny hard drives that fill up quickly. Luckily there are quick and easy ways to free up space on your hard drive. Here’s how to clean up your Mac and reclaim some drive space.
You can obviously free up disk space by simply doing a cursory find-and-delete for big files and other things that you’ve downloaded, but realistically that’s only going to get you so far. Most of the wasted space on your Mac will only be reclaimed if you look a lot deeper: cleaning out language files, removing duplicate files, deleting attachments, clearing temporary files, and emptying all of the Trash cans.
If you fail to keep your Mac’s hard drive clean, you’re eventually going to get the dreaded “Your disk is almost full” error, so you may as well start now and clear up some space.

How to Clean Up Your Mac the Easy Way

If you don’t feel like spending a bunch of time to find and clean things up manually, you can use CleanMyMac 3 to get rid of temporary files, clean up extra language files, uninstall applications, get rid of extra files left behind by application uninstallations, find and get rid of big attachments stored in Mail, and a whole lot more.
It basically has all the features of the cleaning applications we talk about in this article, but in a single app, with the exception of finding duplicate files, which you’ll still want to use Gemini 2 for. Luckily, both apps come from the same vendor, and you can get them as a bundle.
And of course, there’s a free trial that shows where your free space has gone and lets you clean up some of it for free.
There’s a single button to clean up everything, but we’d recommend going into the details first so you know exactly what’s being removed.
Note: before running any cleaning tool, you should make sure that all of your important data is backed up, just in case.

Find and Remove Duplicate Files

One of the trickiest sources of wasted drive space is duplicate files littering your computer, especially if you’ve been using it for a long time. Luckily, there are great apps like Gemini 2 that can find and remove duplicate files with a really slick and easy interface.
You can buy it on the App Store if you want (Apple featured it as an Editors’ Choice), but you’re probably better off getting it from the vendor’s website, which offers a free trial.
There are a lot of other choices on the App Store and elsewhere, but we’ve used this one and had good results.

Empty Your Trash Cans

The Trash on a Mac is equivalent to the Recycle Bin on Windows. Rather than permanently deleting files from within the Finder, they are sent to your Trash so you can restore them later if you change your mind. To completely remove these files and free up the space they require, you’ll have to empty your Trash. But Macs can actually have multiple trash cans, so you may need to empty several.
To empty your user account’s main trash can, Ctrl-click or right-click the Trash icon at the bottom-right corner of the dock and select Empty Trash. This will delete all the files you sent to the trash from the Finder.
iPhoto, iMovie, and Mail all have their own trash cans. If you’ve deleted media files from within these applications, you’ll need to empty their trash cans, too. For example, if you use iPhoto to manage your pictures and delete them in iPhoto, you’ll have to clear the iPhoto trash to remove them from your hard drive. To do this, just Ctrl-click or right-click the Trash option in that specific application and select Empty Trash.

Uninstall Applications You Don’t Use

The applications you have installed on your Mac are taking up space, of course. You should uninstall them if you don’t need them—just open a Finder window, select Applications in the sidebar, and drag-and-drop the application’s icon to the trash can on your dock. Some of these applications can be taking up a ton of space.
To find out which applications are using up the most space, open a Finder window and select Applications. Click the “Show items in a list” icon on the toolbar and then click the Size heading to sort your installed applications by size.

Clean Up the Huge iTunes Backups of Your iPhone or iPad

If you’ve backed up your iPhone or iPad to your Mac using iTunes, you’ve probably got a bunch of massive backup files that are taking up a shocking amount of space. We were able to clear up over 200 GB of space by finding and deleting some of these backup files.
To delete them manually, open the following path in the Finder to see the backup folders, which will have random names, and delete the folders inside. You’ll probably want to close iTunes before you do that.
 ~/Library/Application Support/MobileSync/Backup
The easier (and much safer) way to delete them is to use CleanMyMac, which translates those confusing folders into actual backup names so you can decide which backup you actually want to delete. Just check the things you want to remove, and then click the Clean button.

Clear Out Temporary Files

Your Mac’s hard drive probably has temporary files you don’t need. These files often take up disk space for no good reason. macOS tries to remove temporary files automatically, but a dedicated application will likely find more files to clean up. Cleaning temporary files won’t necessarily speed up your Mac, but it will free up some of that precious disk space.
Your web browser has a built-in option to clear out browsing data that you can use to quickly clear up a bit of space, but it’s not necessarily a great idea. These caches contain files from web pages so your browser can load those pages faster in the future. The browser automatically rebuilds the cache as you browse, so clearing it just slows down page loads until the cache grows back. Each browser limits its cache to a maximum amount of disk space, anyway.
There are a lot of other temporary files on your system. To see them, open the Finder, choose Go -> Go to Folder from the menu, and enter ~/Library/Caches. This opens a folder containing a ton of subfolders, which you can select and delete manually if you choose.
You can clean up temporary files more easily, and much more safely, by using CleanMyMac. Just open it up, run a scan, and then go into the System Junk section to identify all of the cache files and other things that can be cleaned up. Once you’ve selected what you do or don’t want to clean, just click the Clean button.
One of the things that makes a utility like CleanMyMac so great is that it converts a lot of those confusing folder names into the names of the actual applications, so you can see which temporary files you’re actually deleting.
The thing about temporary files, of course, is that most of them will come back after you use your Mac for a while. So deleting temporary files frees up space, but only for a while.

Check Your Disk to See What is Taking Up Space and Find Large Files

To free up disk space, it’s helpful to know exactly what is using disk space on your Mac. A hard disk analysis tool like Disk Inventory X will scan your Mac’s hard disk and display which folders and files are using up the most space. You can then delete these space hogs to free up space.
If you care about these files, you may want to move them to external media — for example, if you have large video files, you may want to store them on an external hard drive rather than on your Mac.
Bear in mind that you don’t want to delete any important system files. Your personal files are located under /Users/name, and these are the files you’ll want to focus on.

Remove Language Files

Mac applications come with language files for every language they support. You can switch your Mac’s system language and start using the applications in that language immediately. However, you probably just use a single language on your Mac, so those language files are just using hundreds of megabytes of space for no good reason. If you’re trying to squeeze as many files as you can onto that 64 GB MacBook Air, that extra storage space can be useful.
To remove the extra language files, you can use CleanMyMac, as we’ve mentioned earlier (It’s under System Junk -> Language Files). There’s also another tool called Monolingual that can delete these as well, though it’s yet another tool to download for a very specific use. Removing language files is only necessary if you really want the space—those language files aren’t slowing you down, so keeping them is no problem if you have a big hard disk with more than enough free space.

Clean Up Big Attachments in Mac Mail

If you’re using the built-in Mail application in macOS and you’ve had the same email account for a long time, there’s a good chance that large email attachments are taking up a ton of space on your drive—sometimes many gigabytes worth, so this is a good place to check while cleaning up your drive.
You can change Mail’s settings so it doesn’t automatically download attachments, or run a cleanup tool to get rid of them. Go into Mail -> Preferences -> Accounts -> Account Information and change the drop-down for “Download attachments” to either “Recent” or “None”. If you’re using Gmail, you can also limit how many messages are synced over IMAP so that only the last few thousand are kept instead of everything.
Changing this setting will help Mail not use up as much space going forward, but it doesn’t solve the problem of attachments from emails that have already been downloaded.
If you want to remove those attachments, you’re going to need to follow a very annoying manual process:
  1. Open up Mail, and click on the folder that you want to find and remove attachments for.
  2. Use the Sort by Size option to find the biggest messages.
  3. Click on the message, and choose Message -> Remove Attachments from the menu bar. This won’t delete the attachment from the mail server if you’re using IMAP.
  4. Repeat for all the messages that you want to delete attachments for.
Note: if you are using POP for your email, do not delete attachments unless you really don’t want them anymore, because they will be gone forever. If you’re using IMAP, which any modern email service like Gmail, Yahoo, or Hotmail uses, the messages and attachments will stay on the server.
Cleaning Up Email Attachments the Easy Way
If you want to clean up and delete old attachments automatically, there’s only one good solution that we know of, and that’s CleanMyMac. You can run a scan, head to Mail Attachments, and see all of the attachments that can be deleted. Click Clean, and your hard drive will be free of them. Those attachments will still be on your email server, assuming you’re using IMAP, so you can delete everything without worrying too much.
If you’re worried, you can also uncheck the box next to “All Files” and then manually select all of the files that you want to delete.

Clean Up Your Downloads Folder

This tip is so obvious that you’d think we wouldn’t need to include it, but it’s something everybody forgets to deal with: your Downloads folder is so often full of huge files that you don’t need, and it’s rarely something you think about.
Just open up Finder and head into your Downloads folder and start deleting everything you don’t need. You can sort by file size to quickly delete the biggest offenders, but don’t forget to look at the folders—remember that every time you open up an archive file, it automatically unzips into a folder. And those folders sit there looking innocuous but taking up tons of space on your drive.

Use the Storage Tools in macOS High Sierra

The latest versions of macOS (Sierra and High Sierra) include a tool to help you clean the junk out of your Mac: open the Apple menu, choose “About This Mac”, and then flip over to the Storage tab.
Once you are there, click Manage to see the new settings, and enable the ones that make sense to you.
  • Store in iCloud – this new feature allows you to store your Desktop, Documents, Photos, and videos in iCloud and Apple will automatically free up local space as needed. If you’re on a slow internet connection, you probably don’t want to enable this.
  • Optimize Storage – the name doesn’t really match the feature, which basically deletes purchased iTunes movies and TV shows after you’ve watched them to keep them from cluttering up your drive. Since movies, especially in HD format, are extremely large files, this can help keep your Mac from running out of space. You can, of course, download them again any time if you’ve purchased them.
  • Empty Trash Automatically – this is fairly simple: if you turn it on, items that have been in the Trash for more than 30 days are deleted automatically.
  • Reduce Clutter – this will help you find the biggest files on your hard drive and delete them.
It’s a little clunky and not as easy to use as some of the third-party tools, but it does work.

Be sure to also remove other files you don’t need. For example, you can delete downloaded .dmg files after you’ve installed the applications inside them. Like program installers on Windows, they’re useless after the program is installed. Check your Downloads folder in the Finder and delete any downloaded files you don’t need anymore.

How to Configure Storage Replication Using Windows Server 2016 – Part 1

Storage Replica is a new feature introduced in Windows Server 2016 that enables storage-agnostic, block-level, synchronous replication between servers for disaster recovery, as well as stretching of a failover cluster for high availability. Synchronous replication enables mirroring of data in physical sites with crash-consistent volumes ensuring zero data loss at the file system level. Asynchronous replication allows site extension beyond metropolitan ranges with the possibility of data loss.
Storage Replica is volume-based and uses SMB 3.1.1. It can use any fixed disk storage, as well as any storage fabric. Storage Replica does not require a cluster and can be managed using Failover Cluster Manager (FCM), PowerShell, Windows Management Instrumentation (WMI), and Azure Site Recovery (planned for the future).
This blog post is divided into two parts: in the first part, I will show you how to implement Windows Volume Replication (server-to-server), and in the second part, I will show you how to implement a Stretch Cluster with Volume Replication.
At the time of writing this article, Storage Replica supports the following scenarios:
• Server-to-server storage replication using Storage Replica
• Storage replication in a stretch cluster using Storage Replica
• Cluster-to-cluster storage replication using Storage Replica
Implement Windows Volume Replication
We will configure Windows Volume Replication (Server-to-server) by implementing the following steps:
• Step 1: Create a replication partnership.
• Step 2: Monitor replication performance.
  • Step 3: Reverse replication (fail over) to the replica target.
• Step 4: Remove volume replication.
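For reference, here is a hedged PowerShell sketch of what those four steps can look like with the Storage Replica cmdlets. The server names (SR-SRV01 and SR-SRV02), replication group names (RG01 and RG02), and the data/log drive letters are placeholder lab values, not details taken from this walkthrough.

# Prerequisites: install the Storage Replica feature on both servers.
Install-WindowsFeature -ComputerName SR-SRV01 -Name Storage-Replica,FS-FileServer -IncludeManagementTools -Restart
Install-WindowsFeature -ComputerName SR-SRV02 -Name Storage-Replica,FS-FileServer -IncludeManagementTools -Restart

# Recommended: validate the topology first (produces an HTML report).
Test-SRTopology -SourceComputerName SR-SRV01 -SourceVolumeName D: -SourceLogVolumeName E: -DestinationComputerName SR-SRV02 -DestinationVolumeName D: -DestinationLogVolumeName E: -DurationInMinutes 30 -ResultPath C:\Temp

# Step 1: create the replication partnership (synchronous by default).
New-SRPartnership -SourceComputerName SR-SRV01 -SourceRGName RG01 -SourceVolumeName D: -SourceLogVolumeName E: -DestinationComputerName SR-SRV02 -DestinationRGName RG02 -DestinationVolumeName D: -DestinationLogVolumeName E:

# Step 2: monitor replication progress from the destination server.
(Get-SRGroup -ComputerName SR-SRV02 -Name RG02).Replicas

# Step 3: reverse the direction of replication (fail over to the replica target).
Set-SRPartnership -NewSourceComputerName SR-SRV02 -SourceRGName RG02 -DestinationComputerName SR-SRV01 -DestinationRGName RG01

# Step 4: remove volume replication when it is no longer required.
Get-SRPartnership | Remove-SRPartnership
Get-SRGroup | Remove-SRGroup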

How to Configure Storage Replication Using Windows Server 2016 – Part 2

Warning: This article is written with information related to Windows Server 2016 Technical Preview 4.
In part one of this multi-part blog on How to Configure Storage Replication in Windows Server 2016, we covered an introduction to Storage Replica, which is a new feature introduced in Windows Server 2016, and we walked step by step through the implementation of Windows Volume Replication (server-to-server). In this follow-up post, we are going to cover the implementation of volume replication with a stretch cluster. This type of cluster uses asymmetric storage: two sites, two sets of shared storage, and volume replication to ensure that data is available to all nodes in the cluster.
Stretch clusters are typically used to provide high availability across sites; however, delivering business continuity involves more than just high availability and disaster recovery – think disaster preparedness.

Setting Up a WS2012 R2 Windows Deployment Server (WDS) for Pre-Boot Execution Environment (PXE).

In this post I want to quickly go through my experience setting up a Windows Deployment Server for PXE boot (Pre-Boot Execution Environment) for my lab servers. The platform is Windows Server 2012 R2. I outline the steps as follows:
1) Open a PowerShell console and use the Install-WindowsFeature cmdlet to install the Windows Deployment Services role, as indicated in the screenshot:
As indicated in the screenshot, a restart is not required.
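If you prefer to copy and paste rather than read it off a screenshot, a minimal sketch of that install step looks like this (run from an elevated PowerShell console on the deployment server):

# Install the Windows Deployment Services role plus its management tools.
Install-WindowsFeature -Name WDS -IncludeManagementTools

# Confirm the role (and its Deployment/Transport role services) installed correctly.
Get-WindowsFeature -Name WDS*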
2) Copy the specific Operating System ISO file to any location on the Deployment Server.
3) Open the Windows Deployment Services console, expand the Servers tree, right-click the server and follow the instructions to configure the server:
Click Next and select the option to integrate with Active Directory. I selected this option simply because it suits my environment.

Step-by-Step DirectAccess Configuration on Windows Server 2012 R2.

Windows Server DirectAccess is an awesome and exciting feature. It’s a Windows Server role service that enables Windows domain-joined machines to have an always-on, seamless connection to the corporate infrastructure securely over the internet without the need for a traditional Virtual Private Network (VPN). The DirectAccess infrastructure has a lot of moving parts. Any dysfunction or misconfiguration in one of its components can halt and disrupt the entire DirectAccess deployment. Some key points:
a) It hugely simplifies enforcing password expiration and change Group Policy for telecommuting users.
b) It eliminates the need for end-user-initiated VPN (or other) connections, making the end user’s experience even simpler.
c) Help desk administrators can more easily work on telecommuter end user ticket issues (manage-out capabilities).
d) It securely and seamlessly extends my I.T. infrastructure environment over the internet. IPsec and HTTPS (SSL certificates) encrypt the traffic between the client and the DirectAccess server to prevent interception. In addition, Windows Firewall must be enabled end-to-end before a successful DirectAccess connection can be made.
e) DirectAccess utilizes IPv6. Since Windows Server 2012, DirectAccess can be configured behind a firewall using NAT (Network Address Translation) with a single NIC. This deployment model requires the IP-HTTPS transition technology.
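As a rough orientation before the step-by-step screenshots, the role installation and a basic single-server configuration can be sketched in PowerShell as follows; the public name da.contoso.com is a placeholder, not a value from this lab.

# Install the Remote Access role with the DirectAccess/VPN role service.
Install-WindowsFeature -Name RemoteAccess, DirectAccess-VPN -IncludeManagementTools

# Perform a full DirectAccess install, published at the given public name (placeholder).
Install-RemoteAccess -DAInstallType FullInstall -ConnectToAddress "da.contoso.com"

# Review the resulting DirectAccess configuration.
Get-RemoteAccess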

Nested Virtualization on Hyper-V & Windows Server 2016 – Step by Step


Several changes have emerged in the marketplace and continue to do so. The concept of nested virtualization is not new; it has been possible in the vSphere ESXi stack for quite some time, but Microsoft never really showed any interest in it, as the only real use cases were lab environments and training. This has changed recently with the advent of cloud and container technologies, as we continue to abstract more and more layers of our IT infrastructure.
So what actually is nested virtualization? It is simply running a hypervisor inside a virtual machine. Once you do this you have two layers of virtualization, and this can be useful for a number of reasons.
  1. Test Environments: This makes it possible to test things like System Center Virtual Machine Manager or Hyper-V Clustering without having multiple physical machines.
  2. Containers: This is a big thing at the moment. So what is it? A container is effectively a mini VM for applications. Instead of the whole operating system being virtualized, a container provides an isolated environment for an application to reside in without the overhead of a full virtual machine. At any rate, Microsoft is providing us with two different types of containers, Windows Containers and Hyper-V Containers, which I have explained in this post.
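To actually turn nested virtualization on, Hyper-V in Windows Server 2016 needs the VM’s processor to expose the virtualization extensions. A minimal sketch, assuming a powered-off VM with the placeholder name NestedHost:

# Nested virtualization requires static memory and exposed virtualization extensions.
Set-VMMemory -VMName "NestedHost" -DynamicMemoryEnabled $false
Set-VMProcessor -VMName "NestedHost" -ExposeVirtualizationExtensions $true

# MAC address spoofing lets VMs nested inside the guest reach the physical network.
Get-VMNetworkAdapter -VMName "NestedHost" | Set-VMNetworkAdapter -MacAddressSpoofing On

# Then install the Hyper-V role inside the guest:
# Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart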

Overview of Networking in Virtual Machine Manager 2016


Networking in System Center Virtual Machine Manager is always a difficult subject for students, so hopefully this post will make it a little easier to understand.
When we talk about networking in Hyper-V and VMM, it is important that you understand some of the key concepts of virtualized networking, such as VM Networks, Logical Switches, and Logical Networks, and how these relate to the underlying infrastructure and Hyper-V’s networking capabilities. This forms the basic knowledge that will be required when configuring advanced networking features such as Port Classifications and Virtual Port Profiles.
Networking in Hyper-V
In Windows Server 2012 Hyper-V, Microsoft changed the networking functionality of both Windows Server and Hyper-V. In previous versions of Hyper-V you required at least four physical network adapters in a cluster configuration to achieve a networking configuration that isolated the various types of network traffic:
Management: This network is used for general management including managing the Hyper-V host, Active Directory communication, deploying virtual machine files and backup operations.
Cluster: This type of network typically has two responsibilities, the first for Cluster heartbeats between Hyper-V Cluster nodes and secondly for Redirected Input/Output operations (IO) when Cluster Shared Volume (CSV) traffic is redirected to the CSV owner node, either due to a failure in a Host’s connection to the underlying storage or during a backup operation. In Windows Server 2012 and above CSV redirection no longer occurs during back-up operations.
Live Migration: This type of network is for migration of virtual machines between Hyper-V Cluster nodes. This network can remain idle for periods of time as Live Migrations are not a continuous operation.
Virtual Machines: This is normally the network responsible for carrying traffic to and from virtual machines.
In some cases this added up to several physical network cards. If you were using iSCSI-based storage, you would ideally require an additional two network adapters, configured with Multipath IO (MPIO) for resilience, for connecting the Hyper-V hosts to the shared storage array.
The following diagram shows a typical configuration of Windows Server 2008 R2 Hyper-V:
Prior to Windows Server 2012, physical network adapter teaming was not supported by Microsoft and had to be enabled on the physical NICs using software from the network adapter vendor. In order to do this, the NICs would have to be from the same hardware vendor, which had the potential to cause problems. For example, if all the NICs were the same model and vendor and you updated the NIC driver, a faulty driver would hurt every adapter at once.
In Windows Server 2012 Microsoft included NIC teaming within the base operating system and fully supported its use with Hyper-V. This allowed for a supported Hyper-V implementation when using NIC teaming. By using teaming you are able to create load-balanced failover teams of NICs with the common capabilities of the underlying NICs. Crucially, it also allowed for teaming network adapters from multiple vendors into a single team. This can assist in mitigating the driver/firmware update problems mentioned above.
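Creating one of these in-box (LBFO) teams is a one-liner in PowerShell. A minimal sketch, assuming two adapters named NIC1 and NIC2 and a team name of ConvergedTeam (all placeholder names):

# Create a switch-independent team with the Dynamic load-balancing algorithm.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Review the team and its members.
Get-NetLbfoTeam -Name "ConvergedTeam"
Get-NetLbfoTeamMember -Team "ConvergedTeam"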
Rather than requiring the use of dedicated NICs for the different configurations (iSCSI, Management, Live Migration & Cluster), Microsoft introduced the concept of converged networks. In a nutshell, the Hyper-V switch in Windows Server 2012 onwards was re-architected and all network cards could now be presented as one to Hyper-V & SCVMM. The following diagram shows how this could be implemented:
This converged networking model can allow for a much more efficient use of the available bandwidth. With server hardware now shipping with 10Gb Ethernet, having a physical 10Gb network adapter dedicated to Live Migration would not be the best use of the available bandwidth.
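A hedged sketch of what a converged setup can look like on top of the team from the earlier sketch; the switch name, vNIC names, VLAN IDs, and bandwidth weights are all example values, not a prescription:

# Create the virtual switch on the team, using weight-based bandwidth management.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add host-level virtual NICs for each traffic type.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Tag each vNIC with its VLAN and give it a relative share of the bandwidth.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 20
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 30
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30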
Receive Side Scaling (RSS)
Receive side scaling (RSS) is a network driver technology that enables the efficient distribution of network receive processing across multiple CPUs in multiprocessor systems.
Note: Because hyper-threaded CPUs on the same core processor share the same execution engine, the effect is not the same as having multiple core processors. For this reason, RSS does not use hyper-threaded processors.
When a Windows Server receives network traffic on any network adapter, that traffic is processed on physical CPU core 0. A typical CPU core can process roughly 4-6Gbps of network traffic. While this is not a problem on a 1Gbps network, most modern networks are implemented with a minimum of 10Gbps. When you combine 10Gbps networking with NIC teaming, the aggregated bandwidth can increase drastically. To combat this bottleneck you can employ two networking offloads typically available on most enterprise-level network adapters: Receive Side Scaling (RSS) and Virtual Machine Queue (VMQ).
RSS is used for physical network traffic, for example when copying large files, such as virtual machine hard disk files (VHDX), to a Hyper-V host or when a virtual machine is being live migrated between hosts. This is network traffic that is destined for the Hyper-V host, not a virtual machine. RSS allows the NIC to use multiple CPU cores to process the incoming network traffic, consequently allowing for a higher network throughput. It is possible to configure specific physical network adapters to use specific CPU cores through PowerShell using the Set-NetAdapterRss command. By using the Get-NetAdapterRss command you can see if it is enabled on a NIC:
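For example, a hedged sketch with a placeholder adapter name of NIC1 and example processor numbers:

# Check whether RSS is enabled and how it is currently configured.
Get-NetAdapterRss -Name "NIC1"

# Enable RSS and restrict it to a specific range of cores, keeping core 0 free.
Enable-NetAdapterRss -Name "NIC1"
Set-NetAdapterRss -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 6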
You can also open the NIC’s properties in Device Manager to make sure Receive Side Scaling is supported by the NIC.
If you create a NIC team using the in-box Windows NIC teaming functionality, then the RSS functionality of the NICs can be surfaced in the NIC team. This ensures that you have RSS capabilities even if one of the NICs in the team fails.
In Windows Server 2012 R2 Microsoft introduced virtual RSS (vRSS) which can be enabled within a Virtual Machine running Windows Server 2012 R2 on a Hyper-V host running Hyper-V 2012 R2. This allows virtual machines to take advantage of RSS as a physical host would, as network traffic inside a Virtual Machine is still processed by its vCPU core 0. To use vRSS inside a Virtual Machine you require a physical network card that has Virtual Machine Queues (VMQ) enabled.
Virtual Machine Queue (VMQ)
VMQ is similar to RSS; however, it works on network traffic that is destined for virtual machines. Just as RSS is capable of spreading incoming network traffic over multiple physical CPU cores, so can VMQ. They are mutually exclusive technologies, so if you have VMQ enabled on a physical NIC you cannot have RSS enabled.
Note: If you have network adapters below 10Gbps, then VMQ is disabled by default. To enable VMQ on these adapters you will need to create a new Registry entry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\VMSMP\Parameters\BelowTenGigVmqEnabled = 1
It is strongly recommended that you do not enable VMQ on adapters that are running at less than 10Gbps. In some circumstances, using VMQ on adapters running at less than 10Gbps can introduce problems with Virtual Machine networking, and it is typically not required.
If you create a NIC team using the in-box Windows NIC teaming functionality then the VMQ functionality of the NICs can be enabled at the NIC team level. This then ensures that you have VMQ capabilities even if one of the NICs in the team fails.
When a Hyper-V switch is attached to a physical NIC it will utilize VMQ and disable RSS, as such if you are using the converged networking model then the vNICs attached to the virtual switch are unable to take advantage of RSS.
It is possible to configure specific physical network adapters to use specific CPU cores through PowerShell using the Set-NetAdapterVmq command. See the following blog for more information http://www.bctechnet.com/hyper-v-vmm-2012-r2-and-vmq-part-1/
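For orientation, a small sketch along the same lines as the RSS example above (NIC1 and the processor numbers are again placeholders):

# Check the current VMQ settings on the adapter.
Get-NetAdapterVmq -Name "NIC1"

# Move VMQ interrupt processing away from core 0.
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 6

# See which queues are currently allocated to which virtual machines.
Get-NetAdapterVmqQueue -Name "NIC1"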
Virtual Network Adapter Capabilities
In addition to improving the physical network capabilities, Microsoft improved the capabilities of virtual network adapters. For virtual machines you can enable the following capabilities on a per Virtual Network Adapter basis:
VMQ: VMQ can be enabled or disabled for a virtual machine if required.
MAC Address Spoofing: This allows you to specify whether a Virtual Machine is permitted to change its source MAC address for outgoing packets. This is required when you are using Windows Network Load Balancing inside a Virtual Machine.
Router Guard: This prevents a Virtual Machine from sending router advertisement and redirection messages.
DHCP Guard: This prevents a Virtual Machine from sending DHCP server messages. Do not enable this setting on Virtual Machines that are authorized DHCP servers.
IPsec Task Offload (IPsecTO): This allows virtual machines to use the IPsecTO capabilities of a physical network adapter to offload the encryption operations from the CPU to the physical NIC. Not all physical adapters are IPsecTO capable so consult your NIC vendor for specifications.
Single Root – Input/Output Virtualization (SR-IOV): SR-IOV enables network traffic to bypass the Hyper-V switch. An SR-IOV capable adapter exposes a Physical Function (PF), and the Virtual Machine is given a Virtual Function (VF) that is mapped to the host’s PF. As SR-IOV enabled traffic bypasses the Hyper-V switch, you cannot apply most other capabilities to the virtual network adapter.
IEEE Priority Tagging: This allows for traffic originating from a Virtual Machine to have IEEE 802.1p QoS tagging applied and for those tags to remain after the traffic leaves the Hyper-V switch.
Guest Teaming: This allows for virtual network adapters to be teamed inside a guest virtual machine. This is typically used in conjunction with SR-IOV enabled virtual network adapters.
Bandwidth Management: A virtual network adapter can have one of two types of bandwidth management applied. You can define hard limits for an adapter where you specify the minimum and maximum amount of bandwidth, in Mbps, a network adapter can utilize. The other option is to use bandwidth weights, where each adapter is given a relative share of the available bandwidth rather than fixed limits.
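Several of these capabilities map directly onto Set-VMNetworkAdapter parameters. A hedged sketch, using a placeholder VM name of NLB01 and example bandwidth values:

# Enable MAC address spoofing (needed for NLB in the guest) plus DHCP and router guard.
Set-VMNetworkAdapter -VMName "NLB01" -MacAddressSpoofing On -DhcpGuard On -RouterGuard On

# Weight-based bandwidth management: a relative share of the switch bandwidth.
Set-VMNetworkAdapter -VMName "NLB01" -MinimumBandwidthWeight 20

# Or absolute limits in bits per second (about 100 Mbps minimum, 1 Gbps maximum).
Set-VMNetworkAdapter -VMName "NLB01" -MinimumBandwidthAbsolute 100000000 -MaximumBandwidth 1000000000

Whether you can use weights or absolute minimums depends on the MinimumBandwidthMode the virtual switch was created with.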
Logical Networks
The Logical Networks piece of VMM is vital to ensure a successful deployment. Logical Networks are the first step in defining a model for your physical networking (also referred to as Fabric networks).
Logical Networks in VMM should be used to define a purpose. Each Logical Network should contain at least one Logical Network Site, which in turn contains the subnets and VLANs you define.
Each Logical Network Site is tied to a Host Group in VMM.
For example, you should have at least four Logical Networks defined: Host Management, Host Clustering, Host Live Migration, and Virtual Machines, as mentioned above.
The diagram below shows how a Logical Network and its subsequent Logical Network Sites could be configured. Each Logical Network Site has more than one VLAN available, with the Contoso-New York site having three VLANs available.
IP Pools
Each Logical Network Site can have an IP Pool associated with it, which allows VMM to assign IP addresses from the pool to resources. It is possible to have multiple IP Pools per Logical Network Site; however, care should be taken when managing these. You cannot have overlapping IP Pools: for example, if you had a pool that contained all the IP addresses between 10.0.0.100 and 10.0.0.200, you could not have another pool associated with the same logical site that had IP addresses between 10.0.0.190 and 10.0.0.220. If you need to change the parameters of the IP Pools, such as changing the DNS servers for a Logical Network Site, then you must manually change all of the IP Pools for that site.
As the IP Pool is attached to a Logical Network Site VMM already knows the IP subnet for the IP Pool but is unaware of other required information. When creating the IP Pool you need to specify the starting IP address within a pool and an ending IP address. It may be that you only have 30 IP addresses within the subnet for use by VMM as the remainder of the subnet is in use by other services such as Active Directory Domain Controllers. It is recommended that you provision new VLANs and subnets for use by Hyper-V and VMM to ensure there is no overlap or configuration errors.
Additional items that can be specified within an IP Pool are:
  • IP addresses to be reserved for use as Virtual IPs by Load Balancers.
  • IP addresses to be excluded from allocation within the given range. These IP addresses could already be in use or could be used for other configuration items such as Cluster IP addresses for guest clusters.
  • Gateway IP addresses and metrics.
  • DNS server(s) and DNS suffixes.
  • WINS server(s).
IP Pools are very similar to DHCP scopes in their capability; however, one of the key differentiating factors is that an IP address issued by VMM from an IP Pool is statically assigned to a virtual machine when it is deployed. This is not a reservation like a DHCP reservation; it is a statically configured IP address within the virtual machine, be it a Windows operating system or Linux.
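To make the relationship between Logical Networks, sites, and IP Pools concrete, here is a hedged sketch using the VMM PowerShell module; the host group, names, VLAN, and addresses are all example values:

# Pick the host group the Logical Network Site will be scoped to.
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

# Create a Logical Network with one site containing a single VLAN/subnet pair.
$ln = New-SCLogicalNetwork -Name "Host Management"
$vlan = New-SCSubnetVLan -Subnet "192.168.0.0/24" -VLanID 10
$site = New-SCLogicalNetworkDefinition -Name "Contoso - New York" -LogicalNetwork $ln -VMHostGroup $hostGroup -SubnetVLan $vlan

# Carve a static IP Pool out of the site's subnet, with gateway and DNS details.
$gateway = New-SCDefaultGateway -IPAddress "192.168.0.1" -Automatic
New-SCStaticIPAddressPool -Name "Host Management Pool" -LogicalNetworkDefinition $site -Subnet "192.168.0.0/24" -IPAddressRangeStart "192.168.0.100" -IPAddressRangeEnd "192.168.0.200" -DefaultGateway $gateway -DNSServer "192.168.0.10"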
Hyper-V Network Virtualization
If you are going to use Hyper-V Network Virtualization, referred to as HNV or NVGRE, you will need to have an additional dedicated Logical Network for your Provider Address layer. This is the network that NVGRE packets will be encapsulated within when they travel across the physical wire between Hyper-V hosts. VMM must have an IP Pool for the Provider Address Logical Network Site(s). The topic Overview of Hyper-V Network Virtualization gives more information on HNV, and the optional lesson, Software Defined Networking, gives more detail.
VLANs

VMM networking supports two types of VLAN:
  1. Traditional VLAN: This is the standard IEEE 802.1Q implementation. Each network is tagged with a VLAN ID between 0 and 4095.
  2. Private VLAN (PVLAN): This is an extension of the traditional VLAN where an additional layer of Layer 2 isolation is available.
VLANs
When you create a Logical Network Site within VMM you only need to declare the VLAN ID; however, without also declaring a subnet you cannot create an IP Pool for the Logical Network Site. When creating a Logical Network Site it is recommended that both parameters are completed:
  1. VLAN ID: This is between 0 and 4095. If you declare a VLAN ID of 0, then VMM will not apply a VLAN to that subnet.
  2. Subnet: This is the subnet that is within the declared VLAN. This is given in CIDR notation; for example, if your Host Management network has a network ID of 192.168.0.0 and a subnet mask of 255.255.255.0 then this would be entered as 192.168.0.0/24.
PVLANs
Private virtual LANs (PVLANs) are regularly used by service providers to circumvent the scale limitations of VLANs, as you can only have a theoretical maximum of 4096 VLANs per network. PVLANs allow network administrators to split a single VLAN into a number of discrete and isolated sub-networks, which can then be assigned to separate customers, or tenants. PVLANs share the IP subnet that was assigned to the parent VLAN, but they require a router to communicate with other PVLANs and with resources on any other network.
A PVLAN consists of a primary VLAN ID and a secondary VLAN ID and each port, whether that be a physical or virtual port, can be configured in one of these modes:
  1. Promiscuous: Any network port, physical or virtual, that is set to be Promiscuous can communicate with all interfaces, physical or virtual, including isolated and community within a PVLAN.
  2. Isolated: Any virtual machine that is set to be Isolated has a complete Layer 2 separation from all other interfaces, physical or virtual. PVLANs prevent all traffic to an isolated port, except to and from Promiscuous ports.
  3. Community: Any virtual machine that uses a Community PVLAN can communicate with virtual machines that are in the same PVLAN (i.e. have the same Primary and Secondary VLAN IDs) and are set to community or promiscuous mode. Each Community is isolated from other communities.
When a port is in Promiscuous mode it is on the primary VLAN and communicates with resources on the primary and secondary VLANs. A Promiscuous port would typically be found on a router, firewall or other gateway device.
You can only have one Isolated PVLAN per primary VLAN, so consideration should be given to ensuring that a different primary VLAN ID is used in each Logical Network Site; however, it is recommended that you use the same secondary VLAN ID for each Isolated PVLAN. For example, if you have two Logical Network Sites where site A has a primary VLAN ID of 100 and site B has a primary VLAN ID of 200, you should use the same secondary VLAN ID in both sites for consistency across your network infrastructure. A typical use for an Isolated mode PVLAN would be for web servers where isolation is required. If a web server were compromised, it would not be able to communicate with other servers in the PVLAN.
When you configure a PVLAN in Community mode each secondary VLAN ID is a discrete community. Each discrete community is allowed to communicate within itself and with Promiscuous ports that have the same primary VLAN ID.
Hyper-V supports all three types of PVLAN; however, VMM can only create Logical Networks and Logical Network Sites based on the Isolated PVLAN mode. If your virtual machines require PVLANs in either Promiscuous or Community mode then these would need to be configured outside of VMM.
Configuration of PVLANs in Hyper-V must be done through PowerShell. To configure the secondary VLAN ID, see the relevant section on TechNet for Set-VMNetworkAdapterVlan:
http://go.microsoft.com/fwlink/?LinkID=529872
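As a rough illustration of the three port modes described above, a hedged sketch with placeholder VM names and example VLAN IDs:

# Isolated: full Layer 2 separation within primary VLAN 100.
Set-VMNetworkAdapterVlan -VMName "Web01" -Isolated -PrimaryVlanId 100 -SecondaryVlanId 200

# Community: members of secondary VLAN 201 can talk to each other.
Set-VMNetworkAdapterVlan -VMName "App01" -Community -PrimaryVlanId 100 -SecondaryVlanId 201

# Promiscuous: can reach every listed secondary VLAN (typically a router or gateway VM).
Set-VMNetworkAdapterVlan -VMName "Gateway01" -Promiscuous -PrimaryVlanId 100 -SecondaryVlanIdList 200-201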
Logical Switches
In Hyper-V there is only one switch, the Hyper-V Extensible Switch. Unlike other hypervisors the switch is not replaceable, it is extensible. The switch can have extra capabilities added to it by deploying switch extensions. There are a variety of vendors offering switch extensions including Cisco with their Nexus 1000v and 5nine with their Cloud Security extension. By being extensible you can create an extremely capable virtual switch that implements a variety of technologies without having to compromise on functionality.
A Logical Switch is VMM’s management component that combines the physical characteristics you have defined in an Uplink Port Profile with the characteristics you have defined in Virtual Port Profiles. It can also have virtual switch extensions defined. By combining these into a Logical Switch it is possible to create a reusable configuration that offers consistency between deployments. When you deploy a Logical Switch you choose which Uplink Port Profile to use with that deployment; as VMM knows how you have configured the Logical Switch, it will create NIC teams if required and apply the required configuration.
Standard Switches
VMM can create standard Hyper-V switches that are not Logical Switch implementations, however there are several limitations:
  1. You can only use a single physical NIC with a Standard Switch deployment directly through VMM. You can attach a Standard Switch to a NIC team through VMM, however that NIC team would have to be created outside of VMM.
  2. You cannot create a completely Converged Network implementation on a Standard Switch through VMM. The most you can achieve from this type of deployment is a management NIC. Any additional host level NICs you require will have to be created manually on the Hyper-V host.
  3. Logical Network Sites have to be manually assigned to the network interface connected to the Standard Switch.
For example, if you had ten Hyper-V hosts in a cluster you could create a single Logical Switch that characterizes all of the networking available in the cluster; you could then deploy the Logical Switch to each host knowing that the configuration of the switch will be identical across each host in the cluster. Using a Standard Switch would require considerable administrative effort to ensure all of the hosts are configured in the same way.
As the Logical Switches contain all of the information from the Uplink Port Profile and Virtual Port Profiles, VMM can determine if a Logical Switch is capable of servicing a virtual machine. For example, if you had a Virtual Port Profile called “VPP-NLB” and it was assigned to a Virtual Machine Template, then any host to which you wish to deploy a virtual machine using that Virtual Port Profile must have “VPP-NLB” declared in its Logical Switch implementation. VMM will prevent the deployment of a virtual machine where the virtual network adapter’s profile is absent from the target Logical Switch.
VM Networks
In terms of the overall network architecture in VMM the VM Network is the final component. The VM Network is the connection between any given virtual network adapter and a Logical Network. Every virtual network adapter must be connected to a VM Network in VMM, including virtual network adapters that are part of a Converged Network. The characteristics of a VM Network are defined by the Logical Network they are hosted in.
The Logical Network and VM Network Relationship
It is possible to have multiple VM Networks connected to a single Logical Network as the Logical Network dictates the level of Isolation required. When you create the Logical Network you decide on the Isolation level that is required for virtual machines connected to that Logical Network.
Hyper-V Network Virtualization and VM Networks 
Until now you have learned how VMM represents connections between virtual machines and physical networks in the traditional networking sense. Hyper-V Network Virtualization (HNV) is a way of creating isolated networks within a Logical Network. These isolated networks are known as Tenant networks. The isolated networks are totally contained within the physical Fabric network using the Network Virtualization using Generic Routing Encapsulation (NVGRE) protocol.
Tenant networks are VM Networks where HNV has been enabled in the hosting Logical Network; the lesson Logical Networks discusses the Logical Networks that can be used for HNV. A Tenant network can have multiple subnets defined with whichever address spaces they require. The physical networking layer is unaware of the Tenant networks defined within. The next topic, Overview of Hyper-V Network Virtualization, gives a more complete high-level view of HNV, with the optional lesson, Software Defined Networking, going into detail about HNV.
Hyper-V Network Virtualization

Hyper-V Network Virtualization is the concept of Software Defined Networks within Hyper-V and VMM. It is referred to as HNV or Network Virtualization using Generic Routing Encapsulation (NVGRE).
The protocol Microsoft has created is called NVGRE, which involves encapsulating Tenant network traffic within a physical provider network. The Provider Network contains a unique physical IP address space that is used by NVGRE.
It is possible for Provider Networks to span multiple Hyper-V clusters and even datacenters. As the Tenant traffic is encapsulated, all that is seen on the physical network is the Provider Network, seen as regular network traffic to networking infrastructure such as switches and routers. Provider Networks are Logical Networks within a company’s network infrastructure and therefore can have multiple Logical Network Sites defined within a single Logical Network. If a Provider Network spans multiple datacenters it is possible for Tenant networks to span multiple datacenters.
Routing Domains
A Routing Domain typically consists of a collection of IP networks where IP addresses do not overlap. Typically they are administered by a single source, for example an enterprise. Routing Domains are usually terminated at a boundary of responsibility or security. For example, your company’s network is its own Routing Domain; however, within that overarching Routing Domain there could be several others. If you have a perimeter network, that will be its own Routing Domain, and your Production network will be another. If you have Development and Testing networks, these are usually their own Routing Domains.
The diagram below shows a high-level view of Routing Domains. Each area within this network is its own Routing Domain. Each of these Routing Domains could be combined and represent a Routing Domain for the organization.

Isolation
Within HNV a Tenant Network forms the isolation boundary (or Routing Domain) and subnets within the Tenant Network are broadcast boundaries. Each Tenant Network, which is created in VMM through the VM Networks component, is a single routing domain and is given a Routing Domain ID (RDID), which is a unique GUID issued by the network management software, in this case VMM. Each Tenant Network can have one or more subnets, and each subnet is given a unique Virtual Subnet ID (VSID). Within a Logical Network, which can span datacenters, each VSID must be unique; the range of possible VSIDs is 4096 to (2^24) - 2, which is over 16.7 million possible unique VSIDs within a single Logical Network. This gives HNV the ability to scale far beyond traditional VLANs and Private VLANs. It is possible to have more than one Logical Network configured for HNV within an organization, but this is not typical.
The diagram below shows a possible configuration for HNV. The Tenant Networks, Contoso – Tenant A and Contoso – Tenant B, are contained within the Provider Address network:

When you assign an IP Pool to the Provider Address Logical Network Sites you only need to include a start and end IP address. If you need to route your Provider Address network then you will need to include a Gateway IP Address. This Gateway IP Address is not used by normal Windows traffic, such as Management or Backup traffic, and is for the sole use of the Provider Address network. When viewing the routing table on a Hyper-V host where HNV is in use, this networking information does not appear. To view it, you will need to use the Get-NetVirtualizationProviderRoute cmdlet.
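On a Hyper-V host where HNV is in use, the provider address information can be inspected with the network virtualization cmdlets; a short sketch:

# Provider (PA) addresses and routes assigned to this host.
Get-NetVirtualizationProviderAddress
Get-NetVirtualizationProviderRoute

# Lookup records showing customer address (CA) to provider address (PA) mappings.
Get-NetVirtualizationLookupRecord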
Routing inside a HNV enabled VM Network
When you create multiple subnets in an HNV enabled VM Network, any device attached to the VM Network can route between the subnets contained within it. For example, if you have a subnet with the IP subnet of 10.0.0.0/24 and another subnet of 192.168.1.0/24, then devices in either subnet can communicate without restriction.
When a VM Network subnet is created, the first IP address in the subnet is allocated as the default gateway IP address (in the previous example, 10.0.0.1 and 192.168.1.1). These default gateway IP addresses allow you to route between subnets within the VM Network, but they will not allow you to break out of the VM Network without configuring a connection to an HNV Gateway. The Software Defined Networking topic later in this module goes into more detail about HNV Gateways.
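As an illustration of the two-subnet example above, a hedged sketch of creating an HNV-isolated VM Network through the VMM PowerShell module; the Logical Network name "Provider" and the tenant names are placeholders:

# The HNV-enabled Logical Network that will carry the provider (PA) traffic.
$ln = Get-SCLogicalNetwork -Name "Provider"

# Create the isolated tenant VM Network.
$vmnet = New-SCVMNetwork -Name "Contoso - Tenant A" -LogicalNetwork $ln -IsolationType "WindowsNetworkVirtualization" -CAIPAddressPoolType "IPV4" -PAIPAddressPoolType "IPV4"

# Add the two routable subnets; HNV reserves the first address in each (10.0.0.1 and 192.168.1.1) as the gateway.
Add-SCVMSubnet -Name "Tenant A - Subnet 1" -VMNetwork $vmnet -SubnetVLan (New-SCSubnetVLan -Subnet "10.0.0.0/24")
Add-SCVMSubnet -Name "Tenant A - Subnet 2" -VMNetwork $vmnet -SubnetVLan (New-SCSubnetVLan -Subnet "192.168.1.0/24")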
Routing outside of a HNV enabled VM Network
When a Tenant’s traffic needs to leave its isolated network, it must be de-encapsulated from the Provider Network. This task is undertaken by specially configured Windows Server Gateways, or networking appliances. Windows Server 2012 R2 ships in-box with the Remote Access role, and part of this role is a multi-tenant HNV Gateway.
You can add a Windows Server with these roles to VMM and configure it for use by HNV. Once you have added the server to VMM it is configured to be a multi-tenant gateway, which prohibits you from changing the configuration through the Routing and Remote Access MMC. While it is possible to change the configuration of the gateway through PowerShell it is not recommended. VMM will make all of the required configuration changes and keep it up to date.
Routing options
There are two routing options available for HNV gateways:
  1. Network Address Translation.
  2. Direct Routing.
Network Address Translation (NAT)
Each VM Network is assigned an IP address from an IP Pool that is associated with the externally facing NIC on the HNV Gateway and this IP address is used as the public IP address for the VM Network. It is possible to configure port forwarding rules to allow traffic into the VM Network via this IP address. As NAT is in use, the internal IP addresses and subnets of the VM Network are not known outside of the VM Network. This creates its own Routing Domain with total isolation for the virtual machines within the VM network.
It is possible to configure the Gateway to provide site-to-site VPN support, and as such you can create a secure connection from a tenant network to another network using industry standard VPNs. If you connect a VM Network behind a NAT Gateway to another network, you create a single Routing Domain which incorporates both networks, and as such the network address spaces within each network should be unique.
Using NAT-based routing for your VM Network, it is possible to connect, through the site-to-site VPN, a Tenant VM Network in your datacenter to a Virtual Network in Azure using an Azure Virtual Network Gateway.
Direct Routing
This allows the internal IP addresses of the VM Network to be seen on the networks on the external side of the HNV Gateway. For example, if you have a subnet of 10.10.10.0/24 within your VM Network and a subnet of 192.168.1.0/24 on the external side, then traffic on the external side would see the originating IP address within the 10.10.10.0/24 subnet.
When a Gateway is configured in this mode it has a one-to-one relationship with a VM Network, and as such it is exclusively used by a single VM Network; however, a Tenant can still have as many subnets within their VM Network as required. In addition, the networks inside the VM Network and the networks outside the gateway create a single routing domain which incorporates both networks.
Conclusion
I hope this makes networking in Hyper-V and SCVMM a little easier to understand.