Upgrading / Migrating from vSphere 5.x to 6.x (6.5, 6.7): Best Practices & Approach

Migrating and upgrading physical systems is a nightmare. With VMware vSphere, however, it is not that complicated: proper planning will almost always lead to a successful migration from vSphere 5.x to 6.x without any downtime for the VMs. A few cases that do require VM downtime are described in this post. I recommend reading the complete post, as the same process and approach applies to all vSphere migrations irrespective of version.
Support for vSphere 5.0 and 5.1 already ended on 24 August 2016, and support for vSphere 5.5 ends on 19 September 2018, yet many environments are still running 5.0 and 5.1. It is high time to upgrade to the latest vSphere 6.x. This post details every step to be considered for a successful migration.

Know the Existing Environment

For a successful migration we need to know the existing environment completely, so gathering the details of the current vSphere environment is very important. Thanks to RVTools, this takes just a couple of minutes. After exporting the RVTools report, analyze it and collect the information below (a few ESXi shell commands that help verify the same details are shown after the list).
  • Existing vCenter Server version and build number
  • Existing ESXi host versions and build numbers
  • Existing server hardware make and model, along with NIC and HBA card details (important if the same hardware is reused for the upgrade)
  • Standard or distributed switches in use, with their port groups, uplinks and VLAN details
  • Cluster information with HA and DRS rules
  • EVC mode in use and the maximum EVC mode each server supports
  • Every VM's name, OS, IP address, port group and datastore details
  • Hardware version of all VMs and VMware Tools installation status
  • VM RDM LUN details: the SCSI ID mapping for each LUN, RDM type (physical/virtual) and pointer file location
  • USB device mappings for VMs, if any
  • Integrations with vCenter Server such as backup, SRM and others
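If you want to spot-check or supplement the RVTools report, the same details can be pulled per host from the ESXi shell. The commands below are standard esxcli/vim-cmd calls; run them on each host (output omitted here):
esxcli system version get              # ESXi version and build number
esxcli hardware platform get           # server vendor and model
esxcli network nic list                # NIC inventory with drivers in use
esxcli storage core adapter list       # HBA inventory with drivers in use
esxcli network vswitch standard list   # standard switches and their port groups
esxcli network vswitch dvs vmware list # distributed switch membership on the host
vim-cmd vmsvc/getallvms                # registered VMs with their datastore paths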

Verify compatibility and upgrade matrix

Verifying VMware product compatibility and hardware compatibility is very important.
VMware product compatibility can be verified in the VMware Product Interoperability Matrix.
Hardware compatibility can be verified in the VMware Compatibility Guide.

Hardware & Storage compatibility

It is very important to check server, storage, NIC card and HBA card compatibility before planning the upgrade or ESXi installation. NIC and HBA compatibility is covered in the drivers section below.

Verifying Hardware compatibility & BIOS firmware

Open the VMware Compatibility Guide, select the Partner Name (vendor), enter the server model in the keyword field, then click "Update and View Results".
Search for the exact server model and CPU series as shown below and check whether the desired ESXi version is supported. In the example below, one model supports up to 6.5 U1 while the other supports 6.7 as well.
Click on the ESXi version to see the supported hardware firmware details.
The recommended BIOS and hardware firmware details will be shown as below. Install them before installing ESXi.

Verifying Storage and SAN devices compatibility and Drivers

Open the VMware Compatibility Guide and select Storage/SAN as shown below.
Select the vendor, enter the storage model in the keyword field, and click Update.
Search for the exact storage model, select it, and click on the ESXi version.
Note: if an ESXi version is not shown in the list, it is either not supported or not yet validated by VMware.
All supported drivers and firmware versions will be listed for that storage.

vCenter to ESXi compatibility

vCenter Server support for the ESXi host versions in play is very important during a migration, because in most cases the migration happens while VMs are up and running on the existing ESXi hosts.
  • vCenter Server 6.5 can manage ESXi 5.5, 6.0 and 6.5 hosts; ESXi 5.0 and 5.1 hosts are not supported and must be upgraded first.
  • vCenter Server 6.7 can manage ESXi 6.0, 6.5 and 6.7 hosts; ESXi 5.x hosts are not supported.
  • Always confirm the exact combination in the VMware Product Interoperability Matrix before starting.

Supported vCenter Upgrade Path

  • vCenter Server 5.0 or 5.1 (with any update) cannot be upgraded directly to 6.5; an intermediate upgrade to 5.5 or 6.0 is required.
  • vCenter Server 5.5 or later can be upgraded directly to 6.5 or 6.5 U1.
  • vCenter Server 5.x cannot be upgraded directly to 6.7; an intermediate upgrade to 6.0 or 6.5 is required.
  • vCenter Server 6.0 or later can be upgraded directly to 6.7.

Supported ESXi Upgrade Path

  • ESXi 5.0 or 5.1 (with any update) cannot be upgraded directly to 6.5; an intermediate upgrade to 5.5 or 6.0 is required.
  • ESXi 5.5 or later can be upgraded directly to 6.5 or 6.5 U1.
  • ESXi 5.x cannot be upgraded directly to 6.7; an intermediate upgrade to 6.0 or 6.5 is required.
  • ESXi 6.0 or later can be upgraded directly to 6.7.
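Before choosing an upgrade path, confirm the exact version and build currently running on each host. This can be checked from the ESXi shell (the build number below is only illustrative):
vmware -vl
Example:
[root@localhost:~] vmware -vl
VMware ESXi 5.5.0 build-3248547
VMware ESXi 5.5.0 Update 3
esxcli system version get can be used as well and additionally lists the version, build and update level as separate fields.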

Decide the vSphere 6.x version to be upgraded to

Based on the available hardware (new or reused) and its compatibility as verified above, decide the target vSphere 6.x version and build. For example, if the hardware is compatible with ESXi 6.5 U1, then the vCenter and ESXi upgrades need to be planned for vSphere 6.5 U1; the detailed steps are listed below.
Note that it is not just the server model: NIC and HBA card compatibility is equally important, whether you reuse existing hardware or buy new.

vSphere 6.x License and Support

VMware vSphere licenses for ESXi and vCenter are version-based: if you are an existing customer with ESXi 5.x and vCenter 5.x licenses, they cover only 5.x (5.0, 5.1, 5.5). However, if you have an active support agreement, ESXi and vCenter 5 licenses can be upgraded to 6.x; contact your local VMware partner for assistance.

Supported Drivers & Firmware for Hardware

Once the vSphere version for the available hardware is decided, all the necessary drivers for NIC, FCoE, FC (HBA) and multipathing need to be identified.
This section explains how to find the exact driver and download it for the target ESXi version.

Support and Driver for Network NIC Cards

Step 1: Run the below command to list all NIC cards available on the host.
esxcli network nic list
Example:
[root@localhost:~] esxcli network nic list
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
vmnic0  0000:01:00.0  bnx2    Up            Down         0      Half    a4:ba:db:0e:cc:9c  1500  QLogic Corporation QLogic NetXtreme II BCM5716 1000Base-T
vmnic1  0000:01:00.1  bnx2    Up            Up           1000   Full    a4:ba:db:0e:cc:9d  1500  QLogic Corporation QLogic NetXtreme II BCM5716 1000Base-T
Step 2: Get the Vendor ID (VID), Device ID (DID), Sub-Vendor ID (SVID), and Sub-Device ID (SDID) using the vmkchdev command:
vmkchdev -l |grep vmnic#
Example:
[root@localhost:~] vmkchdev -l |grep vmnic0
0000:01:00.0 8086:10fb 103c:17d3 vmkernel vmnic0
[root@localhost:~]
Vendor ID (VID) = 8086
Device ID (DID) = 10fb
Sub-Vendor ID (SVID)= 103c
Sub-Device ID (SDID) = 17d3
Step 3: Get the driver and firmware currently in use for the NIC card
esxcli network nic get -n vmnic#
Example:
[root@localhost:~] esxcli network nic get -n vmnic0
Advertised Auto Negotiation: true
Advertised Link Modes: 10BaseT/Half, 10BaseT/Full, 100BaseT/Half, 100BaseT/Full, 1000BaseT/Full
Auto Negotiation: true
Cable Type: Twisted Pair
Current Message Level: 0
Driver Info:
   Bus Info: 0000:01:00.0
   Driver: bnx2
   Firmware Version: 5.0.13 bc 5.0.11 NCSI 2.0.5
   Version: 2.2.4f.v60.10
Link Detected: false
Link Status: Down
Name: vmnic0
PHYAddress: 1
Pause Autonegotiate: true
Pause RX: false
Pause TX: false
Supported Ports: TP
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: true
Transceiver: internal
Virtual Address: 00:50:56:56:6f:75
Wakeon: MagicPacket(tm)
Step 4: Find the supported driver and download it
Select IO Devices in the VMware Compatibility Guide, enter the VID, DID, SVID and SDID obtained in step 2, then click "Update and View Results". All supported ESXi versions for the NIC card will be shown as below.
Vendor ID (VID) = 8086
Device ID (DID) = 10fb
Sub-Vendor ID (SVID)= 103c
Sub-Device ID (SDID) = 17d3
Click on the required ESXi version beside the NIC driver, say 6.5 U1, as shown below.
Expand the driver version and the download link will be shown as below.
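Once the supported driver version for the target ESXi release is known, compare it with what is installed on the host. A quick check, using the bnx2 driver from the example above (substitute your own driver name):
esxcli software vib list | grep -i bnx2
The output shows the installed VIB name (for example net-bnx2), its version, vendor and acceptance level, which can be compared against the version listed in the compatibility guide.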

Support and Driver for Storage HBA Cards

Step 1: Get the Host Bus Adapter driver currently in use
# esxcfg-scsidevs -a
Output will show something like vmhba0 mptspi or vmhba1 lpfc
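As an alternative, the same vmhba-to-driver mapping can be listed in one table with the esxcli command below (shown here for convenience; the output line is illustrative):
esxcli storage core adapter list
HBA Name  Driver  Link State  UID           Description
vmhba0    mptspi  link-n/a    pscsi.vmhba0  (0000:00:10.0) LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI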
Step 2: Get the HBA driver version currently in use
# vmkload_mod -s HBADriver |grep Version
For example, run this command to check the mptspi driver:
# vmkload_mod -s mptspi |grep Version
Step 3: Get HBA Vendor ID (VID), Device ID (DID), Sub-Vendor ID (SVID), and Sub-Device ID (SDID) using the vmkchdev command:
vmkchdev -l |grep vmhba#
Example:
[root@localhost:~] vmkchdev -l |grep vmhba0
0000:01:00.0 1077:2031 0000:0000 vmkernel vmhba0
[root@localhost:~]
Vendor ID (VID) = 1077
Device ID (DID) = 2031
Sub-Vendor ID (SVID)= 0000
Sub-Device ID (SDID) = 0000
Step 4: Find the supported driver and download it
Select IO Devices in the VMware Compatibility Guide, enter the VID, DID, SVID and SDID obtained in step 3, then click "Update and View Results". All supported ESXi versions for the HBA card will be shown as below.
Vendor ID (VID) = 1077
Device ID (DID) = 2031
Sub-Vendor ID (SVID)= 0000
Sub-Device ID (SDID) = 0000
Click on the required ESXi version beside the HBA driver, say 6.5 U1, as shown below.
Verify that the VID/DID values match, select the ESXi version, expand the driver, and the download link will be shown as below.

How to Install/Update the Driver on ESXi

Upload the driver bundle to a datastore the ESXi host can reach. Use the install command if the driver is not present, or the update command if an older version is present. In some cases you may need to remove the existing driver first, if the host already has a higher version than the supported one. All the commands are given below, and a full example workflow follows them.
Remove the existing VIB:
Find the VIB name using the below command:
esxcli software vib list
Remove the VIB using the name obtained above:
esxcli software vib remove --vibname=<vib_name>
Update the VIB driver using the below command:
esxcli software vib update -d "/vmfs/volumes/Datastore/DirectoryName/PatchName_VIBname.zip"
Install the VIB driver using the below command:
esxcli software vib install -d "/vmfs/volumes/Datastore/DirectoryName/PatchName_VIBname.zip"
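A typical end-to-end run looks like the sketch below. Evacuate or power off the VMs first; the datastore path and bundle name are placeholders, so substitute your own:
esxcli system maintenanceMode set --enable true
esxcli software vib update -d "/vmfs/volumes/datastore1/drivers/net-driver-bundle.zip"
reboot
After the host is back up:
esxcli software vib list | grep -i driver_name
esxcli system maintenanceMode set --enable false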

Migration Approach & Steps

vCenter Upgrade / Install

The vCenter Server Appliance (VCSA) has come a long way and is very stable now, so there is no need to rely on the Windows-based vCenter any more; the appliance can be used without any doubt. However, depending on the environment size and integrations with other VMware products, the vCenter topology will differ. All supported vCenter topologies are documented by VMware; the three most commonly used topologies are highlighted below.

vCenter Topologies

Standard Topology 1: For small deployments with 5-10 hosts and no integrations with other VMware products such as NSX or vRA, the embedded deployment is the best topology.
1 Single Sign-On domain
1 Single Sign-On site
1 vCenter Server with the Platform Services Controller on the same machine
Limitations:
Does not support Enhanced Linked Mode
Does not support Platform Services Controller replication
Standard Topology 2: For medium to large deployments with integrations with other VMware products, or with multiple vCenter Servers for different purposes (for example one vCenter for production hosts and another for VDI hosts), the topology below is the best fit.
1 Single Sign-On domain
1 Single Sign-On site
2 or more external Platform Services Controllers
1 or more vCenter Servers connected to the Platform Services Controllers through 1 third-party load balancer
Standard Topology 3: For medium to large deployments with DR, integrations with other VMware products and multiple vCenter Servers, the topology below is the best and recommended option.
1 vSphere Single Sign-On domain
2 vSphere Single Sign-On sites (Prod and DR)
2 or more external Platform Services Controllers per Single Sign-On site (2 in Prod, 2 in DR)
1 or more vCenter Servers with external Platform Services Controllers
1 third-party load balancer per site

vCenter Upgrade / Install

The first thing to migrate or upgrade is the vCenter Server; only after that are the ESXi hosts and VMs migrated. Once the topology is decided, the next thing to work out is the upgrade path from the existing vCenter Server to the new one.
Upgrading from a Windows-based vCenter to the appliance is supported. For a small or medium environment, however, it is often simpler to build a fresh vCenter appliance using the topology that best suits your infrastructure, and recreate the same cluster and standard/distributed switch configuration. For large environments with many distributed switches and port groups, consider an in-place upgrade instead.

Upgrading vCenter server from 5.0 or 5.1 to 6.5 or 6.5 U1

A direct upgrade is not supported, so an intermediate upgrade to 5.5 or 6.0 (any update) is required. During the upgrade, keep in mind that all existing ESXi hosts must remain supported by the vCenter Server version in use.

Upgrading vCenter server from 5.5 or later to 6.5 or 6.5 U1

A direct upgrade is supported, so the upgrade option can be used while deploying the new vCenter Server; it requires a temporary IP address while the data is copied from the old vCenter to the new one.

Upgrading vCenter server from 5.x to 6.7

A direct upgrade is not supported, so an intermediate upgrade to 6.0 (any update) or 6.5 is required. During the upgrade, keep in mind that all existing ESXi hosts must remain supported by the vCenter Server version in use.

Upgrading vCenter server from 6.0 or later to 6.7

A direct upgrade is supported, so the upgrade option can be used while deploying the new vCenter Server; it requires a temporary IP address while the data is copied from the old vCenter to the new one.
Don't forget to check the compatibility of your backup software (and notify its owners), as most backup products integrate with vCenter Server.

ESXi Upgrade / Install

Based on the upgrade path and hardware compatibility, either an ESXi upgrade or a fresh installation can be done. For existing servers an upgrade is usually preferable, since the configuration (vMotion, DNS, IPs and so on) does not have to be redone. For new hardware a fresh installation is, of course, the only approach.
  • ESXi 5.0 or 5.1 (with any update) cannot be upgraded directly to 6.5; an intermediate upgrade to 5.5 or 6.0 is required.
  • ESXi 5.5 or later can be upgraded directly to 6.5 or 6.5 U1.
  • ESXi 5.x cannot be upgraded directly to 6.7; an intermediate upgrade to 6.0 or 6.5 is required.
  • ESXi 6.0 or later can be upgraded directly to 6.7.
New servers: always use the latest OEM-provided custom ESXi image, as it includes all the drivers the server needs. Minor updates and patches can be installed manually after installing ESXi from the OEM custom image.
Old servers being upgraded: the OEM custom image can be used, but in my experience it is better to upgrade using the stock ESXi image from VMware (not the OEM one) and install the supported drivers, updates and patches manually afterwards. The OEM image always installs the latest drivers available, which older hardware may not support according to the VMware compatibility matrices.
So the standard rules are:
  1. Verify the compatibility of the NIC, HBA and other components for the target vSphere version.
  2. Install the supported BIOS and firmware versions on the hardware.
  3. Install/upgrade with the OEM custom image or the stock ESXi image from VMware.
  4. Check that the correct driver versions are present (esxcli software vib list | grep -i driver_name).
  5. Install/update the compatible hardware drivers (esxcli software vib install/update -d path_of_driver).
  6. Install the necessary updates and security patches; I recommend the CLI rather than Update Manager, as the CLI takes only a few seconds (an example follows this list).
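As an example of applying a patch bundle from the CLI (the depot file name and image profile below are placeholders; list the profiles contained in the bundle first and pick the matching one, with the host already in maintenance mode):
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi650-201707001.zip
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi650-201707001.zip -p ESXi-6.5.0-20170702001-standard
reboot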

Distributed switch migration

The distributed switch is a major consideration during migration, as most migrations must happen without downtime.

Fresh Installation of vCenter without Upgrade.

In this case the distributed switch must first be set up in the new vCenter Server. The export and import option for distributed switches makes this easy: export the switch from the old vCenter and import it into the new one, and all the port groups and configuration will appear as-is; otherwise a manual configuration is required.
When migrating ESXi hosts from the old vCenter Server to the new one, the following steps apply:
  1. Select a jump ESXi host in the old vCenter Server (this host will be used to stage all VM migrations).
  2. Remove one physical uplink from the distributed switch on each ESXi host, or at least on the jump ESXi host (in the old vCenter).
  3. Create a standard switch and port groups with the same VLAN configuration as the distributed switch, and add the freed physical uplink to it (a CLI sketch follows this list).
  4. Migrate the VMs to the jump ESXi host using vMotion (old vCenter).
  5. Migrate the VMs from the distributed switch to the standard switch on the jump ESXi host (old vCenter).
  6. Remove the jump host from the distributed switch in the old vCenter.
  7. Add the jump ESXi host (say ESXi 5.5), with the VMs running on it, to the new vCenter Server (say 6.5 U1).
  8. The jump host will be added to the new vCenter (6.5 U1) and will show as disconnected in the old vCenter Server (5.5).
  9. Add the jump host to the distributed switch in the new vCenter.
  10. Migrate all VMs on the jump host from the standard switch to the distributed switch.
  11. Move all VMs from the old ESXi host (say 5.5) to the new ESXi 6.5 hosts (already added to the distributed switch) in the new vCenter Server.
  12. Disconnect the jump host from the new vCenter and add it back to the old vCenter; repeat this cycle until all VMs are moved.
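For step 3, the temporary standard switch and port group can also be created from the ESXi shell. A minimal sketch, assuming the freed uplink is vmnic1 and the VM port group uses VLAN 100 (both names and the VLAN ID are placeholders):
esxcli network vswitch standard add --vswitch-name=vSwitch-Migrate
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch-Migrate
esxcli network vswitch standard portgroup add --portgroup-name=PG-VLAN100 --vswitch-name=vSwitch-Migrate
esxcli network vswitch standard portgroup set --portgroup-name=PG-VLAN100 --vlan-id=100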

vCenter server Upgrade from existing VC 5.x.

In this case there is much less to worry about, as all the existing vCenter Server 5.x information and configuration, including the ESXi hosts, carries over to the new vCenter Server 6.x. The only thing to ensure is that both the old and the new ESXi hosts are added to the distributed switch after the vCenter upgrade.

VM Migration

The most important part of the migration is making sure the VMs stay up and running. Everyone expects zero downtime, and thanks to VMware these migrations are straightforward. In most cases one or both of the scenarios below will apply.

Migrating VM’s from Old vCenter to NEW vCenter ( no VC upgrade)

This is the case where a new vCenter Server is built (not upgraded) while the old vCenter still hosts the ESXi hosts and VMs. As long as the new vCenter supports the ESXi version, an ESXi host can be added to the new vCenter Server with its VMs running. This automatically disconnects the host from the old vCenter and adds it to the new one without interrupting the VMs running on it.
If a distributed switch is in use, first move the VMs from the distributed switch to a standard switch; this is covered in the distributed switch section above.
Note: if the ESXi host being moved to the new vCenter has VMs with RDMs that are clustered with another VM, move both hosts hosting the two clustered RDM VMs one after the other and keep them on the same vCenter Server. Do not leave one clustered RDM VM on the old vCenter and the other on the new vCenter; this can cause storage path flapping issues.

Migrating VM’s between ESXi hosts of Different Versions under same vCenter server.

As long as both the old and the new ESXi hosts are managed by the same vCenter Server, both can access the VM resources: storage, RDM LUNs, network port groups and so on. It is then simply a matter of vMotion from the old ESXi host to the new one, provided vMotion is configured. This way the migration is possible without even a dropped ping.
If a distributed switch is used, both the old and new ESXi hosts need to be added to it. If EVC mode is configured on the cluster hosting the old ESXi host, make sure the same EVC mode is configured on the cluster hosting the new ESXi host if you want to vMotion between them.
Things to verify:
  • The old ESXi host (say ESXi 5.5) and the new ESXi host (say 6.5 U1) can see all datastores and RDM LUNs where the VMs are hosted (CLI checks for this follow the list).
  • The same standard switch port groups exist on the old and new ESXi hosts.
  • The old and new ESXi hosts are joined to the distributed switches the VMs are using.
  • The same EVC mode is configured at the cluster level for the old and new ESXi hosts, for live vMotion.
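The first check (datastore and RDM LUN visibility) can be confirmed from the ESXi shell: run the two commands below on both the old and the new host and compare the lists.
esxcli storage filesystem list    # mounted VMFS datastores
esxcli storage core device list   # SAN LUNs/devices visible to the host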

Migrating VM’s with RDM (Physical / Virtual)

VMs with RDMs can also be migrated without downtime, provided the destination host can see the RDM LUNs and the datastores where the RDM pointer files are stored.
However, please note that if one SCSI controller is shared by multiple RDM LUNs, vMotion is not possible; for example, if RDM LUN 1 is on SCSI 1:0 and RDM LUN 2 is on SCSI 1:1, vMotion will fail. This is why it is always recommended to map each RDM LUN to a different SCSI controller, for example RDM LUN 1 on SCSI 1:0 and RDM LUN 2 on SCSI 2:0.
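To confirm how an RDM is mapped before the move, the pointer VMDK can be queried from the ESXi shell; the path below is only an example, and the vml identifier in the output is truncated:
vmkfstools -q /vmfs/volumes/Datastore1/VM1/VM1_1.vmdk
Disk /vmfs/volumes/Datastore1/VM1/VM1_1.vmdk is a Passthrough Raw Device Mapping
Maps to: vml.0200000000600601601234...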
I hope this post is useful; leave your suggestions and comments below.
