Migrating and upgrading physical systems is a nightmare. With VMware vSphere, however, it is not that complicated: proper planning will lead to a successful migration from vSphere 5.x to 6.x without any downtime for the VMs. Some cases do require VM downtime, and those are described in this post. I recommend going through the complete post, as the same process and approach applies to all vSphere migrations irrespective of version.
Support for vSphere 5.0 and 5.1 already ended on August 24, 2016, and vSphere 5.5 support ends on September 19, 2018. Yet many environments are still running 5.0 and 5.1. It is high time to upgrade to the latest vSphere 6.x. This post details every step to consider for a successful migration.
Know the Existing Environment
For a successful migration we need to know the existing environment completely, so gathering the existing vSphere environment details is very important. Thanks to RVTools, this takes only a couple of minutes. After exporting the data from RVTools, analyze it and collect the information below (a few host-side spot checks follow the list).
- Existing vCenter version and build number
- Existing ESXi host versions and build numbers
- Existing server hardware make and model, along with NIC and HBA card details (important if the same hardware is reused for the upgrade)
- Standard or distributed switches in use, with their port groups, uplinks, and VLAN details
- Cluster information with HA and DRS rules
- EVC mode information and the maximum EVC mode the servers support
- Every VM's name, OS, IP, port group, and datastore details
- Hardware version of all VMs and VMware Tools installation status
- VM RDM LUN details, with the SCSI ID mapping for each LUN, RDM type (physical/virtual), and pointer location
- USB device mappings for VMs, if any
- Integrations with vCenter Server such as backup, SRM, and others
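If you want to cross-check the RVTools export directly on a host, a handful of stock ESXi shell commands cover most of the host-side items; a quick sketch (run on each host over SSH):
vmware -vl                               # ESXi version and build number
esxcli system version get                # same information in script-friendly form
esxcli network vswitch standard list     # standard vSwitches with uplinks and port groups
esxcli network vswitch dvs vmware list   # distributed switch membership of this host
esxcli storage core device list          # attached storage devices, useful for RDM LUN checks
vim-cmd vmsvc/getallvms                  # VMs registered on this host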
Verify compatibility and upgrade matrix
Verifying VMware product compatibility and hardware compatibility is very important.
VMWare products compatibility can be verified here.
Hardware compatibility can be verified here.
Hardware & Storage compatibility
It's very important to check server, storage, NIC card, and HBA card compatibility before planning the upgrade or implementation of ESXi. NIC and HBA compatibility is covered in the drivers section below.
Verifying Hardware compatibility & BIOS firmware
Open the VMware Compatibility Guide, select the Partner Name (vendor), enter the server model in the keyword field, then click Update and View Results (a host-side command to confirm the exact model string follows below).
Search for the exact server model with CPU as shown below and check whether the desired ESXi version is compatible. In the example below, one model is supported up to 6.5 U1 while the other supports 6.7 as well.
Click on the ESXi version to find the supported hardware firmware details.
The recommended BIOS and hardware firmware details will be shown as below. Install them before installing ESXi.
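To get the exact vendor and model string to type into the keyword field, you can query an existing host directly over SSH (a stock ESXi command):
esxcli hardware platform get    # shows Vendor Name, Product Name, and Serial Number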
Verifying Storage and SAN devices compatibility and Drivers
Open the VMware Compatibility Guide and select Storage/SAN as shown below.
Select the vendor, enter the storage model in the keyword field, and click Update.
Search for the exact storage model, select the storage, and click on the ESXi version.
Note: if an ESXi version is not shown in the list, it is either not supported or not yet validated by VMware.
All the supported drivers and firmware versions for the storage will be listed.
vCenter to ESXi compatibility
vCenter Server compatibility with the ESXi hosts is very important for a migration, because in most cases the migration happens while VMs are up and running on the existing ESXi hosts. vCenter Server must always run a version equal to or later than the ESXi hosts it manages, so always upgrade vCenter first and the hosts afterwards.
Supported vCenter Upgrade Path
- vCenter 5.0 or 5.1 (with updates) cannot be upgraded directly to 6.5; an intermediate upgrade to 5.5 or 6.0 is required.
- vCenter 5.5 or later can be upgraded directly to 6.5 or 6.5 U1.
- vCenter 5.x cannot be upgraded directly to 6.7; an intermediate upgrade to 6.0 is required.
- vCenter 6.0 or later can be upgraded directly to 6.7.
Supported ESXi Upgrade Path
- ESXi 5.0 or 5.1 (with updates) cannot be upgraded directly to 6.5; an intermediate upgrade to 5.5 or 6.0 is required.
- ESXi 5.5 or later can be upgraded directly to 6.5 or 6.5 U1.
- ESXi 5.x cannot be upgraded directly to 6.7; an intermediate upgrade to 6.0 is required.
- ESXi 6.0 or later can be upgraded directly to 6.7.
Decide the vSphere 6.x version to be upgraded to
Based on the available hardware (new or reused) and its compatibility as verified above, decide the target vSphere 6.x version and build number. For example, if the hardware is compatible with ESXi 6.5 U1, then the vCenter and ESXi upgrades need to be planned for vSphere 6.5 U1; the detailed steps are listed below.
Note that not just the server itself but also the NIC and HBA cards must be compatible, whether you are reusing existing hardware or buying new.
vSphere 6.x License and Support
VMware vSphere licenses for ESXi and vCenter are version-based: if you are an existing customer with ESXi 5.x and vCenter 5.x licenses, they cover only 5.x (5.0, 5.1, 5.5). However, if you already have an active support agreement (SA), ESXi and vCenter 5 licenses can be upgraded to 6.x; contact your local VMware partner for support.
Supported Drivers & Firmware for Hardware
Once the vSphere version for the available hardware is decided, identify all the necessary drivers for the NIC, FCoE, and FC (HBA) cards as well as multipathing.
Here we will explain how to find the exact driver and download it for the targeted ESXi version.
Support and Driver for Network NIC Cards
Step 1: Run the below command to list all the NIC cards available on the host.
esxcli network nic list
Example:
[root@localhost:~] esxcli network nic list
Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  -----------------------------------------------------------
vmnic0 0000:01:00.0 bnx2 Up Down 0 Half a4:ba:db:0e:cc:9c 1500 QLogic Corporation QLogic NetXtreme II BCM5716 1000Base-T
vmnic1 0000:01:00.1 bnx2 Up Up 1000 Full a4:ba:db:0e:cc:9d 1500 QLogic Corporation QLogic NetXtreme II BCM5716 1000Base-T
Step 2: Get the Vendor ID (VID), Device ID (DID), Sub-Vendor ID (SVID), and Sub-Device ID (SDID) using the vmkchdev command:
vmkchdev -l |grep vmnic#
Example:
[root@localhost:~] vmkchdev -l |grep vmnic0
0000:01:00.0 8086:10fb 103c:17d3 vmkernel vmnic0
[root@localhost:~]
Vendor ID (VID) = 8086
Device ID (DID) = 10fb
Sub-Vendor ID (SVID)= 103c
Sub-Device ID (SDID) = 17d3
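Tip: running vmkchdev without a specific adapter number lists the IDs for every NIC at once:
vmkchdev -l | grep vmnic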
Step 3: Get the driver and firmware in use for the NIC card
esxcli network nic get -n vmnic#
Example:
[root@localhost:~] esxcli network nic get -n vmnic0
Advertised Auto Negotiation: true
Advertised Link Modes: 10BaseT/Half, 10BaseT/Full, 100BaseT/Half, 100BaseT/Full, 1000BaseT/Full
Auto Negotiation: true
Cable Type: Twisted Pair
Current Message Level: 0
Driver Info:
Bus Info: 0000:01:00.0
Driver: bnx2
Firmware Version: 5.0.13 bc 5.0.11 NCSI 2.0.5
Version: 2.2.4f.v60.10
Link Detected: false
Link Status: Down
Name: vmnic0
PHYAddress: 1
Pause Autonegotiate: true
Pause RX: false
Pause TX: false
Supported Ports: TP
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: true
Transceiver: internal
Virtual Address: 00:50:56:56:6f:75
Wakeon: MagicPacket(tm)
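To capture the driver and firmware for every NIC in one pass instead of per adapter, a small loop works in the ESXi busybox shell; a minimal sketch:
for nic in $(esxcli network nic list | grep ^vmnic | cut -d' ' -f1); do
  echo "=== $nic ==="
  esxcli network nic get -n "$nic" | grep -E 'Driver:|Firmware Version:'
done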
Step 4: Find the supported driver and download it
Select IO Devices in the VMware Compatibility Guide, enter the VID, DID, SVID, and SDID obtained in Step 2, then click Update and View Results. All the supported ESXi versions for the NIC card will be shown as below.
Vendor ID (VID) = 8086
Device ID (DID) = 10fb
Sub-Vendor ID (SVID)= 103c
Sub-Device ID (SDID) = 17d3
Click on the required ESXi version shown beside the NIC driver, say 6.5 U1.
Expand the driver version and the link to download the driver will be shown as below.
Support and Driver for Storage HBA Cards
Step 1: Get the Host Bus Adapter driver currently in use
# esxcfg-scsidevs -a
The output will show something like vmhba0 mptspi or vmhba1 lpfc.
Step 2: Get the HBA driver version currently in use
# vmkload_mod -s HBADriver |grep Version
For example, run this command to check the mptspi driver:
# vmkload_mod -s mptspi |grep Version
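To collect the driver and its version for every HBA in one pass, the two commands can be combined; a minimal sketch (it relies on the vmhba name and driver name being the first two columns of the esxcfg-scsidevs -a output):
esxcfg-scsidevs -a | while read hba drv rest; do
  echo "$hba: $drv $(vmkload_mod -s $drv | grep -i version)"
done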
Step 3: Get HBA Vendor ID (VID), Device ID (DID), Sub-Vendor ID (SVID), and Sub-Device ID (SDID) using the vmkchdev command:
vmkchdev -l |grep vmhba#
Example:
[root@localhost:~] vmkchdev -l |grep vmhba0
0000:01:00.0 1077:2031 0000:0000 vmkernel vmhba0
[root@localhost:~]
Vendor ID (VID) = 1077
Device ID (DID) = 2031
Sub-Vendor ID (SVID)= 0000
Sub-Device ID (SDID) = 0000
Step 4: Find the supported driver and download it
Select IO Devices in the VMware Compatibility Guide, enter the VID, DID, SVID, and SDID obtained in Step 3, then click Update and View Results. All the supported ESXi versions for the HBA card will be shown as below.
Vendor ID (VID) = 1077
Device ID (DID) = 2031
Sub-Vendor ID (SVID)= 0000
Sub-Device ID (SDID) = 0000
Click on the required ESXi version shown beside the HBA driver, say 6.5 U1.
Verify the VID and the other IDs, select the ESXi version, and expand the driver; the download link will be given as shown below.
How to Install/Update the Driver on ESXi
Upload the driver to the ESXi host (one way to stage it is shown below), then use the commands that follow: install if the driver is not present, or update if an older version is present. In some cases you might need to remove the old driver first, if the host already has a newer driver version than the supported one. All the commands are given below.
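To stage the driver bundle on the host you can use the datastore browser in the vSphere client, or scp from an admin machine if SSH is enabled on the host (the host name and paths here are illustrative):
scp driver-bundle.zip root@esxi01:/vmfs/volumes/datastore1/drivers/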
Remove the existing VIB:
Find the VIB name with the below command:
esxcli software vib list
Remove the VIB using the name obtained above (an example follows):
esxcli software vib remove --vibname=nameofvib
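For example, to find and remove an old bnx2 NIC driver (the VIB name net-bnx2 is illustrative; always take the exact name from the list output):
esxcli software vib list | grep -i bnx2
esxcli software vib remove --vibname=net-bnx2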
Update the VIB driver using the below command:
esxcli software vib update -d "/vmfs/volumes/Datastore/DirectoryName/PatchName_VIBname.zip"
Install the VIB driver using the below command:
esxcli software vib install -d "/vmfs/volumes/Datastore/DirectoryName/PatchName_VIBname.zip"
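VIB changes should be done with the host in maintenance mode, and a reboot is usually required afterwards. A cautious sequence sketch, after evacuating or powering off the VMs on the host (the --dry-run flag reports what would change without applying anything):
esxcli system maintenanceMode set --enable true
esxcli software vib update -d "/vmfs/volumes/Datastore/DirectoryName/PatchName_VIBname.zip" --dry-run
esxcli software vib update -d "/vmfs/volumes/Datastore/DirectoryName/PatchName_VIBname.zip"
reboot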
Migration Approach & Steps
vCenter Upgrade / Install
The vCenter appliance has come a long way and is very stable now, so there is no need to rely on the Windows-based vCenter any more; the vCenter appliance can be used without any doubt. However, depending on the environment, its size, and integrations with other VMware products, the vCenter topology will differ. All the supported vCenter topologies can be found here, but the three most commonly used topologies are highlighted below.
vCenter Topologies
Standard Topology 1: For small deployments with 5-10 hosts and no integrations with other VMware products such as NSX or vRA, the embedded topology below is the best choice.
1 Single Sign-On domain
1 Single Sign-On site
1 vCenter Server with Platform Services Controller on same machine
Limitations
Does not support Enhanced Linked Mode
Does not support Platform Service Controller replication
Standard Topology 2: For medium to large deployments with integrations with other VMware products, or with multiple vCenter Servers for different purposes (for example, one vCenter for production hosts and another for VDI hosts), the below is the best topology.
1 Single Sign-On domain
1 Single Sign-On site
2 or more external Platform Services Controllers
1 or more vCenter Servers connected to Platform Services Controllers using 1 third-party load balancer
Standard Topology 3: For medium to large deployments with DR, integrations with other VMware products, and multiple vCenter Servers, the below is the best and recommended topology.
1 vSphere Single Sign-On domain
2 vSphere Single Sign-On sites (Prod and DR)
2 or more external Platform Services Controllers per Single Sign-On site (2 in Prod, 2 in DR)
1 or more vCenter Servers with external Platform Services Controllers
1 third-party load balancer per site
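Whichever topology you choose, the appliance can be deployed with the GUI installer or unattended with the CLI installer shipped on the vCSA ISO. A minimal sketch, assuming the vSphere 6.5 ISO layout and a JSON deployment template you have already filled in (the template file name here is illustrative; sample templates ship under vcsa-cli-installer/templates/install on the ISO):
vcsa-cli-installer\win32\vcsa-deploy.exe install --accept-eula C:\temp\my_embedded_vcsa.json
Run vcsa-deploy with --help first to confirm the verification and logging options available in your installer version.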