A second host has been added to a one-node vSphere cluster which has no shared storage, and EVC cannot be enabled. Details as follows:
- A vSphere environment consists of an existing cluster containing a single host running ESXi 6. On this host is (amongst other things) the vCenter Server Appliance VM.
- A second ESXi 6 host is added and joined to the vCenter. This host has older hardware with a lower EVC level (rather than the usual situation where the newer box has the newer hardware).
- EVC cannot be enabled on the cluster because the old host lacks the capabilities of the new host, and the first (newer) host has running VMs which may be using the enhanced capabilities of the higher EVC level; one of these is the vCSA. The attempt fails with:
“The host cannot be admitted to the cluster’s current Enhanced vMotion Compatibility mode. Powered-on or suspended virtual machines on the host may be using CPU features hidden by that mode.”
This also means the EVC level on the existing cluster cannot be lowered by simply powering off all the VMs: powering off the vCSA takes vCenter down with it, and EVC cannot be changed without vCenter.
- The vCSA cannot be vMotioned to the old host because this requires EVC to be enabled.
“The virtual machine requires hardware features that are unsupported or disabled on the target host”
- There is no shared storage. VMs are stored on local datastores. The Knowledgebase article “How to enable EVC in vCenter Server (1013111)” has a solution, but this doesn’t (as far as I can tell) work without shared storage: without a datastore visible from both hosts, the VM cannot be shut down on one host and brought back up on a second host sitting in a new cluster.
The problem boils down to needing to cold migrate a VM between ESXi hosts without using shared storage or vCenter. The following solutions came to mind.
- Create some shared storage (possibly using an NFS share on a laptop temporarily) and follow the procedure shown in KB1013111
- Power down the vCSA, use the host web client to download the files from the datastore to a laptop, then upload them to the less-able host. Power it on and set EVC on the cluster.
- Dump the vCSA and set up a replacement instance on the less-able host, then reconfigure everything. The “Start Again” option.
- Use SCP to do a host-host local datastore transfer of the powered down and unregistered vCSA Virtual Machine files
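For completeness, option 1’s temporary NFS datastore could be mounted on both hosts from the ESXi shell with something like the below. The export host, share path and volume name here are placeholder values, not from my lab:

```shell
# Mount a temporary NFS export (e.g. served from a laptop) as a datastore.
# 192.168.0.50, /export/tempds and tempNFS are hypothetical values.
esxcli storage nfs add --host=192.168.0.50 --share=/export/tempds --volume-name=tempNFS

# Confirm it mounted
esxcli storage nfs list

# When finished with KB1013111's procedure, remove it again
esxcli storage nfs remove --volume-name=tempNFS
```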
My Chosen Solution
This is what I tried and tested, and it worked in my environment. Step-by-step instructions follow in case anyone else finds themselves in this predicament (usual disclaimer applies).
Option 4: Use SCP to do a host-host local datastore transfer of the powered down and unregistered vCSA Virtual Machine files.
Rough steps – “First Host” is the existing box with newer hardware, “Second Host” is the box with older hardware being added:
- Using the vCSA, set up a new cluster containing just the second host (the one with older hardware) and enable EVC at the appropriate level.
- Enable Secure Shell access on both hosts
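If you happen to be at the console rather than the web client, SSH can also be toggled from the ESXi Shell with vim-cmd (same effect as the UI switch):

```shell
# Enable and start the SSH daemon (equivalent to the web client toggle)
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# Later, to turn it back off:
vim-cmd hostsvc/stop_ssh
vim-cmd hostsvc/disable_ssh
```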
- Shut down all VMs on the first host, including the vCSA
- Remove the vCSA from inventory (Unregister) using the web client on the first host
- SSH into second host
- Enable outbound SCP through the firewall with
esxcli network firewall ruleset set -e true -r sshClient
(thanks to http://jim-zimmerman.com/?p=723 for that snippet)
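Before attempting the copy, you can confirm the ruleset took effect (exact output format may vary by build):

```shell
# Should list sshClient with Enabled set to true
esxcli network firewall ruleset list | grep -i sshClient
```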
- Use SCP to copy VM files from local datastore on first host to local datastore on second host.
For example, in an SSH session on the second host, something like this:
scp 'firstname.lastname@example.org:/vmfs/volumes/Datastore2/LABVC1/*.*' /vmfs/volumes/datastore1/LABVC1/
- Connect to the web client on the second host and register the copied VMX file in the inventory
- Power on the vCSA VM. When prompted, answer “I moved it”
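If you’d rather stay in the SSH session, the registration can also be done with vim-cmd; the path below matches the scp target used earlier, so adjust to suit your datastore and VM folder names:

```shell
# Register the copied VM with the second host's inventory;
# prints the new VM's inventory ID on success
vim-cmd solo/registervm /vmfs/volumes/datastore1/LABVC1/LABVC1.vmx
```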
- Wait for vCSA to spin up then move the first host into the cluster with the second host.
- Tidy up: remember to go back and delete the old copy of the vCSA from the datastore on the first host, and disable SSH on both hosts if it’s not required. The vCSA can be Storage vMotioned to re-thin its disks if they inflated during the SCP operation.
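On the re-thinning point: if a Storage vMotion isn’t convenient, vmkfstools can clone a disk back to thin format while the VM is powered off. The output filename here is illustrative:

```shell
# Clone the inflated disk to a new thin-provisioned copy (VM must be powered off)
vmkfstools -i LABVC1.vmdk -d thin LABVC1-thin.vmdk
# After verifying the clone, point the VMX at the new disk, then delete the old one
```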