Migrating VMs from VMware on-prem (or GCVE) to Google Compute Engine

When we want to migrate our existing (on-prem) VMware workloads to a cloud, we have two options. One is a lift-and-shift migration to a VMware-as-a-Service solution like GCVE, where VMware HCX can be used to deliver high-performance migrations with minimal or no downtime. The other option is to convert from VMware to a cloud-native format like Google Compute Engine. This article briefly covers the second approach.

The VM format conversion sounds scary, but it is actually a very easy process if you are using the Google Migrate for Compute (M4C) service.

Imagine you have a VM (or a bunch of them) that runs in a vSphere environment (on-prem or on GCVE) and uses an OS supported by M4C. In, let’s say, an hour (depending on the VM size and its data churn, and assuming you have connectivity between your on-prem VMware cluster and the VPC where your M4C service is running, or a Private Service Connection to your GCVE cluster) you can have it running on Google Cloud.

In my example I will use test-aga-vm, a VM running Ubuntu.

What you need to do is set up your M4C environment following the official Google documentation. One of the steps is to enable the Migrate for Compute API, after which you will find your M4C dashboard under “Compute Engine”.
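
If you prefer the command line, the API can also be enabled with gcloud (a minimal sketch; my-project is a placeholder for your project ID, and to the best of my knowledge the service name is vmmigration.googleapis.com):

$ gcloud services enable vmmigration.googleapis.com --project=my-project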

When everything is set on the GCP side (APIs and permissions), the next step is to deploy the M4C appliance on your vSphere cluster. A link to the most recent OVA version is included in the documentation.

The most important part of deploying the OVA is placing it in a network segment that can access googleapis.com and your DNS. The M4C appliance will have to be able to resolve your vCenter FQDN and also the FQDNs of all ESXi hosts in your cluster. It is OK to run it on a vSAN cluster.
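
Before deploying, it is worth verifying the target segment from a test VM placed in it (a sketch using generic tools; the FQDNs below are hypothetical):

$ nslookup vcsa.my-lab.local            # vCenter FQDN must resolve
$ nslookup esxi-01.my-lab.local         # same for every ESXi host in the cluster
$ curl -sI https://www.googleapis.com   # HTTPS reachability to Google APIs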

When the M4C appliance is ready, the only way to SSH into it is with its SSH private key. The SSH public key has to be provided when M4C is deployed.
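
Generating the key pair up front and logging in later could look like this (the key path is just an example; the admin user matches the prompt shown in the transcript below):

$ ssh-keygen -t rsa -f ~/.ssh/m4c_appliance      # public key goes into the OVA deployment wizard
$ ssh -i ~/.ssh/m4c_appliance admin@<appliance-IP>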

After the M4C appliance is powered on, it has to be registered in your project. The registration process looks like this:

admin@migrate-appliance:/m4c/OSS$ m4c register
Please enter vCenter host address: xxx.us-east4.gve.goog
vCenter server SSL certificate fingerprint is xxx Do you approve? [Y/n]Y
Please enter vCenter account name to be used by this appliance: solution-user-05@gve.local
Please enter vCenter account password:xxx
vSphere credentials verified

Please visit this URL to authorize this application: https://accounts.google.com/o/xxx
Enter the authorization code: xxx
This Migrate Connector was registered to Source xxx in Project yyy
Please select project:
xxx
List is longer than 10, truncating list. Please select or type project.
xxx

Please select region:
1. asia-east1
2. asia-south1
...
List is longer than 10, truncating list. Please select or type region.
us-east4

Please supply new vSphere source name (vSphere source format must be only lowercase letters, digits, and hyphens and have a length between 6 and 63) : vcsa-599
Creating new source…

Please select service account: ("new" to create)
1. new
2. xxx
new


Please supply new service account name (service account format must be only lowercase letters, digits, and hyphens and have a length between 6 and 30): migration
Waiting for the Migrate Connector to become active. This may take several minutes…

Registration completed

If you want to check the status of the appliance, use m4c status.

After a successful registration you will find that new service accounts have been created to support migrations and OS customisations.

You will also see, in the Source tab, a list of all the VMs under the registered vCenter.

You can now start a VM replication. After the initial sync you can test-clone your VM (to a sandbox VPC, for example) or do a cut-over. For those two actions you need to provide Migration Details for the VM: the M4C service needs to know where you want the replica to be created, which machine type to use for a test clone or cut-over, and how frequent the replication cycle should be. It looks like this service could be used not only for migrations but also as a DR solution.

If you want to use a service account, this is where you configure it:

A cut-over will never delete the original VM on the vSphere side; it will only power it off. The other activity I observed during replication and cut-over was VM snapshotting. M4C will not reconfigure a VM that runs on vSphere, so in case a cut-over fails, you can always power the original VM back on.

When my cut-over task was completed, I could SSH into my migrated VM from Cloud Shell to evaluate it.
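
For example, from Cloud Shell (the zone is an assumption based on the us-east4 region selected during registration):

$ gcloud compute ssh test-aga-vm --zone=us-east4-a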

M4C does a lot of OS adaptations when converting the format from VMware to Compute Engine, like uninstalling VMware Tools, configuring the NIC to use DHCP, installing Google packages, etc. The new VM will have new IP addresses provided by your VPC. It can also use different CPU and RAM parameters than the ones configured on vSphere, so a migration is also a good moment to evaluate and resize your VM. M4C also offers a VM utilization report that you can run over a longer period of time on VMs that are still running on vSphere, to right-size them for the migration.
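
If you decide to resize after the cut-over, the machine type can also be changed on the stopped instance (a sketch; n2-standard-2 and the zone are example values):

$ gcloud compute instances stop test-aga-vm --zone=us-east4-a
$ gcloud compute instances set-machine-type test-aga-vm --zone=us-east4-a --machine-type=n2-standard-2
$ gcloud compute instances start test-aga-vm --zone=us-east4-a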

How to check your vSAN Health history?

Skyline/vSAN Health is the first place we go when troubleshooting a vSAN cluster. But if a problem appears only from time to time, it can happen that the current vSAN Health report will not indicate any issues.

One of the simplest ways to see historical vSAN Health data is to check your vCenter’s log file:

/var/log/vmware/vsan-health/vmware-vsan-health-summary-result.log
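
For example, to pull out only the checks that were not green at some point (this assumes the summary log records check statuses as green/yellow/red, which may vary between versions):

$ grep -iE "yellow|red" /var/log/vmware/vsan-health/vmware-vsan-health-summary-result.log | less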

vSphere and vSAN 7.0 are GA!

Brand new vSphere and vSAN 7.0 binaries are available to download on my.vmware.com.

Check out this small sneak peek of 7.0, freshly installed on 4 all-flash vSAN hosts. We say goodbye to the old Flash-based web client and welcome VM hardware version 17 with a watchdog timer (resetting the VM if the guest OS is no longer responding) and support for Precision Time Protocol, the newly rewritten workload-centric DRS with scalable shares, vSAN memory consumption dashboards, and many more…

vCenter 6.7 U2 -> U3 offline update via VAMI

U2 to U3 is a small update, just a patch, easily done via VAMI (vCenter’s Virtual Appliance Management Interface: https://appliance-IP-address-or-FQDN:5480). You just have to check for recent updates online, and you can start patching immediately or schedule it for later.

Online… but what about offline bundles? Usually we go to my.vmware.com and download an .iso. The .iso available on the Product page will not be useful for a small patch; it is prepared for larger updates like 6.5 to 6.7. For this kind of patch you should go to the Patches page and download the .iso there.

This one includes the word “patch-FP” (Full Patch) in its name and needs to be mounted as a CD-ROM on the vCenter VM.

Now we can check for updates using the CD-ROM option, and our new patch is available for installation.

And the update is in progress…
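
If you prefer the command line over VAMI, the same mounted ISO can, as far as I know, also be staged and installed from the appliance shell:

$ software-packages stage --iso --acceptEulas   # stage the patch from the attached ISO
$ software-packages list --staged               # review the staged packages
$ software-packages install --staged            # start the update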

vCenter Server Appliance on a one-host vSAN

Usually we deploy vCenter when we have a datastore with enough free space available. In the case of a brand-new vSAN cluster installation, we need (or at least we should have) a vCenter to activate vSAN, but we need a vsanDatastore to install vCenter on. A classic chicken-and-egg situation.

Starting from vSAN 6.6 we do not have this issue any more; you can find more details in the Release Notes for that version:

"You can create a vSAN cluster as you deploy a vCenter Server Appliance, and host the appliance on that cluster. The vCenter Server Appliance Installer enables you to create a one-host vSAN cluster, with disks claimed from the host. vCenter Server Appliance is deployed on the vSAN cluster"

How does it work? When you run the vCenter Server Appliance Installer, in step 7 you are asked to select a datastore, and you can pick the option to “Install on a new vSAN cluster containing the target host”.

This option will create a one-host vSAN cluster and install vCenter on it, with an SPBM storage policy of FTT=0.

Step 8 will also allow us to claim disks for vSAN on this particular host.

After vCenter is deployed on the one-host vSAN cluster, there are a few things we can check to confirm vSAN is running fine:

esxcli vsan cluster get

Sub-Cluster Member Count: 1 indicates a one-host cluster.
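
A quick way to check just that field is a plain grep over the command’s output:

$ esxcli vsan cluster get | grep "Sub-Cluster Member Count"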

Using esxcli vsan debug object list we can verify that the VMDKs that belong to vCenter have the FTT=1 SPBM policy, but also that Force Provisioning is enabled. That means we have just one copy of each VMDK on this one-host cluster, which is not compliant with FTT=1, so the health state of the objects is reduced-availability-with-no-rebuild. There is no other ESXi host in the cluster to hold a second copy, and no third to host a witness, so there is no way to satisfy FTT=1 for now.
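
For an aggregate view of object health states there is also a summary variant (available in vSAN 6.6 and later, as far as I remember):

$ esxcli vsan debug object health summary get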

Logging into the web GUI of the ESXi host, we can see that the vsanDatastore was created next to the other VMFS datastore.

Looking into the Health tab, we can verify the state of our objects, which matches the output of the esxcli vsan commands.

It might be interesting to know that a one-host vSAN cluster does not have a vmknic configured.
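
You can confirm this from the host itself; on a freshly bootstrapped one-host cluster the list comes back empty:

$ esxcli vsan network list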

After vCenter is deployed we definitely need to finish our vSAN configuration; vCenter should not run with FTT=0 longer than necessary.

Next steps include:

  • adding vSAN vmknics (see the CLI sketch below)
  • adding remaining hosts to vSAN cluster
  • configuring disk groups
  • configuring SPBM policies
  • applying licenses
  • configuring HA

and we are reminded to do so at the end of our installation:
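
As a sketch of that first step, tagging a vmknic for vSAN traffic can also be done from the CLI on each host (vmk1 is a hypothetical interface):

$ esxcli vsan network ipv4 add -i vmk1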