vSAN Magic

Some time ago I had booth duty during a conference. My task was to present a VxRail demo. If you are familiar with it, you know that after a successful installation of the whole VMware stack, you get the Hooray page.

And vSAN magic was so strong that even Geralt of Rivia couldn’t resist it. And if you are familiar with The Witcher, you also know he was generally immune to magic 😉

During Proof of Concept tests, when we run various complex failure scenarios, we might be lucky enough to see some of the vSAN magic too… 😉

For me, Intelligent Rebuilds are pure magic. I really like the smart way vSAN handles resync, always trying to rebuild as little as possible. Imagine we have a disk group failure in a host. After the Object Repair Timer expires, vSAN starts to rebuild the affected components on other hosts. Do you know what happens when the failed disk group comes back in the middle of this process? vSAN calculates what will be more efficient: updating the existing components or building new ones.
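As a side note, the delay vSAN waits before it starts rebuilding is exposed on each host as the VSAN.ClomRepairDelay advanced setting (newer versions also manage it cluster-wide from vCenter as the Object Repair Timer). A minimal check from the ESXi shell, assuming the default has not been changed:

esxcfg-advcfg -g /VSAN/ClomRepairDelay
# expected to return the repair delay in minutes, 60 by default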

In the example below I have a vSAN cluster that consists of 4 hosts. Physical disk placement shows the components of the VMDK objects and all objects are ok.

When I introduce a disk group failure on host number 3 (esx7-3), disk placement reports that some components are absent but the objects are still available, because those VMDKs use an FTT=1 mirror policy (two copies and a witness).

When the rebuild process kicks in, we can observe how vSAN resynchronizes objects:

In the meantime I put the disk group of the esx7-3 host back in. And here it is: for a brief moment we can see two components in the Resync view. One is the new component on esx7-1 that still has 11.07 GB to resync, and the other is the old one on esx7-3 that has only 262.19 MB of data left to resync.

A couple of seconds later the resync process ends, because vSAN chooses to resync the old component, which makes the whole process much more efficient.
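If you prefer the ESXi shell to the vCenter Resyncing Objects view, recent vSAN releases expose the same information under the esxcli vsan debug namespace. A quick sketch (command names as I recall them, verify on your build if in doubt):

esxcli vsan debug resync summary get
esxcli vsan debug resync list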

Ok, ok, this is not magic but it is awesome anyway 😉

vSAN Disk Fault Injection

Remote Proof of Concept testing seems to be gaining in popularity recently. The major difference between on-site and remote testing is access to the hardware, for example to test a drive unplug or a physical network failure. For disk failure testing in a vSAN cluster I use the vSAN Disk Fault Injection script that is available on ESXi. There is no need to download anything, it is there by default (check the /usr/lib/vmware/vsan/bin path), but use the script for POC/homelab purposes only.

We need a device ID to run the script, and we can test a cache or a capacity drive of the chosen disk group. In the example below I picked mpx.vmhba2:C0:T0:L0, which was a cache drive (Is Capacity Tier: false).

You can use esxcli vsan storage list for that (a quick way to filter its output is sketched a bit further down):

Or check in the vCenter console under Storage Devices:

Or under Disk Management:
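If you would rather stay in the shell, filtering the esxcli output is usually the fastest way to match a device ID with its tier. A small sketch, where the grep pattern simply matches the field names printed by the command:

esxcli vsan storage list | grep -iE "device:|is capacity tier"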

python vsanDiskFaultInjection.pyc has the following options:

I am using -u for injecting a hot unplug.
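Put together for my cache drive, the call looked more or less like this (I am assuming the device is passed with the -d switch, which is how I recall the script’s usage; run it from /usr/lib/vmware/vsan/bin or give the full path):

python /usr/lib/vmware/vsan/bin/vsanDiskFaultInjection.pyc -u -d mpx.vmhba2:C0:T0:L0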

/var/log/vmkernel.log is the place where you can verify the disk status:
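To follow it live while injecting the fault, tailing the log and filtering for the device ID is enough:

tail -f /var/log/vmkernel.log | grep -i mpx.vmhba2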

vSAN -> Disk Management will also show what is going on with a disk group that faced a drive failure.

And now we can observe the status of the data and the process of resyncing objects due to “compliance”.

After we are done with the testing, a simple rescan for new storage devices on the host will solve the issue.
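The rescan can be triggered from the vCenter UI or straight from the host shell, for example:

esxcli storage core adapter rescan --all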

Disabling vSAN kernel module

When you work with nested vSAN homelab installations that constantly suffer power losses and network issues, you get to know tons of useful troubleshooting tricks ;-). vSAN data seems to survive all of these unexpected failures, it is just the cluster services that sometimes need a little help. But remember: feel free to explore new tricks in your homelab, but always consult Technical Support when you are not sure about the results of a command you want to use in your production environment.

Recently I ran into an issue in my lab and I wanted to see if it was vSAN related. There is an option in ESXi to boot a host with selected modules disabled. When the host boots, you have to press Shift+O to be able to edit the boot options and disable modules.

Here is how I disabled the vsan module:

jumpstart.disable=vsan,lsom,plog,virsto,cmmds

And how to verify if the module is loaded:

esxcli system module list
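Filtering the list makes the check quicker; after booting with the jumpstart option above I would expect the vsan entry to show as not loaded (or to be missing entirely):

esxcli system module list | grep -i vsan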

vCenter recognized that the host in the cluster does not have its vSAN service enabled.

How do you make the host load the vSAN module back? Simply by restarting it. Although this host did not have the vSAN module loaded at that moment, it was still in my vSAN cluster. The nice thing is that I got an additional notification from vCenter that I had a partition in my cluster before the restart. Good to know…?

esxcli vsan cluster unicastagent list

This may often happen in nested vSAN environments in our home labs. We play with networking, remove vSAN kernel ports, put vCenter down, remove hosts from the vSAN cluster… and then there is this one step too far that results in all our objects becoming inaccessible, including vCenter. To be able to access the data (it is stored safely on the disk groups) we need to re-create the cluster.

How can this be done without vCenter? vSAN works fine when vCenter is down, but what happens when vCenter IS actually down and the cluster is broken or needs to be reconfigured?

vSAN Health in the ESXi web interface is a good place to start assessing the “damage”. If all of the hosts are isolated, each of them will be the master of its own single-node vSAN cluster. If we do not see any other hosts in the Hosts tab, it means the host does not see any of its neighbors on the vSAN network.

What we can do next is ssh to all ESXi hosts and check the cluster status with the command: esxcli vsan cluster get.

This will confirm that the hosts are isolated or will help us determine how the cluster is partitioned.
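The two fields I look at first are the node state and the member count; on a fully isolated host I would expect output roughly like the commented lines below (field names as printed by esxcli vsan cluster get, values are simply what an isolated node typically reports):

esxcli vsan cluster get | grep -E "Local Node State|Sub-Cluster Member Count"
#   Local Node State: MASTER
#   Sub-Cluster Member Count: 1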

vmkping -I vmkX x.x.x.x will always help us check whether this is a network problem of the nested host. In this scenario we assume the network works fine: pings are successful, but the nested hosts somehow cannot form the cluster.
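When jumbo frames are configured on the vSAN network, it is also worth testing with the packet size and the don’t-fragment flag, for example (the address below is just a placeholder for a neighbor’s vSAN vmk IP):

vmkping -I vmk2 -s 1472 -d 192.168.100.11
vmkping -I vmk2 -s 8972 -d 192.168.100.11   # only if MTU 9000 is set end to end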

It is vCenter’s role to inform the hosts about their vSAN neighbors when we form the cluster, but in this case we need to do it manually.

We need to “inform” the hosts about their neighborhood (vSAN uses unicast). On the screen below we see 4 vSAN 7.0 hosts with vmk2 tagged for vSAN traffic.

Every host should have a list of the other hosts in the cluster. We can check it using esxcli vsan cluster unicastagent list.

If the cluster runs fine, this command shows the complete list of neighbors from a single host’s perspective. Here we can see esxi-13 seeing all three other hosts on the vSAN network on vmk2.

On the screen below we can see that the host esxi-10 sees only esxi-11 and esxi-12.

Assuming the network is fine and that vCenter is down and won’t help us with this issue, we need to fill the gaps in the unicastagent lists manually. Just remember: never add the IP of the host whose table is being configured. Here is the command we have to use:

esxcli vsan cluster unicastagent add -t node -u <Host_UUID> -U true -a <Host_VSAN_IP> -p 12321

For esxi-10 we need esxi-11, 12 & 13 on the list, for esxi-11 it will be esxi-10, 12, 13, and so on.
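As a sketch, this is roughly how filling in one missing neighbor would look for esxi-10 (the UUID and IP below are made up for illustration; take the real node UUID from esxcli vsan cluster get on each neighbor):

# on esxi-11: note its vSAN node UUID
esxcli vsan cluster get | grep "Local Node UUID"

# on esxi-10: add esxi-11 to the list, then repeat for esxi-12 and esxi-13
esxcli vsan cluster unicastagent add -t node -u 5e9b1a2c-aaaa-bbbb-cccc-001122334455 -U true -a 192.168.100.11 -p 12321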

If the lists are complete, the cluster should instantly re-form and the objects should become available again. Check out the Sub-Cluster Member Count: it was 1 and now it is 4.

The cluster is formed back again and vCenter should be starting.