Which gateway is used by vSAN kernel by default?

vSAN, unlike vMotion, does not have a dedicated TCP/IP stack. This means it uses the default gateway of the default TCP/IP stack.

In clusters where vSAN uses a single L2 domain this is not a problem. Where a cluster spans multiple L2 domains (a stretched cluster, a dedicated L2 domain per site, or a cluster spanning racks in a leaf-and-spine topology), we need to define static routes to reach the other L2 domains.

It is important to know that entering a dedicated gateway address for the vSAN network ("Override default gateway for this adapter") does not change the routing table on the ESXi host:

ESXi attempts to route all traffic through the default gateway of the default TCP/IP stack (Management) instead.

So far, the only way to route vSAN traffic via a dedicated gateway is to create a static route using this command:

esxcli network ip route ipv4 add --gateway IPv4_address_of_router --network IPv4_network_address/prefix
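For example, assuming (purely for illustration) that the local vSAN vmkernel port sits in 192.168.10.0/24 and the remote site's vSAN subnet 192.168.20.0/24 is reachable through the router 192.168.10.1, the route would look like this:

esxcli network ip route ipv4 add --gateway 192.168.10.1 --network 192.168.20.0/24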

Other useful commands:

esxcli network ip route ipv4 list

esxcli network ip route ipv4 remove -n network_ip/mask -g gateway_ip
esxcfg-route -l

Let’s not forget the most important one after every network change: vmkping.

If you have Jumbo Frames configured in your environment, run vmkping with -d (disable fragmentation) and -s (size).
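A minimal sketch, assuming jumbo frames (MTU 9000) and a vSAN vmkernel port named vmk2 (both placeholders for your environment); 8972 bytes is the largest payload that fits into a 9000-byte MTU once the IP and ICMP headers are added:

vmkping -I vmk2 -d -s 8972 192.168.20.11

If this ping fails while a standard-size vmkping succeeds, the MTU is most likely misconfigured somewhere along the path.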

vCenter 6.7 U2 -> U3 offline update via VAMI

U2 to U3 is a small update, just a patch, easily done via VAMI (vCenter’s Virtual Appliance Management Interface: https://appliance-IP-address-or-FQDN:5480). You just check for recent updates online and can start patching immediately or schedule it for later.

Online… but what about offline bundles? Usually we go to my.vmware.com and download an .iso. The .iso available on the Product page will not help with a small patch; it is meant for larger upgrades such as 6.5 to 6.7. For this kind of patch you should go to the Patches page and download the .iso from there.

This one includes the word “patch-FP” (Full Patch) in its name and needs to be mounted as a CD-ROM on the vCenter VM.

Now we can check for updates using the CD-ROM option, and our new patch is available for installation.

And the update is in progress…
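As a side note, the same staged ISO can also be applied from the appliance shell instead of VAMI. This is only a sketch based on the software-packages utility; double-check the exact syntax against VMware's patching documentation for your build:

software-packages stage --iso --acceptEulas
software-packages list --staged
software-packages install --staged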

vCenter Server Appliance on a one-host vSAN

Usually we deploy vCenter when a datastore with enough free space is already available. With a brand-new vSAN cluster installation we need (or at least should have) a vCenter to enable vSAN, but we need a vsanDatastore to install vCenter on. A classic chicken-and-egg situation.

Starting with vSAN 6.6 we no longer have this issue; you can find more details in the Release Notes for this version:

"You can create a vSAN cluster as you deploy a vCenter Server Appliance, and host the appliance on that cluster. The vCenter Server Appliance Installer enables you to create a one-host vSAN cluster, with disks claimed from the host. vCenter Server Appliance is deployed on the vSAN cluster"

How does it work? When you run the vCenter Server Appliance Installer, in step 7 you are asked to select a datastore, and you can pick the option “Install on a new vSAN cluster containing the target host”.

This option will create a one-host vSAN cluster and install vCenter on it – effectively with FTT=0 protection (only a single copy of the data).

Step 8 then also allows us to claim disks on this particular host for vSAN.

After vCenter is deployed on the one-host vSAN cluster, there are a few things we can check to confirm vSAN is running fine:

esxcli vsan cluster get

Sub-Cluster Member Count: 1 indicates a one-host cluster.

Using esxcli vsan debug object list we can verify that the VMDKs belonging to vCenter have the FTT=1 SPBM policy applied, but with Force Provisioning enabled. That means there is just one copy of each VMDK on this one-host cluster, which is not compliant with FTT=1, so the health state of the objects is reduced-availability-with-no-rebuild. There is no other ESXi host in the cluster to hold a second copy, and no third one to hold a witness, so there is no way to satisfy FTT=1 for now.

Logging into the web GUI of the ESXi host, we see that the vsanDatastore has been created next to the other VMFS datastore.

Looking at the Health tab we can verify the state of our objects, which matches the output of the esxcli vsan commands.

It might be interesting to know that the one-host vSAN cluster does not have a vSAN vmknic configured yet.
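You can confirm this directly on the host. As a rough sketch (vmk1 is just a placeholder for whichever vmkernel port you dedicate to vSAN):

esxcli vsan network list
esxcli vsan network ip add -i vmk1

The first command should return no interfaces on a freshly deployed one-host cluster; the second tags a vmkernel interface for vSAN traffic, which you would normally do later from vCenter while finishing the cluster configuration.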

After vCenter is deployed we definitely need to finish our vSAN configuration; vCenter should not run on FTT=0 longer than necessary.

Next steps include:

  • adding vSAN vmknics
  • adding remaining hosts to vSAN cluster
  • configuring disk groups
  • configuring SPBM policies
  • applying licenses
  • configuring HA

and we are reminded to do so at the end of our installation:

vSAN Off-Road

VMware’s Virtually Speaking Podcast needs no introduction. It usually accompanies me on business trips, not only because it ties into my storage-focused job: it is entertaining, inspiring, and it is just great to hear our overseas subject matter experts.

“vSAN Off-Road” is the title of one of my favourite Virtually Speaking Podcast episodes from 2017. It is about vSAN being the most flexible storage… but there has to be a limit to this flexibility, doesn’t there? The speakers discuss what happens when we wander off the HCL and recommendation path… or decide to run a single cluster on servers from different vendors.

vSAN can run on “almost” every server that vSphere can run on. In the photo below you can see our vSAN suitcase, which usually sits under my desk in our office but also travels with me to workshops. vSAN is installed on 3 Intel NUCs (one SSD as the cache tier and a second SSD as the capacity tier), and the fourth NUC is a single ESXi host with management VMs and apps like vCenter, DNS, an NSX controller, a Connection Server for VDIs etc. The suitcase vSAN cluster has a datastore of around 1.2 TB and has deduplication enabled.

This is not a supported hardware configuration, but it is just a demo. In real-life situations I come across a lot of off-road vSAN questions, so I decided to share my personal FAQ with you.

  • Can we run a vSAN cluster on hosts from different vendors or different hardware generations?

Technically, yes, although I can imagine troubleshooting could be difficult in such situations. There are a couple of things you have to consider, and one of the most important is the compute layer. Combining different hosts means combining different CPUs. For vMotion this can be a problem; you would probably need to enable EVC. Here is a nice post about it.

If you add a newer host with newer CPU packages to the cluster, EVC will hide the new CPU instructions from the virtual machines. By doing so, EVC ensures that all virtual machines in the cluster run on the same CPU instruction set, allowing them to be live migrated (vMotion) between the ESXi hosts.

When mixing different hosts in a vSAN cluster, it is important to have similar CPU and RAM resources, so that one host does not slow down the others.

  • What about mixing different types of hosts: all-flash with hybrid?

This is not supported.

  • Can an ESXi host have access to both a vSAN datastore and a VMFS datastore on a SAN?

Yes, of course it can. vSAN does not care about other datastores. A host can have a datastore on a local disk, access to a LUN on a storage array, an iSCSI datastore and vSAN all at the same time, although it might get difficult to manage and automate, and you will definitely need more ports and HBA adapters.
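If you want to see all of them side by side from the host's perspective, one option (just a sketch; output columns vary slightly between ESXi builds) is:

esxcli storage filesystem list

which lists VMFS volumes, NFS mounts and the vsanDatastore together with their type, size and free space.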

  • Is it possible to scale UP a host by adding disks of different sizes?

Here you will find the official statement:

If the components are no longer available to purchase try to add equal quantities of larger and faster devices. For example, as 200GB SSDs become more difficult to find, adding a comparable 400GB SSD should not negatively impact performance.

  • Can hosts have different numbers of disk groups?

They can. The challenge is that some of the hosts will aggregate more storage traffic than others, and VMDK components will not be equally balanced across the cluster.

  • Can a vSAN cluster have compute-only nodes?

Yes. You still need to license those hosts with a vSAN license and configure a vSAN vmkernel port on them. There is no official statement on the ratio of compute-only nodes to standard nodes; it all depends on the workload. Having just 3 vSAN nodes serving 10 diskless hosts that run compute-intensive apps might require good planning, especially on the networking side.

About soft limits

The VMware Configuration Maximums tool is a go-to tool for all of us who design the VMware stack. I always considered all the max values to be a kind of hard limit, meaning vCenter would not allow you to create more than the defined number of items. For vSAN we definitely have a couple of hard limits: a maximum of 5 disk groups per host, a maximum of 7 capacity disks per disk group, a maximum of 1 vSAN datastore per cluster, etc.

Recently I had a task to verify one of the limits: the maximum number of VMs per host in a vSAN cluster, which is 200. The task was to check what would happen if we created more than 200 Instant Clones per host.

When you design a VDI cluster on vSAN, you have to take into consideration a certain number of host failures that the cluster can tolerate, not only from a compute perspective (will there be enough RAM and CPU for all VMs?) but also from a storage perspective (will there be enough storage to host all VMs?). Luckily, with Instant Clones storage is rarely an issue, and even less often when you use vSAN deduplication.

With a limited budget in mind, it sometimes turns out that when one or two hosts are down, the cluster still works fine, but the remaining hosts have to take on a little more than they were designed for.
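A quick back-of-the-envelope example with purely illustrative numbers: a 6-host VDI cluster running 1,200 desktops averages 200 VMs per host; if one host fails, the remaining 5 hosts have to carry 240 VMs each, which is already above the documented 200-VM-per-host maximum, even though the cluster keeps running.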

That is why 200 VMs per vSAN host is a soft, not a hard, limit. VMware supports installations with up to 200 VMs per host, but in “survival mode” hosts can handle more. In my case it was 241 Instant Clones.

I am not saying nothing bad will happen; take a look at this:

RAM and CPU resources are strained here, and the user experience for VDI users could get worse. Still, it is good to know the limit is soft.