Some time ago I worked on a vForum session about VMware Cloud Foundation and which features of VCF make an admin’s life easier. But my slide deck seemed to focus on larger environments only… Is VCF only for them?
Later that day, when I was making my coffee, it hit me ;-). If there is just one person drinking coffee @homeoffice, the overhead of maintaining a coffee machine may seem excessive. But just wait till your guests arrive unannounced to feel the difference…
It is worth learning about Private/Hybrid Cloud platforms even if your workload is small today, because your business is already on its way to surprise you with some new ideas.
What is VCF?
To put it simply, it is a software stack that enables you to operate your on-premises data center the way hyperscalers (Google, AWS, etc.) do, and to extend it to a public cloud at any time while keeping the same user experience.
Why a cloud-like approach?
Just think about it. No matter how much you like to eat healthy, after a long road trip a visit to McDonald’s sounds good. We like the comfort of being able to go anywhere in the world and know that the food at McDonald’s will taste the same. They are obsessive about quality, user experience, predictability and consistency. It helped them become a franchise phenomenon and one of the most recognised brands in the world.
And our business needs its apps delivered fast: a kiosk/blueprint to order from, delivery monitoring, automated deployment without delays, charging per usage, and faster consumption than ever. No matter how hilarious that sounds, the IT team needs to operate almost like a fast-food chain these days to be able to quickly cater to market needs… otherwise, the business will go somewhere else.
Cloud is not a PLACE, it is the WAY IT operates.
Why can’t we operate like we used to?
We can always build a datacenter based on a Validated Design or a vendor’s best practices; we can commission servers manually, script, or automate at the infrastructure level on our own. But as software and hardware products evolve, we would have to spend more and more time automating things that have already been automated for us. Why not focus instead on the applications that bring more value to the business? Public cloud is all about apps, not hardware.
Here is, for example, an Excel file with Validated Design decisions that was once published on communities.vmware.com. It contains around 400 decisions that have to be discussed and then implemented in almost every large datacenter. Sometimes the business can’t wait that long. Wouldn’t it be great to have it all done for you?
Let’s look at the way IT can operate
On the frontend
Imagine Department X has a Project Y and a budget. After a brainstorm, they open their App Kiosk and order the apps, platforms or processes needed for Project Y. Developers may still have to add some customizations, but in general, starting from day 0, we know the resources needed and the price per usage. Project Y has to be approved by the resource owners. Within minutes or hours (when new physical infrastructure has to be prepared), the automation engine deploys the applications on a dedicated or shared cluster of servers, on premises or in the public cloud.
On the backend
There is no Fibre Channel in the cloud.
To be able to automate, to predict the unpredictable and to design for unknown consumption rates, we need a proper infrastructure: a “building-block” approach, the Software-Defined Data Center (SDDC). Scalable pools of easily replaceable commodity x86 servers with local disks and fast Ethernet networking, plus a way to match automation at the app level with automatic creation of new resources such as compute and storage.
The VMware Cloud Foundation stack uses SDDC Manager to automate at the infrastructure level. Resources required for Project Y may be assigned from an existing pool of servers, or a new cluster may be created.
If a new cluster needs to be created, SDDC Manager takes care of installing the whole VCF stack on the available/free hosts.
Pools of hosts dedicated to serving an assigned workload are called Workload Domains (WDs). A WD can host one or more vSphere clusters: e.g. WD1/Cluster 1 can host VDI workloads, WD2/Cluster 2 can be a database cluster, and WD3/Cluster 3 can host Kubernetes workloads. A WD can run on a single site or be stretched across two sites.
A Workload Domain is self-sufficient: it has its own vCenter Server instance, NSX Manager and dedicated vSAN datastore (or another type of storage, but in that case it has to be automated and managed separately by the admins themselves). A WD can be easily and automatically expanded or decommissioned.
Wait… if every WD has its own vCenter, how do we automate all those things? Which one is the manager? All vCenters operate in Linked Mode (we can log into any one of them and see the other vCenters as well) and are managed by SDDC Manager. It is SDDC Manager that makes it all possible: creating new WDs, expanding and decommissioning them, deploying the stack automatically, and much more.
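SDDC Manager also exposes a REST API, so the inventory it manages can be scripted against. As a minimal sketch, here is what walking the list of domains could look like in Python — the `/v1/domains` endpoint and the `elements`/`name`/`type` field names are assumptions based on the public VCF API and may differ between VCF versions, and the sample payload below is fabricated for illustration:

```python
# Sketch: summarizing Workload Domains from an SDDC Manager
# /v1/domains-style response. Endpoint path and field names are
# assumptions -- check the API reference for your VCF release.

def summarize_domains(response: dict) -> list[str]:
    """Return 'name (type)' for each domain in the payload."""
    return [f"{d['name']} ({d['type']})" for d in response.get("elements", [])]

# Hypothetical payload for a two-domain setup like the one above:
sample = {
    "elements": [
        {"name": "MGMT", "type": "MANAGEMENT"},
        {"name": "WLD01", "type": "VI"},
    ]
}

print(summarize_domains(sample))  # ['MGMT (MANAGEMENT)', 'WLD01 (VI)']
```

In a real environment you would first authenticate against SDDC Manager and fetch the payload over HTTPS instead of hard-coding it.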
SDDC Manager operates in its own “WD”: the Management Cluster. This cluster is built first and also has its own vCenter, NSX Manager and vSAN datastore (this is the only case where vSAN is mandatory, because the Management Cluster’s bringup is 100% automated).
What is the purpose of the Management Cluster? It hosts all the “management” VMs: the vCenters of all WDs, the NSX installations, and the vRealize Operations and vRealize Log Insight instances. It can also host 3rd-party management appliances or, for example, a backup or Active Directory VM. Having a separate cluster for management is a VVD best practice; this way the WDs can also dedicate almost all of their resources to applications.
Here is what SDDC Manager looks like when you log in. You can see that in this case we have two domains, a Management Domain and a Workload Domain, both using vSAN all-flash clusters.
Every Workload Domain with vSAN can have from 3 to 64 hosts; only the Management Domain needs a minimum of 4. The way we operate inside a Workload Domain does not change – we still use vCenter to create VMs.
A WD can be general-purpose, but if you want to deploy Horizon as a WD, SDDC Manager has an option to do that for you! Imagine how easy it would be to expand a VDI cluster just by adding two or three new hosts and watching SDDC Manager take care of everything.
The other “special” WD is PKS. PKS is VMware’s turnkey platform that simplifies the deployment and operation of Kubernetes clusters. With VCF you can skip the manual installation of the platform and just start consuming your PKS cluster.
Automatic installation is great but is that all?
The best thing about VCF is that you can upgrade all of the components using a single package. Upgrade sequences and compatibility matrices will soon be a thing of the past 😉
…or not. We still need to check compatibility matrices to verify whether certain hardware or a 3rd-party vendor supports our VCF release, but it will not be as time-consuming as it is now.
The Management Domain has to be upgraded first; then we can run the upgrade of the WDs or schedule it. Some bundles contain larger updates, some smaller, but it is always a single package.
We can connect our SDDC Manager to my.vmware.com and get nice update/patch notifications. Offline bundle upload is also supported.
There would be no upgrade without a precheck. SDDC Manager runs it automatically across ALL of the components, making sure the platform is ready.
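The sequencing rule above (Management Domain first, then the Workload Domains) is simple enough to sketch as a tiny helper. The domain names and the `MANAGEMENT`/`VI` type labels below are illustrative, not taken from a real SDDC Manager inventory:

```python
# Sketch: the VCF upgrade ordering rule -- management domain(s)
# before workload domains. Data is illustrative only.

def upgrade_order(domains: list[dict]) -> list[str]:
    """Management domains first, then workload domains, each alphabetically."""
    mgmt = sorted(d["name"] for d in domains if d["type"] == "MANAGEMENT")
    wlds = sorted(d["name"] for d in domains if d["type"] != "MANAGEMENT")
    return mgmt + wlds

domains = [
    {"name": "WLD02", "type": "VI"},
    {"name": "MGMT", "type": "MANAGEMENT"},
    {"name": "WLD01", "type": "VI"},
]

print(upgrade_order(domains))  # ['MGMT', 'WLD01', 'WLD02']
```

In practice SDDC Manager enforces this ordering for you; the point of the sketch is only to make the rule explicit.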
Processes like cluster bringup, host commissioning, upgrades, certificate management, password rotation, license management and many more are automated for us.
VCF automates things that I used to do manually, so what do I do now?
VCF helps us transform from Infrastructure Admins into Cloud Admins. Finally, we can focus on consuming the infrastructure.
Remember Department X and Project Y? Wouldn’t it be nice if they had a portal like the one in the video below? vRealize Automation (also part of VCF) makes it all happen. We infrastructure people tend to think about automation in the context of automating infrastructure only, but the ultimate goal is to automate our infrastructure so that we can automate at the higher levels.
VCF automates things so we can automate even more…but on the application layer.
That’s great but we have many locations across the globe…
Well, that WILL work out perfectly, because you can create a federation of all your VCF instances. In my case I have just one installation, but I could expand it easily by inviting “new members”. So a single VCF instance can operate as a “single site” or be stretched between locations (an active-active stretched cluster), and multiple VCF instances can work together as a federation.
That’s great but my workload is small…
There is a special kind of architecture for smaller deployments and special use cases: the VCF Consolidated Architecture. The minimum number of physical hosts (vSAN Ready Nodes) required for such a VCF installation is 4. Instead of Workload Domains, dedicated resource pools are assigned to the management workload and the production workload. The primary storage for the Consolidated Architecture is vSAN, but an NFS datastore can additionally be used to store backups. Such an architecture can be expanded by adding more hosts, or reconfigured by adding a Workload Domain and migrating VMs from the WD resource pool to the new WD.
VCF also changes the way we work:
Instead of creating VMs manually and installing operating systems and apps, we use vRealize Automation (part of the VCF stack) to prepare blueprints. Service catalogs created via vRA can offer much more than just VM deployment: we can automate processes and procedures, manage hardware equipment, automate in the public cloud, create users, define firewall rules, etc.
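To give a feel for what such a blueprint is, here is a minimal sketch of a vRA (Cloud Assembly) blueprint in YAML. The resource name, image and flavor values are hypothetical, and the exact schema depends on your vRA version and environment mappings — older vRA 7.x used a different blueprint format entirely:

```yaml
# Hypothetical blueprint sketch -- adjust image/flavor mappings
# to whatever your vRA environment actually defines.
formatVersion: 1
inputs:
  size:
    type: string
    enum: [small, medium]
    default: small
resources:
  ProjectY-VM:
    type: Cloud.vSphere.Machine
    properties:
      image: ubuntu-18.04
      flavor: '${input.size}'
```

Once published to the service catalog, a blueprint like this becomes the “kiosk item” that Department X orders from.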
We no longer have to log into a single vCenter to know what is going on with a cluster. Advanced troubleshooting across many clusters and many layers (from the app layer down to the particular disk the data resides on), support for many well-known applications, advanced application metrics, what-if analysis and much more are features of vRealize Operations Manager (also part of the VCF stack).
We do not have to analyze logs coming from different platforms manually; vRealize Log Insight (part of the VCF stack) provides, as we call it, “deep operational visibility and faster troubleshooting across physical, virtual and cloud environments”. It can analyze massive amounts of logs to deliver near-real-time monitoring and log analytics.
How can I learn more about VCF?
So VMware Cloud Foundation simplifies an admin’s life: it combines the platforms and applications we know with new ones and integrates them into a single stack. The integration is getting deeper and deeper, so we can install and operate the whole stack rather than the separate components, as we do today. The best way to start your journey with VCF is the VMware Hands-on Labs – look for the “VMware Cloud Foundation – Getting Started” lab in the catalog.