Deploy and Verify VKS Cluster using kubectl
This tutorial focuses on clarity and simplicity, so even if you are new to VKS or Kubernetes, you should be able to follow along.
If you are not familiar with VKS clusters, please see the article VKS Cluster Overview.
Prerequisites
First, make sure a Supervisor namespace has been created, with the required storage policies, VM classes, and user permissions attached to it.
Below, we use the latest releases available at the time of writing: VKr 1.35.0 and VKS 3.6.0.
Deploying a VKS cluster using kubectl typically involves the following steps:
1. Log in to the Supervisor Cluster
2. Prepare the VKS Cluster Configuration YAML
3. Deploy the VKS Cluster
4. Monitor Cluster Creation
5. Access the VKS Cluster
Step 1 — Log in to the Supervisor Cluster
Before creating a VKS cluster, you must connect to the vSphere Supervisor Cluster. This is done using the kubectl vsphere login command.
Ex: kubectl vsphere login --server=<supervisor_ip> -u <vc_sso_user> --insecure-skip-tls-verify
Note: Use --insecure-skip-tls-verify only for lab environments. For production environments, it is recommended to use proper CA certificates.
To verify connectivity:
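A simple check is to confirm that your current kubectl context points at the Supervisor and then list its nodes (the exact node names and counts depend on your environment):

```shell
# Confirm the current context is the Supervisor Cluster
kubectl config get-contexts

# List the Supervisor nodes
kubectl get nodes
```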
If the command returns Supervisor nodes, the login is successful.
Step 2 — Prepare the VKS Cluster Configuration YAML
The YAML file typically contains:
- Kubernetes version
- Control plane configuration
- Worker nodepools configuration
- Storage Class and VM Class
Replace the values in the following YAML according to your infrastructure and save the file as vks-cluster.yaml.
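A minimal example is sketched below. All names, CIDR ranges, class names, and version strings are illustrative placeholders; the exact schema and available ClusterClass names and variables depend on the VKS release in use, so verify them against your environment (for example, `kubectl get clusterclass -n vmware-system-vks-public` and `kubectl get kr`):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: vks-cluster-01              # placeholder cluster name
  namespace: demo-namespace         # placeholder Supervisor namespace
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.156.0/20"]
    services:
      cidrBlocks: ["10.96.0.0/12"]
    serviceDomain: cluster.local
  topology:
    classRef:
      name: <cluster_class_name>    # from kubectl get clusterclass
      namespace: vmware-system-vks-public
    version: <kubernetes_version>   # exact string from kubectl get kr
    controlPlane:
      replicas: 1                   # use 3 for production
    variables:
      - name: vmClass
        value: best-effort-medium   # placeholder VM class
      - name: storageClass
        value: <storage_policy>     # placeholder storage policy
      - name: kubernetes
        value:
          etcdConfiguration:
            maximumDBSizeGiB: 4
    workers:
      machineDeployments:
        - class: node-pool
          name: worker-pool-1
          replicas: 2
          variables:
            overrides:
              - name: vmClass
                value: best-effort-large  # worker-specific override
```

The table in the next section explains what each of these parameters does.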
Explanation of Important YAML Parameters
The following table explains the key parameters used in the VKS cluster configuration YAML.
| Parameter | Description |
|---|---|
| apiVersion | Defines the API version used for the Cluster resource. In this example, the cluster is created using the Cluster API (cluster.x-k8s.io). |
| kind | Specifies the Kubernetes resource type being created. Here it is a Cluster resource. |
| metadata.name | The name of the VKS cluster that will be created. This name uniquely identifies the cluster. |
| metadata.namespace | The Supervisor namespace where the VKS cluster will be deployed. |
| spec.clusterNetwork.pods.cidrBlocks | Defines the IP address range used by Kubernetes Pods inside the cluster. |
| spec.clusterNetwork.services.cidrBlocks | Defines the IP address range used for Kubernetes Services. |
| spec.clusterNetwork.serviceDomain | Defines the internal DNS domain used within the cluster (commonly cluster.local). |
| topology.classRef.name | Specifies the ClusterClass used for deployment. This defines the base configuration template for VKS clusters. |
| topology.classRef.namespace | Namespace where the ClusterClass is located. In VKS deployments, this is typically vmware-system-vks-public. |
| topology.version | The Kubernetes version used for the cluster. Available versions can be checked using kubectl get kr. |
| topology.controlPlane.replicas | Number of control plane nodes. Supported values are typically 1 for development and 3 for production environments. |
| variables.vmClass | Defines the VM size used for cluster nodes. This determines CPU and memory allocation. |
| variables.storageClass | Specifies the storage policy used for Kubernetes volumes and node disks. |
| variables.kubernetes.etcdConfiguration.maximumDBSizeGiB | Sets the maximum size allowed for the etcd database. |
| workers.machineDeployments.name | Name of the worker node pool. |
| workers.machineDeployments.replicas | Number of worker nodes created in the cluster. |
| workers.machineDeployments.class | Specifies the worker node pool type defined in the ClusterClass. |
| workers.machineDeployments.variables.overrides.vmClass | Overrides the VM class used specifically for worker nodes. |
| workers.machineDeployments.variables.overrides.volumes | Defines additional volumes mounted on worker nodes (for example kubelet or container runtime storage). |
Step 3 — Deploy the VKS Cluster
Once the YAML file is ready, you can deploy the cluster using kubectl. The following command sends the request to the Supervisor Cluster, which then starts creating the VKS cluster resources.
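For example, applying the file saved in Step 2 (the cluster is created in the Supervisor namespace set in `metadata.namespace`):

```shell
# Submit the cluster definition to the Supervisor Cluster
kubectl apply -f vks-cluster.yaml
```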
Step 4 — Monitor VKS Cluster Creation
Cluster deployment usually takes a few minutes. You can monitor the cluster status using:
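For example (the exact status columns vary with the Cluster API version in your environment):

```shell
# High-level cluster status; -w watches until it is ready
kubectl get cluster <vks_cluster_name> -n <supervisor_namespace_name> -w

# Individual control plane and worker machines
kubectl get machines -n <supervisor_namespace_name>

# Node VMs provisioned for the cluster
kubectl get vm -n <supervisor_namespace_name>
```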
When the AVAILABLE condition becomes True and the desired numbers of control plane and worker node replicas are running, the cluster is fully deployed.
Step 5 — Access the VKS Cluster
Once the VKS cluster is ready, you can retrieve the kubeconfig and access the cluster.
Ex: kubectl get secret <vks_cluster_name>-kubeconfig -n <supervisor_namespace_name> -o jsonpath='{.data.value}' | base64 -d > <kubeconfig_file_name>
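With the kubeconfig extracted, you can point kubectl at the new cluster and confirm the nodes are Ready (file name as chosen above):

```shell
# Use the retrieved kubeconfig for subsequent kubectl commands
export KUBECONFIG=<kubeconfig_file_name>

# Verify the control plane and worker nodes of the new cluster
kubectl get nodes
```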
In this article, we learned how to deploy a VKS cluster using a simple YAML configuration and kubectl.
Once the VKS cluster is ready, the next step is to deploy platform services and application workloads. One common requirement in modern Kubernetes environments is a service mesh, which provides traffic management, observability, and security for microservices. For that, please refer to the article here.
Disclaimer:
This blog is for informational and educational purposes only. The configurations, examples, and architectural guidance provided are based on general best practices and publicly available references.
Always validate configurations in a non-production environment before applying them to live systems. Features and integrations may vary depending on the versions of VKS, VKr, Supervisor and vCenter Server being used. The author is not responsible for any unintended impact caused by the use of this information in production environments.
