Infrastructure and Cloud for Enthusiasts

[blog 019]# git commit

Cloud Director Tenancy Container Applications


Deploying container-based workloads in AWS, GCP, and Azure is definitely not a new thing in the world of hyperscaler deployments, and the majority of the planet does it for good reason. But is there another option out there from your sovereign VMware Cloud Service Provider to provide the same service? (We can discuss the benefits of sovereign cloud providers another time.)

This blog is going to be the beginning of a series, from high-level overview to implementation and then on to troubleshooting, for Cloud Director Tenancy based container workloads.

So what are the benefits and efficiencies of deploying container-based workloads within a Cloud Director Tenancy?

  • Multi-tenancy – Tenancy isolation means that one tenant cannot access or impact another tenant’s container-based workloads.
  • Resource-based allocation – Tenants can have their own dedicated resources, policies, and controls, ensuring fair resource distribution and preventing resource contention.
  • Simplified Management – Tenants can manage networks, storage, and containers all from the same pane of glass where their VM workloads reside.
  • Scalability – Cloud Director supports scaling container workloads up and down based on the requirements of the tenant and the tenant’s workloads.
  • Persistent Storage – Container applications have persistent storage backed by either principal vSAN storage or third-party storage vendors.
  • Automated Ingress Deployment – Application ingress and load balancing are deployed automatically with application instantiation.
  • kubectl access – Cloud Director workload clusters allow traditional kubectl commands to be run from the command line, using the downloaded Kubernetes config for the cluster, to manage applications.
  • Security – Cloud Director has features such as network segmentation, identity management, and role-based access control which extend into container workloads.
  • Availability – Cloud Director leverages underlying vSphere resources for high availability of workload clusters, while other platform components such as the NSX Advanced Load Balancer provide failover and load balancing services. Kubernetes Workload Clusters have the ability to “auto repair” when errored and receive consistent node health checking.
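To illustrate the kubectl access point above, a minimal session against a workload cluster might look like the following sketch. The kubeconfig file name and the cluster name “monitoring” are assumptions for illustration; the actual kubeconfig is downloaded from the Cloud Director tenant portal.

```shell
# Path to the kubeconfig downloaded from the Cloud Director tenant portal
# (hypothetical file name; substitute your own download location).
KUBECONFIG_FILE="./kubeconfig-monitoring.yaml"

# Run these read-only checks only where kubectl is actually installed.
if command -v kubectl >/dev/null 2>&1; then
  kubectl --kubeconfig "$KUBECONFIG_FILE" get nodes -o wide
  kubectl --kubeconfig "$KUBECONFIG_FILE" get pods --all-namespaces
fi
```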

Within VMware Cloud Director, a tenant can deploy helm charts from several sources: the VMware Marketplace, which is provided by the service provider and presented to the tenant’s Content Hub; public helm repositories, which can either be provided by the service provider or added to the tenant’s own content library; or a Harbor repository. These deployed applications can have automated ingress deployments and can leverage CI/CD pipelines with full management from Tanzu Mission Control. (We will cover TMC in another article.)

Figure 1 – Service Provider Helm Chart Repository.
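As a sketch of how a tenant might consume a public helm repository like the one in Figure 1, the commands below add the Bitnami repository and install a chart into a workload cluster. The repository URL is Bitnami’s public one; the release name, namespace, and kubeconfig path are illustrative assumptions.

```shell
# Public Bitnami chart repository (a service provider may mirror this internally).
REPO_URL="https://charts.bitnami.com/bitnami"

# Run only where the helm CLI is available.
if command -v helm >/dev/null 2>&1; then
  helm repo add bitnami "$REPO_URL"
  helm repo update
  # Install Grafana into the workload cluster via its downloaded kubeconfig
  # (hypothetical namespace and kubeconfig path).
  helm install grafana bitnami/grafana \
    --namespace vcd-contenthub-workloads --create-namespace \
    --kubeconfig ./kubeconfig-monitoring.yaml
fi
```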

Before helm charts can be deployed into a tenancy, quite a few tenancy-based components are required.

The first requirements are provided by the service provider and published to the tenancy.

  • The service provider must have deployed an Edge Gateway to provide L3 networking, security, and NAT functionality.
  • The service provider must have provided a “network service specification” address range for container workload ingress and NAT addressing (based on customer requirements).
  • The service provider must enable load balancing services for the tenant. These can be either shared or dedicated NSX ALB Service Engines.
  • A published role within the tenancy to be able to deploy Kubernetes container clusters. This is typically the “Kubernetes Cluster Author” role; however, a service provider may wish to create custom roles and provide them to the tenancy.
  • A published role (this can be the same Cluster Author role) to be able to manage applications within a deployed workload cluster.

The second requirements are implemented by the tenant.

  • A configured routed network deployed to the Edge Gateway.
  • A tenancy user assigned to the Kubernetes Cluster Author role to allow for the creation of Kubernetes Workload Clusters and to have API rights to deployed applications.
  • A published helm chart repository.
  • A deployed Kubernetes Workload Cluster.
  • Rights to deploy the Cloud Director Kubernetes Operator to deployed workload clusters.

Figure 2 – Example Deployed Workload Cluster.

Before being able to deploy a container application, the Cloud Director Kubernetes Operator must be installed on the workload cluster. The VMware Cloud Director Kubernetes Operator is a component designed to facilitate the management and orchestration of Kubernetes clusters within VMware Cloud Director environments. Kubernetes Operators in general provide a mechanism to manage applications on workload clusters through custom resource definitions (CRDs), offering automation, lifecycle management, custom configurations defined by tenant users, self-healing of applications, and consistency of deployments.

The VMware Cloud Director Kubernetes Operator is downloaded from a VMware public registry (or a private registry where there is a requirement for air-gapped solutions).

Figure 3 – Example of the Deployed Kubernetes Operator.
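Once the operator is installed, it is worth verifying from the command line that its pods are running and that its CRDs are registered before deploying applications. The namespace below is purely an assumption for illustration; check which namespace the operator was actually installed into in your environment.

```shell
# Hypothetical namespace for the Cloud Director Kubernetes Operator.
OPERATOR_NS="vcd-contenthub-system"

if command -v kubectl >/dev/null 2>&1; then
  # List the operator's pods and the custom resource definitions it registers.
  kubectl get pods -n "$OPERATOR_NS" --kubeconfig ./kubeconfig-monitoring.yaml
  kubectl get crds --kubeconfig ./kubeconfig-monitoring.yaml
fi
```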

When deploying applications, the manifest of the application helm chart can be modified based on the requirements of the tenant’s DevOps team, for items such as changing a service from ClusterIP to LoadBalancer to allow automated deployment of ingress services, providing valid application certificates, or defining how many replicas are required for the application, just to name a few.
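As a sketch of the kinds of manifest changes described above, a helm values override might look like the following. The exact keys are chart-dependent (these follow common Bitnami chart conventions), and the replica count, TLS secret name, release name, and namespace are illustrative assumptions.

```shell
# Write a values override file (keys vary per chart; these are assumptions).
cat > grafana-values.yaml <<'EOF'
service:
  type: LoadBalancer   # changed from ClusterIP so ingress is deployed automatically
replicaCount: 3        # number of application replicas required
tls:
  enabled: true
  existingSecret: grafana-tls   # hypothetical secret holding a valid certificate
EOF

# Apply the override where helm is available (hypothetical release/namespace).
if command -v helm >/dev/null 2>&1; then
  helm upgrade --install grafana bitnami/grafana \
    -f grafana-values.yaml -n vcd-contenthub-workloads
fi
```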

The advantage of deploying VMware Marketplace application workloads, for example from Bitnami, is that their applications are constantly security hardened and updated, provided the service provider publishes the updates. Pre-packaged applications also reduce the requirement to maintain specific systems engineering skills within an organization, allowing businesses to focus on the application and the outcome, not the engineering behind it.

Below is an example of deploying a container application.

Figure 4 – Example of Deploying a Container Application.

Below is an example of modifying the helm chart manifest to deploy a LoadBalancer instead of a ClusterIP. Just as a note, you do not have to deploy a load-balanced service during the instantiation of the application; you could run, for example, “kubectl expose deployment grafana --type=LoadBalancer --port=8080 --target-port=3000 -n vcd-contenthub-workloads --name=grafana-ingress --kubeconfig C:\temp\kubeconfig-grafana.txt” to create an ingress service from the command line, which in turn, via the Cloud Director Kubernetes Operator, deploys the ingress service in the tenancy.

Figure 5 – Example of Modifying the ClusterIP to type LoadBalancer.
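The command quoted above can be reproduced after instantiation as in the sketch below, followed by a check that the LoadBalancer service has received an external address from the NSX Advanced Load Balancer. The kubeconfig path here is a hypothetical stand-in for the file you download for your own cluster.

```shell
# The ports match the example in the text: 8080 externally, 3000 on the pod.
EXPOSE_ARGS="--type=LoadBalancer --port=8080 --target-port=3000"

if command -v kubectl >/dev/null 2>&1; then
  # Intentionally unquoted so the arguments word-split into separate flags.
  kubectl expose deployment grafana $EXPOSE_ARGS \
    -n vcd-contenthub-workloads --name=grafana-ingress \
    --kubeconfig ./kubeconfig-monitoring.yaml
  # The EXTERNAL-IP column should populate once the ingress is provisioned.
  kubectl get svc grafana-ingress -n vcd-contenthub-workloads \
    --kubeconfig ./kubeconfig-monitoring.yaml
fi
```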

Below is an example of deployed applications on a Kubernetes Workload Cluster called “monitoring”.

Figure 6 – Example of Deployed Container Applications.

Below is an example of the automatically provisioned ingress, created when launching the application, which allows access to the deployed application.

Figure 7 – Example of Ingress Load Balancing for Container Workloads.

While this has been a very high-level overview of Cloud Director Tenancy container applications, it provides insight into the capability of the platform to deploy your applications on a multi-tenanted cloud environment, with automated features that allow applications to be deployed quickly and seamlessly while maintaining security and availability.

I’m excited to see how far I can push the platform, with the intent to deploy GPU-based workload clusters and try open-source AI technologies such as DeepSpeed and PyTorch, as I believe there is a place for sovereign multi-tenanted private AI outside of the hyperscalers, so stay tuned for that. I have already deployed SOLR machine learning within a tenancy using the public Git repository, but I am still yet to understand the application.

I will go into a deeper technical breakdown of the components and how to deploy the infrastructure in further blogs, but I feel this is a great place to start to allow people to understand the capabilities. Until next time.


Tony Williamson
