Infrastructure and Cloud for Enthusiasts

[blog 015]# git commit

VMware Cloud Director Tenancy Load Balancing with NSX Advanced Load Balancer

I have spent quite a bit of time recently implementing Cloud Director tenancy load balancing with NSX Advanced Load Balancer, and talking to quite a few people about it. The latest was at the Sydney VMUG UserCon as part of my “Real World Service Provider Networking Load Balancing” presentation, which I will upload in the next blog. After all the presenting and talking, I thought I should do a blog on the implementation and what happens behind the scenes.

Let’s start with what has changed between NSX for vSphere, early implementations of NSX-T, and where we are now with load balancing.

In NSX for vSphere, load balancing was carried out on Edge Services Gateways and had simple functionality around virtual server protocols, port ranges and the backend pools for connectivity. There were also simple service monitors for TCP, HTTP and HTTPS, to name a few.

Customers with a tenancy inside VMware Cloud Director could create simple load balancing services on demand, based on the IP resources assigned to them.

When NSX-T 2.4 came out, it had similar functionality; however, the load balancer was attached to a T1 gateway and required an Edge Cluster of at least medium size. While this could be done in NSX-T, there was no supported functionality within Cloud Director.
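For context, standing up that native NSX-T load balancer by hand looked roughly like the sketch below, which attaches a medium LB service to a T1 via the NSX-T Policy API. The manager address, credentials and object IDs (nsx.lab.local, t1-tenant-a, lb-tenant-a) are placeholders for illustration, not values from any real environment.

```python
# Minimal sketch: attach a native NSX-T load balancer service to a T1
# gateway via the Policy API. Host, credentials and IDs are placeholders.
import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "password")  # use proper credential handling in practice

lb_service = {
    "display_name": "lb-tenant-a",
    "connectivity_path": "/infra/tier-1s/t1-tenant-a",  # the tenant T1
    "size": "MEDIUM",  # hence the Edge Cluster of at least medium size
    "enabled": True,
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/lb-services/lb-tenant-a",
    json=lb_service,
    auth=AUTH,
    verify=False,  # lab only; verify certificates in production
)
resp.raise_for_status()
```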

Enter NSX Advanced Load Balancer with NSX-T 3.0 and its integration with Cloud Director. Now generally I am a big fan of NSX Advanced Load Balancer, and the integration into Cloud Director brings new functionality such as “HTTP Cookie Persistence”, “Transparency Mode” (which allows the preservation of client IPs) and shared / dedicated Service Engines for customers.
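To give a flavour of what one of those features looks like under the covers, the sketch below creates an HTTP cookie persistence profile directly on the NSX ALB (Avi) controller API. The controller address, credentials and profile names are placeholder assumptions; in a Cloud Director tenancy these objects are created for you by vCD.

```python
# Minimal sketch: create an HTTP cookie persistence profile on the
# NSX ALB (Avi) controller. Controller address, credentials and names
# are placeholders for illustration.
import requests

ALB = "https://alb.lab.local"

session = requests.Session()
session.verify = False  # lab only; verify certificates in production

# Avi login returns a session cookie plus a CSRF token for writes.
session.post(
    f"{ALB}/login", data={"username": "admin", "password": "password"}
).raise_for_status()
session.headers["X-CSRFToken"] = session.cookies.get("csrftoken", "")
session.headers["Referer"] = ALB

profile = {
    "name": "tenant-a-http-cookie",
    "persistence_type": "PERSISTENCE_TYPE_HTTP_COOKIE",
    "http_cookie_persistence_profile": {"cookie_name": "tenant-a-lb"},
}
session.post(f"{ALB}/api/applicationpersistenceprofile", json=profile).raise_for_status()
```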

The following diagram shows how load balancing services are provided. An NSX-T Cloud Service Engine Group is assigned to a tenant. I prefer not to share Service Engines between customers, as “sharing” can make them nervous, even though in reality they would be separated via their own routing domain (VRF) on the Service Engine.

It also allows for simpler billing, as customers can consume as many virtual services as they require, depending on how many available IPs they have.
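If you would rather script that Service Engine Group assignment than click through the UI, the vCD CloudAPI exposes it. The sketch below assumes the VCD 10.2+ load balancer endpoints; the API token, URNs and limits are placeholders, and exact payload fields can vary between VCD versions.

```python
# Minimal sketch: assign a dedicated Service Engine Group to a tenant
# Edge Gateway via the vCD CloudAPI (10.2+). Token, URNs and limits
# are placeholders for illustration.
import requests

VCD = "https://vcd.lab.local"
HEADERS = {
    "Authorization": "Bearer <api-token>",  # obtain via the vCD sessions API
    "Accept": "application/json;version=36.0",
}

assignment = {
    "gatewayRef": {"id": "urn:vcloud:gateway:<uuid>"},  # tenant edge gateway
    "serviceEngineGroupRef": {"id": "urn:vcloud:serviceEngineGroup:<uuid>"},
    "maxVirtualServices": 10,  # cap what the tenant can consume
    "minVirtualServices": 0,
}

resp = requests.post(
    f"{VCD}/cloudapi/1.0.0/loadBalancer/serviceEngineGroups/assignments",
    json=assignment,
    headers=HEADERS,
    verify=False,  # lab only
)
resp.raise_for_status()
```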

The Cloud Director API creates a transit overlay network between the T1 and the Service Engine, and a static route is applied on the T1 for the virtual service that is hosted on the Service Engine.
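Expressed directly against the NSX-T Policy API, that static route looks something like the sketch below. The VIP, Service Engine interface address and object IDs are placeholders for illustration; in practice Cloud Director programs this route for you.

```python
# Minimal sketch: the kind of static route vCD programs on the tenant T1,
# pointing the VIP at the Service Engine's interface on the transit
# overlay segment. Host, credentials, IDs and addresses are placeholders.
import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "password")

static_route = {
    "display_name": "vip-10.10.10.10",
    "network": "10.10.10.10/32",  # the virtual service VIP
    "next_hops": [
        {
            "ip_address": "192.168.255.10",  # SE interface on the transit segment
            "admin_distance": 1,
        }
    ],
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-1s/t1-tenant-a/static-routes/vip-10-10-10-10",
    json=static_route,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
```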

Route advertisement is then updated on the T1 via API calls from vCD to NSX, enabling LB VIP routes and static routes. This allows advertisement up to the T0 and into a customer’s VRF or their public-facing network, so that VM workloads on NSX overlay networks can access the VIP service.
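On the T1 itself, that update amounts to adding the LB VIP and static route types to the gateway’s route advertisement, something like this sketch (the gateway ID is a placeholder):

```python
# Minimal sketch: enable LB VIP and static route advertisement on the
# tenant T1 so the VIP is advertised up to the T0. IDs are placeholders.
import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "password")

t1_update = {
    "route_advertisement_types": [
        "TIER1_CONNECTED",
        "TIER1_STATIC_ROUTES",  # carries the VIP static route to the T0
        "TIER1_LB_VIP",
    ]
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-1s/t1-tenant-a",
    json=t1_update,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
```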

The management plane of the Service Engine is connected via a “Cloud Management” network, which follows your typical T1 / T0 design.

Logical Design for NSX Advanced Load Balancer.


I have created the following video, which shows the creation of load balanced services and what is required / takes place from an NSX and NSX ALB perspective.

Deployment of NSX Advanced Load Balancer with Cloud Director.
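For those who prefer the API to the UI, creating a tenant virtual service via the vCD CloudAPI looks roughly like the sketch below. It assumes a load balancer pool already exists; the token, URNs, VIP and port are placeholders, and field names can differ slightly between VCD versions.

```python
# Minimal sketch: create a load balancer virtual service for a tenant via
# the vCD CloudAPI (10.2+). Token, URNs, VIP and ports are placeholders.
import requests

VCD = "https://vcd.lab.local"
HEADERS = {
    "Authorization": "Bearer <api-token>",
    "Accept": "application/json;version=36.0",
}

virtual_service = {
    "name": "tenant-a-web",
    "enabled": True,
    "virtualIpAddress": "10.10.10.10",  # consumes one of the tenant's available IPs
    "gatewayRef": {"id": "urn:vcloud:gateway:<uuid>"},
    "serviceEngineGroupRef": {"id": "urn:vcloud:serviceEngineGroup:<uuid>"},
    "loadBalancerPoolRef": {"id": "urn:vcloud:loadBalancerPool:<uuid>"},
    "applicationProfile": {"type": "HTTP", "systemDefined": True},
    "servicePorts": [{"portStart": 80, "tcpUdpProfile": {"type": "TCP_PROXY"}}],
}

resp = requests.post(
    f"{VCD}/cloudapi/1.0.0/loadBalancer/virtualServices",
    json=virtual_service,
    headers=HEADERS,
    verify=False,  # lab only
)
resp.raise_for_status()
```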

Having an understanding of what happens behind the scenes is, in my mind, the most important aspect of any design and implementation. It helps with troubleshooting deployments and existing environments when things don’t go as planned, and I like to know the mystery behind the magic.

See you all in the next #git commit.

