Two-Node vSAN vs Stretched vSAN
So recently I was building a personal lab with VMware Tanzu, and I decided to deploy a two-node vSAN cluster with a Witness node to provide storage to the cluster; I had never done a small vSAN cluster before. The cluster was based on VMware's Two-Node vSAN Architecture for Remote Offices: https://www.vmware.com/files/pdf/products/vsan/vmware-vsan-robo-solution-overview.pdf
In my work life I have built vSAN clusters and, most recently, stretched vSAN clusters across Availability Zones, and it was the latter that made me realize Stretched vSAN and ROBO vSAN are essentially the same thing! Minus the L3 vSAN kernel across AZs with a Witness node in another AZ, the big dollars on infrastructure, the 9000 MTU, and so on. I could go on about the differences, but this is a quick blog about the fundamental similarities.
ROBO vSAN and Stretched vSAN can be broken down into the same fundamental underlying VMware storage technology:
- Validated host infrastructure with localized disk (either Hybrid or All-Flash)
- vSAN disk groups
- vSAN Fault Domains
- Preferred and Secondary Fault Domains
- Requirements of a vSAN Witness node
The concepts around vSAN Failures to Tolerate (FTT) and host cluster availability still apply regardless of Two-Node or Stretched vSAN.
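To make the FTT math concrete, here is a minimal Python sketch (my own illustration, not a VMware tool) of the minimum host or fault-domain counts vSAN requires for the common FTT/RAID combinations:

```python
def min_hosts(ftt: int, raid: str) -> int:
    """Minimum vSAN hosts (or fault domains) needed to satisfy a policy.

    Mirroring (RAID-1) needs 2*FTT+1 fault domains: FTT+1 replicas
    plus FTT witness/tie-breaker components. Erasure coding (RAID-5/6)
    is All-Flash only, with fixed layouts.
    """
    if raid == "RAID-1":  # mirroring
        return 2 * ftt + 1
    if raid == "RAID-5" and ftt == 1:
        return 4          # 3 data + 1 parity components
    if raid == "RAID-6" and ftt == 2:
        return 6          # 4 data + 2 parity components
    raise ValueError("unsupported FTT/RAID combination")

# A two-node cluster plus the external Witness provides the three
# fault domains required for FTT=1 mirroring:
print(min_hosts(1, "RAID-1"))  # -> 3
```

This is exactly why the two-node design works at all: the two data hosts plus the Witness satisfy the same three-fault-domain minimum that a stretched cluster satisfies with its preferred site, secondary site, and witness site.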
Aside from the Availability Zones where your vSAN hosts reside, the major differences are, firstly, the number of hosts you have in a Fault Domain; for example, the minimum in a VCF deployment is four nodes per Fault Domain. Secondly, you should not stretch Layer 2 networks between vSAN Fault Domains; the vSAN VMkernels should be routed.
In both designs the vSAN Witness node will have a management network and a secondary network on the same L3 subnet that the vSAN VMkernels reside in, used for vSAN object tracking. In both cases the Witness node must be in a separate Availability Zone (except my lab … at least it is on another host that is not in the same cluster).
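Why the Witness matters in both designs comes down to vSAN's vote-based quorum: an object stays accessible only while more than 50% of its component votes are reachable. A toy Python illustration (the names and vote counts are mine, simplified from how vSAN actually assigns votes):

```python
# Toy model of an FTT=1 mirrored object in a two-node or stretched
# cluster: one replica per data fault domain, plus a tie-breaker
# component on the Witness. Illustrative only.
components = {
    "preferred": 1,  # replica in the preferred fault domain (1 vote)
    "secondary": 1,  # replica in the secondary fault domain (1 vote)
    "witness":   1,  # witness component on the Witness node (1 vote)
}

def accessible(alive: set) -> bool:
    """An object is accessible while >50% of its votes are reachable."""
    total = sum(components.values())
    reachable = sum(v for fd, v in components.items() if fd in alive)
    return reachable * 2 > total

print(accessible({"preferred", "witness"}))  # secondary site down -> True
print(accessible({"preferred"}))             # isolated site, no quorum -> False
```

Losing either data fault domain still leaves two of three votes, so the object survives; an isolated site with only its own replica cannot claim quorum, which is what prevents split-brain in both the two-node and stretched cases.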
Ultimately, Two-Node and Stretched vSAN address two completely different architectural requirements; however, VMware maintains consistency in the overall underlying technology, which I suppose is no surprise with the rise of VCF and lifecycle management.
As a storage engineer at heart, I am a big fan of vSAN, converged storage infrastructure, or any type of storage that relies on object, block, or metadata replication, though it is only as good as the underlying network redundancy below it. The reason I am a big fan is that scale and resiliency become nearly endless, from cold storage to high-I/O workloads fronted by NVMe controllers that cache and reduce write I/O amplification to SSD-based media.
So next time you are playing with storage, take a closer look; you might be surprised at what you find, and in my case that was technological consistency.