First world DC problems

Cloud computing has thrust multi-tenant DCs forward in their advancements. The elasticity required for rapid deployment has pushed the bounds of many fields, including storage, virtualisation, server architectures, and orchestration. As our environments onboard different customers in a multi-tenant data centre, path isolation and Equal Cost Multi-Path (ECMP) become requirements.

Path isolation can be performed at Layer 2 and Layer 3. Traditional Layer 2 isolation has been performed by putting Customer A into VLAN 10 and Customer B into VLAN 11. Layer 3 isolation can be applied through VRFs, overlays, and other technologies.

If a cloud provider services a bulk of customers that all require path isolation – and you should expect this to be the case – then VLAN allocation will dry up fast. A DC may also want to offer a customer ECMP across an IP-based interconnection, while maintaining and preserving the Layer 2 model for inter-VM communication.

Encapsulated overlay

A solution exists. Put forth by our pals at Cisco and VMware, it is VXLAN, described in the IETF draft entitled VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks.

VXLAN is an overlay network that helps alleviate the problems of the cloud network environment. By addressing the L2 and L3 DC network challenges of ever-moving virtual machines, administrators can overcome constraints of traditional networking.

You can liken VXLAN to Layer 2 over Layer 3. Each overlay is referred to as a VXLAN segment. A VXLAN segment differs from a VLAN segment in that it uses a 24-bit – as opposed to a 12-bit – segment ID. This allows over 16,000,000 VXLAN segments to be assigned: a vast improvement over 4096. The VXLAN Network Identifier (VNI) represents the segment.
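To put the numbers above side by side, here is a quick back-of-the-envelope sketch of the two ID spaces:

```python
# Segment ID space: 12-bit VLAN ID vs 24-bit VXLAN VNI.
VLAN_ID_BITS = 12
VNI_BITS = 24

vlan_segments = 2 ** VLAN_ID_BITS   # 4096 IDs (4094 usable after reserved values)
vxlan_segments = 2 ** VNI_BITS      # 16,777,216 IDs

print(f"VLAN segments:  {vlan_segments}")
print(f"VXLAN segments: {vxlan_segments}")
print(f"Improvement:    {vxlan_segments // vlan_segments}x")
```

That 4096x jump is what lets a provider hand out isolated segments per customer without rationing.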

The VXLAN header carrying the VNI wraps around the existing MAC frame that came from the VM. The original frame is protected this way, while the new outer headers can be written and interpreted by network devices along a given path.


Due to the outer wrapper, or envelope, VXLAN is essentially a tunnelling mechanism: the guest's traditional L2 frame is wrapped inside a new UDP/IP packet. VXLAN Tunnel End Points (VTEPs) are located in hypervisors on servers. These VTEPs strip the VXLAN encapsulation before the hypervisor passes the frame to the guest, so the guest never sees it. Hardware termination of a VTEP inside a switch is also possible, though it requires certain chipsets, such as the Trident II, which supports VXLAN and VTEP termination.
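A minimal sketch of the encapsulation step may help. Per the VXLAN draft, the 8-byte VXLAN header is a flags byte (with the I bit set to mark the VNI as valid), 24 reserved bits, the 24-bit VNI, and a final reserved byte; a real VTEP would then wrap the result in outer UDP (destination port 4789), IP, and Ethernet headers. The frame bytes here are stand-in values:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags (I bit set),
    24 reserved bits, 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag: the VNI field is valid
    return struct.pack("!I", flags << 24) + struct.pack("!I", vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to the guest's original Ethernet frame.
    (The outer UDP/IP/Ethernet wrapping is omitted in this sketch.)"""
    return vxlan_header(vni) + inner_frame

frame = b"\xff" * 14 + b"payload"   # stand-in for a guest's L2 frame
packet = encapsulate(frame, vni=5000)
assert len(packet) == 8 + len(frame)
```

Decapsulation at the far VTEP is the mirror image: strip the outer headers, read the VNI to pick the segment, and deliver the untouched inner frame to the guest.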

My mind wanders to two hosts that are separated by a Layer 3 segment. They need not share a Layer 2 segment; they could be in different racks in different parts of a data centre. VXLAN would then be overlaid upon the aforementioned networks.

Half a dozen VNIs could live in the hypervisor of each host. Between hosts in separate L3 subnets, a logical tunnel is formed between VTEPs. Guests seeking to communicate with other guests in the same VNI can do so thanks to the overlay. The guest believes it is in an adjacent network.
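One way to picture what a VTEP does with that tunnel is as a lookup table keyed on (VNI, inner destination MAC) that yields the remote VTEP's IP. The MACs and IPs below are hypothetical; a real VTEP learns these mappings dynamically, and floods unknown MACs (for example via a multicast group per VNI) rather than returning nothing:

```python
from typing import Optional

# Toy VTEP forwarding table: (VNI, inner dst MAC) -> remote VTEP IP.
# All values here are made up for illustration.
forwarding_table = {
    (5000, "52:54:00:aa:bb:01"): "10.0.1.10",
    (5000, "52:54:00:aa:bb:02"): "10.0.2.10",
    (6000, "52:54:00:cc:dd:01"): "10.0.2.10",
}

def lookup_vtep(vni: int, dst_mac: str) -> Optional[str]:
    """Return the remote VTEP IP for this VNI/MAC pair, or None
    if the MAC is unknown (which would trigger flood-and-learn)."""
    return forwarding_table.get((vni, dst_mac))
```

Note that the same MAC could appear under two different VNIs without conflict; the VNI is part of the key, which is exactly the isolation property the overlay provides.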

A switch acting as a VTEP gateway could host servers behind it and perform the termination on the switch itself. This would allow a different topology to be formed. It requires support in hardware as well as software; over time this will be default in silicon.

My thoughts

The notion reminds me of how MPLS label-switches across a WAN core and strips labels accordingly – at least it does in my head. Moving forward I believe this overlay will find more traction, as it can be terminated in a hypervisor: upgrade your cluster and you'll have support. With the announcement of NSX, it is time to understand VXLAN overlays.




