So far we have covered associating NSX Manager with vCenter and deploying the NSX Controllers. Now it is time to define the control plane's scope and prepare the clusters. It is easy to do and probably the most clicky-clicky part of the entire process. By the end of this post you will be ready to use NSX to build your virtual network topologies.

Cluster preparation is the first cab off the rank. Under the Installation tab, after the controllers have been installed, we need to install three VIBs – VXLAN, distributed firewall (dFW) and logical routing – onto every host. This is done at the cluster level. Below you can see my three defined clusters, each with two hosts inside it – Compute A, Compute B, and Management and Edge.

[Screenshot: the three clusters – Compute A, Compute B, and Management and Edge – each containing two hosts]

Now click on the Host Preparation tab and you can see the clusters. Simply hit Install to begin host preparation for a cluster.

[Screenshot: Host Preparation tab listing the clusters with the Install action]

What is occurring is the preparation of the hypervisor. Due to NSX's tight integration and relationship with the hypervisor, NSX Manager injects three VIBs into each host in the cluster: logical routing, distributed firewall and VXLAN.
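
If you would rather script this step than click it, host preparation can also be kicked off through the NSX Manager REST API. Below is a minimal Python sketch using the requests library; the manager address, credentials and the domain-c26 cluster MoRef are placeholder values from my lab, so substitute your own.

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"  # hypothetical NSX Manager address
AUTH = ("admin", "password")          # NSX Manager credentials

# XML payload naming the cluster (by its vCenter MoRef) to prepare.
# domain-c26 is a placeholder - look up your cluster's MoRef in vCenter.
payload = """<nwFabricFeatureConfig>
  <resourceConfig>
    <resourceId>domain-c26</resourceId>
  </resourceConfig>
</nwFabricFeatureConfig>"""

# POSTing to the network fabric endpoint kicks off VIB installation
resp = requests.post(
    f"{NSX_MGR}/api/2.0/nwfabric/configure",
    data=payload,
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    verify=False,  # lab only - self-signed NSX Manager certificate
)
print(resp.status_code, resp.text)  # returns a job ID you can poll
```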

It is good to know that any additional hosts added to a prepared cluster are prepared automatically. Any newly added cluster, though, requires host preparation to be run against it. That means you can have NSX-enabled islands as you first virtualise your network, and extend your network virtualisation domain with the click of a button.
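
Preparation status can be polled back out of the API too, which is handy for confirming that new hosts joining a prepared cluster came up cleanly. Same assumptions as the sketch above:

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"  # hypothetical NSX Manager address
AUTH = ("admin", "password")

# Query the fabric feature status for one cluster (by MoRef)
resp = requests.get(
    f"{NSX_MGR}/api/2.0/nwfabric/status",
    params={"resource": "domain-c26"},  # placeholder cluster MoRef
    auth=AUTH,
    verify=False,  # lab only
)
# The XML response reports per-feature status (e.g. GREEN once the
# VXLAN, dFW and routing VIBs are installed and healthy).
print(resp.text)
```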

Configuring the VXLAN transport network deploys a VMkernel interface for VXLAN (a VTEP) on each host. This is the interface that encapsulates a network segment's packets when they need to reach a guest on another host. Because encapsulation happens at the VMkernel interface, the workload is totally unaware of the process. As far as the workload is concerned the two guests are adjacent on the same segment, when in fact they could be separated by many L3 boundaries.
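
To make the encapsulation concrete, here is an illustrative sketch of the 8-byte VXLAN header a VTEP prepends to each frame (together with outer Ethernet, IP and UDP headers). This is just the VXLAN wire format for reference, not anything NSX-specific you would run:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given VNI."""
    # Word 1: flags byte 0x08 means "VNI present"; other bits reserved.
    # Word 2: 24-bit VNI in the top three bytes, low byte reserved.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5001)
assert len(hdr) == 8

# Total encapsulation overhead on the wire:
# outer Ethernet (14) + outer IP (20) + outer UDP (8) + VXLAN (8) = 50 bytes,
# which is why the underlay MTU is raised beyond the default 1500.
print(hdr.hex())  # '0800000000138900': flags word, then VNI 5001 (0x001389)
```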

[Screenshot: VXLAN transport configuration showing the VTEP VMkernel interfaces and IP pool]

Here I created an IP pool much like the controller pool from earlier. This pool has a gateway defined, so the VTEPs can communicate with VMkernel interfaces in different subnets. In my topology the Compute A and Compute B clusters have their vmk3 interfaces in the 192.168.250.0/24 subnet, while the Management and Edge cluster has its vmk3 interfaces in the 192.168.150.0/24 subnet.
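
For completeness, the same IP pool can be defined through the API. A minimal sketch, with the pool name, address range and gateway being placeholder values from my topology:

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"  # hypothetical NSX Manager address
AUTH = ("admin", "password")

# An IPAM pool for the Compute clusters' VTEPs in 192.168.250.0/24
payload = """<ipamAddressPool>
  <name>VTEP-Pool-Compute</name>
  <prefixLength>24</prefixLength>
  <gateway>192.168.250.1</gateway>
  <ipRanges>
    <ipRangeDto>
      <startAddress>192.168.250.51</startAddress>
      <endAddress>192.168.250.60</endAddress>
    </ipRangeDto>
  </ipRanges>
</ipamAddressPool>"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/services/ipam/pools/scope/globalroot-0",
    data=payload,
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    verify=False,  # lab only
)
print(resp.status_code, resp.text)  # returns the new pool's ID
```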

The segment ID range carves up the large space of VXLAN Network Identifiers (VNIs) available for assignment to logical segments. If you have multiple NSX domains or regions, you can assign each a subset of the larger pool. Logical segments subsequently draw their VNIs from the segment ID pool. Here I use a segment range of 5000-5999.

[Screenshot: Segment ID configuration screen]

Click the green Add (+) symbol.

[Screenshot: Segment ID pool dialog with the 5000-5999 range]
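
If you prefer the API to the green Add button, the segment ID pool maps to a single call. A sketch with my assumed lab values:

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"  # hypothetical NSX Manager address
AUTH = ("admin", "password")

# Reserve VNIs 5000-5999 for this NSX domain's logical segments
payload = """<segmentRange>
  <name>Lab-Segment-Pool</name>
  <begin>5000</begin>
  <end>5999</end>
</segmentRange>"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/config/segments",
    data=payload,
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    verify=False,  # lab only
)
print(resp.status_code)
```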


A transport zone is created to delineate the span of the VXLAN replication scope and control plane. It can cover one or more vSphere clusters, and an NSX environment can contain one or more transport zones based on user requirements. Transport zone types can be mixed within an environment: unicast, hybrid and multicast control planes can coexist.

[Screenshot: New Transport Zone dialog, Global TZ spanning all clusters with Unicast replication selected]

Here I have made a Global TZ that covers all of my clusters. I have selected Unicast replication, which leverages the NSX Controllers for VTEP learning and BUM traffic replication, with no multicast required from the physical network.
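
The transport zone can likewise be created via the API. A minimal sketch, again assuming the hypothetical domain-c26 cluster MoRef; repeat the inner cluster element for each additional cluster you want in the zone:

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"  # hypothetical NSX Manager address
AUTH = ("admin", "password")

# A unicast transport zone spanning one cluster (add one inner
# <cluster> block per additional cluster MoRef).
payload = """<vdnScope>
  <name>Global-TZ</name>
  <clusters>
    <cluster>
      <cluster>
        <objectId>domain-c26</objectId>
      </cluster>
    </cluster>
  </clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/scopes",
    data=payload,
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    verify=False,  # lab only
)
print(resp.status_code, resp.text)  # returns the new transport zone ID
```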

Done. That is the setup: three short blog posts covering the one-time installation of NSX. It is easy to stand up and run in an environment, which makes NSX very approachable.

Next up, we will deploy a logical switch and connect some workloads on different hosts.


One last note: the only hardware adjustment required on the underlay is an MTU increase from 1500 to at least 1600. This accounts for the 50 bytes of overhead brought on by VXLAN encapsulation (outer Ethernet, IP, UDP and VXLAN headers), with some headroom. I find the majority of installations these days are on networks where jumbo frames are already enabled.
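
To sanity-check the virtual side, a short pyVmomi sketch can report the configured MTU on each distributed switch (the vCenter hostname and credentials below are placeholders). On the physical side, a vmkping ++netstack=vxlan -d -s 1572 between VTEP addresses is the usual end-to-end test, since 1572 bytes of payload plus ICMP and IP headers lands at 1600 on the wire.

```python
# Report the configured MTU of every distributed switch via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only - self-signed certs
si = SmartConnect(host="vcenter.lab.local",            # placeholder vCenter
                  user="administrator@vsphere.local",  # placeholder account
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        # VXLAN transport needs maxMtu >= 1600 end to end
        print(f"{dvs.name}: MTU {dvs.config.maxMtu}")
    view.Destroy()
finally:
    Disconnect(si)
```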
