What’s new in NSX 6.2 – Traceflow

This is a post in the series – What’s new in 6.2? It covers the new features of a pseudo-major NSX release.

Introducing Traceflow

Traceflow adds functionality to the toolbox that NSX provides to help operationalise the NSX network virtualisation platform. Traceflow injects packets of varying types into application topologies and, as the name suggests, traces the flow along the path. It collects observations of actions, hosts, and the relevant components and their names. This helps administrators visualise the path through a topology.

Tracing within a Layer 2 domain

Using Traceflow, an administrator can craft a packet with a variety of settings. As seen below, I have picked a source and destination VM on a Logical Switch. Source and destination can be selected on Logical Switches in Unicast or Hybrid mode.

Screen Shot 2015-08-25 at 9.31.41 PM

Here you can see the ability to select a protocol and then modify additional fields. I have chosen a TCP packet with a source and destination port of 80 for this example. The firewall rules ‘protecting’ my workloads are permit any any.

Screen Shot 2015-08-25 at 9.57.27 PM


This matches the App-01 Web Tier Security Group (matching on a Security Tag) to the individual VMs listed: App01, App01, App02, App02. This rule allows all traffic. When the Traceflow is executed the following output is seen:

Screen Shot 2015-08-25 at 9.31.13 PM

At first this looks rather busy. It is possible to identify the following information from the above figure:

  • SRC: Web01 NIC1
  • DST: App01 NIC1
  • Packet flow and order of operations
  • Objects between two points

These Virtual Machines are on a VXLAN Logical Segment. This allows administrators to provide Layer 2 connectivity between workloads independent of the underlying infrastructure.

The order of operations displayed in the figure is as follows:

  1. The Traceflow packet is injected into the Web01 vNIC.
  2. Received by the Distributed Firewall protecting the Web01 vNIC.
  3. Forwarded (due to a permit rule) by the Distributed Firewall protecting the Web01 vNIC.
  4. Forwarded via the VXLAN Tunnel Endpoint of the source host.
  5. Received via the VXLAN Tunnel Endpoint of the host where App01 is currently located.
  6. Received by the Distributed Firewall protecting the App01 vNIC.
  7. Forwarded (due to a permit rule) by the Distributed Firewall protecting the App01 vNIC.
  8. Delivered to the destination workload App01.
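To make the observation model concrete, here is a minimal Python sketch (my own illustration, not NSX code or its API) that represents each observation as an (action, component) pair and derives the overall verdict the UI reports:

```python
# Illustrative model of a Traceflow observation list (not NSX code).
# Each observation records the action taken and the component reporting it,
# mirroring the Injected -> Received/Forwarded -> Delivered sequence above.

OBSERVATIONS = [
    ("Injected",  "Web01 vNIC"),
    ("Received",  "Distributed Firewall (Web01 vNIC)"),
    ("Forwarded", "Distributed Firewall (Web01 vNIC)"),
    ("Forwarded", "VXLAN Tunnel Endpoint (source host)"),
    ("Received",  "VXLAN Tunnel Endpoint (host running App01)"),
    ("Received",  "Distributed Firewall (App01 vNIC)"),
    ("Forwarded", "Distributed Firewall (App01 vNIC)"),
    ("Delivered", "App01 vNIC"),
]

def verdict(observations):
    """Summarise a trace: Dropped beats Delivered, otherwise Incomplete."""
    actions = [action for action, _ in observations]
    if "Dropped" in actions:
        return "Dropped"
    if actions and actions[-1] == "Delivered":
        return "Delivered"
    return "Incomplete"

print(verdict(OBSERVATIONS))  # Delivered
```

The list order matters: it is the order of operations, so reading the component names top to bottom walks the packet through the topology.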

This gives administrators visibility into all the objects in the topology between two end points.

Identifying the Deniers

So what would happen if the administrator decided to ratchet down security? What would occur if the rule was changed to the one below?

Screen Shot 2015-08-25 at 9.58.15 PM

Time to see how Traceflow reacts. When the administrator runs Traceflow a second time the following output is seen.

Screen Shot 2015-08-25 at 9.59.30 PM

The result shows one Dropped observation in red: something has been blocked. The sequence is as follows:

  1. The Traceflow packet is injected into the Web01 vNIC.
  2. Received by the Distributed Firewall protecting the Web01 vNIC.
  3. Dropped immediately (due to the deny rule) by the Distributed Firewall protecting the Web01 vNIC on egress.
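The blocked trace can be sketched the same way. A trace stops at the first Dropped observation, and the component name on that observation is what points at the offending rule (again, an illustration of the observation model, not the NSX API):

```python
# Illustrative model of a blocked trace (not NSX code): find the observation
# where the packet was dropped and report the component that dropped it.

observations = [
    ("Injected", "Web01 vNIC"),
    ("Received", "Distributed Firewall (Web01 vNIC)"),
    ("Dropped",  "Firewall (Rule 1005)"),
]

def first_drop(observations):
    """Return the component of the first Dropped observation, or None."""
    for action, component in observations:
        if action == "Dropped":
            return component
    return None

print(first_drop(observations))  # Firewall (Rule 1005)
```

In the UI this is exactly the red row: the action column says Dropped and the component column carries the rule ID to chase.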

The Component Name for the Dropped observation states that Firewall (Rule 1005) is the culprit. All the objects in the Component Name column are hyperlinked, revealing more information about the object to the user.

Screen Shot 2015-08-25 at 9.59.45 PM

The hyperlinked drop details show that Rule ID 1005 is the culprit, as suspected. The reason is given as FW_RULE.

If this is not the desired behaviour, or the rule should not be enforced on this workload, the administrator can quickly and efficiently identify the rule and remediate accordingly.

Layer 3 Traces just got visible

Taking the same security policies used on the Layer 2 domain, it is possible to perform a Traceflow across routed segments. In this example the administrator decides to trace to a routed address:

Screen Shot 2015-08-25 at 9.33.14 PM

The difference between this Traceflow and the last one is that the destination is an IP address rather than a VM, and it is an ICMP trace. The address is attached to the DLR: in this case it is the gateway IP for that subnet, and it is local to all hosts in the transport zone to which the Logical Switch and DLR are assigned. When the flow is executed the output below is seen:

Screen Shot 2015-08-25 at 9.35.09 PM


Time to look at the steps occurring here to gain insight into how the traffic is processed:

  1. The Traceflow packet is injected into the vNIC of the Web01 VM.
  2. Received by the Distributed Firewall protecting the Web01 vNIC.
  3. Forwarded (due to a permit rule) by the Distributed Firewall protecting the Web01 vNIC.
  4. The Logical Switch App-01-Flat forwards the packet.
  5. The packet is received by App-01-DLR.
  6. The packet is returned by App-01-DLR.
  7. The Logical Switch App-01-Flat forwards the returning packet.
  8. Received by the Distributed Firewall protecting the Web01 vNIC.
  9. Forwarded (due to a permit rule) by the Distributed Firewall protecting the Web01 vNIC.
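Because the DLR answers the ICMP echo itself, this observation list has two halves: an outbound leg up to the DLR and a return leg back to the source vNIC. A small sketch (my own model, not NSX code) of splitting the trace at the turnaround:

```python
# Illustrative model (not NSX code) of the ICMP round trip above: the DLR
# answers the echo request, so the observation list splits at "Returned".

observations = [
    ("Injected",  "Web01 vNIC"),
    ("Received",  "Distributed Firewall (Web01 vNIC)"),
    ("Forwarded", "Distributed Firewall (Web01 vNIC)"),
    ("Forwarded", "Logical Switch App-01-Flat"),
    ("Received",  "App-01-DLR"),
    ("Returned",  "App-01-DLR"),
    ("Forwarded", "Logical Switch App-01-Flat"),
    ("Received",  "Distributed Firewall (Web01 vNIC)"),
    ("Forwarded", "Distributed Firewall (Web01 vNIC)"),
]

def split_round_trip(observations):
    """Split a round-trip trace into outbound and return halves at 'Returned'."""
    turn = next(i for i, (action, _) in enumerate(observations)
                if action == "Returned")
    return observations[:turn + 1], observations[turn:]

outbound, inbound = split_round_trip(observations)
print(len(outbound), len(inbound))  # 6 4
```

Reading the two halves separately makes it obvious the Distributed Firewall is consulted on both legs, which is why a one-way permit rule can still break a ping.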

Screen Shot 2015-08-25 at 9.47.57 PM

Like before, it is possible to explore the related objects from the Component Name hyperlink. The observation details below show the Segment ID and Component Name. It is very handy to know which VXLAN Network Identifier (VNI) is assigned to a Logical Switch.

Screen Shot 2015-08-25 at 9.48.03 PM



Traceflow is a great addition to the tools within VMware NSX for vSphere. It is born of a maturing platform and puts actionable information at an administrator’s fingertips. I personally like how I can correlate firewall policies to where a packet stops, and how easily I can inject varying traffic types into my topologies.

VMware NSX for vSphere 6.2 is available now.

[Release] NSX 6.2 is now live!

A new NSX version dropped today. Here are the release notes. I will follow up with some feature-focused blogs soon!

NSX vSphere 6.2 includes the following new and changed features:

  • Cross vCenter Networking and Security
    • NSX 6.2 with vSphere 6.0 supports Cross vCenter NSX where logical switches (LS), distributed logical routers (DLR) and distributed firewalls (DFW) can be deployed across multiple vCenters, thereby enabling logical networking and security for applications with workloads (VMs) that span multiple vCenters or multiple physical locations.
    • Consistent firewall policy across multiple vCenters: Firewall Rule Sections in NSX can now be marked as “Universal” whereby the rules defined in these sections get replicated across multiple NSX managers. This simplifies the workflows involving defining consistent firewall policy spanning multiple NSX installations
    • Cross vCenter vMotion with DFW: Virtual Machines that have policies defined in the “Universal” sections can be moved across hosts that belong to different vCenters with consistent security policy enforcement.
    • Universal Security Groups: Security Groups in NSX 6.2 that are based on IP Address, IP Set, MAC Address and MAC Set can now be used in Universal rules whereby the groups and group memberships are synced up across multiple NSX managers. This improves the consistency in object group definitions across multiple NSX managers, and enables consistent policy enforcement
    • Universal Logical Switch (ULS) – This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of logical switches that can span multiple vCenters, allowing the network administrator to create a contiguous L2 domain for an application or tenant.
    • Universal Distributed Logical Router (UDLR) – This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of distributed logical routers that can span multiple vCenters. The universal distributed logical routers enable routing across the universal logical switches described earlier. In addition, NSX UDLR is capable of localized north-south routing based on the physical location of the workloads.
  • Operations and Troubleshooting Enhancements
    • New traceflow troubleshooting tool: Traceflow is a troubleshooting tool that helps identify if the problem is in the virtual or physical network. It provides the ability to trace a packet from source to destination and helps observe how that packet passes through the various network functions in the virtual network.
    • Flow monitoring and IPFIX separation: In NSX 6.1.x, NSX supported IPFIX reporting, but IPFIX reporting could be enabled only if flow reporting to NSX Manager was also enabled. Starting in NSX 6.2.0, these features are decoupled: you can enable IPFIX independent of flow monitoring on NSX Manager.
    • New CLI monitoring and troubleshooting commands in 6.2: See the knowledge base article for more information.
    • Central CLI: Central CLI reduces troubleshooting time for distributed network functions. Commands are run from the NSX Edge command line and retrieve information from controllers, hosts, and the NSX Manager. This allows you to quickly access and compare information from multiple sources. The central CLI provides information about logical switches, logical routers, distributed firewall and edges.
    • CLI ping command adds configurable packet size and do-not-fragment flag: Starting in NSX 6.2.0, the NSX CLI ‘ping’ command offers options to specify the data packet size (not including the ICMP header) and to set the do-not-fragment flag. See the NSX CLI Reference for details.
    • Show health of the communication channels: NSX 6.2.0 adds the ability to monitor communication channel health. The channel health status between NSX Manager and the firewall agent, between NSX Manager and the control plane agent, and between host and the NSX Controller can be seen from the NSX Manager UI. In addition, this feature detects when configuration messages from the NSX Manager have been lost before being applied to a host, and it instructs the host to reload its NSX configuration when such message failures occur.
    • Standalone Edge L2 VPN client CLI: Prior to NSX 6.2, a standalone NSX Edge L2 VPN client could be configured only through OVF parameters. Commands specific to standalone NSX Edge have been added to allow configuration using the command line interface. The OVF is now used for initial configuration only.
  • Logical Networking and Routing
    • L2 Bridging Interoperability with Distributed Logical Router: With VMware NSX for vSphere 6.2, L2 bridging can now participate in distributed logical routing. The VXLAN network to which the bridge instance is connected, will be used to connect the routing instance and the bridge instance together.
    • Support of /31 prefixes on ESG and DLR interfaces per RFC 3021
    • Support of relayed DHCP request on the ESG DHCP server
    • Ability to keep VLAN tags over VXLAN
    • Exact Match for Redistribution Filters: The redistribution filter uses the same matching algorithm as an ACL, so it performs an exact prefix match by default (unless the le or ge options are used).
    • Support of administrative distance for static route
    • Ability to enable/disable uRPF check per interface on Edge
    • Display AS path in CLI command show ip bgp
    • HA interface exclusion from redistribution into routing protocols on the DLR control VM
    • Distributed logical router (DLR) force-sync avoids data loss for east-west routing traffic across the DLR.
    • View active edge in HA pair: In the NSX 6.2 web client, you can find out if an NSX Edge appliance is the active or backup in an HA pair.
    • REST API supports reverse path filter(rp_filter) on Edge: Using the system control REST API, rp_filter sysctl can be configured, and is not exposed on REST API for vnic interfaces and sub-interfaces. See the NSX API Guide for more information.
    • Behavior of the IP prefix GE and IP prefix LE BGP route filters: In NSX Release 6.2, the following enhancements are made to BGP route filters:
      • For the null route network address (defined as ANY or in CIDR format), the less-than-or-equal-to (LE) and greater-than-or-equal-to (GE) keywords are no longer allowed. In previous releases, these keywords were allowed.
      • LE and GE values in the range 0-7 are now treated as valid. In previous releases, this range was not valid.
      • For a given route prefix, you can no longer specify a GE value that is greater than the specified LE value.
  • Networking and Edge Services
    • The management interface of the DLR has been renamed to HA interface. This has been done to highlight the fact that HA keepalives go through this interface and that issues on this interface can result in a split-brain situation.
    • LB Health Monitoring Improvements: Delivers granular health monitoring, that reports information on failure, keeps track of last health check and status change, and reports failure reasons.
    • Support VIP and Pool port range: Enables LB support for applications that require a range of ports.
    • Increased number of VIP: VIP support increased up to 1024.
  • Security Service Enhancements
    • New IP address discovery mechanisms for VMs: Authoritative enforcement of security policy based on VM names or other vCenter-based attributes requires that NSX know the IP address of the VM. In NSX 6.1 and earlier, IP address discovery for each VM relied on the presence of VMware Tools (vmtools) on that VM or the manual authorization of the IP address for that VM. NSX 6.2 introduces the option to discover the VM’s IP address using DHCP snooping or ARP snooping. These new discovery mechanisms enable NSX to enforce IP address-based security rules on VMs that do not have VMware Tools installed.
  • Solution Interoperability
    • Support for vSphere 6.0 Platform Services Controller topologies: NSX now supports external Platform Services Controllers (PSC), in addition to the already supported embedded PSC configurations.
    • Support for vRealize Orchestrator Plug-in for NSX 1.0.2: With NSX 6.2 release, NSX-vRO plug-in v1.0.2 is introduced in vRealize Automation (vRA).


Check out the release notes and documentation here:

VMware NSX for vSphere 6.2.0 – GA on August 20, 2015
NSX–V 6.2.0 build# 2986609