VMware NSX Use Case Library – #001

Distributed Firewall – Using vCenter objects to provide policy enforcement for VM to VM traffic


This use case demonstrates the use of NSX’s distributed firewall in conjunction with vCenter objects. In addition to matching on source and destination IP address and port, vCenter objects such as logical switches, VM tags, VM names, datacenters or vApps allow firewalling at a very granular level.

Pre Conditions

  • vCenter and NSX manager configured.
  • NSX host and cluster preparation complete.
  • IP connectivity between hosts.
  • (Optional) Applications deployed so that firewall policy enforcement can be validated.

Post Conditions

Success End Condition

  • The configured firewall policy is distributed throughout the environment to the relevant end points requiring enforcement.

Failure End Condition

  • Distributed Firewall fails to update due to communication plane error.
  • Incorrect ruleset applied to end point resulting in no enforcement.

Minimal Guarantee

  • Distributed Firewall delivers a partial match against ruleset due to operator error.


  • Virtual Machines initiate communication with a desired end point. As a packet leaves the VM’s vNIC, the distributed firewall function, residing between the vNIC and the vSwitch, enforces the defined security policy.
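The evaluation model just described, each packet checked against an ordered ruleset as it leaves the vNIC with the first match winning, can be sketched in a few lines of Python. This is a conceptual sketch only, not VMware’s implementation, and the tier names and ports are made up for illustration:

```python
def evaluate(packet, rules, default_action="block"):
    """Return the action of the first rule matching the packet.

    Mirrors the first-match-wins evaluation a distributed firewall
    performs between the vNIC and the vSwitch (conceptual sketch).
    """
    for rule in rules:
        if (rule["src"] in ("any", packet["src"])
                and rule["dst"] in ("any", packet["dst"])
                and rule["port"] in ("any", packet["port"])):
            return rule["action"]
    return default_action  # the default rule catches everything else

# Hypothetical ruleset built from vCenter object names rather than IPs.
rules = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
    {"src": "any", "dst": "db-tier", "port": 3306, "action": "block"},
]
```

A packet not matching any rule falls through to the default action, which is exactly the role of the default rule in the NSX firewall table.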

Use Case Expected Flow

  1. Log into the Networking and Security plugin of vCenter.
  2. Configure the Firewall section of NSX.
  3. Populate the desired rulesets by applying business requirements.
  4. Commit and save the firewall change.

Use Case Variations

  • The NSX Distributed Firewall function can be replicated with an NSX Edge device, although enforcement then happens in a virtual appliance rather than in the hypervisor kernel, making it far less efficient due to the appliance overhead.

Integration Points


Suggested Test Metrics

  • Test application availability after permitting or denying the service port.
  • Confirm expected behaviour when tiering or nesting policies.
  • Confirm rulesets are being distributed to hosts via VSIP through the command line.

Breaking down a VIRL file

When building a topology within Cisco VIRL it is a good idea to familiarise yourself with the file structure. If you’ve been exposed to XML and Cisco config files before (I would hope ALL of you would have) then nothing will look too scary.

I am going to take my INE v5 topology I’ve made for this post and look at the file format.

<node name="R1" type="SIMPLE" subtype="IOSv" location="393,166">
 <entry key="Auto-generate config" type="Boolean">false</entry>
 <entry key="config" type="String">!

Here we see the node description for R1. It denotes a SIMPLE node with a subtype of IOSv; this router will not auto-generate a configuration from AutoNetKit, and its configuration is held as a String (Oooh Yeah, I know what that is. My Python programming is paying off!). Immediately below this entry is the output of a Cisco router configuration file. One that we all know and are familiar with.

! Last configuration change at 10:37:23 UTC Sun Jul 6 2014
version 15.4
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
hostname R1

This is then repeated for each device within the topology. Whilst this loads and populates items on our canvas and associates a configuration with each node, we still need to define our connections. After our string there is some additional information.

<interface id="0" name="GigabitEthernet0/1"/>
 <interface id="1" name="GigabitEthernet0/2"/>
 <interface id="2" name="GigabitEthernet0/3"/>

We can see each interface ID maps to a named interface. The named interface is what Cisco administrators are familiar with; VIRL itself works with the ID. This mapping is now an object which can be referenced, and it is referenced here:

<connection dst="/virl:topology/virl:node[1]/virl:interface[1]" src="/virl:topology/virl:node[3]/virl:interface[1]"/>
 <connection dst="/virl:topology/virl:node[2]/virl:interface[1]" src="/virl:topology/virl:node[3]/virl:interface[2]"/>

If you look at the connection source and destination you can see the basic structure of how everything is addressed. Each reference points into the VIRL topology at a numbered node, the number identifying which IOS/NX-OS instance running within VIRL is meant, and then at a numbered interface on that node: node 1, interface 1 connects to node 3, interface 1. The interface ID then corresponds to the named interface, and you have a topology.
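That addressing scheme is easy to resolve programmatically. Here is a short Python sketch that parses a cut-down, hypothetical three-node fragment (real .virl files carry a virl: XML namespace, stripped here for brevity) and turns each connection into a pair of node:interface names:

```python
import re
import xml.etree.ElementTree as ET

# Minimal, made-up fragment in the shape of a .virl file.
SAMPLE = """\
<topology>
  <node name="R1" type="SIMPLE" subtype="IOSv">
    <interface id="0" name="GigabitEthernet0/1"/>
  </node>
  <node name="R2" type="SIMPLE" subtype="IOSv">
    <interface id="0" name="GigabitEthernet0/1"/>
  </node>
  <node name="SW" type="SIMPLE" subtype="IOSv">
    <interface id="0" name="GigabitEthernet0/1"/>
    <interface id="1" name="GigabitEthernet0/2"/>
  </node>
  <connection dst="/virl:topology/virl:node[1]/virl:interface[1]"
              src="/virl:topology/virl:node[3]/virl:interface[1]"/>
  <connection dst="/virl:topology/virl:node[2]/virl:interface[1]"
              src="/virl:topology/virl:node[3]/virl:interface[2]"/>
</topology>
"""

def resolve(ref, nodes):
    """Turn an XPath-style reference into 'NodeName:InterfaceName'.

    The bracketed indexes are 1-based: node[3] is the third node.
    """
    n, i = map(int, re.findall(r"\[(\d+)\]", ref))
    node = nodes[n - 1]
    iface = node.findall("interface")[i - 1]
    return f'{node.get("name")}:{iface.get("name")}'

root = ET.fromstring(SAMPLE)
nodes = root.findall("node")
links = [(resolve(c.get("src"), nodes), resolve(c.get("dst"), nodes))
         for c in root.findall("connection")]
```

For the sample above, `links` resolves to SW:GigabitEthernet0/1 ↔ R1:GigabitEthernet0/1 and SW:GigabitEthernet0/2 ↔ R2:GigabitEthernet0/1, which is the same mental mapping described in the paragraph above.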

For reference – a multi-point connection network – or a dumb L2 segment simply appears like this:

<node name="DMVPN" type="SEGMENT" location="653,296"/>
 <node name="Network 1-4-6" type="SEGMENT" location="573,84"/>

With that you have been able to break down the basic anatomy of a .virl file. It is nice and neat and keeps everything together. There is GitHub integration into VIRL as well as sharing folders. Creating a topology and sharing it with co-workers for peer review or validation just got very easy!

VMware NSX Use Case Library

This post serves as an index for a VMware NSX Use Case Library. I am planning to share use cases where NSX is applicable, defining success criteria and how to go about testing an implementation. This will give administrators an idea of how to validate and test features within their NSX environment.

The template used is as follows:


The summary outlines the problem space and what the particular technology subset can solve. These can be technology problems, business challenges or simplifying deployment.

Pre Conditions

What is required to be configured, installed and running before a feature can be used. This assumes hardware such as servers and networking equipment are in place.

Post Conditions

Success End Condition

What a successful deployment will yield and what to expect.

Failure End Condition

If a technology fails to deploy or doesn’t produce the desired outcome then what would the result be?

Minimal Guarantee

If a technology fails to fully deploy resulting in it half working what functionality do I have?


This section describes the order of operations on specific function and what is occurring for the technology subset to work.

Use Case Expected Flow

The steps required to deploy the use case from start to finish.

Use Case Variations

Can the solution be implemented in a different way? There are many ways to skin a cat and that can be said about technology deployments.

Integration Points

The solution may require an existing external service such as Active Directory or another third-party service. This describes the interaction and why it is required.

Suggested Test Metrics

These test metrics can be used to test and validate the solution works and is implemented correctly.

I hope this gives my readers an insight into some of the use cases and problems that can be solved with VMware NSX and network function virtualisation.

BURST: INE RS v5 base topology with Cisco VIRL

I have uploaded and attached the VIRL file I have made for my studies. It is the ground-floor topology that allows potential CCIEs like yours truly to get their hands dirty. To be used in conjunction with the INE paid workbooks or labs, this topology is updated for v5. INE has moved to a virtual infrastructure with switches for their topology, in alignment with the industry.

I had noted that a lot of labs didn’t adjust or alter the interfaces on the switches themselves, and I have therefore represented those segments with multi-point connected networks. Whilst there is an L2 IOS image within VIRL, at this stage I have not dug deep enough to navigate the nuances of that image. So far I have done a lot of the routing labs without any hiccups.


Here is an image of the basic topology too. This topology has different interfaces than what is listed in the lab guides and startup configurations. Simply adjust the boot configuration if you’re loading the INE configuration.

What is great is that I know any issues that occur are due to my configuration and not because of some black magic needed to get IOS to work. The problem I had with GNS3 is that I would waste valuable study time tweaking and adjusting my lab due to issues with GNS3 itself. Once you get your hands onto Cisco VIRL or Cisco Modelling Labs you can use this file!

Here is the INE package that I use. 

Here is the file for VIRL.


Cisco VIRL : Where did NX-OS go?

Cisco VIRL allows administrators and network engineers to build network topologies rapidly and validate a variety of use cases on a virtual platform. Superb for change validation, and even better when you consider that traditionally this required physical equipment or ‘guestimation’.

Many networks these days contain a variety of Cisco’s portfolio. They vary between firewalls, campus switching, edge routing and data centre switching. Whilst this doesn’t change the underlying concepts of networking, it does introduce different operating systems: NX-OS, IOS, IOS-XR and ASA are currently the core flavours. Unlike Junos, which is one unified OS (with a fork for SRX), Cisco has a few. Whilst the merits of this can be argued another day, it poses an issue for validation.

Cisco VIRL allows the running of these operating systems in software! Now it is possible to test NX-OS against ASA with a backbone network built from IOSv and IOS-XR.

Screen Shot 2014-06-30 at 10.16.12 pm

When you first login to Cisco VIRL you will notice that there is only IOSv. You need to synchronise your VMmaestro client with the VIRL server.



Screen Shot 2014-06-30 at 10.16.27 pm



Notice that NX-OSv is listed but there is a ? within the icon and its interfaces aren’t defined. If you were to drag this icon to your canvas you would notice there is no image defined to boot from either. Whilst Cisco VIRL allows you to define a particular image per OS type, the server holds a raft of default configurations.

Screen Shot 2014-06-30 at 10.16.38 pm


Simply click Fetch from Server and confirm that you want to download the configuration from the server. Look at the list: populated with more devices than you’ve traditionally been able to use. What is even better is that you don’t have to install and deploy each one individually on the canvas. You just click, connect, start and lab!

Screen Shot 2014-06-30 at 10.17.15 pm


Ah. That is better. The palette is populated with additional devices.


Screenshot 2014-07-04 06.55.36

There you go. NX-OS. Simply connect this to the other devices in the topology and you will have a multi-platform test bed.

I have labbed a lot more recently since getting my hands on Cisco VIRL. This is due to having my platform hosted and always on, and not worrying about recabling, addressing and all the things that eat into technology lab time. After living in the land of VMware for a little bit I lost some of my hands-on skills. It’s all coming back. It must be the muscle memory!

Installing VMware NSX – Part 7

So far the topology in this series has a three-tier logical application deployed. It also has an Edge Services Gateway connected to the uplink of the logical router, with an uplink of its own to the physical infrastructure. The next step is informing the Edge Services Gateway about the Logical Interfaces (LIFs) connected to the logical router. This can be performed by a dynamic routing protocol such as OSPF, IS-IS or BGP, or by traditional static routing.

Screen Shot 2014-06-26 at 3.09.16 pm

This example uses an Interior Gateway Protocol (IGP) known as Open Shortest Path First (OSPF). The first configuration point will be the Logical Distributed Router. Select NSX Edges and double-click on the Logical Distributed Router that was deployed previously.

Screen Shot 2014-06-27 at 2.11.48 pm

Under the Manage tab select Routing, Global Configuration and select Edit on Dynamic Router Configuration.

Screen Shot 2014-06-27 at 2.12.40 pm

Select the Router ID. In this example it is the address of the Uplink interface that connects to the Transit Logical Switch facing the Edge Services Gateway.

Screen Shot 2014-06-26 at 3.27.08 pm

Accept the changes and click Publish Changes. Select the OSPF tab on the left side.

Screen Shot 2014-06-26 at 3.24.22 pm

Note the default configuration of OSPF. The Area to Interface mapping, Area Definition and OSPF Configuration need to occur. Click the Edit button for OSPF Configuration.

Screen Shot 2014-06-26 at 3.24.38 pm

Tick the Enable OSPF box. The Protocol Address is that of the Control VM for the Logical Router; the Control VM is responsible for the OSPF control plane, i.e. maintaining OSPF state, neighbour relationships and route propagation. The Forwarding Address is the uplink interface IP address. Click OK to finish.

Screen Shot 2014-06-26 at 3.24.46 pm

Next click the Green Plus under Area Definitions. OSPF neighbours must be in the same area to form an adjacency. We defined Area 10 earlier and therefore we need to use it again.

Screen Shot 2014-06-26 at 3.27.49 pm

Select the Uplink interface. This is the interface you want to present to OSPF to be included in the routing protocol.

Screen Shot 2014-06-26 at 3.27.54 pm

Review the changes and now click Publish Changes. This will enable OSPF on your Logical Router.
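Where clicking through the UI does not scale, the same configuration can be pushed through NSX’s REST API. The sketch below only builds the XML body; the endpoint path (/api/4.0/edges/{edge-id}/routing/config/ospf) and element names are from my reading of the NSX API guide, so treat them as assumptions and verify against your NSX version before PUTting anything:

```python
import xml.etree.ElementTree as ET

def ospf_payload(area_id, vnic_index):
    """Build an XML body enabling OSPF with one area definition and one
    area-to-interface mapping (element names assumed from the API guide)."""
    ospf = ET.Element("ospf")
    ET.SubElement(ospf, "enabled").text = "true"
    areas = ET.SubElement(ospf, "ospfAreas")
    area = ET.SubElement(areas, "ospfArea")
    ET.SubElement(area, "areaId").text = str(area_id)
    interfaces = ET.SubElement(ospf, "ospfInterfaces")
    interface = ET.SubElement(interfaces, "ospfInterface")
    ET.SubElement(interface, "vnic").text = str(vnic_index)
    ET.SubElement(interface, "areaId").text = str(area_id)
    return ET.tostring(ospf, encoding="unicode")

body = ospf_payload(area_id=10, vnic_index=0)
# The PUT itself would look roughly like (untested, credentials/host are yours):
# requests.put(f"https://{nsx_manager}/api/4.0/edges/{edge_id}/routing/config/ospf",
#              data=body, auth=(user, password),
#              headers={"Content-Type": "application/xml"})
```

Building the body separately like this lets you review the generated XML, the programmatic equivalent of checking the screen before clicking Publish Changes.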

Screen Shot 2014-06-26 at 3.29.04 pm

Click the Route Redistribution menu along the left side. Notice how there is already a redistribution rule for any Connected interface into OSPF.   Remember these? All these L3 interfaces are directly connected interfaces.

Screen Shot 2014-06-26 at 3.28.17 pm

By redistributing connected routes into OSPF it will allow our LIFs that are in the kernel of every hypervisor to be redistributed. This will present the LIFs via routes in OSPF to the NSX Edge Services Gateway.

Now it is time to enable OSPF on the Edge Services Gateway.

Screen Shot 2014-06-26 at 3.20.56 pm

Double click the Edge Services Gateway. This will open an advanced preference pane. Select the Manage tab and it will display settings about the Virtual Appliance. Select Routing.

Screen Shot 2014-06-26 at 3.21.08 pm

Notice the Default Gateway is already populated from the deployment window.

Select the Edit button next to Dynamic Routing Configuration.

Screen Shot 2014-06-26 at 3.21.20 pm

The Router-ID needs to be configured. Use the interface address of the Uplink interface. Do not enable OSPF from this window. Click Save.

Screen Shot 2014-06-26 at 3.22.13 pm

Publish the changes by clicking the Publish Changes banner across the top. This allows administrators to configure various elements and publish when ready. Along the left side select OSPF.

Screen Shot 2014-06-26 at 3.22.20 pm

Network Engineers will note familiar terminology here in regards to OSPF. Click the Green Plus under the Area Definitions section.

Screen Shot 2014-06-26 at 3.23.31 pm

Next create an area for OSPF. The area in this example is 10. If required, change the Authentication, and then click OK. Next, select the Green Plus under the Area to Interface Mapping section.

Screen Shot 2014-06-26 at 3.23.48 pm

Configure the interface that is required in the OSPF routing process and the area it should be residing in. Area 10 is the example used here.

Screen Shot 2014-06-26 at 3.27.54 pm

Notice the vNIC in Area to Interface Mapping is now in Area 10 with the default timers. Up the top, click Enable to enable the OSPF protocol. Now confirm that OSPF is enabled and that routes are being received from the Logical Router.

Screen Shot 2014-06-27 at 3.35.30 pm

Here I have used SSH to log into the NSX Edge Services Gateway. The command show ip route shows that the networks redistributed on the logical router are being advertised by OSPF to the Edge Services Gateway, and the default route is in place. From the output of show ip ospf statistics we can see that the Shortest Path First algorithm has been run. show ip ospf neighbors outputs the neighbour relationship between the Logical Router and the Edge Services Gateway.

This post has seen the administrator configure dynamic routing on the NSX Edge Services Gateway and the Logical Router, so the logical application now has a connectivity method to the physical world. The next post will look at vNIC-level firewalling with the distributed firewall and the start of policy enforcement.

Installing VMware NSX Part 6

So far this blog series has configured logical routing and logical switching, providing connectivity between our application tiers. It has come to the point where an administrator must determine how their application will be accessed. One method of providing connectivity to the logical application network built thus far is deploying an Edge Services Gateway.

Screen Shot 2014-06-26 at 3.03.49 pm

An Edge Services Gateway (ESG) is a virtual appliance that can provide routing, firewall, load balancer, VPN, Layer 2 bridging services and more. To deploy an ESG click on NSX Edges then the green plus.

Screen Shot 2014-06-26 at 3.04.30 pm

Ensure the Edge Services Gateway radio button is selected and populate the relevant hostname and subsequent details. Click Next.

Screen Shot 2014-06-26 at 3.04.48 pm

Populate the administrator credentials and select Next.

Screen Shot 2014-06-26 at 3.04.59 pm


Here you can select the size of the appliance. The appliance sizing determines resources used when it is active. This allows the administrator a choice when selecting what is relevant for a specific application. This example chooses a Large instance.

Size               CPU      Memory
Compact            1 vCPU   512 MB
Large              2 vCPU   1024 MB
Extra Large        4 vCPU   1024 MB
Quad Extra Large   6 vCPU   8192 MB

This table highlights the resources required for each deployment of a NSX ESG appliance. The Large instance consumes 2 vCPU and 1024 MB of RAM.

Screen Shot 2014-06-26 at 3.05.10 pm

After selecting the size of the ESG appliance, a Resource Pool and Datastore must be selected. Select the Green Plus and allocate the Virtual Appliance to a Datastore.

Screen Shot 2014-06-26 at 3.07.19 pm

There are three connections required of this Edge Services Gateway; look at the reference diagram at the start of the post. One is an uplink into the external network, one an internal interface connecting to the Transit network that our application’s Logical Router is attached to, and one an internal interface to a VLAN-backed port-group that our management host connects on. The uplink into the external network is a routed link to an IP address within the physical infrastructure; a port-group associated with a physical uplink is one method of connecting back to it.


Screen Shot 2014-06-26 at 3.07.15 pm

After selecting and creating these interfaces with their subsequent addressing, select Next.

Screen Shot 2014-06-26 at 3.08.52 pm
To specify a default gateway select the relevant vNIC and assign a gateway IP. This will allow a default route and a next hop IP address to be installed into the routing table. Click Next.

Screen Shot 2014-06-26 at 3.08.59 pm

In this example the Default Traffic Policy radio button is set to Accept. If HA has been configured, the keep-alive link and relevant configuration can be specified here. Click Next.

Screen Shot 2014-06-26 at 3.09.06 pm


Confirm the details that you have entered into the NSX edge. This will allow the administrator to review the configuration before committing to the deployment. Select Finish.

Screen Shot 2014-06-26 at 3.09.16 pm

With that, the NSX Edge Services Gateway will deploy and be ready for configuration. With very simple inputs we have deployed a virtual appliance that delivers load balancing, routing, VXLAN/VLAN termination, firewall functions, VPN services, L2 bridging and more. The next post will configure dynamic routing between the two routers we have deployed thus far.

Cisco VIRL : Autonetkit

Cisco Virtual Internet Routing Lab (VIRL) is a platform that gives administrators and developers the ability to test changes, new technology or new architectures in a safe lab environment. This is a fantastic tool for studying for the CCIE. It allows you to spin up network topologies on demand. A feature within VIRL is the ability to point, click and drop an IOSv image and automatically configure IP addressing, L3 routing, BGP and VRFs.

It is called Autonetkit. It takes a set of predefined variables and injects them into the environment. I will give you a glimpse of how it can cut a massive amount of time out of configuring a base environment. When learning a new feature within BGP, for example, and setting up GNS3 or another environment, you can spend a lot of time on scaffolding. There have been many times when I had 90 minutes of study time and 25-30 minutes went on thinking up a topology and addressing scheme and configuring it. Whilst not a hard task, it is time that gets sucked away. Now with this feature you can have the core infrastructure up and running very quickly and focus on deploying BGP communities, MP-BGP or working with advanced functions.

Before I start working on the basic Autonetkit configuration I have simply selected IOSv and clicked three times on my canvas. I have subsequently stitched together my links with some ethernet.

Screenshot 2014-06-24 19.51.43

Clicking on the canvas shows global Autonetkit features. Here you can see settings such as the infrastructure subnet, loopback subnet and VRF subnet. You can also note the address family is v4. Good news: dual stack and v6 are there too! If you want the default settings then click default; otherwise alter them to suit.

If you select an individual router or device you can select Autonetkit features that are pertinent to that device. This allows deviation from the overarching AS or routing instance. It allows you to quickly configure the basics. If you do not want Autonetkit simply click the check box next to “Auto-generate the configuration based on these attributes.”

Screenshot 2014-06-24 19.52.11

When you start the instance (green play button at the top) and get it running, you will notice that the actual IOS configuration is generated. On the configuration tab within the properties view you can see the actual config. It is also VERY VERY easy to save, import and export topologies and configurations. This was something that was a little immature within other free platforms.

Screenshot 2014-06-24 19.52.45

I have consoled to the respective devices and simply run basic verification commands. Show ip route immediately shows the extent of how powerful automated base configuration is.

Screenshot 2014-06-24 20.16.57

As part of VIRL (this is an Extended Field Trial and not a final consumer product) I can load my topology against the Cisco Modeling Labs visualisation engine (I believe what I have is just a visual representation, the visualisation engine being the same). This allows for visualisation of the topology across a variety of views: IPv4, physical, IGP neighbours and BGP peerings are just a few. This is a view of my topology, generated with Autonetkit, that models ebgp_v4 peers. You can see I have different peerings between the green AS and the yellow AS. If I hover over the devices I get contextual information about the session.

Screenshot 2014-06-24 20.17.23


Here is a quick little video demonstrating this feature.

This is only one minor feature of VIRL but it has already saved me 30 minutes per study session since I picked it up. That is 25 minutes more time at the CLI working on technologies as opposed to simply inputting addressing. Pretty cool stuff! Plenty more coming, including GitHub integration, connections to real devices, IOS-XE, NX-OS, L2 IOU and more. That is, if I am not studying in my very easy to access and consume lab! Onwards to the CCIE!

Look within the walls for the enemy

The amount of respect I have for Lisa cannot be overstated. Every sentence summoned forth through her fingers, and subsequently the compute device she is in front of, has such calculated thought behind it. It was her recent blog post, which referenced Motte and Bailey architectures in conjunction with modern corporations, that made me realise she had articulated something that had been rattling around in my head for a while.

It is quite well known that I am a man who enjoys medieval fantasy, science fiction and medieval history. Throughout history many civilisations have risen and fallen on the back of warfare. Medieval warfare was harsh, brutal and very deadly. The Norman invasion of 1066 famously brought feudalism to England, and along with it well-defined castle architectures. Motte and Bailey castles took advantage of hilled terrain, forming natural defences that were augmented by man-made construction. For the uneducated, religion-following masses who tilled the fields and supplied the lord of the land, town halls and castles served as points of refuge. This bastion of safety formed a security blanket against the harsh reality of war, invasion and pillaging.

With the evolution of defences within the realm came a new drive in the advancement of weaponry. Siege warfare (the word siege derives from the Latin for “to sit”) introduced weaponry such as trebuchets, onagers, catapults and battering rams. Offensive siege warfare involved controlling supply lines and strangling the enemy through control of resources. Whilst modern-day accounts of warfare depict rapid ends to sieges, castle strongholds could hold out against an invading army for months.

Where the objective was required at all costs, a campaign may not have been fought by military muscle alone. Whilst the perception was that everyone inside the wall had allegiance to the landholder, it was often found that this wasn’t the case. The black and white notion of inside good, outside bad was blurred and smeared by loyalties that had a price. Serfs, servants, military personnel or the lowly stable boy all had a price. Trained spies or paid informants leaked information, facilitated tasks or quite simply compromised fortifications during a siege.

Using the Motte and Bailey castle architecture as a reference point, a comparison can be drawn between inside versus outside the castle defences and security within society, our workplaces and our computer networks. Gaolers have enforced the laws of our society to keep us safe and put criminals in containers, corporations have delineated trust between those who work for them and those who don’t, and computer networks simply place isolation where required.

Whilst firewalls have provided a hard perimeter in our networks, the demarcation of inside and outside is generally defined by configuration that simply isolates two sides. We have also built our businesses this way, where corporations inherently trust their staff on the inside. Our defences have improved as we have evolved, and more layers have been added to the security onion. We add varying measures to increase our defences, but they provide either isolation or context. As Lisa touched on, this theme has not changed for years and our IT security postures reflect it. Whilst we have token security checkpoints to verify what control you have (and sometimes who you are), most postures boil down to the simplistic ideal of inside versus outside, or more simply: internal or external.

If we can only control what is internal and what is external, how do we deal with threats from within? Our hallowed internal zone, with its limited controls, is ripe for the picking. Whether it is an information-gathering exercise, opening a backdoor, or a reconnaissance task with the perpetrator biding their time, these activities undermine internal security. It doesn’t matter how thick, efficient or tight the controls isolating outside from inside are; once they are in, they are in.

The enemy is within the castle walls, unknown to all those inside. Why do we struggle, when what is wrong is not the tools that we use but the ideology behind them? Why do we implicitly trust those on the inside when it is generally those on the inside whose loyalty can be bought? If we continue to enforce old ideals ingrained in our culture, can we enforce and secure how we want, what we want and where we want?

Before we talk about how we can enforce this ideology of understanding who is using our network, we first need a mindset shift, and a mindset shift starts with questioning the status quo. In 2014 we should not be building inside/outside-only networks. The way we enforce workloads and ensure compliance through standards and control mechanisms must change. Compliance architectures have a flow-on effect that dictates how people build networks, yet we see information leak like a sieve. In 2013 and 2014 some big-name companies were exploited from the outside, but the more prominent case, the NSA’s, came from within.

It may seem that I am promoting a paranoid state in which we trust no one. Well, in terms of IT and information security I counter that point: why should we trust anyone? In 2014 the only viable security architecture is a zero-trust model: remove the reliance on points of control at the edge of the network, where we delineate inside and outside, and focus on delivering user-based access control to workloads and environments. This approach applies security policy dynamically, with context, to deliver isolation, segmentation and correct user access without administrative overhead and complexity. No longer are workloads secured based on location or on a network-centric approach to infrastructure control. Workloads need to be secured based on who is connected, what they are connecting to and why they are connecting.

Question the norm. Challenge perception. For too long we have built networks that have limited our ability to enforce, control and deliver secure architectures. Spice up your architecture with some practical paranoia. Eliminate inherent trust and defend your castle. The threats that besieged castles from the outside affect your company network.

For practical implementation and discussion on zero-trust have a look at Brad Hedlund’s recent post.

Here is a link to my closing keynote presentation for the NSX VMUG day in Sydney which this post was the inspiration for.

CCIE Study: BGP neighbors

I have been configuring a rather large topology within my GNS3 environment of late, and it is composed of many peers. At this stage I had a plethora of peerings up. The diagram below highlights a small subset of the larger topology I am using.


I have standard loopbacks which I generally use for peering. This is a simple iBGP peering running over the top of OSPF. Attempting to work through at a pace, on one such peering I put in the following commands to form the neighbour relationship.

R1(config)#router bgp 123
R1(config-router)#neighbor remote-as 123

R3(config)#router bgp 123
R3(config-router)#neighbor remote-as 123

The neighbour addresses are the respective routers’ loopback addresses. I prefer to use loopback addressing where possible, as the BGP peering is more resilient compared to interface-level peering. After a moment the peer hadn’t come up and (being a little impatient due to my self-imposed time restrictions) I immediately moved to debug.

R1#debug ip bgp 
*Mar  1 18:11:04.919: BGP: open active, local address
*Mar  1 18:11:04.963: BGP: open failed: Connection refused by remote host, open active delayed 34009ms (35000ms max, 28% jitter)

BGP on R1 is sending its egress interface address as the local address to R3, but R3 is looking to peer with R1’s loopback, This poses a problem for BGP and the peering process, as the connection is refused.

When the neighbour command used to configure a BGP peering is entered, it invokes a certain type of behaviour. Take the command neighbor remote-as 123, for example. This instructs BGP to send a packet with a destination IP of, and the source IP address will be the egress interface address. Our packet would look something like this:

  • srcIP
  • dstIP
  • dstPort 179

While this packet reaches the neighbour, there is a rule for BGP: the router must receive the TCP connection request with a source address matching the one configured in its own neighbor statement. In our case, R3 expects the source IP to be R1's loopback but is receiving R1's interface address instead. An additional command will fix this.

R1(config)#router bgp 123
R1(config-router)#neighbor <R3 loopback> update-source Loopback0

Our packet would now look something like the following.

  • srcIP: R1's loopback address
  • dstIP: R3's loopback address
  • dstPort: 179

With the source address corrected, the debug shows the session establish:

*Mar  1 18:11:37.595: BGP: passive open to
*Mar  1 18:11:37.595: BGP: went from Active to Idle
*Mar  1 18:11:37.595: BGP: went from Idle to Connect
*Mar  1 18:11:37.603: BGP: rcv message type 1, length (excl. header) 26
*Mar  1 18:11:37.603: BGP: rcv OPEN, version 4, holdtime 180 seconds
*Mar  1 18:11:37.603: BGP: went from Connect to OpenSent
*Mar  1 18:11:37.607: BGP: sending OPEN, version 4, my as: 123, holdtime 180 seconds
*Mar  1 18:11:37.607: BGP: rcv OPEN w/ OPTION parameter len: 16
*Mar  1 18:11:37.607: BGP: rcvd OPEN w/ optional parameter type 2 (Capability) len 6
*Mar  1 18:11:37.607: BGP: OPEN has CAPABILITY code: 1, length 4
*Mar  1 18:11:37.607: BGP: OPEN has MP_EXT CAP for afi/safi: 1/1
*Mar  1 18:11:37.607: BGP: rcvd OPEN w/ optional parameter type 2 (Capability) len 2
*Mar  1 18:11:37.607: BGP: OPEN has CAPABILITY code: 128, length 0
*Mar  1 18:11:37.607: BGP: OPEN has ROUTE-REFRESH capability(old) for all address-families
*Mar  1 18:11:37.607: BGP: rcvd OPEN w/ optional parameter type 2 (Capability) len 2
*Mar  1 18:11:37.607: BGP: OPEN has CAPABILITY code: 2, length 0
*Mar  1 18:11:37.607: BGP: OPEN has ROUTE-REFRESH capability(new) for all address-families 
*Mar  1 18:11:37.607: BGP: rcvd OPEN w/ remote AS 123
*Mar  1 18:11:37.611: BGP: went from OpenSent to OpenConfirm
*Mar  1 18:11:37.611: BGP: send message type 1, length (incl. header) 45
*Mar  1 18:11:37.631: BGP: went from OpenConfirm to Established
*Mar  1 18:11:37.631: %BGP-5-ADJCHANGE: neighbor Up 

So let's break down the BGP debug. We first see the neighbour relationship move away from the Active state, which is a good sign, into Idle and then immediately to Connect. BGP waits to receive an Open message from the peer. It receives this message and capabilities are exchanged. As the last two messages show, we move from OpenConfirm into Established. The debug reports an adjacency change and the neighbour comes online.

Whilst this post stemmed from an incident of me not typing all the commands from my notepad, it serves as a good reminder. It is great to understand how the CLI interprets your commands at the packet level: what it means to append update-source lo0 to the command string and how that affects the packet. I didn't have any issue sorting out what was wrong, but it is a timely reminder, as the march onwards to the CCIE continues, that knowing how and why things operate the way they do is as important as ever.
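For completeness, here is a sketch of what the working peering looks like on both routers, with the loopback addresses shown as placeholders. Note that for loopback-to-loopback peering, update-source is needed on each end:

```
R1(config)#router bgp 123
R1(config-router)#neighbor <R3 loopback> remote-as 123
R1(config-router)#neighbor <R3 loopback> update-source Loopback0
!
R3(config)#router bgp 123
R3(config-router)#neighbor <R1 loopback> remote-as 123
R3(config-router)#neighbor <R1 loopback> update-source Loopback0
```

A quick show ip bgp summary afterwards should list the neighbour with a prefix count rather than a state name, confirming the session is Established.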

Review: Networking for VMware Administrators

Recently Chris Wahl, VCDX #104, along with Steve Pantol released Networking for VMware Administrators. This book covered off many aspects of networking in a VMware environment.


The book starts out with fundamental network topics. This is a good grounding for many VMware administrators. Traditionally (and you could even argue to this current day) there are many environments that work in silos. This has led to network teams doing networking (down to the vSwitch), server teams building the VMware infrastructure, and application teams riding on top. Now that these worlds are slowly merging, it is time for everyone to refresh the basics, and this book serves that purpose well.

My background is data centre networking. I've cut my teeth on Nexus, ASA, SRX and other DC products. Where my weakness lay was in the vSwitch, virtual networking and the design considerations around them. This book provided me with additional considerations when dealing with storage networks and VMkernel interfaces, and insights into the why. This was good because it made me think about my decisions. In a virtual environment, getting it right is important when you want to scale your environment.

Chapters 12 and 13 proved extremely valuable. Titled vSwitch Design and Distributed vSwitch Design, they give the reader real-world discussion on deploying these technologies, backed by validated real-world experience. Working with Ethernet storage was great for me too. I get compute and network virtualisation and I spend my day working with VMware NSX, but when it comes to storage I defer to others. I learnt that building storage networks isn't as scary as I made it out to be, and the design considerations were helpful.

I think this book is a great reference for new and old network and server virtualisation administrators. There are always tidbits you forget or miss, and this book rounds them off nicely. Whilst there is a fair amount of introductory material, the book is worth it for the design considerations. Good job Chris and Steve.

The book is now available!

Disclaimer: I received a copy of this book from Pearson Publishing / VMware Press. I was not asked for a review or any favourable mention. See my post here for more information.

Think ‘Cool’ for DevOps Workflow Automation

This post originally appeared on SDN Central. This is a verbatim repost.

DevOps practices rely on workflow automation to support evolving software function and programming, especially during the transition into live operations. At the heart of workflow automation lies standardization: the ability to produce something to meet demand, maintain a standard, and have the end result deliver the same amount of satisfaction through consistency.

While the culture and practices of DevOps are new, the central problem of standardization has been around for a long time. Look around at the world you live in, and you’ll notice how automated workflows and functions affect your day-to-day life. For me, the practice and benefits of standardization can be illustrated with one humble, everyday object: the ice cube.

The earliest forms of refrigeration required carving ice from a frozen lake and hauling it underground into bricked cellars. Sheets of ice varied in size, and supply was random and sporadic, as ice harvesting depended on the dynamic variable of weather and the length of winter.

As technology progressed and refrigerators and freezers became available, people gained the ability to freeze water in their own homes and make ice cubes standard in shape, consistent in size, and produced to meet demand. The original process was always manual, requiring an actual person to fill the ice cube tray with water and place it in the freezer. Now, refrigerators can be plumbed into water systems for input and automatically manage ice output.

This example is not the most glamorous, but it is a concrete way to show how a workflow with a manual process can be automated. More importantly, it shows how automation can foster consistency and repeatability, which become increasingly important as more downstream functions require the base unit to be consistent.

Many DevOps discussions apply the analogy of a production line, which requires managing raw resources and using resources wisely throughout the process. Any successful end result is usually the product of a successful system of systems that benefited from the timely, reliable delivery of all the component products.

Let’s apply this thinking to our ice analogy and extend the production metaphor to look at ice in context of a restaurant setting. Ice can be used in cocktails, water, dessert making, and many other culinary functions. Whenever many people or functions require one resource, there is a chance for a bottleneck.

What if the restaurant’s ice maker is acting erratically? When the front of the house takes an order for a table that wants a round of mojitos, some desserts, and various tapas, workflows are enacted. A common ingredient in all these workflows is ice: Will there be enough to meet demand?

As more orders pile up, the ice machine struggles to work and becomes the bottleneck, affecting orders, slowing workflows, and breaking down processes. Long delivery times to tables – analogous to long times to market in our world – become a major headache for the restaurant.

As you integrate DevOps into your own environment, think about what your “ice machines” may be. Where are your choke points? Bottlenecks disturb expectations, delivery, and consistency, and their consequences cascade to downstream workflows.

Developing effective DevOps workflow automation requires you to consider the ice machines in your organization. Make sure your ice machine is adequate for demand and protected enough by workflows to ensure consistent output. Effective planning early on makes it more likely to deliver timely, quality end products to every stakeholder downstream.

Creating NSX objects with REST API

As has been mentioned many times, it is possible to create NSX objects via a RESTful API. Let's first look at the logical network segments in my environment. Whilst I have shown GUI creation before, this post highlights the API method.

Screenshot 2014-06-03 08.48.13

There are none. This is a fresh instance. Now I am going to open up my web browser, in this case it is Mozilla, and click on my RESTClient plugin.

The REST client requires some details first: authentication so it can communicate with the NSX Manager, and a Content-Type header so it knows it is posting XML data.

Screenshot 2014-06-03 09.05.25
Screenshot 2014-06-03 08.51.04
Okay, so now I want to use the API to POST the attached XML snippet to create a virtual wire. The URL to access the virtual wires is https://<NSX Manager>/api/2.0/vdn/scopes/<scope-id>/virtualwires.

Based on my environment, this becomes POST https://nsxmgr-l-01a.corp.local/api/2.0/vdn/scopes/vdnscopes-1/virtualwires

Screenshot 2014-06-03 08.54.34

The XML required for the creation of my switch is as follows:

<virtualWireCreateSpec>
  <name>Web Tier LS</name>
  <tenantId>Pandom Systems</tenantId>
</virtualWireCreateSpec>

Click submit!

We can see confirmation of the creation of our logical switch with the following output.

 Status Code: 201 Created
    Cache-Control: no-cache
    Content-Length: 13
    Content-Type: text/html
    Date: Mon, 02 Jun 2014 23:07:25 GMT
    Location: /api/2.0/vdn/virtualwires/virtualwire-1
    Server: vShield Manager

Time to confirm this against the webGUI.

Screenshot 2014-06-03 09.08.33

There we are. It is simple to use POST or GET to deploy and retrieve environments. Depending on your ability to use an API at scale, it is possible to manage and run your entire environment through it! Pretty cool stuff.
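To move beyond a browser plugin, the same call can be scripted. Here is a minimal sketch in Python that builds the URL and XML body for the virtual wire creation call; the virtualWireCreateSpec element names (name, tenantId) are my assumption of the body shape for this API, and the hostname and scope ID are from my lab, so adjust for your environment.

```python
import xml.etree.ElementTree as ET

def build_virtualwire_request(nsx_manager, scope_id, name, tenant):
    # URL for the virtual wire creation call on the NSX Manager vdn API.
    url = "https://{0}/api/2.0/vdn/scopes/{1}/virtualwires".format(
        nsx_manager, scope_id)
    # XML body: a virtualWireCreateSpec carrying the switch name and tenant.
    spec = ET.Element("virtualWireCreateSpec")
    ET.SubElement(spec, "name").text = name
    ET.SubElement(spec, "tenantId").text = tenant
    body = ET.tostring(spec, encoding="unicode")
    # POST this body with basic auth and Content-Type: application/xml
    # (e.g. via urllib.request); a 201 Created response with a Location
    # header confirms the new virtual wire, as seen in the output above.
    return url, body

url, body = build_virtualwire_request(
    "nsxmgr-l-01a.corp.local", "vdnscopes-1", "Web Tier LS", "Pandom Systems")
print(url)
print(body)
```

From here it is a small step to loop over a list of switch names and stamp out a whole environment in one run.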

Installing ASAv into vCenter

As announced last month and quickly covered off by this post, Cisco has released the evolution of the ASA 1000v: the ASAv. There is no longer a requirement for the Nexus 1000v.

I have a variety of technology in my lab for studies. For 2 years my bread and butter was Juniper SRX and Cisco ASA firewalls. They were the mainstay of my role and I still get questions about them from old colleagues and industry friends.

The Lab

This is the lab environment that I have built. I have a firewall-only environment and another environment with a CSR embedded into it as well.

ASAv lab topology

Installing the ASAv into vCenter

Let's install the ASAv and connect it to the Web Logical Switch we set up here. My lab environment sees quite a few ASAv instances stitched together in a topology. This is great for studying the expected behaviour of physical firewall changes.

Time to deploy the ASAv OVA file downloaded from Cisco.com.

Screenshot 2014-05-01 18.59.43

Review the details from Cisco and accept the extra options, which include a Thick Provisioned disk (pre-allocated, written with zeros).

Screenshot 2014-05-01 19.00.08

Accept the terms of the EULA.

Screenshot 2014-05-01 19.01.41

Select the name and location where you want to install the ASAv.

Screenshot 2014-05-01 19.02.21

Select the cluster you want to install to.

Screenshot 2014-05-01 19.02.37

Select the datastore where the vmdk will be provisioned. Remember, thick provisioning requires the space upfront. Make sure you have the room.

Screenshot 2014-05-01 19.02.52

Here you can select the networks to which the ASAv attaches. My port-group VM-traffic is connected to the dvUplink attached to my UCS fabric interconnect – for the non-VMware people, the outside world. The vSwitch labelled vxw-dvs-204-virtualwire-8-sid-10007-DND-Transit is my Transit logical switch, connected to the uplink of my logical router – part of the three-tier application we deployed in my Installing NSX Series Part 4.

Screenshot 2014-05-01 19.09.21

Here you can set up the initial config, which is usually prompted for when first enabling a device. I am sure that as part of a vCO workflow this could be read from a central repository or similar, but I skip this for now.

Screenshot 2014-05-01 19.09.35
Hit finish. Now it is time to start the ASAv up.

Screenshot 2014-05-01 22.06.06

Let's have a look at the console. It's amusing – still faithful to the older ASAs, with a Pentium II 2400 being reproduced.

Screenshot 2014-05-01 22.08.00

Easy does it. A virtual ASA connected to a logical switch and the outside world. Apply your standard configuration and default policies and you have a functioning ASA. Much easier to deploy than its physical counterpart.

At the time of installing, it seems the only feature missing is ASA clustering. I cover ASA clustering here; it is not a bad way of scaling out the firewall function. I believe this is purely a matter of command enablement in the next version.

Moving to a vendor – Part 2

I previously posted a blog about what it was like moving to a vendor. It is quite well known now that I work for VMware, in the Nicira spin-in as part of the Network Security Business Unit. Along the way I have spoken about the company-specific perks, and they are pretty good. After a few months within the company (I am approaching my 6 month mark) I am realising something more.

I am working in a team of around 70 in the Network Security Business Unit. We are the spearhead of VMware's networking portfolio. We are a tight-knit group that works together to get the job done. No one knows everything, and people are happy to share knowledge, discuss design considerations or have an impromptu whiteboard session. The last I've done more than a few times with my local team members! (Shout out to Dmitri, Nick and Grant)

I work with a product that is leading edge. I have moved from customer land, where we were considered technology laggards although we did some pretty cool stuff. It is a massive change going from talking about standard networking and application functionality to walking into a customer and changing their mindset. Some customers get it right away and others take time. When they do, it is a great feeling that they can see what is in your head. A part of my role involves educating and teaching customers, and I take great delight in seeing it all piece together.

For me personally, I've managed to meet some people I sincerely admire. Scott Lowe and Brad Hedlund were great to meet at the VMUG at the start of the year. Martin Casado came to town and I spent a day and a half with him visiting my customers. It was pretty amazing to be wrapped up into his world and see it from his point of view. Understanding his thoughts on technology, where the industry is heading and the policy battlefield was great. Also, Martin's penchant for unique and different foods invariably leads to very interesting stories. We also wound up on the front page of one of our national newspapers!

Yes, we  did put a multi millionaire in the middle of the backseat! Squish Squish Martin!

Bruce Davie also spent some time down here the other week and it was great for me to see how he operated. I also got to thank him for helping invent MPLS, which in turn allowed me to be where I am today.

As long as you are doing your core role and meeting or exceeding expectations, there is a lot of opportunity at a large vendor. It has been noted by a variety of different teams that I blog and write a lot, and I have a presence on social media. This in turn has led to me working on some white papers and upcoming blog posts for the company. It is quite exciting to get to do something like this: a professional extension to something I enjoy. These documents challenge me to write professionally while keeping my style, flair and technical analysis.

Immersion is something I touched on previously. By working with smart people you get to see different points of view on a technical problem, come across different use cases, and have people fielding questions and solutions to problems all the time. Sometimes this is 24×7, due to being a global business unit. I have personally learnt more about virtualised networking in the last 5 months than I have about some other technologies I've worked on for years.

I’ve had a blast in the last 5 months. You’re challenged daily. I’ve learnt a lot of new things. The sales cycle is interesting. You feel part of something bigger and whilst in the end you’re just a number (the cynic in my head had to say that) the important part is you’re a powerful number. Powerful and empowered people create. It’s time to create what’s next.

Note that these have been my experiences at VMware. The people and the culture of the environment and company allow a lot of the aforementioned to happen. I do hear that this is not the case across all vendors.