Drinking and blogging

There is a term in the industry known as “drinking from the firehose”. For many, the firehose is the notion of having access to far more information than it is possible to digest. Moving to a vendor, there is a plethora of information to digest: engineering documentation, what certain products do and how they work. If there is something you want to know or learn about, you can.

In Star Wars there is the notion of The Force. The Force, leveraged by the Jedi, is balanced between light and dark, and one cannot exist without the other. Within The Wheel of Time there is a battle for time itself between Shai’tan, the Dark One, and the Creator. Both require balance and cannot exist without one another. It is possible to draw an analogy between these themes and the firehose.

I have experienced this firehose and while there are many benefits there is a danger. The knowledge is amazing and my technology knowledge has developed by leaps and bounds, but there is a darker element to it: a threat to a personal brand and messaging. Corporate messaging can sink in and permeate technology information and discussion. It is something that can be deemed a negative of working at a vendor. Messaging from leaders, officers within the Office of the CTO, or marketing drives can alter the language used in presentations, discussions and blogging.

Maintaining perspective is an important part of drinking from a firehose. It is necessary to maintain that sense of balance – knowing what is marketing and what is technology – while drinking. Marketing influences the language used in documents. Leaders adopt slogans and phrases. This is not unique to any single business and is very common across every industry worldwide, not just IT. If you have an online presence whilst working in vendor land then be a balanced person, understand what is messaging, and don’t become a vendor shill.

Some fantastic examples of people who work for a vendor and keep their life online balanced are:

Scott Lowe – Office of the CTO – VMware
Lisa Caywood – Technical Marketing Rockstar – Brocade 

Their blogs are balanced and focus on the industry or technology and not the messaging of the company which could portray them as a vendor shill.

(Re)Launching Network Inferno

It is a new year and it was about time that I changed my blog. I have gone through a major career transition from a data centre network engineer to a technical pre-sales engineer at VMware in the network security business unit. I wanted to update my readers on my thoughts about how Network Inferno’s messaging is positioned.

The NSX compendium is where my technical focus on NSX will be. I plan this to be THE repository for NSX information on shipping releases: how it works, why it works, and how NSX can be used as an administrator. Building solutions on this platform is hard without knowing how it works. This piece is purely technical and based solely upon facts.

The main focus of my blog is still to deliver content focused on networking, industry happenings and cool technology from my point of view. I hope this content stays relevant from the technical point of view of Anthony Burke the individual blogger – the same person who attended NFD4 and NFD6, wrote content from his own experiences, and kept you all as readers. I have a wide-ranging passion for Cisco, Juniper and VMware and this has shown in my blog previously. I hope I stay like this and do not become a shill.

For a long time demand on my site has steadily increased and I have made many improvements to keep up – from protecting the site with CloudFlare to upgrading my hosting platform to handle capacity. Running ciscoinferno, and now networkinferno, has been costing a little lately. With a platform that receives 25,000–35,000 page views per month there is a little cost associated with it, so I have decided to place some banner and column ads up for sponsors. I provide a platform with a varied subsection of readers and thought it might be time to recoup some costs. BuySellAds is now a way to sponsor my site. Click here to find out more.

Thank you to the readers of Network Inferno. Your continued support is great and it has been extremely rewarding to meet many of my readers at events. In fact, in the last two days I’ve met a couple of long-time Twitter friends in person. Beyond enjoying producing technical content and what I do with my blog, it is definitely awesome to know people gain benefit from it. 2014 is going to be a good year.

A shout out to Greg Ferro and Lisa Caywood! You both have been a great compass throughout the years and of more recent times.

Micro segmentation and NSX

When you speak of security architectures the term micro-segmentation isn’t new. A micro-segment – a small subset of a larger whole – has its roots in the financial industry. When micro-segmentation is brought up in terms of a security architecture, people have had a hard time deploying it. The notion of applying service and security functions within a L3 segment has proven tough.

In a traditional environment a three-tier application would require three subnets, three VLANs, three sets of configurations for SVIs on network devices, corresponding firewall rules on both the primary and failover devices, and potentially load-balancing configuration. In recent times some firewall function was delegated to virtual appliances but they still added limitations.

Until recently we have leveraged virtual appliances to deliver such functions – vShield App, vASA, vSRX and the like. In a virtual appliance form factor they have had a limited feature set, limited throughput, and have suffered from scale issues. Many VMs that use 2 vCPU and 4–8 GB RAM surely can’t scale in massive hyper-scale environments, let alone the cost of licensing, managing and providing high availability.

Let’s look at a micro-segmentation model and how it changes the way we look at security.


Here we have a single subnet and a three-tier application, supported by a management host. By leveraging technologies that focus on in-kernel distributed functions, we can enforce context and isolation at the vNIC level.

A distributed firewall (dFW), a feature of NSX, provides a centrally managed firewall that has its rule set distributed to all hosts in the cluster. The dFW enforces rule sets at the vNIC level, ensuring enforcement BEFORE communication from a VM hits a vSwitch. vNIC-level enforcement is applied in kernel, with the ability not only to apply 5-tuple matching on source and destination port and IP, but also to build rules on vCenter objects such as clusters and security groups.
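To make the 5-tuple idea concrete, here is a toy sketch in plain Python. This is not NSX code – the rule fields and first-match-wins behaviour are generic firewall assumptions, not NSX’s actual rule model:

```python
from collections import namedtuple

# A flow is identified by its 5-tuple; rules here match on four of those
# fields (source port is treated as "any"), with None acting as a wildcard.
Flow = namedtuple("Flow", "proto src_ip src_port dst_ip dst_port")
Rule = namedtuple("Rule", "proto src_ip dst_ip dst_port action")

def matches(rule, flow):
    return all(r is None or r == f for r, f in [
        (rule.proto, flow.proto),
        (rule.src_ip, flow.src_ip),
        (rule.dst_ip, flow.dst_ip),
        (rule.dst_port, flow.dst_port),
    ])

def evaluate(rules, flow, default="deny"):
    # First matching rule wins, as in most firewall rule sets
    for rule in rules:
        if matches(rule, flow):
            return rule.action
    return default

# Hypothetical three-tier rules: anyone to web on 443, web tier to db on 3306
rules = [
    Rule("tcp", None, "10.0.0.20", 443, "allow"),
    Rule("tcp", "10.0.0.20", "10.0.0.30", 3306, "allow"),
]

print(evaluate(rules, Flow("tcp", "1.2.3.4", 55000, "10.0.0.20", 443)))   # allow
print(evaluate(rules, Flow("tcp", "1.2.3.4", 55000, "10.0.0.30", 3306)))  # deny
```

The difference with the dFW is where this evaluation happens: at every vNIC, in kernel, rather than at a choke point in the network.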

So why deploy or consider a micro segmentation model? This list is not comprehensive but some outstanding benefits are:

  • Reduced attack vector leveraging hypervisor context and promoting isolation
  • Simplified deployment model without n tiers of configuration required
  • Logical security provides strong boundaries
  • Promotes strong screening of east-west traffic flows
  • Virtual networking eliminates hair-pinning of traffic flows
  • Prime for automated network blueprints and deployments

Micro-segments are not new, but the way we can approach and deliver them now is new. East-west traffic filtering allows for hardening within the data center whilst providing enforcement in areas that were technically tricky or operationally infeasible.

A pen may have been harmed in the writing of this post. I have attempted to spell segment with an extra e far too many times – segement.


Context vs Isolation

Security is an industry that can excite and frustrate, extract tears from the unsuspecting, and cause insurmountable problems when protecting many disparate systems. For a long time security was an afterthought, something that was bolted on. When more consideration was given, the result would generally be a kludge with many undesired effects.

There are many places where enforcement can occur in a network. You can place security controls and enforcement at the application or inside the virtual machine. This provides you with fantastic context: information about the files, devices, application, memory and pages, what processes are running, and more. The trade-off is that there is no isolation – you present a whole attack surface, irrespective of the security controls on the application or virtual machine.

Isolation, on the other hand, is generally achieved in the network: different networks such as overlays, VRFs or VLANs, with security applied through ACLs or policies that permit and deny based on different rule sets. This works well, but what you miss out on is rich information about the endpoint. If a device is compromised, the ACL still does its job of providing isolation, but you have no idea that the compromise is occurring.

There is a common environment in the DC that provides the sweet spot for maximum context and isolation and that is the hypervisor. The hypervisor is a ubiquitous layer in the data centre. With access to rich context of the guests residing on it and the balance of isolation techniques available to it through network function virtualization, the hypervisor can deliver this new era of security. This context and isolation allows the ability to enforce security on east-west work loads and harden the DC through scalable architectures to supplement the traditional north south workloads.

We have for a long time enforced on the edge. What if there is a breach on a web DMZ VM? How do we stop east-west privilege escalation? There are many environments that try to limit the attack vector of a compromised machine, but once you are in you can generally see ARP tables, the ToR switch, adjacent machines and much more.

There are many times where we have made the DC a crunchy candy: a hard shell with a soft and gooey centre. We spend 80% of the security budget enforcing the edge with firewalling, IDS/IPS, WAF and more, yet 80% of the workload is intra-data-centre. It is a crazy paradigm which I do hope shifts, with architectures such as least privilege, zero trust and micro-segmentation being leveraged. It is time we start hardening our gooey centre. It is time the DC ate some concrete and hardened up, and in an upcoming post I look to show you how you can harden your east-west communication today.


I am lucky in my role at VMware that I work with some very smart people. I have spent time this week talking and visiting my customers with Martin Casado, CTO of Networking at VMware. Having spent a bit of time talking to him about what he is working on gets me excited about security and networking. He is a very smart man and much smarter than I so I suggest you go check out the Interop keynote where Pat Gelsinger and Martin talk about the hypervisor being the new network security platform. 


CCIE study: OSPF Sham Link

Working through OSPF and MPLS recently I came across an interesting concept. I built out a simple MPLS cloud and had one customer joining two sites across it. The goal was to connect them over MPLS and leverage OSPF for the PE to CE connection. This was quite easy to do and very simple to manage. Only after adding a private link between the branch and HQ sites did I realise that OSPF no longer leveraged the MPLS cloud but used the private link.

This is the topology currently. We have two sites connecting to each other via an MPLS cloud.


Here is the routing table as it stands. I have connectivity from the loopbacks on C1 to the loopbacks on C2.

C1(config-if)#do ping

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:

A trace route verifies the path taken.


Type escape sequence to abort.
Tracing the route to

  1 12 msec 20 msec 24 msec
  2 [MPLS: Labels 18/20 Exp 0] 72 msec 88 msec 72 msec
  3 [MPLS: Label 20 Exp 0] 84 msec 56 msec 76 msec
  4 68 msec 84 msec 80 msec

The routing table as it currently stands shows OSPF advertising loopbacks from the Customer sites via the MPLS cloud.

C2(config)#do sh ip route 
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area 
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

O IA [110/11] via, 01:18:57, Ethernet0/0
C is directly connected, Ethernet1/0 is subnetted, 1 subnets
O IA [110/21] via, 01:18:57, Ethernet0/0 is subnetted, 1 subnets
C is directly connected, Loopback0
C is directly connected, Ethernet0/0

Note that the customer routers receive information from Ethernet0/0 – the uplink towards the ISP for the customer device.

C1(config-if)#do sh ip ospf data

            OSPF Router with ID ( (Process ID 2)

		Router Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum Link count    811         0x80000005 0x00EA53 1    882         0x80000008 0x00DF34 2

		Net Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum    882         0x80000003 0x00C686

		Summary Net Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum    811         0x80000003 0x001112    811         0x80000003 0x005F55

The OSPF database shows that routes from the other customer site are inter-area routes. The type 3 inter-area LSAs represent the remote network and the uplink to the ISP. These are being advertised through the MPLS cloud and redistributed from the MP-BGP SKY address family into OSPF vrf SKY.

Now what happens when we enable the backdoor connection between customer sites? The traffic takes the backdoor link and not the MPLS cloud.


We can confirm that the backdoor link is routing all traffic by checking the OSPF route table.

C1(config-if)#do sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area 
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

C is directly connected, Ethernet0/0
C is directly connected, Ethernet1/0 is subnetted, 1 subnets
C is directly connected, Loopback0 is subnetted, 1 subnets
O [110/11] via, 00:00:01, Ethernet1/0
O [110/20] via, 00:00:01, Ethernet1/0

We can definitely see now that the routes previously advertised via the MPLS cloud are being preferred via the backdoor link. Even if an administrator adjusts the interface-level OSPF cost, this would not affect the route selection.

C1(config-if)#do sh ip ospf data

            OSPF Router with ID ( (Process ID 2)

		Router Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum Link count         19          0x80000009 0x00CB95 3    978         0x80000005 0x00EA53 1    18          0x8000000A 0x000AB8 3    1040        0x80000005 0x00459D 1

		Net Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum    1049        0x80000003 0x00C686         1048        0x80000003 0x006C82         19          0x80000001 0x00FDFC

Notice there are no longer any inter-area routes / type 3 summary LSAs? The remote networks now appear in type 1 router LSAs. As expected, a trace route results in:


Type escape sequence to abort.
Tracing the route to

  1 28 msec 24 msec 20 msec

An MPLS link is not preferred in OSPF when there is a backdoor because intra-area routes are preferred over external routes. Routes advertised across an MPLS VPN are imported into and exported from BGP, carrying the route information with them. Upon redistribution out of BGP into OSPF, the routes retain their external route marking. They are therefore marked as external routes – type 5 external LSAs – and are no longer preferred by OSPF. The backdoor link becomes favoured and is subsequently used.
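The selection logic above can be sketched in a few lines of Python. This is a toy model – the preference order (intra-area, then inter-area, then external) reflects OSPF route selection, while the costs are made-up illustrations:

```python
# OSPF prefers routes by type before comparing cost:
# intra-area beats inter-area, which beats external, regardless of metric.
ROUTE_TYPE_RANK = {"intra-area": 0, "inter-area": 1, "external": 2}

def best_route(candidates):
    # candidates: (route_type, cost) pairs; type trumps cost
    return min(candidates, key=lambda c: (ROUTE_TYPE_RANK[c[0]], c[1]))

# Backdoor link: intra-area, cost 20. MPLS path redistributed from BGP:
# external, cost 1. The backdoor wins despite its higher cost.
print(best_route([("intra-area", 20), ("external", 1)]))
```

This is also why adjusting the interface-level cost on the MPLS-facing link cannot help: cost is only compared between routes of the same type.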


An OSPF sham link can solve this problem. The sham link provides a logical link between two VRFs, making the MPLS PEs participating in it appear connected by a point-to-point link within OSPF. This link fools routers in the OSPF domain into seeing a better path, preserving the LSAs as type 1 or type 3.

By using two loopbacks on the respective devices, advertised into the BGP address family that corresponds with the customer VRF, OSPF can create a link that is more appealing. The command area <area-id> sham-link <source-address> <destination-address> cost <cost> builds this link.

area 0 sham-link cost 10
area 0 sham-link cost 10

When an OSPF sham link is set up it builds a bridge between two VRFs. By advertising a type 1 (router) LSA across this link, the OSPF database accepts this route, and the routes advertised across it, as intra-area rather than external routes.

Let’s verify what we are seeing.

The routing table indicates that we are learning the other site’s routes via the MPLS cloud.

C1#sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area 
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

C is directly connected, Ethernet0/0 is subnetted, 1 subnets
O E2 [110/1] via, 00:01:08, Ethernet0/0
C is directly connected, Ethernet1/0 is subnetted, 1 subnets
C is directly connected, Loopback0 is subnetted, 1 subnets
O [110/31] via, 00:01:08, Ethernet0/0 is subnetted, 1 subnets
O E2 [110/1] via, 00:01:09, Ethernet0/0
O [110/30] via, 00:01:09, Ethernet0/0

The trace route shows the path we are expecting to see and no hairpin routing is occurring.


Type escape sequence to abort.
Tracing the route to

  1 24 msec 20 msec 24 msec
  2 [MPLS: Labels 18/20 Exp 0] 84 msec 84 msec 72 msec
  3 [MPLS: Label 20 Exp 0] 76 msec 68 msec 64 msec
  4 76 msec 88 msec 72 msec

So let’s now take a look at the sham-link-adjusted OSPF database.

C1#sh ip ospf data

            OSPF Router with ID ( (Process ID 2)

		Router Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum Link count         76          0x8000000C 0x00986B 3    89          0x80000006 0x006622 2    79          0x8000000E 0x00D48F 3    90          0x80000006 0x001B28 2

		Net Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum    1543        0x80000003 0x00C686         1543        0x80000003 0x006C82         368         0x80000003 0x00F9FE

		Type-5 AS External Link States

Link ID         ADV Router      Age         Seq#       Checksum Tag    233         0x80000001 0x000736 3489660929    227         0x80000001 0x0067BE 3489660929    152         0x80000001 0x000FD5 3489660929    170         0x80000001 0x006F5E 3489660929

Notice that the sham links have been advertised as type 5 external LSAs. The networks from the other customer site are passed over the OSPF sham link as type 1 router LSAs.

When setting up a sham link it is important to set a lower cost than the backdoor link. This allows traffic to pass through the MPLS cloud as the preferred path, and upon failure the backdoor link can be used to maintain connectivity. If you have a backdoor link, remember to keep the sham-link cost below it.

So a sham link does more than create a link between loopbacks. It alters and adjusts the behaviour of OSPF in the internal OSPF database. It is amazing how much there is to learn.

Additional reading : RFC 4577

SPARK: VMware PEX and I am presenting four sessions!

The VMware Partner Exchange conference is on in Sydney, Australia this week. It is an event that provides updates to local partners about what is happening within the VMware ecosystem. There are some pretty cool updates surrounding the technologies in the Software Defined Data Centre – vSAN, NSX and more.

Yours truly is presenting four sessions this Tuesday and Wednesday. On Wednesday I am co-presenting all day with my solution architect Nick Bradford. We will alternate sessions and work together to deliver updates.

Tuesday 1615
Technical Update for Service Providers

This presentation goes into detail about how the evolution seen in the Software Defined Data Centre is a great platform for delivering NFV. This allows the integration of legacy function into a service chain. I speak about partner integration into NSX and some use cases.

Wednesday 1400-1700
NSX for vSphere technical introduction

This presentation seeks to introduce NSX and its role in the Software Defined Data Center. Going through core components, such as Logical Switching, Routing, distributed firewall and more, tying it all together and looking at high level use cases, viewers will leave the presentation knowing how VMware NSX for vSphere will work in their environment.

NSX for vSphere demonstration and NSX with vCAC

This presentation, co-presented with Nick Bradford, will demonstrate the building of a three-tier application – routing, switching and distributed firewalls – to show off the power of NSX. The application template can then be easily ported into a CMP such as vCAC, allowing repeatable deployments of three-tier apps, slashing the networking overhead that traditionally inhibited deployments and allowing network functions to deploy at the same velocity as x86.

NSX for vSphere Use Cases and Design considerations

This presentation follows on from the previous one, focusing on design considerations. It looks at vSphere design, physical underlay considerations and NSX design considerations.

I will upload my slides after presenting.

REVIEW: CCIEv5 Configuration Practice Labs

CCIEv5 Configuration Practice Lab

Having a Safari Books Online subscription does have its benefits. Whilst looking for a reference book during the week I noticed that the CCIEv5 Configuration Practice Labs appeared. I felt it was very soon for such a book to come out so I downloaded it and began to find out more.

It is common knowledge to aspiring CCIE candidates that version five of the expert Cisco certification has changed the topics it focuses on. While Frame Relay has seen the end of its life, the alignment to current technology is apparent with the inclusion of DMVPN and more IPv6. This provides a new raft of topologies and lab scenarios to study and get your teeth into.

There are three labs provided inside this book. They each cover a subset of technologies in a scenario format. What I liked about the format is that all the questions are covered initially through the lab overview. Topologies – both physical and logical – are provided, showing you where you currently stand.

I felt that on paper these labs were going to be a little easier than those from other vendors like IPexpert and INE. It didn’t seem to have the same depth, but soon into the lab I realised where the depth lay. The lab requirements and setups were very real-world. It didn’t say “configure x interface with y”. The questions asked “a requirement of this network segment is to eliminate the need for x without using y”. This ensures the candidate knows that there are many ways to achieve an outcome and puts limitations and constraints on what the correct solution could be.

A section I thought was good: before revealing the answer, the author has placed a set of “proctor questions” which guide the reader to think about how to ask the proctor a question. It is well known that simply asking “do I configure this?” is met with blank silence, while a question that shows your thought process, elimination and workings yields a guiding response. With that in mind, when using a technology it is worth sparing a thought for how you would ask a proctor for clarification, and this section is great for that.

The lab diagrams are clear and there are minimal mistakes. The book is a good start in the deluge of new content from all the providers for the new exam. Whilst the meat of this book – the labs and the answers – is polished and well done, there is a lot of supplementary information that could have been included. Unless I am reading a Rough Cuts edition, there is only a paragraph or two on lab preparation and study techniques. This could have been fleshed out. Maybe it will be.

This is a decent first publication for the CCIEv5 materials by Cisco Press. When studying for certification there can never be enough material, and practice labs can help get you in the zone. Whilst INE/IPexpert/Narbik have a more detailed and in-depth approach, this is definitely a good supplement to your lab. Definitely a 3.5/5 and one to add to the bookshelf.

[SPARK] VMware certification expiry – so what?

VMware certification announced that they are now putting timers onto their certifications and encouraging the recertification of titles. This has caused an uproar amongst the VMware and virtualisation community. Previously, VMware certified professionals have been able to certify against an exam and keep the title past the date of the product going EOL. There are many VCP3s around who are not VCP5s. They still hold VCP status.

What do I think of this? So what. Welcome to the dynamic nature of IT, an industry which evolves and continuously changes. Certifications have always been a great framework for me to study against and a good foundation for supplemental learning. I’ve always experienced the expiration of certifications, having first been certified with Cisco technologies through my CCNA. This has a three-year expiration, in which a candidate is expected to re-sit the same level or a higher exam to retain the status. Cisco does this across their pillars – voice, security, R&S, DC, SP and wireless. Juniper has the same expiration policy on a two-year cycle but chooses to keep recertification status within its vertical (Security only refreshes security, not other verticals).

IT changes. IT evolves. Why walk around with a dated badge on your chest? What VMware is doing is increasing the validity and quality of their certified engineers by ensuring certified professionals are of a current and maintained standard. They are aligning to the industry and what all other vendors do. This is a great step forward and I welcome it. If you don’t like it, maybe you’re one of those people who can’t handle change. Maybe it’s time to dust off the books and get onto the journey of continuous education.

Using an API

APIs and programmatically delivered network functions are the impetus to change from older networking thought processes to modern and future architectures. For some time now many networking vendors have been providing API functions: NETCONF via Python allows Juniper configurations; NSX exposes a RESTful XML API which can be wrapped in Python. All of this enables deployment via API. Dig behind cloud management platforms such as vCloud Automation Center or OpenStack and you will find the orchestration component consumes many lower-tier, or southbound, APIs.

Whilst this isn’t new for many people, the idea of chaining multiple commands together in a Python construct could be. This snippet is taken from a template made by a co-worker from the NSBU – Thomas Kraus. It is one subset instruction from a script that deploys three logical switches, connects the segments to a logical distributed router, deploys an NSX Edge, and configures OSPF between the uplink interface of the LDR and the NSX Edge.

Below is the creation of a transport zone between clusters within a vCenter administrative domain. Remember that the current relationship between NSX Manager and vCenter is 1:1. The code is the following:

def create_tz(tz_name):
    # nsx_ip and headers are defined as variables earlier in the full script
    url = 'https://' + nsx_ip + '/api/2.0/vdn/scopes'
    # The XML body lost its tags in the original listing; it is reconstructed
    # here as an assumption -- verify the schema against the NSX API guide
    xml_string = ('<vdnScope><name>' + tz_name + '</name><clusters>'
                  + ''.join('<cluster><cluster><name>%s</name></cluster></cluster>' % c
                            for c in ['Compute-1', 'Compute-2', 'Compute-3', 'MGMT-EDGE-1'])
                  + '</clusters></vdnScope>')
    req = urllib2.Request(url=url, data=xml_string, headers=headers)
    tz_id = urllib2.urlopen(req).read()  # the API returns the new transport zone ID
    return tz_id

The code – remembering this is a subset of a larger script – does the following:


  • Points to the URL https://nsx_ip/api/2.0/vdn/scopes – note that nsx_ip is defined as a variable at the top of the script.
  • POSTs the xml_string, which names the transport zone and references the compute clusters, instructing NSX Manager to create it.
  • Returns the new transport zone ID.


Here is just a small sample of what you can do with rather static information. I can do in moments what could take a while to configure by hand or via the GUI, in a way that could draw on dynamic information and potentially touch many domains. You could have it draw from a list of NSX Managers to prepare a cluster for NSX. Imagine having only to dump NSX Manager IPs and have a script go off and do a repetitive task for you across many domains. This is the value of APIs. Wait until I show you how we can configure OSPF and routing! I can’t wait to share.
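As a sketch of that idea, the fan-out across many managers is just a simple loop. Everything here is hypothetical – the manager IPs are examples and create_tz_on() is a stub standing in for a real API call like the one above:

```python
# Hypothetical list of NSX Manager IPs to run the same task against
nsx_managers = ["192.168.10.5", "192.168.20.5", "192.168.30.5"]

def create_tz_on(nsx_ip, tz_name):
    # Stub: in real use this would POST to
    # https://<nsx_ip>/api/2.0/vdn/scopes as in the snippet above
    return "%s:%s" % (nsx_ip, tz_name)

# One repetitive task, executed across every domain in the list
results = [create_tz_on(ip, "Transport-Zone-1") for ip in nsx_managers]
print(results)
```

Swap the stub for the real API call and the same loop prepares every domain in the list.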

Disclaimer – I apologise if my Python terminology is wrong. I also don’t apologise for being an absolute hack or writing ugly code! :)

Moving to a vendor – Part 1

As many are aware I moved to a vendor recently. That vendor was VMware. I look to give some insights into moving to a vendor. I think some updates throughout the year as I settle in will be appropriate.

My role is technical pre-sales, working in the Network Security Business Unit. This BU has made quite a splash for many reasons. What I can say is that there is no shortage of smart people or of information to consume. I am empowered to consume as much information as I want. If I feel the requirement is there to read the engineering documentation of how the Logical Distributed Router updates LIF information into the hypervisor kernel, I can. If I need to sync up with the sales directors and feed information back to the marketing team, they are only a WebEx away. I am still on my path to finish off my CCIE R&S before I look to expand my skill set elsewhere, and my colleagues and management support me. I share this knowledge via internal training, videos or bulletins to local, regional and global co-workers. This in turn is reciprocated. You are actively encouraged to be the best you can be.

In a customer facing role I am expected to present. I am expected to be agile and have to respond quickly to a changing environment. Having a flexible and dynamic workforce, workflow and the tools to do this is critical. VMware allows a choice of platform, a seed investment in software and all the remote collaboration and communication tools to do your work from anywhere. I can be on a plane, train, airport terminal or at home and perform the same calibre of work that I do in a traditional office. This is a massive paradigm shift from the traditional government roles I was in before. With everything provided there is nothing getting in my way of optimising my workflow for the better and being the most efficient worker I can be. It is actually quite gratifying.

With a customer-facing role comes a different relationship: I am the vendor. I am the person a company comes to for help, understanding, education, and insight into industry trends, technology or business problems. I’ve been on the other side, and sometimes we feel funny about the vendor. We feel they have a goal just to upsell or push something you don’t need. Whilst I always believed in individual integrity, it is sometimes lost when a person walks into vendor land. Although I vowed to maintain my integrity, it was great to be welcomed into an environment that I know will never compromise it. It is great to know that I don’t need to sell a solution that will become shelfware. Seeing people deliver on what they say, do what they say and follow through on their actions was great.

If you have considered moving to a vendor, know it isn’t the dark side. There is SO much that happens in vendor land that never makes it public. The people and the culture are great. There is a real passion for being there and being the best, because the people who come to vendor land aren’t the traditional 9-to-5 IT crowd. I am finding more and more career IT people who are excited to share and who are so passionate about sharing our technology.

I think there will be a time to talk more about the differences, and maybe some of the perks on offer when moving to a vendor (shares, health, gym) which weigh into it all, but that could be for another post. For now I am 35,000 ft above the ground heading home to my pregnant wife, with PowerPoint open, building customer presentations, roadmaps, and technical updates. I feel like I am home. I am a network guy living in the software-defined world. Maybe a vendor is like a Tardis or Alice’s rabbit hole: bigger on the inside! With that, I leave you. Enjoy the weekend.

RESPONSE: The changing value of the CCIE

DISCLAIMER: I am working towards my CCIE and this isn’t an emotional or knee-jerk reaction. For me the CCIE is only a stepping stone for the further expansion of my knowledge.

I woke up this morning to an interesting article written by my friend Colin McNamara. Colin is an all-around cloud evangelist promoting the leading and bleeding edge in cloud, automation and cultural change within IT. It isn’t the first time I have heard grumblings of the CCIE becoming less valued or out of touch with the industry, but I wanted to respond to Colin’s post.

The changing value of the CCIE by Colin McNamara

I agree with many points in his article, and as I write this I have multiple Python tabs open, I am looking at the VMware NSX API, and I have a few documents open on OpenStack integration. My role is slowly changing and I am right in the middle of seeing this. VMware NSX gives administrators of virtual environments the ability to deliver network constructs programmatically, changing how we consume network functions. This can be done through a GUI (which itself uses the API), the RESTful API directly, or an orchestration platform such as vCAC or OpenStack. My role consists of understanding how these are consumed, the network’s place in the modern data center, and how businesses benefit from it.

At some point, while monitoring for changes in their ability to earn money, these individuals saw the market, tech, whatever hit a tipping point.

Once at that point, their focus switched from monitoring the rate of change in their current skill set, to prepping to flip to the new cheese, the new cert, the new skill that will provide for them and their families for the next couple years.

The industry change is visible. For once we can see our industry changing right beneath us. The currents of change are pushing careers down different paths. Seeing this happening – or, to be more accurate, choosing to see this happening – is something that will keep you ahead of the curve. The CCIE provides a great base standard of protocol knowledge mixed with current and useful networking practices (I speak of the v5 blueprint!).

SDN, NFV, cloud and DevOps all change the way we consume traditional functions. We approach problem spaces in a new way with a different mindset. In this new world we have not rewritten TCP/IP; we still consume IPv4 and IPv6 and deploy x86 workloads in software containers. What is changing is that big iron is no longer the only way to do this, nor is consoling into physical firewalls. No longer do we touch every box across every site manually. Change is programmatic and deployed across all devices at once. This gives us new scale and a new approach. Taking this to the next level are solutions like the Distributed Firewall within NSX: centrally managed, and enforced via an in-kernel distributed firewall, it uses firewall technology in a different way.
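The shift from box-by-box changes to centrally defined policy can be sketched in a few lines of Python. Everything here is illustrative: the host names and the `apply_rule()` helper are assumptions for the sake of the sketch, not a real controller API. The point is one rule definition distributed to every enforcement point, instead of per-box CLI edits.

```python
# Illustrative only: hostnames and apply_rule() are hypothetical, not
# any real NSX call. One centrally defined rule reaches every device.

rule = {
    "name": "web-to-app",
    "src": "web-tier",
    "dst": "app-tier",
    "service": "tcp/8443",
    "action": "allow",
}

def apply_rule(rule, devices):
    """Return a map of device -> the rule it received (identical everywhere).

    In a real platform this loop would be a single API call to the
    controller, which distributes the rule into each hypervisor kernel.
    """
    return {dev: dict(rule) for dev in devices}

# One definition, pushed to every hypervisor host at once.
applied = apply_rule(rule, ["esx-01", "esx-02", "esx-03"])
```

The contrast with the old model is the whole story: the rule is authored once, and scale comes from distribution rather than from an administrator touching each box.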

Once you learn to enjoy the act of change, you will find that your perspective during tech transitions changes from fear, to wonder. That this world and industry is an amazing place.

I agree with Colin here. Embracing change is an absolute blast. Change doesn’t mean old skills are instantly, or at all, deprecated. A lot of the time consumption methods change, or we look at a problem differently, but at the core we are looking to achieve the same end result.

I fully believe that the progression of technology will affect the hiring pool. I believe that the value that the CCIE currently holds will be replaced by the concept of “Network Developer”

I think the word replaced here is a bit dramatic and I don’t agree entirely. I agree there is a progression in technology and that the hiring pool is seeking skills additional to the traditional acronym checklist: BGP, MPLS, CCIE, CCNP, JNCIE. We are adding DevOps, continuous integration, Puppet, Chef and the like. We are adding Python and other languages to this. We might see ACI or NSX or Contrail pop up here in time, but I feel supplementing our core knowledge shouldn’t be solely about a product. It needs to be about a movement, a technology, a trend, and the consumption of it.

The CCIE still gives a very robust knowledge platform in the networking space. What is important to remember is that if you do not evolve from there, complement your core knowledge, and diversify with a programming language or another technology, you may go stale. If you become complacent with the notion that the CCIE is the end-goal then you’re sorely mistaken. Someone who doesn’t develop beyond their CCIE will be looking for work at Taco Bell in time. Choosing not to evolve is what will break you, not the advent of new technologies.

Colin draws on some childhood experience of changing through work and jobs and adapting to survive. Coming from little, and appreciating being able to provide for his family, has driven him never to return to that place. Change has become a means to be the best, fuelled by knowing first-hand what awaits those who don’t. This context adds reality to the situation.

I agree with change and the majority of what Colin is saying. Look where I am now. I moved from enterprise into VMware’s networking division working with disruptive network technologies. I want to boost my current depth of practical protocol knowledge too whilst learning API consumption. I too refuse to be the Lilliputian and want to always hunt for my cheese. I do owe Colin a beer or two and a beard rubbing next time I see him.

SPARK: HTTP Status codes

I have been working with the RESTful API within NSX for vSphere a bit lately. I have been standing up a few environments with a Python script that makes sequential API requests to the NSX Manager. This has been pretty damned fun. What I have learnt from some of the members of my team (Dmitri Kalintsev and Thomas Kraus) is that knowing what type of operation you are performing tells you what status code to expect. Once you establish the expected result of a CONFIGURE or CREATE you can make the Python script react accordingly.

For my reference and for the upcoming posts I am going to cover some status codes.

201: The request has been fulfilled and resulted in a new resource being created. The newly created resource can be referenced by the URI(s) returned in the entity of the response, with the most specific URI for the resource given by a Location header field. The response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. The origin server MUST create the resource before returning the 201 status code. If the action cannot be carried out immediately, the server SHOULD respond with 202 (Accepted) response instead.

204: The server has fulfilled the request but does not need to return an entity-body, and might want to return updated meta information. The response MAY include new or updated meta information in the form of entity-headers, which if present SHOULD be associated with the requested variant. If the client is a user agent, it SHOULD NOT change its document view from that which caused the request to be sent. This response is primarily intended to allow input for actions to take place without causing a change to the user agent’s active document view, although any new or updated meta information SHOULD be applied to the document currently in the user agent’s active view. The 204 response MUST NOT include a message-body, and thus is always terminated by the first empty line after the header fields.
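Putting those codes to use, here is a minimal sketch of what "making the script smart" can look like, using only the Python standard library. The verb-to-code mapping follows the descriptions above; the URL, payloads and any NSX paths are placeholders, not real endpoints.

```python
import urllib.request

# Expected "success" codes per HTTP verb, per the status-code notes above.
EXPECTED = {
    "POST": 201,    # CREATE: a new resource was made
    "PUT": 200,     # CONFIGURE/update an existing resource
    "DELETE": 204,  # removal fulfilled, no entity-body returned
    "GET": 200,     # read
}

def expected_code(method):
    """Return the status code a well-behaved API should answer with."""
    return EXPECTED[method.upper()]

def call_api(method, url, body=None, headers=None):
    """Issue one request and fail loudly on an unexpected status code.

    The NSX Manager URL and paths would go in `url`; placeholders here.
    """
    req = urllib.request.Request(url, data=body, method=method.upper(),
                                 headers=headers or {})
    with urllib.request.urlopen(req) as resp:
        if resp.status != expected_code(method):
            raise RuntimeError(f"{method} {url}: got {resp.status}, "
                               f"wanted {expected_code(method)}")
        return resp.read()
```

For sequential environment builds, checking each response against the verb's expected code means the script stops at the first failed step instead of blindly issuing the next request against a half-built environment.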

Preparations for War

I have previously written about the CCIE v5 and my attempts at achieving the v4. I am determined to finish and close this chapter of my certification journey before I move onto other things. Whilst the VCDX would seem logical for many working at VMware on the NSX product, I am looking to finish something I started. NFV has come and is changing the industry. It is flipping it on its head, but there is one constant: IP.

Products like NSX change the architecture game but we do not reinvent the wheel. We still use firewalls based on 5-tuple matching, we route with well-known and defined protocols, and we implement traditional hardware-based services in software. Only the thinking has changed, and that is covered off here. We are changing the architecture and redefining networking, not reinventing the wheel.

So the CCIE for me is still very relevant. With that said, whilst we wait for the transition to v5, the blueprints have a lot of overlap. OSPF is still OSPF. BGP-4 doesn’t differ. There are new technologies such as DMVPN, and Frame Relay has been removed. There are some great primers from the leading training vendors coming out covering the new topics. Whilst I sit at the car dealership waiting for my car to be serviced, I am getting these videos under my belt.

Logical boy in a logical world

There has always been the idea of thinking in a logical headspace. Since the inception of Virtual Local Area Networks, the wizards of the ether, Network Administrators, have had a notion of a logical and a physical representation of the network. A layer 2 domain could span multiple switches and have a routed gateway. This may be partitioned by a transparent firewall or another appliance. This thought process has evolved with Virtual Routing and Forwarding (VRF), which can be seen used in conjunction with MPLS to abstract the data plane from the control plane.

Traditional application stacks that reside in data centres generally consist of three tiers: a web front end, an application engine, and a database tier. These tiers are generally separated into zones by security policies. These can be enforced by logical constructs such as ACLs, rule sets, NAT and VLANs, or by physical barriers such as firewalls (generally multiple!) or outright physical isolation.
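Those tiered zones are easy to picture as data. A minimal sketch follows, with the tier names and port numbers as illustrative assumptions rather than any product's rule format: each tier may only talk to the next one down, and everything else is denied by default.

```python
# Illustrative three-tier policy as data; tiers and ports are assumptions,
# not taken from any specific firewall product.
RULES = [
    {"src": "any", "dst": "web", "port": 443,  "action": "allow"},  # clients in
    {"src": "web", "dst": "app", "port": 8080, "action": "allow"},  # web -> app
    {"src": "app", "dst": "db",  "port": 3306, "action": "allow"},  # app -> db
]

def permitted(src, dst, port, rules=RULES):
    """Default-deny: traffic passes only when a rule explicitly allows it."""
    for r in rules:
        if r["src"] in (src, "any") and r["dst"] == dst and r["port"] == port:
            return r["action"] == "allow"
    return False
```

So `permitted("web", "app", 8080)` passes, while the web tier reaching straight into the database does not: exactly the zoning that firewalls, ACLs and VLANs enforce physically.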

In the application stacks that deliver our business-critical applications in data centres today, there has been a shift from a handful of devices delivering a service to thousands upon thousands at each tier. This generates a vast amount of east-west traffic, generally carried over old three-tier network designs. Unfortunately, as applications have evolved, our network architectures have remained stagnant and rather physical.

With the strong presence of virtualization within modern data centres, we have brought capabilities traditionally found in hardware into the hypervisor. Once in the hypervisor, network functionality has the same feature set as any physical counterpart. With services such as switching, routing, firewalling, load balancing and VPN functions, to name a few, now residing in software constructs, administrators need to start thinking about how this all maps together. The logical landscape is vastly different to the physical landscape, and getting the mindset into gear requires some thinking.

So I put it to you my network colleagues – think about drawing out your logical networks. Consider an entire network that is virtualized that leverages the physical network as IP transport. Picture a reduction in inane traffic hairpins and sub-optimal packet walks. Program yourself to think logically and you will have taken your first step into a larger world. A logical boy in a logical world!

Nuage Networks at #NFD6

Nu-âhj: French, meaning ‘cloud’. Nuage Networks, a division of the venerable Alcatel-Lucent, presented to the delegates on their Virtualized Services Platform (VSP). The goal of Nuage is to deliver the instantiation of networks on demand, when a workload requires it. The key focus of this presentation was rapid provisioning and providing an alternative for ISPs or large enterprises that does not rely on lock-in to Amazon Web Services (AWS).

There was an introduction by Sunil Khandekar on the vision of Nuage Networks, which set the theme for the presentation: a highly detailed, rocket-powered deep dive into the inner workings of the Nuage Networks SDN solution. Dimitri Stiliadis, Chief Architect of Nuage Networks, delivered the bulk of the presentation and stood his ground against the peppering of questions from the delegates.

The ability to reduce the time to market for network services is what provides a point of differentiation. Even though this is not seen by a customer initially, whilst your service or cloud attempts to scale, the complexities inherent in older technologies will directly affect your time to market. What should take minutes or hours takes weeks. Do you want application developers configuring routers, routing protocols (OSPF and BGP), connectivity, ACLs and security rules, load balancers and firewalls? Definitely not, and even for a well-trained network engineer this can take days if not weeks.

Nuage Networks made an announcement during our presentation: the 7850 Top of Rack (ToR) switch. Today’s CIOs and cloud service providers are looking to cloud architectures to deploy applications rapidly while maintaining security and compliance. Technologies like SDN help them make their datacenter network instantaneous and boundary-less. The Nuage Networks Virtualized Services Platform (VSP) fully automates the datacenter network, creating secure “slices” for each user group or tenant while reducing provisioning time from days to seconds.

Designed to stitch virtual, hypervisor-based fabrics back to physical appliances, this device has Virtual Extensible Local Area Network (VXLAN) enabled hardware delivering VXLAN Tunnel End Point (VTEP) services. It runs the proven SR-OS networking software, built on the stable Linux 3.2 kernel. It is remarkably dense for a 2-rack-unit switch, with 96x10GbE ports; so dense, in fact, that the faceplate forgoes the Nuage logo.

Nuage Networks received great feedback from the delegates. I have to admit I didn’t quite take in and digest everything presented to me initially, because it was so spot on. I was witnessing what could be a great SDN solution for ISPs! Whilst there are many large vendors with solutions in this space, that hasn’t stopped Nuage delivering a great one.

Disclaimer: I attended Networking Field Day 6 as a delegate, but was not compensated to attend. I am free to write (or not write) about any of the presentations as I see fit without fear of censorship by the vendor or Tech Field Day. My general disclaimer is here