Linux and MBPr

The beautiful and wonderfully talented Docker Core member, Jessie ‘FrazelleDazzel’ Frazelle, has a wonderfully detailed guide on her blog. It outlines how to install Debian ‘Jessie’ natively on a MacBook Pro Retina, stepping through a clean install or partition of the SSD, breaking the EFI partition into a usable section, and then installing Linux.

There are a few caveats with Yosemite installs due to a logical volume now being used as the boot partition. These are listed on the rEFIt page linked within Jessie’s article.

I have yet to give this a go as I am a single work/home Mac man, but I know the Retina display with Fedora would be gorgeous.

Check out Jessie’s blog here.

You will be able to containerise your desktop thanks to native support for LXC and Docker in distributions such as Ubuntu 14.10 and Fedora 21. Have you tried it? Let me know how you get on.

Log Insight 2.5 – Integrated Load Balancer

Log ALL the things

With the importance of logging on the increase, you need to ensure you have a distributed log management system that can ingest thousands, if not hundreds of thousands, of inputs per second. The Log Insight appliance from VMware allows the ingestion of many logs. Coming from an ELK background there is a sense of familiarity here: ingestion, regex and search, and awesome visualisation. Keeping the logging service up and running is important, and this has traditionally been solved with an external load balancer.

It is no longer BYO Load Balancer

The architecture of Log Insight is clustered in nature. Whilst you can have a standalone server processing logs, if you require high log ingest then you need additional workers. The master is the active node against which lookups are performed. Workers take logs in from an External Load Balancer (ELB) or the Integrated Load Balancer (ILB). Both the ILB and an ELB allow logs to continue to be ingested when a worker node becomes unavailable.

The image below shows how my Log Insight cluster is set up.


The ILB here is enabled on its own IP address, which is also resolved via the log-l-01.corp.local DNS entry. The VIP takes the L2 MAC of the current master and acts as a proxy. The ILB allows log ingestion to be balanced across worker nodes in a LI cluster, so when a worker is put into maintenance mode it can be done safely without loss of log ingestion.
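Log sources simply point at that FQDN rather than at any individual node. As a quick illustrative sketch — the FQDN is from my lab, and Python’s standard library syslog handler is standing in for whatever your application actually uses to emit logs:

```python
import logging
import logging.handlers

def make_syslog_logger(host, port=514):
    """Return a logger that ships messages to a remote syslog target over UDP."""
    logger = logging.getLogger("li-demo-%s" % host)
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(host, port))
    logger.addHandler(handler)
    return logger

# Point every log source at the ILB FQDN rather than an individual node, e.g.:
#   log = make_syslog_logger("log-l-01.corp.local")
#   log.info("hello from the app tier")
```

Because clients resolve the FQDN to the VIP, a worker entering maintenance mode is invisible to them.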

The image below shows the LogInsight workers, the Master, and the VIP.

VIP-MAC-LI with MAC 00-50-56-03-1b-25 shares the MAC of the current master.

Tips from the coal face

There are some things to watch when migrating from an ELB to the ILB, changing VIPs, or adjusting DNS entries.

  • Watch the TCP sessions. It is possible that long-lived, persistent TCP sessions are still flowing across an ELB.
  • If using an FQDN, ensure that the DNS TTL is lowered ahead of the cut-over.
  • There is an L2 adjacency requirement between workers, the master and the VIP. ARP is used to discover neighbours, so factor this into the design.

Note: if using an ELB, logs arrive with a different source IP. This can cause confusion when looking at log entries. The ILB honors the source IP of the log, therefore giving more accurate logs.


Logging is paramount given the numerous events that occur. Having a robust and stable platform that not only stores logs but allows users to CONSUME their logs is important. Visualisation of data matters, and correct, accurate data produces valid outcomes. Protecting your cluster’s log ingestion will help immensely in troubleshooting scenarios.


NSX Controller Syslog via API

Lay of the land

Just a quick one today. It is possible to set a syslog server for each NSX Controller, which allows events to be reported to a syslog server. In my case, I have Log Insight (LI) clustered behind a VIP, fronted by the FQDN representing my LI master and three workers. NSX Manager is installed and three controllers have been deployed.

Web API access

I am going to use the Mozilla Firefox REST client against NSX Manager. I need to ensure the following:

  • Content-Type: application/xml
  • Basic Authentication

Checking controller settings

Time to check the existing settings on the controllers. By issuing the following GET request you will see nothing configured.

GET https://nsxmgr-l-01a/api/2.0/vdn/controller/controller-1/syslog

Here we can see nothing is configured for our syslog servers on the NSX controllers. Where controller-1 is the name of the controller, you can substitute the names of the other controllers.

Posting our desired settings

Okay. Time to issue a POST against the NSX manager to configure the settings. This will need to be repeated against all controllers.

POST https://nsxmgr-l-01a/api/2.0/vdn/controller/controller-1/syslog
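The POST carries an XML body describing the syslog target. As a sketch — the element names here follow the controllerSyslogServer schema as I have it noted, so verify them against your NSX API guide, and the server value is my lab FQDN:

```xml
<controllerSyslogServer>
    <syslogServer>log-l-01.corp.local</syslogServer>
    <port>514</port>
    <protocol>UDP</protocol>
    <level>INFO</level>
</controllerSyslogServer>
```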


Where controller-1 is the name of the controller, substitute for the other controllers:

  • controller-1
  • controller-2
  • controller-3

Now, what is a change without validation of the work performed?

Time for an API Validation

Time to issue the request we used earlier to check that our change was successful.

GET https://nsxmgr-l-01a/api/2.0/vdn/controller/controller-1/syslog

There is some XML output stating our change has been committed correctly. Check all controllers to ensure this has been changed.

API all the things

Whilst this was done with a REST client in a browser, it would be possible to do this with Python, PowerShell, or your programming language of choice. The key thing here is that a rather clicky-clicky UI change can be done very simply and efficiently. ALL HAIL API.
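As a sketch of the scripted approach — the manager hostname is from my lab, the credentials are placeholders, and only the standard library is used:

```python
import base64
import http.client

NSX_MANAGER = "nsxmgr-l-01a"  # lab manager hostname
CONTROLLERS = ["controller-1", "controller-2", "controller-3"]

def syslog_path(controller):
    # Same resource the browser REST client hits
    return "/api/2.0/vdn/controller/%s/syslog" % controller

def check_syslog(host, controller, user, password):
    # Basic auth plus the XML content type, as set in the REST client
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    headers = {"Authorization": "Basic " + token,
               "Content-Type": "application/xml"}
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", syslog_path(controller), headers=headers)
    resp = conn.getresponse()
    return resp.status, resp.read()

if __name__ == "__main__":
    for c in CONTROLLERS:
        print(c, syslog_path(c))
```

Looping over all three controllers removes the repetition the browser approach requires.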


Bulk creation of NSX rules with Python

A customer wanted to validate the impact of numerous firewall rules within NSX: how much impact would there be on host CPU cores and distributed firewall throughput when NSX had 100, 500, and 1000 rule sets loaded? One method would have taken me a very long time – clicky clicky GUI. We have an API, so why not use it? The script below generates XML in the format required for distributed firewall rule sets.

The Python script will print XML. It will create a section called POC-test-rules. It will loop and print 1–100 for j and repeat this for i 0–5. This will make 600 rules for our test environment.
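A minimal sketch along those lines — the rule XML here is deliberately simplified for illustration rather than the exact DFW schema:

```python
def print_rules():
    """Print a DFW section named POC-test-rules holding 600 placeholder rules."""
    print('<section name="POC-test-rules">')
    count = 0
    for i in range(0, 6):          # outer loop: i runs 0-5
        for j in range(1, 101):    # inner loop: j runs 1-100
            count += 1
            print('  <rule disabled="false" logged="false">')
            print('    <name>poc-rule-%d-%d</name>' % (i, j))
            print('    <action>allow</action>')
            print('  </rule>')
    print('</section>')
    return count

if __name__ == "__main__":
    print_rules()
```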


So there is some XML that can be uploaded into the firewall section by a REST POST. But before we do that, let’s have some more authentic rules. Here is an adjusted script to do some dynamic ports.
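In the same simplified spirit — the address ranges and port list are arbitrary lab values, not anything prescribed:

```python
import random

def print_dynamic_rules():
    """Like before, but with pseudo-random IPs and ports so rules look real."""
    random.seed(1)  # repeatable output for a lab run
    print('<section name="POC-test-rules">')
    count = 0
    for i in range(0, 6):
        for j in range(1, 101):
            count += 1
            src = "10.%d.%d.%d" % (i, j, random.randint(1, 254))
            dst = "10.%d.%d.%d" % (i + 100, j, random.randint(1, 254))
            port = random.choice([22, 80, 443, 3306, 8443])
            print('  <rule disabled="false" logged="false">')
            print('    <name>poc-%s-to-%s-%d</name>' % (src, dst, port))
            print('    <source>%s</source>' % src)
            print('    <destination>%s</destination>' % dst)
            print('    <service>TCP/%d</service>' % port)
            print('    <action>allow</action>')
            print('  </rule>')
    print('</section>')
    return count

if __name__ == "__main__":
    print_dynamic_rules()
```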

The output should look a bit more real world!


There is a nicer output with IPs and ports.

I have demonstrated REST API POST via a browser here. Alternatively it is possible to use a subsequent script to push this information.

Here I am pushing a file called fw.txt – the output from the previous script saved into a text document (I have not yet got the first script to save a file directly). A breakdown of this script is as follows: conn.request will POST over the connection defined by conn to the NSX Manager. It posts the contents of the file to the L3 DFW section. The body is defined by opening fw.txt, which is found in the directory /Users/aburke/Desktop. header2 indicates the connection type and content type, and defines that the body should be parsed as XML. The two print commands print the result of the actions – 400, 404, 500 – HTTP response codes, which you can read more on here.
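Reconstructed from that description — the manager hostname, the credentials and the exact section path are assumptions (verify the endpoint against the NSX API guide), and the headers dict here plays the role of header2:

```python
import base64
import http.client
import os

NSX_MANAGER = "nsxmgr-l-01a"              # hypothetical manager hostname
FW_FILE = "/Users/aburke/Desktop/fw.txt"  # output of the previous script

def build_headers(user, password):
    # Basic auth plus the XML content type so the body is parsed as XML
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return {"Authorization": "Basic " + token,
            "Content-Type": "application/xml"}

def push_rules(host, path, body, headers):
    conn = http.client.HTTPSConnection(host)
    conn.request("POST", path, body, headers)
    resp = conn.getresponse()
    # Prints the HTTP response code and reason - 200, 400, 404, 500 and so on
    print(resp.status, resp.reason)
    return resp.status

if __name__ == "__main__" and os.path.exists(FW_FILE):
    with open(FW_FILE) as f:
        body = f.read()
    headers = build_headers("admin", "password")
    push_rules(NSX_MANAGER, "/api/4.0/firewall/globalroot-0/config/layer3sections",
               body, headers)
```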

So here is a practical use of Python and the NSX API. What have you been doing to be more efficient lately?

(Safely) Owning File permissions like a boss

I uploaded some themes to my WordPress install for my blog recently. What I found was that the WordPress theme failed to read style.css. This was frustrating, and then I thought about the error. Read. File permissions. Linux. So I SSH’d in and found a very telltale sign something was wrong.

So let’s change the files for this WordPress theme to what is deemed the industry standard.

  • Change all the files to 644 (-rw-r--r--)
  • Change all the directories to 755 (-rwxr-xr-x)

Time to check our style.css file.

Now to do this to the folders under the theme directory.

{} is replaced by the file path upon execution.

To issue a change like this to all folders and sub-folders within my theme in one go, chmod’s -R switch can also be used. This switch makes the change recursive.
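Pieced together, the commands look something like this. The theme path is hypothetical and defaults to a scratch directory, so the sketch is safe to try as-is:

```shell
# Hypothetical theme path; defaults to a scratch copy for safe experimenting
THEME="${THEME:-/tmp/theme-demo}"
mkdir -p "$THEME/inc"
touch "$THEME/style.css" "$THEME/inc/functions.php"

# All files to 644 (-rw-r--r--); {} is replaced by each path find locates
find "$THEME" -type f -exec chmod 644 {} \;

# All directories to 755 (-rwxr-xr-x)
find "$THEME" -type d -exec chmod 755 {} \;

# Spot-check the stylesheet
ls -l "$THEME/style.css"

# The blunt alternative is chmod's recursive switch, but it applies one mode
# to files and directories alike, so the find version above is preferred:
#   chmod -R 755 "$THEME"
```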

Some tips

Something to remember: if you have a very large number of files, it may be worth considering other methods of executing this change. Used with find’s `-exec … \;`, chmod is spawned as a new process for every file and folder it is run against. If the recursed directory has 20,000 files then CPU and storage may take a hit whilst the action occurs (`-exec … +` batches many paths into each chmod invocation).

Integrating vSRX into VIRL

Cisco VIRL is a learning platform which allows you to run real network operating systems. It is built on an OpenStack architecture that allows rapid deployment of instances of NX-OS, IOSv, IOS-XE, ASA and vSRX. I am going to show you some tips on getting the vSRX installed into it.
Thanks to those who want to remain anonymous for the tips, testing and variables.
Here are the steps so that you can inject a configuration file into the vSRX:
 1. Convert the ‘thin provision’ image to a ‘fat provision’ image. This can be done using the vmware-vdiskmanager tool.
2. The image needs to be modified to accept configuration file injection. This must be done BEFORE loading the image into VIRL via the User World Management (Skinned OpenStack) interface.
You can run the command above on your VIRL VM, so copy the image into the VIRL VM and execute there.
Log in as ‘root’ and edit the file /etc/fstab (nano /etc/fstab). The /etc/fstab should look like this (the vtbd1 disk is the config disk):


  • /dev/bo0s1e /config ufs rw 2 2 – the old configuration disk.
  • /dev/vtbd1s1 /config msdosfs rw 0 0 – the FAT configuration disk.

Save the file.
Now we need to remove the SSH key. Remove the file with:
Shut the VM down.
3. The VM image is now ready to be loaded into UWM as a vSRX image.

Using the vSRX image in VIRL

You can add the vSRX image to your VIRL server under the ‘admin/images/’ menu by selecting ‘add’ and choosing ‘VSRX’ from the pick list, as per the picture:

 NOTE – If you want to make the vSRX image your default vSRX image, leave the Name/Version field blank. You can put release version information in the ‘release’ field.
If you create a topology with a vSRX node in it, at simulation start time, the system will look for a default vSRX image. If there is no default image, the simulation will not start and you will need to specifically set the VM_image and VM_flavor field values to the vSRX image that you’ve registered.
Configuration text placed in the ‘configuration’ field for the vSRX will be automatically loaded into the VM at boot time. A correctly formatted Junos configuration will be applied, assuming that there are no syntax errors! If you want to provision the VM with a basic set of user accounts, a configuration snippet like the one below can be applied:
NOTE – if you do NOT pass in any configuration, vSRX will not like you and will crash on you!!! Make sure you pass in a minimal config, like the one below.
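A minimal Junos sketch in that spirit — the user name and the password hash placeholders here are illustrative, not working values:

```
system {
    host-name vsrx;
    root-authentication {
        encrypted-password "<hash>"; ## replace with a real password hash
    }
    login {
        user admin {
            class super-user;
            authentication {
                encrypted-password "<hash>"; ## replace with a real password hash
            }
        }
    }
}
```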

Your mileage may vary with this. VIRL is fun because there are lots of things happening behind the scenes.


Prophetic workflows for antiquated office rituals

Brothers and sisters, women and children. Mmmhm. Oh, I have been enlightened. I have had a glimpse into the productivity and workplace future. Cast down the shackles and burdens of tradition and the norm, and embrace change in the workplace.

  • Work at times that suit you and not conventional hours
  • Throw down the granite slab of creation and take up a tool that caters to your work style.
  • The habitat created for work should know no bounds.
  • Where attendance was mandated substitute with output.
  • Cast down those that hoard information, and exchange knowledge and learnings freely.
  • Collaborate across many mediums and leave the digital mail where it belongs.
  • Don’t be afraid to speak up at work (and up the corporate ladder) and share what your thoughts are, for you will never know who echoes such sentiments.

Whilst the above list seems to have been pulled from a prophetic tablet of some description, I feel many workplaces are happy to stay in the norm.

Historically people have worked 9–5 in a punch-in, punch-out style, and have defined their workday around it. In a modern day where society is more connected than ever, online 24/7, and the other side of the world is 200ms away, this isn’t productive. But if you’re going to change the office hours, Anthony, you might ask: what about the work? How do employers ensure they are getting value out of their staff?

By focusing on output. Output and what work the person is producing is far more important than sitting at a desk between 9-5. 9-5 doesn’t suit most people of my generation. Imagine a day where you could wake up and go for a 5am run, get home, do some emails, have breakfast with the significant other and maybe the kids, do a conference call, host some meetings, have lunch with family, spend the afternoon building a report or solution, maybe duck down to the shops and then finish a little more correspondence after dinner. Sound good?

Well, if this is to work it requires two more things: working from anywhere (like a home office or cafe) and working on any device. The mentality that the office is “inside” the corporation and everything else is “outside” is slowly dying. The user’s identity is the fluid domain that is the “inside” of the corporation. Any device should be usable for the individual’s workflow – after all, we aren’t all laptop people.

Share your knowledge and keep learning. No one hires any more purely for someone’s wealth of knowledge. Quite simply put, technology is evolving so rapidly that hoards of knowledge on a single technology go stagnant fast. Sharing knowledge and voraciously learning new things is important. Don’t wait for a formal training course to learn something new. Find that something new yourself.

Email is one of the biggest time-sink necessities. For some reason it has become the de facto standard of communication in business, and it needs to change. There is so much email that merely dealing with its volume via rules and managed lists is a full-time job in itself. Some days are spent dealing with nothing but email. With instant sending there is an expectation of instant response, and quite simply that isn’t how it should be used. Collaboration platforms such as Slack, HipChat and Project Squared, offering integrations into popular workflow tools, help modernise these traditional smoke-signal mediums.

Whilst this post may come across as a far-fetched dream for some, it is definitely how workplaces need to become. I say my generation as I teeter on its upper bounds (27 at time of writing), but the younger generations already work like this. This is what they know. These are the inventors, creators and some of the brightest minds in the world. What these ideals do is allow us to get on with creating and producing in an environment that is comfortable for the individual.

I am lucky enough that my identity allows me the aforementioned privileges, and all I need to work is a simple Internet connection and any device. Whilst I am not the epitome of productivity, I do not like using tools that hamper my output. Many of our legacy systems and workplace practices have an effect in orders of magnitude. Be efficient. Be awesome. Challenge.

UI and UX are paramount

Single Purpose applications

Look at how applications are developed these days: single-use or single-purpose applications. Twitter, Facebook, Calculator, Reminders, and Email are just some examples of single-purpose applications from an application catalog. They are gateways to a function. In this world we have choice and diversity in how we perform all the functions of our world. How we interact is critical to a function’s success.

These single-purpose applications, whilst standalone, have elements of unity: interactions with each other through API calls and contractual relationships; file sharing, link sharing, mirrored services. The ability to have applications talk to one another helps establish an ecosystem of isolated containers. Think bridges between pontoons. Or Bora Bora!

A bad UX is very damaging

A bad user experience will tarnish an application or product no matter how revolutionary, game-changing or <insert marketing buzzword here> it is. It may have all the bells and whistles and do all the things, but quite frankly it will be dropped – for something that does far less but is easier to use. That is not because we humans are simple, lazy or unwilling to learn. A simple UX or UI can make complicated and/or convoluted tasks easier to approach.

So, quite simply put – if your user design and experience is poor, convoluted or not conducive to human interaction then it will simply be discarded for something better. The way your application or product is consumed and used is very important.

SPARK: OVN brings networking function to OVS

The OVN project launched today. OVN adds routing, switching, security policy and L2–L4 ACLs on top of Open vSwitch’s network abstractions. Very cool stuff here. What I found even more interesting was that, in the OpenStack survey completed in 2014, Open vSwitch was the most tested and used switching platform in OpenStack deployments.

Read more here from the brains trust behind OVN and OVS.

If you want to know about the architecture of OVN, read here.

One year at VMware

The Role

People have asked me what it is like to work at a vendor, and in particular as a pre-sales SE. It has been a good ride thus far. I have learned so much about vendor and customer interactions, product development, product improvement, sales cycles, and how to learn new tech voraciously.

As a Systems Engineer you can be as technical as you like or stay salesy. This allows a wide spectrum of people to fit the role, and also probably tarnishes many technical SEs with a sales brush. I talked about that here with regards to myself wearing a tie. I get to present what is possible to a customer, work with them on their problems, and hopefully solve them with good solutions. It is a great role.

What I have learned

Exposure to so much cool technology, strategy and learning. My gosh. Over the last year my knowledge of NSX has gone from customer level to very deep. That would be expected, considering it is my job to know it, right? I have also learned how to speak in public, convey a message, articulate problems, describe bugs, resolve problems, design solutions, integrate with an ecosystem, manage people and process, deal with negotiations, and provide constructive feedback. I have also discovered many tips with regards to travel, both domestic and international. I have gone from nothing to Platinum with my local carrier within 11 months. I know the nuances of plane travel and luggage packing – where to eat, where to stay, how to get around, and the coffee and food shops in many capital cities and countries abroad. Life-skills!

I managed to knock off the VCP-NV exam for NSX too! That was a bonus.

The Team

One could not ask for a more awesome group of people to work with. My primary domain is networking; secondary would be management and operations; tertiary would be automation and x86 virtualization. When you work in a team with a diverse skill set you can draw upon the pool of knowledge very easily. People give information freely. The collective knowledge of the vendor borg cube is astonishing. A simple question to Dmitri has led to multi-hour whiteboard sessions on numerous occasions. People are more than happy to share, and it is great. So many rooms, so many people, and never do you feel like you are the smartest person in one. I am sure many vendors are like this. It is pretty important you have a good management structure and peers, as you travel with them a whole heap. They are like a second family, which is great, and at times it makes the travel a little more bearable.


I have done over 30 public-facing presentations at events, ranging from a VMware User Group presentation to technical presentations in front of 500 people at vForum. It has been great to create content myself and deliver solutions and usable information to a variety of people. My favorite presentation of 2014 was co-created with Andrew Babakian and centered around a kill-chain: how a phishing attack compromised a network we were both familiar with, and how to defend against it. This subsequently turned into a blog post by Matt Berry and myself.

I also did a keynote presentation. Now this was very interesting. The idea of a keynote is to present a concept, an idea or a theory to get people to think. Challenge the norm. What was cool was the massive gear-shift from the other presentations I do. I was on stage with introduction music, I had my name on a fancy video, and there was smoke and strobes. That was pretty rock star. Whilst I initially perceived this as “markety”, it ended up being a massive learning experience. What I took away from working on a keynote was that it is a bit like the movie Inception: you need to seed the idea in its most basic, rawest form for it to stick. It was good to have follow-up technical conversations with the people for whom it had stuck.

Pecha Kucha is the idea of telling a story in about 6 minutes with 20 seconds per slide/image. At our technical summit, which saw 1000 VMware staff from the region get together in Macau, our MD/VP challenged people to deliver an impromptu deck. Roman and I got up on stage and role-played a buzzword-happy CIO and a VMware staff member. It was great to retell the VMware story with memes and cliche-busters. (Roman: After all, it is awesome that Jimmy, the network architect, got network deployment times down from one year to 10 months! ;)) The big takeaway here was adapting and doing. We were given the opportunity and it was there for the taking.


Well, that was 2014. My 2015 plans are in place and I want to do a whole lot more. OpenStack, PaaS, application creation and CI are all part of what I want to learn. I also want to keep working on my log management and event alerting skill set. Whilst I will keep supplementing and learning in my primary domain, I am going to back up and reinforce learnings in my secondary and tertiary domains. Here’s to 2015 and what new things it will bring.

Beware of the IT guy in a tie.

I’ve been in IT since 2010. My first role was as an IT support administrator in schools, which involved slicing my week up across a variety of sites. I did everything from customer service and desktop support to server administration and network support. Since day one I have worn a shirt and slacks four days a week. On Fridays, I wear a casual shirt and jeans.

But what are the origins of the tie? Why would someone want to wear one?

From Wikipedia

The boy-king Louis XIV began wearing a lace cravat about 1646, when he was seven, and set the fashion for French nobility. This new article of clothing started a fashion craze in Europe; both men and women wore pieces of fabric around their necks.

Before this, it was believed that Croatian mercenaries wore these as a sign of allegiance to the French crown, and there are traces of Roman soldiers wearing neckties to denote rank, legion or function. But, as many may point out, ties are now seen as purely the realm of sales.

As my career grew and I moved into a government agency that dealt with a myriad of executives and IT staff, it was deemed appropriate that I wear a tie. Since then that piece of neck material has adorned my throat for many years. I was designing and delivering new network architectures as a CLI warrior, with Level 1 support teams who would rack and stack. I was comfortable in my attire. Plus the Minister of War, Finance and Social Imagery – the wife – deems the style pretty impressive.

So they look good but what is your problem?

There is a problem, though, with IT and ties. There is a damned awful assumption: that Tie == salesman who knows nothing. A sales grunt pushing sales-technology-speak to make a sale. This is something that irks me. When I walk into a room for a technical discussion I am instantly dismissed as a sales drone. I find great amusement in this. Whilst my role is technical pre-sales, I hold my head high knowing I can put the technical in technical pre-sales.

As conversations open, my sales guy will lead and we both will talk. If I am presenting, I work through my presentation or demonstration, gauging the depth of those in the room. More often than not the conversations go very deep and get extremely technical, and I can speak at a depth that is more than suitable for most discussions.

Judging a book by its cover

It is amusing that in 2015 the lessons our parents taught us still don’t stick. I get looked down on by many the first time I walk into a room, and when someone walks out, they walk out with a different opinion. Whilst I don’t conform to the shirt, jeans and sports jacket look that Silicon Valley has imposed on IT, I do hold to what I like to wear and what I am comfortable wearing, and I will keep doing it.

If you live in Asia, Pacific and Japan then I will see you soon in my shirt and tie! Maybe I can change the notion that Tie == sales drone, one meeting at a time.

Implementing a Zero Trust Security Architecture

The old mantra of “trust but verify” just isn’t working. “Never trust, always verify” is how we must apply security in this era of sophisticated breaches.

Looking at 2014 in the rear-view mirror, there has been a myriad of security breaches. Target’s breach exposed over 40 million credit card numbers and 70 million pieces of personally identifiable information, resulting in over 1 billion dollars in related costs (1). To further the hit list of this year’s victims, as of late November 2014 Sony Pictures Entertainment’s entire corporate network is offline due to a suspected breach, which has seen staff cease electronic work entirely due to a corporate-wide, self-imposed network shutdown (2).

In a time where attacks are prevalent from both inside and outside of the business, IT and security departments must ask themselves the question: who can you trust?

This blog first appeared on NetworkInferno, vNetadmin and

What is a Zero Trust Model?

In 2013 Forrester published a cyber security framework called the Zero Trust Model. The paper proposes a vendor neutral approach to changing the way cybersecurity is thought about. The major benefits of a Zero Trust Model include:

  • Zero Trust is applicable across all industries and organizations
  • Zero Trust is not dependent on specific technology or vendor
  • Zero Trust is scalable


To implement such a model there needs to be a shift in how people see information and security architectures. The notion of a trusted network (usually seen as internal) and an untrusted or lower security network (external) must be removed. In a world of Zero trust, all network traffic is untrusted. The three technology enablers of this are:

  1. Ensure all resources are accessed securely regardless of location
  2. Adopt a least privilege strategy and strictly enforce access control
  3. Inspect and log all traffic

These three key facets of a Zero Trust Model allow security professionals to scale their security posture and approach enforcement in a new manner.

Why is a Zero-Trust Model any Easier with NSX?

Micro-segmentation and Service Composer transform the way security is applied in the virtual data center, making “Zero Trust” a reality.


VMware NSX also provides a stateful firewalling capability, distributed to every NSX-enabled hypervisor, known as the Distributed Firewall (DFW). The DFW, applied between the virtual NIC (vNIC) and the vSwitch, is integrated into the hypervisor kernel and provides near-line-rate firewall throughput, scaling horizontally as hosts are added to the data center.

The DFW allows administrators to wrap security controls around the virtual machine itself, removing the dependence on in-guest firewalling which is often easily compromised by application-based exploits. In addition, having a firewalling capability right at the VM’s point of entry to the network allows for a vastly different approach to the traditional design of “multi-tier app equals multiple subnets in the network and three legs off the firewall”, e.g.:


Figure 1 – Unit-level segmentation

With NSX DFW, a single layer 2 network segment can now be divided into “micro-segments” where all that’s required is a security policy to define the different application tiers. So the 3 Tier app which normally requires 3 VLANs, 3 Subnets and 3 firewall legs now becomes:

Firewall rules are managed centrally using Service Composer (see below), yet the compiled rule set is actually pushed out as a filter to each hypervisor and is specific to each virtual machine. Filtering is stateful and can be applied from layer 2 to 4. Rules can match on MAC, IP, Port plus on specific data center objects, like VM security tag, VM name, Port Group or Logical Switch attachment, Cluster membership and many other criteria.

NSX DFW also offers the ability to do service chaining where IDS/IPS and Anti-Malware virtual appliances from best-of-breed security vendors can be inserted inline between virtual machines – something that has been traditionally very hard to do.

VMware NSX Service Composer

VMware NSX introduces a capability known as Service Composer, which helps you provision and assign firewall policy and security services to applications in a virtual infrastructure. You map these services – in the form of a policy – to a security group, and the services are applied to all virtual machines that are members of that security group.


Figure 2 – Service Composer

Services such as IDS/IPS, anti-malware and next-gen firewalling can be inserted into the traffic flow and effectively chained together between VMs on a granular, per workload basis. API driven tagging of VMs allows services to be applied dynamically, allowing instant reaction to new threats. NSX and Service Composer provides the foundation for creating granular, zero-trust security architectures as discussed below.

Security Groups

A security group defines the assets (virtual machines, IP ranges, etc.) that you want to protect. Security group membership may be static – a specific VM or set of VMs – or dynamic, where membership may be defined in one or more of the following ways:

  • vCenter containers such as clusters, port groups, or datacenters
  • Security tags, IPset, MACset, or even other pre-existing security groups. An example of this could be to include in a “Quarantine” Security Group any VM tagged with a security tag of “AntiVirus.virusFound”
  • Directory groups exposed to NSX if Active Directory is registered to NSX Manager
  • Regular expressions such as virtual machines with ‘custA’ or ‘web’ in their name
  • Operating System of the Virtual Machine

Note that dynamic security group membership is continuously assessed. For example, a virtual machine tagged with the AntiVirus.virusFound tag might be added to the Quarantine security group due to a match on the tag. When the virus is cleaned and this tag is removed from the virtual machine, it is dynamically removed from the Quarantine security group. Note that when a VM becomes a member of a new group, it typically remains a member of any existing groups, so a policy weight is used to determine which firewall rules are applied first (see below).
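As a sketch of the API-driven tagging behind that quarantine workflow — the manager hostname, tag and VM IDs are placeholders, and the endpoint path follows the NSX-v security tag API, so verify it against the API guide:

```python
import base64
import http.client

NSX_MANAGER = "nsxmgr-l-01a"  # hypothetical NSX Manager hostname

def tag_vm_path(tag_id, vm_id):
    # Attach/detach a security tag on a VM (NSX-v securitytags API)
    return "/api/2.0/services/securitytags/tag/%s/vm/%s" % (tag_id, vm_id)

def set_tag(host, tag_id, vm_id, user, password, attach=True):
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    headers = {"Authorization": "Basic " + token}
    conn = http.client.HTTPSConnection(host)
    # PUT attaches the tag, DELETE removes it; security group membership
    # (and therefore the applied policy) then updates dynamically
    conn.request("PUT" if attach else "DELETE",
                 tag_vm_path(tag_id, vm_id), headers=headers)
    return conn.getresponse().status
```

This is the hook an anti-virus solution uses: tag on detection, untag on clean, and the group membership follows.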

Security Policy

A security policy is a collection of stateful firewall rules and/or specific security service configurations.

Security features provided by a security policy

  • L2–L4 stateful firewall rules that define what traffic is allowed to, from or within a security group. Note that the rule enforcement point is in the vSphere hypervisor, between the vNIC and the vSwitch – thus a rule that denies VM-to-VM traffic for VMs on the same subnet is achieved with just one rule. This capability, while similar to private VLANs, provides far greater flexibility, as specific ports can still be permitted while all others are denied.
  • Guest Introspection services for data at rest, such as anti-virus, vulnerability management and Data Loss Prevention (DLP) scanning.
  • Network Introspection services for data in flight, such as Intrusion Detection/Prevention (IDS/IPS), next-generation firewalling and WAN optimisation.

Attaching a Security Policy to a Security Group

Administrators can attach a security policy (say SP1) to a security group (say SG1). The services configured for SP1 are then applied to all virtual machines that are members of SG1.

If a virtual machine belongs to more than one security group, the order in which services are applied depends on a weight value assigned to each security policy attached to the security groups in question – the rules in the higher-weight policy are applied first, followed by the rules in the lower-weight policies.
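The weight-based ordering can be sketched in a few lines; the policy names, weights, and rule strings below are hypothetical, chosen only to illustrate the mechanism:

```python
def ordered_rules(policies):
    """Flatten rules from all applicable policies, highest weight first."""
    by_weight = sorted(policies, key=lambda p: p["weight"], reverse=True)
    return [rule for policy in by_weight for rule in policy["rules"]]

policies = [
    {"name": "Production", "weight": 1000, "rules": ["allow DNS", "allow AD"]},
    {"name": "Quarantine", "weight": 5000, "rules": ["block any"]},
    {"name": "Zone1-Web", "weight": 2000, "rules": ["allow 443 from DMZ"]},
]

print(ordered_rules(policies))
# ['block any', 'allow 443 from DMZ', 'allow DNS', 'allow AD']
```

Because the firewall evaluates rules top-down, giving the Quarantine policy the highest weight guarantees its block rule is hit before any permit rule from a lower-weight policy.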

Zero-trust in practice

A Zero Trust Model with VMware NSX Distributed Firewall and Service Composer is built around the fact that policy can be applied to the workload, dynamically enforced, anywhere within the infrastructure, and implemented right at the virtual machine’s connection to the vSwitch.

The image below depicts the implementation of a Zero-trust model. Traditional security zones are represented here by the colored rectangles. Each zone is constructed using an NSX Security Group and has virtual machines, or in some cases physical machines, as members (physical machine group membership is based on an “IP Set”, i.e. a list of IPs or subnets that defines the physical hosts). An NSX Security Policy (the red cylinder in the diagram) is then attached to each Security Group where access to or from that group is required.


Figure 3 – Zero Trust model

The Zero Trust model applied in Figure 3 – Zero Trust model is such that all users – even those on the WAN – are considered external to the data center. All traffic to the application front end (Zone 1 – Web Tier in this example) must enter via the existing physical perimeter firewalls and is then proxied in the DMZ via a set of application gateways.

It’s then only the application gateways, such as reverse proxies and Web Application Firewalls (WAFs), that are allowed to reach the front end of the Zone1 web servers. This ensures that all traffic to the application, regardless of its source, is inspected, validated, and permitted only once the application gateway allows it.

Note also that within the Zone1-Web Tier we’ve illustrated that these VMs are on the same Layer 2 network; however, as shown by the small firewall icon, security enforcement is applied right at the vNIC, so the Zone1 policy can easily deny VM-to-VM traffic within the group, regardless of the underlying network between these VMs.

To see how these Security Groups are represented within NSX, we open up the Service Composer Canvas view:

Figure 4 – Service Composer Canvas

Looking at the VM-In-Production Security Group in Service Composer, a click on each icon reveals which VMs currently belong to the group; the policies that apply to it; and what is being enforced. For example, a click on the blue icon representing virtual machines displays a list of dynamically added VMs.


Figure 5 – Virtual Machines matched by Service Composer

“Production” is a Security Policy that is applied to the “VM-In-Production” Security Group, which includes VMs that are already members of the Zone1-Web, Zone2-Application, and Zone3-Database Service Composer groups. A Security Policy is also applied to Zone1-Web, Zone2, and Zone3, so VMs that are members of both a Zone policy and the “Production” policy will have rules applied in the order of the weight assigned to each policy.

In this example, the Quarantine Policy has been given the highest weight and thus Quarantine firewall rules will move to the top of the Firewall rule table shown in Figure 6 – Security Policy Weighting.

Figure 6 – Security Policy weighting

This ensures that a VM in the Quarantine group is denied network access before other rules are processed that might grant network access further down in the rule table.

As a consequence of weighting applied to each policy as per the diagram above, a VM that is a member of the two groups where Zone1 AND Production policies have been applied, will have Zone1 rules applied first followed by Production rules in the firewall rule table.

Here’s the firewall rule table generated by Service Composer and the policy weighting shown above. As you can see, the Quarantine group is first and contains a Block action with a Source of the Quarantine group and a Destination of any:


Figure 7 – Quarantine Group in Firewall view

Further down in the rule table, a VM in the VM-In-Production group gets access to essential network services like DNS and Active Directory. This is represented by the VM In-Production Security Group in Figure 3 – Zero Trust model. Below, in Figure 8 – VM In-Production Policy, are the relevant rules.


Figure 8 – VM In-Production Policy

Note also that clicking on the Policy icon in Service Composer reveals which policy, and thus which firewall rules and services, have been applied to VMs in this Security Group. This is highlighted below in Figure 9 – VM-In-Production Security Policy.


Figure 9 – VM-In-Production Security Policy

The Production Security Policy has three rules, each of which matches on source, destination, and service, and then performs an action. Rule 1 matches a source of the security group the policy is attached to, and a destination of the Zone5-Production Security Group. It matches on DNS and Microsoft AD services and permits them.


Figure 10 – Production Security Policy

Figure 11 – Zone1-Web Canvas shows the members of the Zone1-Web group. This particular topology has been deployed via a vRealize Automation blueprint. Built into the blueprint is the instruction to add these VMs to the Zone1-Web security group at provision time; security is thus automatically applied to these web application VMs without manual intervention or manual creation of firewall rules.


Figure 11 – Zone1-Web Canvas

To find out what Security Policy is being applied to Zone1-Web, click the Security Policy icon on the Service Composer group shown in Figure 12 – Zone1-Web Security Policy Canvas.


Figure 12 – Zone1-Web Security Policy Canvas

On closer inspection of the Web-Tier Security Policy in Figure 13 – Web-Tier Access Policy overview, we can see that SSH, HTTP, and HTTPS (22, 80, 443) are being permitted from Zone0-DMZ to Zone1-Web. The policy then allows any communication between Zone1-Web and Zone2-Application. From here it would be very easy to restrict this further by permitting only the application’s own ports.


Figure 13 – Web-Tier Access Policy overview

To ensure nothing can communicate with applications or machines other than those specified, a Default All Server VMs policy is created, which has an implicit deny-any-any rule associated with it. This ensures workloads within this security architecture cannot communicate with anything unless explicitly permitted.
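A minimal sketch of first-match rule evaluation with an implicit deny-any-any at the end, in the spirit of the default policy described above (the groups, services, and rules here are hypothetical, not real NSX objects):

```python
def apply_rules(rules, src, dst, service):
    """First matching rule wins; fall through to an implicit deny."""
    for rule in rules:
        if (rule["src"] in (src, "any")
                and rule["dst"] in (dst, "any")
                and rule["service"] in (service, "any")):
            return rule["action"]
    return "deny"  # implicit deny any any

rules = [
    {"src": "Zone0-DMZ", "dst": "Zone1-Web",
     "service": "HTTPS", "action": "allow"},
    {"src": "VM-In-Production", "dst": "infra",
     "service": "DNS", "action": "allow"},
]

print(apply_rules(rules, "Zone0-DMZ", "Zone1-Web", "HTTPS"))      # allow
print(apply_rules(rules, "Internet", "Zone2-Application", "SSH")) # deny
```

Anything not explicitly permitted by an earlier rule falls through to the deny, which is exactly the behaviour the Default All Server VMs policy provides.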

Looking at the logical security topology, an administrator can easily determine how to protect and enforce application topologies across the infrastructure. Jump hosts can be authorised to reach specific targets in the data center based on the Active Directory credentials of the user logged into that jump host.

Because NSX simultaneously provides centralised policy control (Service Composer) with distributed enforcement (Distributed Firewall), building a zero-trust model is now operationally viable and provides enforcement where there was previously none.


Figure 14 – Zero Trust model overview


Service Insertion and third-party integration via Network Extensibility

With Service Composer it is possible to insert additional services for consumption on a per-application basis. Using VMware’s Network Extensibility (NetX) framework, third-party partners can integrate with VMware NSX. This allows services such as remediation, advanced load balancing, packet capture and analysis, IDS/IPS, and anti-virus to be applied to Service Composer groups, in turn providing additional services to the applications that require them. Application services are no longer defined by a hardware architecture that exists for the lifecycle of the investment; services can be dynamically spun up and then torn down based on business requirements that can change rapidly.


Figure 15 – Zero Trust model overview with 3rd Party security integration

Figure 15 – Zero Trust model overview with 3rd Party security integration is an example of Symantec anti-virus scanning being applied to the VM In-Production policy. This insertion provides Symantec’s agent-less anti-virus and anti-malware capability to the VM In-Production Service Composer group. The result is that Zone 1 – Web Tier, Zone 2 – App Tier, and Zone 3 – Database VMs that are “In-Production” will all receive this service. New workloads assigned to the VM In-Production group will also receive AV scanning. VMs that are destroyed, or that fall outside the matching criteria of these zones, will no longer receive this additional service.

Automatic Policy

Service Composer provides the ability to automate the application of a security policy by leveraging Security Tags. A feature of NSX, Security Tags can be matching criteria for Service Composer groups, and based on what the tag is, additional actions can be taken.

In Figure 15 – Zero Trust model overview with 3rd Party security integration, the Quarantine Security Group has a match criterion of the Security Tag ‘Quarantine’. With the Symantec AV service protecting the VM In-Production Security Group, if a VM is found to have a vulnerability the “Quarantine” tag is applied. The tag makes the VM a member of the Quarantine group, which in turn enforces the Security Policy associated with that group. Firewall rules applied to the Quarantine group effectively block communication to “any” other address, and additionally permit connections only from the remediation group devices. The remediation service provided by Rapid7 will clean the VM and automatically remove the ‘Quarantine’ tag, which results in the VM being placed back into production.
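The tag-driven quarantine workflow can be modelled in a few lines. This is an illustrative sketch only, with hypothetical group and tag names rather than real NSX calls:

```python
def effective_groups(vm, groups):
    """Groups whose match tag is currently present on the VM."""
    return [name for name, tag in groups.items() if tag in vm["tags"]]

# Hypothetical mapping of group name -> security tag it matches on.
groups = {"Quarantine": "Quarantine", "VM-In-Production": "Production"}

vm = {"name": "web-01", "tags": {"Production"}}
print(effective_groups(vm, groups))   # ['VM-In-Production']

vm["tags"].add("Quarantine")          # AV scan finds a threat, tag applied
print(effective_groups(vm, groups))   # ['Quarantine', 'VM-In-Production']

vm["tags"].discard("Quarantine")      # remediation cleans and removes the tag
print(effective_groups(vm, groups))   # ['VM-In-Production']
```

No rules are rewritten at any point: applying and removing one tag is what moves the VM in and out of the quarantine policy.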

Whilst this is a strict and severe remediation policy, administrators can tailor this to their environment. This policy could destroy the VM and deploy another from template, leave in production and notify through Event and Log Notifications or apply additional security controls restricting access to certain assets. This highlights the flexibility of Service Composer, dynamic matching of Security Policy and the powerful nature of automated Service Insertion through our Network Extensibility partners.

Hardening your existing infrastructure from the inside out

Network virtualization with VMware NSX allows simplified integration into brownfield environments seeking to implement zero-trust security policies. The ability to deliver this on a per-application basis gives enterprises the edge in securing production workloads without interrupting production.

The ability to take existing workloads and enforce security policy in a dynamic, context-aware way, without the need for new physical equipment or complex topology changes, makes network virtualization a key tool in revolutionising data center security.


(1) – Target CEO fired for more than security
(2) – Sony Pictures down for a second day after network breach

Further Reading

VMware® NSX for vSphere Network Virtualization Design Guide
WhitePaper: NSX Distributed Firewalling Policy Rules Configuration Guide
Getting Started with Micro-Segmentation with NSX vSphere

About the Authors

Matt Berry – Matt Berry has spent over twenty years in the IT industry and in that time attained CCIE #10473 while working for AT&T in backbone operations. Matt has also spent time on the customer side of the fence, in the channel, and at other vendors, primarily focused on networking, application performance, and data center security. Matt joined the VMware Networking and Security business unit in mid-2013 and can’t get enough of hypervisor-based packet filtering and forwarding with VMware NSX. Matt is now the proud holder of VCP-NV – VMware’s Certified Professional for Network Virtualisation.

Anthony Burke – Anthony is a Systems Engineer within the Network and Security Business Unit (NSBU) at VMware. He delivers scale-out data centers powered by network virtualization. Before VMware, Anthony spent time in the emergency services sector as a Network Architect for critical infrastructure. He has a penchant for evangelizing and evolving legacy security architectures into something that can handle threats now and into the future. Anthony has a few letters after his name – CCNP, VCP-DCV and VCP-NV for a start. He writes frequently at and can be found on Twitter as @pandom_


Motonori Shindo – Motonori Shindo is a Staff System Engineer at VMware, focusing primarily on NSX, a network virtualization platform by VMware. He is known as a distinguished speaker in the computer and networking industries. He has contributed to many books and magazines, among which “VMware NSX” is the world’s first book specifically about NSX. His blog can be found at

Upgrading to Log Insight 2.5

Log Insight provides log management, aggregation and visual representation of events within both the physical and virtual network. It has the ability to manage the logs of thousands of distributed devices and systems and allow administrators a simplified portal for log management.

Log Insight just got an upgrade to version 2.5. My current pseudo-production environment was running 2.0 GA and I wanted to take advantage of the new features in 2.5 – namely the cluster load balancer and events trend analyser.

At a high level Log Insight supports the continuous ingestion of logs from a number of different platforms. It can take logs in and provide the ability to perform complex visual lookups, provide integrated regex lookup through logs, and graph events based on these logs. The architecture is quite simple and very powerful.


When Log Insight is upgraded, the master node is logged into and upgraded first. The worker nodes continue to ingest logs.

Under the Administration tab of Log Insight, select the Appliance tab, then select Upload PAK.


Select the PAK file you want to upgrade with.


If you are running a single master node with no workers in your Log Insight environment, you will need to accept a period of missing logs due to the downtime of the upgrade. If you are running a Log Insight cluster, the master is upgraded first, which only stops queries being run against logs; ingestion continues on the workers. Select Upgrade.



Upload of the PAK begins.


After a period of time the Master is upgraded and you will see the current release and build number. Happy days.

VMware Log Insight is now upgraded to 2.5 – vRealize Log Insight.
If you are running a Log Insight cluster, you will need to drop the individual LI workers into maintenance mode and upgrade them. If your LI worker pool is protected by an external load balancer, you will need to drop them from the server pool as well.

First, pause the worker. This will drop it into maintenance mode.

The green Upgrade icon appears, indicating the worker is ready to be updated. The status will change to an amber signal stating Upgrading, and after a period of time the worker will have upgraded.


The more astute reader will have noticed a 2.0.1 worker being upgraded to 2.0.2. At this stage I am offline from my lab and had to borrow from my PowerPoint deck!

With that, I have successfully upgraded my worker and can put it back into the load-balancer pool. Repeat the process for all workers in the LI cluster and you will be running version 2.5.
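The manual steps above (upgrade the master first, then pause, upgrade, and resume each worker in turn) can be sketched as a simple orchestration loop. This is purely illustrative and does not use the real Log Insight API; node structures and the stand-in upgrade function are hypothetical:

```python
def upgrade_cluster(master, workers, do_upgrade):
    """Upgrade the master first, then each worker one at a time."""
    do_upgrade(master)                    # queries pause; workers keep ingesting
    for worker in workers:
        worker["state"] = "maintenance"   # pause / drop from the LB pool
        do_upgrade(worker)
        worker["state"] = "active"        # re-join the pool

def do_upgrade(node):
    node["version"] = "2.5"               # stand-in for applying the PAK

master = {"name": "li-master", "version": "2.0", "state": "active"}
workers = [{"name": f"li-worker-{i}", "version": "2.0", "state": "active"}
           for i in (1, 2)]

upgrade_cluster(master, workers, do_upgrade)
print([n["version"] for n in [master] + workers])  # ['2.5', '2.5', '2.5']
```

Upgrading one worker at a time keeps the rest of the pool ingesting, which is why a cluster avoids the log gap a standalone node has to accept.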


SPARK: VIRL is launched!

Cisco VIRL

Cisco VIRL has been officially released. VIRL is Cisco’s network topology platform that allows the study, testing, simulation, and validation of enterprise and service provider topologies in a lab environment. Built on KVM using OpenStack, the platform enables network administrators to build powerful topologies for testing, validation, and architectural exploration of new technologies. It also allows old dogs like me to study against it for my CCIE certification.

In the words of the VIRL site:

  • Build highly accurate models of existing or planned networks.
  • Design, configure, and operate networks using authentic versions of Cisco’s network operating systems – IOSv, IOS-XRv, NX-OSv, and CSR1000v.
  • Integrate 3rd-party virtual machines, appliances, and servers.
  • Connect real and virtual networks into high-fidelity, high-scale development and test environments.
  • Design and test anywhere – VIRL is portable!

There are two ways to purchase VIRL. Both are annual subscriptions which, in my opinion, are fairly and reasonably priced. One is USD 199.99 per annum for non-commercial personal use; the other is academic pricing. Both have a gift-card format which allows parents or educators to share the love! Quite a simple shopping-cart feature for such a powerful reward. Regarding price, I have two mindsets.

Firstly, 200 dollars per annum is great when you’re in full study mode. I’ve spent more on rack rentals in a year. It is great to be able to pay for something and have it just work. I can think of the countless hours wasted troubleshooting scenarios only to find out it was a hung process in Dynamips, or that GNS3 didn’t do something, or that I had the wrong revision of code for my physical device.

Secondly, I think the price has a sticker-shock element to it. This might be due to anything from an internal struggle over not shipping it for free while recouping costs, to the fact that they wanted a litmus test. The fact that VIRL was touted as free has made this per-annum model harder to swallow. I see it in alignment with other training and service-styled solutions: INE’s All Access Pass and the Cisco Learning Network subscription both have you pay a per-annum fee in return for updates. I don’t see why you wouldn’t get updates with this.

NX-OS excites me as you can test and validate code. I’ve used VIRL in its many forms over the last few months and have integrated it into my physical network, VMware’s corporate WAN, and my Singapore cloud for additional testing capability. It’s a flexible platform and I think I’ve logged more CCIE hours against it than GNS3. I know there has been less time wasted troubleshooting my environment with VIRL than with the GNS3 platform – I never knew whether it was CPU exhaustion in GNS3 or PEBKAC when I built CCIE labs there. Having a new-born child, an hour of study is really 55 minutes of study that’s certification-focused and not stuffing around setting up GNS3! The VIRL50 coupon will get you 50 dollars off at checkout on the annual personal edition!

If you need documentation or support you can find it all here –

Disclaimer: I received a 100% discount coupon for my first year’s annual subscription to VIRL. This was through feedback from the beta, in which I was a participant. My standard disclaimer applies to this post, like all others. I have also paid for an Academic copy.

Don’t forget how to be a user


I work for a large technology company that provides many product offerings. I live in a world at the forefront of the network industry, delivering leading-edge technology. There are so many parts of my company’s offerings where I have nothing more than a user’s view of the world. You know what? That is absolutely fine.

I look at end-user computing with absolute marvel. I do not know how to administer it. I do not know how to optimise and scale the environment. I do not know why certain applications are published in a certain way. What I do know is I can use any application published to my app catalogue on my iPad, hand over to my laptop and continue working seamlessly. Where application publishing is not enough I can gain access to a floating Windows desktop and perform more intensive tasks. I know if I lose my iPad that the content on there is secure inside the provided containers and I can initiate a remote wipe.

As a consumer this experience is conducive to my on-the-road lifestyle. It suits my workflow and I have tightly integrated a work-anywhere mentality irrespective of the device I use. Being close to the product I work with in a technical capacity, I realised it is important not to forget something: don’t forget to look at it as a user. How does a user actually deploy said technology? How does a user work with technology to solve a problem? Am I actually solving problems or creating more? Is the interface intuitive or convoluted?

This is user experience. Think about the word consuming without the context of an API for once. Think of it as a user trying to get their job done. It is sometimes important to put on the hat of a user and see it through their eyes. In certain roles the lines between administrator and user blur, and I think it is important to step back and look at what you’re doing.