AppCatalyst – So simple a networker got it running

AppCatalyst

AppCatalyst is a thin hypervisor based on VMware Fusion that is targeted at developers. Developers who want to use container-based deployments on virtual machines can use AppCatalyst to deploy their workloads. VMware Photon is a minimal Linux container host that provides Docker, rkt, and the Garden container specification out of the box, and it is included in AppCatalyst as the default image.

Where VMware Fusion has a GUI, there is none to be found with AppCatalyst. A REST API and a CLI are the primary touch points for rapidly spinning up and tearing down workloads. With a REST API exposed, other automation tools can drive AppCatalyst. Vagrant anyone?

AppCatalyst and the Vagrant plugin for AppCatalyst are free. Go download them today if you’re running Mac OS X 10.9 or later.

First steps – Installing

  1. Download and install AppCatalyst
  2. Add the AppCatalyst bin directory to your current shell's PATH if you desire with:
export PATH=$PATH:/opt/vmware/appcatalyst/bin

or
Browse to /opt/vmware/appcatalyst/bin and run appcatalyst

Executing appcatalyst will yield the following:

Overwatch:~ aburke$ appcatalyst

VMware AppCatalyst Technical Preview June 2015
Build # 2836636
Copyright (C) 2015 VMware Inc.
All Rights Reserved

Usage:
appcatalyst ...

Virtual Machine Operations
vm list     List all available VM IDs.
vm create     Create a VM from the default template.
vm clone     Create a clone of the parent VM.

Virtual Machine Power Operations
vmpower list    List all powered-on VM IDs.
vmpower on     Power on the VM, or resume a suspended VM.
vmpower off     Power off the VM immediately.
vmpower shutdown     Shut down the guest OS.
vmpower suspend     Suspend the VM, saving the current state.
vmpower pause     Pause the VM.
vmpower unpause     Unpause the VM.

Guest OS Operations
guest getip      Get the IP address of the VM.

Checking for updates...
No updates available

As you can see, it is a very basic but powerful CLI.

Furthermore, I can create a VM very quickly. If I want to develop against two different platforms or different virtual machine images, I can edit my AppCatalyst.conf and change the default image.

Overwatch:~ aburke$ sudo appcatalyst vm create NetworkInferno

Info: Cloned VM from /opt/vmware/appcatalyst/photonvm/photon.vmx to /var/root/Documents/AppCatalyst/NetworkInferno/NetworkInferno.vmx

Validate the list of AppCatalyst VMs

Overwatch:~ aburke$ appcatalyst vm list
Info: VMs found in /Users/aburke/Documents/AppCatalyst
NetworkInferno

Power on the new VM

Overwatch:~ aburke$ sudo appcatalyst vmpower on NetworkInferno
2015-06-24T17:29:26.619| ServiceImpl_Opener: PID 17246
Info: Completed power op on for VM at /private/var/root/Documents/AppCatalyst/NetworkInferno/NetworkInferno.vmx

Once it has started, issue the following to find the IP address of your new Photon VM.

Overwatch:~ aburke$ sudo appcatalyst guest getip NetworkInferno
192.168.113.130

Time to SSH to our box with a pre-built key.

Overwatch:Vagrant aburke$ ssh -i /opt/vmware/appcatalyst/etc/appcatalyst_insecure_ssh_key [email protected]
The authenticity of host '192.168.113.130 (192.168.113.130)' can't be established.
RSA key fingerprint is 77:e8:6e:02:72:a2:13:98:f3:36:00:ba:c1:06:ae:22.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.113.130' (RSA) to the list of known hosts.
Last login: Fri Jun 19 20:51:42 2015 from 192.168.217.1
photon [ ~ ]

Now that we are in, it is time to see what is installed already.

photon [ ~ ]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
photon [ ~ ]$ sudo docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64

Great! I can see I have Docker installed. I could run a script that automated the pull and deployment of a particular Docker container if I wanted to.
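
If I wanted to script that end to end, a minimal sketch could look like the following. It assumes the NetworkInferno VM from above and the bundled insecure SSH key, and uses nginx purely as a placeholder image:

#!/bin/bash
# Minimal sketch: power on the VM, discover its IP, then pull and run a container over SSH.
VM=NetworkInferno
KEY=/opt/vmware/appcatalyst/etc/appcatalyst_insecure_ssh_key

sudo appcatalyst vmpower on "$VM"
IP=$(sudo appcatalyst guest getip "$VM")

# nginx is only a placeholder image; substitute whatever container you actually want.
ssh -i "$KEY" -o StrictHostKeyChecking=no photon@"$IP" \
  "sudo docker pull nginx && sudo docker run -d -p 80:80 nginx"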

AppCatalyst Daemon and REST API

The AppCatalyst framework has a daemon that can be run. Running it is what opens the ability for Vagrant to use AppCatalyst, and it also allows an administrator to use the API browser, which you can access via http://localhost:8080.

This is the daemon that allows REST API access to AppCatalyst.
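
Once the daemon is running, you can also drive it directly with curl. The exact resource paths are documented in the Swagger UI at http://localhost:8080; the /api/vms paths below are an assumption based on that documentation, so verify them in the API browser before relying on them:

# List the VMs the daemon knows about (path assumed from the Swagger documentation).
curl http://localhost:8080/api/vms

# Power operations are exposed under the same API; again, confirm the exact path in Swagger.
curl -X PATCH -d 'on' http://localhost:8080/api/vms/power/NetworkInferno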

Vagrant UP all the things

First you will need to install the Vagrant AppCatalyst plugin. One of the first things you will note, if you have used Vagrant with Fusion previously, is that the Fusion provider costs money. Vagrant with the AppCatalyst plugin is free!

aburke$ vagrant plugin install vagrant-vmware-appcatalyst
Installing the vagrant-vmware-appcatalyst plugin. This can take a few minutes...
Installed the plugin vagrant-vmware-appcatalyst (1.0.1)!

With the plugin installed, it is time to start the AppCatalyst daemon.

Overwatch:/ aburke$ /opt/vmware/appcatalyst/bin/appcatalyst-daemon
2015/06/24 20:26:12 DEFAULT_VM_PATH=/Users/aburke/Documents/AppCatalyst
2015/06/24 20:26:12 DEFAULT_PARENT_VM_PATH=/opt/vmware/appcatalyst/photonvm/photon.vmx
2015/06/24 20:26:12 DEFAULT_LOG_PATH=/Users/aburke/Library/Logs/VMware
2015/06/24 20:26:12 PORT=8080
2015/06/24 20:26:12 Swagger path: /opt/vmware/appcatalyst/bin/swagger
2015/06/24 20:26:12 appcatalyst daemon started.

With that started, it is time to deploy two VMs with Vagrant. This is what my Vagrantfile looks like.

# Set our default provider for this Vagrantfile to 'vmware_appcatalyst'
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'vmware_appcatalyst'

nodes = [
  { hostname: 'aburke-1', box: 'vmware/photon' },
  { hostname: 'aburke-2', box: 'vmware/photon' }
]

Vagrant.configure('2') do |config|

  # Configure our boxes with 1 vCPU and 512MB of RAM
  config.vm.provider 'vmware_appcatalyst' do |v|
    v.vmx['numvcpus'] = '1'
    v.vmx['memsize'] = '512'
  end

  # Go through nodes and configure each of them
  nodes.each do |node|
    config.vm.define node[:hostname] do |node_config|
      node_config.vm.box = node[:box]
      node_config.vm.hostname = node[:hostname]
      node_config.vm.synced_folder('/Users/aburke/Development', '/development')
    end
  end
end

With a simple vagrant up --provider=vmware_appcatalyst
we are away. There are two boxes in my Vagrantfile that deploy aburke-1 and aburke-2. These virtual machines have 1 vCPU and 512MB of RAM as defined by numvcpus and memsize.

With the command vagrant ssh aburke-1 I now have access to the machines I have deployed.
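
For completeness, the standard Vagrant lifecycle commands work against these boxes in the usual way (assuming the AppCatalyst provider implements the normal power actions):

# Check the state of both boxes
vagrant status

# Jump into the second box
vagrant ssh aburke-2

# Suspend or tear everything down when finished
vagrant halt
vagrant destroy -f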

Rapid

For a network guy I find this stuff pretty amazing. Deploying entire networks becomes feasible if the virtual machines I were deploying were SRX firewalls or Vyatta routers. I could subsequently deploy a Docker image running Quagga or exaBGP and instantly have a routing service.
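
As a rough sketch of that last idea, inside the Photon VM it could be as simple as the following. The image names are placeholders rather than specific published images, so substitute whichever Quagga or exaBGP image you trust:

# Run a routing daemon as a container on the host network so it can peer externally.
# Image names below are placeholders.
sudo docker run -d --name quagga --net=host my-quagga-image
sudo docker run -d --name exabgp --net=host -v /etc/exabgp:/etc/exabgp my-exabgp-image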

Service Composer, Security Groups, and Security Tags

Service Composer

Service Composer within VMware NSX provides an administrator the ability to define a scalable and tiered security policy independent of the underlying infrastructure or routed topology. This is the feature within the NSX platform that allows security to scale. Providing security that is enforced at a unit level, protecting virtual-to-physical or physical-to-virtual communications, and allowing event-driven security actions, Service Composer is the beating heart of NSX.
This post will introduce the numerous concepts of Service Composer, their respective touch points, and how to securely enforce application workloads.

Security Groups

Security Groups provide administrators a mechanism to dynamically associate and group workloads. This abstraction allows a membership definition based upon one of many vCenter constructs. An administrator has the ability to create numerous Security Groups.

Security groups can have the following types of memberships:

  • Dynamic Membership based on object, abstraction or expression
  • Static Membership based on manual selection
  • Inheritance through another Security Group, also known as nesting

My definition of object, abstraction, or expression is one of the following – Security Tag, IP Set, Active Directory Group, VM Name, OS Type, Computer Name, Security Group, and so on. Something that is expressed in vCenter that is not a note, folder, or label.

It is possible to match on one or more of the aforementioned objects, and membership can be defined as matching any of the criteria or requiring all of them. This granularity and control means the policy, or logical box, captures exactly the right workloads.

If a workload is instantiated that matches one or all of the parameters defined by the Security Group's membership rules, it will be associated with that Security Group. At this stage all that has occurred is a manual, dynamic, or inherited grouping of workloads.

Security Tags

A Security Tag is a labelling mechanism that can be used as an abstraction to describe a state. It can be impressed upon a workload or be the matching criteria of a Security Group.
An administrator can create numerous labels to suit how they want to identify a specific workload. Given that the matching criteria of a Security Group can be a Security Tag, a workload that is tagged can be automatically placed into a Security Group. Whilst an administrator can apply a Security Tag to a workload via the Web Client, the API or a 3rd party integration can also be used to tag a workload.

Something that uses the API directly would be a cloud management platform such as vRealize Automation. When a blueprint is selected by a user or an administrator, it can be configured to tag workloads with one or many Security Tags. As a result the workloads will inherit membership of the relevant group.
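
As a hedged sketch of what tagging via the API looks like, NSX for vSphere exposes Security Tags under /api/2.0/services/securitytags. The calls below are based on the NSX 6.x API guide and use placeholder credentials, manager address, tag ID, and VM ID, so check your version's documentation before using them:

# List the Security Tags defined on the NSX Manager (hostname and credentials are placeholders).
curl -k -u admin:password https://nsx-manager/api/2.0/services/securitytags/tag

# Attach an existing tag to a VM by its managed object ID (IDs are placeholders).
curl -k -u admin:password -X PUT \
  https://nsx-manager/api/2.0/services/securitytags/tag/securitytag-10/vm/vm-42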

A 3rd party integration that uses Security Tags to change the group membership of a workload is Endpoint Security. An agent-less anti-virus solution could scan the VMDK associated to a selection of workloads. On detection of a severity one threat the anti-virus solution could revoke a particular tag (say, Production Tier) and invoke a new Security Tag upon the workload.

But why not use Labels?

Security Tags may be a dose of déjà vu to VMware administrators who have used labels for a long time. Security Tags are specific and exclusive to NSX. The story goes that NSX Security Tags were introduced to the product due to the heavy usage of labels and folders. Heavy usage is a good thing – the problem is that they had been used solely with a compute mentality in mind. This meant that where roles and responsibilities were isolated, there was a chance that labels and folders used by compute administrators could adversely alter the security domain.

Security Policy

Security Policies are re-usable rulesets that can be applied to Security Groups. Security Policies are created by administrators and express three types of rulesets:

  • Endpoint Services – guest-based services such as AV solutions
  • Firewall rules – Distributed Firewall policies
  • Network Introspection services – network services such as IDS, IPS, and encryption

Security Policies are created in such a way that any combination of rulesets can be derived.

An example of Security Policy and multiple tags – Security Policy A may take advantage of Firewall rules. It is applied to Security Group A, which matches workloads based on the Security Tag – Web. Security Policy B may take advantage of Network Introspection rules that redirect tcp/1433 for Deep Packet Inspection. It is applied to Security Group B, which matches workloads based on the Security Tag – DB Inspection.

The workload ‘Web VM’ is subsequently tagged with the Security Tag – Web. It inherits the Distributed Firewall rules defined in Security Policy A. This policy explicitly states the following:

Security Policy ‘A’

  • SRC – 172.16.42.0/24, 172.16.43.0/24
  • DST – Policy’s Security Group
  • PRT – 443, ICMP
  • Action – Permit

Due to the reusable nature of Security Policy it is possible to match source or destination to Any, Source Security Group, Destination Security Group, or a selection of other Security Groups. This allows the Security Policy to have its Source or Destination modified based on what it is applied to further extending its reusability.

The administrator is told that all Web VMs must have Deep Packet Inspection to determine if DB queries are legitimate or of a malicious nature. Traditionally that involved a bit of network wizardry (buy me a beer and I’ll tell you how 000/911 used to do it!) that may have had substantial lead times.

The 3rd Party Network Introspection solution registered with this environment is Palo Alto Networks. It enables advanced services on a per cluster basis. In short – it deploys a virtual appliance to each host in the cluster. The Distributed Firewall has an option to redirect the packet via VMCI (kernel path) to the virtual appliance. The Panorama management platform is aware of the kernel redirection rule defined by the Security Policy applied to the Distributed Firewall. The rule exposed into Panorama can have Palo Alto rules applied to it. In this case that is Deep Packet Inspection – SQL Injection. Based on the outcome of the advanced rule the packet is dropped or passed back via the kernel and out of the Distributed Firewall.

By applying the relevant third party redirection service (in this case by an Advanced policy applied to a Security Group with membership based on a Security Tag) it is possible for administrators to define advanced function on a per application basis independent of the underlying topology.

Third party service chaining

With the ability to provide advanced Network Introspection and End Point services based on membership of a Security Group or application of a Security Policy it is possible to provide per application chaining.

The ability to Service Chain workloads and provide advanced services to workloads comes about from two core abilities:

  • Application of multiple Security Tags
  • Nesting of Security Groups

These two methods define how and what is matched for the advanced service. What if I have a workload that is tagged with two Security Tags and becomes a member of two different Security Groups? Which Security Policy takes precedence?

Security Policies can have a weighting applied to them. This weight is an arbitrary number ranging, from what I have tested, from 1,000 to 16,000,000. A Security Policy with a higher weight has a higher precedence. An example of a highly weighted policy that takes precedence over other policies attached to a workload might be a quarantine policy. If an Endpoint Service detects a threat, it may apply the tag ‘threat.found.sev.HIGH’ to a workload. A Quarantine Security Group matches group membership based on the Security Tag ‘threat.found.sev.HIGH’, and that Security Group has the Security Policy ‘Quarantine – High’ applied to it. Its rules are:

Security Policy ‘Quarantine – High’
Rule 1

  • SRC – Policy’s Security Group
  • DST – AV-Remediation Security Group
  • PRT – Trend Deep Security Remediation Service Group (4118, 4120, 5274, 80, 443)
  • Action – Permit
  • Log – Disabled

Rule 2

  • SRC – Policy’s Security Group
  • DST – Policy’s Security Group
  • PRT – All
  • Action – Deny
  • Log – Enabled

The workload that was tagged with ‘threat.found.sev.HIGH’ by the Endpoint service detecting a threat will automatically inherit the above two rules, because the weighting of this policy is drastically higher than that of the policy matching the workload's original group – Security Group A.

Weighting of Security Policy allows such precedence:

  • Security Policy ‘Quarantine – High’ Weight = 9000
  • Security Policy ‘A’ Weight = 5000

This would enforce Security Policy ‘Quarantine – High’ in its entirety BEFORE Security Policy ‘A’ in the scenario where a workload is dual-tagged based on an event.

How many links in the chain?

What if I wanted to have two different Network Introspection services on one flow type? What comes first? What is the order of operations? The Distributed Firewall is the key redirection point into 3rd party integrations for Network Introspection services. The Distributed Firewall has 16 slots, of which VMware reserves 0-3 and 12-15. Slots 4-11 can be used for registered Network Introspection services. This gives the administrator the flexibility to register services and use the correct 3rd party integration based on the desired outcome.

If an administrator had Palo Alto Networks and Symantec registered to the NSX for vSphere platform for IDS functionality, it can be deployed on a per-application basis. With the redirection policy enforced by a Security Policy applied to a Security Group, there is a choice, down to the flow level, of what action is taken. Application A could leverage Symantec IDS on a flow, Application B could leverage Palo Alto IDS on a flow, and Application C could use both as part of a dual-vendor strategy. The flexibility of the architecture leaves the choice to the administrator.

A security framework

This is an example of how it is possible to use unit-level enforcement and object abstractions, and couple them with 3rd party services, to provide an automated security architecture. The deployment of Security Groups, Security Policies, and unit-level firewalls is made easy through a centralised management point that can distribute the architecture independent of the network.