Ubuntu Container Host – Zero to Hero
The following takes an administrator through manually preparing an NSX-T container cluster. It assumes only the NSX-T prerequisites are configured and that the container hosts are connected to their networks.
This example uses a 3 NIC container host. This means:
- vNIC1 is connected to k8s_pod_access Logical Switch / ens160
- vNIC2 is connected to k8s_transport Logical Switch / ens192
- vNIC3 is connected to Management_Network VSS Port Group / ens224
This approach provides management traffic isolation: NCP communicates with NSX Manager over the management network connected via vNIC3.
Note: All YAML files for the Kubernetes configuration are located at the end of this document.
NSX Pre-requisites
NSX-T and Kubernetes integration requires the following configuration:
Type | Name | Value | ID |
---|---|---|---|
IP Block | k8s-pod-network | 172.16.0.0/16 | e77b1435-54af-43bc-8045-9a22b8e3bdf9 |
IP Pool | k8s-external-ippool | 10.1.1.128/25 | b1b80f47-97de-4d1a-81d8-93bf6c98d613 |
Transport Zone | TZ | Overlay | eea034f0-4406-4ae6-bfff-10490873449d |
Firewall Section | TOP | n/a | aedcc7ec-ccf1-44ee-a66e-b371371b87e9 |
Firewall Section | BOTTOM | n/a | 001fd432-cd48-4339-926e-d692032cb0e7 |
T0 Logical Router | Tier0_LR | n/a | 6beaf18d-2e7e-4074-94c1-38ea9f7ba3e6 |
T1 Logical Router | k8s_pod_router | n/a | n/a |
Logical Switch | k8s_pod_access | 172.16.253.0/24 | n/a |
Logical Switch | k8s_transport | n/a | n/a |
These objects must be configured in advance and are referenced in later configuration steps.
Add hostname for all nodes
Update all nodes with their hostname in /etc/hostname
```
vmware@k8s-master01a:~$ cat /etc/hostname
k8s-master01a
```
You can also use `hostnamectl set-hostname` if desired.
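For example, on the first master (repeat on each node with its own name):

```
# Set the hostname persistently via systemd
sudo hostnamectl set-hostname k8s-master01a
```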
Add host entries for all nodes
Update /etc/hosts with the node name and loopback IP.
```
vmware@k8s-master01a:~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       k8s-master01a

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
- Validate the host networking is configured correctly.
```
vmware@k8s-master01a:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto ens160
iface ens160 inet dhcp

auto ens192
iface ens192 inet manual
# Set this to manual to avoid 5 minute reboot timer due to DHCP

auto ens224
iface ens224 inet static
address 192.168.120.201
netmask 255.255.255.0
dns-nameservers 192.168.110.10
dns-domain corp.local
dns-search corp.local
up route add -net 192.168.110.0 netmask 255.255.255.0 gw 192.168.120.1
```
Some comments about the networking:
- ens160 is connected to the k8s_pod_access Logical Switch and acquires a static DHCP lease.
- ens192 is connected to k8s_transport and is used for the overlay traffic between container hosts for pods in a namespace.
- ens224 is the container host management interface. It connects to a port group in a "management VRF" and needs a static route to reach the 192.168.110.0/24 network, where nsxmgr-01a.corp.local resides. Defining the route in /etc/network/interfaces makes it persistent.
- The contents of /etc/network/interfaces should be adjusted to represent each container host. Restart NetworkManager, the networking service, or the container host, or run ifdown/ifup to apply the changes (see the sketch below).
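For example, to apply and verify the ens224 changes without a full reboot (interface names as per this example):

```
# Bounce the management interface to pick up the new configuration
sudo ifdown ens224 && sudo ifup ens224

# Confirm the address and the static route towards 192.168.110.0/24
ip addr show ens224
ip route | grep 192.168.110.0
```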
- Reboot all nodes with `sudo reboot`.
Installing Kubelet
Elevate to root for this shell.
sudo -H bash
Update the node's package list, add the Google apt signing key, and then add the required apt source for Kubernetes.
```
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo touch /etc/apt/sources.list.d/kubernetes.list
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
```
Disable Swap
Disable swap with `sudo swapoff -a`. This is not persistent. There is a great write-up from Frank D. on why you need to disable swap.
You will need to modify /etc/fstab for persistence. Comment out the swap entry /dev/mapper/1604--template--vg-swap_1 as shown below.
```
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#/dev/mapper/1604--template--vg-root /          ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
UUID=56a9f6ec-66d3-4691-a1c7-9cc1d36f65dc /boot ext2    defaults        0       2
#/dev/mapper/1604--template--vg-swap_1 none     swap    sw              0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0      0
```
This ensures that a reboot does not re-enable swap and impact the docker daemon restarting.
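If you prefer to script the change rather than edit /etc/fstab by hand, a minimal sketch, assuming the swap entry matches the volume name shown above:

```
# Disable swap now and comment out the swap entry for persistence across reboots
sudo swapoff -a
sudo sed -i '/vg-swap_1/ s/^/#/' /etc/fstab
```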
Install Kubernetes to all nodes.
Note that `=1.11.5-00` specifies the version of the packages to install.

```
apt-get install -y kubelet=1.11.5-00 kubeadm=1.11.5-00 kubectl=1.11.5-00
```
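Optionally, hold the packages so a later `apt-get upgrade` does not move the cluster off the pinned version:

```
# Prevent apt from upgrading the pinned Kubernetes packages
sudo apt-mark hold kubelet kubeadm kubectl
```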
Install Docker.io to all nodes
Install docker on all the hosts:
```
root@k8s-master01a:/home/vmware# sudo apt-get install docker.io
```
Validate docker
```
root@k8s-master01a:/home/vmware# sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-09-05 14:24:32 SGT; 18s ago
     Docs: https://docs.docker.com
 Main PID: 4027 (dockerd)
    Tasks: 18
   Memory: 17.2M
      CPU: 656ms
   CGroup: /system.slice/docker.service
           ├─4027 /usr/bin/dockerd -H fd://
           └─4035 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start
```
And with:
```
root@k8s-master01a:/home/vmware# docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.6.2
 Git commit:   f5ec1e2
 Built:        Thu Jul 5 23:07:48 2018
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.6.2
 Git commit:   f5ec1e2
 Built:        Thu Jul 5 23:07:48 2018
 OS/Arch:      linux/amd64
 Experimental: false
```
From an external host, copy the required NSX files to each node:
```
scp -r nsx-container-2.3.0.10066840 vmware@k8s-master01a:~/
scp -r nsx-container-2.3.0.10066840 vmware@k8s-node01a:~/
scp -r nsx-container-2.3.0.10066840 vmware@k8s-node02a:~/
```
Installing images into Docker's local registry
Install the NCP image into Docker:
sudo docker load -i ~/nsx-container-2.3.0.10066840/Kubernetes/nsx-ncp-ubuntu-2.3.0.10066840.tar
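You can confirm the image was loaded with something like the following; the exact repository name may vary slightly by release:

```
# The NCP image should now appear in the local image list
sudo docker images | grep nsx-ncp
```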
Container Network Integration
Install CNI
sudo dpkg -i ./nsx-container-2.3.0.10066840/Kubernetes/ubuntu_amd64/nsx-cni_2.3.0.10066840_amd64.deb
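A quick sanity check that the CNI package installed; the plugin binaries typically land under /opt/cni/bin, though the exact paths may vary by release:

```
# Confirm the nsx-cni package is installed and its CNI binaries are present
dpkg -l | grep nsx-cni
ls /opt/cni/bin
```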
Open vSwitch
Change into the directory to which OVS was extracted.
cd ~/nsx-container-2.3.0.10066840/OpenvSwitch/xenial_amd64
Install OVS manually with the following commands. You need to use this version of OVS and not OVS from upstream.
```
sudo dpkg -i libopenvswitch_2.9.1.8614397-1_amd64.deb
sudo dpkg -i openvswitch-datapath-dkms_2.9.1.9968033-1_all.deb openvswitch-common_2.9.1.9968033-1_amd64.deb openvswitch-switch_2.9.1.9968033-1_amd64.deb
```
If dependencies are missing, the install will fail. Running `sudo apt-get -f install` will pull in the requested dependencies; then re-run the OVS install above. Once OVS is installed, create the integration bridge br-int and attach ens192:
```
sudo ovs-vsctl add-br br-int
sudo ovs-vsctl add-port br-int ens192 -- set Interface ens192 ofport_request=1
sudo ip link set br-int up
sudo ip link set ens192 up
```
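A quick way to verify the bridge afterwards:

```
# br-int should exist with ens192 attached as ofport 1
sudo ovs-vsctl show
sudo ovs-vsctl get Interface ens192 ofport
ip link show br-int
```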
NSX-T requirements
The logical switch ports of the container hosts connected to the transport switch need to be tagged with the relevant information. Each node's logical port on k8s_transport receives two tags: the node name under the ncp/node_name scope and the cluster name under the ncp/cluster scope.
Tag | Scope |
---|---|
k8s-master01a | ncp/node_name |
k8s-cluster1 | ncp/cluster |
k8s-node01a | ncp/node_name |
k8s-cluster1 | ncp/cluster |
k8s-node02a | ncp/node_name |
k8s-cluster1 | ncp/cluster |
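These tags can be applied in the NSX Manager UI on each node's logical switch port, or via the NSX-T Manager REST API. Below is a rough curl sketch; the logical port UUID and admin password are placeholders, and the PUT must send back the full port body including its current _revision.

```
# Placeholders: NSX Manager address, admin password and the node's logical port UUID
NSX_MGR=nsxmgr-01a.corp.local
LPORT="REPLACE_WITH_LOGICAL_PORT_UUID"

# Fetch the existing logical port (the body, including _revision, is needed for the update)
curl -sk -u admin:"REPLACE_WITH_PASSWORD" \
  https://$NSX_MGR/api/v1/logical-ports/$LPORT -o lport.json

# Edit lport.json to add the two tags, e.g. for the master node:
#   "tags": [
#     {"scope": "ncp/node_name", "tag": "k8s-master01a"},
#     {"scope": "ncp/cluster",   "tag": "k8s-cluster1"}
#   ]
# then push the modified definition back
curl -sk -u admin:"REPLACE_WITH_PASSWORD" -X PUT \
  -H "Content-Type: application/json" -d @lport.json \
  https://$NSX_MGR/api/v1/logical-ports/$LPORT
```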
Kubernetes
The following files are in the current home directory. The example assumes these are already prepared, like the examples linked in the Appendix at the end of this document.
Creating the K8s cluster
Create K8s cluster with kubeadm
Download the kubeadm configuration and initialise the cluster with:

```
curl -O https://raw.githubusercontent.com/pandom/nsxt-k8s/master/nsxt-k8s/single-master-node/2.3/ubuntu/kubeadm.yaml
sudo kubeadm init --config kubeadm.yaml
```
This step creates the Kubernetes cluster using the kubeadm bootstrap process.
Join nodes to master
```
sudo kubeadm join 10.1.1.129:6443 --token qe9x7x.csl24ocedde5cfup --discovery-token-ca-cert-hash sha256:c5311472a4597071e651dc7f4b3c1872ece922cef755704b5fe04927248de863 --ignore-preflight-errors=CRI
```
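If the bootstrap token shown above has expired, a fresh join command can be generated on the master:

```
# Prints a new "kubeadm join ..." command with a valid token
sudo kubeadm token create --print-join-command
```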
Validate the nodes are connected with kubectl get nodes
```
vmware@k8s-master01a:~$ kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
k8s-master01a   Ready     master    14h       v1.11.3
k8s-node01a     Ready     <none>    14h       v1.11.3
k8s-node02a     Ready     <none>    14h       v1.11.3
```
Create NS
A namespace is required for the NCP and node agent pods. This example uses the nsx-system namespace.
```
vmware@k8s-master01a:~$ kubectl create ns nsx-system
namespace "nsx-system" created
```
```
vmware@k8s-master01a:~$ kubectl get ns
NAME          STATUS    AGE
default       Active    16h
kube-public   Active    16h
kube-system   Active    16h
nsx-system    Active    16h
```
Apply cluster RBAC
The next step is to apply a basic RBAC for the cluster.
Apply the RBAC with:
```
kubectl apply -f https://raw.githubusercontent.com/pandom/nsxt-k8s/master/nsxt-k8s/single-master-node/ubuntu/rbac-ncp.yaml -n nsx-system
```
Once the RBAC configuration has been applied, let's have a look at the service accounts in the nsx-system namespace!
```
vmware@k8s-master01a:~$ kubectl get sa -n nsx-system
NAME                         SECRETS   AGE
default                      1         16h
ncp-svc-account              1         16h
nsx-node-agent-svc-account   1         16h
```
Create the replication controller
The next step is to create the replication controller for the NCP.
Apply the configuration with:

```
kubectl apply -f https://raw.githubusercontent.com/pandom/nsxt-k8s/master/nsxt-k8s/single-master-node/ubuntu/ncp-rc.yaml -n nsx-system
```
```
vmware@k8s-master01a:~$ kubectl get rc -n nsx-system
NAME      DESIRED   CURRENT   READY     AGE
nsx-ncp   1         1         1         16h
```
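If the replication controller never reports READY 1, the NCP pod logs are the first place to look (the pod name suffix will differ in your environment):

```
# Find the NCP pod and tail its logs
kubectl get pods -n nsx-system
kubectl logs -n nsx-system nsx-ncp-vh7l5 --tail=50
```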
Create the daemon set for node agent
This will create the daemonset for the NSX node agent.
Apply the configuration with:

```
kubectl apply -f https://raw.githubusercontent.com/pandom/nsxt-k8s/master/nsxt-k8s/single-master-node/ubuntu/nsx-node-agent-ds.yaml -n nsx-system
```
```
vmware@k8s-master01a:~$ kubectl get ds -n nsx-system
NAME             DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nsx-node-agent   2         2         2         2            2           <none>          16h
```
And confirm the pods belonging to the ds nsx-node-agent:
```
vmware@k8s-master01a:~$ kubectl get pods -o wide
NAME                   READY     STATUS    RESTARTS   AGE       IP               NODE          NOMINATED NODE
nsx-ncp-vh7l5          1/1       Running   1          23d       172.16.253.212   k8s-node02a   <none>
nsx-node-agent-v2cs5   2/2       Running   4          23d       172.16.253.211   k8s-node01a   <none>
nsx-node-agent-wctrf   2/2       Running   4          23d       172.16.253.212   k8s-node02a   <none>
```
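If a node agent pod is not 2/2, you can pull logs per container. The container names come from the nsx-node-agent DaemonSet spec, so check them first; a quick sketch (pod names will differ in your environment):

```
# List the containers inside one node agent pod
kubectl get pod nsx-node-agent-v2cs5 -n nsx-system -o jsonpath='{.spec.containers[*].name}'

# Tail the logs of a specific container (name taken from the output above)
kubectl logs -n nsx-system nsx-node-agent-v2cs5 -c nsx-node-agent --tail=50
```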
Beauty!
Topology
This is your current topology
Appendix
Note: all YAML files here take the templates included with the container plugin download, with the desired fields modified for this example.
All outputs have been stripped of whitespace and commented lines beginning with #.
Configuration Files
https://raw.githubusercontent.com/pandom/nsx-kubernetes/master/nsxt-k8s/single-master-node/2.3/ubuntu/kubeadm.yaml
https://raw.githubusercontent.com/pandom/nsx-kubernetes/master/nsxt-k8s/single-master-node/2.3/ubuntu/ncp-rc.yaml
https://raw.githubusercontent.com/pandom/nsx-kubernetes/master/nsxt-k8s/single-master-node/2.3/ubuntu/nsx-node-agent-ds.yaml
https://raw.githubusercontent.com/pandom/nsx-kubernetes/master/nsxt-k8s/single-master-node/2.3/ubuntu/rbac-ncp.yaml