Container Management – How to Build a Kubernetes Development Cluster

Introduction

Navisite has recently started using Kubernetes and containers to increase efficiency with a continuous integration and continuous delivery (CI/CD) model for updates to internal service and support systems.

As a Principal Cloud Engineer here at Navisite, I thought it would be interesting to put together a test cluster to get an understanding of Kubernetes cluster management. It is very handy to be able to spin up a cluster as needed on any desktop machine I might be using, to test various aspects of cluster management before applying them to a production cluster.

This article focuses on that experience – building a platform for learning container management using a Kubernetes development cluster running on a local machine.

I am sharing that experience here for any of our clients who have been thinking about how to get started with understanding containerization. Please reach out to us if you are interested in exploring beyond experimenting with this development cluster, or even if you have questions/suggestions related to this project.

Future articles will focus on how to deploy Kubernetes production clusters in private and public cloud environments. Read on and have fun building a Kubernetes cluster on your local machine.

Project Overview

Containers have had a major impact on how cloud-native applications are developed and offer some benefits over the traditional “hypervisor” approach to application deployment. Containers provide a more reliable way to move software from one computing environment to another.

A container consists of the entire runtime environment: the application and just the resources needed to run it, in a single bundled package. This is unlike an application built with standard virtualization practices, where each application typically lives in a VM running a single operating system and application. Here is a visual comparison:

 

Containerized applications are much lighter weight (MB rather than GB in size) compared to virtual machines (VMs). A single server can host many more containers than virtual machines.

Alternatively, running containers in VMs can also be beneficial, as it may reduce the number of VMs required for the application, potentially driving down virtualization licensing costs. This also makes it possible to run containers in cloud environments. In either scenario, container proliferation can quickly become difficult to manage. Kubernetes provides a container-centric management environment to simplify management and deployment.

Before Getting Started

This project is an automated deployment of a multi-node Kubernetes cluster on VMs, for those wanting to get familiar with container orchestration and with developing/deploying containerized applications from a local desktop or laptop.

The project is designed to run on macOS, Linux or Windows. Some basic Linux system administration skills are required. On any of the OS platforms, a user with local administrative privileges is required to install the software packages on the local machine where the VMs reside.

The supplied Vagrantfile handles the provisioning of the VMs and uses embedded shell script provisioning for the guest OS and the Kubernetes/Docker deployment. While shell scripting is not the most efficient approach, it keeps the project easy to modify without requiring tools like Ansible, SaltStack, Chef or Puppet.

Variables have been defined for easy modification of the VM configuration parameters, including expanding the number of worker nodes in the cluster. Note that while it is possible to create a Kubernetes deployment with multiple master nodes, it is unnecessary overhead for a development cluster.

A note on security – in a production environment there are a number of security considerations that should be understood before deploying a container environment. These considerations are outside the scope of this project and were not applied here. Security in this environment is only as good as the perimeter of the laptop or desktop the VMs run on, and is the responsibility of the user.

A high-level architecture drawing is provided in the following figure:

Fig. 1 Architecture Diagram

Software Prerequisites

On the local machine (Mac OS, Windows or Linux) install the following applications in the order listed below. Follow instructions from the respective websites:

  1. Vagrant (Deployment tool for building the environment)
  2. VirtualBox (Virtual Machine provider)
  3. VirtualBox Extensions (add-on software needed for VirtualBox guests)
  4. Git (utility needed for downloading this project from GitHub)
  5. Minikube (used for generating a unique token for the multi-node cluster build)

Cluster Installation Overview

This project is intended as a learning tool and should not be considered a production level deployment of Kubernetes.

The cluster consists of a single master node with a user-defined number of worker nodes. All nodes run the Linux distribution Ubuntu 18.04 (ubuntu/bionic64) in a VirtualBox virtual machine.

By default, the “Addon” features Kubernetes Dashboard and MetalLB load balancer are installed, along with an NGINX web server to demonstrate that the cluster is working properly after installation.

The Kubernetes Dashboard is deployed with role-based access control (RBAC) and token authentication. The installation instructions provide commands for accessing the dashboard from the local system the cluster is installed on.

The default private internal network 172.16.35.0/24 will be created and nodes are assigned a static address starting at 172.16.35.100 for the master. The nodes can be accessed using the following command when run from the same directory the vagrant up command was executed from during installation.
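For example, using Vagrant’s built-in ssh wrapper (nothing beyond the standard vagrant CLI is assumed here):

  vagrant ssh NodeName
  # e.g. to log in to the master:
  vagrant ssh node1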

Replace NodeName with a VM hostname from Table 1.

Table 1. List of nodes and IP addresses

 

VM Hostname IP Address
node1 172.16.35.100
node2 172.16.35.101
node3 172.16.35.102

If more than two worker nodes are created, the pattern continues: node4 would have IP 172.16.35.103, and so forth. Note that each node's /etc/ssh/sshd_config file has been modified to allow ssh login via the “private network”, the 172.16.35.0 network.

Cluster provisioning scripts for the master and worker nodes are embedded in the Vagrantfile. These are fairly straightforward bash shell scripts: $masterscript and $workerscript. Check the echo statements in the code to understand the operations.
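For orientation only, here is a condensed, assumed sketch of the key operations those scripts perform, using the default variable values from the tables below; the actual scripts in the Vagrantfile are authoritative:

  # $masterscript (condensed sketch): initialize the control plane with the fixed
  # token and pod CIDR, then install the Flannel overlay network
  sudo kubeadm init --token 03fe0c.e57e7831b69b2687 \
       --apiserver-advertise-address 172.16.35.100 \
       --pod-network-cidr 10.244.0.0/16
  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

  # $workerscript (condensed sketch): join the cluster using the same token
  sudo kubeadm join 172.16.35.100:6443 --token 03fe0c.e57e7831b69b2687 \
       --discovery-token-unsafe-skip-ca-verification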

Users should edit the variables as needed. Note: there is a requirement to provide a unique token value for KUBETOKEN. Do not skip the Minikube prerequisite, as it is required for generating the token.

Currently Flannel is the only network overlay the provisioning script provides. If a different network overlay is desired, the embedded $masterscript can be edited.

Vagrantfile Customization

“Table 2. Variable Defaults” displays the default values for the variables defined in the Vagrantfile. These should be edited as prescribed in “Table 3. Variable Definitions”. For Linux or macOS, use a command-line text editor like vi. For Windows, try Notepad++.

IMPORTANT: KUBETOKEN should be a uniquely generated value created using Minikube; instructions are provided in “Table 3. Variable Definitions” and the Cluster Installation Procedure below.

If making changes to the VM_SUBNET and NODE_OCTET values, check the “Add-ons” section for other required edits (or the add-ons may not work properly).

 Table 2. Variable Defaults

Variable Default Value
KUBETOKEN “03fe0c.e57e7831b69b2687”    Note: replace with unique token from Minikube
VM_SUBNET 172.16.35.
NODE_OCTET 100
MASTER_IP #{VM_SUBNET}#{NODE_OCTET}
POD_NTW_CIDR 10.244.0.0/16
BOX_IMAGE ubuntu/bionic64
NODE_COUNT 2
CPU 1
MEMORY 1024

Table 3. Variable Definitions

Variable Definition
KUBETOKEN Generate a unique token with Minikube per the Cluster Installation procedure, then copy and paste the value into the Vagrantfile, replacing the default.
VM_SUBNET Default is “172.16.35.”. Change accordingly if the default creates an IP conflict with the local machine. Do not overlap with POD_NTW_CIDR.
NODE_OCTET Default is 100. The master (node1) will get 100, node2 101, node3 102, etc.
MASTER_IP Default is VM_SUBNET + NODE_OCTET.
POD_NTW_CIDR Default is "10.244.0.0/16". This value is required for Flannel to run.
BOX_IMAGE Default is "ubuntu/bionic64". Changing the OS value may require script changes.
NODE_COUNT Default is 2. Set the desired number of worker nodes.
CPU Default is 1. Recommend at least 2 if the system has the resources.
MEMORY Default is 1024 (MB). Recommend at least 2048 if the system has the resources.

NOTE: If changing VM_SUBNET or NODE_OCTET, be sure to check the “Add-ons” section, as the IP changes will require edits to the layer2.config-yaml configuration file for MetalLB.

Cluster Installation Procedure

Step 1

Generate a unique KUBETOKEN value from the Minikube VM.
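One assumed way to do this is shown below; it relies on the kubeadm binary being reachable inside the Minikube node, and the openssl fallback simply produces a string in the same format (six characters, a dot, then sixteen characters):

  minikube start
  minikube ssh -- kubeadm token generate   # assumes kubeadm is on the node's PATH; prints e.g. 03fe0c.e57e7831b69b2687
  minikube stop

  # Fallback if kubeadm is not reachable inside the Minikube node:
  echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"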

Open a terminal session on the local machine and download the repository from https://github.com/ecorbett135/kubernetes-dev-cluster.git
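Using the Git client installed as a prerequisite:

  git clone https://github.com/ecorbett135/kubernetes-dev-cluster.git
  cd kubernetes-dev-cluster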

Step 2

Ensure all variables have been edited to desired values and KUBETOKEN is a uniquely generated token value.

Install and configure the cluster from the local machine.
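From within the cloned project directory (the directory name is assumed to match the repository name):

  vagrant up    # provisions node1 (master) and the worker nodes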

Once the installation completes, the final lines of output come from the provisioning script and indicate the build has finished.

Step 3

Log in using ssh with a port forward, check node status, start the proxy and get the dashboard token (copy it to paste into the web browser).
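A sketch of the commands involved; the dashboard service-account name used in the describe secret line (admin-user) is an assumption and should be adjusted to whatever the provisioning script actually creates:

  # SSH to the master with a local port forward for the dashboard proxy
  vagrant ssh node1 -- -L 8001:127.0.0.1:8001

  # On the master: confirm all nodes report Ready
  kubectl get nodes

  # On the master: print the dashboard login token (service-account name assumed)
  kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

  # On the master: start the proxy so the dashboard is reachable through the forwarded port
  kubectl proxy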

Windows Hint: Git comes with a bash shell

General Hint: the vagrant user password is vagrant. Change it using the passwd command from within the VM.

Step 4

Copy the token from the kubectl describe secret output in the previous step to the clipboard.

From the local machine the VMs are running on, enter the following URL in a web browser:

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Now select the Token radio button and paste the token copied from the terminal session into the provided field.

Cluster Administration Tips

Note that the master node’s vagrant user .bashrc is configured with some aliases. These are shortcuts to increase efficiency by shortening frequently typed kubectl commands, as illustrated in the sketch below.
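For illustration only; the actual aliases are defined in the provisioning script and may differ, and ks is assumed here to abbreviate kubectl -n kube-system:

  # Assumed examples of the kind of aliases added to ~/.bashrc on the master
  alias k='kubectl'
  alias ks='kubectl -n kube-system'

  # Instead of the full command:
  kubectl -n kube-system get pods

  # use the ks alias:
  ks get pods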

Stopping and Restarting the Cluster

It is important to note that VM management should be done with Vagrant, not directly from the VirtualBox console. This is because a vm.synced_folder option is used in the Vagrantfile, and this folder will not mount properly to the VMs if they are started outside of Vagrant.

  1. To stop the cluster, run vagrant halt from the local machine, from within the directory the cluster was deployed from (see the example below).

  2. To start the cluster, run vagrant up from the local machine, from within the same directory.
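For example, from the project directory on the local machine:

  vagrant halt    # gracefully stop all cluster VMs
  vagrant up      # start (or resume) the cluster VMs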

 

Add-ons

Load Balancer

MetalLB is a load balancer application for Kubernetes, primarily designed for bare-metal K8s installs. MetalLB installs by default; if installation is not desired, it can be commented out by editing the Vagrantfile and putting a # at the beginning of the four lines that install MetalLB.

 

For this project a simple Layer 2 configuration deployment is sufficient. By default the load balancer has a defined IP pool range of 172.16.35.240 – 172.16.35.250.

Changing the default VM_SUBNET will require editing the address range in the file below to match the new subnet value. The range will also need to change if NODE_OCTET falls within the 240 – 250 range.

Contents of ~/kubernetes-dev-cluster/addon/metallb/layer2.config-yaml
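The file ships with the repository; a representative MetalLB Layer 2 ConfigMap using the default address pool would look similar to the following sketch (the exact contents of the repository’s file may differ):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: config
  data:
    config: |
      address-pools:
      - name: default
        protocol: layer2
        addresses:
        - 172.16.35.240-172.16.35.250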

 

Web Server Deployment

An NGINX web server is deployed with a “LoadBalancer” service configuration by default. It can typically be accessed at the first IP address in the range defined in layer2.config-yaml, which by default is http://172.16.35.240.
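To confirm the web server and load balancer are working, the assigned external IP can be checked from the master node (the exact service name will be whatever the deployment manifest defines; check the kubectl output):

  kubectl get svc -o wide        # look for a TYPE of LoadBalancer and its EXTERNAL-IP
  curl http://172.16.35.240      # default first address in the MetalLB pool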

The index.html is loaded from the ~/kubernetes-dev-cluster/addon/metallb/ directory on the local machine. This file can be customized by the user.

Thank You

Thanks to Steve Carlton, Chris Moore and Gary Pratt from the Navisite OSS team for their input.

Thanks to Javid Azadzoi from Navisite Messaging team for his objective view and coaching.

Thanks to Scott Hyslep from Navisite ACS team for validating the process on the Windows Platform.