July 30, 2019

Deploy and Manage Applications on a Kubernetes Cluster

What is Kubernetes?

Kubernetes is a container orchestration technology originally developed at Google to manage containerized applications across different kinds of environments: physical, virtual, and cloud infrastructure. It is an open source system that helps you deploy and manage containerized applications.

Prerequisites

We assume anyone who wants to follow along has an understanding of Docker, how Docker images are created, and how they work as standalone units. If you want to find out more about Kubernetes, what you can do with it, and why you would use it, a quick search will turn up plenty of good reading material.

Here I use kubeadm to bootstrap the Kubernetes cluster.

Content

  • Install Docker,
  • create the cluster (on a Virtual Machine),
  • build the Dockerized application,
  • deploy the application on a Kubernetes cluster,
  • install Helm,
  • deploy the application using a Helm Chart, and
  • expose the application to the internet using a Kubernetes Ingress.

0.0 Preparation

In my setup I have three Virtual Machines (VMs) running RHEL 7. All the inbound and outbound firewall rules are under my control. These VMs are called “k8s-master”, “k8s-node-1”, and “k8s-node-2”.

192.168.1.10   k8s-master
192.168.1.11   k8s-node-1
192.168.1.12   k8s-node-2

As an optional step, I set the hostname on each of the nodes.

# on master node
hostnamectl set-hostname 'k8s-master'
exec bash
# on worker node 01
hostnamectl set-hostname 'k8s-node-1'
exec bash
# on worker node 02
hostnamectl set-hostname 'k8s-node-2'
exec bash

Update the iptables settings so that traffic crossing the Linux bridge is passed to the iptables chains, which Kubernetes networking relies on.

echo '1' | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
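
This setting does not survive a reboot, and the /proc path above only exists once the br_netfilter module is loaded. One way to make both persistent is sketched below; the file names under /etc/modules-load.d and /etc/sysctl.d are my own choice, not something this setup requires.

# load the bridge netfilter module now and on every boot
sudo modprobe br_netfilter
echo 'br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf

# persist the bridge sysctl settings across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system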

Turn swap off, and remove the swap entry from /etc/fstab so it stays off after a reboot (Kubernetes requires swap to be disabled).

sudo swapoff -a
sudo sed -i '/ swap /d' /etc/fstab

You can temporarily disable SELinux by changing its mode from “enforcing” to “permissive”.

sudo setenforce 0

To permanently disable SELinux on your RHEL/CentOS 7 system, open /etc/selinux/config and set SELINUX to disabled.

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#       targeted - Targeted processes are protected,
#       mls - Multi Level Security protection.
SELINUXTYPE=targeted

Save the file and reboot the system for the change to take effect.
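
Alternatively, instead of opening the file in an editor, a one-line edit does the same thing (a sketch that assumes SELINUX is currently set to enforcing); reboot afterwards as above.

# switch SELinux from enforcing to disabled in the persistent config
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config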

1.0 Install Docker-CE

Install the required packages.

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker-CE repository.

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker-CE 17.03.2.

yum install -y --setopt=obsoletes=0 docker-ce-17.03.2.ce

Start and enable the Docker service.

systemctl enable docker && systemctl start docker

Confirm Docker is installed and running properly.

[root@k8s-master ~]# docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue July 2 10:21:36 2019
 OS/Arch:      linux/amd64
Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue July 2 10:21:36 2019
 OS/Arch:      linux/amd64
 Experimental: false

2.0 Create the Kubernetes Cluster

Do the steps in this section on every node. If you don't have your own DNS server, also add the following entries to the /etc/hosts file on each node (an example of appending them is shown after the list).

192.168.1.10   k8s-master
192.168.1.11   k8s-node-1
192.168.1.12   k8s-node-2
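
A minimal way to append those entries, assuming the same addresses as in the preparation section:

cat <<EOF | sudo tee -a /etc/hosts
192.168.1.10   k8s-master
192.168.1.11   k8s-node-1
192.168.1.12   k8s-node-2
EOF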

Add the Kubernetes repository.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install the required Kubernetes packages.

yum install -y kubelet kubeadm kubectl
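
The cluster in this walkthrough ends up on Kubernetes v1.13.1. If you would rather pin the packages to that version than take whatever is latest, yum accepts a versioned package name (a sketch; adjust the version to whatever you are targeting):

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1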

Enable and start the Kubelet service.

systemctl enable kubelet && systemctl start kubelet

Check that the kubelet is installed and enabled. Don't be alarmed that it shows up as failing or restarting at this stage: the kubelet has no cluster configuration yet and will keep restarting until kubeadm init provides one.

[root@k8s-master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Tue 2019-01-01 20:19:01 +0530; 9s ago
     Docs: https://kubernetes.io/docs/
  Process: 1365 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 1365 (code=exited, status=255)
Jan 01 20:19:01 k8s-master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jan 01 20:19:01 k8s-master systemd[1]: Unit kubelet.service entered failed state.
Jan 01 20:19:01 k8s-master systemd[1]: kubelet.service failed.
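
If you want to see what the kubelet is complaining about while it waits for a configuration, you can follow its logs through journald:

# follow the kubelet service logs
sudo journalctl -u kubelet -f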

Run the kubeadm init command on the master node.

kubeadm init --pod-network-cidr=10.244.0.0/16

This produces output similar to the following.

[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.26.30.233 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.26.30.233 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.26.30.233]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.502868 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6lrhwg.vg3whkvxhx0z2cow
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

The init command also prints a join token and the full join command. You can copy it now, but don't worry if you lose it: we can regenerate it later. At this point, copy the admin kubeconfig into your home directory so that you can use kubectl as a regular user.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
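
To confirm that kubectl can reach the new control plane, a couple of quick checks:

# should print the API server and DNS endpoints
kubectl cluster-info

# the control-plane pods live in the kube-system namespace
kubectl get pods -n kube-system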

Now set up the Flannel pod network.

sudo sysctl net.bridge.bridge-nf-call-iptables=1

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Run the following on each node so that bridged traffic is handled correctly:

sudo sysctl net.bridge.bridge-nf-call-iptables=1
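
Once Flannel is applied and the sysctl setting is in place on every node, the Flannel pods should come up. Depending on the manifest version they land in either the kube-system or the kube-flannel namespace, so a broad check is easiest:

kubectl get pods --all-namespaces -o wide | grep flannel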

We need a join token to connect each worker node to the master. We can retrieve it by running the following command on the master.

sudo kubeadm token create --print-join-command

Then run the printed join command on each of the worker nodes, using your own API server address, token, and CA certificate hash; the values below are only examples.

sudo kubeadm join 10.68.17.50:6443 --token adnkja.hwixqd5mb5dhjz1f --discovery-token-ca-cert-hash sha256:8524sds45s4df13as5s43d3as21zxchaikas94

Now we have created our cluster. You can list the nodes with the kubectl get nodes command.

kubectl get nodes

The output should show three nodes: one with the master role and two without a role yet.

[root@k8s-master ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
k8s-master          Ready    master   4m      v1.13.1
k8s-node-1          Ready    <none>   5m9s    v1.13.1
k8s-node-2          Ready    <none>   1m27s   v1.13.1
[root@k8s-master ~]#

You can assign a worker role to each worker node by adding a label (replace node-name with the node's actual name).

kubectl label node node-name node-role.kubernetes.io/worker=worker
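
For example, for the two worker nodes in this setup:

kubectl label node k8s-node-1 node-role.kubernetes.io/worker=worker
kubectl label node k8s-node-2 node-role.kubernetes.io/worker=worker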

Now the get nodes command will show a worker role for both of the worker nodes.

[root@k8s-master ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
k8s-master          Ready    master   12m     v1.13.1
k8s-node-1          Ready    worker   10m     v1.13.1
k8s-node-2          Ready    worker   6m33s   v1.13.1

If I crammed every remaining step in here, this story would get far too long, so I've decided to split the rest of the content into a few more stories. In my next story I'll cover how to build the Dockerized application and deploy it onto the Kubernetes cluster.

By the way, check out our best AWS deal: http://avmconsulting.net/well-architected-review