Reading Time: 11 minutes

What is Kubernetes?


In this article, we review what Kubernetes and Kubeadm are, how to install them, how to create a cluster, and how to set up worker nodes using Kubeadm. If you are not yet familiar with Kubernetes, we recommend reading our article on the fundamental basics of Kubernetes.

Kubernetes (or K8s as it is informally known) is an open-source system used for automating deployments, scaling, and management of containerized applications. Some benefits of Kubernetes include:

  • Automatic deployment and rollback of systems. Kubernetes gradually makes changes to an application or its configurations while monitoring its health to make sure it doesn't destroy all the instances simultaneously. If something goes wrong, Kubernetes will roll back the changes for us.
  • Service discovery and load balancing. Kubernetes gives each Pod its own IP address, provides a single DNS name for a set of Pods, and can distribute the load between them.
  • Storage orchestration. Users can automatically mount local or cloud storage systems.
  • Automatic bin packing. Kubernetes automatically places containers based on their resource requirements and other constraints. The better the containers and their resources are allocated, the better the overall system performance.
  • Secret and configuration management. Apart from services, Kubernetes can manage application configuration and sensitive settings.
  • Self-healing. Kubernetes monitors the state of containers, and if something goes wrong, it replaces them with new containers. Containers that break are recreated automatically.
  • Kubeadm. It automates the installation and configuration of Kubernetes components, including the API server, Controller Manager, and Kube DNS.

What is Kubeadm?


Kubeadm is a toolkit, shipped with the Kubernetes distribution, that provides a best-practice method for building a cluster on existing infrastructure.

It carries out the steps needed to get a minimum viable cluster up and running.

Prerequisites

Server Configurations

To install Kubernetes, we use three servers with the following system specifications.

  • Master server - 4 CPU cores and 4096 MB of RAM
  • Worker1 server - 2 CPU cores and 4096 MB of RAM
  • Worker2 server - 2 CPU cores and 4096 MB of RAM

All servers use Ubuntu 18.04, and each uses the following IP address and hostname.

  • Master server - IP 192.168.50.58
  • Worker1 server - IP 192.168.50.38
  • Worker2 server - IP 192.168.50.178

Update

The first task is to ensure all of our packages are updated on each server.

root@host:~# apt update && apt upgrade -y

Add Host File Info

Now we will log in to each machine and append the node entries to the /etc/hosts configuration file using this command.

root@host:~# tee -a /etc/hosts <<EOF
> 192.168.50.58 master
> 192.168.50.38 worker1
> 192.168.50.178 worker2
> EOF
192.168.50.58 master
192.168.50.38 worker1
192.168.50.178 worker2
root@host:~#

Now we can verify the changes in the hosts file.

root@host:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 host
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.50.58 master
192.168.50.38 worker1
192.168.50.178 worker2
root@host:~#
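
As an optional sanity check, we can make sure the new names resolve by pinging each node from the others, for example from the Master server:

root@host:~# ping -c 1 worker1
root@host:~# ping -c 1 worker2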

Disable Swap Memory

For kubelet to work correctly, it is essential to disable swap memory on each server. Swap uses the hard drive's paging space to temporarily store data when there is not enough room in RAM, and by default kubelet will not start while swap is enabled. We accomplish this using the following commands.

root@host:~# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
root@host:~# swapoff -a
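
As a quick check, free should now report zero swap on each server:

root@host:~# free -h | grep -i swap
Swap:            0B          0B          0B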

Install Kubelet, Kubeadm, and Kubectl

Next, we select the components we will install. In this case, we will begin with the following elements.

  • Kubelet - This is the system service that runs on every node and manages the containers and Pods scheduled to that node.
  • Kubeadm - The command-line tool that installs and configures various cluster components.
  • Kubectl - This is a command-line tool used to send commands to the cluster via the API. It also makes working with commands in the terminal easier.

Add Repositories

Using the following commands, we will add an additional repository to the package manager and verify its signing key. We begin by installing apt-transport-https so that apt can securely fetch packages from external HTTPS sources, and then we install kubelet, kubeadm, and kubectl. We demonstrate the installation on the Master server first; the same steps are repeated on each server. First, we install apt-transport-https and curl.

root@host:~# apt-get update && apt-get install -y apt-transport-https curl

Next, we add the Kubernetes repository and the key to verify that everything is installed securely.

root@host:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
root@host:~#

Now, we will add our repositories.

root@host:~# echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
root@host:~#

Install

We can now update the server again to recognize our new repository and then install the packages.

root@host:~# apt update
root@host:~# apt -y install vim git curl wget kubelet kubeadm kubectl

Hold Package Versions

We now place the packages on hold so that apt does not upgrade them automatically. Note that until kubeadm initializes the cluster, kubelet restarts every few seconds in a holding loop, waiting for further instructions; this is expected.

root@host:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
root@host:~#
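
If we later decide to upgrade the cluster components, the hold can be removed first, for example:

root@host:~# apt-mark unhold kubelet kubeadm kubectl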

Verify Installation

Now we check our installed components version using the following command.

root@host:~# kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
root@host:~#

Configure Firewall

Next, we configure the kernel so that iptables can see traffic crossing the network bridge. This change is vital because iptables (the server's default firewall) must be able to examine the traffic that passes between Pods over the bridge. We start by loading the overlay and br_netfilter kernel modules.

root@host:~# modprobe overlay
root@host:~# modprobe br_netfilter
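
Note that modprobe only loads these modules for the current boot. To ensure overlay and br_netfilter are loaded again after a reboot, we can also list them in a modules-load.d file, for example:

root@host:~# tee /etc/modules-load.d/k8s.conf <<EOF
> overlay
> br_netfilter
> EOF
overlay
br_netfilter
root@host:~#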

In the Kubernetes sysctl configuration, we set these parameters to 1, which tells the kernel to pass bridged traffic to iptables and enables IP forwarding.

root@host:~# tee /etc/sysctl.d/kubernetes.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
root@host:~#

Now, reload sysctl.

root@host:~# sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/kubernetes.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...
root@host:~#

Install Docker


Docker is a container runtime product that uses OS-level virtualization to launch our containers. These containers are separated from one another and contain the software, libraries, and configuration files needed to run an application. The next step is to add the repositories to the package manager and check the keys. As a reminder, we must complete these tasks on each server.

To begin, we again run a quick update using apt. Then we install curl and the gnupg2 package and add the Docker GPG key. Finally, we add the Docker repository, update the server again, and install containerd.io and docker-ce (Community Edition).

root@host:~# apt update
root@host:~# apt install -y curl gnupg2
root@host:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
OK
root@host:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
root@host:~# apt update
root@host:~# apt install -y containerd.io docker-ce

Create Directories and Configurations

Next, we create a systemd drop-in directory for Docker and write a daemon configuration that sets the systemd cgroup driver, JSON log options, and the overlay2 storage driver.

root@host:~# mkdir -p /etc/systemd/system/docker.service.d
root@host:~# tee /etc/docker/daemon.json <<EOF
> {
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
root@host:~#

Restart Docker

Now, we reload, restart, and then enable the Docker daemon.

root@host:~# systemctl daemon-reload
root@host:~# systemctl restart docker
root@host:~# systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
root@host:~#
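
Since kubelet and the container runtime should agree on the cgroup driver, we can optionally confirm that Docker picked up the systemd setting from daemon.json. The command below should print systemd.

root@host:~# docker info --format '{{.CgroupDriver}}'
systemd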

Verify Docker Status

Now we can verify that Docker is up and running.

root@host:~# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset:
Active: active (running) since Fri 2020-10-23 19:31:23 +03; 1min 6s ago
Docs: https://docs.docker.com
Main PID: 16856 (dockerd)
Tasks: 13
CGroup: /system.slice/docker.service
└─16856 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Oct 23 19:31:23 host dockerd[16856]: time="2020-10-23T19:31:23.570268584+03:00"
Oct 23 19:31:23 host dockerd[16856]: time="2020-10-23T19:31:23.570272311+03:00"
Oct 23 19:31:23 host dockerd[16856]: time="2020-10-23T19:31:23.570275888+03:00"
Oct 23 19:31:23 host dockerd[16856]: time="2020-10-23T19:31:23.570389541+03:00"
Oct 23 19:31:23 host dockerd[16856]: time="2020-10-23T19:31:23.626550911+03:00"
Oct 23 19:31:23 host dockerd[16856]: time="2020-10-23T19:31:23.650814097+03:00"
Oct 23 19:31:23 host dockerd[16856]: time="2020-10-23T19:31:23.661596420+03:00"
Oct 23 19:31:23 host dockerd[16856]: time="2020-10-23T19:31:23.661656042+03:00"
Oct 23 19:31:23 host dockerd[16856]: time="2020-10-23T19:31:23.671493768+03:00"
Oct 23 19:31:23 host systemd[1]: Started Docker Application Container Engine.
lines 1-19/19 (END)

Create the Master Server

To begin, we need to ensure that the br_netfilter module is loaded using the following command.

root@host:~# lsmod | grep br_netfilter
br_netfilter 28672 0
bridge 176128 1 br_netfilter
root@host:~#

The br_netfilter kernel module is needed to enable bridged traffic between the Kubernetes pods across the cluster. It allows members of the cluster to appear as if directly connected to each other.

Start Kubelet

Next, we will enable kubelet and pre-pull the container images for the K8s management components, such as etcd (the cluster database) and the API server.

root@host:~#  systemctl enable kubelet
root@host:~#  kubeadm config images pull
[config / images] Pulled k8s.gcr.io/kube-apiserver:v1.19.3
[config / images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.3
[config / images] Pulled k8s.gcr.io/kube-scheduler:v1.19.3
[config / images] Pulled k8s.gcr.io/kube-proxy:v1.19.3
[config / images] Pulled k8s.gcr.io/pause:3.2
[config / images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config / images] Pulled k8s.gcr.io/coredns:1.7.0
root@host:~# 

Create Cluster

Now we will use the following parameters to create a cluster using the kubeadm command.

  • --pod-network-cidr — Sets the CIDR (Classless Inter-Domain Routing) range from which Pod IP addresses are allocated.
  • --control-plane-endpoint — Sets a shared endpoint for all control-plane nodes, used when building a high-availability cluster.

root@host:~# kubeadm init \
> --pod-network-cidr=10.0.0.0/16 \
> --control-plane-endpoint=master
W1023 21:29:58.178002 9474 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [host kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.50.58]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host localhost] and IPs [192.168.50.58 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host localhost] and IPs [192.168.50.58 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.004870 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node host as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node host as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: bf6w4x.t6l461giuzqazuy2
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To begin using our cluster, we need to allow and configure our user to run kubectl.

root@host:~# mkdir -p $HOME/.kube
root@host:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@host:~# chown $(id -u):$(id -g) $HOME/.kube/config
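
Alternatively, for the root user, the admin kubeconfig can be referenced directly instead of copied; this only lasts for the current shell session unless added to the shell profile.

root@host:~# export KUBECONFIG=/etc/kubernetes/admin.conf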

We should now deploy a Pod network to the cluster by running kubectl apply -f [podnetwork].yaml with one of the options listed at https://kubernetes.io/docs/concepts/cluster-administration/addons/. We install Calico for this purpose later in this tutorial.

Now we can connect any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following command as root.

root@host:~# kubeadm join master:6443 --token bf6w4x.t6l461giuzqazuy2 \
--discovery-token-ca-cert-hash sha256:8d0b3721ad93a24bb0bb518a15ea657d8b9b0876a76c353c445371692b7d064e \
--control-plane

The next command allows us to join any number of worker nodes by running the following on each as root.

root@host:~# kubeadm join master:6443 --token bf6w4x.t6l461giuzqazuy2 \
--discovery-token-ca-cert-hash sha256:8d0b3721ad93a24bb0bb518a15ea657d8b9b0876a76c353c445371692b7d064e
root@host:~#

Discovery Token

The final output of the kubeadm init command includes a unique token and certificate hash needed to add other nodes. If we need to add more Master (control-plane) servers, we can do so using this command.

kubeadm join master:6443 --token bf6w4x.t6l461giuzqazuy2 \
--discovery-token-ca-cert-hash sha256:8d0b3721ad93a24bb0bb518a15ea657d8b9b0876a76c353c445371692b7d064e \
--control-plane
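
Bootstrap tokens expire after 24 hours by default, so the join command above eventually stops working. If we need to join a node later, we can generate a fresh token and a ready-to-use join command on the Master.

root@host:~# kubeadm token create --print-join-command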

Configure Kubectl

Now we can begin configuring kubectl using the following commands.

root@host:~# mkdir -p $HOME/.kube
root@host:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@host:~# chown $(id -u):$(id -g) $HOME/.kube/config

Check Cluster Status

Next, we check the status of the cluster.

root@host:~# kubectl cluster-info
Kubernetes master is running at https://master:6443
KubeDNS is running at https://master:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@host:~#

Install Calico

We will now install and configure the Calico plugin. Calico is a networking and network policy plugin that secures communication between containers, virtual machines, and host-based workloads. We begin the installation using the following command.

root@host:~# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
root@host:~#
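
Optionally, we can wait for the calico-node daemonset (created by the manifest above) to finish rolling out before checking the rest of the system Pods:

root@host:~# kubectl rollout status daemonset/calico-node -n kube-system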

Verify Running Pods

This command allows us to watch the running Pods in all namespaces.

root@host:~# watch kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-7d569d95-wfzjp 1/1 Running 0 2m52s
kube-system calico-node-jd5l6 1/1 Running 0 2m52s
kube-system coredns-f9fd979d6-hb4bt 1/1 Running 0 7m43s
kube-system coredns-f9fd979d6-tpbx9 1/1 Running 0 7m43s
kube-system etcd-host 1/1 Running 0 7m58s
kube-system kube-apiserver-host 1/1 Running 0 7m58s
kube-system kube-controller-manager-host 1/1 Running 0 7m58s
kube-system kube-proxy-gvd5x 1/1 Running 0 7m43s
kube-system kube-scheduler-host 1/1 Running 0 7m58s
root@host:~#

Verify Configuration

The last check we do is to verify that the Master server is ready and available.

root@host:~# kubectl get nodes -o wide
NAME STATUS ROLES  AGE VERSION INTERNAL-IP   EXTERNAL-IP OS-IMAGE           KERNEL-VERSION   CONTAINER-RUNTIME
host Ready  master 46m v1.19.3 192.168.50.58 <none>      Ubuntu 18.04.5 LTS 5.4.0-52-generic docker://19.3.13
root@host:~#

Create Worker Nodes

After setting up the Master, we add worker nodes to the cluster so that K8s can distribute the load. To add the Workers, we run the join command with the token we received earlier when creating the cluster.

Add Worker 1

root@host-node1:~# kubeadm join master:6443 --token bf6w4x.t6l461giuzqazuy2 \
> --discovery-token-ca-cert-hash sha256:8d0b3721ad93a24bb0bb518a15ea657d8b9b0876a76c353c445371692b7d064e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver, and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@host-node1:~#

Add Worker 2

root@host-node2:~# kubeadm join master:6443 --token bf6w4x.t6l461giuzqazuy2 \
> --discovery-token-ca-cert-hash sha256:8d0b3721ad93a24bb0bb518a15ea657d8b9b0876a76c353c445371692b7d064e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver, and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@host-node2:~#

Verify Cluster Members

Now on the Master server, run the following command to check if both Workers 1 and 2 have been added to the cluster.

root@host:~# kubectl get nodes
NAME          STATUS ROLES  AGE   VERSION
host          Ready  master 55m   v1.19.3
host-worker-1 Ready  <none> 4m48s v1.19.3
host-worker-2 Ready  <none> 3m5s  v1.19.3
root@host:~#
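
The worker nodes join without a role label, which is why the ROLES column shows <none> for them. Purely for readability, we can label them ourselves; the node-role.kubernetes.io/ prefix is what kubectl uses to populate that column, and this step is optional.

root@host:~# kubectl label node host-worker-1 node-role.kubernetes.io/worker=worker
root@host:~# kubectl label node host-worker-2 node-role.kubernetes.io/worker=worker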

Add Application

Next, we deploy a test application to the cluster to confirm that it runs.

root@host:~# kubectl apply -f https://k8s.io/examples/pods/commands.yaml
pod/command-demo created
root@host:~#

Verify Pod Status

Finally, we use the following command to check and see if the pod has started.

root@host:~# kubectl get pods
NAME         READY STATUS    RESTARTS AGE
command-demo 0/1   Completed 0        30s
root@host:~#
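
The command-demo Pod from this example manifest runs a single command and then exits, which is why its status shows Completed rather than Running. To see what it printed, we can read its logs:

root@host:~# kubectl logs command-demo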

This completes our installation and configuration of Kubernetes.

Conclusion

In this tutorial, we have learned how to install and configure Kubernetes in a production environment. We also demonstrated how to use kubelet, kubeadm, and kubectl, and how to configure the admin user.

