Reading Time: 8 minutes

What is K3s?


K3s is a lightweight, certified Kubernetes distribution. It is highly available and designed for production workloads in unattended, resource-constrained, or remote locations, and on IoT appliances. The developers of K3s state that it is capable of almost everything that K8s can do.

So, what makes it such a lightweight distribution?

Memory usage is reduced by running many components within a single process, which eliminates the significant overhead that would otherwise be duplicated for each component. The binary is smaller because third-party storage drivers and cloud provider integrations have been removed. In short:

  • It requires less memory to run.
  • It ships as a small binary of roughly 40 MB that contains all the non-containerized components needed to start a cluster.

Features

K3s is a fully compatible K8s distribution with the following features:

  • It is packaged as a single binary file.
  • It uses a lightweight storage backend based on SQLite3 (a lightweight embedded database management system) as the default storage engine. etcd3, MySQL, and PostgreSQL are also supported.
  • K3s uses a simple launcher that handles many complex TLS duties and other functions.
  • Added features include a local storage provider, a service load balancer, a Helm controller (a packager that helps install and manage the lifecycle of K8s applications), and the Traefik Ingress controller (a Docker-enabled reverse proxy that provides a built-in dashboard).
  • All control components operate within a single binary file and process.
  • Almost all external dependencies have been minimized (only a modern kernel and cgroup mounts are required) to reduce size.

Prerequisites

  • Two nodes cannot share the same hostname, so plan a naming scheme that satisfies this requirement in advance.
  • If nodes already share the same hostname, we can use the --with-node-id flag to append a random suffix to each node's name. Otherwise, we must assign each node a unique name with the --node-name flag or the $K3S_NODE_NAME variable as it joins the cluster.
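For illustration, here is one way to build a unique value to pass as $K3S_NODE_NAME — a sketch that mirrors what the --with-node-id flag does automatically (the suffix scheme is our own, not prescribed by K3s):

```shell
# Derive a unique node name: the current hostname plus a short random
# suffix (8 hex characters read from /dev/urandom).
suffix=$(head -c 4 /dev/urandom | od -An -tx1 | tr -d ' \n')
export K3S_NODE_NAME="$(hostname)-${suffix}"
echo "$K3S_NODE_NAME"
```

Export this variable before running the installer on each node that would otherwise collide.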

Dependencies 

The K3s packages require the following dependencies to work:

  • containerd
  • Flannel
  • CoreDNS 
  • CNI (Container Network Interface) plugins
  • Host utilities (iptables, socat, etc.)
  • An Ingress controller (Traefik)
  • A built-in service load balancer
  • A built-in network policy controller

Internal Configuration

K3s Architecture

Compared to K8s, K3s draws no hard distinction between master nodes and worker nodes: workloads can be scheduled on and managed from any node. The master and worker designations therefore do not strictly apply to K3s.

In a K3s cluster, a node that runs the management components plus the Kubelet is called a server, while a node that runs only the Kubelet is called an agent. Both run a container runtime and a tunnel proxy that manages network traffic within the cluster. A typical K3s environment runs one server and multiple agents. If you pass a URL during installation, the node joins that cluster as an agent; otherwise, it starts another standalone K3s cluster with its own management components.

Another critical point is how cluster state is managed. K8s relies on etcd, an open-source distributed key-value store, to hold the cluster state; K3s replaces it by default with a lightweight SQLite database. A K8s control plane becomes highly available by running etcd on at least three nodes. SQLite, on the other hand, is not a distributed database, so it becomes the weakest link in the chain.
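If SQLite's single point of failure is a concern, the installer accepts a --datastore-endpoint flag that points K3s at an external database instead. A sketch, in which the MySQL host and credentials are placeholders rather than values from this tutorial:

```shell
# Placeholder endpoint -- substitute your own database host, credentials,
# and database name.
datastore_endpoint="mysql://k3s:changeme@tcp(10.0.0.5:3306)/kubernetes"

# Same installer as below, but started as a server that stores cluster
# state in the external database instead of the embedded SQLite file.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="${datastore_endpoint}"
```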

K3s Installation and Configuration

To deploy a K3s cluster with a master and two worker nodes, we need three servers running Ubuntu 18.04, each with at least 1 GB of RAM and one processor. One server will be used as the master and the other two as workers.

Note:
For our test servers, we have opened all ports; this should never be done in a production setting. At a minimum, port 6443 must be open on the master so the agents can reach the Kubernetes API server.
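If you prefer not to open everything, here is a minimal sketch using ufw (assuming ufw is your firewall; adjust for firewalld or raw iptables). The 10.42.0.0/16 pod network is Flannel's default range, visible in the iptables log output later in this article:

```shell
# Keep SSH reachable before enabling the firewall.
sudo ufw allow ssh

# Allow agents to reach the Kubernetes API server on the master.
sudo ufw allow 6443/tcp

# Allow traffic from the default Flannel pod network.
sudo ufw allow from 10.42.0.0/16 to any

sudo ufw enable
```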

Update Servers

First, update the master and each of the other servers using this command. The examples below are run on the master.

root@host-master:~# sudo apt update && sudo apt -y upgrade 

Now, we need to find the server's IP address on the private network. We can accomplish this using the following command.

root@host-master:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:35:cf:4e:e6:fe brd ff:ff:ff:ff:ff:ff
    inet 172.31.11.188/20 brd 172.31.15.255 scope global dynamic eth0
       valid_lft 3158sec preferred_lft 3158sec
    inet6 fe80::35:cfff:fe4e:e6fe/64 scope link
       valid_lft forever preferred_lft forever
root@host-master:~#

The master's IP is 172.31.11.188.

Now, locate the IPs of the other servers in the same manner. Be sure to add the private-network entries to the /etc/hosts file on each server.

root@host-master:~# sudo tee -a /etc/hosts<<EOF
> 172.31.11.188 k3s-master
> 172.31.10.103 k3s-worker1
> 172.31.10.103 k3s-worker2
> EOF
172.31.11.188 k3s-master
172.31.10.103 k3s-worker1
172.31.10.103 k3s-worker2
root@host-master:~#

Once again, as a reminder, these tasks need to be completed on every server so that each node can resolve the others' hostnames.

Install K3s on Master

Next, we can install K3s on the master server. There are many ways to do this, but this is the simplest method.

root@host-master:~# curl -sfL https://get.k3s.io | sh -
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
root@host-master:~#

Verify Installation

K3s should start automatically after the installation. Let's verify using this command.

root@host-master:~# systemctl status k3s
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-09-22 18:39:02 UTC; 29s ago
     Docs: https://k3s.io
  Process: 13245 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 13240 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
 Main PID: 13249 (k3s-server)
    Tasks: 78
   CGroup: /system.slice/k3s.service
           ├─13249 /usr/local/bin/k3s server
           ├─13285 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
           ├─13718 /var/lib/rancher/k3s/data/19bb6a9b46ad0013de084cf1e0feb7927ff9e4e06624685ff87f003c208fded1/bin/containerd-shim-runc-v2 -namespace k8s.io -id fbb9d4ec16b5e70b08ae9c968d4ae96f35b0d3328303c1
           ├─13720 /var/lib/rancher/k3s/data/19bb6a9b46ad0013de084cf1e0feb7927ff9e4e06624685ff87f003c208fded1/bin/containerd-shim-runc-v2 -namespace k8s.io -id da00cf5a839e4d0cc89849317ed96b3269ba78ce92108e
           ├─13721 /var/lib/rancher/k3s/data/19bb6a9b46ad0013de084cf1e0feb7927ff9e4e06624685ff87f003c208fded1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3271695d6f8cb7d38d86901faf719f840c236e236f9490
           ├─13723 /var/lib/rancher/k3s/data/19bb6a9b46ad0013de084cf1e0feb7927ff9e4e06624685ff87f003c208fded1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 23cdd690357129d8e0eb4a520a282c9428ce57ef359524
           ├─13809 /pause
           ├─13816 /pause
           ├─13822 /pause
           ├─13829 /pause
           ├─13944 /coredns -conf /etc/coredns/Corefile
           ├─13963 local-path-provisioner start --config /etc/config/config.json
           ├─13976 /metrics-server
           ├─14098 runc --root /run/containerd/runc/k8s.io --log /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/9f95dec13ba16620529007939afb598816a14073b175ca44c7e6ac8b7b0384b8/log.json --log-form
           └─14109 runc init

Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.601394 13249 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.601853 13249 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.603951 13249 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.606755 13249 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.607633 13249 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.612464 13249 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.612479 13249 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.612994 13249 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.614222 13249 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
Sep 22 18:39:17 ip-172-31-11-188 k3s[13249]: I0922 18:39:17.615668 13249 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT

Why is it more convenient to install K3s using this method?

Because it immediately installs the following programs for us:

  • kubectl - A handy CLI program for interacting with K3s via a console or terminal.
  • crictl - A program for communicating with containers and CRI-compatible container runtimes.
  • k3s-killall.sh - A bash script that stops all K3s containers and cleans up the network components.
  • k3s-uninstall.sh - A bash script that removes K3s along with its clusters, data, and scripts.

Configuration

The kubeconfig file is written to /etc/rancher/k3s/k3s.yaml. kubectl needs this file to connect to the cluster.

root@host-master:~# sudo cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFl3TURjNU9Ua3pOREFlRncweU1EQTVNakl4T0RNNE5UUmFGdzB6TURBNU1qQXhPRE00TlRSYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFl3TURjNU9Ua3pOREJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkdvU3pZQnBxWWduMzdTTnozZU9kOUgxcU1YSEpmNzljLzVOZm1xN2k3c1oKWWZ4TndPcXBkM3VtQ2NFLzl1MHgzYjVmUHRaR3g5RHUxK0RQTTQwYjlybWpJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSUc3eUtsTVFabmQ0ClVTRTJSbG5WdWlGeDJheCs4SjBkajcxY3BUTExlbUJ2QWlFQTl6L0tDZjAzR3poN3JCTjlaRnZOWWdydFBldkYKZDZBLzd5Q0RraEMwNlRZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 7dff7a3015f352089c2cd1bfe3048a3d
    username: admin
root@host-master:~#
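To run kubectl from a workstation instead of the master, a common approach (a sketch; the path and master IP are the ones from this tutorial, the local filename is our own choice) is to copy the kubeconfig and repoint it at the master's reachable address:

```shell
# Fetch the kubeconfig from the master to the local machine.
mkdir -p ~/.kube
scp root@k3s-master:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml

# Replace the loopback address with the master's real IP.
sed -i 's/127.0.0.1/172.31.11.188/' ~/.kube/k3s.yaml

# Point kubectl at the copied config.
kubectl --kubeconfig ~/.kube/k3s.yaml get nodes
```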

To join an agent to the cluster, we must pass the K3S_URL variable to the installer along with either K3S_TOKEN or K3S_CLUSTER_SECRET.

We will use the K3S_TOKEN variable; its value can be found in the following file.

root@host-master:~# sudo cat /var/lib/rancher/k3s/server/node-token
K10a5185782c494893a4efada19b97fca7fd0b1e628a8a9f70eb7d21413a2fa2a3b::server:56b49545f3bf8dd01810268ea2579e12
root@host-master:~#

Now, let's set our variables. Insert the token value from above, and substitute your own URL if it differs.

k3s_url="https://k3s-master:6443"
k3s_token="K10a5185782c494893a4efada19b97fca7fd0b1e628a8a9f70eb7d21413a2fa2a3b::server:56b49545f3bf8dd01810268ea2579e12"
curl -sfL https://get.k3s.io | K3S_URL=${k3s_url} K3S_TOKEN=${k3s_token} sh - 

Worker 1 Configuration

Now enter the following info into the worker1 server.

root@host-worker1:~# k3s_url="https://k3s-master:6443"
root@host-worker1:~#
root@host-worker1:~# k3s_token="K10a5185782c494893a4efada19b97fca7fd0b1e628a8a9f70eb7d21413a2fa2a3b::server:56b49545f3bf8dd01810268ea2579e12"
root@host-worker1:~#

It is important not to put any spaces before or after the equals sign; otherwise, the shell will not assign the value. Next, begin the installation.
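A quick way to confirm the assignment worked before piping the installer (the value is the URL from this tutorial):

```shell
# No spaces around '=' in shell assignments.
k3s_url="https://k3s-master:6443"

# ':?' aborts with an error message if the variable is unset or empty.
: "${k3s_url:?k3s_url is not set}"
echo "$k3s_url"
```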

root@host-worker1:~# curl -sfL https://get.k3s.io | K3S_URL=${k3s_url} K3S_TOKEN=${k3s_token} sh -
[INFO] Finding release for channel stable
[INFO] Using v1.18.8+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
root@host-worker1:~#

Worker 2 Configuration

Now we go to the worker2 server and repeat the entire process.

root@host-worker2:~# k3s_url="https://k3s-master:6443"
root@host-worker2:~#
root@host-worker2:~# k3s_token="K10a5185782c494893a4efada19b97fca7fd0b1e628a8a9f70eb7d21413a2fa2a3b::server:56b49545f3bf8dd01810268ea2579e12"
root@host-worker2:~#
root@host-worker2:~# curl -sfL https://get.k3s.io | K3S_URL=${k3s_url} K3S_TOKEN=${k3s_token} sh -
[INFO] Finding release for channel stable
[INFO] Using v1.18.8+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
root@host-worker2:~#

Verify Configuration

Return to the master server and verify that the cluster is up and working using this command.

root@host-master:~# sudo kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use the 'kubectl cluster-info dump' command.

root@host-master:~# kubectl cluster-info dump

Next, we can check our installed agents using the following command.

root@host-master:~# sudo kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k3s-master   Ready    master   10m     v1.18.8+k3s1
k3s-agent1   Ready    worker   3m3s    v1.18.8+k3s1
k3s-agent2   Ready    worker   2m12s   v1.18.8+k3s1
root@host-master:~#

Verify Running Containers

Now, let's check that the containers are up and running. 

Master server

root@host-master:~# sudo crictl ps
CONTAINER       IMAGE           CREATED          STATE     NAME                     ATTEMPT   POD ID
4dc55cbcce136   aa764f7db3051   18 minutes ago   Running   traefik                  0         778fac0e94f4d
b02177043c6aa   897ce3c5fc8ff   18 minutes ago   Running   lb-port-443              0         eae6017fa3438
838e79e984c34   897ce3c5fc8ff   19 minutes ago   Running   lb-port-80               0         eae6017fa3438
d212118979a23   9dd718864ce61   19 minutes ago   Running   metrics-server           0         23cdd69035712
102a4627832f3   9d12f9848b99f   19 minutes ago   Running   local-path-provisioner   0         da00cf5a839e4
512d452933667   4e797b3234604   19 minutes ago   Running   coredns                  0         3271695d6f8cb
root@host-master:~#

Worker1 server

root@host-worker1:~# sudo crictl ps
CONTAINER       IMAGE           CREATED          STATE     NAME          ATTEMPT   POD ID
2356fce5b5c8d   897ce3c5fc8ff   12 minutes ago   Running   lb-port-443   0         7a5f8c5198106
b68b8db0d827f   897ce3c5fc8ff   12 minutes ago   Running   lb-port-80    0         7a5f8c5198106
root@host-worker1:~#

Worker 2 server

root@host-worker2:~# sudo crictl ps
CONTAINER       IMAGE           CREATED          STATE     NAME          ATTEMPT   POD ID
472a69994eedc   897ce3c5fc8ff   19 minutes ago   Running   lb-port-443   0         ff52371139711
08efc4ccaa735   897ce3c5fc8ff   19 minutes ago   Running   lb-port-80    0         ff52371139711
root@host-worker2:~#
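As an optional last check (not part of the verification above), a throwaway deployment confirms that the cluster can actually schedule pods on the workers. The stock nginx image and the nginx-test name are arbitrary examples:

```shell
# Create a test deployment, scale it to two replicas, and watch
# where the pods land (-o wide shows the node column).
sudo kubectl create deployment nginx-test --image=nginx
sudo kubectl scale deployment nginx-test --replicas=2
sudo kubectl get pods -o wide

# Clean up afterwards.
sudo kubectl delete deployment nginx-test
```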

Conclusion

In this tutorial, we discussed how K3s is installed and configured. More importantly, K3s provides a quick, cost-effective way to scale out fully functional Kubernetes clusters and run multi-purpose applications economically.



About the Author: Margaret Fitzgerald

Margaret Fitzgerald previously wrote for Liquid Web.
