Single-node Kubernetes Installation

Installing Kubernetes on a single node for easier management and deployment of applications.

Shipping containers being moved in a shipyard.

Note to the reader: this post has been sitting in my drafts for the best part of a year, if not more. I've polished it up a bit to get it somewhere that might be useful for others and decided to publish it just to get it done.


Kubernetes (abbreviated as K8s) is an open source tool that facilitates automated maintenance and deployment of applications. In less technical terms, it allows me to quickly set up and publish the various projects that I work on, and until recently my main website and several Discord bots were all hosted using K8s, along with several other services I run for friends.

As I mentioned in my last post, I had started to teach myself about Kubernetes and the ecosystem around it. I've been using Docker and docker-compose to manage many of my deployments for a while now, but I've been interested in putting a more resilient and complete solution in place. This post will largely document my trials and errors, and any resources I found useful. I mean this mostly to be useful for self-reference, but if it ends up helping anyone else, I'm glad I could do that.

Typically a Kubernetes cluster would span multiple nodes, but unfortunately I don't have the hardware to build that kind of system, so a lot of this is me testing whether a single node can do what I'd like. Much of the reference material comes from this article by James McDonald (it provided heavy inspiration for this post), as well as from the Kubernetes documentation. To be fully honest, at the start of this experimentation I had zero experience with K8s, so this article is liable to contain incorrect information.

As an overview of what I'm attempting to get running: the final product will be an Ubuntu 20.04 server running Kubernetes with networking powered by Flannel. Applications set up to test the server will include static HTTP servers, a Node.js-powered API backend (math-api in this case), and a Discord bot that interfaces with a database.

Initial Setup

I started with a base Ubuntu 20.04 server image, running in VirtualBox. I wanted to get a feel for whether this is something I'd want to manage my server with, and this seemed like the easiest way. As always, I started off by updating with apt update and apt upgrade.

Installing Docker and Kubernetes

The next step was to install Docker. I followed the official instructions, slightly modified for my needs, and followed McDonald's article for configuration. Note: these instructions are correct to the best of my current understanding, but there may be more up-to-date information by the time you read this, so make sure to check for yourself.

sudo mkdir -p /etc/docker

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/docker.list
deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable
EOF

sudo apt update
sudo apt install -y --no-install-recommends docker-ce
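
To sanity-check the install, it's worth confirming that Docker is running and picked up the systemd cgroup driver from daemon.json (a quick check I'd suggest; output may vary slightly by Docker version):

sudo systemctl enable --now docker
sudo docker info --format '{{.CgroupDriver}}'
# should print "systemd"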

From there, installing K8s goes smoothly.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
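
If those sysctls don't take effect, the br_netfilter kernel module may not be loaded yet. The Kubernetes install docs load it explicitly, which is probably worth doing here too:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter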

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
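
One preflight requirement worth flagging: kubeadm refuses to initialize a node with swap enabled. If your server image has swap configured, something along these lines should take care of it (the sed pattern assumes standard space-separated fstab entries):

sudo swapoff -a
# comment out any swap entries so the change survives a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab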

Flannel requires that we set --pod-network-cidr to its default subnet when initializing the cluster.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
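
Once kubeadm finishes, it prints instructions for pointing kubectl at the new cluster as a regular user; they boil down to copying the admin kubeconfig into place:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config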

Once that's complete, we'll install Flannel for networking.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

And finally, untaint the master node so we can run pods on it (by default, kubeadm taints the control-plane node so that regular workloads aren't scheduled there).

kubectl taint nodes --all node-role.kubernetes.io/master-
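
At this point a quick check should show the node reporting Ready (once the Flannel pods come up) and the system pods running:

kubectl get nodes
kubectl get pods --all-namespaces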

We'll need an ingress controller to direct traffic to the different services, so we'll also install ingress-nginx.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
cat > nginx-host-networking.yaml <<EOF
spec:
  template:
    spec:
      hostNetwork: true
EOF
kubectl -n ingress-nginx patch deployment nginx-ingress-controller --patch="$(<nginx-host-networking.yaml)"
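
Running the controller with hostNetwork: true means it binds directly to ports 80 and 443 on the node itself, which sidesteps the need for a cloud LoadBalancer on a single-node setup. After the patch, the controller pod should restart and show the node's own IP:

kubectl -n ingress-nginx get pods -o wide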

With that, we should have a fairly complete (if barebones) setup for Kubernetes.


Testing

To test things out, I initially created a simple nginx container. I used xip.io[1] to configure the ingress for the container: set the Ingress hostname to a domain that embeds the server's IP (xip.io resolved any such domain to that IP), navigate to the domain in your browser, and you're done.
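
I no longer have the exact manifests I used, but a minimal sketch looks something like the following. The names are placeholders, 203.0.113.10 stands in for the server's IP, and depending on your Kubernetes and controller versions you may need the older extensions/v1beta1 Ingress API instead:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-nginx
spec:
  selector:
    app: hello-nginx
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: hello.203.0.113.10.xip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-nginx
                port:
                  number: 80

Apply that with kubectl apply -f, and http://hello.203.0.113.10.xip.io should serve the nginx welcome page.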

Things went well, and pretty soon I had some other services running alongside it: a Discord bot, as well as a Minecraft server. At the time of writing and editing I don't have much more to add here, as unfortunately I've lost my record of most of the initial testing I did.

Summing things up

In the end, I was really happy with how my node functioned. I was able to get my required services running, new deployments were super easy, and it felt nice to be working with a tool like Kubernetes.

I did run into several issues mostly unrelated to K8s, though K8s definitely complicated the remediation process. At one point the IP address assigned to my server was rerolled, which created a very painful situation. Eventually the issues began to compound until I decided to just recreate the node entirely - a nuclear solution, but it resolved things in less time than I had already spent trying to fix them otherwise.

Disk failure and other storage problems also threw some interesting issues into the mix, with K8s's resource-exhaustion recovery kicking in and eventually evicting every service I was running.

All in all, while this is definitely not a resilient setup, and not one I'd recommend for any K8s deployment that requires scaling and failover, I do think it was very useful for getting more familiar with K8s and the different ways it can be used.


1. xip.io appears to have been dead for the greater part of 2021; similar services such as nip.io also exist.