K3S with HAproxy Ingress and Metal-LB

Oren Oichman
10 min read · Jul 28, 2024


Keep it small & keep it simple

In many cases I have come across developers and PaaS admins who wanted a small Kubernetes cluster to start working with, one that runs with a very small footprint… and for free.
While many vanilla Kubernetes distributions exist in the open source world, I personally like to deploy a cluster with a very small footprint and then add to it everything I need.

Components

For me, there is one Kubernetes distribution that gives me a very good solution for my requirements, even if I need to go rogue to get it configured the way I need it to be.
The components of this cluster are:

  • K3s for the cluster itself
  • Metal-LB to provide services with an external IP address
  • HAProxy Ingress, which will be my ingress controller
  • NFS StorageClass as my main storage class

In the following tutorial I will show how to deploy the OS, quickly deploy the cluster, and install all of its components (plus some additional tools which I want available to me).

NOTE!!!
Before we begin, please make sure all the following commands are run as a non-root user. If it's on your own PC use your username; if it is on a remote machine, create a dedicated user for the following tasks.

Deployment

Operating System

For the operating system I will use Debian, which provides a ready-made KVM (qcow2) cloud image.

First, download the image from here:

# wget https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-generic-amd64.qcow2

Now that our image is in place, let's update the root password of the new VM. But first we need to install the guestfs tools with the following command:

# dnf install -y guestfs-tools virt-install

Now let’s run the following command :

# virt-customize -a debian-11-generic-amd64.qcow2 --root-password password:<pass>
[ 0.0] Examining the guest ...
[ 11.6] Setting a random seed
virt-customize: warning: random seed could not be set for this type of
guest
[ 11.7] Setting the machine ID in /etc/machine-id
[ 11.7] Setting passwords
[ 12.3] SELinux relabelling
[ 12.8] Finishing off

Once we've updated the root password, let's increase the size of our root ("/") file system.

First we need to look at the current filesystem table and the size of each partition in the original image:

# virt-filesystems --long --parts --blkdevs -h -a debian-11-generic-amd64.qcow2
Name        Type        MBR   Size   Parent
/dev/sda1   partition   -     1.9G   /dev/sda
/dev/sda14  partition   -     3.0M   /dev/sda
/dev/sda15  partition   -     124M   /dev/sda
/dev/sda    device      -     2.0G   -

In this tutorial we want a 30G disk for our root file system, so we first need to create a new image file:

# qemu-img create -f qcow2 -o preallocation=metadata \
debian-11-generic-amd64-new.qcow2 30G

Now let's copy the old image to the new one while expanding the appropriate partition (assuming our root partition from the previous step was /dev/sda1):

# virt-resize --expand /dev/sda1 debian-11-generic-amd64.qcow2 debian-11-generic-amd64-new.qcow2
[ 0.0] Examining debian-11-generic-amd64.qcow2
**********

Summary of changes:

virt-resize: /dev/sda14: This partition will be left alone.

virt-resize: /dev/sda15: This partition will be left alone.

virt-resize: /dev/sda1: This partition will be resized from 1.9G to 29.9G.
The filesystem ext4 on /dev/sda1 will be expanded using the ‘resize2fs’
method.

**********
[ 2.2] Setting up initial partition table on debian-11-generic-amd64-new.qcow2
[ 12.9] Copying /dev/sda14
[ 12.9] Copying /dev/sda15
[ 13.0] Copying /dev/sda1
100%
[ 16.6] Expanding /dev/sda1 (now /dev/sda3) using the ‘resize2fs’ method

virt-resize: Resize operation completed with no errors. Before deleting
the old disk, carefully check that the resized disk boots and works
correctly.

Once the process has completed, let's rename the new disk image to the old name:

# mv debian-11-generic-amd64-new.qcow2 debian-11-generic-amd64.qcow2 

Now that the disk preparation steps are complete, we can define and run the VM with the following command:

# virt-install --name k3s-server \
--memory 4096 \
--vcpus 2 \
--boot bootmenu.enable=on,bios.useserial=on \
--disk $(pwd)/debian-11-generic-amd64.qcow2,bus=sata \
--import \
--os-variant debian11 \
--nographics \
--network network='default'

After the VM has booted, log in to it and make sure your root password works.

Once we are logged in, we can make a few important changes to the OS so it is ready for the K3s deployment: setting the hostname, making sure we use a static IP address, and pointing at a DNS server that will resolve the application URLs.

For a static IP address (in my case 192.168.122.10, with 192.168.122.1 as both the gateway and the DNS server) I will run the following command:

# cat >> /etc/network/interfaces << EOF
auto lo
iface lo inet loopback

# The primary network interface
auto enp1s0
iface enp1s0 inet static
address 192.168.122.10
netmask 255.255.255.0
gateway 192.168.122.1
dns-domain k3s-dom.local
dns-nameservers 192.168.122.1
EOF

Set the hostname and make sure the /etc/hosts file has the right records for the server IP address:

# hostnamectl set-hostname k3s-server.k3s-dom.local
# echo '127.0.0.1 localhost.localdomain localhost' > /etc/hosts
# echo '192.168.122.10 k3s-server.k3s-dom.local k3s-server' >> /etc/hosts
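
To confirm the hostname and the hosts record took effect, these quick checks should print the FQDN we just set and the entry we just added:

# hostname -f
# getent hosts k3s-server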

SSH Fix

There is an issue with the ssh service: it needs host keys in order to start, and the image does not include them. To fix this we will go to the /etc/ssh directory, generate the needed keys, and enable the ssh service:

# cd /etc/ssh/
# ssh-keygen -A
# systemctl enable --now ssh

To be able to log in as the root user, we will copy the public key of our workstation to the authorized_keys file on the server:

# mkdir /root/.ssh/
# chmod 700 /root/.ssh/
# cat > /root/.ssh/authorized_keys
< COPY the public key here>
CTRL+d
# chmod 600 /root/.ssh/authorized_keys

Now shut down the server so the changes take effect:

# poweroff

For the final step of the OS part, let's re-register the domain so it appears in our user's domain list, with the following commands:

# virsh --connect qemu:///session dumpxml k3s-server > /tmp/k3s-server.xml
# virsh --connect qemu:///session undefine k3s-server
# virsh define /tmp/k3s-server.xml

Now, to list the machine, run:

# virsh list --all
 Id   Name         State
------------------------------
 -    k3s-server   shut off

Boot the machine and make sure you can log in with ssh.

# virsh start k3s-server
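
Before moving on, it is worth verifying from the workstation that key-based root login works, using the static IP we configured earlier:

# ssh root@192.168.122.10 hostname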

Tools

For the tools part we will download helm, oc and kubectl so it will be easy to work with our cluster once we are logged in.

Log in to the server:

# ssh root@k3s-server

First let’s download helm:

# wget https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/helm/3.14.4/helm-linux-amd64
# mv helm-linux-amd64 /usr/local/bin/helm && \
chmod a+x /usr/local/bin/helm

Now download the oc and kubectl binaries :

# wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.12/openshift-client-linux.tar.gz
# tar -zxvf openshift-client-linux.tar.gz -C /usr/local/bin/
# rm -f openshift-client-linux.tar.gz
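
A quick sanity check that all three binaries are in place and executable:

# helm version --short
# kubectl version --client
# oc version --client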

Up until now we have set up everything we needed for the server. In the next part we will start with the Kubernetes deployment and configuration.

Kubernetes

The command to deploy k3s is very simple. It's a single line telling the installer to deploy k3s without the bundled ingress controller (Traefik) and without the built-in service load balancer:

# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh \
-s - --disable=traefik --disable servicelb

Before we do anything else, we need to make sure that we can log in to the k3s Kubernetes cluster before we continue with the rest of the installation:

# mkdir ~/.kube && chmod 700 ~/.kube
# cp /var/lib/rancher/k3s/server/cred/admin.kubeconfig ~/.kube/config
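
With the kubeconfig in place, a quick check confirms the API server answers and our single node is Ready:

# kubectl get nodes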

Metal-LB

Metal-LB is a component that gives us a way to assign external IP addresses to services within the Kubernetes cluster. (For more information about it, please read here.)

In order to install Metal-LB, use the following command:

# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml

Once the process is complete, let's create the Metal-LB CR which defines a pool of IP addresses that services can be exposed on:

# cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.122.30-192.168.122.40
EOF

And make sure we are advertising the right IP address pool:

# cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
EOF

Now, to add the HAproxy ingress controller, we will use the helm binary we installed earlier. Just before that, let's see which pods are running on the server:

# kubectl get pod -A
NAMESPACE        NAME                                     READY   STATUS    RESTARTS   AGE
kube-system      coredns-6799fbcd5-7vqgg                  1/1     Running   0          152m
kube-system      local-path-provisioner-6f5d79df6-4w747   1/1     Running   0          152m
kube-system      metrics-server-54fd9b65b-mcbqh           1/1     Running   0          152m
metallb-system   controller-77676c78d9-cn5z5              1/1     Running   0          6m6s
metallb-system   speaker-fktkx                            1/1     Running   0          6m6s

Create a new namespace for the haproxy ingress controller :

# kubectl create ns haproxy-ingress

and set it as the current namespace:

# kubectl config set-context --current --namespace=haproxy-ingress

Let's run helm to deploy the ingress controller:

# helm repo add haproxytech https://haproxytech.github.io/helm-charts
# helm repo update
# helm upgrade -i haproxy haproxytech/kubernetes-ingress
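
Before touching the service, verify that the controller pods came up in the haproxy-ingress namespace (which is already our current namespace):

# kubectl get pods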

To make the ingress controller's service work with Metal-LB, and thus give it the IP address we want, we need to make a few adjustments to the service: add an annotation and change its type. For that we can dump its configuration to a file, delete the service, and recreate it with the new configuration.

# kubectl get svc/haproxy-kubernetes-ingress -o yaml > /tmp/ingress-svc.yaml

With your favorite editor, make sure the annotations section has a reference to the external IP address we want it to use:

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: haproxy
    meta.helm.sh/release-namespace: haproxy-ingress
    metallb.universe.tf/loadBalancerIPs: 192.168.122.30

And at the bottom, make sure the type is "LoadBalancer" and not NodePort:

  selector:
    app.kubernetes.io/instance: haproxy
    app.kubernetes.io/name: kubernetes-ingress
  sessionAffinity: None
  type: LoadBalancer

Save the changes, delete the old service, and apply the new file:

# kubectl delete svc/haproxy-kubernetes-ingress
# kubectl apply -f /tmp/ingress-svc.yaml

Run kubectl get svc and see the service with the new changes :

# kubectl get svc
NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                                                   AGE
haproxy-kubernetes-ingress   LoadBalancer   10.43.4.142   192.168.122.30   80:32392/TCP,443:31349/TCP,443:31349/UDP,1024:32054/TCP,6060:32114/TCP   7m34s
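
We can already check that the controller answers on the external IP. Since no Ingress resources exist yet, the controller's default backend is expected to reply with a 404:

# curl -i http://192.168.122.30/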

For the last step, all we need to do is add the external IP address as a sub-interface on our server (this part is relevant only if it is not handled automatically by Metal-LB):

# cat >> /etc/network/interfaces << EOF
iface enp1s0 inet static
address 192.168.122.30
netmask 255.255.255.0
dns-nameservers 192.168.122.1
EOF

Now reboot the Server and make sure the IP addresses are available after boot :

# reboot

StorageClass

Regarding the storage class, the K3s cluster comes with a built-in storage class called "local-path". This is good for "ReadWriteOnce" requirements. In case we want a storage class that supports "ReadWriteMany", we can add an NFS storage class (a very good article about an NFS StorageClass in OpenShift can be found here).

Log back into the Server:

# ssh root@k3s-server

Create a new directory under /var named "nfsshare" and give it "rwx" permissions for everyone:

# mkdir /var/nfsshare && chmod 777 /var/nfsshare

Install the NFS packages on the server:

# apt update && apt install nfs-kernel-server git -y

Configure the /etc/exports file to export our directory and make sure it’s exported :

# cat > /etc/exports << EOF
/var/nfsshare *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=0)
EOF
# exportfs -arv
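
To double-check the export, showmount (installed alongside the NFS packages) should list /var/nfsshare:

# showmount -e localhost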

Testing

On another machine (we can also run the test on the same machine), make sure you have the NFS client package installed ("nfs-common" on Debian, "nfs-utils" on RHEL-based systems) and mount the directory from the server:

$ mount -t nfs 192.168.122.10:/var/nfsshare /mnt
$ touch /mnt/1 && rm -f /mnt/1

If the commands were successful and you can create and delete a file, we are good to go!
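
Don't forget to unmount the test mount before moving on:

$ umount /mnt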

NFS subdir external provisioner is an automatic provisioner that uses your existing, already configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.

Note: This repository is migrated from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. As part of the migration:

  • The container image name and repository have changed to k8s.gcr.io/sig-storage and nfs-subdir-external-provisioner respectively.
  • To maintain backward compatibility with earlier deployment files, the naming of NFS Client Provisioner is retained as nfs-client-provisioner in the deployment YAMLs.

Namespace
We will begin by creating the namespace:

# kubectl create ns kube-nfs-storage

Clone the git repository:

#  git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git nfs-subdir

Updates :

Before we deploy the storageClass we will need to make a few configuration modifications.

Switch to the new directory and namespace:

# cd nfs-subdir
# kubectl config set-context --current --namespace=kube-nfs-storage

and run the following :

# NAMESPACE="kube-nfs-storage"
# sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
# oc create -f deploy/rbac.yaml

In the "deploy/deployment.yaml" file, let's make the following changes to the original configuration, from:

          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: <YOUR NFS SERVER HOSTNAME>
            - name: NFS_PATH
              value: /var/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR NFS SERVER HOSTNAME>
            path: /var/nfs

to:

          env:
            - name: PROVISIONER_NAME
              value: storage.io/nfs
            - name: NFS_SERVER
              value: <YOUR NFS SERVER HOSTNAME>
            - name: NFS_PATH
              value: /var/nfsshare
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR NFS SERVER HOSTNAME>
            path: /var/nfsshare

Let's create a class.yaml file for the StorageClass:

# echo 'apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: storage.io/nfs
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: delete' > deploy/class.yaml

And deploy the rest of the files :

# oc create -f deploy/deployment.yaml -f deploy/class.yaml

Testing

To test your new storage class, the repository comes with two simple manifests we can use (a test claim and a test pod). Just run the following:

# oc create -f deploy/test-claim.yaml -f deploy/test-pod.yaml

Now check your NFS server for the file SUCCESS.
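
On the NFS server, the file should appear under the directory the provisioner created for the claim (with the pathPattern above, that is <namespace>/<pvc-name>). If the test claim stays Pending, check that deploy/test-claim.yaml references the storage class name we created (managed-nfs-storage):

# ls -R /var/nfsshare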

To delete the test pod and claim, just run:

# oc delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml

In case you want to use your newly created storage class, you can use the following PVC as an example:

# cat > deploy/pvc.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF

and create it

# oc create -f deploy/pvc.yaml
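
The claim should reach the Bound state within a few seconds:

# kubectl get pvc test-claim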

In our case this StorageClass is not the only StorageClass in the cluster.
For this scenario we want to make it the default StorageClass by adding an annotation to it (and removing the default flag from local-path):

# kubectl patch storageclass managed-nfs-storage -p \
'{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
# kubectl patch storageclass local-path -p \
'{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

Now you have a fully functioning k3s cluster… go ahead and deploy any application you wish and point its hostname at the ingress service's external IP address (in our example it's 192.168.122.30), as in the sketch below.
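
Here is a minimal sketch of that last step. The application name "hello", the nginx image and the hello.k3s-dom.local hostname are just placeholders, and the ingress class name "haproxy" is assumed to be the haproxytech chart's default (you can confirm it with kubectl get ingressclass):

# cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx          # placeholder application image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  namespace: default
spec:
  ingressClassName: haproxy    # assumed default IngressClass from the chart
  rules:
  - host: hello.k3s-dom.local  # placeholder hostname in our domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
EOF

Point hello.k3s-dom.local at 192.168.122.30 in your DNS (or /etc/hosts) and the application should answer on that IP:

# curl -H 'Host: hello.k3s-dom.local' http://192.168.122.30/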

If you have any questions, feel free to respond / leave a comment.
You can find me on linkedin at : https://www.linkedin.com/in/orenoichman
Or twitter at : https://twitter.com/ooichman
