Add shared storage to MicroK8s Kubernetes cluster

A key feature of any Kubernetes cluster is its ability to schedule workloads across the different compute nodes based on many factors, but primarily to ensure availability in the case of a node failure.

This task is “easy” when working with stateless workloads (compute-only apps, or ones that depend on an external storage solution or database), since these stateless apps can be recreated anywhere in the cluster without depending on the availability of a specific storage drive or node.

For storage-dependent workloads, the ones that use a persistent volume, the challenge is making that specific storage location available across the different compute nodes.

This is where CSI (Container Storage Interface) drivers come into play.

By default, any Kubernetes cluster can create persistent volumes using local storage or host path drivers. But these storage classes create the corresponding volumes on the compute node’s local storage. So once the application or container is first created and the persistent volume claim is triggered, the corresponding persistent volume is created on the same compute node where the container is running. From that moment on, that container will always be scheduled on that same compute node, restricting the dynamic allocation capability previously mentioned.
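
You can check which storage classes are currently defined in the cluster (on MicroK8s the hostpath addon typically provides one backed by local node storage):

$ microk8s kubectl get storageclass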

To avoid this, you have to implement some kind of external storage solution and define the corresponding storage class, so persistent volumes can be created on the external storage and attached to containers regardless of which compute node they are running on.

In a previous post I created an NFS share using a RAID 1 disk array; I’ll be using this NFS share as the external storage solution for my cluster.

In order to install the CSI driver for NFS, first make sure to enable the Helm package manager:

$ microk8s enable helm3
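
If you want to confirm Helm is available before continuing, you can check its version:

$ microk8s helm3 version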

Then add the corresponding chart repository and update the repo index:

$ microk8s helm3 repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
$ microk8s helm3 repo update
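
Optionally, you can search the repo to check which chart versions are available:

$ microk8s helm3 search repo csi-driver-nfs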

Now install the CSI NFS driver:

$ microk8s helm3 install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
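
To confirm the release was installed, you can list the Helm releases in that namespace (the exact output depends on the chart version):

$ microk8s helm3 list --namespace kube-system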

Check that the corresponding pods are running on each compute node:

$ microk8s kubectl --namespace=kube-system get pods

NAME                                         READY   STATUS    RESTARTS   AGE
hostpath-provisioner-76f65f69ff-5qvkh        1/1     Running   0          171m
calico-node-jsch9                            1/1     Running   0          164m
calico-node-csbmd                            1/1     Running   0          165m
coredns-66bcf65bb8-srgkv                     1/1     Running   0          172m
calico-kube-controllers-66f8dbf5f5-chqwr     1/1     Running   0          173m
metrics-server-5f8f64cb86-2xjdr              1/1     Running   0          163m
dashboard-metrics-scraper-6b6f796c8d-spbmc   1/1     Running   0          162m
kubernetes-dashboard-748c844d5f-cspbq        1/1     Running   0          68m
csi-nfs-node-9gpwx                           3/3     Running   0          2m10s
csi-nfs-node-7zfkq                           3/3     Running   0          2m11s
csi-nfs-controller-75d6c9589d-kjn5h          3/3     Running   0          2m11s

So there are a couple of NFS node pods and an NFS controller. The final validation is to list the available CSI drivers in the cluster:

$ microk8s kubectl get csidrivers

NAME             ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
nfs.csi.k8s.io   false            false            false             <unset>         false               Persistent   3m20s
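
If you want more detail about the driver, you can also describe it:

$ microk8s kubectl describe csidriver nfs.csi.k8s.io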

Now it’s time to create the new storage class, so Kubernetes can provision volumes using the driver. Create the following YAML file (nfs-storage-class.yml):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: XXX.XXX.XXX.XXX
  share: /media/ssdraid1/nfs-share-dir
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1

Change the values of server and share in parameters to match your NFS share. Apply this configuration to the cluster:

$ microk8s kubectl apply -f nfs-storage-class.yml
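
The new storage class should now show up in the cluster:

$ microk8s kubectl get storageclass nfs-csi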

At this point everything is ready from a configuration standpoint. So let’s run a test by creating a persistent volume claim that uses the new storage class. Here is the corresponding YAML (test-nfs-pvc.yml):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfc-pvc
spec:
  storageClassName: nfs-csi
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi

Now apply the configuration and check for the newly created PVC:

$ microk8s kubectl apply -f test-nfs-pvc.yml

$ microk8s kubectl get pvc

NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-nfc-pvc   Bound    pvc-efbb60b4-1fdc-4cda-8b38-40fea77c4192   5Gi        RWO            nfs-csi        37s
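
Behind the scenes a persistent volume was dynamically provisioned on the NFS share; you can list it with:

$ microk8s kubectl get pv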

Everything looks fine. Now every time you need to deploy a workload that uses persistent volumes, refer to the new storage class so NFS will be used. This guarantees that the workload can be deployed and redeployed on any compute node, as long as that node has access to the NFS share.
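
As an example, here is a minimal Pod that consumes the test PVC created above (the pod name and image are just placeholders for this sketch); Kubernetes will mount the NFS-backed volume on whichever node the pod gets scheduled to:

apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod          # placeholder name
spec:
  containers:
    - name: app
      image: busybox          # placeholder image, any workload would do
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-nfc-pvc   # the PVC created in the test above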

Let me know if you were able to implement this solution as described, along with any issues you find or improvements you’d suggest.

Cheers!

