Install Kubernetes NFS CSI with Fluxcd

Overview

Hello there, Kubernetes explorers and “home-labbers”. Here I’ll be covering the steps to install the Kubernetes NFS CSI driver with FluxCD, or in other words, I will deploy the NFS CSI driver on Kubernetes using the FluxCD instance that is already running on the same cluster. This post follows a home-lab series on how I manage some Kubernetes and self-hosting stuff in my home lab (setup ha k3s cluster LINK, deploy fluxcd on kubernetes LINK).

Why am I doing this? It’s simple: I want dynamic storage provisioning and dynamic PVC/PV volume expansion on my cluster. This will make storage management easier, plus my Proxmox nodes currently don’t have extra storage space, so I’ll be offloading storage to my NAS. The NAS runs its storage in RAID, so I get redundancy too.

Requirements:

  • an NFS server with a shared folder and the correct permissions set on it,
  • network connectivity between the Kubernetes cluster and the NFS server,
  • the Kubernetes NFS CSI driver.

NFS shared folder setup

On our NFS server we must have a shared folder that will be used for the CSI driver only, and we must set public NFS permissions on it too. I have a Synology NAS that I’ll be using for the Kubernetes storage, and in my case the permissions look like this:

(Screenshot: Synology NFS share permission settings)

With a regular NFS server running on Linux, the same permissions would look like this:

sudo mkdir -p /srv/nfs/data
sudo chown nobody:nogroup /srv/nfs/data
sudo chmod 777 /srv/nfs/data

sudo nano /etc/exports
/srv/nfs/data 192.168.0.0/24(rw,sync,no_root_squash,no_subtree_check,anonuid=0,anongid=0,insecure)
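
After editing /etc/exports, the export needs to be applied and is worth verifying from the server side; a quick check could look like this:

sudo exportfs -ra        # re-read /etc/exports and apply the changes
sudo exportfs -v         # list active exports with their options
showmount -e localhost   # confirm /srv/nfs/data is actually shared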

These NFS settings and permissions are of course neither the safest nor the best, but this is the initial requirement of the NFS CSI driver so it can take ownership of the shared volume. For everyone else, I recommend at least restricting the export to the IP addresses of your worker nodes. I left it as is since this is all on my local LAN and I don’t expose anything online yet.
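
For example, a slightly tighter export that only allows specific worker nodes (the IPs below are just placeholders for your own node addresses) could look like this:

/srv/nfs/data 192.168.0.11(rw,sync,no_root_squash,no_subtree_check) 192.168.0.12(rw,sync,no_root_squash,no_subtree_check)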

Deploy NFS CSI Driver on Kubernetes

Luckily, this CSI driver is available as a Helm chart, which is great since I can use FluxCD to automatically install and provision the NFS CSI driver on my cluster, configure it, and create the necessary storage class. I created a folder in my repo, named it nfs-csi-driver, and added three files to it:

apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: csi-driver-nfs
  namespace: kube-system
spec:
  interval: 24h
  url: https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts

The repository.yaml file with the repository URL of the Helm chart for the NFS CSI driver.

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: csi-driver-nfs
  namespace: kube-system
spec:
  interval: 30m
  chart:
    spec:
      chart: csi-driver-nfs
      version: "4.9.0"
      sourceRef:
        kind: HelmRepository
        name: csi-driver-nfs
        namespace: kube-system
      interval: 48h

The release.yaml file that I’ll be using for the Helm chart configuration (spec, values, environment variables, etc.).
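
I’m not overriding any chart values here, but if needed they would go under spec.values in the same HelmRelease. The key below is only an illustration, so double-check the exact names against the chart’s values.yaml:

spec:
  # ...same chart spec as above...
  values:
    # illustrative override - verify the key names in the chart's values.yaml
    externalSnapshotter:
      enabled: false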

The next component is the StorageClass. This is what enables dynamic PVC/PV provisioning.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.0.240
  share: /volume1/K8S
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4

Note: It’s important to set the NFS version here to the same one your server uses; the NFS versions on both ends need to match, otherwise the mount will fail. Also, since I use Synology, the share path is specific to Synology; on yours it will probably be different if you’re running the NFS server on a Linux machine. In that case the share path will be the one defined in the NFS exports file.
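
A quick way to sanity-check both ends, assuming a Linux NFS server and SSH access to a worker node (IP and share path taken from my examples above):

# on the NFS server: list the NFS protocol versions it has enabled
sudo cat /proc/fs/nfsd/versions

# from a worker node (needs nfs-common/nfs-utils installed): try a manual mount
# with the same options as the StorageClass, then clean up
sudo mount -t nfs -o hard,nfsvers=4 192.168.0.240:/volume1/K8S /mnt
sudo umount /mnt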

Once the files are ready, it’s time to commit and reconcile the cluster with Flux, and then double-check that the driver and storage class are provisioned on the cluster.
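
Something like the following does the trick; the Kustomization name (flux-system here) depends on how your Flux repository is structured:

# trigger a reconciliation instead of waiting for the interval
flux reconcile kustomization flux-system --with-source

# the CSI driver pods should be running in kube-system
kubectl get pods -n kube-system | grep csi-nfs

# and the new storage class should show up
kubectl get storageclass nfs-csi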

(Screenshots: NFS CSI driver pods in kube-system and the nfs-csi storage class on the cluster)

So far so good.

Testing deployments with NFS storage

Now let’s see if the storage class and dynamic PVC provisioning work by deploying an app/service. I’ll be using Grafana for testing. In the application’s persistence values, I need to set the storageClassName value to the name of the NFS CSI storage class, which for me is nfs-csi.

persistence:
  enabled: true
  type: pvc
  storageClassName: nfs-csi
  accessModes:
    - ReadWriteMany
  size: 4Gi
initChownData:
  enabled: false

(Screenshot: the full Grafana release.yaml file)
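
For context, a trimmed sketch of what such a HelmRelease could look like; the namespace, chart version, and HelmRepository name below are assumptions for illustration, not taken from my actual file:

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: grafana
  namespace: monitoring        # assumed namespace
spec:
  interval: 30m
  chart:
    spec:
      chart: grafana
      version: "8.x"           # illustrative - pin a real chart version
      sourceRef:
        kind: HelmRepository
        name: grafana          # assumes a HelmRepository pointing at the Grafana charts repo
        namespace: monitoring
  values:
    persistence:
      enabled: true
      type: pvc
      storageClassName: nfs-csi
      accessModes:
        - ReadWriteMany
      size: 4Gi
    initChownData:
      enabled: false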

After the Flux reconciliation, Grafana should be updated with the PVC and PV created, and on my Synology I should have a PV volume created and populated with Grafana’s files.
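
To confirm this from the cluster side (the namespace is whatever Grafana is deployed to; I’m using monitoring as an example):

# the PVC should be Bound and the PV provisioned by nfs.csi.k8s.io
kubectl get pvc -n monitoring
kubectl get pv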

End result:

PVC and PV on the cluster


Grafana volume on the NAS and volume contents


And there it is.

Some key takeaways to keep in mind

The perspective I want to point out is this: even though this setup works and is fine for me for now, since I’m limited on hardware at the moment, it will also suffice for anyone else building a home lab.

But this is not an ideal solution, nor is it a production-ready setup for Kubernetes. Why? Because the main bottleneck and limitation will be your network and your drives. I currently have a 1 Gbit LAN and mechanical HDDs in my NAS, so the performance won’t be the best, but it will be enough for me; Kubernetes loves fast storage and a fast network.

In an ideal scenario, the best hardware choice would be SSDs for storage (SSDs at a minimum, or a better drive type) and at least a 10G network. Production-ready storage implementations for Kubernetes would be something like a Longhorn storage cluster (with each Kubernetes node having access to an SSD), Ceph, vSAN by VMware, or StarWind, or it can even go over the network with NFS or iSCSI, but again, for the best performance: SSDs and a 10G network.

Summary

Quick recap: I installed the NFS CSI driver on the Kubernetes cluster, managed by Flux; the NFS CSI provisioner handles the PVC and PV provisioning; I gained dynamic storage provisioning and expansion; and lastly, the data is stored on the NAS with hard drive redundancy. I would say it’s a decent setup for a home lab.

Thank you for your time.