RWX with the NFS Ganesha Provisioner

26 May, 2023

Marc Zimmermann
Manager SaaS

Marc came to NETWAYS in 2021 and was promptly drafted. His journey into the world of IT began back in his youth, at first mostly with Windows and DOS, until a friend told him about this thing called "Linux". How should we put it, he has been hooked ever since. To this day 🙂


Does your application need to scale across multiple nodes for load balancing while all instances still access a common PVC? For that you need a PVC that is ReadWriteMany (RWX)-capable. As part of our Managed Kubernetes Cluster it is possible to create CSI Cinder block storage; due to technical limitations, however, this is only ReadWriteOnce (RWO)-capable. Using a practical example, we will show you how to create an RWX-capable PVC with a workaround! What PVCs are is explained in the tutorial Creating persistent volumes in Kubernetes, on which this tutorial builds.

Let’s take a web server, for example, which has HTML pages, images, JS etc. stored on its PVC. These files are to be uploaded once and made available to all instances of the web server at the same time. To distribute the load better, the deployment is scaled accordingly. This means that several pods of the same type run on different servers. To ensure that all pods – across host boundaries – have access to the files, we create an RWX storage class with the nfs-ganesha-server-and-external-provisioner. An NWS Managed Kubernetes Cluster serves as the basis for this.

After completing the workaround, we have the option of creating PVCs that can be read and written by several pods at the same time. A note in advance: the setup described is not HA-capable!

Prerequisites

The tutorial assumes the following:

  • NWS Managed Kubernetes Cluster
  • kubectl
  • Helm

You can find more information on these topics in our NWS Docs.

Setup

With the following command we add the Helm repository, which provides us with the nfs-ganesha-server-and-external-provisioner, hereinafter referred to as NFS Provisioner.

$ helm repo add nfs-ganesha-server-and-external-provisioner \
https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
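
Depending on how current your local Helm cache is, it may be worth refreshing the repository index before installing (a standard Helm command, not specific to this chart):

$ helm repo update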

We can then start the installation right away. Note the settings passed via the --set parameters. They have the following effect:

persistence.enabled=true

This parameter ensures that our data is stored on a persistent volume and is still available after restarting the pod.

persistence.storageClass=standard

Here you specify that the “standard” storage class should be used for the persistent data.

persistence.size=200Gi

This parameter specifies how large the PVC should be that the NFS Provisioner fetches for the files.

Please note that the PVC size specified with persistence.size is shared by all NFS PVCs that are provisioned via the NFS Provisioner. There are many other configuration options with which the NFS Provisioner can be adapted to your own needs. You can find them here.
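
Instead of passing everything with --set flags as in the command below, the same settings can also be collected in a small values file. This is only a minimal sketch: the nested keys simply mirror the --set paths used in this tutorial, and the file name values.yaml is our own choice.

# values.yaml
persistence:
  enabled: true
  storageClass: "standard"
  size: 200Gi

$ helm install nfs-server \
nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
-f values.yaml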

$ helm install nfs-server \
nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
--set persistence.enabled=true \
--set persistence.storageClass=standard \
--set persistence.size=200Gi

If the NFS Provisioner has been successfully installed, the following output appears:

NAME: nfs-server
LAST DEPLOYED: Mon May 22 14:41:58 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NFS Provisioner service has now been installed.
 
A storage class named 'nfs' has now been created and is available to provision dynamic volumes.
 
You can use this storageclass by creating a `PersistentVolumeClaim` with the correct storageClassName attribute. For example:
 
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-dynamic-volume-claim
    spec:
      storageClassName: "nfs"
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

Now let’s take a look at the NFS Provisioner Pod we just created and check whether it is running:

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          36m

And, of course, there is now a corresponding storage class for the automatic provisioning of NFS RWX PVCs, which we can use to dynamically create and use PVCs of type RWX:

$ kubectl get storageclass
NAME                  PROVISIONER                                       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
encrypted             cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m
encrypted-high-iops   cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m
high-iops             cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m
nfs                   cluster.local/nfs-server-nfs-server-provisioner   Delete          Immediate           true                   39m
nws-storage           cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m
standard (default)    cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m

Web server example

We can now dynamically provision PVCs with the RWX access mode. Let’s take a closer look at how this is integrated into a deployment. We create two files: one for the deployment of the web server and one for the RWX PVC:

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webserver
  name: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - mountPath: /files
          name: files
      volumes:
      - name: files
        persistentVolumeClaim:
          claimName: nfs-pvc

# nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs

Our deployment specifies that we want to run exactly one replica of the NGINX container, which mounts the dynamically created NFS PVC under /files. Now we feed both definitions to Kubernetes using kubectl:

$ kubectl apply -f nfs-pvc.yaml -f nginx-deployment.yaml
persistentvolumeclaim/nfs-pvc created
deployment.apps/webserver created
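
Before looking at the pods, it can be worth verifying that the claim was actually bound via the nfs storage class. The exact volume name will differ in your cluster, so we only show the command here:

$ kubectl get pvc nfs-pvc
# STATUS should show Bound, STORAGECLASS nfs and CAPACITY 5Gi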

As we can see, a web server pod is running:

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          54m
webserver-5486dd9cf5-hfhnd            1/1     Running   0          114s

However, as this is nowhere near sufficient for our load, we scale the deployment up to 4 replicas:

$ kubectl scale deployment webserver --replicas=4
deployment.apps/webserver scaled

A quick check is recommended and lo and behold, all 4 pods are running happily:

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          55m
webserver-5486dd9cf5-hfhnd            1/1     Running   0          3m9s
webserver-5486dd9cf5-nh9fl            1/1     Running   0          18s
webserver-5486dd9cf5-ss27f            1/1     Running   0          18s
webserver-5486dd9cf5-xl2lj            1/1     Running   0          18s

Now we check whether the NFS share also works correctly between the pods. A first command shows us that /files is empty. With a second command, we create the file nfs-is-rwx. The third output shows that we were successful: the file was created in one pod and was immediately visible in another.

$ kubectl exec webserver-5486dd9cf5-hfhnd -- ls /files
$ kubectl exec webserver-5486dd9cf5-nh9fl -- touch /files/nfs-is-rwx
$ kubectl exec webserver-5486dd9cf5-xl2lj -- ls /files
nfs-is-rwx
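
In this example the shared volume is mounted under /files purely for demonstration. If NGINX is actually supposed to serve the shared content, you could mount the same PVC at the image's default document root instead; a sketch of the changed volumeMounts section, with everything else staying as above:

        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: files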

Summary

You have now set up an NFS server that uses CSI Cinder block storage in the background to provide an RWX PVC for your pods via NFS. This works around the technical limitation of a CSI Cinder block device, and you can now use your NWS Managed Kubernetes Cluster for even more use cases.
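
If you want to remove the example afterwards, this is a sketch of the cleanup (the release name nfs-server matches the one used during installation; depending on the chart's persistence settings, the underlying 200Gi volume may need to be deleted separately):

$ kubectl delete -f nginx-deployment.yaml -f nfs-pvc.yaml
$ helm uninstall nfs-server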

 
