Introduction
Does your application need to scale across multiple nodes for load balancing while still sharing access to a common PVC? For that you need a PVC that is RWX-capable. As part of our Managed Kubernetes Cluster it is possible to create CSI Cinder block storage; however, due to technical limitations, this only supports ReadWriteOnce (RWO). In this tutorial we show you how to create an RWX-capable PVC with a workaround!
PVCs are explained in the tutorial Creating Persistent Volumes in Kubernetes, on which this tutorial builds.
Take, for example, a web server that stores HTML pages, images, JavaScript etc. on its PVC. These files are uploaded once and should be available to all instances of the web server simultaneously. To distribute the load better, the deployment is scaled accordingly, so several pods of the same type run on different servers.
To ensure that all pods have access to the files across host boundaries, we create an RWX StorageClass with the nfs-ganesha-server-and-external-provisioner.
The basis for this is an NWS Managed Kubernetes Cluster. After completing the workaround, we have the option of creating PVCs that can be read and written to simultaneously by multiple pods.
A note beforehand: the setup described is not HA capable!
Prerequisites
The tutorial assumes the following:
- NWS Managed Kubernetes Cluster
- kubectl
- helm
You can find more information on these topics in our NWS Docs.
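A quick way to confirm that both tools can reach the cluster (assuming your kubeconfig already points to your NWS cluster) is to list the nodes and check the Helm client version:
$ kubectl get nodes
$ helm version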
Setup
With the following command we add the Helm repository that provides the nfs-ganesha-server-and-external-provisioner, referred to as the NFS Provisioner from here on.
$ helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
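If the repository was already added at some earlier point, refreshing the local chart index first makes sure the latest chart version is picked up:
$ helm repo update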
After that we can start the installation right away. Note the settings passed via the --set parameter. They have the following effect:
- persistence.enabled=true: ensures that our data is stored on a persistent volume and survives pod restarts.
- persistence.storageClass=standard: specifies that the storage class "standard" is used for the persistent data.
- persistence.size=200Gi: specifies the size of the PVC that the NFS Provisioner claims for the files.
Note that the PVC size specified with persistence.size is shared by all NFS PVCs that are provisioned by the NFS Provisioner. There are many other configuration options with which the NFS Provisioner can be adapted to your needs. You can find these here.
$ helm install nfs-server nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner --set persistence.enabled=true,persistence.storageClass=standard,persistence.size=200Gi
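As a side note, the same settings could also be kept in a values file and passed with -f instead of --set. A minimal, equivalent sketch; the file name nfs-values.yaml is only an example:
# nfs-values.yaml
persistence:
  enabled: true
  storageClass: standard
  size: 200Gi
$ helm install nfs-server nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner -f nfs-values.yaml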
If the NFS Provisioner was installed successfully, output like the following appears:
NAME: nfs-server
LAST DEPLOYED: Mon May 22 14:41:58 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NFS Provisioner service has now been installed.

A storage class named 'nfs' has now been created and is available to
provision dynamic volumes. You can use this storageclass by creating a
`PersistentVolumeClaim` with the correct storageClassName attribute.
For example:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-dynamic-volume-claim
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Now let’s take a look at the NFS Provisioner Pod we just created and see if it’s running:
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          36m
The installation has also created the associated storage class for the automatic provisioning of NFS RWX PVCs. We can now use it to dynamically create and use PVCs of the RWX type.
$ kubectl get storageclass
NAME                  PROVISIONER                                        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
encrypted             cinder.csi.openstack.org                           Delete          Immediate           true                   3h22m
encrypted-high-iops   cinder.csi.openstack.org                           Delete          Immediate           true                   3h22m
high-iops             cinder.csi.openstack.org                           Delete          Immediate           true                   3h22m
nfs                   cluster.local/nfs-server-nfs-server-provisioner    Delete          Immediate           true                   39m
nws-storage           cinder.csi.openstack.org                           Delete          Immediate           true                   3h22m
standard (default)    cinder.csi.openstack.org                           Delete          Immediate           true                   3h22m
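If you want to inspect the new class or the volume backing the NFS server itself, the usual kubectl commands work; the exact output depends on the chart version, and the name of the provisioner's own 200Gi PVC comes from its StatefulSet, so treat this as a way to inspect rather than as reference output:
$ kubectl describe storageclass nfs
$ kubectl get pvc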
Web server example
We now have a storage class from which we can dynamically provision PVCs with the RWX property. Let's take a look at how this is integrated into a deployment.
We create two files: one for our web server deployment and one for the RWX PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webserver
  name: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webserver
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - mountPath: /files
          name: files
      volumes:
      - name: files
        persistentVolumeClaim:
          claimName: nfs-pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs
Our deployment specifies that we want to run exactly one replica of the NGINX container, which mounts the dynamically created NFS PVC at /files. Now we have to feed the definitions to Kubernetes using kubectl:
$ kubectl apply -f nfs-pvc.yaml -f nginx-deployment.yaml
persistentvolumeclaim/nfs-pvc created
deployment.apps/webserver created
As we can see, a web server pod is running:
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          54m
webserver-5486dd9cf5-hfhnd            1/1     Running   0          114s
But since this is far from sufficient for our load, we scale the deployment up to 4 replicas:
$ kubectl scale deployment webserver --replicas=4
deployment.apps/webserver scaled
A small check is recommended and lo and behold, all 4 pods are running happily:
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          55m
webserver-5486dd9cf5-hfhnd            1/1     Running   0          3m9s
webserver-5486dd9cf5-nh9fl            1/1     Running   0          18s
webserver-5486dd9cf5-ss27f            1/1     Running   0          18s
webserver-5486dd9cf5-xl2lj            1/1     Running   0          18s
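Since the whole point is access across host boundaries, it can be worth confirming that the replicas actually landed on different nodes. The wide output adds a NODE column; node names will of course differ in your cluster:
$ kubectl get pods -o wide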
Now let’s check if the NFS is also working properly between the pods.
The first command shows us that /files is empty. With the second command we create the file nfs-is-rwx. The third output shows that we have been successful: the file was created in one pod and immediately exists in another pod.
$ kubectl exec webserver-5486dd9cf5-hfhnd -- ls /files
$ kubectl exec webserver-5486dd9cf5-nh9fl -- touch /files/nfs-is-rwx
$ kubectl exec webserver-5486dd9cf5-xl2lj -- ls /files
nfs-is-rwx
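If you want to see the NFS mount itself from inside one of the pods, a quick look at the mounted filesystem works too; the server address shown will be the ClusterIP of the NFS service in your cluster:
$ kubectl exec webserver-5486dd9cf5-hfhnd -- df -h /files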
Summary
You have now set up an NFS server that uses CSI Cinder block storage in the background to provide RWX PVCs for your pods via NFS. This works around the technical limitation of the CSI Cinder block device, so you can now use your NWS Managed Kubernetes Cluster for even more use cases.
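If you want to remove the setup again later, the components can be deleted in reverse order; a short sketch, assuming the file names used above:
$ kubectl delete -f nginx-deployment.yaml -f nfs-pvc.yaml
$ helm uninstall nfs-server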