May 26, 2023 | Kubernetes, Tutorials

ReadWriteMany (RWX) with the NFS Ganesha Provisioner

Introduction

Does your application need to scale across multiple nodes for load balancing while still accessing a common PVC? For that, you need a PVC that is RWX-capable. As part of our Managed Kubernetes Cluster, it is possible to create CSI Cinder block storage. However, due to technical limitations, this is only ReadWriteOnce (RWO) capable. We will show you here how you can still create an RWX-capable PVC using a workaround!

PVCs are explained in the tutorial Creating Persistent Volumes in Kubernetes, on which this tutorial builds.

Take, for example, a web server that stores HTML pages, images, JavaScript etc. on its PVC. These files should be uploaded once and made available to all instances of the web server simultaneously. To distribute the load better, the deployment is scaled up accordingly, so several pods of the same type run on different servers.

To ensure that all pods – across host boundaries – have access to the files, we create an RWX StorageClass with the nfs-ganesha-server-and-external-provisioner. The basis for this is an NWS Managed Kubernetes Cluster. After completing the workaround, we have the option of creating PVCs that can be read and written to simultaneously by multiple pods.

A note beforehand: the setup described is not HA capable!

 

Prerequisites

The tutorial assumes the following:

  • NWS Managed Kubernetes Cluster
  • kubectl
  • helm

You can find more information on these topics in our NWS Docs.

 

Setup

With the following command we add the Helm repository that provides the nfs-ganesha-server-and-external-provisioner, referred to below simply as the NFS Provisioner.

$ helm repo add nfs-ganesha-server-and-external-provisioner \
https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
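
If you have added the repository before, or simply want to make sure the local chart index is up to date, an optional refresh does not hurt:

$ helm repo update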

After that we can start the installation right away. Note the settings passed via the --set parameters; they have the following effect:

  • persistence.enabled=true
    This parameter ensures that our data is stored on a persistent volume and survives pod restarts.
  • persistence.storageClass=standard
    Here it is specified that the storage class “standard” should be used for the persistent data.
  • persistence.size=200Gi
    This parameter specifies the size of the PVC that the NFS Provisioner requests for storing its data.
Note that the PVC size specified with persistence.size is shared among all NFS PVCs that are obtained from the NFS Provisioner. There are many other configuration options with which the NFS Provisioner can be adapted to your needs; you can find them in the chart's documentation in the repository added above.

$ helm install nfs-server \
nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
--set persistence.enabled=true \
--set persistence.storageClass=standard \
--set persistence.size=200Gi
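
As an alternative to individual --set flags, the same settings can be kept in a values file; the additional options mentioned above would go there as well. A minimal sketch, assuming a file named values.yaml (the name is arbitrary):

# values.yaml
persistence:
  enabled: true
  storageClass: "standard"
  size: 200Gi

The installation command then shrinks to:

$ helm install nfs-server \
nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
-f values.yaml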

If the NFS Provisioner was installed successfully, output like the following appears:

NAME: nfs-server
LAST DEPLOYED: Mon May 22 14:41:58 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NFS Provisioner service has now been installed.
 
A storage class named 'nfs' has now been created and is available to provision dynamic volumes.
 
You can use this storageclass by creating a `PersistentVolumeClaim` with the correct storageClassName attribute. For example:
 
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-dynamic-volume-claim
    spec:
      storageClassName: "nfs"
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

Now let’s take a look at the NFS Provisioner Pod we just created and see if it’s running:

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          36m

The associated storage class for automatically provisioning NFS RWX PVCs has been created as well. We can now use it to dynamically create and use PVCs of the RWX type:

$ kubectl get storageclass
NAME                  PROVISIONER                                       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
encrypted             cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m
encrypted-high-iops   cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m
high-iops             cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m
nfs                   cluster.local/nfs-server-nfs-server-provisioner   Delete          Immediate           true                   39m
nws-storage           cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m
standard (default)    cinder.csi.openstack.org                          Delete          Immediate           true                   3h22m

Web server example

We can now dynamically provision PVCs with the RWX access mode. Let's take a look at how to integrate such a PVC into a deployment.

We create two files; one for our web server deployment and one for the RWX PVC:

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webserver
  name: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - mountPath: /files
          name: files
      volumes:
      - name: files
        persistentVolumeClaim:
          claimName: nfs-pvc

 

# nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs

Our deployment specifies that we want to run exactly one replica of the NGINX container, which mounts the dynamically created NFS PVC at /files. Now we have to feed the definitions to Kubernetes using kubectl:

$ kubectl apply -f nfs-pvc.yaml -f nginx-deployment.yaml
persistentvolumeclaim/nfs-pvc created
deployment.apps/webserver created
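
A side note on the mount path: in this example the shared volume is mounted at /files purely for demonstration. If NGINX should actually serve the shared content, you could mount the same PVC at the image's default document root instead. A minimal variation of the volumeMounts section above, assuming the stock nginx image and its default docroot:

        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: files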

As we can see, a webserver pod is running:

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          54m
webserver-5486dd9cf5-hfhnd            1/1     Running   0          114s

But since one replica is far from sufficient for our load, we scale the deployment up to 4 replicas:

$ kubectl scale deployment webserver --replicas=4
deployment.apps/webserver scaled

A small check is recommended and lo and behold, all 4 pods are running happily:

$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          55m
webserver-5486dd9cf5-hfhnd            1/1     Running   0          3m9s
webserver-5486dd9cf5-nh9fl            1/1     Running   0          18s
webserver-5486dd9cf5-ss27f            1/1     Running   0          18s
webserver-5486dd9cf5-xl2lj            1/1     Running   0          18s
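
Since the whole point of the RWX volume is access across host boundaries, it is worth verifying that the replicas were really spread over several nodes. The NODE column of a wide pod listing shows this (provided your cluster has more than one worker node):

$ kubectl get pods -o wide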

Now let's check whether the NFS share is also working properly between the pods.
The first command shows us that /files is empty. With the second command, we create the file nfs-is-rwx. The third output shows that we have been successful: the file was created in one pod and immediately exists in another pod.

$ kubectl exec webserver-5486dd9cf5-hfhnd -- ls /files
$ kubectl exec webserver-5486dd9cf5-nh9fl -- touch /files/nfs-is-rwx
$ kubectl exec webserver-5486dd9cf5-xl2lj -- ls /files
nfs-is-rwx
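
If you are curious where the data actually ends up, you can also peek into the NFS server pod itself. Assuming the chart's default export directory /export (an assumption; check the chart documentation if your setup differs), you should find a pvc-... subdirectory there that contains the nfs-is-rwx file:

$ kubectl exec nfs-server-nfs-server-provisioner-0 -- ls -R /export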

Summary

You have now set up an NFS server that uses CSI Cinder block storage in the background and exposes it to your pods as RWX PVCs via NFS. This way we have circumvented the technical limitation of the CSI Cinder block device, and you can now use your NWS Managed Kubernetes Cluster for even more use cases.
