You want to create a persistent volume in Kubernetes? Here you can learn how it works with OpenStack Cinder in an NWS Managed Kubernetes plan.
Pods and containers are by definition more or less temporary components in a Kubernetes cluster and are created and destroyed as needed. However, many applications such as databases can rarely be operated meaningfully without long-lived storage. With the industry-standard Container Storage Interface (CSI), Kubernetes offers uniform integration of different storage backends for persistent volumes. For our Managed Kubernetes solution, we use the OpenStack component Cinder to provide persistent volumes for pods. The CSI Cinder controller is already active for NWS Kubernetes from version 1.18.2, and you can use persistent volumes with only a few K8s objects.
Creating Persistent Volumes with CSI Cinder Controller
Before you can create a volume, a StorageClass with Cinder as the provisioner must be created. As usual, the K8s objects are sent to your cluster in YAML format using kubectl apply:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinderStorage
provisioner: cinder.csi.openstack.org
allowVolumeExpansion: true
With get and describe you can check whether the creation has worked:
$ kubectl apply -f storageclass.yaml
$ kubectl get storageclass
$ kubectl describe storageclass cinderStorage
Based on this storage class, you can now create as many volumes as you like.
Persistent Volume (PV) and Persistent Volume Claim (PVC)
You can create a new volume with the help of a PersistentVolumeClaim. The PVC claims a PersistentVolume resource for you. If no suitable PV resource is available, it is created dynamically by the CSI Cinder Controller. PVC and PV are bound to each other and are exclusively available to you. Without further configuration, a dynamically created PV is immediately deleted when the associated PVC is deleted. This behaviour can be overridden in the StorageClass defined above with the help of the reclaimPolicy.
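If, for example, a volume should survive the deletion of its PVC, you could define a separate StorageClass with reclaimPolicy set to Retain. A minimal sketch (the name cinder-storage-retain is a hypothetical example):

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-storage-retain    # hypothetical name
provisioner: cinder.csi.openstack.org
allowVolumeExpansion: true
reclaimPolicy: Retain            # keep the PV (and the Cinder volume) after the PVC is deleted
```

A PV created from this class switches to the Released state after its PVC is deleted and must then be cleaned up or re-bound manually.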
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-documentroot
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cinderStorage
In addition to the name, other properties such as size and accessMode are defined in the PVC object. After you have created the PVC in the cluster with kubectl apply, a new volume is created in the storage backend in the background. In the case of our NETWAYS Managed Kubernetes, Cinder creates a volume as an RBD in the Ceph cluster. In the next step, your new volume is mounted in the document root of an Nginx pod.
Pods and persistent Volumes
Usually, volumes are defined in the context of a pod and therefore share its life cycle. However, if you want to use a volume that is independent of the pod and container, you can reference the PVC you just created in the volumes section and then include it in the container under volumeMounts. In this example, the document root of an Nginx is replaced.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
              protocol: TCP
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: documentroot
      volumes:
        - name: documentroot
          persistentVolumeClaim:
            claimName: nginx-documentroot
            readOnly: false
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: nginx
Kubernetes and the CSI Cinder Controller naturally ensure that your new volume and the associated pods are always started on the same worker node. With kubectl you can quickly write an index.html into the volume and start the K8s proxy, and then you can already access your index.html on the persistent volume:
$ kubectl exec -it deployment/nginx -- bash -c 'echo "CSI FTW" > /usr/share/nginx/html/index.html'
$ kubectl proxy
With the CSI Cinder Controller, you can create and manage persistent volumes quickly and easily. Further features for creating snapshots or enlarging volumes are already included, and options such as multi-node attachment are already being planned. So nothing stands in the way of your database cluster in Kubernetes, and the next exciting topic in our Kubernetes blog series has been decided!
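Since the StorageClass above sets allowVolumeExpansion: true, enlarging a volume is simply a matter of raising the requested size in the PVC and re-applying it. A minimal sketch (the new size of 2Gi is an example value):

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-documentroot
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi    # increased from 1Gi; shrinking a PVC is not supported
  storageClassName: cinderStorage
```

After kubectl apply, the CSI driver resizes the underlying Cinder volume and the file system in the pod grows accordingly.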