Most applications require persistent storage in one way or another. Whether it’s profile pictures, text documents or product information in an external database, data always has to be stored somewhere. In the cloud, block storage or object storage is typically used for this. But what if the cloud environment does not meet your requirements, or you want to manage your storage yourself? This is where Ceph comes into play.
Ceph is an open source project for scalable, distributed storage across multiple hosts and disks. It offers block storage, object storage and file systems as consumable storage. In the NETWAYS Cloud, a Ceph cluster also provides the storage for your VMs, persistent volumes in NETWAYS Managed Kubernetes® and S3-compatible object storage.
In this tutorial, we will show you how you can use Rook to provision your own Ceph cluster and configure it according to your requirements in a reasonable amount of time.
What is Rook?
Rook is a Kubernetes operator that takes care of the installation, configuration, operation, and updates of one or more Ceph clusters. This is done using various CustomResources that are defined, interpreted and implemented by Rook.
Rook distinguishes between four different cluster types:
- Host Storage Cluster: The storage managed by Ceph is located directly on the hosts of the Kubernetes cluster in which Rook is running.
- External Storage Cluster: Ceph runs in an external provider cluster. Rook only configures the storage classes and access in the consumer cluster.
- PVC Storage Cluster: The storage managed by Ceph is located on PersistentVolumeClaims, which are provisioned in a Kubernetes cluster.
- Stretch Storage Cluster: The storage managed by Ceph is distributed by Rook across two failure domains in order to remain available in the event of a failure of one of the domains.
In this tutorial, we will look at the PVC storage cluster type, as this is the most common scenario in the cloud: In a managed Kubernetes cluster, for example, you often cannot expand the storage on the nodes of the cluster, but you have PersistentVolumeClaims available. As a bonus, ReadWriteMany (RWX) storage can be implemented, a feature that many clouds do not offer out of the box.
Prerequisites
We need a few things to set up Rook:
- A Kubernetes cluster (managed or not) with a storage class that supports the Filesystem and Block volume modes
- Enough resources in the cluster (Ceph is unfortunately quite resource-hungry – at least 16 GB of RAM per node is advisable)
- Helm to install Rook using Helm charts
- kubectl to monitor the installation process
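Before starting, you can quickly check these prerequisites; note that the available storage classes differ per environment (in this tutorial we assume one named standard exists):

helm version
kubectl get nodes
kubectl get storageclasses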
Once all the requirements have been met, we can start installing Rook.
Installation of Rook
The installation of Rook basically consists of six steps:
- Setting up the Rook chart repository
- Configuration of the Rook operator
- Installing the Rook operator
- Configuration of the Ceph cluster
- Installation of the Ceph cluster
- Testing the storage managed by Ceph
Step 1: Setting up the Rook chart repository
Even though many example configurations for Rook scenarios exist in the form of Kubernetes manifests, installation via Helm is the most common way to install Rook in production. To do this, we first add the Rook project’s chart repository and download the latest information about the charts it contains:
helm repo add rook-release https://charts.rook.io/release
helm repo update
helm search repo rook-ceph

The last command should list the following information:

NAME                             CHART VERSION   APP VERSION   DESCRIPTION
rook-release/rook-ceph           v1.19.1         v1.19.1       File, Block, and Object Storage Services for...
rook-release/rook-ceph-cluster   v1.19.1         v1.19.1       Manages a single Ceph cluster namespace for Rook

There are two Helm charts in the Rook chart repository: rook-ceph and rook-ceph-cluster. The rook-ceph-cluster chart is responsible for installing a Ceph cluster. For this, however, we first need the Rook operator, which we can install with the rook-ceph chart. In step 2, we take care of the configuration of this chart.
Step 2: Configuration of the Rook operator
Helm charts can usually be configured via a values.yaml file, in which predefined values for the chart templating can be specified. The complete values.yaml with all possible configuration options of the rook-ceph Helm chart can be found on GitHub.
At just under 700 lines, it is not exactly small; fortunately, we do not have to adjust any values for a first deployment.
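If you still want to browse the available options, you do not even need GitHub – helm show values prints the chart’s defaults, which you can redirect into a local file as a starting point (the file name here is arbitrary):

helm show values rook-release/rook-ceph > operator-values.yaml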
If you want to make changes to the Rook operator later, you can still do this using helm upgrade and a corresponding values.yaml. The following values.yaml would, for example, adapt the containerSecurityContext of the Rook operator for Pod Security Admission and deactivate the deployment of the CephFS driver:
containerSecurityContext:
  seccompProfile:
    type: RuntimeDefault
  allowPrivilegeEscalation: false
ceph-csi-operator:
  controllerManager:
    manager:
      containerSecurityContext:
        seccompProfile:
          type: RuntimeDefault
csi:
  enableCephfsDriver: false

Step 3: Installing the Rook operator
Helm charts are installed using the helm install command. Expected arguments are a name for the so-called Helm release (the installed instance of the application), the chart to be installed, and optionally an existing namespace or one to be created.
You can also set individual values from the values.yaml explicitly with --set or pass your own values.yaml.
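As a small illustration (not needed for our scenario): once the operator is installed, the CephFS driver from the values.yaml above could also be disabled directly on the command line while keeping all other values:

helm upgrade rook-ceph rook-release/rook-ceph \
  --namespace rook-ceph \
  --reuse-values \
  --set csi.enableCephfsDriver=false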
The installation of the rook-ceph Helm chart looks like this in our scenario:
helm install rook-ceph rook-release/rook-ceph \
  --namespace rook-ceph \
  --create-namespace

The Rook operator should be installed by Helm in the namespace rook-ceph – you can check the progress with kubectl:
kubectl get pods --namespace rook-ceph
NAME                                          READY   STATUS    RESTARTS   AGE
ceph-csi-controller-manager-7f5867ddb-rrr45   1/1     Running   0          43s
rook-ceph-operator-c67cd758c-8b4dw            1/1     Running   0          43s

If both pods are Ready and Running, the installation of the Rook operator was successful and we can continue with the configuration of an actual Ceph cluster.
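The rook-ceph chart has also installed the CustomResourceDefinitions that Rook needs for its CustomResources; you can verify this as well:

kubectl get crds | grep -e ceph.rook.io -e objectbucket.io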
Step 4: Configuration of the Ceph cluster
The configuration and installation of a Ceph cluster also works via a Helm chart: rook-ceph-cluster. The Helm chart installs almost exclusively RBAC resources and CustomResources. The Rook operator, which interprets and implements these CustomResources, then takes care of the actual setup of the cluster:
helm template rook-ceph-cluster rook-release/rook-ceph-cluster | grep kind
kind: ServiceAccount
kind: ServiceAccount
kind: ServiceAccount
kind: ServiceAccount
kind: ServiceAccount
kind: ServiceAccount
kind: ServiceAccount
kind: StorageClass
kind: StorageClass
kind: StorageClass
kind: ClusterRoleBinding
kind: ClusterRole
- kind: ServiceAccount
kind: ClusterRoleBinding
kind: ClusterRole
- kind: ServiceAccount
kind: Role
kind: Role
kind: Role
kind: Role
kind: RoleBinding
kind: ClusterRole
- kind: ServiceAccount
kind: RoleBinding
kind: Role
- kind: ServiceAccount
kind: RoleBinding
kind: Role
- kind: ServiceAccount
kind: RoleBinding
kind: ClusterRole
- kind: ServiceAccount
kind: RoleBinding
kind: Role
- kind: ServiceAccount
kind: RoleBinding
kind: Role
- kind: ServiceAccount
kind: CephBlockPool
kind: CephCluster
kind: CephFilesystem
kind: CephFilesystemSubVolumeGroup
kind: CephObjectStore

In addition to RBAC manifests (ServiceAccounts, Roles, ClusterRoles, RoleBindings, ClusterRoleBindings) and the CustomResource for the Ceph cluster itself (CephCluster), the Helm chart installs resources for storage pools (CephBlockPool), shared file systems (CephFilesystem, CephFilesystemSubVolumeGroup) and object stores (CephObjectStore), as well as StorageClasses associated with these resources.
The only Deployment installed directly by the Helm chart is the Ceph toolbox, which you can use to inspect and debug your Ceph cluster.
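Once the Ceph cluster is up and running (step 5), you can use the toolbox to query Ceph directly – for example its overall status (assuming the default deployment name rook-ceph-tools):

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status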
Before we can install a Ceph cluster, however, we must first put together the configuration for our PVC storage cluster. To do this, we create the following values.yaml:
cephClusterSpec:
  mon:
    count: 3
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 10Gi
        storageClassName: standard # this storage class must exist in your cluster
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3
        portable: true
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 10Gi
              storageClassName: standard # this storage class must exist in your cluster
              volumeMode: Block

The values entered here primarily define the use of PersistentVolumeClaims (PVCs) for the Ceph monitors (section mon) and Object Storage Devices (OSDs, section storage).
In both sections, we define the number of desired replicas (3), the storage class to be used for the PVCs and the desired size of the available storage. For this tutorial, we choose a relatively small size of 10 GiB.
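Keep in mind that this is raw capacity: three OSDs with 10 GiB each add up to 30 GiB of raw storage, and since the pools created by the rook-ceph-cluster chart use a replication factor of 3 by default, only about 10 GiB of that remains usable.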
Step 5: Installing the Ceph cluster
We can now install the Ceph cluster configuration we just created using the rook-ceph-cluster Helm chart. The namespace is the same as for the Rook operator:

helm install -n rook-ceph rook-ceph-cluster rook-release/rook-ceph-cluster -f values.yaml

The installation and setup of the cluster can take a few minutes. The Rook operator starts the resources required by the Ceph cluster one after the other, configures the PVCs provided for the cluster storage and takes care of creating the storage classes and CustomResources.
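Behind the scenes, the first thing Rook requests are the PVCs defined in our values.yaml for the monitors and OSDs. You can watch them being created and bound:

kubectl get pvc -n rook-ceph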
You can monitor the progress using the pods in the rook-ceph namespace or the status of the CephCluster CustomResource:
kubectl get pods -n rook-ceph -l=rook_cluster=rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-crashcollector-48c6fcc6eef1956bf1e1247c60581206-s2x74 1/1 Running 0 24m
rook-ceph-crashcollector-5bc661df96ac242309cd28dfcdc1efd0-mh25x 1/1 Running 0 24m
rook-ceph-crashcollector-d449ca875daa794f49a1147650b323ae-8mvkg 1/1 Running 0 5m56s
rook-ceph-crashcollector-f16a0024aad7d77ed6e79b538eb2906d-gfs8s 1/1 Running 0 22m
rook-ceph-exporter-48c6fcc6eef1956bf1e1247c60581206-74f97fwfmr4 1/1 Running 0 24m
rook-ceph-exporter-5bc661df96ac242309cd28dfcdc1efd0-5576f6wpmnd 1/1 Running 0 24m
rook-ceph-exporter-d449ca875daa794f49a1147650b323ae-84fb7bt4lfb 1/1 Running 0 5m54s
rook-ceph-exporter-f16a0024aad7d77ed6e79b538eb2906d-9c456bzgw72 1/1 Running 0 22m
rook-ceph-mds-ceph-filesystem-a-58b645f497-jnck2 2/2 Running 0 23m
rook-ceph-mds-ceph-filesystem-b-7b686f5cd5-g85bd 2/2 Running 0 23m
rook-ceph-mgr-a-55fd7c8f44-kqbnr 3/3 Running 0 24m
rook-ceph-mgr-b-5b4bfcbb69-tfz52 3/3 Running 0 24m
rook-ceph-mon-a-5f878c749b-rldvv 2/2 Running 0 26m
rook-ceph-mon-b-6b8578dd8d-f2nkq 2/2 Running 0 25m
rook-ceph-mon-c-58fcf64c98-7t255 2/2 Running 0 25m
rook-ceph-osd-0-c797bfb97-rzt4j 2/2 Running 0 106s
rook-ceph-osd-1-768787c556-qxdkj 2/2 Running 0 3m32s
rook-ceph-osd-2-fd6f986fb-sx46r 2/2 Running 0 2m52s
rook-ceph-osd-prepare-48c6fcc6eef1956bf1e1247c60581206-2tmb2 0/1 Completed 0 43s
rook-ceph-osd-prepare-5bc661df96ac242309cd28dfcdc1efd0-84wx5 0/1 Completed 0 46s
rook-ceph-osd-prepare-d449ca875daa794f49a1147650b323ae-mwwk9 0/1 Completed 0 52s
rook-ceph-osd-prepare-f16a0024aad7d77ed6e79b538eb2906d-kcncr 0/1 Completed 0 49s
rook-ceph-osd-prepare-set1-data-1blxfw-pwhf9 0/1 Completed 0 4m26s
rook-ceph-rgw-ceph-objectstore-a-5c4bcdfff9-jj5c4               2/2     Running     0          22m

The CephCluster should now be in the HEALTH_OK state:
kubectl get cephcluster -n rook-ceph rook-ceph
NAME        DATADIRHOSTPATH   MONCOUNT   AGE   PHASE   MESSAGE                        HEALTH
rook-ceph   /var/lib/rook     3          27m   Ready   Cluster created successfully   HEALTH_OK

At first glance, everything looks good. Now it’s time to test the functionality of the various storage classes provided by Rook Ceph!
Step 6: Testing the storage managed by Ceph
Once the Ceph cluster has been successfully installed, we can discover three new storage classes in our cluster:
kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ceph-block (default) rook-ceph.rbd.csi.ceph.com Delete Immediate true 17h
ceph-bucket rook-ceph.ceph.rook.io/bucket Delete Immediate false 17h
ceph-filesystem rook-ceph.cephfs.csi.ceph.com Delete Immediate true 17h
[...]

- ceph-block provides RWO-compatible block storage
- ceph-bucket provides S3-/Swift-compatible buckets for object storage
- ceph-filesystem provides RWX-compatible Ceph file systems
In the last part of this tutorial, we will now test the various storage classes.
Step 6.1: Testing ceph-block
The ceph-block storage class provides RWO-compatible block storage. This means that we can use it to provision PersistentVolume(Claim)s that can be mounted by exactly one node at a time. We therefore create the following manifest pod-rwo.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwo-pvc
  labels:
    scenario: rwo-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-block
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: rwo-test
  labels:
    scenario: rwo-test
spec:
  containers:
    - name: app
      image: busybox
      command: [sh, -c, "echo RWO OK > /mnt/data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rwo-pvc

With kubectl apply we create the described pod, which mounts an RWO PersistentVolume requested by the PersistentVolumeClaim and creates a new file in it:
kubectl apply -f pod-rwo.yaml
persistentvolumeclaim/rwo-pvc created
pod/rwo-test created
kubectl get pods -l=scenario=rwo-test
NAME READY STATUS RESTARTS AGE
rwo-test 1/1 Running 0 29s
kubectl get pvc -l=scenario=rwo-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
rwo-pvc   Bound    pvc-3ccb1f8f-470b-44fe-93d4-9429f165edbc   1Gi        RWO            ceph-block     <unset>                 32s

The pod and the volume appear to have been successfully provisioned. In addition, we can read the file created by the pod in the volume:
kubectl exec -it rwo-test -- cat /mnt/data/test.txt
RWO OK

The pod was able to write to the mounted volume. The ceph-block storage class seems functional.
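If you are curious, you can also look at the RBD image backing the volume through the Ceph toolbox (assuming the chart’s default pool name ceph-blockpool):

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rbd ls ceph-blockpool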
Step 6.2: Testing ceph-filesystem
It gets more exciting when testing the storage class ceph-filesystem, which provides RWX-compatible storage. For this, we need several pods that mount the same volume and interact with it. To do this, we create the pods-rwx.yaml manifest:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-pvc
  labels:
    scenario: rwx-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ceph-filesystem
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: rwx-writer
  labels:
    scenario: rwx-test
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              scenario: rwx-test
          topologyKey: kubernetes.io/hostname
  containers:
    - name: writer
      image: busybox
      command: [sh, -c, "while true; do date >> /mnt/shared/log.txt; sleep 5; done"]
      volumeMounts:
        - name: shared
          mountPath: /mnt/shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: rwx-pvc
---
apiVersion: v1
kind: Pod
metadata:
  name: rwx-reader
  labels:
    scenario: rwx-test
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              scenario: rwx-test
          topologyKey: kubernetes.io/hostname
  containers:
    - name: reader
      image: busybox
      command: [sh, -c, "while true; do cat /mnt/shared/log.txt; sleep 10; done"]
      volumeMounts:
        - name: shared
          mountPath: /mnt/shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: rwx-pvc

As in the RWO test, we provision a PersistentVolumeClaim, this time with the ceph-filesystem storage class. In addition, the two pods are spread across different nodes by a podAntiAffinity rule in order to test the simultaneous mounting of the volume on different nodes.
kubectl apply -f pods-rwx.yaml
persistentvolumeclaim/rwx-pvc created
pod/rwx-writer created
pod/rwx-reader created
kubectl get pods -l=scenario=rwx-test
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rwx-reader 1/1 Running 0 85s 10.2.1.167 shoot--20993--ceph-cluster-default-nodes-z1-85fb4-79xzh <none> <none>
rwx-writer 1/1 Running 0 85s 10.2.0.145 shoot--20993--ceph-cluster-default-nodes-z1-85fb4-b8bbk <none> <none>
kubectl get pvc -l=scenario=rwx-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
rwx-pvc   Bound    pvc-9c754dc5-7084-4d65-b7f5-80bd8f4bf657   1Gi        RWX            ceph-filesystem   <unset>                 2m4s

The pods are running on different nodes and the volume has been successfully created. Based on the output of the two pods, we can determine whether simultaneous reading/writing from different applications works:
kubectl logs rwx-reader
Tue Feb 24 08:23:15 UTC 2026
Tue Feb 24 08:23:20 UTC 2026
Tue Feb 24 08:23:25 UTC 2026
Tue Feb 24 08:23:15 UTC 2026
Tue Feb 24 08:23:20 UTC 2026
Tue Feb 24 08:23:25 UTC 2026

We can access the data written by the rwx-writer pod from the rwx-reader pod. The RWX-compatible storage class also works.
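Here, too, a look behind the scenes is possible: the volume is backed by a CephFS subvolume, which you can list via the toolbox (assuming the chart’s default filesystem name ceph-filesystem and the CSI subvolume group csi):

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs subvolume ls ceph-filesystem --group_name csi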
Step 6.3: Testing ceph-bucket
The last remaining storage class to be tested is ceph-bucket. Storage of this class is not requested via a PersistentVolumeClaim, but via the CustomResource ObjectBucketClaim provided by Rook. We create the following manifests in pod-s3.yaml:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket
spec:
  generateBucketName: my-bucket
  storageClassName: ceph-bucket
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-test
  labels:
    scenario: s3-test
spec:
  containers:
    - name: s3
      image: amazon/aws-cli
      command:
        - sh
        - -c
        - |
          aws s3 ls s3://$BUCKET_NAME && \
          echo "hello ceph" | aws s3 cp - s3://$BUCKET_NAME/hello.txt && \
          aws s3 cp s3://$BUCKET_NAME/hello.txt - && \
          echo "Object storage OK" && \
          sleep 3600
      env:
        # ConfigMap injected by OBC controller
        - name: BUCKET_NAME
          valueFrom:
            configMapKeyRef:
              name: my-bucket
              key: BUCKET_NAME
        - name: BUCKET_HOST
          valueFrom:
            configMapKeyRef:
              name: my-bucket
              key: BUCKET_HOST
        - name: BUCKET_PORT
          valueFrom:
            configMapKeyRef:
              name: my-bucket
              key: BUCKET_PORT
        # Secret injected by OBC controller
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: my-bucket
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: my-bucket
              key: AWS_SECRET_ACCESS_KEY
        - name: AWS_ENDPOINT_URL
          value: http://$(BUCKET_HOST):$(BUCKET_PORT)

The manifest shows that when provisioning an object storage bucket, Rook also takes care of providing the required information (bucket name, host, port, access key ID and secret access key) in a ConfigMap and a Secret, which we only have to integrate into our pod:
kubectl apply -f pod-s3.yaml
objectbucketclaim.objectbucket.io/my-bucket created
pod/s3-test created
kubectl get pods -l=scenario=s3-test
NAME READY STATUS RESTARTS AGE
s3-test 1/1 Running 0 65s
kubectl get objectbuckets
NAME AGE
obc-default-my-bucket   75s

In this scenario, the requested storage is created in the form of an ObjectBucket via our ObjectBucketClaim, and the pod starts successfully. The pod’s logs should tell us whether an upload to and download from the provisioned S3 bucket works:
kubectl logs s3-test
hello ceph
Object storage OK

If the output looks like this, the upload and download have worked – so the ceph-bucket storage class also works as expected.
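Since the credentials live in a regular Secret and ConfigMap (both named my-bucket, like our claim), you can also read them out, for example to access the bucket from outside the cluster – provided the RGW endpoint is reachable from there. Afterwards, the test resources from all three scenarios can be cleaned up again:

# read the generated S3 credentials
kubectl get secret my-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
kubectl get configmap my-bucket -o jsonpath='{.data.BUCKET_NAME}'

# remove the test resources
kubectl delete -f pod-rwo.yaml -f pods-rwx.yaml -f pod-s3.yaml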
Conclusion
With Rook, a fully-fledged Ceph cluster can be provisioned and operated in an existing Kubernetes cluster in just a few steps – without requiring in-depth Ceph expert knowledge. The operator does the hard work: initial setup, configuration of storage classes, ongoing management of the cluster and updates of Rook and Ceph.
As the tests in this tutorial have shown, three storage types are available after installation: RWO block storage via ceph-block, RWX-capable shared storage via ceph-filesystem and S3-compatible object storage via ceph-bucket. In particular, the ability to provide ReadWriteMany volumes makes Rook Ceph an attractive solution for applications that need to access the same storage from multiple pods at the same time – a feature that many cloud providers do not offer out of the box.
Of course, Ceph also brings complexity: the resource requirements should not be underestimated, and for production environments careful planning of the replication factor, failure domains and capacity is recommended. The Rook documentation offers extensive guidance for further configuration. And, of course, our MyEngineers® will also be happy to help you!