title | excerpt | updated |
---|---|---|
Configuring multi-attach persistent volumes with OVHcloud Cloud Disk Array | Find out how to configure a multi-attach persistent volume using OVHcloud Cloud Disk Array | 2025-02-12 |
OVHcloud Managed Kubernetes natively integrates Block Storage as persistent volumes. However, this technology may not be suited to some legacy or non-cloud-native applications, which often need to share persistent data across different pods on multiple worker nodes (ReadWriteMany or RWX). If you need to do this for some of your workloads, one solution is to use CephFS volumes.
OVHcloud Cloud Disk Array is a managed solution that lets you easily configure a Ceph cluster and multiple CephFS volumes. In this tutorial we are going to see how to configure your OVHcloud Managed Kubernetes cluster to use OVHcloud Cloud Disk Array as a CephFS provider for Kubernetes Persistent Volumes.
This tutorial assumes that you already have a working OVHcloud Managed Kubernetes cluster, and some basic knowledge of how to operate it. If you want to know more about these topics, please refer to the deploying a Hello World application documentation.
It also assumes you have an OVHcloud Cloud Disk Array already available. If you don't, you can order one in the OVHcloud Control Panel.
You also need to have Helm installed on your workstation; please refer to the How to install Helm on OVHcloud Managed Kubernetes Service tutorial.
To configure OVHcloud Cloud Disk Array, you need to use the OVHcloud API. If you have never used it, you can find the basics here: First steps with the OVHcloud API.
- List your available Cloud Disk Array clusters:
[!api]
@api {v1} /dedicated/ceph GET /dedicated/ceph
- Create a file system on your Cloud Disk Array:
[!api]
@api {v1} /dedicated/ceph POST /dedicated/ceph/{serviceName}/cephfs/{fsName}/enable
- Create a user for the CephFS CSI that will be used by your Kubernetes cluster:
[!api]
@api {v1} /dedicated/ceph POST /dedicated/ceph/{serviceName}/user
- Add permissions on fs-default for the Ceph CSI user:
[!api]
@api {v1} /dedicated/ceph POST /dedicated/ceph/{serviceName}/user/{userName}/pool
Your cluster is installed with a public network, or with a private network that does not use an OVHcloud Internet Gateway or a custom gateway as its default route
Once the file system is created, we need to allow our Kubernetes nodes to access it.
Get your Kubernetes nodes' IP addresses:
kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
$ kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
51.77.204.175 51.77.205.79
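If you want to keep these addresses at hand for the ACL configuration step below, you can store them in a shell variable. This is a minimal sketch, reusing the same jsonpath query as above (the NODE_IPS variable name is just an example):

NODE_IPS=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }')
# Print one address per line, ready to be copied into the ACL call
echo $NODE_IPS | tr ' ' '\n'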
Your cluster is installed with a private network and a default route via this private network (OVHcloud Internet Gateway/OpenStack Router or a custom one)
Because your nodes are configured to be routed by the private network gateway, you need to add the gateway IP address to the ACLs.
When using a Public Cloud Gateway with our Managed Kubernetes Service, the public IPs on the nodes are only used for management purposes: MKS Known Limits
You can get your OVHcloud Internet Gateway's Public IP by navigating through the OVHcloud Control Panel:
Public Cloud{.action} > Select your tenant > Network / Gateway{.action} > Public IP{.action}
You can also use the following API endpoint to retrieve your OVHcloud Internet Gateway's Public IP:
[!api]
@api {v1} /cloud GET /cloud/project/{serviceName}/region/{regionName}/gateway/{id}
If you want to use your Kubernetes cluster to retrieve your Gateway's Public IP, run this command:
kubectl run get-gateway-ip --image=ubuntu:latest -i --tty --rm
This will create a temporary pod and open a console.
You may have to wait a little for the pod to be created. Once the shell appears, you can run this command:
apt update && apt upgrade -y && apt install -y curl && curl ifconfig.ovh
This command will output the Public IP of the Gateway of your Kubernetes cluster.
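If you prefer a non-interactive alternative, the same check can be done in a single command. This is a minimal sketch, assuming your pods have outbound internet access (the pod name get-gateway-ip-oneshot is just an example):

# Run a throwaway pod that installs curl, prints the gateway's public IP and is deleted on exit
kubectl run get-gateway-ip-oneshot --image=ubuntu:latest --restart=Never --rm -i -- bash -c "apt-get update -qq && apt-get install -y -qq curl >/dev/null && curl -s ifconfig.ovh"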
- Add the list of node IPs or the Gateway IP to allow access to the Cloud Disk Array cluster:
[!api]
@api {v1} /dedicated/ceph POST /dedicated/ceph/{serviceName}/acl
- Get the key for the CephFS CSI user:
[!api]
@api {v1} /dedicated/ceph GET /dedicated/ceph/{serviceName}/user/{userName}
- Get the Ceph monitor IPs:
[!api]
@api {v1} /dedicated/ceph GET /dedicated/ceph/{serviceName}
- Create an Ubuntu pod on your Kubernetes cluster:
kubectl run -it ubuntu --image=ubuntu
- Install the Ceph client and vim inside the Ubuntu pod:
apt-get update
apt-get install ceph-common vim
- Configure the Ceph client with the monitors IP and the key:
vim ceph.conf
[global]
mon_host = <your_ceph_monitor_ip_1>,<your_ceph_monitor_ip_2>,<your_ceph_monitor_ip_3>
keyring = /root/ceph.client.ceph-csi.keyring
vim /root/ceph.client.ceph-csi.keyring
[client.ceph-csi]
key = <your_ceph_csi_user_key>
- Test the Ceph client configuration:
ceph --conf ceph.conf --id ceph-csi fs ls
ceph --conf ceph.conf --id ceph-csi fs ls
name: fs-default, metadata pool: cephfs.fs-default.meta, data pools: [cephfs.fs-default.data ]
- Add a subvolumegroup to fs-default:
ceph --conf ceph.conf --id ceph-csi fs subvolumegroup create fs-default csi
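Before leaving the pod, you can optionally check that the subvolume group exists; the output should list the csi group:

ceph --conf ceph.conf --id ceph-csi fs subvolumegroup ls fs-default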
- Exit the Ubuntu pod:
exit
- Destroy the Ubuntu pod:
kubectl delete pod ubuntu
- Add the Ceph CSI Helm chart repository:
helm repo add ceph-csi https://ceph.github.io/csi-charts
- Create a configuration file for the Ceph CSI Helm chart:
vim values.yaml
csiConfig:
  - clusterID: "abcd123456789" # You can change this, but it needs to have at least one letter character
    monitors:
      - "<your_ceph_monitor_ip_1>:6789"
      - "<your_ceph_monitor_ip_2>:6789"
      - "<your_ceph_monitor_ip_3>:6789"
storageClass:
  create: true
  name: "cephfs"
  clusterID: "abcd123456789" # This should be the ID you chose above
  fsName: "fs-default"
secret:
  create: true
  adminID: "ceph-csi"
  adminKey: "<your_ceph_csi_user_key>"
- Install the Ceph CSI on the Managed Kubernetes cluster:
helm install --namespace "ceph-csi-cephfs" "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs --create-namespace -f ./values.yaml
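Before moving on, you can verify that the CSI pods are running and that the new storage class was created:

kubectl get pods -n ceph-csi-cephfs
kubectl get storageclass cephfs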
Let's create a cephfs-persistent-volume-claim.yaml file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi
And apply this to create the persistent volume claim:
kubectl apply -f cephfs-persistent-volume-claim.yaml
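You can check that the claim was created and reaches the Bound status after a few moments:

kubectl get pvc cephfs-pvc -n default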
Let's now create a DaemonSet, which will create pods on all available nodes in order to use our CephFS volume simultaneously on multiple nodes. Let's create a cephfs-nginx-daemonset.yaml file:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cephfs-nginx
  namespace: default
spec:
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      volumes:
        - name: cephfs-volume
          persistentVolumeClaim:
            claimName: cephfs-pvc
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: cephfs-volume
And apply this to create the Nginx pods:
kubectl apply -f cephfs-nginx-daemonset.yaml
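You can verify that one Nginx pod is running per node and that all of them mount the same volume:

kubectl get pods -l name=nginx -n default -o wide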
Let's enter the first Nginx container to create a file on the CephFS persistent volume:
FIRST_POD=$(kubectl get pod -l name=nginx --no-headers=true -o custom-columns=:metadata.name | head -1)
kubectl exec -it $FIRST_POD -n default -- bash
Create a new index.html file:
echo "CEPHFS volume!" > /usr/share/nginx/html/index.html
And exit the Nginx container:
exit
Generate the URL to open in your browser:
URL=$(echo "http://localhost:8001/api/v1/namespaces/default/pods/http:$FIRST_POD:/proxy/")
echo $URL
You can open the displayed URL to access the Nginx Service.
Use the following commands to validate that the file system is shared with the second pod (provided you have more than one node deployed).
SECOND_POD=$(kubectl get pod -l name=nginx --no-headers=true -o custom-columns=:metadata.name | head -2 | tail -1)
URL2=$(echo "http://localhost:8001/api/v1/namespaces/default/pods/http:$SECOND_POD:/proxy/")
echo $URL2
Let’s try to access our new web page:
kubectl proxy
Open both URLs generated by the commands above to see if the data is shared with all the pods connected to the Ceph volume.
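If you prefer the command line, you can also fetch both pages with curl from another terminal (exporting or pasting the two URLs there) while kubectl proxy is running; both should return the content written earlier:

curl -s "$URL"
curl -s "$URL2"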
As you can see, the data is correctly shared between the two Nginx pods running on two different Kubernetes nodes. Congratulations, you have successfully set up a multi-attach persistent volume with OVHcloud Cloud Disk Array!
To learn more about using your Kubernetes cluster the practical way, we invite you to look at our OVHcloud Managed Kubernetes documentation.
- If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts to assist you with your specific use case.
- Join our community of users.