Even in a reliable cluster, nodes can and do fail, and a node failure is not as graceful as a reboot. The causes are many: catastrophic hardware failure, operating system failure, or communication failure among the nodes. To survive such hazards, replication of the volume becomes necessary.

Replication is the process of maintaining copies of a volume so that the cluster stays available and data is not lost. OpenEBS provides volume replication through different storage engines; one of them is cStor.

cStor replicates data synchronously across its volume replicas.

Prerequisites for scaling up the replicas of a cStor volume:

  • A cStor pool must be available that does not already host a replica of this cStor volume.
  • The OpenEBS version must be 1.3.0 or later (a quick way to check is shown after this list).
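
One quick check, assuming a standard OpenEBS installation in the openebs namespace, is to look at the openebs.io/version label that the OpenEBS components carry:

kubectl get pods -n openebs --show-labels | grep openebs.io/version

The LABELS column should report openebs.io/version=1.3.0 or later for the OpenEBS pods.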

Follow the steps below to scale up cStor volume replicas:

Get the StorageClass name using the following command:

kubectl get sc

Example Output:
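
Output similar to the following appears; the storage classes and ages shown here are illustrative:

NAME                        PROVISIONER                                                AGE
openebs-device              openebs.io/local                                           4h
openebs-hostpath            openebs.io/local                                           4h
openebs-sc-cstor            openebs.io/provisioner-iscsi                               4h
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   4h
standard (default)          kubernetes.io/gce-pd                                       4h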

The StorageClass used for creating the cStor volume here is openebs-sc-cstor. Run the following command to get the details of this StorageClass:

kubectl get sc openebs-sc-cstor -o yaml

This returns the YAML of the StorageClass openebs-sc-cstor.
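
A representative definition looks like the following; this is a sketch that assumes the StorageClass was created with the usual cStor configuration, with the StoragePoolClaim name matching the cstor-disk-pool used later in this walkthrough:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-sc-cstor
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "1"
provisioner: openebs.io/provisioner-iscsi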

In the YAML above, we can see that the ReplicaCount value is 1.

Get the volume name using the following command:

kubectl get pvc
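
Output along the following lines appears; the claim name and capacity here are hypothetical, while the VOLUME name is the one used throughout the rest of this walkthrough:

NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
demo-cstor-claim   Bound    pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8   5Gi        RWO            openebs-sc-cstor   4h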

Note the VOLUME name and use it in the following command to get the details of the corresponding cStor volume. All subsequent commands use the PVC above:

kubectl get cstorvolume -n openebs -l openebs.io/persistent-volume=<Vol-name>

With the volume name from the PVC above, the command becomes:

kubectl get cstorvolume -n openebs -l openebs.io/persistent-volume=pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8

Example output:
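
An illustrative result (the columns may vary slightly across OpenEBS versions):

NAME                                       STATUS    AGE   CAPACITY
pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8   Healthy   4h    5G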

Run the following command to get the details of the existing cStor volume replicas:

kubectl get cvr -n openebs -l openebs.io/persistent-volume=pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8
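
Since the replica count is 1, a single CVR is expected, on the pool that already hosts the replica (cstor-disk-pool-hgt4 in this example); the USED/ALLOCATED figures and age here are illustrative:

NAME                                                            USED   ALLOCATED   STATUS    AGE
pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8-cstor-disk-pool-hgt4   6K     6K          Healthy   4h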

Next, get the cStor pools available for creating the new cStor volume replica. The following command lists the other cStor pools belonging to the same StoragePoolClaim, filtering out cstor-disk-pool-hgt4, the pool that already hosts a replica of this volume:

kubectl get csp -l openebs.io/storage-pool-claim=cstor-disk-pool | grep -v cstor-disk-pool-hgt4

Example Output:
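
For instance (allocation figures, pool type, and ages here are illustrative):

NAME                   ALLOCATED   FREE    CAPACITY   STATUS    TYPE      AGE
cstor-disk-pool-2phf   270K        39.7G   39.8G      Healthy   striped   4h
cstor-disk-pool-zm8l   270K        39.7G   39.8G      Healthy   striped   4h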

The example output above shows two more cStor pools available, cstor-disk-pool-2phf and cstor-disk-pool-zm8l, so the volume replica count can be scaled up from 1 to as many as 3. If no cStor pools are available for the scale-up, first create a new cStor pool by updating the existing SPC configuration.

Run the following command to get the details of the cStor pool where the new replica will be created:

kubectl get csp -n openebs cstor-disk-pool-2phf -oyaml

Note down the following parameters from the output:

  • metadata.labels.cstorpool.openebs.io/name
  • metadata.labels.cstorpool.openebs.io/uid
  • metadata.annotations.cstorpool.openebs.io/hostname

The sample CVR Yaml is provided below:
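
The following is a sketch modeled on the OpenEBS 1.3 documentation. Replace the angle-bracket placeholders with the parameters noted from the cStor pool above, and copy capacity and targetIP from the existing CVR of the same volume (kubectl get cvr pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8-cstor-disk-pool-hgt4 -n openebs -o yaml):

apiVersion: openebs.io/v1alpha1
kind: CStorVolumeReplica
metadata:
  # Naming convention: <pv-name>-<target-pool-name>
  name: pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8-cstor-disk-pool-2phf
  namespace: openebs
  annotations:
    # cstorpool.openebs.io/hostname noted from the CSP output above
    cstorpool.openebs.io/hostname: <hostname-from-csp>
  labels:
    # cstorpool.openebs.io/name and .../uid noted from the CSP output above
    cstorpool.openebs.io/name: cstor-disk-pool-2phf
    cstorpool.openebs.io/uid: <uid-from-csp>
    cstorvolume.openebs.io/name: pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8
    openebs.io/persistent-volume: pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8
    openebs.io/version: 1.3.0
spec:
  # Copy these two values from the existing CVR of this volume
  capacity: <capacity-from-existing-cvr>
  targetIP: <target-ip-from-existing-cvr>
status:
  # Recreate tells cStor to build this replica from scratch on the new pool
  phase: Recreate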

Apply the updated CVR YAML to create the new replica of the cStor volume using the following command:

kubectl apply -f cvr.yaml

Example Output:
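
With the CVR name used above, kubectl confirms the creation:

cstorvolumereplica.openebs.io/pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8-cstor-disk-pool-2phf created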

Verify that the new CVR has been created successfully using the following command:

kubectl get cvr -n openebs

Example output:
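
Output similar to the following appears; the new replica on cstor-disk-pool-2phf has not been rebuilt yet (sizes and ages are illustrative):

NAME                                                            USED   ALLOCATED   STATUS    AGE
pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8-cstor-disk-pool-2phf   6K     6K          Offline   1m
pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8-cstor-disk-pool-hgt4   6K     6K          Healthy   4h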

The output above shows that a new replica of the cStor volume has been created and that its STATUS is Offline.

Next, update the desired replication factor of the cStor volume to the new replica count. This is done by editing the corresponding cStor volume CR:

kubectl edit cstorvolume pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8 -n openebs

The following is a snippet of the updated cStor volume CR YAML:
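
This is a minimal sketch showing only the relevant spec fields; the surrounding fields of the CR are omitted, and the capacity is illustrative:

spec:
  capacity: 5G
  consistencyFactor: 1
  desiredReplicationFactor: 2
  replicationFactor: 1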


In the snippet above, desiredReplicationFactor has been updated from 1 to 2. Example output:
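
On a successful edit, kubectl prints a confirmation:

cstorvolume.openebs.io/pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8 edited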

Verify whether rebuilding has started on the new replica of the cStor volume. Once rebuilding completes, the new CVR updates its STATUS to Healthy. Get the latest status of the CVRs using the following command:

kubectl get cvr -n openebs

Example output:
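
Once the rebuild finishes, both replicas report Healthy (sizes and ages are illustrative):

NAME                                                            USED   ALLOCATED   STATUS    AGE
pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8-cstor-disk-pool-2phf   6K     6K          Healthy   15m
pvc-3f86fcdf-02f6-11ea-b0f6-42010a8000f8-cstor-disk-pool-hgt4   6K     6K          Healthy   4h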