cStor User Guide - Install and Setup
This user guide will help you configure cStor storage and use cStor volumes for running your stateful workloads.
note
If you are an existing user of cStor and have set up cStor storage using StoragePoolClaim (SPC), we strongly recommend migrating to CStorPoolCluster (CSPC). CSPC-based cStor uses the Kubernetes CSI driver, provides additional flexibility in how devices are used by cStor, and has better resiliency against node failures. For detailed instructions, refer to the cStor SPC to CSPC migration guide.
Install and Setup
Prerequisites

cStor uses the raw block devices attached to the Kubernetes worker nodes to create cStor pools. Applications connect to cStor volumes using iSCSI. This requires you to ensure the following:
- There are raw (unformatted) block devices attached to the Kubernetes worker nodes. The devices can be either direct attached devices (SSD/HDD) or cloud volumes (GPD, EBS).
- iSCSI utilities are installed on all the worker nodes where stateful applications will be launched. The steps for setting up the iSCSI utilities might vary depending on your Kubernetes distribution. Please see prerequisites verification.
If you are setting up OpenEBS in a new cluster, you can use one of the following steps to install OpenEBS. If OpenEBS is already installed, skip this step.
Using helm:
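A typical invocation looks like the following; chart options can vary between OpenEBS releases, so verify the flags against the chart version you are installing:

```bash
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs --namespace openebs openebs/openebs \
  --set cstor.enabled=true --create-namespace
```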
The above command will install all the default OpenEBS components along with cStor.
Using kubectl:
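For example, applying the cStor operator manifest published with the OpenEBS charts (pin this URL to a specific release tag if you need a fixed version):

```bash
kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml
```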
The above command will install all the required components for running cStor.
Enable cStor on an existing OpenEBS installation
Using helm, you can enable cStor on top of your existing OpenEBS installation as follows:
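A sketch of the upgrade, assuming the existing release is named openebs in the openebs namespace (check the actual release name with helm ls -n openebs):

```bash
# --reuse-values keeps your existing chart settings and only enables cStor
helm upgrade openebs openebs/openebs --namespace openebs \
  --set cstor.enabled=true --reuse-values
```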
Using kubectl:
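For example, using the same operator manifest as above:

```bash
kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml
```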
Verify cStor and NDM pods are running in your cluster.
To get the status of the pods, execute:
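Assuming OpenEBS was installed in the openebs namespace:

```bash
kubectl get pods -n openebs
```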
Sample Output:
Nodes must have disks attached to them. To get the list of attached block devices, execute:
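NDM exposes the discovered devices as BlockDevice custom resources; bd is their short name:

```bash
kubectl get bd -n openebs
```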
Sample Output:
Creating cStor storage pools

You will need to create a Kubernetes custom resource called CStorPoolCluster, specifying the details of the nodes and the devices on those nodes that must be used to set up cStor pools. You can start by copying the following sample CSPC YAML into a file named `cspc.yaml` and modifying it with details from your cluster.
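A minimal sketch of such a CSPC; the node names and block device names below are placeholders that must be replaced with values from your cluster (obtained in the next steps):

```yaml
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-disk-pool
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "worker-node-1"        # placeholder node label value
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-placeholder-1"   # placeholder device name
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "worker-node-2"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-placeholder-2"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "worker-node-3"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-placeholder-3"
      poolConfig:
        dataRaidGroupType: "stripe"
```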
Get all the node labels present in the cluster with the following command; these node labels will be required to modify the CSPC YAML.
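Every standard kubelet sets the kubernetes.io/hostname label used in the next step:

```bash
kubectl get nodes --show-labels
```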
Sample Output:
Modify the CSPC YAML to use the worker nodes. Use the value from the label kubernetes.io/hostname=<node_name>. This label value and the node name could differ on some platforms. In this case, the label values and node names are kubernetes.io/hostname: "worker-node-1", kubernetes.io/hostname: "worker-node-2", and kubernetes.io/hostname: "worker-node-3".

Modify the CSPC YAML file to specify the block devices attached to the selected nodes on which the pool is to be provisioned. You can use the following command to get the available block devices on each of the worker nodes:
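The same BlockDevice listing used earlier shows which devices are unclaimed and available:

```bash
kubectl get bd -n openebs
```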
Sample Output:
The `dataRaidGroupType` can either be set as `stripe` or `mirror`, as per your requirement. In the following example it is configured as `stripe`.

We have named the configuration YAML file `cspc.yaml`.
Execute the following commands to create the CSPC and then verify its status:
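For example (the CSPC in this guide is created in the openebs namespace):

```bash
# Create the CSPC from the file prepared above
kubectl apply -f cspc.yaml

# Check the status of the created CSPC (cspc is the short name for cstorpoolclusters)
kubectl get cspc -n openebs
```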
Sample Output:
Check if the pool instances report their status as ONLINE using the command below:
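Assuming the openebs namespace (cspi is the short name for the CStorPoolInstance resources):

```bash
kubectl get cspi -n openebs
```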
Sample Output:
Once all the pods are in the Running state, these pool instances can be used for the creation of cStor volumes.
Creating cStor storage classes

Defining a StorageClass is an important task in the planning and execution of OpenEBS storage. The real power of the CAS architecture is that it gives each workload an independent, dedicated storage engine like cStor, so that granular policies can be applied to that storage engine to tune its behaviour or performance to the workload's needs.
Steps to create a cStor StorageClass
1. Decide the CStorPoolCluster for which you want to create a StorageClass. Let us say you pick the `cstor-disk-pool` that you created in the above step.

2. Decide the replicaCount based on your requirements and workloads. OpenEBS does not restrict which replica count you set, but a maximum of 5 replicas is allowed. For a volume to remain available, at least (n/2 + 1) replicas must be up and connected to the target, where n is the replicaCount. The replica count should always be less than or equal to the number of cStor Pool Instances (CSPIs). The following are some example cases:
- If a user configures a replica count of 2, both replicas must always be available to perform operations on the volume.
- If a user configures a replica count of 3, at least 2 replicas must be available for the volume to be operational.
- If a user configures a replica count of 5, at least 3 replicas must be available for the volume to be operational.
3. Create a YAML spec file `cstor-csi-disk.yaml` using the template given below. Update the pool, replica count, and other policies. With this sample configuration YAML, a StorageClass will be created with 3 OpenEBS cStor replicas, which will place themselves on the pool instances.
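A sketch of such a StorageClass, assuming the CSPC created above is named cstor-disk-pool; adjust the parameters to match your cluster:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cstor-csi-disk
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-disk-pool   # the CSPC to provision volumes on
  replicaCount: "3"                   # must be <= the number of pool instances
```

To deploy the YAML, execute:

```bash
kubectl apply -f cstor-csi-disk.yaml
```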
To verify, execute:
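The new StorageClass should appear in the list:

```bash
kubectl get sc
```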
Sample Output: