# Installing OpenEBS
This guide will help you to customize and install OpenEBS.
## Prerequisites

If this is your first time installing OpenEBS, make sure that your Kubernetes nodes meet the required prerequisites. At a high level, OpenEBS requires:

- Verify that you have the admin context. If you do not have admin permissions to your cluster, check with your Kubernetes cluster administrator for help with installing OpenEBS; if you are the owner of the cluster, follow the steps in the Troubleshooting section below to create a new admin context and use it for installing OpenEBS.
- Verify that you have Kubernetes version 1.18 or higher.
- Note that each storage engine may have a few additional requirements, such as:
  - iSCSI initiator utils installed for Jiva and cStor volumes (see the sample commands after this list)
  - the right bind mounts set up, depending on the managed Kubernetes platform (for example, Rancher or MicroK8s)
  - a decision about which devices on the nodes should be used by OpenEBS, or whether you need to create LVM Volume Groups or ZFS Pools
- Join the OpenEBS community on Kubernetes Slack.
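For the iSCSI prerequisite, a minimal sketch for Debian/Ubuntu nodes (package and service names differ on other distributions, e.g. iscsi-initiator-utils on RHEL/CentOS):

```bash
# Install and enable the iSCSI initiator on each node
sudo apt-get update
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid

# Confirm the initiator is configured and the service is active
sudo cat /etc/iscsi/initiatorname.iscsi
sudo systemctl status iscsid
```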
## Installation through helm

Verify that helm is installed and the helm repo is updated. You need helm version 3.2 or later.

Set up the helm repository:
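```bash
# Add the OpenEBS helm repository and refresh the local chart index
helm repo add openebs https://openebs.github.io/charts
helm repo update
```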
OpenEBS provides several options that you can customize during install, such as:

- specifying the directory where hostpath volume data is stored, or
- specifying the nodes on which OpenEBS components should be deployed.
The default OpenEBS helm chart installs only the Local PV hostpath and Jiva data engines. Please refer to the OpenEBS helm chart documentation for the full list of customizable options and for enabling cStor and other flavors of OpenEBS data engines by setting the appropriate helm values.
Install the OpenEBS helm chart with default values:
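```bash
# Install the chart as release "openebs" in the openebs namespace (created if absent)
helm install openebs --namespace openebs openebs/openebs --create-namespace
```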
The above command installs the OpenEBS Jiva and Local PV components in the `openebs` namespace, with `openebs` as the chart release name. To install and enable other engines, you can modify the above command as follows:
- cStor
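A sketch with the cStor engine enabled, assuming the chart's `cstor.enabled` value (check the chart documentation for the exact value names in your chart version):

```bash
# Enable the cStor data engine in addition to the defaults
helm install openebs --namespace openebs openebs/openebs --create-namespace \
  --set cstor.enabled=true
```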
To view the installed chart:
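```bash
# List helm releases in the openebs namespace
helm ls -n openebs
```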
As a next step, verify your installation and complete the post-installation steps.
## Installation through kubectl

OpenEBS provides a list of YAMLs that allow you to easily customize and run OpenEBS in your Kubernetes cluster. For a custom installation, download the openebs-operator YAML file, update the configuration, and use the customized YAML with the kubectl command below.
To continue with the default installation mode, use the following command to install OpenEBS. OpenEBS is installed in the `openebs` namespace.
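```bash
# Install the default OpenEBS operator (Jiva and Local PV components)
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
```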
The above command installs the Jiva and Local PV components. To install and enable other engines, you will need to run an additional command per engine, as shown after this list:
- cStor
- Local PV ZFS
- Local PV LVM
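The operator YAMLs follow the same pattern; the URLs below are the ones published under openebs.github.io/charts, so verify them against the documentation for your release:

```bash
# cStor
kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml

# Local PV ZFS
kubectl apply -f https://openebs.github.io/charts/zfs-operator.yaml

# Local PV LVM
kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml
```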
## Verifying OpenEBS installation

Verify pods:

List the pods in the `openebs` namespace:
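```bash
kubectl get pods -n openebs
```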
After a successful installation of OpenEBS, you should see the pods described below in the Running state.

`openebs-ndm` is a DaemonSet; it should be running on all nodes, or on the nodes that are selected through the nodeSelector configuration.

The control plane pods `openebs-provisioner`, `maya-apiserver`, and `openebs-snapshot-operator` should be running. If you have configured nodeSelectors, check whether they are scheduled on the appropriate nodes by listing the pods with `kubectl get pods -n openebs -o wide`.
Verify StorageClasses:

List the StorageClasses to check whether OpenEBS has installed the default StorageClasses:
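```bash
kubectl get sc
```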
After a successful installation, you should see the default StorageClasses created, including `openebs-jiva-default` and `openebs-hostpath`.
## Post-installation considerations

For testing your OpenEBS installation, you can use the default StorageClasses below; a sample test PVC is sketched after this list.

- `openebs-jiva-default` for provisioning Jiva volumes (this uses the `default` pool, which means the data replicas are created in the /var/openebs/ directory of the Jiva replica pod)
- `openebs-hostpath` for provisioning Local PV on hostpath
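As a quick test, here is a minimal sketch of a PersistentVolumeClaim that uses the `openebs-hostpath` StorageClass (the claim name `local-hostpath-pvc` and the 5G size are illustrative):

```bash
# Create a test PVC backed by the openebs-hostpath StorageClass
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc   # illustrative name
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
EOF
```

Note that Local PV hostpath uses WaitForFirstConsumer volume binding, so the claim will remain Pending until a pod consumes it.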
You can follow the user guides below for each of the engines to use storage devices available on the nodes, instead of the /var/openebs directory, to save the data.
## Troubleshooting

### Set cluster-admin user context

For installation of OpenEBS, a cluster-admin user context is a must. OpenEBS installs service accounts and custom resource definitions that only cluster administrators are allowed to create.
Use the `kubectl auth can-i` command to verify that you have the cluster-admin context. You can use the following commands to check your access:
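A couple of representative checks (the resource names are examples; any cluster-scoped resource that OpenEBS creates would work):

```bash
# Can the current context create cluster-scoped CRDs?
kubectl auth can-i create customresourcedefinitions

# Can it create service accounts in any namespace?
kubectl auth can-i create serviceaccounts --all-namespaces
```

Both commands should print yes for a cluster-admin context.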
If there is no cluster-admin user context already present, create one and use it. Use the following command to create a new context.
Example:
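A minimal sketch using `kubectl config set-context` (the context name `admin-ctx`, cluster name, and user are placeholders; substitute your cluster's values):

```bash
# General form
kubectl config set-context <context-name> --cluster=<cluster-name> --user=<cluster-admin-user>

# Example
kubectl config set-context admin-ctx --cluster=my-cluster --user=cluster-admin
```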
Set the existing cluster-admin user context or the newly created context by using the following command.
Example:
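Assuming the context created above:

```bash
# General form
kubectl config use-context <context-name>

# Example
kubectl config use-context admin-ctx
```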