OpenEBS Local PV Hostpath User Guide
This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by Hostpath.
OpenEBS Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using a unique Hostpath (directory) on the node to persist data, hereafter referred to as OpenEBS Local PV Hostpath volumes.
OpenEBS Local PV Hostpath volumes have the following advantages compared to native Kubernetes hostpath volumes.
- OpenEBS Local PV Hostpath allows your applications to access hostpath via StorageClass, PVC, and PV. This provides you the flexibility to change the PV providers without having to redesign your Application YAML.
- Data protection using the Velero Backup and Restore.
- Protect against hostpath security vulnerabilities by masking the hostpath completely from the application YAML and pod.
OpenEBS Local PV uses the volume topology-aware pod scheduling enhancements introduced by Kubernetes Local Volumes.
QUICKSTART
OpenEBS Local PV Hostpath volumes will be created under the /var/openebs/local directory. You can customize the location by configuring install parameters or by creating a new StorageClass.
If you have OpenEBS already installed, you can create an example pod that persists data to an OpenEBS Local PV Hostpath volume with the following kubectl commands.
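For example, assuming you have saved the PersistentVolumeClaim and Pod definitions from later in this guide as local-hostpath-pvc.yaml and local-hostpath-pod.yaml:

```shell
kubectl apply -f local-hostpath-pvc.yaml
kubectl apply -f local-hostpath-pod.yaml
```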
Verify using the below kubectl commands that the example pod is running and is using an OpenEBS Local PV Hostpath volume.
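For example, using the pod and PVC names from the walkthrough later in this guide:

```shell
kubectl get pod hello-local-hostpath-pod
kubectl get pvc local-hostpath-pvc
```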
For a more detailed walkthrough of the setup, follow along with the rest of this document.
Minimum Versions#
- Kubernetes 1.12 or higher is required.
- OpenEBS 1.0 or higher is required.
Prerequisites#
Set up the directory on the nodes where Local PV Hostpaths will be created. This directory will be referred to as BasePath. The default location is /var/openebs/local.
BasePath can be any of the following:
- A directory on the root disk (or OS disk). (Example: /var/openebs/local)
- In the case of bare-metal Kubernetes nodes, a mounted directory using an additional drive or SSD. (Example: An SSD available at /dev/sdb can be formatted with Ext4 and mounted as /mnt/openebs-local)
- In the case of cloud or virtual instances, a mounted directory created by attaching an external cloud volume or virtual disk. (Example: In GKE, a Local SSD can be used, which will be available at /mnt/disk/ssd1.)
Air-gapped environment
If you are running your Kubernetes cluster in an air-gapped environment, make sure the following container images are available in your local repository.
- openebs/localpv-provisioner
- openebs/linux-utils
Rancher RKE cluster
If you are using a Rancher RKE cluster, you must configure the kubelet service with extra_binds for the BasePath. If your BasePath is the default directory /var/openebs/local, then the extra_binds section should have the following details:
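A sketch of the relevant snippet of the RKE cluster.yml, assuming the default BasePath:

```yaml
services:
  kubelet:
    extra_binds:
      - /var/openebs/local:/var/openebs/local
```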
Install#
You can skip this section if you have already installed OpenEBS.
Prepare to install OpenEBS by providing custom values for configurable parameters.
OpenEBS Dynamic Local Provisioner offers some configurable parameters that can be applied during the OpenEBS Installation. Some key configurable parameters available for OpenEBS Dynamic Local Provisioner are:
- The location of the OpenEBS Dynamic Local PV provisioner container image.
- The location of the Provisioner Helper container image. The OpenEBS Dynamic Local Provisioner creates a Provisioner Helper pod to create and delete hostpath directories on the nodes.
- The absolute path on the node where the Hostpath directory of a Local PV volume will be created.
You can proceed to install OpenEBS using either kubectl or Helm with the steps below.
Install using kubectl
If you would like to change the default values for any of the configurable parameters mentioned in the previous step, download the openebs-operator.yaml and make the necessary changes before applying.
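For example, to apply the full operator YAML directly from the OpenEBS charts repository (verify the URL against the release you intend to install):

```shell
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
```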
note
If you would like to use only Local PV (hostpath and device), you can install a lite version of OpenEBS using the following commands.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f https://openebs.github.io/charts/openebs-lite-sc.yaml
Install using OpenEBS helm charts
If you would like to change the default values for any of the configurable parameters mentioned in the previous step, specify each parameter using the --set key=value[,key=value] argument to helm install.
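A typical Helm-based install might look like the following. The localprovisioner.basePath key is shown only as an illustrative override; the exact key can differ between chart versions, so check the chart's values before setting it:

```shell
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace \
  --set localprovisioner.basePath=/var/openebs/local
```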
Create StorageClass#
You can skip this section if you would like to use the default OpenEBS Local PV Hostpath StorageClass created by OpenEBS.
The default Storage Class is called openebs-hostpath and its BasePath is configured as /var/openebs/local.
To create your own StorageClass with a custom BasePath, save the following StorageClass definition as local-hostpath-sc.yaml.
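A sketch of such a definition, assuming a custom BasePath of /mnt/openebs-local; the name local-hostpath matches the StorageClass referenced later in this guide, and both values should be adjusted to your environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /mnt/openebs-local
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```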
(Optional) Custom Node Labelling#
You can use custom node affinity labels instead of the hostname in the hostpath provisioner. This helps in cases where the hostname changes when the node is removed and added back with the disks still intact. For example, if the custom node label is openebs.io/custom-node-unique-id, it can be added to the StorageClass config under metadata.annotations.
note
The volumeBindingMode MUST ALWAYS be set to WaitForFirstConsumer. volumeBindingMode: WaitForFirstConsumer instructs Kubernetes to initiate the creation of the PV only after the Pod using the PVC is scheduled to a node.
Edit local-hostpath-sc.yaml and update it with your desired values for metadata.name and cas.openebs.io/config.BasePath.
note
If the BasePath does not exist on the node, the OpenEBS Dynamic Local PV Provisioner will attempt to create the directory when the first Local Volume is scheduled on to that node. You MUST ensure that the value provided for BasePath is a valid absolute path.
Create the OpenEBS Local PV Hostpath StorageClass.
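For example:

```shell
kubectl apply -f local-hostpath-sc.yaml
```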
Verify that the StorageClass is successfully created.
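For example:

```shell
kubectl get sc local-hostpath
```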
Install verification#
Once you have installed OpenEBS, verify that the OpenEBS Local PV provisioner is running and that the Hostpath StorageClass is created.
To verify that the OpenEBS Local PV provisioner is running, execute the following command. Replace -n openebs with the namespace where you installed OpenEBS.
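For example, listing the pods in the OpenEBS namespace:

```shell
kubectl get pods -n openebs
```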
The output should indicate that the openebs-localpv-provisioner pod is running.
To verify that the OpenEBS Local PV Hostpath StorageClass is created, execute the following command.
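For example:

```shell
kubectl get sc
```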
The output should show the default StorageClass openebs-hostpath and/or the custom StorageClass local-hostpath.
Create a PersistentVolumeClaim#
The next step is to create a PersistentVolumeClaim. Pods will use PersistentVolumeClaims to request a Hostpath Local PV from the OpenEBS Dynamic Local PV provisioner.
Here is the configuration file for the PersistentVolumeClaim. Save the following PersistentVolumeClaim definition as local-hostpath-pvc.yaml, then create the PersistentVolumeClaim.
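A minimal sketch of local-hostpath-pvc.yaml, assuming the local-hostpath StorageClass created above; the 5Gi request is an illustrative value:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: local-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Apply it to create the PVC:

```shell
kubectl apply -f local-hostpath-pvc.yaml
```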
Look at the PersistentVolumeClaim:
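For example:

```shell
kubectl get pvc local-hostpath-pvc
```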
The output shows that the STATUS is Pending. This means the PVC has not yet been used by an application pod. The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.
Create Pod to consume OpenEBS Local PV Hostpath Storage#
Here is the configuration file for the Pod that uses the Local PV. Save the following Pod definition to local-hostpath-pod.yaml.
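A minimal sketch of such a Pod. The pod name hello-local-hostpath-pod and the volume name local-storage are the ones used elsewhere in this guide; the busybox image, mount path, and greeting loop are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-local-hostpath-pod
spec:
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: local-hostpath-pvc
  containers:
  - name: hello-container
    image: busybox
    command:
    - sh
    - "-c"
    # Append a timestamped greeting to the volume every 5 minutes.
    - 'while true; do echo "$(date) Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep 300; done'
    volumeMounts:
    - mountPath: /mnt/store
      name: local-storage
```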
note
As the Local PV storage classes use waitForFirstConsumer, do not use nodeName in the Pod spec to specify node affinity. If nodeName is used in the Pod spec, the PVC will remain in a Pending state. For more details, refer to https://github.com/openebs/openebs/issues/2915.
Create the Pod:
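```shell
kubectl apply -f local-hostpath-pod.yaml
```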
Verify that the container in the Pod is running.
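For example:

```shell
kubectl get pod hello-local-hostpath-pod
```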
Verify that the data is being written to the volume.
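For example, assuming the mount path and greeting file from the Pod sketch above:

```shell
kubectl exec hello-local-hostpath-pod -- cat /mnt/store/greet.txt
```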
Verify that the container is using the Local PV Hostpath.
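For example:

```shell
kubectl describe pod hello-local-hostpath-pod
```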
The output shows that the Pod is running on Node: gke-user-helm-default-pool-3a63aff5-1tmf and is using the persistent volume provided by local-hostpath-pvc.
Look at the PersistentVolumeClaim again to see the details about the dynamically provisioned Local PersistentVolume.
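For example:

```shell
kubectl get pvc local-hostpath-pvc
```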
The output shows that the STATUS is Bound. A new PersistentVolume pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 has been created.
Look at the PersistentVolume details to see where the data is stored. Replace the PVC name with the one that was displayed in the previous step.
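For example, using the PV name shown above:

```shell
kubectl get pv pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 -o yaml
```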
The output shows that the PV was provisioned in response to the PVC request spec.claimRef.name: local-hostpath-pvc.
note
A few important characteristics of an OpenEBS Local PV can be seen from the above output:
- spec.nodeAffinity specifies the Kubernetes node where the Pod using the Hostpath volume is scheduled.
- spec.local.path specifies the unique subdirectory under the BasePath (/var/openebs/local) defined in the corresponding StorageClass.
Cleanup#
Delete the Pod, the PersistentVolumeClaim, and the StorageClass that you might have created.
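For example, assuming the names used in this guide:

```shell
kubectl delete pod hello-local-hostpath-pod
kubectl delete pvc local-hostpath-pvc
kubectl delete sc local-hostpath
```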
Verify that the PV that was dynamically created is also deleted.
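For example:

```shell
kubectl get pv
```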
Backup and Restore#
OpenEBS Local Volumes can be backed up and restored along with the application using Velero.
note
The following steps assume that you already have Velero with Restic integration configured. If not, please follow the Velero documentation to install and set up Velero. If you encounter any issues or have questions, talk to us on the #openebs channel on the Kubernetes Slack server.
Backup#
The following steps will help you to prepare and back up the data from the volume created for the example pod (hello-local-hostpath-pod), with the volume mount (local-storage).
Prepare the application pod for backup. Velero uses Kubernetes labels to select the pods that need to be backed up, and annotations on the pods to determine which volumes need to be backed up. For the example pod launched in this guide, you can instruct Velero to back it up by specifying the following label and annotation.
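A sketch of the label and annotation. The label key/value app=hello-local-hostpath is an illustrative choice; the backup.velero.io/backup-volumes annotation names the volume mount (local-storage) that Velero's Restic integration should back up:

```shell
kubectl label pod hello-local-hostpath-pod app=hello-local-hostpath
kubectl annotate pod hello-local-hostpath-pod backup.velero.io/backup-volumes=local-storage
```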
Create a backup using Velero.
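For example, assuming the illustrative label from the previous step and a backup named hello-local-hostpath-backup:

```shell
velero backup create hello-local-hostpath-backup --selector app=hello-local-hostpath
```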
Verify that the backup is successful.
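For example:

```shell
velero backup describe hello-local-hostpath-backup --details
```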
On successful completion of the backup, the output of the backup describe command will show the following:
Restore#
Install and set up Velero with the same provider where backups were saved. Verify that backups are accessible.
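For example:

```shell
velero backup get
```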
The output should display the backups that were taken successfully.
Restore the application.
note
Local PVs are created with node affinity. As the node names will change when a new cluster is created, create the required PVC(s) prior to proceeding with restore.
Replace the path to the PVC YAML in the below commands with the PVC that you have created.
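For example, assuming the PVC definition from earlier in this guide and the illustrative backup name used above:

```shell
kubectl apply -f local-hostpath-pvc.yaml
velero restore create --from-backup hello-local-hostpath-backup
```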
Verify that the application is restored.
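For example:

```shell
velero restore get
kubectl get pod hello-local-hostpath-pod
```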
Depending on the data, it may take a while to initialize the volume. On successful restore, the output of the above command should show:
Verify that data has been restored. The application pod used in this example writes periodic messages (greetings) to the volume.
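For example, assuming the mount path from the Pod sketch above:

```shell
kubectl exec hello-local-hostpath-pod -- cat /mnt/store/greet.txt
```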
The output will show the backed-up data as well as new greetings that started appearing after the application pod was restored.
Troubleshooting#
Review the logs of the OpenEBS Local PV provisioner. OpenEBS Dynamic Local Provisioner logs can be fetched using the following command.
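For example, assuming the provisioner deployment is named openebs-localpv-provisioner and runs in the openebs namespace:

```shell
kubectl logs -n openebs deployment/openebs-localpv-provisioner
```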
Support#
If you encounter issues or have a question, file a GitHub issue, or talk to us on the #openebs channel on the Kubernetes Slack server.