This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by Hostpath.
OpenEBS Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using a unique Hostpath (directory) on the node to persist data, hereafter referred to as OpenEBS Local PV Hostpath volumes.
OpenEBS Local PV Hostpath volumes have the following advantages compared to native Kubernetes hostpath volumes.
- OpenEBS Local PV Hostpath allows your applications to access hostpath via StorageClass, PVC, and PV. This provides you the flexibility to change the PV providers without having to redesign your Application YAML.
- Data protection using the Velero Backup and Restore.
- Protect against hostpath security vulnerabilities by masking the hostpath completely from the application YAML and pod.
OpenEBS Local PV uses the volume topology aware pod scheduling enhancements introduced by Kubernetes Local Volumes.
If you have OpenEBS already installed, you can create an example pod that persists data to OpenEBS Local PV Hostpath with the following kubectl commands.
Verify using the kubectl commands below that the example pod is running and is using an OpenEBS Local PV Hostpath.
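As a quick-start sketch, the steps above might look like the following. The PVC manifest URL is referenced later in this guide; the pod manifest URL and the resource names are assumptions based on the walkthrough below.

```shell
# PVC manifest URL is referenced later in this guide.
kubectl apply -f https://openebs.github.io/charts/examples/local-hostpath/local-hostpath-pvc.yaml
# Pod manifest URL is an assumption; see the Pod definition later in this guide.
kubectl apply -f https://openebs.github.io/charts/examples/local-hostpath/local-hostpath-pod.yaml

# Verify the pod is running and the PVC is bound.
kubectl get pod hello-local-hostpath-pod
kubectl get pvc local-hostpath-pvc
```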
For a more detailed walkthrough of the setup, follow along the rest of this document.
- Kubernetes 1.12 or higher is required.
- OpenEBS 1.0 or higher is required.
Set up the directory on the nodes where Local PV Hostpaths will be created. This directory will be referred to as BasePath. The default location is /var/openebs/local.

BasePath can be any of the following:
- A directory on the root disk (or OS disk).
- In the case of bare-metal Kubernetes nodes, a mounted directory using an additional drive or SSD. (Example: an SSD available at /dev/sdb can be formatted with Ext4 and mounted as a directory on the node.)
- In the case of cloud or virtual instances, a mounted directory created by attaching an external cloud volume or virtual disk. (Example: in GKE, a mounted Local SSD can be used.)
If you are running your Kubernetes cluster in an air-gapped environment, make sure the following container images are available in your local repository.
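Based on the default parameter values listed later in this guide, these likely include the provisioner and helper images. A mirroring sketch, in which the registry name and image tag are hypothetical placeholders:

```shell
# Hypothetical mirror registry and tag; replace with your own values.
REGISTRY=registry.local:5000
TAG=latest

for IMAGE in openebs/provisioner-localpv openebs/linux-utils; do
  docker pull "${IMAGE}:${TAG}"
  docker tag "${IMAGE}:${TAG}" "${REGISTRY}/${IMAGE}:${TAG}"
  docker push "${REGISTRY}/${IMAGE}:${TAG}"
done
```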
Rancher RKE cluster
If you are using a Rancher RKE cluster, you must configure the kubelet service with an extra_binds entry for the BasePath. If your BasePath is the default directory /var/openebs/local, then the extra_binds section should have the following details:
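A likely extra_binds entry in the RKE cluster configuration for the default BasePath; this is a sketch, so verify it against your RKE cluster.yml:

```yaml
services:
  kubelet:
    extra_binds:
      - /var/openebs/local:/var/openebs/local
```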
You can skip this section if you have already installed OpenEBS.
Prepare to install OpenEBS by providing custom values for configurable parameters.
OpenEBS Dynamic Local Provisioner offers several configurable parameters that can be applied during OpenEBS installation. Key parameters include:
- The location of the OpenEBS Dynamic Local PV provisioner container image.
  - Default value: openebs/provisioner-localpv
  - YAML specification: spec.image on Deployment(localpv-provisioner)
  - Helm key: localprovisioner.image
- The location of the Provisioner Helper container image. OpenEBS Dynamic Local Provisioner creates a Provisioner Helper pod to create and delete hostpath directories on the nodes.
  - Default value: openebs/linux-utils
  - YAML specification: Environment Variable (OPENEBS_IO_HELPER_IMAGE) on Deployment(localpv-provisioner)
  - Helm key: helper.image
- The absolute path on the node where the Hostpath directory of a Local PV volume will be created.
  - Default value: /var/openebs/local
  - YAML specification: Environment Variable (OPENEBS_IO_LOCALPV_HOSTPATH_DIR) on Deployment(maya-apiserver)
  - Helm key: localprovisioner.basePath
You can proceed to install OpenEBS either using kubectl or helm using the steps below.
Install using kubectl
If you would like to change the default values for any of the configurable parameters mentioned in the previous step, download the openebs-operator.yaml and make the necessary changes before applying.

```shell
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
```
If you would like to use only Local PV (hostpath and device), you can install a lite version of OpenEBS using the following command.
```shell
kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f https://openebs.github.io/charts/openebs-lite-sc.yaml
```
Install using OpenEBS helm charts
If you would like to change the default values for any of the configurable parameters mentioned in the previous step, specify each parameter using the `--set key=value[,key=value]` argument to helm install.

```shell
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs
```
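For example, to change the BasePath via the Helm key listed in the parameters above, you might pass a `--set` override; the path here is a hypothetical placeholder:

```shell
# /mnt/openebs/local is a hypothetical BasePath; replace with your own.
helm install --namespace openebs --name openebs openebs/openebs \
  --set localprovisioner.basePath=/mnt/openebs/local
```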
You can skip this section if you would like to use default OpenEBS Local PV Hostpath StorageClass created by OpenEBS.
The default StorageClass is called openebs-hostpath and its BasePath is configured as /var/openebs/local.
To create your own StorageClass with a custom BasePath, save the following StorageClass definition as local-hostpath-sc.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
In Kubernetes, Hostpath LocalPV identifies nodes using labels such as kubernetes.io/hostname=&lt;node-name&gt;. However, these default labels might not ensure that each node is distinct across the entire cluster. To solve this, you can create custom labels. As an admin, you can define and set these labels when configuring a StorageClass. Here is a sample StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: NodeAffinityLabels
        list:
          - "openebs.io/custom-node-unique-id"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```
Using NodeAffinityLabels does not influence scheduling of the application Pod. Use the Kubernetes allowedTopologies feature to configure scheduling options.
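As a sketch of that option, a StorageClass can carry an allowedTopologies section to restrict where volumes are provisioned; the node name below is a hypothetical placeholder:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath-topo
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - worker-node-1   # hypothetical node name; replace with your own
```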
Edit local-hostpath-sc.yaml and update it with your desired values. If the directory specified by BasePath does not exist on the node, the OpenEBS Dynamic Local PV Provisioner will attempt to create it when the first Local Volume is scheduled onto that node. You MUST ensure that the value provided for BasePath is a valid absolute path.
Create the OpenEBS Local PV Hostpath StorageClass:

```shell
kubectl apply -f local-hostpath-sc.yaml
```
Verify that the StorageClass is successfully created:

```shell
kubectl get sc local-hostpath -o yaml
```
Once you have installed OpenEBS, verify that OpenEBS Local PV provisioner is running and Hostpath StorageClass is created.
To verify that the OpenEBS Local PV provisioner is running, execute the following command. Replace `-n openebs` with the namespace where you installed OpenEBS.

```shell
kubectl get pods -n openebs -l openebs.io/component-name=openebs-localpv-provisioner
```

The output should indicate that the openebs-localpv-provisioner pod is running.

```
NAME                                           READY   STATUS    RESTARTS   AGE
openebs-localpv-provisioner-5ff697f967-nb7f4   1/1     Running   0          2m49s
```
To verify that the OpenEBS Local PV Hostpath StorageClass is created, execute the following command.

```shell
kubectl get sc
```

The output should show the default StorageClass openebs-hostpath and/or the custom StorageClass local-hostpath.

```
NAME               PROVISIONER        AGE
local-hostpath     openebs.io/local   5h26m
openebs-hostpath   openebs.io/local   6h4m
```
The next step is to create a PersistentVolumeClaim. Pods will use PersistentVolumeClaims to request Hostpath Local PV from OpenEBS Dynamic Local PV provisioner.
Here is the configuration file for the PersistentVolumeClaim. Save the following PersistentVolumeClaim definition as local-hostpath-pvc.yaml:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
```
Create the PersistentVolumeClaim:

```shell
kubectl apply -f local-hostpath-pvc.yaml
```

Look at the PersistentVolumeClaim:

```shell
kubectl get pvc local-hostpath-pvc
```
The output shows that the PVC status is Pending. This means the PVC has not yet been used by an application pod. The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.

```
NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
local-hostpath-pvc   Pending                                      openebs-hostpath   3m7s
```
Here is the configuration file for the Pod that uses Local PV. Save the following Pod definition to local-hostpath-pod.yaml.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-local-hostpath-pod
spec:
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: local-hostpath-pvc
  containers:
  - name: hello-container
    image: busybox
    command:
    - sh
    - -c
    - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
    volumeMounts:
    - mountPath: /mnt/store
      name: local-storage
```
As the Local PV storage classes use volumeBindingMode: WaitForFirstConsumer, do not use nodeName in the Pod spec to specify node affinity. If nodeName is used in the Pod spec, the PVC will remain in the Pending state. For more details, refer to https://github.com/openebs/openebs/issues/2915.
Create the Pod:

```shell
kubectl apply -f local-hostpath-pod.yaml
```

Verify that the container in the Pod is running:

```shell
kubectl get pod hello-local-hostpath-pod
```

Verify that the data is being written to the volume:

```shell
kubectl exec hello-local-hostpath-pod -- cat /mnt/store/greet.txt
```

Verify that the container is using the Local PV Hostpath:

```shell
kubectl describe pod hello-local-hostpath-pod
```
The output shows that the Pod is running on Node gke-user-helm-default-pool-3a63aff5-1tmf and using the persistent volume provided by local-hostpath-pvc.

```
Name:         hello-local-hostpath-pod
Namespace:    default
Priority:     0
Node:         gke-user-helm-default-pool-3a63aff5-1tmf/10.128.0.28
Start Time:   Thu, 16 Apr 2020 17:56:04 +0000
...
Volumes:
  local-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  local-hostpath-pvc
    ReadOnly:   false
...
```
Look at the PersistentVolumeClaim again to see the details about the dynamically provisioned Local PersistentVolume:

```shell
kubectl get pvc local-hostpath-pvc
```

The output shows that the PVC status is Bound. A new PersistentVolume pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 has been created.

```
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
local-hostpath-pvc   Bound    pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425   5G         RWO            openebs-hostpath   28m
```
Look at the PersistentVolume details to see where the data is stored. Replace the PVC name with the one that was displayed in the previous step.

```shell
kubectl get pv pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 -o yaml
```

The output shows that the PV was provisioned in response to the PVC request (spec.claimRef.name: local-hostpath-pvc).

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425
  annotations:
    pv.kubernetes.io/provisioned-by: openebs.io/local
  ...
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5G
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: local-hostpath-pvc
    namespace: default
    resourceVersion: "291148"
    uid: 864a5ac8-dd3f-416b-9f4b-ffd7d285b425
  ...
  local:
    fsType: ""
    path: /var/openebs/local/pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - gke-user-helm-default-pool-3a63aff5-1tmf
  persistentVolumeReclaimPolicy: Delete
  storageClassName: openebs-hostpath
  volumeMode: Filesystem
status:
  phase: Bound
```
A few important characteristics of an OpenEBS Local PV can be seen from the above output:
- spec.nodeAffinity specifies the Kubernetes node that holds the volume data; Pods using the Hostpath volume are scheduled onto that node.
- spec.local.path specifies the unique subdirectory under the BasePath (/var/openebs/local) defined in the corresponding StorageClass.
Delete the Pod, the PersistentVolumeClaim and StorageClass that you might have created.
Verify that the PV that was dynamically created is also deleted.
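The cleanup steps above can be sketched as follows, assuming the resource names used in this guide:

```shell
kubectl delete pod hello-local-hostpath-pod
kubectl delete pvc local-hostpath-pvc
kubectl delete sc local-hostpath

# Verify that the dynamically created PV is also deleted
# (the PV name is the one reported by `kubectl get pvc` earlier).
kubectl get pv
```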
OpenEBS Local Volumes can be backed up and restored along with the application using Velero.
The following steps assume that you already have Velero with Restic integration configured. If not, please follow the Velero documentation to install and set up Velero. If you encounter any issues or have questions, talk to us on the #openebs channel on the Kubernetes Slack server.
The following steps will help you to prepare and back up the data from the volume created for the example pod (hello-local-hostpath-pod), with the volume mount (local-storage).
Prepare the application pod for backup. Velero uses Kubernetes labels to select the pods that need to be backed up, and annotations on the pods to determine which volumes need to be backed up. For the example pod launched in this guide, you can inform Velero to back it up by specifying the following label and annotation.

```shell
kubectl label pod hello-local-hostpath-pod app=test-velero-backup
kubectl annotate pod hello-local-hostpath-pod backup.velero.io/backup-volumes=local-storage
```
Create a backup using Velero:

```shell
velero backup create bbb-01 -l app=test-velero-backup
```

Verify that the backup is successful:

```shell
velero backup describe bbb-01 --details
```

On successful completion of the backup, the output of the backup describe command will show the following:

```
...
Restic Backups:
  Completed:
    default/hello-local-hostpath-pod: local-storage
```
Install and set up Velero with the same provider where backups were saved. Verify that the backups are accessible.

```shell
velero backup get
```

The output should display the backups that were taken successfully.

```
NAME     STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
bbb-01   Completed   2020-04-25 15:49:46 +0000 UTC   29d       default            app=test-velero-backup
```
Restore the application.
Local PVs are created with node affinity. As the node names will change when a new cluster is created, create the required PVC(s) prior to proceeding with restore.
Replace the path to the PVC YAML in the commands below with the PVC that you have created.

```shell
kubectl apply -f https://openebs.github.io/charts/examples/local-hostpath/local-hostpath-pvc.yaml
velero restore create rbb-01 --from-backup bbb-01 -l app=test-velero-backup
```
Verify that the application is restored:

```shell
velero restore describe rbb-01
```

Depending on the data, it may take a while to initialize the volume. On successful restore, the output of the above command should show:

```
...
Restic Restores (specify --details for more information):
  Completed:  1
```
Verify that the data has been restored. The application pod used in this example writes periodic messages (greetings) to the volume.

```shell
kubectl exec hello-local-hostpath-pod -- cat /mnt/store/greet.txt
```

The output will show the backed-up data as well as new greetings that started appearing after the application pod was restored.

```
Sat Apr 25 15:41:30 UTC 2020 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
Sat Apr 25 15:46:30 UTC 2020 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
Sat Apr 25 16:11:25 UTC 2020 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
```
Review the logs of the OpenEBS Local PV provisioner. OpenEBS Dynamic Local Provisioner logs can be fetched with kubectl logs.
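A likely way to fetch these logs, assuming the component label shown earlier in this guide and the openebs namespace (replace the namespace if you installed OpenEBS elsewhere):

```shell
kubectl logs -n openebs -l openebs.io/component-name=openebs-localpv-provisioner
```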