- See Local PV LVM Deployment to deploy Local PV LVM.
- See Local PV ZFS Deployment to deploy Local PV ZFS.
- See Replicated PV Mayastor Deployment to deploy Replicated PV Mayastor.
Deploy an Application
This section walks you through deploying a sample application that consumes an OpenEBS Local PV Hostpath volume.
Create a PersistentVolumeClaim
The next step is to create a PersistentVolumeClaim. Pods use PersistentVolumeClaims to request Hostpath Local PV storage from the OpenEBS Dynamic Local PV provisioner.
- Here is the configuration file for the PersistentVolumeClaim. Save the following PersistentVolumeClaim definition as `local-hostpath-pvc.yaml`:

  ```yaml
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: local-hostpath-pvc
  spec:
    storageClassName: openebs-hostpath
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5G
  ```
- Create the PersistentVolumeClaim:

  ```
  kubectl apply -f local-hostpath-pvc.yaml
  ```
- Look at the PersistentVolumeClaim:

  ```
  kubectl get pvc local-hostpath-pvc
  ```

  The output shows that the `STATUS` is `Pending`. This means the PVC has not yet been used by an application pod. The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.

  ```
  NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
  local-hostpath-pvc   Pending                                      openebs-hostpath   3m7s
  ```
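For reference, the `openebs-hostpath` StorageClass that the PVC above refers to typically looks like the following. This is an illustrative sketch of the default installed by OpenEBS; the exact annotations and `BasePath` may differ in your installation, so verify with `kubectl get sc openebs-hostpath -o yaml`.

```yaml
# Illustrative default hostpath StorageClass (verify against your cluster).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```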
Create Pod to Consume OpenEBS Local PV Hostpath Storage
- Here is the configuration file for the Pod that uses Local PV. Save the following Pod definition to `local-hostpath-pod.yaml`:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: hello-local-hostpath-pod
  spec:
    volumes:
      - name: local-storage
        persistentVolumeClaim:
          claimName: local-hostpath-pvc
    containers:
      - name: hello-container
        image: busybox
        command:
          - sh
          - -c
          - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
        volumeMounts:
          - mountPath: /mnt/store
            name: local-storage
  ```

  :::note
  As the Local PV storage classes use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec to specify node affinity. If `nodeName` is used in the Pod spec, then the PVC will remain in a `Pending` state. See here for more details.
  :::
- Create the Pod:

  ```
  kubectl apply -f local-hostpath-pod.yaml
  ```
- Verify that the container in the Pod is running:

  ```
  kubectl get pod hello-local-hostpath-pod
  ```
- Verify that the data is being written to the volume:

  ```
  kubectl exec hello-local-hostpath-pod -- cat /mnt/store/greet.txt
  ```
- Verify that the container is using the Local PV Hostpath:

  ```
  kubectl describe pod hello-local-hostpath-pod
  ```

  The output shows that the Pod is running on `Node: gke-user-helm-default-pool-3a63aff5-1tmf` and using the persistent volume provided by `local-hostpath-pvc`.

  ```
  Name:         hello-local-hostpath-pod
  Namespace:    default
  Priority:     0
  Node:         gke-user-helm-default-pool-3a63aff5-1tmf/10.128.0.28
  Start Time:   Thu, 16 Apr 2020 17:56:04 +0000
  ...
  Volumes:
    local-storage:
      Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName:  local-hostpath-pvc
      ReadOnly:   false
  ...
  ```
- Look at the PersistentVolumeClaim again to see the details of the dynamically provisioned Local PersistentVolume:

  ```
  kubectl get pvc local-hostpath-pvc
  ```

  The output shows that the `STATUS` is `Bound`. A new PersistentVolume `pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425` has been created.

  ```
  NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
  local-hostpath-pvc   Bound    pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425   5G         RWO            openebs-hostpath   28m
  ```
- Look at the PersistentVolume details to see where the data is stored. Replace the PVC name with the one displayed in the previous step:

  ```
  kubectl get pv pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 -o yaml
  ```

  The output shows that the PV was provisioned in response to the PVC request `spec.claimRef.name: local-hostpath-pvc`.

  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425
    annotations:
      pv.kubernetes.io/provisioned-by: openebs.io/local
    ...
  spec:
    accessModes:
      - ReadWriteOnce
    capacity:
      storage: 5G
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: local-hostpath-pvc
      namespace: default
      resourceVersion: "291148"
      uid: 864a5ac8-dd3f-416b-9f4b-ffd7d285b425
    ...
    local:
      fsType: ""
      path: /var/openebs/local/pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425
    nodeAffinity:
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - gke-user-helm-default-pool-3a63aff5-1tmf
    persistentVolumeReclaimPolicy: Delete
    storageClassName: openebs-hostpath
    volumeMode: Filesystem
  status:
    phase: Bound
  ```
A few important characteristics of an OpenEBS Local PV can be seen from the above output:

- `spec.nodeAffinity` specifies the Kubernetes node where the Pod using the Hostpath volume is scheduled.
- `spec.local.path` specifies the unique subdirectory under the `BasePath` (`/var/openebs/local`) defined in the corresponding StorageClass.
Deploy Stateful Workloads
Application developers launch their applications (stateful workloads), which in turn create PersistentVolumeClaims to request storage, or volumes, for their pods. Platform teams can provide application templates with associated PVCs, or application developers can select from the list of StorageClasses available to them.
As an application developer, all you have to do is substitute the StorageClass in your PVCs with one of the OpenEBS StorageClasses available in your Kubernetes cluster.
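As a sketch of that substitution, the PVC below requests storage for a hypothetical application; the claim name and size are placeholders, and only the `storageClassName` ties it to OpenEBS:

```yaml
# Hypothetical PVC for a stateful workload; apart from storageClassName,
# this is a standard Kubernetes PersistentVolumeClaim.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-app-data                    # placeholder claim name
spec:
  storageClassName: openebs-hostpath   # or any other OpenEBS StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10G                     # placeholder size
```

Reference this claim from the workload's volume spec (for example, via `persistentVolumeClaim.claimName` in a Pod, or a `volumeClaimTemplates` entry in a StatefulSet).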
Here are examples of some applications using OpenEBS:
- PostgreSQL
- Percona
- Redis
- MongoDB
- Cassandra
- Prometheus
- Elastic
- MinIO
Managing the Life Cycle of OpenEBS Components
Once the workloads are up and running, the platform or operations team can observe the system using cloud-native tools such as Prometheus and Grafana. The operational tasks are a shared responsibility across the teams:
- Application teams can watch capacity and performance and tune their PVCs accordingly.
- Platform or cluster teams can check the utilization and performance of the storage per node and decide on expanding or spreading out the Data Engines.
- The infrastructure team is responsible for planning expansion or optimization based on the utilization of the resources.