Replicated PV Mayastor Installation on OpenShift
This document provides instructions for installing Replicated PV Mayastor on OpenShift. Using OpenEBS Replicated PV Mayastor with OpenShift offers several benefits for persistent storage management in Kubernetes environments, especially in the context of DevOps and Cloud-Native applications.
Cloud-Native and Container-Aware Storage: OpenEBS is designed to work in a cloud-native, containerized environment that aligns well with OpenShift's architecture. It offers Container Native Storage (CNS), which runs as microservices in the Kubernetes cluster, providing dynamic storage provisioning with high flexibility.
Dynamic and Scalable Storage: OpenEBS allows the provisioning of persistent volumes dynamically. This is particularly useful in OpenShift environments where applications may scale rapidly and on demand, with minimal manual intervention.
Storage for Stateful Applications: OpenShift often hosts stateful applications like databases (MySQL, PostgreSQL, Cassandra), message queues, and other services requiring persistent storage. OpenEBS supports various storage engines, such as Replicated PV Mayastor, enabling optimized storage performance depending on the workload type.
Simplified Storage Operations: With OpenEBS, storage can be managed and operated by DevOps teams without requiring specialized storage administrators. It abstracts away the complexity of traditional storage solutions, providing a Kubernetes-native experience.
Easy Integration with OpenShift Features: OpenEBS can integrate seamlessly with OpenShift’s features like Operators, pipelines, and monitoring tools, making it easier to manage and monitor persistent storage using OpenShift-native tools.
Prerequisites
Before installing Replicated PV Mayastor, make sure that you meet the following requirements:
Hardware Requirements
Your machine type must meet the requirements defined in the prerequisites.
Worker Nodes
When using the synchronous replication feature (N-way mirroring), the number of worker nodes on which IO engine pods are deployed must be at least equal to the desired replication factor.
Additional Disks
Your worker nodes should have additional storage disks attached. The additional storage disks should not be mounted or contain a filesystem.
Enable Huge Pages
2MiB-sized huge pages must be supported and enabled on the storage nodes, i.e. the nodes where IO engine pods are deployed. A minimum of 1024 such pages (2GiB in total) must be available exclusively to the IO engine pod on each node. Huge pages in the OpenShift Container Platform (OCP) can be enabled during installation, or post-installation by creating new machine configs. Refer to the Red Hat Documentation for more details.
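As an illustration only, a post-installation machine config that reserves the pages via kernel arguments might look like the sketch below. The object name is a placeholder, and the Red Hat documentation describes several supported approaches (including configuring huge pages at install time), so follow it for the method appropriate to your cluster:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-worker-hugepages   # placeholder name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    # Reserve 1024 x 2MiB huge pages (2GiB in total) at boot
    - hugepagesz=2M
    - hugepages=1024
```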
Kernel Modules
The nvme kernel modules are loaded by default in Red Hat Enterprise Linux CoreOS (RHCOS).
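If you want to confirm this on a particular node, a check along these lines can be used (the node name is a placeholder):

```bash
oc debug node/<node-name> -- chroot /host lsmod | grep nvme
```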
Preparing the Cluster
Refer to the Replicated PV Mayastor Installation Documentation for instructions on preparing the cluster.
Security Context Constraint (SCC)
Ensure that the service account used for the OpenEBS deployments is added to the privileged SCC.
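For example, the privileged SCC can be granted with oc adm policy; the service account name below is a placeholder, so substitute the account(s) created by your OpenEBS installation in the openebs namespace:

```bash
oc adm policy add-scc-to-user privileged -z <service-account> -n openebs
```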
Install Replicated PV Mayastor on OpenShift
Refer to the OpenEBS Installation Documentation to install Replicated PV Mayastor using Helm.
Helm Install Command
Info: OCP includes VolumeSnapshot CRDs by default. To avoid potential installation issues, it is recommended to disable these CRDs in the OpenEBS Helm chart, as these resources already exist in the OCP environment.
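A command along the following lines performs the installation. The release name and namespace are conventional choices rather than requirements, and the volumeSnapshots value used to disable the snapshot CRDs is an assumption about the current chart layout, so verify it against the OpenEBS Installation Documentation:

```bash
helm repo add openebs https://openebs.github.io/openebs
helm repo update
# Install OpenEBS; skip the bundled VolumeSnapshot CRDs because OCP already provides them
helm install openebs openebs/openebs \
  --namespace openebs --create-namespace \
  --set openebs-crds.csi.volumeSnapshots.enabled=false
```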
Pools
The available worker nodes can be viewed using the kubectl-mayastor plugin. To use this functionality, you must install kubectl (or execute the binary using ./kubectl-mayastor). The plugin is not compatible with the oc binary directly.
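For example, once the plugin binary is installed as kubectl-mayastor on your PATH, the storage nodes can be listed with:

```bash
kubectl mayastor get nodes
```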
It is highly recommended to specify the disk using a unique device link that remains unaltered across node reboots. Examples of such device links are by-path and by-id (a sample disk pool is shown below):
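The following sketch is illustrative: the pool name, node name, and device link are placeholders, and the DiskPool apiVersion should be checked against the OpenEBS release you installed:

```yaml
# diskpool.yaml: one pool on one node, backed by an unused raw device referenced by-id
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: pool-on-node-1       # placeholder pool name
  namespace: openebs
spec:
  node: worker-node-1        # placeholder worker node name
  disks:
    - /dev/disk/by-id/<device-id>   # placeholder by-id device link
```

Apply it with oc apply -f diskpool.yaml.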
The status of DiskPools can be determined by referencing their corresponding cluster Custom Resources (CRs). Pools that are available and healthy should report their state as online. Verify that the expected number of pools has been created and that all of them are in the online state.
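For example, either of the following can be used; the namespace follows the sample above, and the second command relies on the kubectl-mayastor plugin:

```bash
oc get diskpools -n openebs
kubectl mayastor get pools
```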
Configuration
- Refer to the Replicated PV Mayastor Configuration Documentation for instructions regarding StorageClass creation.
Replicated PV Mayastor dynamically provisions Persistent Volumes (PVs) based on StorageClass definitions that you create. Parameters of the definition are used to set the characteristics and behaviour of its associated PVs. Most importantly, the StorageClass definition is used to control the level of data protection afforded to it (i.e. the number of synchronous data replicas that are maintained for purposes of redundancy). It is possible to create any number of StorageClass definitions, spanning all permitted parameter permutations. An example is given below:
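The sketch below is illustrative rather than canonical: the class name is a placeholder, and the repl and protocol parameters and the provisioner string should be verified against the Configuration Documentation for your release:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3-replica   # placeholder class name
parameters:
  protocol: nvmf             # expose volumes over NVMe-oF
  repl: "3"                  # maintain three synchronous data replicas
provisioner: io.openebs.csi-mayastor
```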
- Refer to the Deploy an Application Documentation for instructions regarding PVC creation and deploying an application.
If all verification steps in the preceding stages were satisfied, then Replicated PV Mayastor has been successfully deployed within the cluster. To verify basic functionality, we will now dynamically provision a Persistent Volume based on a Replicated PV Mayastor StorageClass, mount that volume within a small test pod which we will create, and use the Flexible I/O Tester (fio) utility within that pod to check that I/O to the volume is processed correctly.
Use oc to create a PVC based on a StorageClass that has been created. In the example shown below, the StorageClass is assumed to have been named "openebs-single-replica", created as part of the OpenEBS installation.
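A minimal claim of that shape might look like the following sketch; the claim name and requested size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ms-volume-claim      # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # placeholder size
  storageClassName: openebs-single-replica
```

Apply it with oc apply -f pvc.yaml.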
As a next step, verify the PV/PVC and the Replicated PV Mayastor volumes.
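The checks below assume the sample claim name used above; the volume listing relies on the kubectl-mayastor plugin:

```bash
oc get pvc ms-volume-claim
oc get pv
kubectl mayastor get volumes
```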
The Replicated PV Mayastor CSI driver will cause the application pod and the corresponding Replicated PV Mayastor volume's NVMe target/controller ("Nexus") to be scheduled on the same Replicated PV Mayastor Node, to assist with the restoration of volume and application availability in the event of a storage node failure.
Verify the application.
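Assuming a test pod named fio that mounts the claim at /volume (both names are placeholders for whatever your test pod uses), the pod can be checked and the volume exercised along these lines:

```bash
oc get pod fio
# Run a time-bounded mixed random read/write workload against the mounted volume
oc exec -it fio -- fio --name=benchtest --size=800m --filename=/volume/test \
  --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 \
  --numjobs=1 --time_based --runtime=60
```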