OpenEBS Prerequisites
If you are installing OpenEBS, make sure that your Kubernetes nodes meet the prerequisites for the following storage engines:
- Local PV Hostpath Prerequisites
- Local PV LVM Prerequisites
- Local PV ZFS Prerequisites
- Replicated PV Mayastor Prerequisites
At a high level, OpenEBS requires the following:
- Verify that you have the admin context. If you do not have admin permissions on your cluster, ask your Kubernetes cluster administrator to help with installing OpenEBS; if you are the owner of the cluster, check out the steps to create a new admin context and use it for installing OpenEBS.
- Each storage engine may have a few additional requirements, such as:
  - Setting up the right bind mounts, depending on the managed Kubernetes platform (for example, Rancher or MicroK8s).
  - Deciding which of the devices on the nodes should be used by OpenEBS, or whether you need to create LVM Volume Groups or ZFS Pools.
Local PV Hostpath Prerequisites
Set up the directory on the nodes where Local PV Hostpath volumes will be created. This directory will be referred to as BasePath. The default location is /var/openebs/local.
BasePath can be any of the following:
- A directory on the root disk (or OS disk). (Example: /var/openebs/local)
- In the case of bare-metal Kubernetes nodes, a mounted directory using an additional drive or SSD. (Example: an SSD available at /dev/sdb can be formatted with ext4 and mounted as /mnt/openebs-local.)
- In the case of cloud or virtual instances, a mounted directory created by attaching an external cloud volume or virtual disk. (Example: in GKE, a Local SSD can be used, which will be available at /mnt/disk/ssd1.)
note
Air-gapped environment: If you are running your Kubernetes cluster in an air-gapped environment, make sure the following container images are available in your local repository.
- openebs/provisioner-localpv
- openebs/linux-utils
Rancher RKE cluster:
If you are using a Rancher RKE cluster, you must configure the kubelet service with extra_binds for BasePath. If your BasePath is the default directory /var/openebs/local, then the extra_binds section should have the following details:
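A sketch of the relevant snippet in the RKE cluster.yml, assuming the default BasePath (adjust the path if you use a different directory):

```yaml
services:
  kubelet:
    extra_binds:
      # Bind-mount the hostpath BasePath into the kubelet container
      - /var/openebs/local:/var/openebs/local
```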
Local PV LVM Prerequisites
Before installing the LVM driver, make sure your Kubernetes cluster meets the following prerequisite:
All the nodes must have the lvm2 utilities installed and the dm-snapshot kernel module loaded.
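A minimal sketch of preparing a node, assuming Ubuntu/Debian (package names and commands may differ on other distributions):

```bash
# Install the LVM2 userspace utilities
sudo apt-get update && sudo apt-get install -y lvm2

# Load the dm-snapshot kernel module and confirm it is present
sudo modprobe dm-snapshot
lsmod | grep dm_snapshot
```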
Set up a Volume Group: Find the disk that you want to use for LVM; for testing, you can use a loopback device. Create the Volume Group on all the nodes; it will be used by the LVM driver for provisioning the volumes (see the sketch below).
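A minimal sketch, assuming the disk to be used is /dev/sdb (substitute your own device, or a loop device when testing):

```bash
# Initialize the disk as an LVM physical volume
sudo pvcreate /dev/sdb

# Create the volume group "lvmvg" on that physical volume
sudo vgcreate lvmvg /dev/sdb
```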
In the above command, lvmvg is the name of the Volume Group to be created.
Local PV ZFS Prerequisites
Before installing the ZFS driver, make sure your Kubernetes cluster meets the following prerequisites:
- All the nodes must have the zfs utilities installed.
- A ZPOOL has been set up for provisioning the volumes.
Setup:
All the nodes should have zfsutils-linux installed. Go to each node of the cluster and install the ZFS utilities:
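For example, on Ubuntu/Debian nodes (package names may differ on other distributions):

```bash
sudo apt-get update && sudo apt-get install -y zfsutils-linux
```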
Go to each node and create the ZFS pool, which will be used for provisioning the volumes. You can create a pool of your choice; it can be a striped, mirrored, or raidz pool.
If you have a disk (say /dev/sdb), then you can use the command below to create a striped pool:
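A sketch assuming the pool is named zfspv-pool (an illustrative name) and the disk is /dev/sdb:

```bash
# Create a striped pool named "zfspv-pool" on /dev/sdb
sudo zpool create zfspv-pool /dev/sdb
```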
You can also create a mirror or raidz pool as per your needs. Refer to the OpenZFS documentation for more details.
If you do not have a disk, you can create the zpool on a loopback device backed by a sparse file, as shown below. Use this for testing purposes only.
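A minimal sketch of a loopback-backed pool; the backing file path, size, and pool name are illustrative:

```bash
# Create a sparse backing file, attach it to the first free loop device,
# and create the pool on that loop device
truncate -s 100G /tmp/disk.img
sudo zpool create zfspv-pool $(sudo losetup -f --show /tmp/disk.img)
```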
Once the ZFS pool is created, verify it using the zpool status command; you should see output similar to the below:
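A rough sketch of what the output looks like for a striped pool named zfspv-pool; the exact layout depends on your ZFS version and pool configuration:

```
$ sudo zpool status
  pool: zfspv-pool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zfspv-pool  ONLINE       0     0     0
          sdb       ONLINE       0     0     0

errors: No known data errors
```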
Configure custom topology keys (if needed). This can be used for many purposes, such as creating the PV on nodes in a particular zone or building. You can label the nodes accordingly and use that key in the StorageClass to make the scheduling decision, as sketched below.
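A sketch under the assumption that you label nodes with a custom key openebs.io/rack (for example, kubectl label node node-1 openebs.io/rack=rack1) and use a pool named zfspv-pool; key, value, and pool name are all illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"        # illustrative pool name
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack        # illustrative custom topology key applied as a node label
    values:
      - rack1
```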
Replicated PV Mayastor Prerequisites
General
All worker nodes must satisfy the following requirements:
- x86-64 CPU cores with SSE4.2 instruction support
- Linux kernel 5.13 or higher (recommended); tested on kernel 5.15
- The following kernel modules loaded:
  - nvme-tcp
  - ext4 and, optionally, xfs
- Helm version v3.7 or later
Each worker node which will host an instance of an io-engine pod must have the following resources free and available for exclusive use by that pod:
- Two CPU cores
- 1GiB RAM
- HugePage support: a minimum of 2GiB of 2MiB-sized pages
Enabling nvme_core.multipath is required for the High Availability (HA) functionality of Replicated PV Mayastor. Ensure that the kernel parameter nvme_core.multipath=Y is set during installation. (This prerequisite is optional if you do not need HA.)
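A quick way to check whether native NVMe multipath is enabled on a node (a sketch; setting the parameter persistently is typically done on the kernel command line, for example via your bootloader configuration):

```bash
cat /sys/module/nvme_core/parameters/multipath   # prints Y when enabled
```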
note
If the application is scheduled to nodes with the io-engine label (openebs.io/engine=mayastor), the volume target is preferably placed on the same node where the application is scheduled.
Network Requirements
- Ensure that the following ports are not in use on the node:
  - 10124: Mayastor gRPC server will use this port.
  - 8420 / 4421: NVMf targets will use these ports.
- The firewall settings should not restrict connection to the node.
Minimum Worker Node Count
The minimum supported worker node count is three nodes. When using the synchronous replication feature (N-way mirroring), the number of worker nodes on which IO engine pods are deployed should be no less than the desired replication factor.
Transport Protocols
Replicated PV Mayastor supports the export and mounting of volumes over NVMe-oF TCP only. Worker node(s) on which a volume may be scheduled (to be mounted) must have the requisite initiator software installed and configured. In order to reliably mount Replicated PV Mayastor volumes over NVMe-oF TCP, a worker node's kernel version must be 5.13 or later and the nvme-tcp kernel module must be loaded.
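A quick sketch of loading and verifying the NVMe-oF TCP initiator module on a worker node (it may already be loaded by your distribution):

```bash
sudo modprobe nvme-tcp
lsmod | grep nvme_tcp
```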
Preparing the Cluster
Verify/Enable Huge Page Support
2MiB-sized Huge Pages must be supported and enabled on the storage nodes, i.e. nodes where IO engine pods are deployed. A minimum of 1024 such pages (i.e. 2GiB total) must be available exclusively to the IO engine pod on each node, which should be verified thus:
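For example, to see the current huge page configuration on a node:

```bash
grep HugePages /proc/meminfo
```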
If fewer than 1024 pages are available then the page count should be reconfigured on the worker node as required, accounting for any other workloads which may be scheduled on the same node and which also require them. For example:
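A sketch of reconfiguring the page count at runtime; 1024 pages of 2MiB each gives 2GiB, so increase this if other huge-page consumers run on the same node:

```bash
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
```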
This change should also be made persistent across reboots by adding the required value to the file /etc/sysctl.conf like so:
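For example (assuming 1024 pages):

```
vm.nr_hugepages = 1024
```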
warning
If you modify the huge page configuration of a node, you MUST either restart kubelet or reboot the node. Replicated PV Mayastor will not deploy correctly if the available huge page count as reported by the node's kubelet instance does not satisfy the minimum requirements.
Label IO Node Candidates
All worker nodes which will have IO engine pods running on them must be labeled with the OpenEBS storage type "mayastor". This label will be used as a node selector by the IO engine Daemonset, which is deployed as a part of the Replicated PV Mayastor data plane components installation. To add this label to a node, execute:
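For example, where <node_name> is the name of the worker node to be labeled:

```bash
kubectl label node <node_name> openebs.io/engine=mayastor
```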
warning
If you set csi.node.topology.nodeSelector: true, then you will need to label the worker nodes according to csi.node.topology.segments. Both the csi-node and agent-ha-node Daemonsets will include the topology segments in the node selector.
Supported Versions
- Kubernetes 1.23 or higher is required
- Linux Kernel 5.15 or higher is required
- OS: Ubuntu and RHEL 8.8
- LVM Version: LVM 2
- ZFS Version: ZFS 0.8