Prerequisites#

General#

All worker nodes must satisfy the following requirements:

  • x86-64 CPU cores with SSE4.2 instruction support

  • Linux kernel 5.13 or higher (tested on kernel 5.15). The kernel should
    have the following modules loaded (verification commands are shown after
    this list):

    • nvme-tcp
    • ext4 and optionally xfs

  • Helm version v3.7 or later
  • Each worker node which will host an instance of an io-engine pod must have the following resources free and available for exclusive use by that pod:

    • Two CPU cores
    • 1GiB RAM
    • HugePage support
      • A minimum of 2GiB of 2MiB-sized pages
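
A quick way to confirm the kernel version and module requirements on a worker node (note that module names appear with underscores in lsmod output):

uname -r                          # should report 5.13 or later
lsmod | grep -E 'nvme_tcp|ext4'   # confirm the required modules are loaded
sudo modprobe nvme-tcp            # load nvme-tcp if it is missing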

Network Requirements#

  • Ensure that the following ports are not in use on the node (a quick check is shown after this list):
    • 10124: Mayastor gRPC server will use this port.
    • 8420 / 4421: NVMf targets will use these ports.
  • The firewall settings should not restrict connection to the node.
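
To run the quick port check mentioned above, assuming the ss utility from iproute2 is available on the node:

sudo ss -tlnp | grep -E ':(10124|8420|4421)\s' || echo "required ports are free"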

Recommended Resource Requirements#

io-engine DaemonSet

resources:
  limits:
    cpu: "2"
    memory: "1Gi"
    hugepages-2Mi: "2Gi"
  requests:
    cpu: "2"
    memory: "1Gi"
    hugepages-2Mi: "2Gi"

csi-node DaemonSet

resources:
  limits:
    cpu: "100m"
    memory: "50Mi"
  requests:
    cpu: "100m"
    memory: "50Mi"

csi-controller Deployment

resources:
  limits:
    cpu: "32m"
    memory: "128Mi"
  requests:
    cpu: "16m"
    memory: "64Mi"

api-rest Deployment

resources:
  limits:
    cpu: "100m"
    memory: "64Mi"
  requests:
    cpu: "50m"
    memory: "32Mi"

agent-core

resources:
  limits:
    cpu: "1000m"
    memory: "32Mi"
  requests:
    cpu: "500m"
    memory: "16Mi"

operator-diskpool

resources:
  limits:
    cpu: "100m"
    memory: "32Mi"
  requests:
    cpu: "50m"
    memory: "16Mi"

DiskPool Requirements#

  • Disks must be unpartitioned, unformatted, and used exclusively by the DiskPool.
  • The minimum capacity of the disks should be 10 GB.
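
To verify that a candidate device meets these requirements (replace /dev/sdx with the intended device):

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdx   # FSTYPE and MOUNTPOINT should be empty
sudo wipefs /dev/sdx                            # prints nothing if no filesystem or partition signatures exist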

Minimum Worker Node Count#

The minimum supported worker node count is three nodes. When using the synchronous replication feature (N-way mirroring), the number of worker nodes on which IO engine pods are deployed should be no less than the desired replication factor.
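
The replication factor is requested per StorageClass via the repl parameter. The following is an illustrative sketch which assumes the default Mayastor CSI provisioner name; adapt it to your deployment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3-replicas
parameters:
  protocol: nvmf
  repl: "3"    # requires io-engine pods on at least 3 worker nodes
provisioner: io.openebs.csi-mayastor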

Transport Protocols#

Replicated PV Mayastor supports the export and mounting of volumes over NVMe-oF TCP only. Worker node(s) on which a volume may be scheduled (to be mounted) must have the requisite initiator software installed and configured. In order to reliably mount Replicated PV Mayastor volumes over NVMe-oF TCP, a worker node's kernel version must be 5.13 or later and the nvme-tcp kernel module must be loaded.
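
On systemd-based distributions, the nvme-tcp module can be loaded immediately and made persistent across reboots as follows:

sudo modprobe nvme-tcp
echo nvme-tcp | sudo tee /etc/modules-load.d/nvme-tcp.conf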

Preparing the Cluster#

Verify/Enable Huge Page Support#

2MiB-sized Huge Pages must be supported and enabled on the storage nodes, i.e. the nodes where IO engine pods are deployed. A minimum of 1024 such pages (i.e. 2GiB total) must be available exclusively to the IO engine pod on each node, which can be verified as follows:

grep HugePages /proc/meminfo
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 1024
HugePages_Free: 671
HugePages_Rsvd: 0
HugePages_Surp: 0

If fewer than 1024 pages are available, the page count should be reconfigured on the worker node as required, accounting for any other workloads which may be scheduled on the same node and which also require huge pages. For example:

echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

This change should also be made persistent across reboots by adding the required value to the file /etc/sysctl.conf like so:

echo vm.nr_hugepages = 1024 | sudo tee -a /etc/sysctl.conf
warning

If you modify the huge page configuration of a node, you MUST either restart kubelet or reboot the node. Replicated PV Mayastor will not deploy correctly if the available huge page count as reported by the node's kubelet instance does not satisfy the minimum requirements.
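
After restarting kubelet or rebooting, you can confirm that the node advertises the expected huge page capacity to Kubernetes; for 1024 2MiB pages this should print 2Gi:

kubectl get node <node_name> -o jsonpath='{.status.allocatable.hugepages-2Mi}'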

Label IO Node Candidates#

All worker nodes which will have IO engine pods running on them must be labeled with the OpenEBS storage type "Replicated PV Mayastor". This label will be used as a node selector by the IO engine DaemonSet, which is deployed as part of the Replicated PV Mayastor data plane components installation. To add this label to a node, execute:

kubectl label node <node_name> openebs.io/engine=mayastor
warning

If you set csi.node.topology.nodeSelector: true, then you will need to label the worker nodes according to csi.node.topology.segments. Both the csi-node and agent-ha-node DaemonSets will include the topology segments in their node selectors.
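
As a hypothetical illustration, if the chart were installed with csi.node.topology.segments set to openebs.io/rack: rack-1, each participating worker node would need the matching label:

kubectl label node <node_name> openebs.io/rack=rack-1   # openebs.io/rack is an example key, not a required one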

Installation#

For installation instructions, see the OpenEBS installation documentation.

Support#

If you encounter issues or have questions, file a GitHub issue, or talk to us on the #openebs channel on the Kubernetes Slack server.
