Replicated PV Mayastor Installation on Google Kubernetes Engine
This document provides instructions for installing Replicated PV Mayastor on Google Kubernetes Engine (GKE).
GKE with Local SSDs
Local SSDs (Solid State Drives) in GKE are ephemeral. Because local SSDs are physically attached to the node's host virtual machine instance, any data stored on them exists only on that node. Since the data stored on the disks is local, your application must be resilient to unavailable data.
A Pod that writes to a local SSD might lose access to the data stored on the disk if the Pod is rescheduled away from that node. Additionally, if the node is terminated, upgraded, or repaired, the data will be erased.
Local SSDs cannot be added to an existing node pool.
Using OpenEBS for GKE with Local SSDs offers several benefits, particularly in managing storage in a cloud-native way.
Replication and Resilience: OpenEBS can manage data replication across multiple nodes, enhancing data availability and resilience. Even though Local SSDs provide high performance, they are ephemeral by nature. OpenEBS can help mitigate the risk of data loss by replicating data to other nodes.
Performance: Local SSDs provide high IOPS and low latency compared to other storage options. OpenEBS can leverage these performance characteristics for applications that require fast storage access.
info
GKE supports adding additional disks with local SSD while creating the cluster.
Adding additional disks to an existing node pool is not supported.
Each Local SSD disk comes in a fixed size and you can attach multiple Local SSD disks to a single VM when you create it. The number of Local SSD disks that you can attach to a VM depends on the VM's machine type. See the Local SSD Disks Documentation for more information.
Prerequisites
Before installing Replicated PV Mayastor, make sure that you meet the following requirements:
Image
Replicated PV Mayastor is supported exclusively on GKE clusters that are provisioned with the Ubuntu node image (ubuntu_containerd). It is necessary to specify the Ubuntu node image when you create the cluster (see the cluster creation example under Additional Disks below).
Hardware Requirements
Your machine type must meet the requirements defined in the prerequisites.
GKE Nodes
The minimum supported number of worker nodes is three. When using the synchronous replication feature (N-way mirroring), the number of worker nodes on which IO engine pods are deployed should not be less than the desired replication factor.
Additional Disks
Additional node storage disks can be added as local SSDs during cluster creation, depending on the machine type. These local SSDs must be created as block device storage using the --local-nvme-ssd-block option and not as ephemeral storage.
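For illustration, a cluster that satisfies these requirements could be created as follows; the cluster name, zone, machine type, node count, and SSD count are placeholder values to adapt to your environment:

```shell
# Sketch: create a three-node GKE cluster on the Ubuntu node image with
# local NVMe SSDs attached as raw block devices (not ephemeral storage).
gcloud container clusters create mayastor-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type n2-standard-8 \
  --image-type UBUNTU_CONTAINERD \
  --local-nvme-ssd-block count=2
```

The --image-type UBUNTU_CONTAINERD flag selects the Ubuntu node image required by Replicated PV Mayastor, and --local-nvme-ssd-block attaches the local SSDs as block devices.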
Enable Huge Pages
2MiB-sized Huge Pages must be supported and enabled on the storage nodes, i.e. the nodes where IO engine pods are deployed. A minimum of 1024 such pages (2GiB total) must be available exclusively to the IO engine pod on each node. Secure Socket Shell (SSH) to the GKE worker node to enable huge pages, as sketched below. Refer to the SSH Cluster Node Documentation for more details.
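A minimal sketch of the commands to run on each storage node over SSH; the kubelet restart is an assumption, included only so that the node advertises the new huge page capacity:

```shell
# Reserve 1024 x 2MiB huge pages immediately.
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
# Persist the reservation across reboots.
echo "vm.nr_hugepages = 1024" | sudo tee -a /etc/sysctl.conf
# Verify the huge page counters.
grep HugePages /proc/meminfo
# Restart the kubelet so the node reports the 2MiB huge page resource.
sudo systemctl restart kubelet
```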
Kernel Modules
SSH to the GKE worker nodes to load the nvme_tcp kernel module.
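For example, the module can be loaded and made persistent as follows:

```shell
# Load the NVMe over TCP module immediately.
sudo modprobe nvme_tcp
# Load it automatically on subsequent boots.
echo nvme_tcp | sudo tee /etc/modules-load.d/nvme_tcp.conf
# Confirm the module is loaded.
lsmod | grep nvme_tcp
```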
Preparing the Cluster
Refer to the Replicated PV Mayastor Installation Documentation for instructions on preparing the cluster.
ETCD and LOKI Storage Class
The GKE storage class standard-rwo should be used for etcd and Loki.
Install Replicated PV Mayastor on GKE
See the Installing OpenEBS Documentation to install Replicated PV Mayastor using Helm.
- Helm Install Command
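A minimal sketch of the Helm installation, assuming the OpenEBS chart repository has not yet been added; the value names shown are indicative and should be verified against your chart version, and the etcd and Loki persistence values should point at the standard-rwo storage class per the info note below:

```shell
# Add the OpenEBS chart repository and refresh the index.
helm repo add openebs https://openebs.github.io/openebs
helm repo update
# Install OpenEBS with Replicated PV Mayastor enabled and the bundled
# volume snapshot CRDs disabled (GKE already ships these CRDs).
helm install openebs openebs/openebs \
  --namespace openebs --create-namespace \
  --set engines.replicated.mayastor.enabled=true \
  --set openebs-crds.csi.volumeSnapshots.enabled=false
```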
info
The GKE storage class standard-rwo should be used for etcd and Loki.
GKE comes with Volume Snapshot CRDs. Disable them in the OpenEBS chart; otherwise the installation can fail because these resources already exist.
As a next step, verify your installation and complete the post-installation steps as follows:
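For example, a quick check that the control-plane, CSI, and io-engine pods are running (assuming the default openebs namespace):

```shell
kubectl get pods -n openebs
```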
Pools
The available GKE local SSD disks on the worker nodes can be viewed using the kubectl-mayastor plugin.
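For example, with the plugin installed, the block devices on a given worker node can be listed as follows (the node name is a placeholder):

```shell
kubectl mayastor get block-devices <node-name>
```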
The block size specified for the pool disks should match the block size of the local SSDs in your GKE cluster. Run the following commands from the worker node to find the SSD block size:
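A minimal sketch, assuming the local SSD is exposed as /dev/nvme0n1 on the worker node:

```shell
# Logical sector size of the local SSD.
sudo blockdev --getss /dev/nvme0n1
# Physical block size of the local SSD.
sudo blockdev --getpbsz /dev/nvme0n1
```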
Pool.yaml
Create a pool with the following pool.yaml:
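A sketch of such a manifest; the pool name, node name, and device path are placeholders, the API version may differ between releases, and blk_size should match the block size discovered above:

```yaml
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-1
  namespace: openebs
spec:
  # Placeholder: the GKE worker node that owns the local SSD.
  node: <gke-worker-node-name>
  # Local SSD as a block device; GKE local NVMe SSDs typically use a
  # 4096-byte block size.
  disks: ["aio:///dev/nvme0n1?blk_size=4096"]
```

Apply the manifest and confirm the pool comes online:

```shell
kubectl apply -f pool.yaml
kubectl mayastor get pools
```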
Configuration
Refer to the Replicated PV Mayastor Configuration Documentation for instructions regarding StorageClass creation.
Refer to the Deploy an Application Documentation for instructions regarding PVC creation and deploying an application.
Node Failure Scenario
The GKE worker nodes are part of a managed instance group. If a node becomes unreachable or faulty, a new node is created with a new local SSD. In such cases, recreate the pool with a new name. Once the new pool is created, Replicated PV Mayastor rebuilds the volume with the replicated data.
note
When a node gets replaced with a new node, all the node labels and the huge page configuration are lost. You must reconfigure these prerequisites on the new node.
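For instance, re-applying the Mayastor node label (the node name is a placeholder; the huge page and nvme_tcp steps are the same as in the prerequisites above):

```shell
kubectl label node <new-node-name> openebs.io/engine=mayastor
```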
Example
When the node gke-gke-local-ssd-default-pool-dd2b0b02-8twd fails, a new node and disk are acquired. As a result, pool-3 is marked as unknown and the Replicated PV Mayastor volume is marked as degraded because one of its replicas has failed.
Re-configure the node labels and huge pages and load the nvme_tcp module on the node again, then recreate the pool with a new name, pool-4.
Once the new pool is created, the degraded volume comes back online after the rebuild completes.
Replicated PV Mayastor Rebuild History
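The rebuild history can be inspected with the plugin, assuming a plugin version that supports the rebuild-history subcommand; the volume UUID is a placeholder:

```shell
kubectl mayastor get rebuild-history <volume-uuid>
```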
The application data is available without any errors.