Topology Parameters
The topology parameters defined in a storage class help determine the placement of volume replicas across the nodes and pools of the cluster. A brief explanation of each parameter follows.
note
We support only one type of topology parameter per storage class.
"nodeAffinityTopologyLabel"
The nodeAffinityTopologyLabel parameter allows replicas to be placed only on nodes that exactly match the labels defined in the storage class. In the case shown below, the volume replicas will be provisioned only on worker-node-1 and worker-node-3, as these nodes match the labels specified under nodeAffinityTopologyLabel in the storage class, which is zone=us-west-1.
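The following is an illustrative sketch of such a storage class; the storage class name, provisioner, and exact parameter syntax are assumptions and should be verified against your deployment's storage class reference.

```bash
# Sketch only: name, provisioner, and parameter syntax are assumptions.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-node-affinity-example
parameters:
  repl: "2"                      # assumed replica count parameter
  # Replicas may only be placed on nodes labelled zone=us-west-1.
  nodeAffinityTopologyLabel: |
    zone: us-west-1
provisioner: io.openebs.csi-mayastor   # assumption: replace with your CSI provisioner
EOF
```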
Apply the labels to the nodes using the commands below:
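A sketch of the node labelling, assuming the three worker nodes from the example; the zone assigned to worker-node-2 is an arbitrary value chosen only to exclude it from placement.

```bash
# Only worker-node-1 and worker-node-3 carry zone=us-west-1.
kubectl label node worker-node-1 zone=us-west-1
kubectl label node worker-node-2 zone=eu-east-1   # arbitrary non-matching value
kubectl label node worker-node-3 zone=us-west-1

# Verify the labels on the nodes.
kubectl get nodes --show-labels
```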
"nodeHasTopologyKey"
The nodeHasTopologyKey parameter allows replicas to be placed on nodes that have a label whose key matches the key specified in the storage class.
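A sketch of such a storage class, under the same assumptions as above (name, provisioner, and parameter syntax may differ in your deployment):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-node-has-topology-key-example
parameters:
  repl: "2"                      # assumed replica count parameter
  # Replicas may only be placed on nodes that carry a label with the key "rack".
  nodeHasTopologyKey: |
    rack
provisioner: io.openebs.csi-mayastor   # assumption: replace with your CSI provisioner
EOF
```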
Apply the labels to the nodes using the commands below:
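A sketch of the node labelling; the rack values are arbitrary, only the label key has to match the storage class:

```bash
kubectl label node worker-node-1 rack=a
kubectl label node worker-node-2 rack=b
kubectl label node worker-node-3 rack=c
```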
In this case, the volume replicas will be provisioned on any two of the three nodes, i.e. worker-node-1 and worker-node-2, worker-node-1 and worker-node-3, or worker-node-2 and worker-node-3, as the storage class has rack as the value for nodeHasTopologyKey, which matches the label key on the nodes.
"nodeSpreadTopologyKey"
The nodeSpreadTopologyKey parameter spreads replicas across nodes that carry the label key specified in the storage class but have different values for that key.
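A sketch of such a storage class, under the same assumptions as above:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-node-spread-example
parameters:
  repl: "2"                      # assumed replica count parameter
  # Replicas are spread across nodes that carry the "zone" key with different values.
  nodeSpreadTopologyKey: |
    zone
provisioner: io.openebs.csi-mayastor   # assumption: replace with your CSI provisioner
EOF
```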
Apply the labels to the nodes using the commands below:
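A sketch of the node labelling that produces the placement described below: worker-node-1 and worker-node-3 share a zone value, while worker-node-2 has a different one (the values themselves are assumptions).

```bash
kubectl label node worker-node-1 zone=us-west-1
kubectl label node worker-node-2 zone=eu-east-1
kubectl label node worker-node-3 zone=us-west-1

# Verify the labels on the nodes.
kubectl get nodes --show-labels
```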
In this case, the volume replicas will be provisioned on worker-node-1 and worker-node-2, or worker-node-2 and worker-node-3, as the storage class has zone as the value for nodeSpreadTopologyKey; these node pairs share that label key but have different values for it.
"poolAffinityTopologyLabel"
The poolAffinityTopologyLabel parameter allows replicas to be placed only on pools that exactly match the labels defined in the storage class.
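A sketch of such a storage class, under the same assumptions as above:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-pool-affinity-example
parameters:
  repl: "2"                      # assumed replica count parameter
  # Replicas may only be placed on pools labelled zone=us-west-1.
  poolAffinityTopologyLabel: |
    zone: us-west-1
provisioner: io.openebs.csi-mayastor   # assumption: replace with your CSI provisioner
EOF
```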
Apply the labels to the pools as shown below:
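Pool labels are applied on the pool resources themselves rather than on the Kubernetes nodes. The snippet below is a rough sketch that assumes labels can be declared in the DiskPool resource's topology section; the apiVersion, namespace, disk, and field names are all assumptions, so verify them against the DiskPool CRD in your deployment.

```bash
# Rough sketch: assumes the DiskPool CRD accepts labels under spec.topology.labelled.
cat <<EOF | kubectl apply -f -
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: pool-on-node-0
  namespace: mayastor          # assumed namespace
spec:
  node: worker-node-0          # assumed node backing this pool
  disks: ["/dev/sdb"]          # assumed disk
  topology:
    labelled:
      zone: us-west-1
EOF
```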
For the case shown above, the volume replicas will be provisioned only on pool-on-node-0 and pool-on-node-3, as these pools match the labels specified under poolAffinityTopologyLabel in the storage class, which is zone=us-west-1.
"poolHasTopologyKey"
The poolHasTopologyKey parameter allows replicas to be placed on pools that have a label whose key matches the key specified in the storage class.
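A sketch of such a storage class, under the same assumptions as above:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-pool-has-topology-key-example
parameters:
  repl: "2"                      # assumed replica count parameter
  # Replicas may only be placed on pools that carry a label with the key "zone".
  poolHasTopologyKey: |
    zone
provisioner: io.openebs.csi-mayastor   # assumption: replace with your CSI provisioner
EOF
```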
In this case, the volume replicas will be provisioned on any two of the three pools, i.e. pool-on-node-1 and pool-on-node-2, pool-on-node-1 and pool-on-node-3, or pool-on-node-2 and pool-on-node-3, as the storage class has zone as the value for poolHasTopologyKey, which matches the label key on the pools.
"stsAffinityGroup"
An stsAffinityGroup represents a collection of volumes that belong to instances of a Kubernetes StatefulSet. When a StatefulSet is deployed, each instance within the StatefulSet creates its own individual volume, and together these volumes form the stsAffinityGroup. Each volume within the stsAffinityGroup corresponds to a pod of the StatefulSet.
This feature enforces the following rules to ensure the proper placement and distribution of replicas and targets, so that no single point of failure affects multiple instances of the StatefulSet.
- Anti-affinity among single-replica volumes: Replicas of different volumes are distributed so that there is no single point of failure, by avoiding the colocation of replicas from different volumes on the same node.
- Anti-affinity among multi-replica volumes: If the affinity group volumes have multiple replicas, they already have some level of redundancy. This rule ensures that, in such cases, the replicas are distributed optimally for the stsAffinityGroup volumes.
- Anti-affinity among targets: The High Availability feature ensures that there is no single point of failure for the targets. The stsAffinityGroup ensures that, in such cases, the targets are distributed optimally for the stsAffinityGroup volumes.
By default, the stsAffinityGroup feature is disabled. To enable it, modify the storage class YAML by setting the parameters.stsAffinityGroup parameter to true, as shown in the sketch below.
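A sketch of a storage class with the feature enabled; as above, the name and provisioner are assumptions:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-sts-affinity-example
parameters:
  repl: "1"                      # assumed replica count parameter
  stsAffinityGroup: "true"       # enable the stsAffinityGroup feature
provisioner: io.openebs.csi-mayastor   # assumption: replace with your CSI provisioner
EOF
```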
"cloneFsIdAsVolumeId"
cloneFsIdAsVolumeId is a setting for volume clones/restores with two options: true and false. By default, it is set to false.
- When set to true, the created clone/restore's filesystem uuid will be set to the restore volume's uuid. This is important because some filesystems, like XFS, do not allow duplicate filesystem uuids on the same machine by default.
- When set to false, the created clone/restore's filesystem uuid will be the same as the original volume's uuid, but it will be mounted using the nouuid flag to bypass duplicate-uuid validation.
note
This option needs to be set to true when using a btrfs filesystem, if the application using the restored volume is scheduled concurrently on the same node where the original volume is mounted.
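A sketch of a storage class enabling this option; the name and provisioner are assumptions:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-clone-fsid-example
parameters:
  repl: "1"                      # assumed replica count parameter
  cloneFsIdAsVolumeId: "true"    # set the clone/restore filesystem uuid to the new volume's uuid
provisioner: io.openebs.csi-mayastor   # assumption: replace with your CSI provisioner
EOF
```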