Etcd Migration Procedure
By following these steps, you can migrate etcd from one node to another during maintenance activities such as a node drain while preserving the continuity and integrity of the etcd data.
note
Take a snapshot of etcd before starting the migration. Refer to the etcd snapshot documentation for the detailed procedure.
Step 1: Draining the etcd Node
- Assuming we have a three-node cluster with three etcd replicas, verify the etcd pods with the following command:
Command to Verify Pods
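The original command block is not reproduced in this copy; the following is a minimal sketch, assuming the etcd replicas run as pods etcd-0/1/2 in a namespace named etcd:

```bash
# List the etcd pods and the nodes they are scheduled on
# (the namespace "etcd" is an assumption; substitute your own)
kubectl get pods -n etcd -o wide
```

The NODE column in the -o wide output shows which worker currently hosts each replica.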
Output
- From etcd-0/1/2, we can see that all the values are registered in the database. Once etcd is migrated to the new node, all the key-value pairs should still be available across all the pods. Run the following commands from any etcd pod.
Commands to get etcd data
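The exact commands are not shown in this copy; the sketch below assumes etcdctl v3 is available inside the pod, with its endpoints and any TLS options already configured through environment variables:

```bash
# Open a shell in any etcd pod (namespace "etcd" is an assumption)
kubectl exec -it etcd-0 -n etcd -- sh

# Inside the pod: list the registered cluster members
etcdctl member list -w table

# List every key currently stored in the database
etcdctl get "" --prefix --keys-only

# Count the keys so the total can be compared after the migration
etcdctl get "" --prefix --keys-only | grep -cv '^$'
```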
- In this example, we drain the etcd node worker-0 so that its etcd pod migrates to the next available node (in this case, the worker-4 node). Use the following command:
Command to Drain the Node
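A sketch of the drain command; the --ignore-daemonsets and --delete-emptydir-data flags are commonly required, but adjust them to whatever else runs on the node:

```bash
# Cordon worker-0 and evict its pods so the etcd pod is rescheduled
kubectl drain worker-0 --ignore-daemonsets --delete-emptydir-data
```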
Output
Step 2: Migrating etcd to the New Node
After draining the worker-0 node, the etcd pod will be scheduled on the next available node, which is the worker-4 node.
The pod may end up in a CrashLoopBackOff status with specific errors in the logs.
When the pod is scheduled on the new node, it attempts to bootstrap the member again. Because the member is already registered in the cluster, the etcd server fails to start with a "member already bootstrapped" error.
To fix this issue, change the cluster's initial state from new to existing (the etcd --initial-cluster-state setting) by editing the StatefulSet for etcd:
Command to Check new etcd Pod Status
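A sketch of the status check, reusing the namespace assumed in Step 1; the pod name etcd-0 stands in for whichever replica was evicted from worker-0:

```bash
# Confirm the evicted etcd pod was rescheduled on worker-4 and check its status
kubectl get pods -n etcd -o wide

# Inspect the logs of the crashing pod for the bootstrap error
kubectl logs etcd-0 -n etcd
```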
Output
Command to edit the StatefulSet
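A sketch of the edit, assuming the StatefulSet is named etcd; the exact field to change depends on whether the initial cluster state is passed as a container argument or as an environment variable:

```bash
# Open the etcd StatefulSet for editing (name and namespace are assumptions)
kubectl edit statefulset etcd -n etcd

# In the pod template, change the initial cluster state from "new" to "existing":
#   container argument:      --initial-cluster-state=existing
#   or environment variable: ETCD_INITIAL_CLUSTER_STATE=existing
```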
Output
Step 3: Validating etcd Key-Value Pairs
Run the appropriate command from the migrated etcd pod to validate the key-value pairs and confirm that they match those in the existing etcd members.
caution
This step is crucial for confirming that no data was lost during the migration process.
Command
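A sketch of the validation, run against the migrated pod with the same etcdctl assumptions as in Step 1 (pod name and namespace are placeholders):

```bash
# From the migrated etcd pod on worker-4, list the keys again and compare them
# with the output captured in Step 1
kubectl exec -it etcd-0 -n etcd -- etcdctl get "" --prefix --keys-only

# Verify the member is healthy and serving from the new node
kubectl exec -it etcd-0 -n etcd -- etcdctl endpoint health
kubectl exec -it etcd-0 -n etcd -- etcdctl endpoint status -w table
```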