KubeVirt VM Backup and Restore using Replicated PV Mayastor VolumeSnapshots and Velero - FileSystem

Overview#

KubeVirt extends Kubernetes with virtual machine (VM) management capabilities, enabling a unified platform for both containerized and virtualized workloads. Like any other production workload, these VMs need reliable, restorable backups: both the VM definition and its disk contents must be captured consistently so that a VM can be recovered after data loss or cluster failure.

OpenEBS Replicated PV Mayastor is a high-performance, container-native block storage engine that provides persistent storage for Kubernetes, including CSI VolumeSnapshot support. Combined with Velero and its file-system (node agent) backup integration, Mayastor-backed KubeVirt VMs can be backed up to an object store such as AWS S3 and restored on demand. This document guides you through the setup and validation of KubeVirt VM backup and restore using OpenEBS Replicated PV Mayastor VolumeSnapshots and Velero.

Environment#

Component                               Version
KubeVirt                                v1.5.0
Kubernetes (3 nodes)                    v1.29.6
OpenEBS                                 v4.2.0
NFS CSI Driver                          v4.11.0
Containerized Data Importer (CDI)       v1.62.0
kubectl-mayastor Plugin                 v2.7.4+0
virtctl                                 v1.5.0

Prerequisites#

Setup OpenEBS#

  1. Create a file named StorageClass.yaml.

StorageClass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-1
parameters:
  protocol: nvmf
  repl: "1"
  thin: "true"
provisioner: io.openebs.csi-mayastor
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
  2. Apply the configuration.
kubectl create -f StorageClass.yaml
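You can confirm the StorageClass registered by listing it by name:

kubectl get storageclass mayastor-1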

Create a VolumeSnapshotClass#

  1. Create a file named VolumeSnapshotClass.yaml.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-mayastor-snapshotclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: io.openebs.csi-mayastor
deletionPolicy: Delete
  2. Apply the configuration.
kubectl create -f VolumeSnapshotClass.yaml
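You can verify that the snapshot class exists and check its driver and deletion policy:

kubectl get volumesnapshotclass csi-mayastor-snapshotclass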

KubeVirt Setup#

  1. Install the KubeVirt Operator.
export VERSION=$(curl -s https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
echo $VERSION
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"

Sample Output

namespace/kubevirt created
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io created
priorityclass.scheduling.k8s.io/kubevirt-cluster-critical created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:operator created
serviceaccount/kubevirt-operator created
role.rbac.authorization.k8s.io/kubevirt-operator created
rolebinding.rbac.authorization.k8s.io/kubevirt-operator-rolebinding created
clusterrole.rbac.authorization.k8s.io/kubevirt-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator created
deployment.apps/virt-operator created
  2. Create the KubeVirt Custom Resource.
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"

Sample Output

kubevirt.kubevirt.io/kubevirt created
  3. Patch KubeVirt to use emulation (optional; required only when the nodes do not support hardware virtualization).
kubectl -n kubevirt patch kubevirt kubevirt --type=merge --patch '{"spec":{"configuration":{"developerConfiguration":{"useEmulation":true}}}}'
  4. Verify the KubeVirt installation.
kubectl get all -n kubevirt

Sample Output

Warning: kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
NAME READY STATUS RESTARTS AGE
pod/virt-api-c8c86b5b-fjcdt 1/1 Running 1 (2m32s ago) 17m
pod/virt-api-c8c86b5b-vsznq 1/1 Running 1 (2m32s ago) 17m
pod/virt-controller-5f57b7cc79-6qgv2 1/1 Running 1 (2m37s ago) 16m
pod/virt-controller-5f57b7cc79-qwlzv 1/1 Running 1 (2m37s ago) 16m
pod/virt-handler-684vj 1/1 Running 0 16m
pod/virt-handler-njqxj 1/1 Running 0 16m
pod/virt-handler-sk8bf 1/1 Running 0 16m
pod/virt-operator-584c7dd444-5r9d8 1/1 Running 0 20m
pod/virt-operator-584c7dd444-tcxs4 1/1 Running 0 20m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubevirt-operator-webhook ClusterIP 10.106.160.8 <none> 443/TCP 17m
service/kubevirt-prometheus-metrics ClusterIP None <none> 443/TCP 17m
service/virt-api ClusterIP 10.111.142.175 <none> 443/TCP 17m
service/virt-exportproxy ClusterIP 10.106.210.218 <none> 443/TCP 17m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/virt-handler 3 3 3 3 3 kubernetes.io/os=linux 16m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/virt-api 2/2 2 2 17m
deployment.apps/virt-controller 2/2 2 2 16m
deployment.apps/virt-operator 2/2 2 2 20m
NAME DESIRED CURRENT READY AGE
replicaset.apps/virt-api-c8c86b5b 2 2 2 17m
replicaset.apps/virt-controller-5f57b7cc79 2 2 2 16m
replicaset.apps/virt-operator-584c7dd444 2 2 2 20m
NAME AGE PHASE
kubevirt.kubevirt.io/kubevirt 17m Deployed

CDI Setup#

  1. Install the CDI Operator and Custom Resource.
export TAG=$(curl -s -w %{redirect_url} https://github.com/kubevirt/containerized-data-importer/releases/latest)
export VERSION=$(echo ${TAG##*/})
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml

Sample Output - CDI

namespace/cdi created
customresourcedefinition.apiextensions.k8s.io/cdis.cdi.kubevirt.io created
clusterrole.rbac.authorization.k8s.io/cdi-operator-cluster created
clusterrolebinding.rbac.authorization.k8s.io/cdi-operator created
serviceaccount/cdi-operator created
role.rbac.authorization.k8s.io/cdi-operator created
rolebinding.rbac.authorization.k8s.io/cdi-operator created
deployment.apps/cdi-operator created

Sample Output - CR

cdi.cdi.kubevirt.io/cdi created
  2. Configure the scratch space StorageClass.
kubectl edit cdi cdi

Add the following under spec.config:

spec:
  config:
    featureGates:
    - HonorWaitForFirstConsumer
    scratchSpaceStorageClass: mayastor-1

Important:

CDI always requests scratch space with a Filesystem volume mode regardless of the volume mode of the related DataVolume. It also always requests it with a ReadWriteOnce accessMode. Therefore, when using block mode DataVolumes, you must ensure that a storage class capable of provisioning Filesystem mode PVCs with ReadWriteOnce accessMode is configured.
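To confirm the setting took effect, you can read it back from the CDI resource:

kubectl get cdi cdi -o jsonpath='{.spec.config.scratchSpaceStorageClass}'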

  3. Verify the CDI installation.
kubectl get all -n cdi

Sample Output

Warning: kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
NAME READY STATUS RESTARTS AGE
pod/cdi-apiserver-5bbd7b4df5-28gm8 1/1 Running 1 (2m55s ago) 3m
pod/cdi-deployment-84d584dbdd-g8mfn 1/1 Running 0 3m
pod/cdi-operator-7cfb4db845-fg6vt 1/1 Running 0 3m46s
pod/cdi-uploadproxy-856554cb9c-m7kll 1/1 Running 1 (2m54s ago) 3m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cdi-api ClusterIP 10.103.42.202 <none> 443/TCP 3m1s
service/cdi-prometheus-metrics ClusterIP 10.105.45.246 <none> 8080/TCP 3m1s
service/cdi-uploadproxy ClusterIP 10.96.255.119 <none> 443/TCP 3m1s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cdi-apiserver 1/1 1 1 3m2s
deployment.apps/cdi-deployment 1/1 1 1 3m2s
deployment.apps/cdi-operator 1/1 1 1 3m48s
deployment.apps/cdi-uploadproxy 1/1 1 1 3m2s
NAME DESIRED CURRENT READY AGE
replicaset.apps/cdi-apiserver-5bbd7b4df5 1 1 1 3m2s
replicaset.apps/cdi-deployment-84d584dbdd 1 1 1 3m2s
replicaset.apps/cdi-operator-7cfb4db845 1 1 1 3m48s
replicaset.apps/cdi-uploadproxy-856554cb9c 1 1 1 3m2s

Deploying a Virtual Machine#

  • Create a DataVolume
  1. Create a file named dv.yaml.

dv.yaml

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "fedora-1"
spec:
  storage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 8Gi
    storageClassName: mayastor-1
    volumeMode: Block
  source:
    http:
      url: "https://download.fedoraproject.org/pub/fedora/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-AmazonEC2.x86_64-40-1.14.raw.xz"
  2. Apply the configuration.
kubectl create -f dv.yaml
  3. Monitor the import. The CDI importer pod is named after the DataVolume:
kubectl logs -f pod/importer-fedora-1
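You can also watch the DataVolume itself; the import is finished when its phase reaches Succeeded:

kubectl get dv fedora-1 -w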
  • Create a Virtual Machine
  1. Create a file named vm1_pvc.yaml that uses the PVC prepared by the DataVolume as the root disk.

vm1_pvc.yaml

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: 2018-07-04T15:03:08Z
  generation: 1
  labels:
    kubevirt.io/os: linux
  name: vm1
spec:
  runStrategy: Always
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/domain: vm1
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
          - disk:
              bus: virtio
            name: disk0
          - cdrom:
              bus: sata
              readonly: true
            name: cloudinitdisk
        machine:
          type: q35
        resources:
          requests:
            memory: 1024M
      volumes:
      - name: disk0
        persistentVolumeClaim:
          claimName: fedora-1
      - cloudInitNoCloud:
          userData: |
            #cloud-config
            hostname: vm1
            ssh_pwauth: True
            disable_root: false
            chpasswd:
              list: |
                root:MySecurePassword123
              expire: false
        name: cloudinitdisk
  2. Apply the configuration.
kubectl create -f vm1_pvc.yaml
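Before opening a console, you can confirm the VM and its instance are up:

kubectl get vm vm1
kubectl get vmi vm1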
  3. Connect to the VM console.
virtctl console vm1
  • Login credentials: Username: root, Password: MySecurePassword123
  4. Install the guest agent. Open the console, then run the yum and systemctl commands inside the guest:
virtctl console vm1
yum install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent

Check Agent Status

kubectl get vm vm1 -o yaml
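The agent is connected when the AgentConnected condition is True; a targeted check (a convenience sketch against the VirtualMachineInstance object) could be:

kubectl get vmi vm1 -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'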
  5. Insert sample data: Connect to the VM and create sample files in the root user home directory for backup verification, as shown below.
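For example (an illustrative snippet; any recognizable files will do), from the VM console as root:

echo "backup-test-data" > /root/backup-test.txt
dd if=/dev/urandom of=/root/sample.bin bs=1M count=10
sync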

Velero Setup#

  • Configure an AWS S3 Bucket for the Velero Backup Location
  1. Configure the AWS CLI (optionally, you can use the AWS Management Console). Refer to the AWS CLI Setup Guide for setting up the AWS CLI.

  2. Create an AWS S3 Bucket.

BUCKET=kubevirtbackup2025
REGION=ap-south-1
aws s3api create-bucket \
--bucket $BUCKET \
--region $REGION \
--create-bucket-configuration LocationConstraint=$REGION
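You can confirm the bucket is reachable with your credentials (head-bucket returns silently on success):

aws s3api head-bucket --bucket $BUCKET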
  3. Create the velero IAM user.
aws iam create-user --user-name velero
  4. Create an IAM policy with appropriate permissions.
cat > velero-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:PutObjectTagging",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}"
      ]
    }
  ]
}
EOF
  5. Assign the IAM policy to the velero user.
aws iam put-user-policy \
--user-name velero \
--policy-name velero \
--policy-document file://velero-policy.json
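To double-check the inline policy attached correctly, you can read it back:

aws iam get-user-policy --user-name velero --policy-name velero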
  6. Generate access keys.
aws iam create-access-key --user-name velero

Sample Output

{
  "AccessKey": {
    "UserName": "velero",
    "AccessKeyId": "<AWS_ACCESS_KEY_ID>",
    "Status": "Active",
    "SecretAccessKey": "<AWS_SECRET_ACCESS_KEY>",
    "CreateDate": "2025-04-24T13:32:14+00:00"
  }
}
  • Install Velero Utility

Install the Velero utility with Homebrew.

brew install velero
  • Install Velero
  1. Create a credentials-velero file with the access key and secret access key from the previous step.
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
  2. Install Velero.
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.10.0 \
--bucket $BUCKET \
--backup-location-config region=$REGION \
--snapshot-location-config region=$REGION \
--secret-file ./credentials-velero \
--use-volume-snapshots=true \
--features=EnableCSI \
--use-node-agent \
--privileged-node-agent
  3. Verify the installation.

Command

kubectl get all -n velero

Sample Output

Warning: kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
NAME READY STATUS RESTARTS AGE
pod/node-agent-2nbdm 1/1 Running 0 7s
pod/node-agent-ckw7f 1/1 Running 0 7s
pod/node-agent-mzxx9 1/1 Running 0 7s
pod/velero-c67f8df7b-9d47t 1/1 Running 0 7s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-agent 3 3 3 3 3 <none> 8s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/velero 1/1 1 1 8s
NAME DESIRED CURRENT READY AGE
replicaset.apps/velero-c67f8df7b 1 1 1 9s
  4. Check the Velero backup storage location details.

Command

velero get backup-location

Sample Output

NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT
default aws kubevirtbackup2025 Available 2025-05-05 12:32:25 +0530 IST ReadWrite true
  • Add KubeVirt-Velero Plugin

The KubeVirt-Velero plugin automates reliable backups of KubeVirt and CDI objects.

  1. Add the plugin.
velero plugin add quay.io/kubevirt/kubevirt-velero-plugin:v0.2.0
  2. Verify the plugin installation.
velero plugin get | grep kubevirt

Backup of KubeVirt VM#
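With the plugin in place, a backup of the namespace containing the VM can be taken with the Velero CLI. A minimal sketch (assuming the VM from this guide runs in the default namespace; the backup name vm1-backup is illustrative, and --default-volumes-to-fs-backup selects the file-system backup path this document targets):

velero backup create vm1-backup \
--include-namespaces default \
--default-volumes-to-fs-backup \
--wait

You can then inspect progress and results with velero backup describe vm1-backup --details.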

See Also#
