Volume Snapshots
Volume snapshots are copies of a persistent volume at a specific point in time. They can be used to restore a volume to a previous state or to provision a new volume. Replicated PV Mayastor supports industry-standard copy-on-write (COW) snapshots, a widely used approach that records only the blocks that change. Replicated PV Mayastor's incremental snapshot capability enhances data migration and portability in Kubernetes clusters across different cloud providers or data centers. Using standard kubectl commands, you can seamlessly perform operations on snapshots and clones in a fully Kubernetes-native manner.
Use cases for volume snapshots include:
- Efficient replication for backups
- Utilization of clones for troubleshooting
- Development against a read-only copy of data
Volume snapshots allow the creation of read-only incremental copies of volumes, enabling you to maintain a history of your data. These volume snapshots possess the following characteristics:
- Consistency: The data stored within a snapshot remains consistent across all replicas of the volume, whether local or remote.
- Immutability: Once a snapshot is successfully created, the data contained within it cannot be modified.
Currently, Replicated PV Mayastor supports the following operations related to volume snapshots:
- Creating a snapshot for a PVC
- Listing available snapshots for a PVC
- Deleting a snapshot for a PVC
info
Unlike volume replicas, snapshots cannot be rebuilt in the event of a node failure.
Prerequisites#
Install and configure Replicated PV Mayastor by following the steps given in the OpenEBS Installation documentation and create disk pools.
- Create a PVC by following the steps given in the Deploy a test Application documentation and check if the status of the PVC is Bound.
note
Copy the PVC name, for example, ms-volume-claim.
- Create an application by following the instructions provided in the Deploy an Application documentation.
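Once the prerequisites are in place, the PVC's status can be confirmed with kubectl; the claim name follows the ms-volume-claim example above:

```shell
# Verify the PVC from the prerequisites is Bound before proceeding
kubectl get pvc ms-volume-claim
```

The STATUS column of the output should read Bound.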
Create a Snapshot#
You can create a snapshot (with or without an application) using the PVC.
Follow the steps below to create a volume snapshot:
Step 1: Create a Kubernetes VolumeSnapshotClass object#
| Parameters | Type | Description |
|---|---|---|
| Name | String | Custom name of the snapshot class |
| Driver | String | CSI provisioner of the storage provider being requested to create a snapshot (io.openebs.csi-mayastor) |
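Based on the parameters above, a minimal VolumeSnapshotClass manifest might look like the following; the metadata name is illustrative, while the driver value comes from the table:

```yaml
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
metadata:
  name: csi-mayastor-snapshotclass   # illustrative custom name
driver: io.openebs.csi-mayastor      # Mayastor CSI provisioner
deletionPolicy: Delete               # delete the backing snapshot when the object is removed
```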
Apply the VolumeSnapshotClass manifest with `kubectl apply -f <file>` and verify that it is listed by `kubectl get volumesnapshotclass`.
Step 2: Create the Snapshot#
| Parameters | Type | Description |
|---|---|---|
| Name | String | Name of the snapshot |
| VolumeSnapshotClassName | String | Name of the created snapshot class |
| PersistentVolumeClaimName | String | Name of the PVC. Example: ms-volume-claim |
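A minimal VolumeSnapshot manifest matching the parameters above might look like this; the snapshot and class names are illustrative, and the PVC name follows the prerequisites example:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mayastor-pvc-snap                              # illustrative snapshot name
spec:
  volumeSnapshotClassName: csi-mayastor-snapshotclass  # assumed name of the class from Step 1
  source:
    persistentVolumeClaimName: ms-volume-claim         # PVC to snapshot
```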
Apply the VolumeSnapshot manifest with `kubectl apply -f <file>` and verify that it is listed by `kubectl get volumesnapshot`.
note
When a snapshot is created on a thick-provisioned volume, the storage system automatically converts it into a thin-provisioned volume.
List Snapshots#
To retrieve the details of the created snapshots, use the following command:
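The snapshot objects and the content objects backing them can be listed with standard kubectl commands:

```shell
# List VolumeSnapshot objects in the current namespace
kubectl get volumesnapshot
# List the cluster-scoped VolumeSnapshotContent objects backing them
kubectl get volumesnapshotcontent
```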
Delete a Snapshot#
To delete a snapshot, use the following command:
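Deletion uses the standard kubectl verb; the snapshot name here is illustrative:

```shell
# Delete a VolumeSnapshot by name
kubectl delete volumesnapshot mayastor-pvc-snap
```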
Filesystem Consistent Snapshot#
A filesystem-consistent snapshot ensures that the filesystem captured in the snapshot remains consistent while the volume snapshot is taken. Before taking the snapshot, the csi-node plugin runs the FIFREEZE and FITHAW ioctls on the underlying filesystem to flush and quiesce any in-flight I/O. After the snapshot has been created, I/O is resumed.
By default, Mayastor volume snapshots are filesystem-consistent. If any part of snapshot creation or either ioctl fails, the whole operation fails and is retried by the Mayastor CSI controller without any user intervention.
You can disable filesystem consistency using the VolumeSnapshotClass parameter quiesceFS. See the example below to disable the feature:
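As a sketch, disabling the feature adds the parameter to the VolumeSnapshotClass; the exact parameter key casing (`quiesceFs`) and the value `none` are assumptions that should be confirmed against your Mayastor release:

```yaml
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
metadata:
  name: csi-mayastor-snapshotclass-nofreeze   # illustrative name
driver: io.openebs.csi-mayastor
deletionPolicy: Delete
parameters:
  quiesceFs: "none"   # assumed value that skips the FIFREEZE/FITHAW step
```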
Operational Considerations: Snapshot Capacity and Commitment#
When using VolumeSnapshots with Replicated PV Mayastor, snapshot, volume, and replica creation are governed by configurable capacity commitment thresholds. These commitment limits ensure that storage pools maintain sufficient free space to support copy-on-write operations, replica placement, and snapshot creation safely.
Understanding these limits is important to prevent unexpected failures during snapshot, backup, or volume provisioning operations.
Commitment Thresholds Overview#
OpenEBS Replicated PV Mayastor enforces the following commitment thresholds for thin-provisioned storage pools:
| Commitment Type | Description | Default |
|---|---|---|
poolCommitment | Maximum allowed overcommitment of a storage pool | 250% |
volumeCommitment | Minimum required free space to create replicas for existing volumes | 40% |
volumeCommitmentInitial | Minimum required free space to create replicas for new volumes | 40% |
snapshotCommitment | Minimum required free space to create snapshots | 40% |
These thresholds are evaluated independently on each replica pool.
Snapshot Commitment Behavior#
Snapshot creation requires that each replica pool has sufficient free space relative to the volume size. This requirement is controlled by the snapshotCommitment threshold.
Example:
- SnapshotCommitment = 40%
- Volume size = 100 GiB
- Required free space per replica pool = 40 GiB
If any replica pool has less than 40 GiB of free space, snapshot creation fails. This check ensures that sufficient space is available to accommodate copy-on-write operations after snapshot creation.
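The check above can be sketched as simple shell arithmetic; the volume size and threshold mirror the example values, and the pool free space is a hypothetical figure for illustration:

```shell
# Snapshot commitment check: each replica pool needs free space of
# at least snapshotCommitment% of the volume size.
volume_gib=100        # volume size from the example
commitment_pct=40     # snapshotCommitment threshold
pool_free_gib=35      # hypothetical free space on one replica pool

required_gib=$(( volume_gib * commitment_pct / 100 ))
echo "required free space per replica pool: ${required_gib} GiB"

if [ "$pool_free_gib" -ge "$required_gib" ]; then
  echo "snapshot creation allowed on this pool"
else
  echo "snapshot creation would fail on this pool"
fi
```

With 35 GiB free against a 40 GiB requirement, this pool would block snapshot creation.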
Pool Commitment Impact on Snapshots and Volumes#
The poolCommitment threshold defines how much a pool can be overcommitted when thin provisioning is enabled.
Example:
- Pool size = 10 GiB
- PoolCommitment = 250%
- Maximum logical allocation allowed = 25 GiB
If the pool reaches this commitment limit, the following operations may fail:
- Volume creation
- Replica creation
- Snapshot creation
- Backup operations dependent on snapshots
This mechanism prevents storage pools from exceeding safe operating limits.
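The overcommitment ceiling follows the same arithmetic; the values mirror the example above:

```shell
# Pool overcommitment ceiling: total logical allocation of thin volumes
# must stay under poolCommitment% of the pool size.
pool_gib=10               # physical pool size
pool_commitment_pct=250   # poolCommitment threshold

max_logical_gib=$(( pool_gib * pool_commitment_pct / 100 ))
echo "maximum logical allocation: ${max_logical_gib} GiB"
```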
Behavior with Thick-Provisioned Volumes#
When a snapshot is created for a thick-provisioned volume, the volume is internally converted to thin provisioning.
As part of this process:
- The snapshot's logical size equals the volume size
- The pool must reserve committed capacity equal to the volume size
- Even though physical space is not immediately consumed, committed capacity increases
If this increase causes the pool to exceed its commitment thresholds, snapshot creation may fail. This behavior ensures safe operation and prevents unexpected storage exhaustion.
Example: Snapshot Commitment Impact on Backups#
The following example illustrates how snapshot commitment can affect snapshot creation when replica pool capacity is constrained.
Assumptions:
- Pool size: 10 GiB
- SnapshotCommitment: 40%
- Thick-provisioned volumes
| Volume Size | Free Space per Pool | Required Free Space (40%) | Snapshot Result |
|---|---|---|---|
| 7 GiB | 3 GiB | 2.8 GiB | Successful |
| 8 GiB | 2 GiB | 3.2 GiB | Failed |
| 9 GiB | 1 GiB | 3.6 GiB | Failed |
In this scenario, snapshot creation succeeds only when all replica pools meet the snapshot commitment requirement. If any replica pool fails the check, the snapshot and therefore the backup fails.
Default Commitment Values and Customization#
Replicated PV Mayastor enforces snapshot and pool capacity checks using configurable Helm values.
Example Helm parameters:
--set mayastor.agents.core.capacity.thin.poolCommitment=250%
--set mayastor.agents.core.capacity.thin.volumeCommitment=40%
--set mayastor.agents.core.capacity.thin.volumeCommitmentInitial=40%
--set mayastor.agents.core.capacity.thin.snapshotCommitment=40%
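Putting these together, the thresholds could be tuned during a Helm upgrade roughly as follows; the release name, chart reference, namespace, and the chosen values are assumptions for illustration:

```shell
# Illustrative: raise the overcommitment ceiling and snapshot headroom
# during an upgrade, keeping all other values unchanged.
helm upgrade openebs openebs/openebs -n openebs --reuse-values \
  --set mayastor.agents.core.capacity.thin.poolCommitment=300% \
  --set mayastor.agents.core.capacity.thin.snapshotCommitment=50%
```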
The default values are suitable for most environments and provide a good balance between storage utilization and operational safety.
However, environments with large volumes, frequent snapshots, or aggressive thin provisioning may require tuning these values during installation or upgrade using Helm parameters. Any adjustments should be accompanied by careful capacity planning and continuous monitoring of DiskPool utilization to ensure reliable snapshot creation and uninterrupted backup operations.