OpenEBS Releases
Release Date: 23 June 2025
OpenEBS is a collection of data engines and operators for creating different types of replicated and local persistent volumes for Kubernetes stateful workloads. Kubernetes volumes can be provisioned via CSI drivers or using out-of-tree provisioners. The status of the various components as of v4.3.2 is as follows:
- Local Storage (a.k.a. Local Engine)
  - Local PV Hostpath 4.3.0 (stable)
  - Local PV LVM 1.7.0 (stable)
  - Local PV ZFS 2.8.0 (stable)
- Replicated Storage (a.k.a. Replicated Engine)
  - Replicated PV Mayastor 2.9.0 (stable)
- Out-of-tree (External Storage) Provisioners
  - Local PV Hostpath 4.3.0 (stable)
- Other Components
What’s New
OpenEBS is delighted to introduce the following new features with OpenEBS 4.3.2:
General
- Kubectl OpenEBS Plugin: A new unified CLI plugin has been introduced. If you have deployed your cluster using the OpenEBS umbrella chart, you can now manage all supported storages (Local PV Hostpath, Local PV LVM, Local PV ZFS, and Replicated PV Mayastor) using a single plugin.
- One-Step Upgrade: OpenEBS now supports a unified, one-step upgrade process for all its storages. This umbrella upgrade mechanism simplifies and streamlines the upgrade procedure across the OpenEBS ecosystem.
- Enhanced Supportability: Support bundle collection is now available for all stable OpenEBS storages (Replicated PV Mayastor, Local PV Hostpath, Local PV LVM, and Local PV ZFS) using the `kubectl openebs dump system` command. This unified supportability approach enables consistent and comprehensive system state capture, significantly improving the efficiency of debugging and troubleshooting. Previously, this capability was limited to Replicated PV Mayastor via the `kubectl-mayastor` plugin.
Replicated Storage
At-Rest Encryption
You can now configure disk pools with your own encryption key, allowing volume replicas to be encrypted at rest. This is useful if you are working in environments with compliance or security requirements.
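As a rough illustration, an encrypted pool would be declared as a DiskPool resource pointing at a Kubernetes Secret that holds your key. The `encryption` stanza and its field names below are illustrative assumptions, not the confirmed schema; check the OpenEBS documentation for the exact fields.

```yaml
# Sketch of a DiskPool with at-rest encryption enabled.
# Pool name, node name, disk path, and the encryption stanza are
# hypothetical placeholders for illustration only.
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: encrypted-pool              # hypothetical pool name
  namespace: openebs
spec:
  node: worker-node-1               # hypothetical node name
  disks:
    - /dev/disk/by-id/scsi-0example # use persistent devlinks, not /dev/sdX
  encryption:                       # assumed stanza; verify against the CRD
    keySource:
      secret:
        name: pool-encryption-key   # hypothetical Secret holding the DEK
```

Note that, per the limitations below, rotation of Data Encryption Keys is not supported in this release.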
Enhancements
Replicated Storage
- Added `formatOptions` support via the storage class.
- Cordoned nodes are now preferred when removing volume replicas (Example: Scale down).
- Pool creation using non-persistent devlinks such as `/dev/sdX` is now restricted.
- You no longer need to recreate the StorageClass when restoring volumes from thick snapshots.
- New volume health information is available to better represent volume state.
- A plugin command is available to delete volumes with the `Retain` policy, useful when a volume remains after its PV is deleted.
- Full volume rebuilds are now avoided if a partial rebuild fails due to reaching the maximum rebuild limit.
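The new `formatOptions` parameter is passed through the StorageClass alongside the usual Mayastor parameters. A minimal sketch, assuming an XFS filesystem; the class name and the specific `mkfs` flag are illustrative, not prescribed values:

```yaml
# Sketch of a Replicated PV Mayastor StorageClass using formatOptions.
# The name and the flag value are hypothetical examples.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-custom-format   # hypothetical name
provisioner: io.openebs.csi-mayastor
parameters:
  repl: "2"
  protocol: nvmf
  fsType: xfs
  formatOptions: "-K"            # example: mkfs.xfs -K skips discard at format time
```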
Local Storage
- For Local PV Hostpath, support has been added to specify file permissions for PVC hostpaths.
- For Local PV LVM, support for `formatOptions` has been added via the storage class, allowing you to format devices with custom `mkfs` options.
- For Local PV LVM, cordoned Kubernetes nodes are now excluded while provisioning volumes.
- For Local PV ZFS, a backup garbage collector has been added to automatically clean up stale or orphaned backup resources.
- For Local PV ZFS, labeling has been improved across all components, including logging-related labels, to help you maintain and observe Helm charts more effectively.
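For Local PV LVM, the `formatOptions` parameter sits in the StorageClass next to the existing LVM parameters. A minimal sketch, assuming an ext4 filesystem; the class name and volume group are hypothetical placeholders:

```yaml
# Sketch of a Local PV LVM StorageClass with custom mkfs options.
# Name and volgroup are hypothetical; the flag is an mkfs.ext4 example.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm-custom-format  # hypothetical name
provisioner: local.csi.openebs.io
parameters:
  storage: lvm
  volgroup: lvmvg                  # hypothetical volume group name
  fsType: ext4
  formatOptions: "-m 1"            # example: reserve 1% of blocks for root
volumeBindingMode: WaitForFirstConsumer
```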
Fixes
Local Storage
For Local PV ZFS:
- The quota property is now correctly retained during upgrades.
- Volume restores now maintain backward compatibility for `quotatype` values.
- Fixed a crash in the controller caused by unhandled errors in the CSI `NodeGetInfo` call.
- The gRPC server now exits cleanly when receiving SIGTERM or SIGINT signals.
- The agent now uses the OpenEBS `lib-csi` Kubernetes client to load `kubeconfig` more reliably.
- The `--plugin` CLI flag now only accepts valid values: `controller` and `agent`.
Known Issues
Known Issues - Replicated Storage
- DiskPool capacity expansion is not supported as of v2.9.0.
- If a node hosting a pod reboots and the pod lacks a controller (such as a Deployment), the volume unpublish operation may not be triggered. This causes the control plane to assume the volume is still in use, which leads to `fsfreeze` operation failures during snapshots. Workaround: Recreate or rebind the pod to ensure proper volume mounting.
- If a disk backing a DiskPool fails or is removed (Example: A cloud disk detaches), the failure is not clearly reflected in the system. As a result, the volume may remain in a degraded state for an extended period.
- Large pools (Example: 10–20TiB) may hang during recovery after a dirty shutdown of the node hosting the io-engine.
- Provisioning very large filesystem volumes (Example: More than 15TiB) may fail due to filesystem formatting timeouts or hangs.
- When using Replicated PV Mayastor on Oracle Linux 9 (kernel 5.14.x), servers may unexpectedly reboot during volume detach operations due to a kernel bug (CVE-2024-53170) in the block layer. This issue is not caused by Mayastor but is triggered more frequently because of its NVMe-TCP connection lifecycle. Workaround: Upgrade to kernel 6.11.11, 6.12.2, or later, which includes the fix.
Known Issues - Local Storage
- For Local PV LVM and Local PV ZFS, you may face issues on single-node setups post-upgrade where the controller pod does not enter the `Running` state due to changes in the manifest and missing affinity rules. Workaround: Delete the old controller pod to allow the new one to be scheduled. This does not occur when upgrading from the previous release.
- For Local PV LVM, thin pool capacity is not unmapped or reclaimed, and it is not tracked in the `lvmnode` custom resource. This may result in unexpected behavior.
Limitations
Limitations - Replicated Storage
- The IO engine fully utilizes all allocated CPU cores regardless of the actual I/O load, as it runs a poller at full speed.
- Each DiskPool is limited to a single block device and cannot span across multiple devices.
- The data-at-rest encryption feature does not support rotation of Data Encryption Keys (DEKs).
Related Information
OpenEBS release notes are maintained in the GitHub repositories alongside the code and releases. For a summary of changes across all components in each release, and to view the full release notes, see OpenEBS Release 4.3.2.
See the version-specific releases to view legacy OpenEBS releases.