OpenEBS Benefits
Tip: For information on how OpenEBS is used in production, visit the use cases section or read what OpenEBS Adopters have shared.
Containers and Kubernetes have disrupted the way platforms and technology stacks are built; OpenEBS is the result of applying the patterns of containers and container orchestration to storage software. The benefits of using OpenEBS are therefore in line with the benefits of moving to cloud native architectures. A few benefits worth highlighting include:
- Open Source Cloud Native storage for Kubernetes
- Granular policies per stateful workload
- Avoid Cloud Lock-in
- Reduced storage TCO up to 50%
- Native HCI on Kubernetes
- High availability - with Lower Blast Radius
Open Source Cloud Native Storage for Kubernetes
OpenEBS is cloud native storage for stateful applications on Kubernetes, where "cloud native" means following a loosely coupled architecture. As such, the usual benefits of cloud native, loosely coupled architectures apply: for example, developers and DevOps architects can use standard Kubernetes skills and utilities to configure, use, scale, customize, and manage OpenEBS itself.
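As a minimal sketch of what that looks like in practice, the claim below requests storage through an OpenEBS StorageClass exactly like any other Kubernetes storage. The claim name is hypothetical, and `openebs-hostpath` is assumed to be the LocalPV hostpath class that a default OpenEBS installation typically creates; verify the classes available in your cluster with `kubectl get storageclass`.

```yaml
# A standard Kubernetes PVC; nothing OpenEBS-specific beyond the class name.
# "openebs-hostpath" is assumed to be the default LocalPV hostpath class
# created by a typical OpenEBS install - verify with `kubectl get storageclass`.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-app-data        # hypothetical claim name
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

The claim is then referenced from a Pod or StatefulSet volume as usual, and the OpenEBS control plane itself is inspected with ordinary tooling, for example `kubectl get pods -n openebs` in a typical installation.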
Some key aspects that make OpenEBS different from traditional storage solutions:
- Built using a microservices architecture, like the applications it serves. OpenEBS is itself deployed as a set of containers on Kubernetes worker nodes and uses Kubernetes itself to orchestrate and manage its components.
- Built completely in userspace, making it highly portable across operating systems and platforms.
- Completely intent-driven, inheriting the same principles that drive the ease of use with Kubernetes.
- OpenEBS supports a range of storage engines so that developers can deploy the storage technology appropriate to their application design objectives. Distributed applications like Cassandra can use a LocalPV engine for lowest-latency writes. Monolithic applications like MySQL and PostgreSQL can use Mayastor, built using NVMe and SPDK, or cStor, based on ZFS, for resilience. Streaming applications like Kafka can use the NVMe engine Mayastor for best performance in edge environments or, again, a LocalPV option (see the StorageClass sketch after this list).
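The sketch below shows how an engine is selected per workload simply by choosing a StorageClass: a low-latency LocalPV hostpath class alongside a replicated Mayastor class. The class names are hypothetical, and the provisioner names and parameters follow the commonly documented OpenEBS defaults (`openebs.io/local`, `io.openebs.csi-mayastor`, `repl`, `protocol`); verify them against the release you have installed.

```yaml
# Low-latency local storage for a distributed app (e.g. Cassandra) that
# replicates its own data. Provisioner and annotations follow the documented
# LocalPV hostpath defaults - verify against your installed release.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-low-latency          # hypothetical class name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
---
# Replicated NVMe-backed storage for a monolithic app (e.g. PostgreSQL).
# "repl" and "protocol" are the commonly documented Mayastor parameters.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-replicated        # hypothetical class name
parameters:
  repl: "3"
  protocol: "nvmf"
provisioner: io.openebs.csi-mayastor
```

A workload picks its engine by referencing the corresponding class in its PersistentVolumeClaim; no change to the application itself is required.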
Avoid Cloud Lock-in
Even though Kubernetes provides an increasingly ubiquitous control plane, concerns about data gravity resulting in lock-in and other challenges remain. With OpenEBS, data can be written to the OpenEBS layer - when cStor, Jiva, or Mayastor is used - so that OpenEBS acts as a data abstraction layer. Using this data abstraction layer, data can be moved much more easily among Kubernetes environments, whether they are on premises and attached to traditional storage systems or in the cloud and attached to local storage or managed storage services.
Granular Policies Per Stateful Workload
One reason for the rise of cloud native, loosely coupled architectures is that they enable loosely coupled teams. These small teams are enabled by cloud native architectures to move faster, free of most cross-functional dependencies, thereby unlocking innovation and customer responsiveness. OpenEBS also unlocks small teams by enabling them to retain their autonomy by deploying their own storage system. Practically, this means storage parameters are monitored on a per-workload and per-volume basis, and storage policies and settings are declared to achieve the desired result for a given workload. The policies are tested and tuned with only that particular workload in mind, while other workloads are unaffected. Workloads - and teams - remain loosely coupled.
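As a sketch of what a per-workload policy can look like, the StorageClass below is declared for one team's database only; a second class for another workload can carry entirely different settings without touching this one. It assumes the cStor CSI engine and uses its commonly documented `cas-type`, `cstorPoolCluster`, and `replicaCount` parameters; the class name and the pool name `cstor-pool-team-a` are hypothetical.

```yaml
# Policy declared for one team's workload only; other workloads keep their
# own classes. Parameters follow the commonly documented cStor CSI options,
# and "cstor-pool-team-a" is a hypothetical pool cluster name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: team-a-postgres            # hypothetical class name
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-pool-team-a
  replicaCount: "3"
```

Tuning this class - say, lowering the replica count for a workload that handles its own replication - changes behavior only for volumes bound to it.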
Reduced Storage TCO up to 50%
On most clouds, block storage is charged based on how much is purchased, not on how much is used; capacity is often over-provisioned to achieve higher performance and to remove the risk of disruption when capacity is fully utilized. The thin-provisioning capabilities of OpenEBS can pool local storage or cloud storage and then grow the data volumes of stateful applications as needed. Storage can be added on the fly without disruption to the volumes exposed to the workloads or applications. Certain users have reported savings in excess of 60% due to the use of thin provisioning in OpenEBS.
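For example, growing a volume as the application's data grows is an ordinary Kubernetes operation: with a StorageClass that sets `allowVolumeExpansion: true` and an engine that supports online expansion, the claim is simply updated to the new size. The claim name below is hypothetical, and the class reuses the sketch from the previous section.

```yaml
# Growing an existing claim in place: only the requested size changes.
# Requires a StorageClass with allowVolumeExpansion: true and an engine
# that supports online expansion - verify for your installed engine.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-app-data              # hypothetical existing claim
spec:
  storageClassName: team-a-postgres
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi                # raised from a smaller original request
```

Applying the updated claim (or editing it with `kubectl edit pvc demo-app-data`) triggers the expansion without detaching the volume from the workload.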
Natively Hyperconverged on Kubernetes
Node Disk Manager (NDM) in OpenEBS enables disk management in a Kubernetes way, using Kubernetes constructs. With NDM and OpenEBS, nodes in a Kubernetes cluster can be scaled horizontally without worrying about managing the persistent storage needs of stateful applications. The storage needs of a cluster (capacity planning, performance planning, and volume management) can be automated using the volume and pool policies of OpenEBS, thanks in part to the role NDM plays in identifying and managing underlying storage resources, including local disks and cloud volumes.
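NDM surfaces each discovered disk as a BlockDevice custom resource, so disks can be listed and reserved with the same tooling as any other Kubernetes object (for example `kubectl get blockdevices -n openebs` in a typical installation). The claim below is a sketch using the `openebs.io/v1alpha1` BlockDeviceClaim type; the name and requested capacity are illustrative.

```yaml
# Reserving a discovered disk through NDM; the claim name and requested
# capacity are illustrative. NDM matches the claim to an unclaimed BlockDevice.
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
  name: example-claim
  namespace: openebs
spec:
  resources:
    requests:
      storage: 100Gi
```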
High Availability
Because OpenEBS follows the Container Attached Storage (CAS) architecture, upon node failure the OpenEBS controller is rescheduled by Kubernetes, while the underlying data is protected via one or more replicas. More importantly - because each workload can utilize its own OpenEBS - there is no risk of a system-wide outage due to the loss of storage. For example, the metadata of a volume is not centralized, where it might be subject to a catastrophic generalized outage as in many shared storage systems; rather, the metadata is kept local to the volume. Losing any node results in the loss of only the volume replicas present on that node. As the volume data can be synchronously replicated to other nodes, in the event of a node failure the data continues to be available at the same performance levels.