# Using OpenEBS as TSDB for Prometheus

## Introduction

Every DevOps engineer and SRE looks for an easy way to deploy applications on Kubernetes. After a successful installation, the next question is how easily those applications can be monitored so that their availability can be maintained in real time. By monitoring the application, teams can take proactive measures before an issue arises. Prometheus is the most widely used application for scraping cloud native application metrics. Prometheus and OpenEBS together provide a complete open source stack for monitoring.
In this document, we explain how you can easily set up a monitoring environment in your K8s cluster using Prometheus, with OpenEBS Local PV as the persistent storage for the metrics. This guide covers the installation of Prometheus using Helm on dynamically provisioned OpenEBS volumes.
## Deployment model

We will attach two 100G disks to each node. The disks will be consumed by the Prometheus and Alertmanager instances using the OpenEBS Local PV device storage engine. The recommended configuration is at least three nodes, with two unclaimed external disks attached per node.
## Configuration workflow

### Install OpenEBS

If OpenEBS is not installed in your K8s cluster, this can be done from here. If OpenEBS is already installed, go to the next step.
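If you prefer Helm, an OpenEBS installation typically looks like the sketch below; the repository URL, chart name, and `openebs` namespace are assumptions based on the standard OpenEBS Helm chart, so verify them against the linked installation instructions.

```shell
# Add the OpenEBS Helm repository and install OpenEBS into its own namespace
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace
```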
### Select OpenEBS storage engine

A storage engine is the data plane component of the IO path of a Persistent Volume. In the CAS architecture, users can choose different data planes for different application workloads based on a configuration policy. OpenEBS provides several types of storage engines; choose the engine that suits your application requirements and the storage available on your Kubernetes nodes. More information can be found here.
After installing OpenEBS, choose the OpenEBS storage engine as per your requirement.
- Choose cStor if you are looking for replicated storage and other enterprise-grade features such as volume expansion, backup and restore, etc. The steps for Prometheus Operator installation using the OpenEBS cStor storage engine can be found here.
- Choose OpenEBS Local PV if you only want to use Prometheus for generating alerts; in that case you need low-latency storage rather than replicated storage.
In this document, we are deploying Prometheus Operator using OpenEBS Local PV device.
### Configure OpenEBS Local PV StorageClass

There are two ways to use OpenEBS Local PV.

- `openebs-hostpath`: This option creates Kubernetes Persistent Volumes that store data in an OS host path directory at `/var/openebs/<prometheus-pv-name>/`. Select this option if you do not have any additional block devices attached to your Kubernetes nodes. If you would like to customize the directory where data is saved, create a new OpenEBS Local PV StorageClass using the instructions mentioned here.
- `openebs-device`: This option creates Kubernetes Local PVs using the block devices attached to the node. Select this option when you want to dedicate a complete block device on a node to the Prometheus application pod and another device to the Alertmanager pod. You can customize which devices are discovered and managed by OpenEBS using the instructions here.
The StorageClass `openebs-device` has been chosen to deploy the Prometheus Operator in the Kubernetes cluster.
Note: Ensure that two disks of the required capacity are attached to the corresponding nodes prior to the Prometheus installation. In this example, we have added two 100G disks to each node.
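Before installing Prometheus, you can confirm that the disks have been discovered by OpenEBS Node Disk Manager (NDM) and are still unclaimed, for example (assuming OpenEBS is installed in the `openebs` namespace):

```shell
# The newly attached disks should appear with CLAIMSTATE "Unclaimed"
kubectl get blockdevices -n openebs
```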
### Installing Prometheus Operator

In this section, we will install the Prometheus Operator and then deploy the latest available version of the Prometheus application. The following is a high-level overview:
- Label the nodes
- Fetch and update the Prometheus Helm repository
- Configure the Prometheus Helm `values.yaml`
- Create a namespace for installing the application
- Install the Prometheus Operator
#### Label the nodes

Label the nodes with a custom label so that the Prometheus application is deployed only on the matching nodes. Label each node with `node=prometheus`. We have used this label in the node affinity rules for the Prometheus and Alertmanager instances, which ensures that the Prometheus and Alertmanager pods are scheduled only on the labelled nodes.
In this example, we used the following command to label our nodes.
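A typical labelling command looks like the following; the node names are placeholders, so substitute the names reported by `kubectl get nodes`.

```shell
# Label the three nodes that will host the Prometheus and Alertmanager pods
kubectl label nodes <node-name-1> node=prometheus
kubectl label nodes <node-name-2> node=prometheus
kubectl label nodes <node-name-3> node=prometheus
```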
#### Fetch and update the Prometheus Helm repository
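This guide assumes the kube-prometheus-stack chart from the prometheus-community Helm repository, which matches the service names referenced later in this document. Adding and refreshing the repository looks like this:

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```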
#### Configure Prometheus Helm values.yaml

Download `values.yaml`, which we will modify before installing Prometheus using Helm.
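One way to obtain a local copy of the chart's default `values.yaml` for editing (assuming the kube-prometheus-stack chart added above) is:

```shell
helm show values prometheus-community/kube-prometheus-stack > values.yaml
```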
Perform the following changes (an illustrative snippet follows this list):

- Update `fullnameOverride: "new"`.
- Update `prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues` to `false`.
- Update `prometheus.prometheusSpec.replicas` to `3`.
- Update `prometheus.prometheusSpec.podAntiAffinity` to `hard`.
- Uncomment the `prometheus.prometheusSpec.affinity` spec in `values.yaml` to enable node affinity for Prometheus, using the custom node label configured in the previous section. Since we used `node=prometheus` for labelling the nodes, mention the same label in the node affinity section of the Prometheus deployment.
- Uncomment the `prometheus.prometheusSpec.storageSpec` spec in `values.yaml` and change the StorageClass name and the volume capacity of Prometheus to the required values. In this case, the StorageClass used is `openebs-device` and the volume capacity provided is `90Gi`. Ensure that the capacity is less than or equal to the maximum capacity of the block device that will be used.
- (Optional, in the case of GKE) Update `prometheusOperator.admissionWebhooks.enabled` to `false`.
- Update `prometheusOperator.tls.enabled` to `false`.
- Update `alertmanager.alertmanagerSpec.replicas` to `3`.
- Update `alertmanager.alertmanagerSpec.podAntiAffinity` to `hard`.
- Uncomment the `alertmanager.alertmanagerSpec.affinity` spec in `values.yaml` to enable node affinity for Alertmanager, using the custom node label configured in the previous section. Since we used `node=prometheus` for labelling the nodes, mention the same label in the node affinity section of Alertmanager.
- Uncomment the `alertmanager.alertmanagerSpec.storage` spec in `values.yaml` and change the StorageClass name and the volume capacity of Alertmanager to the required values. In this case, the StorageClass used is `openebs-device` and the volume capacity provided is `90Gi`.
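The following is a minimal sketch of how the relevant Prometheus portion of `values.yaml` might look after these edits; exact keys and indentation depend on the kube-prometheus-stack chart version, so treat it as illustrative rather than a drop-in replacement (the Alertmanager section is edited analogously under `alertmanager.alertmanagerSpec`):

```yaml
prometheus:
  prometheusSpec:
    replicas: 3
    podAntiAffinity: "hard"
    serviceMonitorSelectorNilUsesHelmValues: false
    # Schedule Prometheus only on nodes labelled node=prometheus
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: node
              operator: In
              values:
              - prometheus
    # Persist the Prometheus TSDB on an OpenEBS Local PV device volume
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: openebs-device
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 90Gi
```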
#### Create namespace for installing Prometheus operator
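For example, to create the `monitoring` namespace used in the rest of this guide:

```shell
kubectl create namespace monitoring
```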
#### Install Prometheus operator

The following command installs both the Prometheus and Grafana components.
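A typical installation command is sketched below; the release name `prometheus` is an assumption chosen to match the `prometheus-kube-prometheus-prometheus` and `prometheus-grafana` service names referenced later in this guide.

```shell
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring -f values.yaml
```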
Note: Check the compatibility of the Prometheus stack with your Kubernetes version from here.
Verify that the Prometheus-related pods are installed in the `monitoring` namespace:
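For example:

```shell
kubectl get pods -n monitoring
```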
Verify that the Prometheus-related PVCs are created in the `monitoring` namespace:
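Similarly:

```shell
kubectl get pvc -n monitoring
```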
Verify that the Prometheus-related services are created in the `monitoring` namespace:
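Likewise:

```shell
kubectl get svc -n monitoring
```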
For simplicity in testing the deployment, we are going to use NodePort for the prometheus-kube-prometheus-prometheus and prometheus-grafana services. Please consider using LoadBalancer or Ingress, instead of NodePort, for production deployments.
Change the Prometheus service type from ClusterIP to NodePort:
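One way to do this is with `kubectl patch`; alternatively, edit the service with `kubectl edit svc prometheus-kube-prometheus-prometheus -n monitoring` and change the `type` field.

```shell
kubectl patch svc prometheus-kube-prometheus-prometheus -n monitoring \
  -p '{"spec": {"type": "NodePort"}}'
```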
Change the prometheus-grafana service type from ClusterIP to LoadBalancer/NodePort:
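For example, to switch it to NodePort:

```shell
kubectl patch svc prometheus-grafana -n monitoring -p '{"spec": {"type": "NodePort"}}'
```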
Note: If you need to access Prometheus and Grafana from outside the network, change the service type, add a new service that uses a LoadBalancer, or create Ingress resources for production deployments.
Sample output after making the above two changes to the services:
### Accessing Prometheus and Grafana

Get the node details using the following command:
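For example:

```shell
# The EXTERNAL-IP column shows the address to use in the URLs below
kubectl get nodes -o wide
```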
Verify that the Prometheus service is accessible over a web browser using http://<any_node_external-ip>:<NodePort>
Example:
Note: It may be necessary to allow the NodePort traffic in the firewall/security groups to access the above Grafana and Prometheus URLs in a web browser.
Launch Grafana using the node's external IP and the corresponding NodePort of the prometheus-grafana service:
http://<any_node_external-ip>:<Grafana_SVC_NodePort>
Example:
Grafana Credentials:
Username: admin
Password: prom-operator
The password can also be obtained using the following command:
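A sketch of that command, assuming the Helm release name `prometheus` used earlier (the Grafana admin password is stored in the `<release-name>-grafana` secret):

```shell
kubectl get secret prometheus-grafana -n monitoring \
  -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```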
The above credentials need to be provided when you access the Grafana console. Log in to your Grafana console using these credentials.
Users can import a Grafana dashboard for Prometheus in three ways.
The first method is to provide the Grafana ID of the corresponding dashboard and then load it. Find the Grafana dashboard ID for Prometheus, enter that ID, and load it. The Grafana dashboard ID for Prometheus is 3681.
Another approach is to download the following Grafana dashboard JSON file for Prometheus, paste its contents into the console, and then load it.
The other way to monitor the Prometheus Operator is by using the built-in Prometheus dashboard. This can be found by searching the Grafana dashboards and locating the Prometheus dashboard under the General category.
## See Also

- OpenEBS use cases
- Understanding NDM
- Local PV concepts
- Local PV User guide