# OpenEBS for Elasticsearch
## Introduction

EFK is the most popular cloud native logging solution on Kubernetes for on-premise as well as cloud platforms. In the EFK stack, Elasticsearch is a stateful application that needs persistent storage. Logs of production applications need to be stored for a long time, which requires reliable and highly available storage. OpenEBS and EFK together provide a complete logging solution.
This guide explains the basic installation of the Elasticsearch operator on OpenEBS Local PV devices using KUDO. We will also install Fluentd and Kibana to form the EFK stack.
Advantages of using OpenEBS LocalPV for the Elasticsearch database:
- All log data is stored locally and managed natively by Kubernetes
- Low latency and better performance
## Deployment model

The Local PV volume will be provisioned on a node where the Elasticsearch components are scheduled, using one matching unclaimed block device for each of them. Each component then uses the entire block device for storing data; no other application can use that device. If users have a limited number of block devices attached to some nodes, they can use a `nodeSelector` in the application YAML to provision the application on particular nodes where the available block devices are present. The recommended configuration is at least three nodes, with two unclaimed external disks attached per node.
The Elasticsearch deployment has the following components, which will use the OpenEBS LocalPV Devices for storage:
- coordinator pod: 1
- master pods: 3
- data pods: 2
Note: Elasticsearch can be deployed either as a `Deployment` or as a `StatefulSet`. When Elasticsearch is deployed as a `StatefulSet`, you don't need to replicate the data again at the OpenEBS level. When Elasticsearch is deployed as a `Deployment`, consider three OpenEBS replicas and choose the StorageClass accordingly.
## Configuration workflow

- Install OpenEBS
- Select OpenEBS storage engine
- Configure OpenEBS Local PV StorageClass
- Installing KUDO Operator
- Installing and Accessing Elasticsearch
- Installing Kibana
- Installing Fluentd-ES
## Install OpenEBS

If OpenEBS is not installed in your K8s cluster, this can be done from here. If OpenEBS is already installed, go to the next step.
## Select OpenEBS storage engine

A storage engine is the data plane component of the IO path of a Persistent Volume. In CAS architecture, users can choose different data planes for different application workloads based on a configuration policy. OpenEBS provides different types of storage engines; choose the engine that suits your application requirements and the storage available on your Kubernetes nodes. More information can be read from here.
After OpenEBS installation, choose the OpenEBS storage engine as per your requirement.
Choose cStor if you are looking for replicated storage and other enterprise-grade features such as volume expansion, backup and restore, etc. The steps for Elasticsearch installation using the OpenEBS cStor storage engine can be obtained from here.
Choose OpenEBS Local PV if you are looking for direct-attached storage, low-latency writes, or if the application manages its own data replication.
In this document, we are deploying Elasticsearch using OpenEBS Local PV.
## Configure OpenEBS Local PV StorageClass

Depending on the type of storage attached to your Kubernetes worker nodes, you can select from different flavors of Dynamic Local PV: Hostpath, Device, LVM, ZFS, or Rawfile. For more information, you can read here.
The StorageClass `openebs-device` has been chosen to deploy Elasticsearch in the Kubernetes cluster.
Note: Ensure that you have two disks with the required capacity attached to the corresponding nodes prior to Elasticsearch installation. In this example, we have attached two 100G disks to each node.
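You can confirm that OpenEBS NDM has discovered the disks as unclaimed block devices (assuming OpenEBS is installed in the `openebs` namespace):

```bash
# Each attached disk should appear as a BlockDevice in the Unclaimed state
kubectl get blockdevice -n openebs
```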
## Installing KUDO Operator

In this section, we will install the KUDO operator. We will later deploy the latest available version of Elasticsearch using KUDO.
Use the latest stable version of KUDO CLI. The latest version of KUDO can be checked from here.
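The CLI can be installed, for example, with Homebrew (one of the methods described in the KUDO docs; Linux binaries are available from the KUDO GitHub releases page):

```bash
# Install the KUDO CLI as a kubectl plugin (macOS / Homebrew)
brew tap kudobuilder/tap
brew install kudo-cli

# Confirm the plugin is installed
kubectl kudo version
```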
## Verify if Cert-manager is installed

To install the KUDO operator, Cert-manager must already be installed in your cluster. If not, install Cert-manager; the instructions can be found here. Since our K8s version is v1.18.12, we have installed Cert-manager using the following command.
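A sketch of a typical install, applying the static manifest for a cert-manager release (v1.1.0 here is an assumption; pick a release compatible with your Kubernetes version):

```bash
# Install cert-manager CRDs, webhook, and controller into the cert-manager namespace
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml

# Verify that the cert-manager pods come up
kubectl get pods -n cert-manager
```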
## Installing KUDO operator into cluster

Once the prerequisites are installed, you need to initialize the KUDO operator. The following command will install KUDO v0.18.2.
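A minimal sketch, assuming the KUDO CLI is installed as a kubectl plugin (omitting `--version` installs the version matching your CLI):

```bash
# Initialize KUDO: installs the controller into the kudo-system namespace
kubectl kudo init --version 0.18.2
```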
Verify the pods in the `kudo-system` namespace:
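```bash
# The kudo-controller-manager pod should be in Running state
kubectl get pods -n kudo-system
```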
## Setting OpenEBS Storage Class as default

Change the default storage class from your current setting to OpenEBS LocalPV Device. In this tutorial, the default storage class is changed from `standard` to `openebs-device`.
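A sketch using `kubectl patch` (assumes the current default StorageClass is named `standard`):

```bash
# Remove the default-class annotation from the current default StorageClass
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

# Mark openebs-device as the default StorageClass
kubectl patch storageclass openebs-device \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```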
## Verify default Storage Class

List the storage classes and verify that `openebs-device` is set as the default.
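For example:

```bash
# openebs-device should be marked "(default)" in the output
kubectl get sc
```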
## Installing and Accessing Elasticsearch

Set the instance and namespace variables:
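A sketch of the variables and the KUDO install, assuming the KUDO Elastic operator; the instance and namespace names are illustrative:

```bash
# Names used throughout this guide (illustrative)
export instance_name=elastic
export namespace_name=default

# Deploy Elasticsearch through the KUDO Elastic operator
kubectl kudo install elastic \
  --instance $instance_name \
  --namespace $namespace_name
```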
## Verifying Elastic pods
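The pod counts should match the components listed above:

```bash
# Expect 1 coordinator, 3 master, and 2 data pods in Running state
kubectl get pods -n $namespace_name
```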
## Verifying Services
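A quick check of the services created for the instance:

```bash
# Includes the headless coordinator service (elastic-coordinator-hs) used later
# by Kibana and Fluentd
kubectl get svc -n $namespace_name
```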
## Verifying Elastic instance status
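The KUDO CLI can report the status of the instance's deploy plan:

```bash
# The "deploy" plan should report COMPLETE once the instance is healthy
kubectl kudo plan status --instance $instance_name --namespace $namespace_name
```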
## Accessing Elasticsearch

Enter one of the master pods using the `exec` command:
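(The pod name below assumes the instance is named `elastic`, giving pods such as `elastic-master-0`.)

```bash
# Open a shell inside the first Elasticsearch master pod
kubectl exec -it $instance_name-master-0 -n $namespace_name -- bash
```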
Run the command below inside the Elastic master pod to add sample data:
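A sample insert, modeled on the Elasticsearch quick-start examples (the index name and document are illustrative):

```bash
# Index a sample document into an index named "customer"
curl -X PUT "localhost:9200/customer/_doc/1?pretty" \
  -H 'Content-Type: application/json' \
  -d '{ "name": "John Doe" }'
```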
Elasticsearch acknowledges the write with a JSON response in which the `result` field is `created`, which means the data was added into Elasticsearch. You can use the following command to query for the inserted data:
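Continuing with the illustrative `customer` index:

```bash
# Fetch the document indexed above
curl -X GET "localhost:9200/customer/_doc/1?pretty"
```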
Now, let's get the details of the Elasticsearch cluster. The cluster information will show the Elasticsearch version, cluster name, and other details. If you get similar information, your Elasticsearch deployment is successful.
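Querying the root endpoint returns this information:

```bash
# The root endpoint reports cluster name, Elasticsearch version, and build details
curl -X GET "localhost:9200/?pretty"
```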
## Installing Kibana

First, add the Elastic Helm repository.
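For example:

```bash
# Add the Elastic Helm repository and refresh the local chart index
helm repo add elastic https://helm.elastic.co
helm repo update
```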
Install the Kibana deployment using the `helm` command. Ensure that you meet the required prerequisites for your Helm version. Fetch the Kibana `values.yaml`:
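A sketch using Helm 3 syntax (use `helm inspect values` on Helm 2):

```bash
# Write the chart's default values to a local file for editing
helm show values elastic/kibana > values.yaml
```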
Edit the following parameters:

- `elasticsearchHosts` as `"http://elastic-coordinator-hs:9200"` (the service name of the Elasticsearch coordinator).
- `service.type` as `"NodePort"`.
- `service.nodePort` as `"30295"` (since this port is already added in our network firewall rules).
- `imageTag` as `"7.10.1"`; it should be the same image tag as Elasticsearch. In our case, the Elasticsearch image tag is 7.10.1.
Now install Kibana using Helm:
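(A sketch; the release name `kibana` is illustrative.)

```bash
# Install Kibana with the edited values into the same namespace as Elasticsearch
helm install kibana elastic/kibana -f values.yaml -n $namespace_name
```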
Verifying Kibana Pods and Services:
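For example:

```bash
# The Kibana pod should be Running, with its service exposed on nodePort 30295
kubectl get pods,svc -n $namespace_name | grep kibana
```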
## Installing Fluentd-ES

Fetch the `values.yaml`:
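This guide assumes the `kiwigrid/fluentd-elasticsearch` chart; the chart and repository names are assumptions, so adjust them if you use a different Fluentd chart:

```bash
# Add the kiwigrid repository and save the chart's default values for editing
helm repo add kiwigrid https://kiwigrid.github.io
helm repo update
helm show values kiwigrid/fluentd-elasticsearch > values.yaml
```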
Replace the following section in the `values.yaml` file with the new content, so that Fluentd ships logs to the Elasticsearch coordinator service. The snippets below are a sketch of the relevant section; key names can differ across chart versions.
Old:
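```yaml
# Default Elasticsearch target in the chart's values.yaml (illustrative)
elasticsearch:
  hosts: ["elasticsearch-client:9200"]
```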
New:
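```yaml
# Point Fluentd at the headless coordinator service of the Elastic instance
elasticsearch:
  hosts: ["elastic-coordinator-hs:9200"]
```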
Install Fluentd-Elasticsearch DaemonSet using the new values:
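(The release name `fluentd-es` is illustrative.)

```bash
# Deploy the Fluentd-Elasticsearch DaemonSet with the edited values
helm install fluentd-es kiwigrid/fluentd-elasticsearch -f values.yaml -n $namespace_name
```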
Verify Fluentd Daemonset, Pods and Services:
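For example:

```bash
# Expect one Fluentd pod per schedulable node
kubectl get ds,pods -n $namespace_name | grep fluentd
```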
Getting logs from the indices:

- Go to the Kibana dashboard.
- Click on Management -> Stack Management, located at the bottom left.
- Click on Index Patterns listed under Kibana, and then click on `Create Index pattern`.
- Provide `logstash-*` inside the index pattern box and then select `Next step`.
- In the next step, inside the `Time Filter` field name, select the `@timestamp` field from the dropdown menu, and click `Create index pattern`.
- Now click on the `Discover` button listed on the top left of the side menu bar.
- There will be a dropdown menu where you can select the available indices. In this case, select `logstash-*` from the dropdown menu.
Now let's do some tests. If you want to get the logs of the NDM pods, type the following text inside the `Filters` field:

`kubernetes.labels.openebs_io/component-name.keyword : "ndm"`

Then choose the required date and time period, and click Apply. You will see the OpenEBS NDM pod logs listed on the page.
## See Also

- OpenEBS use cases
- Understanding NDM
- Local PV concepts
- Local PV User guide