The Percona XtraDB Cluster (PXC) is a fully open-source high-availability solution for MySQL. It integrates Percona Server for MySQL and Percona XtraBackup with the Galera library to enable synchronous multi-master replication. A cluster consists of nodes, where each node contains the same set of data synchronized across nodes. The recommended configuration is to have at least three nodes. Each node is a regular Percona Server for MySQL instance.
Percona XtraDB Cluster can be provisioned with OpenEBS volumes. Depending on the performance and high-availability requirements of Percona, you can select a storage engine and run Percona with one of the following deployment options:
- For optimal performance, deploy Percona PXC with OpenEBS Local PV.
- If you would like to use storage-layer capabilities such as high availability, snapshots, and incremental backup and restore, you can select OpenEBS cStor.
This tutorial provides detailed instructions to run a Percona XtraDB Cluster (PXC) with OpenEBS Local PV, perform some simple database operations to verify the successful deployment, and run a performance benchmark.
- Install OpenEBS
- Select OpenEBS storage engine
- Configure OpenEBS Local PV StorageClass
- Install the Percona XtraDB Cluster operator
- Update Storage and Monitoring section
- Install the Percona XtraDB Cluster
- Access Percona MySQL database
- Run performance benchmark
If OpenEBS is not installed in your K8s cluster, it can be installed from here. If OpenEBS is already installed, go to the next step.
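As a sketch, OpenEBS can typically be installed with a single manifest; the operator URL below is the community default and should be checked against the OpenEBS docs for the version you want:

```shell
# Install the OpenEBS operator (verify the manifest URL against the
# OpenEBS documentation for your target version)
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

# Wait for the OpenEBS control-plane pods to come up
kubectl get pods -n openebs
```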
A storage engine is the data plane component of the IO path of a Persistent Volume. In CAS architecture, users can choose different data planes for different application workloads based on a configuration policy. OpenEBS provides different types of storage engines. Choose the right engine that suits your type of application requirements and storage available on your Kubernetes nodes. More information can be read from here.
In this tutorial, OpenEBS Local PV device has been used as the storage engine for deploying Percona PXC. There are two ways to use OpenEBS Local PV:
- openebs-hostpath: This option creates Kubernetes Persistent Volumes that store data in an OS host path directory at /var/openebs/<percona-pv>/. Select this option if you don't have any additional block devices attached to your Kubernetes nodes. If you would like to customize the directory where data is saved, create a new OpenEBS Local PV storage class using these instructions.
- openebs-device: This option creates Kubernetes Local PVs using the block devices attached to the node. Select this option when you want to dedicate a complete block device on a node to a Percona node. You can customize which devices are discovered and managed by OpenEBS using the instructions here.
The StorageClass openebs-device has been chosen to deploy PXC in the Kubernetes cluster.
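Before proceeding, you can confirm that the openebs-device StorageClass exists and that OpenEBS has discovered block devices on the nodes (resource names below are the OpenEBS defaults):

```shell
# The StorageClass should exist before the cluster is created
kubectl get sc openebs-device

# Block devices discovered and managed by OpenEBS NDM
kubectl get blockdevices -n openebs
```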
Verify if the operator is running correctly
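Assuming the operator is deployed from Percona's percona-xtradb-cluster-operator repository (repository URL and pod label are per Percona's published deploy manifests; pin a release tag that matches your environment), the deployment and check might look like:

```shell
# Clone the operator repository and deploy the operator bundle
git clone https://github.com/percona/percona-xtradb-cluster-operator
cd percona-xtradb-cluster-operator
kubectl apply -f deploy/bundle.yaml

# The operator pod should reach the Running state
kubectl get pods -l name=percona-xtradb-cluster-operator
```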
In this document, we have made changes to the storage section for PXC and the monitoring (PMM) section.
Update Storage Class name and required storage parameters in deploy/cr.yaml. In this example, we have updated the following parameters:
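A sketch of the relevant volumeSpec portion of deploy/cr.yaml; the field layout follows the Percona operator's CR format, and the 100Gi value is illustrative:

```yaml
# deploy/cr.yaml (excerpt) - PXC storage settings
spec:
  pxc:
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: openebs-device
        resources:
          requests:
            storage: 100Gi
```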
Note: Ensure that each node has 100Gi of storage attached. Otherwise, set the storage capacity according to the capacity of the available disk.
Enable the monitoring service and set the PMM server user name. In this example, we have updated the following parameters:
The following is a sample snippet of the PMM spec of the Percona XtraDB CR, where we enabled the monitoring feature and updated the PMM server username.
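A sketch of that section of deploy/cr.yaml; the image tag and user name below are illustrative and should match your operator release and PMM server setup:

```yaml
# deploy/cr.yaml (excerpt) - PMM monitoring settings
pmm:
  enabled: true
  image: percona/percona-xtradb-cluster-operator:1.4.0-pmm  # tag varies by release
  serverHost: monitoring-service
  serverUser: pmm
```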
Enabling the monitoring service (PMM) for your PXC introduces a dependency: you must install the PMM server using the following command before installing PXC. We have used the Percona blog to enable the monitoring service.
Using helm, add the Percona chart repository and update the information for the available charts as follows:
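A sketch of the Helm steps; the chart and value names follow Percona's pmm-server chart at the time of writing and should be verified against the chart's README:

```shell
# Add the Percona chart repository and refresh the local chart index
helm repo add percona https://percona.github.io/percona-helm-charts/
helm repo update

# Install the PMM server with the credentials used in this tutorial
helm install monitoring percona/pmm-server \
  --set platform=kubernetes \
  --set credentials.username=pmm \
  --set credentials.password=test123
```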
Note: In this document, we have used “test123” as the PMM server credential password and the base64 encoded form of this password is “dGVzdDEyMw==”. This encoded value will be added in one of the secrets while installing the PXC cluster and also while running the performance benchmark task.
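The encoded form of the password can be reproduced with base64 (note the -n flag so the trailing newline is not encoded):

```shell
echo -n "test123" | base64
# dGVzdDEyMw==
```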
Now, verify that the PMM server pod is installed and running.
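Assuming the Helm release was named monitoring as above, a quick check might be:

```shell
# The PMM server pod should be in the Running state
kubectl get pods | grep monitoring
```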
In the previous section, we made the required changes to the CR YAML spec. Let's install the PXC cluster using the following command. Ensure your current directory is the cloned Percona directory.
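The install is a single apply of the custom resource edited above:

```shell
# Run from the root of the cloned operator repository
kubectl apply -f deploy/cr.yaml
```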
After applying the above command, you may see that the cluster1-pxc-0 pod fails to start successfully. This is due to the unavailability of the PMM server key in the secret. To resolve this, edit the corresponding secret and add the PMM server key.
Let's edit the secret internal-cluster1 using the following command and add the pmmserver value as per the credential password given during PMM server installation. In this example, we have added pmmserver: dGVzdDEyMw== to the secret.
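The edit itself is a single command (run it in the namespace where PXC was deployed):

```shell
# Open the secret in an editor and add the pmmserver key under "data"
kubectl edit secret internal-cluster1
```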
A sample spec of the modified secret content:
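A sketch of the edited secret, using the values that appear elsewhere in this tutorial; the operator manages several other keys, which are omitted here:

```yaml
# internal-cluster1 after the edit (other operator-managed keys omitted)
apiVersion: v1
kind: Secret
metadata:
  name: internal-cluster1
type: Opaque
data:
  root: WnV2cFNiRGU4UWhpWjNmd1Y=
  pmmserver: dGVzdDEyMw==
```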
Now, verify that all required components are installed and running successfully.
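The verification can use the standard listing commands:

```shell
# PXC pods, the operator, and the PMM server should all be Running
kubectl get pods

# Local PV claims should be Bound for each PXC node
kubectl get pvc
```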
Sample snippet of output:
Now, get the encoded value of the data item named root. It is given as "WnV2cFNiRGU4UWhpWjNmd1Y=". The decoded value can be found using the following method.
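Decoding is the inverse of the earlier encoding step:

```shell
echo "WnV2cFNiRGU4UWhpWjNmd1Y=" | base64 -d
# ZuvpSbDe8QhiZ3fwV
```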
Let’s run a Percona client to perform the database operations. You can run simple database operations in many ways. One method is by logging in to any of the Percona pods and running MySQL commands. In this example, we have created a Percona Client pod, and by using this pod, database operations are performed.
The following command will run a Percona client pod through which we can access the PXC cluster and perform database operations. Once you enter the Percona client shell, log in to the MySQL console by providing the user credentials. In this case, the username is "root" and the password should be the decoded value, which can be found above.
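A sketch of the client pod and login; the image tag is illustrative, and the host uses the PXC service name assumed throughout this tutorial:

```shell
# Start an interactive, disposable MySQL client pod
kubectl run -i --rm --tty percona-client --image=percona:8.0 \
  --restart=Never -- bash -il

# Inside the pod: connect using the PXC service name as the host and the
# decoded root password from the previous step
mysql -h cluster1-pxc -uroot -p
```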
Let’s create a SysBench pod to perform the performance benchmark of the PXC database.
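One way to get such a pod is shown below; the image name is an assumption, and any image bundling sysbench 1.0+ will do:

```shell
# Launch a throwaway pod with sysbench installed
kubectl run sysbench-client --image=severalnines/sysbench \
  --restart=Never --command -- sleep infinity

# Open a shell inside it to run the benchmark commands
kubectl exec -it sysbench-client -- /bin/bash
```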
The above command will create a temporary pod for SysBench. This pod will be used to run the benchmark commands. In this example, we are using the PXC service name as the mysql host in the following performance benchmark test command. The root password used in the following command can be obtained from the previous section.
Run the following tests from the SysBench pod.
Ensure the database used by the tests has already been created before running them. In this example, we created a database called "sbtest" in the previous section and used it in the performance benchmark tests. Please remember to use the corresponding MySQL password throughout the performance benchmark tests.
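The tests above might be sketched as follows, using sysbench's oltp_read_write workload; the host name, table counts, thread count, and duration are illustrative, and the password is the decoded root value from the previous section:

```shell
# Prepare test tables in the sbtest database (run from the SysBench pod)
sysbench oltp_read_write \
  --db-driver=mysql \
  --mysql-host=cluster1-pxc \
  --mysql-user=root \
  --mysql-password=ZuvpSbDe8QhiZ3fwV \
  --mysql-db=sbtest \
  --tables=10 --table-size=100000 \
  prepare

# Run the read/write benchmark for 60 seconds with 4 threads
sysbench oltp_read_write \
  --db-driver=mysql \
  --mysql-host=cluster1-pxc \
  --mysql-user=root \
  --mysql-password=ZuvpSbDe8QhiZ3fwV \
  --mysql-db=sbtest \
  --tables=10 --table-size=100000 \
  --threads=4 --time=60 \
  run
```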