How To Set Up MySQL Percona Database Servers on Kubernetes Infrastructure
Few newly designed systems have had such an impact on technology in such a short time frame as the now omnipresent Kubernetes, which emerged from Google’s laboratories a short seven years ago! This open-source container-orchestration system, today maintained by the Cloud Native Computing Foundation, was originally developed by Google’s engineers to help them orchestrate their containerized applications. As good old Wikipedia says: “It aims to provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. Unlike in the early days, when such sophisticated systems were reserved for global giants, today most companies, large or small, have started adopting containerization, some for its scalability advantages, some for better efficiency through continuous delivery/deployment, and they all need a system that can manage those containers.
In this tutorial, we will learn how to deploy another ubiquitous technology, MySQL (Percona) database servers, on Kubernetes infrastructure, using a provider that is very popular among industry experts for its reliability and affordability: DigitalOcean!
To wrap things up, we will also deploy a battle-proven HAProxy load balancer on our cluster, which will handle all traffic, perform health checks on the database servers and make sure they are all equally busy.
By combining these systems we will create a resilient, highly available MySQL service, which not only rids us of downtime but also lets us scale painlessly according to our needs.
If you do not have one already, go and grab DigitalOcean’s free trial account, with $100 of credit available for 60 days, which we will use for the purposes of this tutorial: https://try.digitalocean.com/freetrialoffer
Digital Ocean Setup
1. Once you have access to DigitalOcean, you need to install their doctl CLI tool, which will let us use their resources from the command line. From your home directory, download the doctl release archive from GitHub.
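The download command itself is not shown here; assuming the v1.54.0 release that the tarball name below refers to, it can be fetched from doctl's GitHub releases page like this:

```shell
# The version is an assumption based on the tarball name used in this
# tutorial; check https://github.com/digitalocean/doctl/releases for
# the latest release.
DOCTL_VERSION=1.54.0
URL="https://github.com/digitalocean/doctl/releases/download/v${DOCTL_VERSION}/doctl-${DOCTL_VERSION}-linux-amd64.tar.gz"
# Fetch the archive; `|| echo` keeps the shell happy if you are offline.
curl -fsSLO "$URL" || echo "download failed (are you online?)"
```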
Extract it, and then move it to your path with following commands:
tar xf doctl-1.54.0-linux-amd64.tar.gz
sudo mv doctl /usr/local/bin
2. Next, we will download, validate and, if everything checks out, install the kubectl binary, with the following commands:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(<kubectl.sha256)  kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
Now that we have the tools we need, create a .kube directory in your home directory if you don't already have one (this is where our cluster's configuration will be stored).
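For example, assuming a Bash shell:

```shell
# Create the kubeconfig directory doctl/kubectl will write to,
# if it does not exist yet (-p makes this a no-op when it does).
mkdir -p ~/.kube
```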
3. Create a DigitalOcean API key from their web interface (click on API in the bottom left corner of the screen, generate a new token and copy its value to the clipboard).
Open terminal and run:
cd ~/
doctl auth init
Paste your newly generated token and if all is well, you should be presented with:
milosh@box:~$ doctl auth init
Please authenticate doctl for use with your DigitalOcean account. You can generate a token in the control panel at https://cloud.digitalocean.com/account/api/tokens
Enter your access token:
Validating token... OK
Create Kubernetes Cluster
4. Now that doctl is authenticated, we can proceed with creating a new Kubernetes cluster. We can either do that via the web interface, or we can stay hard-core and use the CLI, where we simply run:
doctl kubernetes cluster create database-cluster --count=3 --size=s-4vcpu-8gb --region=ams3
This command will create a cluster consisting of three droplets (--count=3), each with 4 vCPUs and 8 GB of RAM (--size=s-4vcpu-8gb), in DigitalOcean's European Amsterdam 3 datacenter (--region=ams3).
To check all available sizes, run:
doctl kubernetes options sizes
Slug           Name
s-1vcpu-2gb    s-1vcpu-2gb
s-2vcpu-2gb    s-2vcpu-2gb
s-2vcpu-4gb    s-2vcpu-4gb
s-4vcpu-8gb    s-4vcpu-8gb
You can check which datacenters are available with:
doctl kubernetes options regions
Slug    Name
nyc1    New York 1
sgp1    Singapore 1
lon1    London 1
nyc3    New York 3
ams3    Amsterdam 3
fra1    Frankfurt 1
tor1    Toronto 1
sfo2    San Francisco 2
blr1    Bangalore 1
sfo3    San Francisco 3
If cluster creation was successful, you should see output similar to this:
milosh@box:~/$ doctl kubernetes cluster create database-cluster --count=3 --size=s-4vcpu-8gb --region=ams3
Notice: Cluster is provisioning, waiting for cluster to be running
Notice: Cluster created, fetching credentials
Notice: Adding cluster credentials to kubeconfig file found in "/home/milosh/.kube/config"
Notice: Setting current-context to do-ams3-database-cluster
ID                                     Name               Region    Version        Auto Upgrade    Status     Node Pools
fa6896f7-6098-4a98-b803-ae0848c746b6   database-cluster   ams3      1.19.3-do.3    false           running    database-cluster-default-pool
If we now run a simple kubectl get nodes command, we should be presented with:
milosh@box:~/.kube$ kubectl get nodes
NAME                                  STATUS   ROLES    AGE   VERSION
database-cluster-default-pool-3zadf   Ready    <none>   44m   v1.19.3
database-cluster-default-pool-3zadq   Ready    <none>   44m   v1.19.3
database-cluster-default-pool-3zady   Ready    <none>   44m   v1.19.3
Finally, we have a fully working Kubernetes cluster, where our database servers will be running.
Clone Percona on Kubernetes
5. It's time to clone the Percona Operator into our .kube directory.

cd ~/.kube
git clone https://github.com/histeriks/percona-xtradb-cluster-operator.git
You can now make minor changes to the ~/.kube/percona-xtradb-cluster-operator/deploy/cr.yaml file if you want. For example, if you do not want your database to be accessible from the Internet, comment out the LoadBalancer line under the haproxy section. This will prevent your Kubernetes cluster from giving your haproxy service an external IP address, keeping all traffic local and unreachable from the Internet. In that case you will have to run your application on the same cluster as well, so that it can communicate with the databases over the cluster's local network. Keep in mind that you cannot have more than one external IP address on the same Kubernetes cluster. You can tweak other parameters as well by editing this file; play around, see what works and what doesn't, and use Percona's documentation if you get stuck.
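The relevant part of deploy/cr.yaml might look roughly like this (field names follow Percona's sample config; the exact contents of the cloned repo's copy may differ slightly):

```yaml
haproxy:
  enabled: true
  size: 3
  # Comment out the next line to keep haproxy on a cluster-internal IP
  # instead of requesting an external load balancer from DigitalOcean:
  serviceType: LoadBalancer
```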
6. Back to the subject: after cloning the repo we will first have to edit the secrets.yaml file, also located in the deploy directory, and replace all the entries there with our own base64-encoded passwords.
You can use your terminal to encode them like this:
box:~ milosh$ echo -n 'your-root-password' | base64
Replace all entries in secrets.yaml (root, xtrabackup, monitor, clustercheck, proxyadmin, pmmserver and operator) with your own encoded passwords.
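As a sanity check, you can round-trip a value to make sure you pasted the right string (the password here is just a placeholder; note that base64 is an encoding, not encryption, so keep the file itself safe):

```shell
# Encode a placeholder password for secrets.yaml.
ENCODED=$(echo -n 'your-root-password' | base64)
echo "$ENCODED"

# Decode it again to verify it matches what you started with.
echo -n "$ENCODED" | base64 --decode
```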
7. Once that is done, we can start using the kubectl command to build our cluster setup. First, we will create the custom resource definitions by running:
kubectl apply -f deploy/crd.yaml
milosh@box:~/.kube/percona-xtradb-cluster-operator$ kubectl apply -f deploy/crd.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbbackups.pxc.percona.com created
8. Then create the new namespace with:
kubectl create namespace pxc
milosh@box:~/.kube/percona-xtradb-cluster-operator$ kubectl create namespace pxc
9. We then set the current context with:
kubectl config set-context $(kubectl config current-context) --namespace=pxc

milosh@box:~/.kube/percona-xtradb-cluster-operator$ kubectl config set-context $(kubectl config current-context) --namespace=pxc
Context "do-ams3-database-cluster" modified.
10. And set up role-based access control with:
kubectl apply -f deploy/rbac.yaml
milosh@box:~/.kube/percona-xtradb-cluster-operator$ kubectl apply -f deploy/rbac.yaml
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/percona-xtradb-cluster-operator created
serviceaccount/percona-xtradb-cluster-operator created
rolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator created
Start Percona Cluster
11. Start the Percona cluster operator with:
kubectl apply -f deploy/operator.yaml
milosh@box:~/.kube/percona-xtradb-cluster-operator$ kubectl apply -f deploy/operator.yaml
12. and then add the passwords (secrets) which we put into the secrets file a few steps above, with:
kubectl apply -f deploy/secrets.yaml
milosh@box:~/.kube/percona-xtradb-cluster-operator$ kubectl apply -f deploy/secrets.yaml
13. This will enable the operator to generate all the certificates needed for operation. Once that is done, start our new Percona cluster with the last command:
kubectl apply -f deploy/cr.yaml
milosh@box:~/.kube/percona-xtradb-cluster-operator$ kubectl apply -f deploy/cr.yaml
Give it a few minutes to wake up completely, and then you can check what's going on with:
kubectl get all
This will bring up a wealth of information, including the external IP address of the HAProxy load balancer service, which you can use to access your databases:
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/cluster1-haproxy-0                                 2/2     Running   0          19m
pod/cluster1-haproxy-1                                 2/2     Running   0          17m
pod/cluster1-haproxy-2                                 2/2     Running   0          16m
pod/cluster1-pxc-0                                     1/1     Running   0          19m
pod/cluster1-pxc-1                                     1/1     Running   0          17m
pod/cluster1-pxc-2                                     1/1     Running   0          15m
pod/percona-xtradb-cluster-operator-69f7f677cc-lhphs   1/1     Running   0          19m

NAME                                TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)              AGE
service/cluster1-haproxy            LoadBalancer   10.245.23.0     188.8.131.52   3306:32078/TCP       19m
service/cluster1-haproxy-replicas   ClusterIP      10.245.162.11   <none>         3306/TCP             19m
service/cluster1-pxc                ClusterIP      None            <none>         3306/TCP,33062/TCP   19m
service/cluster1-pxc-unready        ClusterIP      None            <none>         3306/TCP,33062/TCP   19m

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/percona-xtradb-cluster-operator   1/1     1            1           19m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/percona-xtradb-cluster-operator-69f7f677cc   1         1         1       19m

NAME                                READY   AGE
statefulset.apps/cluster1-haproxy   3/3     19m
statefulset.apps/cluster1-pxc       3/3     19m

NAME                               SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/daily-backup         0 0 * * *   False     0        <none>          19m
cronjob.batch/sat-night-backup     0 0 * * 6   False     0        <none>          19m
It might take a while until everything is up and running and the haproxy service gets its external IP address from DigitalOcean, so run the same command again every couple of minutes.
You can also use DigitalOcean's excellent web interface to monitor your Kubernetes cluster: just navigate to your cluster and click on the Kubernetes Dashboard button, and you will get detailed insight into your cluster's operation.
14. After your haproxy service gets its external IP address, try to access your databases with your mysql client:
mysql -u root -p -h EXTERNAL-IP-ADDRESS
Use the mysql root password you encoded in step 6, and you should be logged into the mysql console.
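Once connected, a quick sanity check is to ask Galera how many nodes it sees; on a healthy three-node PXC cluster, wsrep_cluster_size should report 3. (EXTERNAL-IP-ADDRESS is a placeholder for the IP from `kubectl get all`.)

```shell
# Galera status query; wsrep_cluster_size is the number of nodes
# currently joined to the cluster.
QUERY="SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
# Run it remotely; the guard keeps this harmless if mysql is absent.
command -v mysql >/dev/null && mysql -u root -p -h EXTERNAL-IP-ADDRESS -e "$QUERY" || true
```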
15. In this tutorial we used HAProxy as the load balancer. In TCP mode HAProxy acts as a layer 4 proxy: because the client communicates directly with the target, it has no means of interpreting the data flow. In other words, all it does is establish the connection between the client and the server.
You could have used the other option instead, ProxySQL, which is a layer 7 reverse proxy that speaks the native MySQL protocol and acts as the final destination the client talks to. It does provide some benefits over HAProxy, such as altering data on the fly while it’s in transit, read/write splitting, limiting the number of queries per user, connection multiplexing, query caching, a firewall and so on, but all this comes at a price: a small drop in performance compared to HAProxy. If you would like to try ProxySQL instead of HAProxy, just swap the enabled true and false values in the corresponding sections of the deploy/cr.yaml file. For more information on all this, please refer to Percona’s original documentation, which you can find on their website: https://www.percona.com/software/documentation
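The swap boils down to flipping two flags in deploy/cr.yaml (section layout taken from Percona's sample config; your copy may differ slightly):

```yaml
haproxy:
  enabled: false   # was true
# ...
proxysql:
  enabled: true    # was false
```

Re-apply the file with kubectl apply -f deploy/cr.yaml afterwards for the change to take effect.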
That's it for now; in the sequel you will also learn how to configure automatic backups of your whole cluster to an AWS storage bucket. The Percona XtraDB Cluster Operator comes with everything you need for running both automatic and manual backups, via executable scripts located inside the deploy/backups directory.
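As a teaser, an on-demand backup is just another custom resource (its CRD was created back in step 7). A minimal sketch, with field names taken from Percona's backup documentation and a placeholder storage name that must match a storage defined under backup.storages in cr.yaml, might look like:

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup1
spec:
  pxcCluster: cluster1      # cluster name from cr.yaml
  storageName: s3-us-west   # placeholder; must match backup.storages in cr.yaml
```

Applying it with kubectl apply -f kicks off a manual backup, but more on that next time.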