ProxySQL Behavior in the Percona Kubernetes Operator for Percona XtraDB Cluster
The Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) comes with ProxySQL as part of the deal. And to be honest, ProxySQL itself behaves pretty much the same as in a regular, non-Kubernetes deployment. So why bother writing a blog post about it? Because what happens around ProxySQL in the context of the operator is actually interesting.
ProxySQL is deployed in its own Pod (which can be scaled just like the PXC Pods). Each ProxySQL Pod has its own ProxySQL container and a sidecar container. If you are curious, you can find out which node holds the Pod by running:
kubectl describe pod cluster1-proxysql-0 | grep Node:
Node: ip-192-168-37-111.ec2.internal/192.168.37.111
Log into that node and ask for the running containers. You will see something like this:
[root@ip-192-168-37-111 ~]# docker ps | grep -i proxysql
d63c55d063c5   percona/percona-xtradb-cluster-operator   "/entrypoint.sh /usr…"   2 hours ago   Up 2 hours   k8s_proxysql-monit_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
d75002a3847e   percona/percona-xtradb-cluster-operator   "/entrypoint.sh /usr…"   2 hours ago   Up 2 hours   k8s_pxc-monit_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
e34d551594a8   percona/percona-xtradb-cluster-operator   "/entrypoint.sh /usr…"   2 hours ago   Up 2 hours   k8s_proxysql_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
Now, what’s the purpose of the sidecar container in this case? To find out if new PXC nodes (pods) have been added or, on the contrary, if PXC pods have been removed (due to a scale-down), and to configure ProxySQL accordingly.
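If you don’t want to log into the worker node, Kubernetes can give you roughly the same picture. Here is a minimal sketch (assuming the pxc namespace and the container names pxc-monit/proxysql-monit seen in the docker ps output above) that lists the containers of the ProxySQL Pod and follows the logs of the monitoring sidecar:

# List the containers running inside the ProxySQL Pod
kubectl get pod cluster1-proxysql-0 -n pxc -o jsonpath='{.spec.containers[*].name}'

# Follow the sidecar that watches the PXC Pods (container name taken from the docker ps output)
kubectl logs cluster1-proxysql-0 -n pxc -c pxc-monit --follow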
Adding and Removing PXC Nodes (Pods)
Let’s see it in action. A regular PXC Kubernetes deployment with 3 PXC pods, like this:
kubectl get pod
NAME                  READY   STATUS    RESTARTS   AGE
cluster1-proxysql-0   3/3     Running   0          106m
cluster1-proxysql-1   3/3     Running   0          106m
cluster1-proxysql-2   3/3     Running   0          106m
cluster1-pxc-0        1/1     Running   0          131m
cluster1-pxc-1        1/1     Running   0          128m
cluster1-pxc-2        1/1     Running   0          129m
Will have the mysql_servers information as follows:
mysql> select hostgroup_id,hostname,status, weight from runtime_mysql_servers;
+--------------+---------------------------------------------------+--------+--------+
| hostgroup_id | hostname                                          | status | weight |
+--------------+---------------------------------------------------+--------+--------+
| 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
+--------------+---------------------------------------------------+--------+--------+
5 rows in set (0.00 sec)
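In case you want to run these queries yourself, they go against the ProxySQL admin interface on port 6032. A quick, non-interactive way to do that from outside the Pod, reusing the proxyadmin user and the placeholder password shown later in this post, is something like:

kubectl exec cluster1-proxysql-0 -n pxc -- \
  mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password \
  -e "select hostgroup_id,hostname,status,weight from runtime_mysql_servers;"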
What do we have?
- 3 PXC pods
- 3 ProxySQL Pods
- The 3 PXC pods (or nodes) registered inside ProxySQL
- And several host groups.
What are those host groups?
mysql> select * from runtime_mysql_galera_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 11
backup_writer_hostgroup: 12
       reader_hostgroup: 10
      offline_hostgroup: 13
                 active: 1
            max_writers: 1
  writer_is_also_reader: 2
max_transactions_behind: 100
                comment: NULL
1 row in set (0.01 sec)
ProxySQL is using the native Galera support and has defined a writer hostgroup, a backup writer hostgroup, and a reader hostgroup. Looking back at the server configuration, we have 1 writer, 2 readers, and those same 2 readers are also backup writers.
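The operator configures all of this for you, so there is nothing to do on an operator-managed cluster, but for context, here is a sketch of how such a Galera hostgroup definition would be created on a plain ProxySQL 2.x instance, using the same values as the runtime_mysql_galera_hostgroups row above:

# Sketch only: the operator already maintains this table for you
kubectl exec cluster1-proxysql-0 -n pxc -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password -e "
  INSERT INTO mysql_galera_hostgroups
    (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup,
     active, max_writers, writer_is_also_reader, max_transactions_behind)
  VALUES (11, 12, 10, 13, 1, 1, 2, 100);
  LOAD MYSQL SERVERS TO RUNTIME;
  SAVE MYSQL SERVERS TO DISK;"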
And what are the query rules?
mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
+---------+--------------+---------------------+--------+-----------------------+
| rule_id | username     | match_digest        | active | destination_hostgroup |
+---------+--------------+---------------------+--------+-----------------------+
| 1       | clustercheck | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 2       | clustercheck | ^SELECT             | 1      | 10                    |
| 3       | monitor      | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 4       | monitor      | ^SELECT             | 1      | 10                    |
| 5       | root         | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 6       | root         | ^SELECT             | 1      | 10                    |
| 7       | xtrabackup   | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 8       | xtrabackup   | ^SELECT             | 1      | 10                    |
+---------+--------------+---------------------+--------+-----------------------+
8 rows in set (0.00 sec)
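In short, SELECT ... FOR UPDATE goes to the writer hostgroup (11), other SELECTs go to the reader hostgroup (10), and anything that matches no rule falls back to the user's default hostgroup. If you want to verify where queries actually landed, one option (a sketch, assuming some traffic has already gone through ProxySQL) is the per-hostgroup digest statistics on the admin interface:

kubectl exec cluster1-proxysql-0 -n pxc -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password -e "
  SELECT hostgroup, username, digest_text, count_star
  FROM stats_mysql_query_digest
  ORDER BY count_star DESC
  LIMIT 10;"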
Now, let’s scale up the deployment and add 2 more PXC pods:
kubectl patch pxc cluster1 --type='json' -p='[{"op": "replace", "path": "/spec/pxc/size", "value": 5 }]'
And let’s check the Pods:
kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
cluster1-proxysql-0   3/3     Running   0          124m
cluster1-proxysql-1   3/3     Running   0          124m
cluster1-proxysql-2   3/3     Running   0          124m
cluster1-pxc-0        1/1     Running   0          149m
cluster1-pxc-1        1/1     Running   0          146m
cluster1-pxc-2        1/1     Running   0          147m
cluster1-pxc-3        1/1     Running   0          2m53s
cluster1-pxc-4        1/1     Running   0          2m10s
And now the transition inside ProxySQL:
mysql> select hostgroup_id,hostname,status, weight from runtime_mysql_servers;
+--------------+---------------------------------------------------+--------+--------+
| hostgroup_id | hostname                                          | status | weight |
+--------------+---------------------------------------------------+--------+--------+
| 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
+--------------+---------------------------------------------------+--------+--------+
5 rows in set (0.00 sec)

mysql> select hostgroup_id,hostname,status, weight from runtime_mysql_servers;
+--------------+---------------------------------------------------+--------------+--------+
| hostgroup_id | hostname                                          | status       | weight |
+--------------+---------------------------------------------------+--------------+--------+
| 11           | cluster1-pxc-3.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
| 13           | cluster1-pxc-4.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
| 10           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
| 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
| 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
| 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
| 12           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
| 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
| 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | OFFLINE_HARD | 1000   |
+--------------+---------------------------------------------------+--------------+--------+
9 rows in set (0.00 sec)

mysql> select hostgroup_id,hostname,status, weight from runtime_mysql_servers;
+--------------+---------------------------------------------------+--------+--------+
| hostgroup_id | hostname                                          | status | weight |
+--------------+---------------------------------------------------+--------+--------+
| 11           | cluster1-pxc-4.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-3.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-3.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
+--------------+---------------------------------------------------+--------+--------+
9 rows in set (0.00 sec)
What happened? The new PXC nodes were added to ProxySQL and are ready to handle traffic. We can also see that the previous writer, cluster1-pxc-2, is now assigned to the reader hostgroup, and the new writer is cluster1-pxc-4.
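By the way, those snapshots were taken by simply re-running the same query while the new Pods were joining. If you want to watch the transition live, a small loop like this (same placeholder credentials as before) does the trick:

# Refresh the runtime_mysql_servers view every 2 seconds while the cluster is scaling
watch -n 2 "kubectl exec cluster1-proxysql-0 -n pxc -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password -e 'select hostgroup_id,hostname,status,weight from runtime_mysql_servers;'"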
And what about the query rules?
mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
+---------+--------------+---------------------+--------+-----------------------+
| rule_id | username     | match_digest        | active | destination_hostgroup |
+---------+--------------+---------------------+--------+-----------------------+
| 1       | clustercheck | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 2       | clustercheck | ^SELECT             | 1      | 10                    |
| 3       | monitor      | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 4       | monitor      | ^SELECT             | 1      | 10                    |
| 5       | root         | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 6       | root         | ^SELECT             | 1      | 10                    |
| 7       | xtrabackup   | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 8       | xtrabackup   | ^SELECT             | 1      | 10                    |
+---------+--------------+---------------------+--------+-----------------------+
8 rows in set (0.00 sec)
Same as before. Query rules are not modified when adding/removing PXC pods.
But what happens when rules are modified?
Adding and Removing ProxySQL Query Rules
In our Operator, we use ProxySQL Native Clustering. What does that mean? It means that if the user makes a configuration change (via the admin port) on one instance, it is automatically distributed to all members. For example, adding a rule:
We use Pod 0 for adding the rule:
kubectl exec -it cluster1-proxysql-0 -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password
Defaulting container name to proxysql.
Use 'kubectl describe pod/cluster1-proxysql-0 -n pxc' to see all of the containers in this pod.
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2270
Server version: 8.0.18 (ProxySQL Admin Module)

Copyright (c) 2009-2020 Percona LLC and/or its affiliates
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
Empty set (0.00 sec)

mysql> INSERT INTO mysql_query_rules (active, match_digest, destination_hostgroup, cache_ttl,username) VALUES (1, "^SELECT.*table-dani.*", 10, NULL, "root");
Query OK, 1 row affected (0.00 sec)

mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
+---------+----------+-----------------------+--------+-----------------------+
| rule_id | username | match_digest          | active | destination_hostgroup |
+---------+----------+-----------------------+--------+-----------------------+
| 13      | root     | ^SELECT.*table-dani.* | 1      | 10                    |
+---------+----------+-----------------------+--------+-----------------------+
1 row in set (0.00 sec)
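One side note (not something the session above does): the rule was only loaded to runtime. If you also want it to survive a ProxySQL restart, you would persist it from the same admin session:

mysql> SAVE MYSQL QUERY RULES TO DISK;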
And the rule is immediately replicated to the other Pods; for example, Pod 1:
kubectl exec -it cluster1-proxysql-1 -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password
Defaulting container name to proxysql.
Use 'kubectl describe pod/cluster1-proxysql-1 -n pxc' to see all of the containers in this pod.
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1977
Server version: 8.0.18 (ProxySQL Admin Module)

Copyright (c) 2009-2020 Percona LLC and/or its affiliates
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
+---------+----------+-----------------------+--------+-----------------------+
| rule_id | username | match_digest          | active | destination_hostgroup |
+---------+----------+-----------------------+--------+-----------------------+
| 13      | root     | ^SELECT.*table-dani.* | 1      | 10                    |
+---------+----------+-----------------------+--------+-----------------------+
1 row in set (0.00 sec)
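If you want to double-check that the ProxySQL cluster members agree on the configuration, one way (again a sketch, with the same placeholder credentials) is to look at the cluster member list and the per-module checksums that ProxySQL clustering exchanges:

kubectl exec cluster1-proxysql-0 -n pxc -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password -e "
  SELECT * FROM proxysql_servers;
  SELECT hostname, name, version, epoch, checksum
  FROM stats_proxysql_servers_checksums
  WHERE name = 'mysql_query_rules';"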
In Conclusion:
- The Percona Kubernetes Operator for Percona XtraDB Cluster is aware of the changes in the pods
- Modifications are automatically made in order to dispatch traffic to the new pods
- Query rules are not modified
- ProxySQL Cluster is enabled in order to keep all the ProxySQL Pods in sync
by Daniel Guzmán Burgos via Percona Database Performance Blog