Testing Percona XtraDB Cluster 8.0 with DBdeployer

In light of the DBdeployer anniversary and the newly released Percona XtraDB Cluster 8.0, I’m happy to announce that Giuseppe Maxia has recently updated DBdeployer to support Percona XtraDB Cluster (PXC) 8.0.
My colleague Ramesh Sivaraman and I at Percona have successfully tested and deployed Percona XtraDB Cluster (PXC) 8.0.18 using DBdeployer.
What is DBdeployer and How Do You Install it?
It’s an open source tool for deploying and testing multiple MySQL topologies on a single host, using a different port for each instance. It allows DBAs and developers to quickly launch and test specific functionality, syntax, and features, and to spin up MySQL instances with minimal resources.
Here’s a quick rundown of DBdeployer:
DBdeployer use case with PXC-8.0
OS: Ubuntu Xenial
$ wget https://github.com/datacharmer/dbdeployer/releases/download/v1.49.0/dbdeployer-1.49.0.linux.tar.gz
$ tar -xzf dbdeployer-1.49.0.linux.tar.gz
Note: Check your system and make sure that the packages Percona XtraDB Cluster 8.0 depends on are installed.
$ sudo apt-get install -y socat libdbd-mysql-perl libaio1 libc6 libcurl3 libev4 libgcc1 libgcrypt20 libgpg-error0 libssl-dev libstdc++6 zlib1g libatomic1
$ mkdir opt; mkdir opt/mysql
Please select the Linux - Generic package if you are downloading PXC from the web:
$ wget https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-8.0.18-9.3/binary/tarball/Percona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.xenial.tar.gz
$ sudo mv dbdeployer-1.49.0.linux /usr/local/bin/dbdeployer
$ PATH=$PATH:/usr/local/bin/
$ dbdeployer unpack --prefix=pxc Percona-XtraDB-Cluster_8.0.18-9.3_Linux.x86_64.xenial.tar.gz
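At this point, a quick sanity check can confirm the setup. The snippet below is an optional sketch: it is guarded so it runs safely even on a machine without DBdeployer, and on a prepared host it prints the DBdeployer version and the unpacked server binaries under $HOME/opt/mysql:

```shell
# Optional sanity check: only query dbdeployer if it is actually installed.
if command -v dbdeployer >/dev/null 2>&1; then
  dbdeployer --version
  # Lists the server tarballs unpacked under $HOME/opt/mysql (e.g. pxc8.0.18):
  dbdeployer versions
fi
```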
Configure a Single Node Percona XtraDB Cluster
The following command will help you to set up a single node cluster.
$ dbdeployer deploy single pxc8.0.18
Database installed in $HOME/sandboxes/msb_pxc8_0_18
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started
$ $HOME/sandboxes/msb_pxc8_0_18/test_sb
ok - version '8.0.18-9'
[..]
# Tests :    11
# PASS  :    11
# fail  :     0
$
Before moving on to a multi-node cluster, remove the single-node one.
$ dbdeployer delete msb_pxc8_0_18
List of deployed sandboxes:
/home/dba/sandboxes/msb_pxc8_0_18
Running /home/dba/sandboxes/msb_pxc8_0_18/stop
stop /home/dba/sandboxes/msb_pxc8_0_18
Running rm -rf /home/dba/sandboxes/msb_pxc8_0_18
Directory /home/dba/sandboxes/msb_pxc8_0_18 deleted
$
Configure a 3-Node Percona XtraDB Cluster
$ dbdeployer deploy replication --topology=pxc pxc8.0.18
Installing and starting node 1
... sandbox server started
Installing and starting node 2
.................................... sandbox server started
Installing and starting node 3
.................................... sandbox server started
Replication directory installed in $HOME/sandboxes/pxc_msb_pxc8_0_18
run 'dbdeployer usage multiple' for basic instructions'
$
$ dbdeployer sandboxes --header
name                 type                     version     port
-------------------- ------------------------ ----------- -------------------------------------------------------------------------------------------
pxc_msb_pxc8_0_18  : Percona-Xtradb-Cluster   pxc8.0.18   [27819 27945 27946 27957 37819 27820 27947 27948 27958 37820 27821 27949 27950 27959 37821 ]
$
Testing and Verification
There are several preset scripts under this deployment. We can start by testing the replication status of the cluster.
$ $HOME/sandboxes/pxc_msb_pxc8_0_18/test_replication
# master 1
# master 2
# master 3
# slave 1
ok - '3' == '3' - Slaves received tables from all masters
# slave 2
ok - '3' == '3' - Slaves received tables from all masters
# slave 3
ok - '3' == '3' - Slaves received tables from all masters
# pass: 3
# fail: 0
The $HOME/sandboxes/pxc_msb_pxc8_0_18/n[1-3] scripts let you log in to the respective PXC node.
$ $HOME/sandboxes/pxc_msb_pxc8_0_18/n1
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 26
Server version: 8.0.18-9 Percona XtraDB Cluster binary (GPL) 8.0.18, Revision a34c3d3, WSREP version 26.4.3

Copyright (c) 2009-2019 Percona LLC and/or its affiliates
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

node1 [localhost:27819] {msandbox} ((none)) > create database mytest;
Query OK, 1 row affected (0.79 sec)

node1 [localhost:27819] {msandbox} ((none)) >
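As a hypothetical follow-up check (assuming the sandbox above is deployed), the database created on node 1 should be visible on the other nodes almost immediately; since the n1/n2/n3 scripts pass extra arguments through to the mysql client, a one-off query works from the shell:

```shell
# Verify that the 'mytest' database created on node1 replicated to node2.
# Guarded so it does nothing on a machine without the sandbox.
SB=$HOME/sandboxes/pxc_msb_pxc8_0_18
if [ -x "$SB/n2" ]; then
  "$SB/n2" -e "SHOW DATABASES LIKE 'mytest'"
fi
```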
You can also check the node status using the following command:
$ $HOME/sandboxes/pxc_msb_pxc8_0_18/check_nodes
# Node 1
+---------------------------+-------------------------------------------------+
| Variable_name             | Value                                           |
+---------------------------+-------------------------------------------------+
| wsrep_local_state_uuid    | 9caf5103-733d-11ea-affb-be0211c887a9            |
| wsrep_received            | 10                                              |
| wsrep_local_commits       | 0                                               |
| wsrep_local_state         | 4                                               |
| wsrep_local_state_comment | Synced                                          |
| wsrep_incoming_addresses  | 127.0.0.1:27819,127.0.0.1:27820,127.0.0.1:27821 |
| wsrep_gcomm_uuid          | 9ca961cb-733d-11ea-94fe-ab19e16c0ae6            |
| wsrep_cluster_size        | 3                                               |
| wsrep_cluster_status      | Primary                                         |
| wsrep_connected           | ON                                              |
| wsrep_ready               | ON                                              |
+---------------------------+-------------------------------------------------+
# Node 2
+---------------------------+-------------------------------------------------+
| Variable_name             | Value                                           |
+---------------------------+-------------------------------------------------+
| wsrep_local_state_uuid    | 9caf5103-733d-11ea-affb-be0211c887a9            |
| wsrep_received            | 6                                               |
| wsrep_local_commits       | 0                                               |
| wsrep_local_state         | 4                                               |
| wsrep_local_state_comment | Synced                                          |
| wsrep_incoming_addresses  | 127.0.0.1:27819,127.0.0.1:27820,127.0.0.1:27821 |
| wsrep_gcomm_uuid          | bf9a2416-733d-11ea-86b9-d3b30f695a2c            |
| wsrep_cluster_size        | 3                                               |
| wsrep_cluster_status      | Primary                                         |
| wsrep_connected           | ON                                              |
| wsrep_ready               | ON                                              |
+---------------------------+-------------------------------------------------+
# Node 3
+---------------------------+-------------------------------------------------+
| Variable_name             | Value                                           |
+---------------------------+-------------------------------------------------+
| wsrep_local_state_uuid    | 9caf5103-733d-11ea-affb-be0211c887a9            |
| wsrep_received            | 5                                               |
| wsrep_local_commits       | 0                                               |
| wsrep_local_state         | 4                                               |
| wsrep_local_state_comment | Synced                                          |
| wsrep_incoming_addresses  | 127.0.0.1:27819,127.0.0.1:27820,127.0.0.1:27821 |
| wsrep_gcomm_uuid          | e8c9e2ea-733d-11ea-a6a7-971528ab9616            |
| wsrep_cluster_size        | 3                                               |
| wsrep_cluster_status      | Primary                                         |
| wsrep_connected           | ON                                              |
| wsrep_ready               | ON                                              |
+---------------------------+-------------------------------------------------+
$
Sample functionality test script:
$ $HOME/sandboxes/pxc_msb_pxc8_0_18/test_sb_all
# executing 'test_sb' on /home/dba/sandboxes/pxc_msb_pxc8_0_18
executing "test_sb" on node 1
ok - version '8.0.18-9'
ok - port detected: 27819
ok - version is 8.0.18-9 as expected
ok - port is 27819 as expected
ok - query was successful for user msandbox_ro: 'select 1'
ok - query was successful for user msandbox_rw: 'select 1'
ok - query was successful for user msandbox_ro: 'use mysql; select count(*) from information_schema.tables where table_schema=schema()'
ok - query was successful for user msandbox_rw: 'use mysql; select count(*) from information_schema.tables where table_schema=schema()'
# Tests :     8
# PASS  :     8
# fail  :     0
executing "test_sb" on node 2
ok - version '8.0.18-9'
ok - port detected: 27820
ok - version is 8.0.18-9 as expected
ok - port is 27820 as expected
ok - query was successful for user msandbox_ro: 'select 1'
ok - query was successful for user msandbox_rw: 'select 1'
ok - query was successful for user msandbox_ro: 'use mysql; select count(*) from information_schema.tables where table_schema=schema()'
ok - query was successful for user msandbox_rw: 'use mysql; select count(*) from information_schema.tables where table_schema=schema()'
# Tests :     8
# PASS  :     8
# fail  :     0
executing "test_sb" on node 3
ok - version '8.0.18-9'
ok - port detected: 27821
ok - version is 8.0.18-9 as expected
ok - port is 27821 as expected
ok - query was successful for user msandbox_ro: 'select 1'
ok - query was successful for user msandbox_rw: 'select 1'
ok - query was successful for user msandbox_ro: 'use mysql; select count(*) from information_schema.tables where table_schema=schema()'
ok - query was successful for user msandbox_rw: 'use mysql; select count(*) from information_schema.tables where table_schema=schema()'
# Tests :     8
# PASS  :     8
# fail  :     0
To restart the cluster nodes, use the following script:
$ $HOME/sandboxes/pxc_msb_pxc8_0_18/restart_all
# executing 'stop' on /home/dba/sandboxes/pxc_msb_pxc8_0_18
executing 'stop' on node3
stop /home/dba/sandboxes/pxc_msb_pxc8_0_18/node3
executing 'stop' on node2
stop /home/dba/sandboxes/pxc_msb_pxc8_0_18/node2
executing 'stop' on node1
stop /home/dba/sandboxes/pxc_msb_pxc8_0_18/node1
# executing 'start' on /home/dba/sandboxes/pxc_msb_pxc8_0_18
executing "start" on node 1
.. sandbox server started
executing "start" on node 2
..... sandbox server started
executing "start" on node 3
...... sandbox server started
How to Use DBdeployer to Automate Tests
Each node also has a script called metadata. Run it with the help argument to see the available requests.
$ $HOME/sandboxes/pxc_msb_pxc8_0_18/node1/metadata help
Syntax: /home/dba/sandboxes/pxc_msb_pxc8_0_18/node1/metadata request
Available requests:
version
major
minor
rev
short (= major.minor)
sversion (=sortable version string)
host
basedir
cbasedir (Client Basedir)
datadir
port
xport (MySQLX port)
aport (Admin port)
socket
serverid (server id)
pid (Process ID)
pidfile (PID file)
flavor
sbhome (SANDBOX_HOME)
sbbin (SANDBOX_BINARY)
sbtype (Sandbox Type)

$ $HOME/sandboxes/pxc_msb_pxc8_0_18/node1/metadata port
27819
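Because metadata answers one request per invocation, it is easy to script. Here is a small sketch (assuming the sandbox above is deployed; on other machines it prints nothing) that collects the client port of every node:

```shell
# Print the client port of each node via its metadata script.
# Skips silently if the sandbox is not deployed on this machine.
SB=$HOME/sandboxes/pxc_msb_pxc8_0_18
for n in 1 2 3; do
  if [ -x "$SB/node$n/metadata" ]; then
    echo "node$n port: $("$SB/node$n/metadata" port)"
  fi
done
```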
We also have a JSON file for each node, which makes it easy to integrate with third-party software.
$ less $HOME/sandboxes/pxc_msb_pxc8_0_18/node1/sbdescription.json
{
  "basedir": "/home/dba/opt/mysql/pxc8.0.18",
  "client_basedir": "/home/dba/opt/mysql/pxc8.0.18",
  "type": "pxc-node",
  "version": "pxc8.0.18",
  "flavor": "pxc",
  "host": "127.0.0.1",
  "port": [
    27819,
    27945,
    27946,
    27957,
    37819
  ],
  "nodes": 0,
  "node_num": 1,
  "dbdeployer-version": "1.47.0",
  "timestamp": "Wed Apr 1 13:06:22 UTC 2020",
  "command-line": "dbdeployer deploy replication --concurrent --topology=pxc pxc8.0.18"
}
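For scripting, sbdescription.json can be parsed with any JSON tool (jq, python3, etc.). A minimal self-contained sketch: the file contents are inlined here for illustration (taken from the output above); against a real sandbox you would point the path at $HOME/sandboxes/pxc_msb_pxc8_0_18/node1/sbdescription.json instead.

```shell
# Write a sample sbdescription.json (abridged from the listing above),
# then extract the flavor and the main client port with python3.
cat > /tmp/sbdescription.json <<'EOF'
{
  "type": "pxc-node",
  "flavor": "pxc",
  "host": "127.0.0.1",
  "port": [27819, 27945, 27946, 27957, 37819]
}
EOF
python3 -c '
import json
with open("/tmp/sbdescription.json") as f:
    sb = json.load(f)
# The first entry of "port" is the regular client port.
print(sb["flavor"], sb["port"][0])
'
# -> pxc 27819
```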
You may use a global script to execute SQL against all nodes. This is particularly useful for comparing variables across nodes.
$HOME/sandboxes/pxc_msb_pxc8_0_18$ ./use_all "show global variables like 'wsrep_sst_receive_address'"
# server: 1
Variable_name	Value
wsrep_sst_receive_address	127.0.0.1:27957
# server: 2
Variable_name	Value
wsrep_sst_receive_address	127.0.0.1:27958
# server: 3
Variable_name	Value
wsrep_sst_receive_address	127.0.0.1:27959
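The same pattern works for any wsrep status check. Another sketch (assuming the sandbox above is deployed; it does nothing otherwise): every node should report wsrep_cluster_size = 3 while the cluster is healthy.

```shell
# Compare cluster size as seen by each node via the use_all script.
# Guarded so it runs safely on machines without the sandbox.
SB=$HOME/sandboxes/pxc_msb_pxc8_0_18
if [ -x "$SB/use_all" ]; then
  "$SB/use_all" "show global status like 'wsrep_cluster_size'"
fi
```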
Conclusion
As we can see, DBdeployer is a very handy tool for verifying behavioral differences between versions as well as changes to default settings. You can create single, multiple, and replication sandbox deployments from different tarballs, run automation using the existing scripts, or combine DBdeployer with other tools like ProxySQL, Sysbench, and Orchestrator to mimic your production use cases.
In conclusion, this tool has already shown us one simple difference between the 5.7 and 8.0 PXC default setups (there are several changes in defaults): the initial deployment of 5.7 took about 40 seconds, versus about 80 seconds for 8.0, because the initial State Snapshot Transfer (SST) uses ‘rsync’ as the default method in 5.7 versus ‘xtrabackup’ in 8.0. You can find the results here.
by Alkin Tezuysal via Percona Database Performance Blog