Creating an External Replica of AWS Aurora MySQL with Mydumper

Oftentimes, we need to replicate between Amazon Aurora and an external MySQL server. The idea is to start by taking a point-in-time copy of the dataset. Next, we can configure MySQL replication to roll it forward and keep the data up-to-date.

This process is documented by Amazon; however, it relies on the mysqldump method to create the initial copy of the data. If the dataset is in the high GB/TB range, this single-threaded method can take a very long time. There are also ways to improve the import phase (which can easily take 2x the time of the export).

Let’s explore some tricks to significantly improve the speed of this process.

Preparation Steps

The first step is to enable binary logs in Aurora. Go to the Cluster-level parameter group and make sure binlog_format is set to ROW. There is no log_bin option in Aurora (in case you are wondering); simply setting binlog_format is enough. The change requires a restart of the writer instance, so it unfortunately means a few minutes of downtime.
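
If you prefer the CLI, the same change can be made with the AWS CLI; the parameter group name below is hypothetical and should match the group attached to your cluster:

# Set binlog_format=ROW in the cluster-level parameter group
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name aurora-custom-cluster-pg \
  --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=pending-reboot"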

We can check if a server is generating binary logs as follows:

mysql> SHOW MASTER LOGS;

+----------------------------+-----------+
| Log_name                   | File_size |
+----------------------------+-----------+
| mysql-bin-changelog.034148 | 134219307 |
| mysql-bin-changelog.034149 | 134218251 |
...

Otherwise, you will get an error:

ERROR 1381 (HY000): You are not using binary logging

We also need to ensure a proper binary log retention period. For example, if we expect the initial data export/import to take one day, we can set the retention period to something like three days to be on the safe side. This will help ensure we can roll forward the restored data.

mysql> call mysql.rds_set_configuration('binlog retention hours', 72);
Query OK, 0 rows affected (0.27 sec)

mysql> CALL mysql.rds_show_configuration;
+------------------------+-------+------------------------------------------------------------------------------------------------------+
| name                   | value | description                                                                                          |
+------------------------+-------+------------------------------------------------------------------------------------------------------+
| binlog retention hours | 72    | binlog retention hours specifies the duration in hours before binary logs are automatically deleted. |
+------------------------+-------+------------------------------------------------------------------------------------------------------+
1 row in set (0.25 sec)

The next step is creating a temporary cluster to take the export from. We need to do this for a couple of reasons: first, to avoid overloading the actual production cluster with our export process; second, because mydumper relies on FLUSH TABLES WITH READ LOCK to get a consistent backup, which is not possible in Aurora (due to the lack of the SUPER privilege).

Go to the RDS console and restore a snapshot that was created AFTER the date/time when you enabled the binary logs. The restored cluster should also have binlog_format set, so select the correct Cluster parameter group.
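
The same restore can be scripted with the AWS CLI if you prefer; the identifiers, engine, and instance class below are hypothetical and should be adjusted to your environment:

# Restore the cluster from a snapshot taken after binary logging was enabled
# (engine is aurora-mysql for 5.7-compatible clusters, aurora for 5.6)
aws rds restore-db-cluster-from-snapshot \
  --db-cluster-identifier percona-tmp \
  --snapshot-identifier my-aurora-snapshot \
  --engine aurora-mysql \
  --db-cluster-parameter-group-name aurora-custom-cluster-pg

# Add a writer instance to the restored cluster
aws rds create-db-instance \
  --db-instance-identifier percona-tmp-1 \
  --db-cluster-identifier percona-tmp \
  --db-instance-class db.r5.large \
  --engine aurora-mysql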

Next, capture the binary log position for replication by inspecting the Recent events section in the console. After highlighting your new temporary writer instance, you should see something like this:

Binlog position from crash recovery is mysql-bin-changelog.034259 32068147
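
If console access is inconvenient, the same event can also be pulled with the AWS CLI (the instance identifier is hypothetical):

# Look for the "Binlog position from crash recovery" message in the output
aws rds describe-events \
  --source-type db-instance \
  --source-identifier percona-tmp-1 \
  --duration 1440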

So now we have the information to prepare the CHANGE MASTER command to use at the end of the process.

Exporting the Data

To get the data out of the temporary instance, follow these steps:

  1. Back up the schema
  2. Save the user privileges
  3. Back up the data

This gives us added flexibility; we can do some schema changes, add indexes, or extract only a subset of the data.

Let’s create a configuration file with the login details, for example:

tee /backup/aurora.cnf <<EOF
[client]
user=percona
password=percona
host=percona-tmp.cgutr97lnli6.us-west-1.rds.amazonaws.com
EOF

For the schema backup, use mydumper to do a no-rows export:

mydumper --no-data \
--triggers \
--routines \
--events \
-v 3 \
--no-locks \
--outputdir /backup/schema \
--logfile /backup/mydumper.log \
--regex '^(?!(mysql|test|performance_schema|information_schema|sys))' \
--defaults-file /backup/aurora.cnf

To get the user privileges, I normally like to use pt-show-grants. Aurora, however, hides the password hashes when you run the SHOW GRANTS statement, so pt-show-grants will print incomplete statements, e.g.:

mysql> SHOW GRANTS FOR 'user'@'%';
+---------------------------------------------------------+
| Grants for user@%                                       |
+---------------------------------------------------------+
| GRANT USAGE ON *.* TO 'user'@'%' IDENTIFIED BY PASSWORD |
| GRANT SELECT ON `db`.* TO 'user'@'%'                    |
+---------------------------------------------------------+

We can still gather the hashes and replace them manually in the pt-show-grants output if there is a small-ish number of users.

pt-show-grants --user=percona -ppercona -hpercona-tmp.cgutr97lnli6.us-west-1.rds.amazonaws.com  > grants.sql
mysql> select user, password from mysql.user;
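
If there are too many users to patch by hand, one option is to generate the password statements directly from the hashes. This is a minimal sketch, assuming mysql_native_password accounts and a MySQL 5.7 target (where ALTER USER ... AS accepts a hash); run it against the temporary Aurora instance and apply the output after loading grants.sql:

SELECT CONCAT('ALTER USER ''', user, '''@''', host,
              ''' IDENTIFIED WITH mysql_native_password AS ''', password, ''';') AS stmt
FROM mysql.user
WHERE user NOT IN ('rdsadmin'); -- skip RDS-internal accounts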

Finally, run mydumper to export the data:

mydumper -t 8 \
--compress \
--triggers \
--routines \
--events \
--rows=10000000 \
-v 3 \
--long-query-guard 999999 \
--no-locks \
--outputdir /backup/export \
--logfile /backup/mydumper.log \
--regex '^(?!(mysql|test|performance_schema|information_schema|sys))' \
-O skip.txt \
--defaults-file /backup/aurora.cnf

The number of threads should match the number of CPUs of the instance running mydumper. In the skip.txt file, you can include any tables that you don’t want to copy. The --rows argument gives you the ability to split tables into chunks of X number of rows. Each chunk can run in parallel, so it is a huge speed boost for big tables.
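
The skip.txt file simply lists one database.table entry per line, for example (table names are hypothetical):

mydb.audit_log_archive
mydb.sessions_tmp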

Importing the Data

We need to stand up a MySQL instance to do the data import. In order to speed up the process as much as possible, I suggest doing a number of optimizations to my.cnf as follows:

[mysqld]
pid-file=/var/run/mysqld/mysqld.pid
log-error=/var/log/mysqld.log
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log_slave_updates
innodb_buffer_pool_size=16G
binlog_format=ROW
innodb_log_file_size=1G
innodb_flush_method=O_DIRECT
innodb_flush_log_at_trx_commit=0
server-id=1000
log-bin=/log/mysql-bin
sync_binlog=0
master_info_repository=TABLE
relay_log_info_repository=TABLE
query_cache_type=0
query_cache_size=0
innodb_flush_neighbors=0
innodb_io_capacity_max=10000
innodb_stats_on_metadata=off
max_allowed_packet=1G
net_read_timeout=60
performance_schema=off
innodb_adaptive_hash_index=off
expire_logs_days=3
sql_mode=NO_ENGINE_SUBSTITUTION
innodb_doublewrite=off

Note that myloader is smart enough to turn off the binary log for its importer threads (binary logging of the restore is disabled by default).

After the import is complete, it is important to revert these settings to “safer” values: innodb_doublewrite, innodb_flush_log_at_trx_commit, sync_binlog, and also enable performance_schema again.
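
A minimal sketch of the revert, assuming the root/percona credentials used for the import: the dynamic variables can be flipped online, while innodb_doublewrite and performance_schema have to be changed in my.cnf and require a restart:

# Dynamic variables: safe to change at runtime
mysql -uroot -ppercona -e "SET GLOBAL innodb_flush_log_at_trx_commit=1; SET GLOBAL sync_binlog=1;"

# Remove innodb_doublewrite=off and performance_schema=off from my.cnf,
# then restart MySQL (service name may differ)
systemctl restart mysqld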

The next step is to create an empty schema by running myloader:

myloader \
-d /backup/schema \
-v 3 \
-h localhost \
-u root \
-p percona

At this point, we can easily introduce modifications like adding indexes, since the tables are empty. We can also restore the users at this time:

(echo "SET SQL_LOG_BIN=0;" ; cat grants.sql ) | mysql -uroot -ppercona -f

Now we are ready to restore the actual data using myloader. It is recommended to run this inside a screen session:

myloader -t 4 \
-d /backup/export \
-q 100 \
-v 3 \
-h localhost \
-u root \
-p percona

The rule of thumb here is to use half the number of vCPU threads. I also normally like to reduce myloader’s default transaction size (1,000 queries per transaction, set with -q) to avoid long transactions, but your mileage may vary.

After the import process is done, we can leverage faster methods (like snapshots or Percona XtraBackup) to seed any remaining external replicas.
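
For instance, once the new replica is consistent, a physical copy taken with Percona XtraBackup can seed additional replicas much faster than another logical load. A minimal sketch, with a hypothetical target directory:

# Take a physical backup of the newly built replica
xtrabackup --backup --user=root --password=percona --target-dir=/backup/xtra

# Prepare it so the files can be copied into the next replica's datadir
xtrabackup --prepare --target-dir=/backup/xtra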

Setting Up Replication

The final step is setting up replication from the actual production cluster (not the temporary one!) to your external instance.

It is a good idea to create a dedicated user for this process in the source instance, as follows:

CREATE USER 'repl'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

Now we can start replication, using the binary log coordinates that we captured before:

CHANGE MASTER TO MASTER_HOST='aurora-cluster-gh5s6lnli6.us-west-1.rds.amazonaws.com', MASTER_USER='repl', MASTER_PASSWORD='percona', MASTER_LOG_FILE='mysql-bin-changelog.034259', MASTER_LOG_POS=32068147;
START SLAVE;
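
We can then confirm the replica is healthy by checking that Slave_IO_Running and Slave_SQL_Running are both Yes, and watching Seconds_Behind_Master decrease as it catches up:

mysql> SHOW SLAVE STATUS\G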

Final Words

Unfortunately, there is no quick and easy method to get a large dataset out of an Aurora cluster. We have seen how mydumper and myloader can save a lot of time when creating external replicas, by introducing parallel operations. We also reviewed some good practices and configuration tricks for speeding up the data loading phase as much as possible.



by Ivan Groenewold via Percona Database Performance Blog
