Best Practices for MySQL Backups

Here in Percona’s Training department, we like to think that we instruct our learners on the best practices for all things MySQL-related. In addition to performance tuning, query optimization, and replication configurations, another important topic is backups. Let’s dive in and discuss some of the basics and best practices around backups for MySQL.

Logical MySQL Backups

In MySQL, it is possible to take backups in two different forms: logical and physical. The first form, logical, is the most universal. Essentially, you create all of the necessary INSERT statements to repopulate your table data. The two most popular tools in this area are mysqldump and mydumper.

mysqldump

This tool has been around since the beginning and supports a slew of different options, too numerous to discuss in this post.

Here is a simple example of taking a logical backup of the ‘members’ database while simultaneously compressing the result:

mysqldump --single-transaction members | gzip - >members.sql.gz

If you want to take a complete backup of all tables in all databases, you can do this:

mysqldump --single-transaction --all-databases | gzip - >full_backup.sql.gz

Note the use of the --single-transaction flag. This flag “creates a consistent snapshot by dumping all tables in a single transaction.” If you do not use this flag, then a complete backup could contain inconsistencies between related tables.
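
By default, mysqldump includes triggers but not stored routines or events. If you rely on those, and you also want the binary log position recorded for point-in-time recovery, a fuller invocation might look like the following sketch (on MySQL 8.0.26 and later, the last flag is spelled --source-data):

# Also dump stored routines and events, and embed the binlog
# position as a comment in the dump (--master-data=2)
mysqldump --single-transaction --routines --events --master-data=2 --all-databases | gzip - >full_backup.sql.gz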

Inconsistencies

One of the biggest downsides to mysqldump is its lack of parallelism. The tool starts at the first database, alphabetically, then dumps each table within, one at a time. Consider a transaction that inserts data into tables ‘alpha’ and ‘zeta’. Now the backup begins. By the time the backup reaches table ‘delta’, another transaction has deleted from ‘zeta’. Your backup now has an inconsistency between ‘alpha’ and ‘zeta’.

This is why the --single-transaction flag is important.
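
Under the hood, --single-transaction is roughly equivalent to the client issuing the following before any table is read (a sketch of the mechanism, not the tool’s exact statements):

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
-- every subsequent SELECT in the dump now sees the same point-in-time view

Note that this only protects transactional engines such as InnoDB; MyISAM tables are still dumped without isolation.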

mydumper

The mydumper tool is an open source, third-party tool, originally written by Domas Mituzas and now maintained by Max Bube. This tool functions similarly to mysqldump but offers many improvements, such as parallel backups, consistent reads, and built-in compression. Another benefit of mydumper is that each individual table gets dumped to a separate file. This makes restoring a single table vastly easier than with mysqldump (which dumps everything into one file).

mydumper --compress

The above command will connect to your local MySQL server and begin a consistent dump of all tables in all databases. As noted above, each table is dumped to a separate file inside the dump directory, which is named with the pattern ‘export-YYYYMMDD-HHMMSS’. Each dump file is individually compressed with gzip.
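
If you want more control, the thread count and output location can be set explicitly. A sketch with illustrative values (the destination directory is just an example):

# 4 dump threads, gzip-compressed files, explicit destination
mydumper --threads 4 --compress --outputdir /var/backup/mydumper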

A companion tool, myloader, is also included. This tool allows for parallel restores of tables, something you cannot do when restoring a mysqldump file.
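
Because each table lives in its own file, restoring a single table can be as simple as feeding that one file back to the server. A minimal sketch, assuming a ‘users’ table in the ‘members’ database and a compressed dump (tables split with --rows will have chunk numbers in their filenames):

# Load the schema file (members.users-schema.sql.gz) first if the table does not exist
zcat /var/backup/mydumper/members.users.sql.gz | mysql members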

Physical MySQL Backups

A physical backup copies the table data files from one location to another, ideally in an online, consistent manner. Percona XtraBackup fits this description. There is also MySQL Enterprise Backup, a commercial product from Oracle, as well as device snapshotting. It should be noted that Percona XtraBackup is free and open source, and does everything the enterprise option can do, and more.

Percona XtraBackup

xtrabackup --backup --parallel 4 --compress --target-dir /var/backup/

The above command will connect to your MySQL server and execute a parallel, compressed (using qpress) backup, storing the resulting data files into /var/backup/. Looking in this directory, you’ll find all of the familiar files, like ibdata1, mysql.ibd, and all the other .ibd files representing your data, each carrying a .qp extension until decompressed.
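
One caveat: because the data files were compressed with qpress, they must be decompressed before the backup can be prepared. XtraBackup handles this itself, provided the qpress binary is installed:

# Decompress the .qp files in place (requires qpress in $PATH)
xtrabackup --decompress --target-dir=/var/backup/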

You can also “stream” your physical backup to a single archive:

xtrabackup --backup --parallel 4 --stream=xbstream > mybackup.xbstream
  OR
xtrabackup --backup --stream=tar > mybackup.tar

(Note: ‘tar’ archives do not support parallelism)
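
To restore from an xbstream archive, first unpack it with the companion xbstream utility, then prepare and copy back as shown later in this post:

# Extract the archive into an empty directory
xbstream -x -C /var/restore/ < mybackup.xbstream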

Also, thanks to Percona’s xbcloud tool, you can stream directly to any S3 compatible bucket:

xtrabackup --backup --stream=xbstream | xbcloud put --storage=s3 <S3 parameters> mybackup_s3.blob
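
The same tool can pull the backup back down later, streaming it straight into xbstream for extraction (reusing the placeholder S3 parameters from above):

xbcloud get --storage=s3 <S3 parameters> mybackup_s3.blob | xbstream -x -C /var/restore/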

Volume Snapshots

In some cases, the amount of data to back up is too vast to copy out as a physical backup (let alone a logical dump). Consider a MySQL server with a dataset greater than 1TB. Even with fast disks, a physical backup may take many hours.

In these situations, it is much faster to perform a physical backup using snapshot capabilities of the underlying filesystem or disk device.

LVM and ZFS both include native snapshotting. Before you take the snapshot, be sure that MySQL has halted write operations and flushed most in-memory information to disk. Here is an example of such a process using ZFS:

mysql -e 'FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS; \! zfs snapshot -r mysqldata/mysql@my_first_snapshot'

The above example will lock all tables in MySQL against writes from other transactions, print out the current binary log position information, then escape to a shell that executes the ZFS snapshot command. Once the snapshot command returns, the mysql client exits, unlocking all the tables.
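
The same pattern works with LVM. A minimal sketch, assuming the datadir lives on a logical volume named ‘mysql’ in volume group ‘vg0’ (names and sizes are illustrative):

# Reserve 10G of copy-on-write space for the snapshot’s lifetime
mysql -e 'FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS; \! lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql'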

Most cloud providers also support native snapshot capabilities for their block devices (e.g., EBS volumes on EC2, Persistent Disks on GCP). The process would be similar to the ZFS example above.

Verifying MySQL Backups

So you’ve got a great backup procedure, following all the best practices. How do you know the backup succeeded? Did you look at the file size? Did you only check that a file was created? Maybe you only looked at the exit code of the tool you used?

Shlomi Noach told me at a previous Percona Live conference, “you have not taken a backup until you have verified that backup.” Great advice. Put another way, every backup is a Schrödinger’s backup: until you verify it, you cannot know whether it works.

The best practice here is simply to restore a MySQL server from your backup, however you created it. The machine handling this restore does not need to be as powerful as the source; a simple VM can manage this task, and the process can easily be automated.

You can restore a mysqldump using the mysql client itself:

zcat full_backup.sql.gz | mysql

Using mydumper/myloader:

myloader --directory dump_dir --overwrite-tables --verbose=3

Percona XtraBackup:

# Prepare the backup
xtrabackup --prepare --parallel 4 --use-memory 4G --target-dir /var/backup

# Copy the backup to the original datadir (i.e., /var/lib/mysql), assuming the backup was taken on the same host
xtrabackup --copy-back --target-dir /var/backup

# Fix file permissions if necessary
chown -R mysql:mysql /var/lib/mysql

# Start MySQL
systemctl start mysql

Yes, Percona XtraBackup requires more steps, but physical backups will always be the fastest way to back up and the fastest way to restore.
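
Beyond simply starting the restored server, a quick sanity check is to compare table checksums between the source and the restored copy. A minimal sketch, assuming a ‘members.users’ table and hostnames ‘source’ and ‘restore’ (all illustrative; checksums are only comparable between servers running the same version and row format):

# Run against both servers and compare the output
mysql -h source -e 'CHECKSUM TABLE members.users'
mysql -h restore -e 'CHECKSUM TABLE members.users'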

Summary

There are many different ways to take MySQL backups. Hopefully, this post has given you some insight into a few best practices when choosing your backup methodology.

A few more things to consider:

  • Encrypted backups are supported by most of the tools above. Be sure to back up a copy of your encryption key!
  • Point-in-time recovery (PITR) is not covered in this post. However, to achieve PITR, you also need to back up your binary logs. rsync can assist with that, or the mysqlbinlog tool itself can act as a live binlog backup tool (see the sketch after this list).
  • Make sure to back up a copy of your my.cnf as well.
  • This post did not cover incremental backups, which could be an entire post by themselves. This is something Percona XtraBackup supports natively.
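
As referenced in the PITR bullet above, mysqlbinlog can connect to a running server and continuously stream its binary logs to local files. A minimal sketch (host, user, and starting binlog name are illustrative):

# Continuously copy binlogs, in their original binary format,
# starting from mysql-bin.000001
mysqlbinlog --read-from-remote-server --host=source.example.com --user=backup --raw --stop-never mysql-bin.000001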

If you’re interested in learning more about each of these tools and performing some hands-on learning, be sure to get in touch with our training department. Our instructor-led classes cover all the basic and advanced topics of MySQL administration, from all things replication to backups, recovery, and more.


by Matthew Boehm via Percona Database Performance Blog
