Tuesday, April 26, 2011

Using Mysqldump Without Downtime

Learning MySQL
Mysqldump is a great utility for creating easy-to-use, easy-to-restore MySQL database backups, but it can cause downtime if certain precautions are not taken.  If you manage MySQL across multiple Linux flavors, differing default configurations can cause mysqldump to behave in unpredictable ways.  Let's look at a couple of common problem areas related to mysqldump.

Using the Right Storage Engine
If you're not certain which MySQL storage engine to use, consider InnoDB (instead of MyISAM) if you are not already using it.  Using mysqldump to back up a large MyISAM table can effectively lock the entire table until the backup is complete.  The long-running read issued by mysqldump does not by itself block other reads of the same table, but because MyISAM uses table-level locking, any write issued during the dump must wait for that read to finish, and every read issued after the pending write is queued behind it, so everything blocks.  Switching to InnoDB avoids this type of problem because it uses row-level locking.
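
If you want to check which engines your tables are using and convert a MyISAM table to InnoDB, commands along these lines should do it (the user, password, database, and table names are just placeholders for your own):

# list each table's storage engine for a given database
mysql -u myuser -pmypass -e "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'db_name';"

# convert a single MyISAM table to InnoDB (this rebuilds the table, so run it during a quiet period)
mysql -u myuser -pmypass db_name -e "ALTER TABLE my_table ENGINE=InnoDB;"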

Using --single-transaction
On most of the MySQL installations that I've managed, simply using InnoDB allows backups to be created without tables locking and queries/updates blocking.  However, a recent Ubuntu installation continued to block our Rails application from using the database while mysqldump was running, and it was necessary to add the "--single-transaction" option, which has mysqldump read the data inside a single consistent-snapshot transaction rather than locking the tables, like so:

mysqldump -u myuser -pmypass --single-transaction db_name > output.sql
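
As a rough sketch (not the exact command from the setup above, and with placeholder credentials and database name), --single-transaction combines cleanly with other standard mysqldump options, and restoring is just a matter of feeding the file back to the mysql client:

# --quick streams rows instead of buffering whole tables in memory;
# --routines also includes stored procedures and functions
mysqldump -u myuser -pmypass --single-transaction --quick --routines db_name > output.sql

# restore the dump into an existing database
mysql -u myuser -pmypass db_name < output.sql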

If you have any tips to share, I welcome them in the comments.