How to Set Up Automated MariaDB Backups on Ubuntu

By Technoroots Team · Published Jan 23, 2026

You've hardened MariaDB, optimized your queries, and your database is running smoothly. Then disaster strikes: a software bug corrupts data, ransomware encrypts your files, or a human error drops the wrong table. Without backups, you're done. Your data is gone. Your business stops.

This guide walks you through setting up automated, encrypted, offsite backups for MariaDB on Ubuntu. You'll learn three backup strategies—mysqldump for simplicity, Percona XtraBackup for speed, and binary logs for granular recovery—and how to orchestrate them with cron jobs, test them regularly, and store them safely. By the end, you'll have a backup system that runs without manual intervention and that you can actually recover from when disaster strikes.

Whether you're backing up a single database or a multi-terabyte production system, these techniques work. And unlike elaborate replication or high-availability setups, automated backups are straightforward, predictable, and battle-tested by thousands of teams every day.

Table of Contents

  1. Why Automated Backups Matter
  2. Backup Strategies at a Glance
  3. Pre-Backup Setup: Infrastructure
  4. Method 1: mysqldump - Full Logical Backups
  5. Method 2: Percona XtraBackup - Fast Physical Backups
  6. Method 3: Binary Logs - Point-in-Time Recovery
  7. Encryption & Compression
  8. Backup Verification & Testing
  9. Monitoring & Alerting
  10. Backup Performance & Impact
  11. Common Backup Mistakes
  12. Recovery Examples
  13. Next Steps

Why Automated Backups Matter

Backups are your insurance policy. Without them, you have zero recovery options. With them, you can recover from almost anything: hardware failure, data corruption, ransomware, human error, compromised accounts, or application bugs.

The key is automation. Manual backups fail because humans forget, scripts break silently, and storage fills up. Automated backups run on a schedule, send you alerts if they fail, and keep multiple versions for redundancy. A backup that's automated, tested, and offsite is actually useful. Everything else is just hope.

Backup Strategies at a Glance

Before diving into implementation, understand the three main approaches:

Strategy            | Speed                 | Storage             | Recovery Time         | Complexity
mysqldump           | Slow (for large DBs)  | High (text format)  | Minutes               | Low
Percona XtraBackup  | Fast (physical copy)  | Medium (compressed) | Seconds               | Medium
Binary Logs         | Fast (append-only)    | Low (incremental)   | Seconds (to a point)  | High

mysqldump is ideal for databases under 10GB and teams that prioritize simplicity. It's text-based, so you can inspect backups, and it works on any MariaDB installation.

Percona XtraBackup is the workhorse for large databases and production systems. It copies data files directly (no locks), supports incremental backups, and compresses efficiently.

Binary Logs capture every change in real time. Combined with a full backup, they let you replay transactions up to the second before disaster struck. Essential for production, but requires infrastructure to manage.

Most production setups use all three: daily full backups with mysqldump or XtraBackup, hourly or continuous binary logs, and weekly offsite copies. Let's build that.
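As a preview, the combined schedule in root's crontab ends up looking roughly like this; the three scripts referenced here are built step by step in the sections below:

# Daily logical dump at 2 AM
0 2 * * * /usr/local/bin/backup-mariadb-mysqldump.sh
# Hourly binary log backup
0 * * * * /usr/local/bin/backup-mariadb-binlogs.sh
# Weekly physical backup, Sunday 3 AM
0 3 * * 0 /usr/local/bin/backup-mariadb-xtrabackup.sh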

Pre-Backup Setup: Infrastructure

Before automating backups, set up the foundation.

Create a dedicated backup user:

CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'B@ckup!2024';
GRANT SELECT, LOCK TABLES, SHOW VIEW, RELOAD, REPLICATION CLIENT ON *.* TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;

The RELOAD privilege allows flushing logs (needed for binary log backups). REPLICATION CLIENT lets the backup tool check replication status.
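To sanity-check the account before relying on it, list its grants; the output should show exactly the privileges granted above and nothing more:

mariadb -u root -p -e "SHOW GRANTS FOR 'backup_user'@'localhost';"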

Create a backup directory:

sudo mkdir -p /var/backups/mariadb
sudo chown root:root /var/backups/mariadb
sudo chmod 700 /var/backups/mariadb

Restrict permissions so only root, which runs the backup scripts from its crontab, can access this directory: it will contain credentials and sensitive data.

Create a credentials file:

sudo nano /var/backups/mariadb/.my.cnf

Add:

[client]
user=backup_user
password=B@ckup!2024
host=localhost

[mysqldump]
user=backup_user
password=B@ckup!2024
host=localhost

[xtrabackup]
user=backup_user
password=B@ckup!2024
host=localhost

Set strict permissions:

sudo chmod 600 /var/backups/mariadb/.my.cnf

This file lets backup tools connect without storing credentials in scripts.
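Before wiring the file into scripts, a quick smoke test confirms it authenticates; this dumps only the schema of the built-in mysql database and should run without prompting for a password:

sudo mariadb-dump --defaults-file=/var/backups/mariadb/.my.cnf --no-data mysql | head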

Method 1: mysqldump - Full Logical Backups

mysqldump exports the entire database as SQL statements—slow for large databases, but human-readable and portable.

Create the backup script:

sudo nano /usr/local/bin/backup-mariadb-mysqldump.sh

Add:

#!/bin/bash

# Configuration
BACKUP_DIR="/var/backups/mariadb"
BACKUP_USER="backup_user"
BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/mysqldump_${BACKUP_DATE}.sql"
BACKUP_LOG="${BACKUP_DIR}/backup.log"

# Retention (keep 7 daily backups)
RETENTION_DAYS=7

# Start logging
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup started" >> $BACKUP_LOG

# Run mysqldump
mariadb-dump \
  --defaults-file=/var/backups/mariadb/.my.cnf \
  --all-databases \
  --single-transaction \
  --quick \
  --lock-tables=false \
  --routines \
  --triggers \
  --events > $BACKUP_FILE 2>> $BACKUP_LOG

if [ $? -eq 0 ]; then
  # Compress the backup
  gzip $BACKUP_FILE
  BACKUP_FILE="${BACKUP_FILE}.gz"
  
  # Log size and checksum
  SIZE=$(du -h $BACKUP_FILE | cut -f1)
  CHECKSUM=$(sha256sum $BACKUP_FILE | cut -d' ' -f1)
  
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup completed successfully" >> $BACKUP_LOG
  echo "File: $BACKUP_FILE | Size: $SIZE | SHA256: $CHECKSUM" >> $BACKUP_LOG
  
else
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup FAILED" >> $BACKUP_LOG
  exit 1
fi

# Clean up old backups
find $BACKUP_DIR -name "mysqldump_*.sql.gz" -mtime +$RETENTION_DAYS -delete

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Cleanup completed" >> $BACKUP_LOG

Make it executable:

sudo chmod +x /usr/local/bin/backup-mariadb-mysqldump.sh

Test the script:

sudo /usr/local/bin/backup-mariadb-mysqldump.sh

Check the backup was created:

ls -lh /var/backups/mariadb/mysqldump_*.sql.gz

Schedule it with cron:

Edit the root crontab:

sudo crontab -e

Add (backup daily at 2 AM):

0 2 * * * /usr/local/bin/backup-mariadb-mysqldump.sh

To run the dump more than once per day, add extra entries (each run still produces a full dump):

0 2 * * * /usr/local/bin/backup-mariadb-mysqldump.sh   # 2 AM daily
0 14 * * * /usr/local/bin/backup-mariadb-mysqldump.sh  # 2 PM daily
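After the first scheduled run, confirm that cron actually fired and that the script logged a success line. On Ubuntu releases that still write /var/log/syslog, a quick check looks like this (on journald-only systems, use journalctl -u cron instead):

grep CRON /var/log/syslog | grep backup-mariadb-mysqldump
tail -n 5 /var/backups/mariadb/backup.log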

Method 2: Percona XtraBackup - Fast Physical Backups

Percona XtraBackup copies data files directly, making it much faster for large databases. It supports incremental backups (only changed data).

Install Percona XtraBackup:

sudo apt-get install percona-xtrabackup-24 -y

(The percona-xtrabackup packages come from the Percona APT repository, so add that repository first. Also note compatibility: XtraBackup 2.4 supports only older MariaDB releases, and XtraBackup 8.0 targets MySQL 8.0. For MariaDB 10.3 and later, use the bundled Mariabackup tool instead (sudo apt-get install mariadb-backup); it accepts the same --backup, --prepare, and --target-dir options, so the script below works with the mariabackup/mariadb-backup binary substituted for xtrabackup.)
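Whichever tool you end up with, confirm the binary is available and note its version before scripting against it:

xtrabackup --version
# or, if you installed MariaDB's own tool:
mariadb-backup --version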

Create the backup script:

sudo nano /usr/local/bin/backup-mariadb-xtrabackup.sh

Add:

#!/bin/bash

# Configuration
BACKUP_DIR="/var/backups/mariadb"
BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
FULL_BACKUP_DIR="${BACKUP_DIR}/xtrabackup_full_${BACKUP_DATE}"
BACKUP_LOG="${BACKUP_DIR}/xtrabackup.log"
RETENTION_DAYS=7

# Start logging
echo "[$(date '+%Y-%m-%d %H:%M:%S')] XtraBackup started" >> $BACKUP_LOG

# Create full backup
xtrabackup \
  --defaults-file=/var/backups/mariadb/.my.cnf \
  --backup \
  --target-dir=$FULL_BACKUP_DIR \
  >> $BACKUP_LOG 2>&1

if [ $? -eq 0 ]; then
  # Prepare the backup for recovery
  xtrabackup --prepare --target-dir=$FULL_BACKUP_DIR \
    >> $BACKUP_LOG 2>&1
  
  # Compress the entire backup directory
  BACKUP_TAR="${BACKUP_DIR}/xtrabackup_${BACKUP_DATE}.tar.gz"
  tar -czf $BACKUP_TAR -C $(dirname $FULL_BACKUP_DIR) $(basename $FULL_BACKUP_DIR)
  
  # Clean up uncompressed directory
  rm -rf $FULL_BACKUP_DIR
  
  # Log size and checksum
  SIZE=$(du -h $BACKUP_TAR | cut -f1)
  CHECKSUM=$(sha256sum $BACKUP_TAR | cut -d' ' -f1)
  
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] XtraBackup completed successfully" >> $BACKUP_LOG
  echo "File: $BACKUP_TAR | Size: $SIZE | SHA256: $CHECKSUM" >> $BACKUP_LOG
  
else
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] XtraBackup FAILED" >> $BACKUP_LOG
  exit 1
fi

# Clean up old backups
find $BACKUP_DIR -name "xtrabackup_*.tar.gz" -mtime +$RETENTION_DAYS -delete

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Cleanup completed" >> $BACKUP_LOG

Make it executable:

sudo chmod +x /usr/local/bin/backup-mariadb-xtrabackup.sh

Test the script:

sudo /usr/local/bin/backup-mariadb-xtrabackup.sh

Schedule it with cron (weekly full backup):

sudo crontab -e

Add:

0 3 * * 0 /usr/local/bin/backup-mariadb-xtrabackup.sh

This runs every Sunday at 3 AM, creating a full backup you can recover from quickly.
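XtraBackup (and Mariabackup) also supports incremental backups that copy only the pages changed since a previous backup, which is how you get daily physical backups without daily full copies. A minimal sketch, assuming a full backup already exists in /var/backups/mariadb/xtrabackup_full_base (an illustrative directory name, not one created by the script above):

# Take an incremental backup based on the existing full backup
xtrabackup \
  --defaults-file=/var/backups/mariadb/.my.cnf \
  --backup \
  --target-dir=/var/backups/mariadb/xtrabackup_inc_$(date +%Y%m%d) \
  --incremental-basedir=/var/backups/mariadb/xtrabackup_full_base

Restoring incrementals requires preparing the full backup with --apply-log-only and applying each incremental in order; review the XtraBackup or Mariabackup documentation before adopting this in production.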

Method 3: Binary Logs - Point-in-Time Recovery

Binary logs record every database change. Combined with a full backup, they let you recover to any point in time.

Verify binary logging is enabled:

SHOW VARIABLES LIKE 'log_bin%';

You should see log_bin: ON. If not, enable it (see How to Secure MariaDB After Installation on Ubuntu).
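For reference, binary logging is controlled in the server config; a minimal sketch of the relevant [mysqld] settings (paths and retention are examples, and should match whatever the hardening guide set up):

# /etc/mysql/mariadb.conf.d/50-server.cnf, in the [mysqld] section
log_bin          = /var/log/mysql/mariadb-bin
max_binlog_size  = 100M
expire_logs_days = 14

Restart MariaDB after changing these settings (sudo systemctl restart mariadb).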

Create a binary log backup script:

sudo nano /usr/local/bin/backup-mariadb-binlogs.sh

Add:

#!/bin/bash

# Configuration
BACKUP_DIR="/var/backups/mariadb"
BINLOG_DIR="/var/log/mysql"
BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_LOG="${BACKUP_DIR}/binlog_backup.log"
RETENTION_DAYS=14

# Start logging
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Binary log backup started" >> $BACKUP_LOG

# Get the current binary log name before flushing
CURRENT_BINLOG=$(mysql --defaults-extra-file=/var/backups/mariadb/.my.cnf -e "SHOW MASTER STATUS\G" | grep "File:" | awk '{print $2}')

# Flush logs to create a new binlog file
mysql --defaults-extra-file=/var/backups/mariadb/.my.cnf -e "FLUSH BINARY LOGS" >> $BACKUP_LOG 2>&1

# Copy all completed binlog files (except the current one) to backup directory
for binlog in $(find $BINLOG_DIR -name "mariadb-bin.*" -not -name "*.index"); do
  if [[ $(basename $binlog) != $CURRENT_BINLOG ]]; then
    cp $binlog $BACKUP_DIR/
  fi
done

# Compress old binlog backups
find $BACKUP_DIR -name "mariadb-bin.*" -not -name "*.index" -mtime +1 -exec gzip {} \;

# Remove old binlog copies from the backup directory (older than retention).
# Leave the live binlogs in $BINLOG_DIR to the server's own expire_logs_days
# setting rather than deleting files out from under the running server.
find $BACKUP_DIR -name "mariadb-bin.*.gz" -mtime +$RETENTION_DAYS -delete

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Binary log backup completed" >> $BACKUP_LOG

Make it executable:

sudo chmod +x /usr/local/bin/backup-mariadb-binlogs.sh

Schedule it with cron (hourly):

sudo crontab -e

Add:

0 * * * * /usr/local/bin/backup-mariadb-binlogs.sh

This backs up completed binary logs every hour, letting you replay transactions up to the last hour.
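To see what a backed-up binlog actually contains, which is also how you later pick the exact point to recover to, decode it with mysqlbinlog (the file name here is just an example):

mysqlbinlog --base64-output=decode-rows --verbose \
  /var/backups/mariadb/mariadb-bin.000042 | less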

Encryption & Compression

Backups contain sensitive data. Always encrypt them at rest and in transit.

Encrypt backups with GPG:

Generate or import a GPG key:

sudo gpg --gen-key
# (Follow prompts; use a secure passphrase)

Create an encryption wrapper:

sudo nano /usr/local/bin/encrypt-backup.sh

Add:

#!/bin/bash
BACKUP_FILE=$1
GPG_RECIPIENT="backup@example.com"  # Your GPG key email

gpg --encrypt --recipient $GPG_RECIPIENT $BACKUP_FILE
rm $BACKUP_FILE  # Remove unencrypted version
echo "Encrypted: ${BACKUP_FILE}.gpg"

Integrate into your backup scripts:

# At the end of mysqldump script:
/usr/local/bin/encrypt-backup.sh $BACKUP_FILE

Compress before encryption (more efficient):

Most backup scripts already gzip. Encryption works on compressed files:

gzip $BACKUP_FILE
/usr/local/bin/encrypt-backup.sh "${BACKUP_FILE}.gz"

This creates mysqldump_YYYYMMDD_HHMMSS.sql.gz.gpg, encrypted and compressed.

Decrypt and verify:

gpg --decrypt mysqldump_20240115_020000.sql.gz.gpg > mysqldump_20240115_020000.sql.gz
gunzip mysqldump_20240115_020000.sql.gz
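For a restore, you can also stream the whole chain without writing intermediate files to disk; a one-liner sketch:

gpg --decrypt mysqldump_20240115_020000.sql.gz.gpg | gunzip | mariadb -u root -p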

Backup Verification & Testing

A backup you've never restored is a backup you can't trust. Test regularly.

Verify backup integrity:

Check the backup file exists and is readable:

# For mysqldump
gunzip -t /var/backups/mariadb/mysqldump_20240115_020000.sql.gz

# For XtraBackup
tar -tzf /var/backups/mariadb/xtrabackup_20240115_030000.tar.gz | head

Both commands verify the file isn't corrupt without extracting it.

Restore to a test database:

On a test server (or in a container), practice restoring:

# From mysqldump
gunzip -c mysqldump_20240115_020000.sql.gz | mariadb -u root -p

# From XtraBackup
tar -xzf xtrabackup_20240115_030000.tar.gz
xtrabackup --prepare --target-dir=./xtrabackup_20240115_030000
# Then copy data files to MariaDB datadir

Test recovery at least monthly. Document the process in your runbook.
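A throwaway container is a convenient place to practice. A minimal sketch using Docker (the image tag and password are placeholders; note that an --all-databases dump also restores the mysql schema, so the container's root password may end up matching production after the import):

# Spin up a disposable MariaDB instance
docker run --name restore-test -e MARIADB_ROOT_PASSWORD=testonly -d mariadb:10.6
sleep 30  # give the server a moment to initialize

# Import the backup and spot-check that databases came back
gunzip -c mysqldump_20240115_020000.sql.gz | docker exec -i restore-test mariadb -u root -ptestonly
docker exec restore-test mariadb -u root -ptestonly -e "SHOW DATABASES;"

# Clean up
docker rm -f restore-test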

Create a verification script:

sudo nano /usr/local/bin/verify-backups.sh

Add:

#!/bin/bash

BACKUP_DIR="/var/backups/mariadb"
VERIFY_LOG="${BACKUP_DIR}/verify.log"

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup verification started" >> $VERIFY_LOG

# Check mysqldump integrity
for file in $(find $BACKUP_DIR -name "mysqldump_*.sql.gz" -mtime -1); do
  if gunzip -t $file 2>/dev/null; then
    echo "OK: $file" >> $VERIFY_LOG
  else
    echo "CORRUPT: $file" >> $VERIFY_LOG
    # Alert on corruption
  fi
done

# Check XtraBackup integrity
for file in $(find $BACKUP_DIR -name "xtrabackup_*.tar.gz" -mtime -1); do
  if tar -tzf $file > /dev/null 2>&1; then
    echo "OK: $file" >> $VERIFY_LOG
  else
    echo "CORRUPT: $file" >> $VERIFY_LOG
  fi
done

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Verification completed" >> $VERIFY_LOG

Schedule it daily:

0 4 * * * /usr/local/bin/verify-backups.sh

Monitoring & Alerting

Backups fail silently if you don't monitor them. Set up alerts.

Check backup age:

sudo nano /usr/local/bin/check-backup-freshness.sh

Add:

#!/bin/bash

BACKUP_DIR="/var/backups/mariadb"
MAX_AGE_HOURS=26  # Alert if backup is older than 26 hours

# Find the most recent mysqldump backup (newest modification time)
LATEST_BACKUP=$(find $BACKUP_DIR -name "mysqldump_*.sql.gz" -printf '%T@\n' | sort -n | tail -1)

if [ -z "$LATEST_BACKUP" ]; then
  echo "ERROR: No backup found"
  exit 1
fi

BACKUP_AGE=$(($(date +%s) - ${LATEST_BACKUP%.*}))
BACKUP_AGE_HOURS=$((BACKUP_AGE / 3600))

if [ $BACKUP_AGE_HOURS -gt $MAX_AGE_HOURS ]; then
  echo "WARNING: Last backup is $BACKUP_AGE_HOURS hours old (max: $MAX_AGE_HOURS)"
  # Send alert email
  echo "Backup is stale" | mail -s "MariaDB Backup Alert" ops@example.com
  exit 1
else
  echo "OK: Last backup is $BACKUP_AGE_HOURS hours old"
fi

Schedule it hourly:

0 * * * * /usr/local/bin/check-backup-freshness.sh

Monitor disk usage:

Backups consume disk space. Alert if /var/backups fills up:

USAGE=$(df /var/backups | awk 'NR==2 {print $5}' | sed 's/%//')
if [ $USAGE -gt 80 ]; then
  echo "Backup disk usage at ${USAGE}%" | mail -s "Disk Alert" ops@example.com
fi

Use systemd timers (alternative to cron):

For better monitoring, use systemd:

sudo nano /etc/systemd/system/mariadb-backup.service

Add:

[Unit]
Description=MariaDB Backup
After=mariadb.service

[Service]
Type=oneshot
User=root
ExecStart=/usr/local/bin/backup-mariadb-mysqldump.sh
StandardOutput=journal
StandardError=journal

Create the timer:

sudo nano /etc/systemd/system/mariadb-backup.timer

Add:

[Unit]
Description=Daily MariaDB Backup
Requires=mariadb-backup.service

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

Enable it:

sudo systemctl enable --now mariadb-backup.timer

Check status:

sudo systemctl list-timers mariadb-backup.timer
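With systemd, backup output goes to the journal, which makes failures easier to investigate than grepping log files:

# Output from the most recent backup runs
sudo journalctl -u mariadb-backup.service --since today

# When the timer last fired and when it fires next
systemctl status mariadb-backup.timer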

Backup Performance & Impact

Large backups lock tables, increase disk I/O, and can impact database performance. Manage this.

mysqldump performance tips:

Use --single-transaction so InnoDB tables are dumped from a consistent snapshot without locking; avoid --lock-tables on large databases, and run dumps during off-peak hours.

# Fast mysqldump (10GB database, ~10 minutes)
mysqldump --single-transaction --quick --no-autocommit --all-databases

Percona XtraBackup performance tips:

XtraBackup doesn't lock tables, making it ideal for production. But it copies all data files, so I/O impact is high. Run during low-traffic windows.

# Throttle I/O (max read/write operation pairs per second) to limit impact
xtrabackup --backup --target-dir=/backup --throttle=50

Binary log impact:

Binary logging adds 1-3% CPU overhead. Acceptable for most systems.

Monitor impact during backup:

# Check active queries during backup
SHOW PROCESSLIST;

# Check replication lag (if applicable)
SHOW SLAVE STATUS\G;

If queries are slow, reduce backup frequency or move to a lower-traffic time.

Common Backup Mistakes

Mistake 1: Storing backups on the same server. If the server dies, backups die with it. Always have offsite copies.

Mistake 2: Never testing restores. A backup you haven't restored is an untested backup. Test monthly.

Mistake 3: Forgetting encryption. Backups contain passwords, customer data, and financial info. Always encrypt.

Mistake 4: Ignoring backup failures silently. Set up monitoring and alerts. If a backup fails, you need to know immediately.

Mistake 5: Keeping backups forever. Old backups waste storage. Implement retention policies (e.g., 7 days local, 30 days S3, 90 days Glacier).

Mistake 6: Using the wrong tool for your database size. mysqldump works well up to roughly 10GB; for larger databases, use Percona XtraBackup or binary logs. (A quick size check follows this list.)

Mistake 7: Not documenting the recovery process. When disaster strikes, you're stressed. Write down the restore steps now while you're calm.
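To see which side of that roughly-10GB line you're on, this query reports the size of each schema (data plus indexes) in GB:

mariadb -u root -p -e "
  SELECT table_schema,
         ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM information_schema.tables
  GROUP BY table_schema
  ORDER BY size_gb DESC;"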

Recovery Examples

When disaster strikes, you'll be grateful you automated backups. Here are real recovery scenarios.

Scenario 1: Accidental table drop (last hour)

You dropped users table by mistake 30 minutes ago. Recovery:

# 1. Stop the application to prevent new data loss
sudo systemctl stop myapp

# 2. Get the current binary log position before recovery
mysql -u backup_user -pB@ckup!2024 -e "SHOW MASTER STATUS\G"
# Write down File and Position

# 3. Restore the latest full backup (mysqldump)
gunzip -c /var/backups/mariadb/mysqldump_20240115_020000.sql.gz | mariadb -u root -p

# 4. Replay binary logs up to just before the DROP. Find the DROP's timestamp
#    or position by inspecting the binlog with mysqlbinlog, then stop right
#    before it (the timestamp below is an example):
mysqlbinlog --stop-datetime="2024-01-15 09:30:00" /var/log/mysql/mariadb-bin.000042 | mariadb -u root -p

# 5. Verify table exists and has data
mariadb -u root -p your_database -e "SELECT COUNT(*) FROM users;"

# 6. Restart application
sudo systemctl start myapp

Recovery time: ~5-15 minutes depending on database size.

Scenario 2: Hardware failure (full recovery)

Your server's disk failed. Recovery:

# 1. Install MariaDB on new server
sudo apt-get install mariadb-server -y

# 2. Stop MariaDB
sudo systemctl stop mariadb

# 3. Download and restore XtraBackup
cd /var/backups
aws s3 cp s3://mycompany-backups/2024/01/xtrabackup_20240115_030000.tar.gz .
tar -xzf xtrabackup_20240115_030000.tar.gz

# 4. Move the freshly initialized datadir aside, then copy the backup in
sudo mv /var/lib/mysql /var/lib/mysql.old
sudo mkdir /var/lib/mysql
sudo cp -r xtrabackup_20240115_030000/* /var/lib/mysql/
sudo chown -R mysql:mysql /var/lib/mysql

# 5. Start MariaDB
sudo systemctl start mariadb

# 6. Verify data
mariadb -u root -p -e "SELECT COUNT(*) FROM production_db.users;"

Recovery time: 30 minutes to 2 hours depending on database size and network speed.
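Scenario 2 assumes an offsite copy already exists. A minimal sketch of pushing encrypted backups to S3 from the end of a backup script, assuming the AWS CLI is installed and configured and a bucket named mycompany-backups exists (any object store with a copy or sync tool works the same way):

# Run after encryption; BACKUP_FILE comes from the mysqldump script above
aws s3 cp "${BACKUP_FILE}.gpg" s3://mycompany-backups/$(date +%Y/%m)/ --storage-class STANDARD_IA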

Next Steps

You now have automated, encrypted, tested backups. Here are a few ways to keep strengthening your overall database resilience.

For deeper backup knowledge, review the Percona XtraBackup Documentation, the MariaDB Backup Documentation, and the mysqldump Manual.

Consider subscribing to the Percona Community Blog for backup best practices and disaster recovery case studies.

Conclusion

A backup is only as good as your ability to restore from it. This guide walked you through setting up automated, encrypted, offsite backups that actually work. Three complementary strategies—mysqldump for simplicity, Percona XtraBackup for speed, and binary logs for granular recovery—give you defense in depth.

The key to success is automation and testing. Automate the backup process so it runs without thinking. Test recovery regularly so you know it actually works. Monitor freshness and integrity so you catch failures early. Do this, and when disaster strikes—and it will—you'll restore and move on. Everyone else will still be scrambling.

Start small: set up daily mysqldump backups, store them on a second server, and test recovery once. Then add Percona XtraBackup for speed, binary logs for point-in-time recovery, and use cloud for geographic redundancy. Build incrementally. But start today.

Need professional help with your database or infrastructure? Our team at Technoroots Limited specializes in enterprise database solutions.

Contact us at Technoroots for expert guidance on MySQL migrations, PostgreSQL optimization, Oracle Database setup, or any database infrastructure challenges.
