backup scripts

This is a package of utility scripts to set up a system to back up servers and directories to local or remote archives. Refer to each script's help text to see how it works.

Current version: 0.5.2

NOTE: This new release of the backup scripts package deprecates the following obsolete scripts:

local_dir2tar, remote_backup, remote_dir2tar, restore_backup, sync_dir, sync_files


Wrapper script for duplicity.

The caller must export the following variables:
* PASSPHRASE -- passphrase for encrypting archives
* BACKUP_SOURCE -- source path for backup
* BACKUP_DESTINATION -- backup destination url
* BACKUP_TARGET -- target url used for verify, restore, etc.

backup OP [duplicity-options]

OP is one of:

* All parameters to the script will be passed along to duplicity.
  (Consult the duplicity man page for the appropriate options for each
  operation.)
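For example, the required variables might be exported like this before invoking the wrapper (the passphrase and paths are placeholders, and the commented invocation is only an illustration):

```shell
# Placeholder values -- substitute your own passphrase and paths.
export PASSPHRASE="example-passphrase"
export BACKUP_SOURCE=/home/user1
export BACKUP_DESTINATION=file:///mnt/backups/user1
export BACKUP_TARGET=file:///mnt/backups/user1

# Then invoke the wrapper with an operation and any duplicity options,
# e.g. (consult the duplicity man page for the available options):
# backup full --volsize 250
```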


Script to back up PostgreSQL and MySQL databases on a server.

backup_databases [options]
   -H | --help              # print usage and exit
   -? | --?                 # same as 'help'
   -V | --version           # print script version and exit
   -v | --verbose           # generate verbose output
   -q | --quiet             # no verbose diagnostic output
   -X | --dry-run           # do dry run
   -D | --debug             # generate debug diagnostic output
   -d | --dst <ARG>         # specify destination directory for backups
   -k | --backups_to_keep <ARG> # number of backup snapshots to keep
   -u | --mysql_db_user <ARG> # specify mysql user to dump database (default: root)
   -h | --db_host <ARG>     # specify host for database (default: localhost)
   -x | --exclude <ARG>     # specify database to exclude from dumping

* The MySQL password must be defined in the user's .my.cnf config file
  for the script to access MySQL.
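As a sketch, a minimal .my.cnf with a client section looks like the following (values are placeholders; the file is written to a scratch directory here so the example does not clobber a real config):

```shell
# Write a minimal MySQL client config to a scratch directory.
# In practice this content lives in ~/.my.cnf, mode 600.
DEMO_HOME=$(mktemp -d)
cat > "$DEMO_HOME/.my.cnf" <<'EOF'
[client]
user     = root
password = your-password-here
EOF
chmod 600 "$DEMO_HOME/.my.cnf"
```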


Script to dump a database to stdout.

The following databases are supported: postgresql, mysql, sqlite3.

Usage: dump_db -t db_type -u db_user [-H db_host] [-o db_options] database
   -V | --version           # display version and exit
   -h | --help              # print usage and exit
   -? | --?                 # same as 'help'
   -v | --verbose           # generate verbose output
   -q | --quiet             # no verbose output
   -D | --debug             # generate debug diagnostic output
   -t | --type <ARG>        # specify database type (psql, mysql or sqlite3)
   -u | --user <ARG>        # specify database user
   -H | --host <ARG>        # specify database host (default: localhost)
   -o | --options <ARG>     # specify database dump options (can be set multiple times)
   -X | --dry-run           # do dry run

* This script assumes the caller has permission to dump the database
  owned by the specified user. If you're dumping a PostgreSQL
  database as an admin user, the script will ignore the database
  owner and dump the data using the default "postgres" or "pgsql"
  admin user.
* If no database user is specified and the script process is run by
  root, the script will use "root" as the database user.
* To dump a sqlite3 database, specify the path to the database file
  as the script argument. Host and user are ignored for sqlite3.
* The script will use the following default database dump options if
  none are specified:
  pg_dump   : --clean --no-owner -Fp
  mysqldump : --complete-insert --create-options --quick --quote-names --set-charset
  sqlite3   : .dump
* You can export your own default dump options to the environment
  before calling this script to override these defaults.

dump_db -t mysql -u server server_db > server_db.sql
dump_db -t psql -u server server_db > server_db.sql
dump_db -t sqlite3 /var/data/server_db > server_db.sqlite3


Script to back up PostgreSQL databases in the current cluster.

This script must be run as root.

Usage: postgresql_db_backup [options]
   -V | --version           # display version and exit
   -h | --help              # print usage and exit
   -? | --?                 # same as 'help'
   -v | --verbose           # generate verbose output
   -q | --quiet             # no verbose output
   -D | --debug             # generate debug diagnostic output
   -t | --timestamp         # use timestamp in archive filename (default: no timestamp)
   -d | --dst_dir <ARG>     # save file in destination directory (default: /var/backups)
   -s | --suffix <ARG>      # append suffix to archive file (default: .sql)
   -z | --gzip              # compress archive using gzip (default: no compression)
   -l | --log <ARG>         # append log message to <ARG>
   -X | --dry-run           # do dry run


Script to archive a directory as a tarball.

Usage: archive_dir [options] source-directories ...
   -V | --version           # display version and exit
   -h | --help              # print usage and exit
   -? | --?                 # same as 'help'
   -v | --verbose           # generate verbose output
   -q | --quiet             # do not use verbose output
   -D | --debug             # generate debug diagnostic output
   -d | --dst <ARG>         # destination directory to store archives
   -t | --timestamp         # create timestamp subdirectory (YYYYMMDD) inside backup directory
   -c | --compress <ARG>    # compress tarball using: gzip | bz2 | none (default: none)
   -e | --exclude <ARG>     # exclude patterns listed in file <ARG>
   -o | --output <ARG>      # save tarball in <ARG> (ignores destination setting).
   -X | --dry-run           # do dry run

* Each source directory specified on the command line will be
  archived as a separate tarball.
* On macOS, tar will use the AppleDouble format for archiving
  files with extended attributes. To bypass this behavior, set
  COPYFILE_DISABLE to 1 before calling this script:
  export COPYFILE_DISABLE=1; archive_dir ...
* Use the -o (--output) option to specify the destination for the
  tarball. This option can be specified multiple times and the
  script will store the output paths and map them to each of the
  source directories to archive (the list is order sensitive). If
  there is no output file specified for a source directory, the
  script will use the default algorithm to generate a label with
  timestamp for the tarball.
* If an output file (argument to -o/--output) is a full path with
  directory components, the script will ignore the destination
  directory (argument to -d/--dst) and save the archive in the
  specified output path.
* Use the -X (--dry-run) option to examine the backup operation
  before doing a real execution.
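As a sketch of the -t (--timestamp) naming, the subdirectory created inside the destination directory uses the current date in YYYYMMDD form (the destination path below is an example, not something the script requires):

```shell
# Derive the YYYYMMDD timestamp subdirectory path (sketch only;
# archive_dir does this internally when -t is given).
DST=/mnt/backups                 # example destination directory
STAMP=$(date +%Y%m%d)            # e.g. 20240115
echo "$DST/$STAMP"
```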

  archive_dir -v -d /mnt/backups -t -c bz2 /home/user1 /home/user2
  archive_dir -v -d /mnt/backups -t -c bz2 /home/user1 -o /tmp/user1.tar.bz2
  archive_dir -v -d /tmp/archives -c bz2 \
      /home/user1 -o user1.tar.bz2 \
      /home/user2 -o user2.tar.bz2
  archive_dir -v -d /mnt/archives --compress=bz2 --timestamp \
      /home/user1 -o user1.tar.bz2 \
      /home/user2 -o user2.tar.bz2
  archive_dir -v -d /tmp/archives -c bz2 \
      /home/user1 -o /backups/user1.tar.bz2 \
      /home/user2 -o user2.tar.bz2

  export COPYFILE_DISABLE=1; archive_dir -v -d ~/backups ~/webdev/images \
      && unset COPYFILE_DISABLE


Script to delete obsolete backup archives (by deleting subdirectories with the format YYYYMMDD).

Usage: prune_archives [options] backup-directory
   -V | --version           # display version and exit
   -h | --help              # print usage and exit
   -? | --?                 # same as 'help'
   -v | --verbose           # generate verbose output
   -q | --quiet             # do not produce verbose output
   -d | --days <ARG>        # number of snapshot days to keep (default is 30)
   -X | --dry-run           # do dry run

prune_archives -v /mnt/backups -d 90
prune_archives -v /mnt/backups --days 90
prune_archives -v /mnt/backups -d 90 -X
prune_archives -v /mnt/backups --days 90 --dry-run

* This script will scan backup directories and delete subdirectories
  with the format YYYYMMDD older than X number of days.
* If the BACKUP_SNAPSHOTS_TO_KEEP environment variable is exported,
  the script will use it as the default number of snapshot days to
  keep.  If BACKUP_SNAPSHOTS_TO_KEEP is not set, the script will use
  the default of 30 days or whatever is specified in the -d
  (--days) argument.
* Use the -X (--dry-run) option to test the script operation before
  performing a real pruning.
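The date-based selection described above can be sketched with find (this is an illustration of the YYYYMMDD matching, not the installed script; it builds a scratch tree so the example is self-contained):

```shell
# Sketch: list YYYYMMDD subdirectories older than the cutoff.
DAYS=${BACKUP_SNAPSHOTS_TO_KEEP:-30}
ROOT=$(mktemp -d)
mkdir -p "$ROOT/20200101" "$ROOT/current"
touch -d "100 days ago" "$ROOT/20200101"   # make one snapshot stale
STALE=$(find "$ROOT" -mindepth 1 -maxdepth 1 -type d \
    -regextype posix-extended -regex '.*/[0-9]{8}' \
    -mtime +"$DAYS")
echo "stale: $STALE"   # prune_archives would delete these
```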


Script to send email with MIME attachment using local host sendmail.

Usage: send_email [options]

  -h, --help            show this help message and exit
  -v, --verbose         verbose output (print diagnostic info)
  -X, --dry-run         dry run (prints email message info without sending)
  -s subject, --subject=subject
                        email subject
  -r from_email, --from=from_email
                        from address
  -c cc_email, --cc=cc_email
                        cc addresses
  -b bcc_email, --bcc=bcc_email
                        bcc addresses
  -m message, --message=message
                        email message body
  -f msgfile, --file=msgfile
                        file containing message body
  -a attachment, --attachment=attachment
                        file to attach (can be specified multiple times)

* If the parameters contain both "msgfile" and "message," the script
  will use the content of "msgfile" as the message body.
* If the parameters contain neither "msgfile" nor "message," the
  script will read the message body from stdin (stdin content must
  be text).
* If you have binary content, do not pipe it through stdin; attach
  the file instead using the -a|--attachment option.
* Only one msgfile (using the -f|--file option) is allowed for the
  message body, but you can attach as many files as you like using
  the -a|--attachment option.


echo "Backup all done and successful." > /tmp/msgfile

send_email -s "Backup Report" < /tmp/msgfile
send_email -s "Backup Report" -f /tmp/msgfile
echo "All done." | send_email -s "Backup Report"

# Send email with MIME attachment:
send_email -s "Backup Report" \
    -a latest-log-report.txt.gz < /tmp/msgfile


The notify script is now symlinked to send_email. The send_email options are a compatible superset of the notify options, so you should be able to use either one interchangeably without modifying your code.


notify -s "Backup Report" < msgfile
notify -s "Backup Report" -f msgfile
echo "All done." | notify -s "Backup Report"

How to Install

NOTE: The installation script will copy the backup scripts to /usr/local/bin. You must have root permission to perform the installation.

Clone the repository to a source directory on your server (for example, /usr/local/src/packages):

[ ! -d /usr/local/src/packages ] && sudo mkdir /usr/local/src/packages
cd /usr/local/src/packages
sudo hg clone

Navigate to the backup-scripts directory and execute "install-scripts":

cd backup-scripts
sudo ./install-scripts

Add a logrotate task for your backup logs to /etc/logrotate.d/backups:

sudo vi /etc/logrotate.d/backups
/var/log/backups/*.log {
    rotate 30
    create 640 root adm
}

More Info

For more info, contact Kevin Chan.