BBSDEV-18370 Update backup scripts to support changes in 6.0

Merged in BBSDEV-18370-support-bitbucket-6-0 (pull request #53)

571eb09 · 2018-10-18

Description

  • "home" strategies are renamed to "disk" strategies, and now support backup and restore of a home directory and any number of data stores

  • in order to support this change, a number of config variables have been added, renamed, or changed to arrays (a rough before/after sketch follows below)
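
As a rough illustration of the shape of the change (the variable names here are illustrative, not necessarily the ones in the scripts):

    # Pre-6.0 style: a single shared home directory
    # BACKUP_HOME_TYPE=rsync
    # HOME_DIRECTORY=/var/atlassian/application-data/bitbucket

    # 6.0 style: a "disk" strategy covering the home directory plus any
    # number of data stores, configured as an array
    BACKUP_DISK_TYPE=rsync
    HOME_DIRECTORY=/var/atlassian/application-data/bitbucket
    DATA_STORES=(/mnt/data-store-1 /mnt/data-store-2)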

Some topics for discussion…

Backwards compatibility

With this change, a number of variables in the config file (bitbucket.diy-backup.vars.sh) have been renamed and turned into arrays, so running these scripts with an old backup vars file won't work. I could add something to check whether the old variables are set and, if so, copy the right values into the new variables, but this felt really messy and unnecessary, especially given this change is to support a major version bump. This does raise the question of whether these scripts should be versioned as well, but again that feels like overkill.
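
For the record, the shim I decided against would look something like this (both the old and new variable names are illustrative):

    # Hypothetical compatibility shim (not implemented): if an old-style
    # vars file is in use, copy the old values into their renamed
    # equivalents.
    if [ -n "${BACKUP_HOME_TYPE:-}" ] && [ -z "${BACKUP_DISK_TYPE:-}" ]; then
        BACKUP_DISK_TYPE="${BACKUP_HOME_TYPE}"
    fi
    if [ -n "${HOME_DIRECTORY_MOUNT_POINT:-}" ] && [ -z "${DISK_MOUNT_POINTS:-}" ]; then
        DISK_MOUNT_POINTS=("${HOME_DIRECTORY_MOUNT_POINT}")
    fi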

Multiple NFS servers

My changes add support for multiple stores on the same NFS server, but not for stores on different NFS servers. For the rsync strategy this isn't a problem, because the script can be run from a node which has access to all stores. For the ZFS and Amazon EBS strategies though, supporting stores on multiple NFS servers means adding configuration to say which stores live on which servers, and then having the script SSH into each server and run commands there. Again, this adds more complexity to these scripts.

If we don’t implement this, consumers of the script can still write their own wrapper script to SSH to the correct servers and run the scripts (in fact we already do this in our stash dev deploy script). If we go this route I will need to make some changes to allow the scripts to run without a shared home configured.
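
A wrapper along these lines would do it (hostnames and paths are illustrative):

    # Hypothetical wrapper: each NFS server has its own checkout of the
    # backup scripts, with a vars file listing only the data stores that
    # live on that server.
    NFS_SERVERS=(nfs-1.example.com nfs-2.example.com)
    for server in "${NFS_SERVERS[@]}"; do
        ssh "bitbucket@${server}" "/opt/atlassian-bitbucket-diy-backup/bitbucket.diy-backup.sh"
    done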

Zero downtime backups

We want to be sure there’s no risk of corruption/data inconsistency with zero downtime backups. I don’t think there will be any issues here: each repository lives entirely on a single data store, so the stores hold distinct repositories and can be backed up independently.

Disaster Recovery

At the moment, setup-disk-replication.sh will fail if it finds that any of the filesystems it’s trying to replicate have already been created on the standby. This won’t be the nicest experience for someone who has added a data store and is attempting to set up replication for it when replication is already set up for their home directory. Running the setup with only the new data store set in the config variables should work though, so I don’t think there’s any major problem here.
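
For context, the failure comes from an existence check along these lines (an illustrative ZFS case; the names are not the actual ones from the script):

    # Sketch of the current behaviour: abort if the filesystem has
    # already been created on the standby.
    if ssh "${STANDBY_SSH_USER}@${STANDBY_HOST}" zfs list "${filesystem}" >/dev/null 2>&1; then
        echo "Filesystem '${filesystem}' already exists on the standby" >&2
        exit 1
    fi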

Testing

I’ll spend some time manually testing my changes this week, and we will blitz this as well.

Edit: testing notes here.
