OK, so, I feel like we should explain what happened here. I'll keep it brief.
When repositories are saved on our end (when you hit "Save settings" as a repository admin, when strips are carried out, and what-not), we need to do a bit of housekeeping: we ping S3 and go "hey, this download is now private" (or public), we update all the events in our database, and other stuff like that.
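To make the cost of that housekeeping concrete, here's a minimal sketch of what a per-save sync might look like. All the names here (`sync_repository`, the dict-shaped repo, the `storage` stand-in for S3) are hypothetical, not our actual code; the point is that the work is one call per file plus one per event, so it grows with repository size.

```python
# Hypothetical sketch of the housekeeping that used to run synchronously
# on every save. "storage" stands in for S3; in real life each assignment
# would be a network round-trip to set that object's ACL.

def sync_repository(repo, storage):
    """Push the repository's visibility out to every file and event."""
    acl = "private" if repo["is_private"] else "public-read"
    for path in repo["files"]:
        storage[path] = acl                     # one S3 ACL call per file
    for event in repo["events"]:
        event["is_private"] = repo["is_private"]  # one DB update per event
    return len(repo["files"]) + len(repo["events"])  # total calls made
```

Nothing here is individually slow, but the loop runs once per uploaded file and once per recorded event, which is exactly what bit us below.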
Now, up until yesterday, we did all this synchronously, while you were waiting for it. We don't allow anything to run for longer than 30 seconds on the frontend, so you'd get a "someone kicked the bucket" page, meaning that our app server killed the worker because it was taking too long to finish. Unfortunately, in this case, the turn of events went something like this:
1. Put the repository into the "stripping" state.
2. Start updating the ACLs on S3 and updating the events.
3. TortoiseHG, having 426 uploaded files, couldn't get through all this in less than 30 seconds.
4. The worker was killed, leaving the repository stuck in the "stripping" state.
We now update ACLs and events in a background task (we do this for most things that can run in the background and have variable runtime complexity). Calling save() on any repository is now a trivial operation, and its duration no longer depends on the number of files uploaded or events recorded for the repository.
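The new flow can be sketched like this. Again, the names are illustrative (we haven't published the actual task code), and a simple in-process deque stands in for whatever real job queue is used; the point is that save() now does O(1) work regardless of repository size, and a worker picks up the heavy sync later.

```python
# Hypothetical sketch of the fixed flow: save() only flips the flag and
# enqueues the heavy ACL/event sync, so the request returns immediately.
from collections import deque

task_queue = deque()  # stands in for a real background-job queue

def save_repository(repo):
    repo["is_private"] = repo["pending_private"]
    task_queue.append(("sync_acls", repo["id"]))  # O(1), size-independent
    return repo

def drain(handlers):
    """Background worker loop: run each queued task to completion."""
    while task_queue:
        name, repo_id = task_queue.popleft()
        handlers[name](repo_id)
```

Because the worker runs outside the request cycle, the 30-second frontend limit no longer applies to the sync, and a slow repository just means the background job takes a bit longer to finish.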