I've gotten somewhere towards the bottom of the issues I had last month with "database is locked" messages.
If a second mount process is erroneously started (in my case, the overnight team misreading the server notes), the new process correctly(?) refuses to start with the following error (there's at least one extra debug statement since the last time I was tracking this):
mount.s3ql[127165:MainThread] s3ql.mount.determine_threads: Using 10 upload threads.
mount.s3ql[127165:MainThread] s3ql.mount.main: Autodetected 4034 file descriptors available for cache entries
mount.s3ql[127165:MainThread] s3ql.mount.get_metadata: Using cached metadata.
mount.s3ql[127165:MainThread] s3ql.database.__init__: Starting db connection <s3ql.database.Connection object at 0x7f440e1fefd0>
mount.s3ql[127165:MainThread] s3ql.database.__init__: Exception BusyError: database is locked when executing 'PRAGMA synchronous = OFF'
mount.s3ql[127165:MainThread] s3ql.database.__init__: Exception BusyError: database is locked when executing 'PRAGMA journal_mode = OFF'
mount.s3ql[127165:MainThread] root.excepthook: File system damaged or not unmounted cleanly, run fsck!
But at this point the existing mount process, while still running, seems to go into a strange state: in my case the tar process unpacking into the mount point appears to continue, but an ls of the mount point shows an empty directory.
s3qlcp commands fail on a directory in the filesystem with 'directory does not exist', even though the directory is still there after a remount (though obviously it isn't listed by ls, since ls shows nothing at all).
I believe either the existing mount process should die as well, or, preferably, continue unaffected.
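In the meantime, the trigger here was an operator starting a second mount by hand, so the workaround on our side is to make the mount step refuse to run if the mountpoint is already in use. A minimal sketch of such a guard, assuming a hypothetical mountpoint path and storage URL (neither is from my actual setup):

```shell
#!/bin/sh
# Guard against accidentally starting a second mount.s3ql process.
# MOUNTPOINT and the storage URL below are example values, not real paths.
MOUNTPOINT="/mnt/s3ql"

# /proc/mounts lists one "device mountpoint fstype opts" entry per line,
# so matching the mountpoint surrounded by spaces detects an active mount.
if grep -qs " $MOUNTPOINT " /proc/mounts; then
    echo "refusing to mount: $MOUNTPOINT is already mounted" >&2
    exit 1
fi

echo "not mounted; safe to run mount.s3ql"
# mount.s3ql s3://example-bucket "$MOUNTPOINT"   # the real mount would go here
```

Wrapping the nightly job in something like flock(1) on a lock file would close the same race more robustly, since two scripts could in principle pass the /proc/mounts check simultaneously.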