Now installing as -e from code rather than from a wheel, which makes the paths to manage.py a lot easier to deal with. Refs #793 [skip ci]
Using the beforedist.txt rsync to copy the relevant parts of the repo to the docker/code folder.
You should now be able to go to http://localhost/ and add your new superuser to the groups as normal
If you put some files in the imports folder, we can then import them. The command will depend on the name of the folder your repo is checked out into (for me it is bbOpenREM): you will need to change the name of the container to match, as the containers adopt the folder name. The ./imports folder is mounted as /imports in the openrem container. For example:
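A sketch of what I mean, with the first lines just deriving the container name Compose generates from your checkout folder. The script name openrem_rdsr.py is an assumption here (use whichever OpenREM import script matches the modality of the files you dropped in ./imports):

```shell
# Compose names containers <checkout folder, lowercased>_<service>_1;
# change "bbOpenREM" to match your own checkout folder.
folder="bbOpenREM"
container="$(printf '%s' "$folder" | tr '[:upper:]' '[:lower:]')_openrem_1"

# The command to run; openrem_rdsr.py is an assumption, substitute the
# import script for your modality.
cmd="docker exec $container openrem_rdsr.py /imports/DX-RDSR-Canon_CXDI.dcm"
echo "$cmd"
```

For me that prints `docker exec bbopenrem_openrem_1 openrem_rdsr.py /imports/DX-RDSR-Canon_CXDI.dcm`, which is the line to paste into your terminal.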
Anyone playing along might like to try again - I’d messed up the logs folder for flower.
Other useful commands:
docker-compose down to shut down the containers; docker-compose down -v to destroy the volumes too (so you'll have a fresh database, migrations folder and media folder on the next start)
docker ps to see running containers, docker ps -a to see all containers. Also docker-compose ps.
docker-compose logs -f to see the logs from all the containers whizz by; Ctrl-C to quit.
docker logs container_name_1 -f to see the logs from container_name_1, whose name is built from the folder you start in plus the service name, so for me the flower logs are shown by docker logs bbopenrem_flower_1 -f
Giving flower access to the logs folder. Refs #793 [skip ci] not ready for testing
I think that what we need for the database is a separate local folder that is mounted in the docker container as a volume. It is then easy to back up this local folder, or re-deploy it to another server if required. Or am I missing something?
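A minimal sketch of what I have in mind (the service name db and the official postgres image are illustrative, not what's in the yml yet):

```yaml
services:
  db:
    image: postgres:12
    volumes:
      # bind-mount a local folder so the database files live on the host
      # and can be backed up or redeployed to another server
      - ./postgres-data:/var/lib/postgresql/data
```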
Sorry, psls was my mistake.
Do you think we need access directly to the postgres db folders, or for postgres to have access to a folder (like the imports and logs folders) where backups can be dumped and imported from?
I think that the method doesn’t matter, as long as it’s easy to obtain a backup of the current database, and also easy to restore a database backup to the Dockerised system.
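For the dump-folder approach, a hedged sketch with ./backups mounted as /backups in the db container (the container, user and database names here are assumptions for illustration):

```shell
# Dump the current database to the mounted backups folder
docker exec bbopenrem_db_1 pg_dump -U openrem -d openrem_prod -f /backups/openrem.sql

# Restore it later, e.g. on another server, from the same folder
docker exec bbopenrem_db_1 psql -U openrem -d openrem_prod -f /backups/openrem.sql
```

That keeps postgres in charge of its own data folder while still giving us an easy host-side file to copy around.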
I can’t find a way of running the jodogne/orthanc image and then running the OpenREM scripts in the openrem container. It seems it might be possible to run the docker command from within the container by having a volume that is the docker.sock of the host, but it isn’t clear that this will work on Windows hosts, even if I can make it work!
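For reference, the docker.sock approach I was describing looks like this in the compose file (a sketch only, and as noted it's unclear this works on Windows hosts):

```yaml
services:
  openrem:
    volumes:
      # expose the host's Docker daemon inside the container so it could
      # run "docker exec" against the orthanc container
      - /var/run/docker.sock:/var/run/docker.sock
```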
So currently it is looking like there will need to be an openrem image that contains everything already included, plus DCMTK, Java, pixelmed and Orthanc. This image would be used for the main openrem container, the worker container, the flower container, and as many DICOM Store nodes as are required. I'm assuming we can set some environment variables to dictate which of those containers have active DICOM Store nodes?
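Something along these lines is what I had in mind for the environment variables (the variable names are made up for illustration, nothing reads them yet):

```yaml
services:
  openrem:
    image: openrem/openrem
    environment:
      # hypothetical flag: this container runs the web app only
      - DICOM_STORE_ENABLED=false
  store_node_1:
    image: openrem/openrem
    environment:
      # hypothetical flags: this container runs an active Store SCP
      - DICOM_STORE_ENABLED=true
      - DICOM_STORE_PORT=104
```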
Added restart: always to the components of the yml file. On my Windows computer this causes OpenREM and the associated components to restart automatically when the host computer is rebooted. Confirmed as working on my Windows system using Docker Desktop. Refs #793 [skip ci] not ready for testing
We can use wget in orthanc to call a Django view to trigger imports. Refs #793 [skip ci] not ready for testing
E.g. http://localhost/import/rdsr/?dicom_path=%2Fimports%2FDX-RDSR-Canon_CXDI.dcm successfully causes the referenced RDSR in /imports to be imported!
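The only fiddly part is percent-encoding the file path for the query string (each / becomes %2F). A small sketch of building the URL the wget call needs:

```shell
# Percent-encode the slashes in the DICOM file path for the query string
path="/imports/DX-RDSR-Canon_CXDI.dcm"
encoded=$(printf '%s' "$path" | sed 's|/|%2F|g')
echo "http://localhost/import/rdsr/?dicom_path=${encoded}"
# prints http://localhost/import/rdsr/?dicom_path=%2Fimports%2FDX-RDSR-Canon_CXDI.dcm
```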
Basic Orthanc setup working. A DICOM store of an RDSR to localhost port 4242 is then sent on to OpenREM using wget. Need to expand to the other extractors, and probably replace wget with Orthanc's built-in HttpPost. Refs #793 [skip ci] not ready for testing
Should enable you to get started without checking out the repo.
This would be easier to use as a downloadable zip file, but then how would you manage updates? Or a small git repo, but then you'd need git.
And as it stands, users would have to update the docker-compose file with the particulars for Orthanc if we are allowing users to set up more than one instance.
I think I would favour a small git repository to contain the required files. A user doesn’t necessarily need git to access this: on Bitbucket a user can click on the “Download repository” link to get hold of a zip file containing all the files. Extracting the zip file is much easier than creating a series of files from scratch.
I had decided on a zip; a small repo is a good idea, I think.
I am currently working on converting all the settings in the lua script to use environment variables we can set in an env file.
I’ve updated the Orthanc image so we can feed the Lua script variables in as environment values, and replaced the instructions at https://hub.docker.com/r/openrem/openrem with a link to download a zip from bitbucket.
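The general shape is roughly this (the variable name below is invented for illustration; the real ones are whatever the Lua script now reads):

```yaml
# docker-compose.yml excerpt: feed the Lua script its settings from an env file
services:
  orthanc:
    image: openrem/orthanc
    env_file:
      - orthanc.env   # e.g. OPENREM_URL=http://openrem/import/rdsr/
```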
I haven’t created the small repo as suggested yet, but I will. I need to think through how to have the main repo, presumably auto-building docker images and also working outside docker, plus this little repo with just the contents of the zip file.
Added restart policy to orthanc, changed the others to unless-stopped. Refs #793 [skip ci] not ready for testing
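For the record, the relevant compose fragment now looks something like this (service names illustrative):

```yaml
services:
  orthanc:
    restart: unless-stopped   # newly added
  openrem:
    restart: unless-stopped   # was "always"; unless-stopped won't bring back
                              # a container the user deliberately stopped
```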
OpenREM: in the main repo (this one). Need to work out whether to build automatically from the Docker Hub end, based on branch name or tag, or to build based on rules in Bitbucket Pipelines and then push to Docker Hub.
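If we go the Pipelines route, the rules would look roughly like this (a sketch only; the repository variables $DOCKER_HUB_USER and $DOCKER_HUB_PASSWORD are assumptions):

```yaml
# bitbucket-pipelines.yml sketch: build and push an image for each tag
pipelines:
  tags:
    '*':
      - step:
          services:
            - docker
          script:
            - docker build -t openrem/openrem:$BITBUCKET_TAG .
            - docker login -u $DOCKER_HUB_USER -p $DOCKER_HUB_PASSWORD
            - docker push openrem/openrem:$BITBUCKET_TAG
```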