OpenREM installation with Docker

Issue #793 resolved
David Platten created an issue

For instructions and to install the develop branch of OpenREM, see

Comments (91)

  1. Ed McDonagh

    I’d like to get the pynetdicom storescp working to a level that we can drop our dependence on Orthanc. I’d be interested in your view on this, but it would be one less docker image to orchestrate.

    Having said that it might be that Orthanc is the easy part of the Docker challenge!

  2. Ed McDonagh

    Now installing as -e from code rather than wheel, which makes paths a lot easier. Refs #793 [skip ci] Using the beforedist.txt rsync to copy the relevant parts of the repo to the docker/code folder.

    → <<cset 00d10e7b5aae>>

  3. Ed McDonagh

    Right. Might work now if you want to play along…

    • Check out branch issue793reorganisedocker
    • Make sure docker and docker-compose are installed
    • In the root of the checked out repository (same level as docker-compose.yml) use the following command:
    docker-compose up -d --build
    • This should bring up a collection of containers. We then need to do the following to get everything started:
    docker-compose exec openrem python manage.py makemigrations remapp --noinput
    docker-compose exec openrem python manage.py migrate --noinput
    docker-compose exec openrem python manage.py createsuperuser
    docker-compose exec openrem python manage.py collectstatic --noinput --clear
    • You should now be able to go to http://localhost/ and add your new superuser to the groups as normal
    • If you put some files in the imports folder, you can then import them. The command will depend on the name of the folder your repo is checked out into - for me it is bbOpenREM - you will need to change the name of the container to match, as they adopt the folder name. The ./imports folder is /imports in the openrem container. For example:
    docker exec -it bbopenrem_openrem_1 openrem_rdsr.py /imports/CT-RDSR-Siemens_Flash-QA-DS.dcm
    • You should now see that the file has been imported in the web interface; tasks should be working if you export it; RabbitMQ, Celery and Flower should all be working in the tasks page.
    • Log files should be found in ./logs
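
    The first-run steps above can be collected into a small script - a sketch only, assuming it is run from the repo root (same level as docker-compose.yml) and that manage.py is on the path inside the openrem container:

```shell
#!/bin/sh
# Sketch of the first-run sequence: build and start the containers,
# then run the Django setup commands inside the openrem container.
set -e

docker-compose up -d --build

docker-compose exec openrem python manage.py makemigrations remapp --noinput
docker-compose exec openrem python manage.py migrate --noinput
docker-compose exec openrem python manage.py createsuperuser
docker-compose exec openrem python manage.py collectstatic --noinput --clear
```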

    If you do have a play, I’d be very interested to see how you get on.

    Remaining off the top of my head:

    • DICOM Store SCP
    • Creating an image containing all the dependencies to upload to dockerhub
    • Creating an image containing everything to deploy to an end user
    • Lots of other things probably

  4. Ed McDonagh

    Anyone playing along might like to try again - I’d messed up the logs folder for flower.

    Other useful commands:

    • docker-compose down to shutdown the containers, docker-compose down -v to destroy the volumes too (so you’ll have a fresh database, migrations folder, media folder on the next start)
    • docker ps to see running containers, docker ps -a to see all containers. Also docker-compose ps.
    • docker-compose logs -f to see the logs from all the containers whizz by; Ctrl-C to quit.
    • docker logs container_name_1 -f to see the logs from container_name_1, which takes its name from the folder you start in and the container name, so for me the flower logs are in docker logs bbopenrem_flower_1 -f

  5. David Platten reporter

    I’ve imported some RDSRs - that worked perfectly too. I used a wildcard to import lots at once, and that worked.

  6. Ed McDonagh

    Database data is in the postgres_data volume; /var/lib/postgresql/data inside the db container. You can see inside the volumes by using

    docker volumes ps to list them, and

    docker volume inspect volume_name to see where they are on your disk

  7. Ed McDonagh

    I haven’t set up for Toshiba imports yet - missed that off my list of things I need to do.

    I was thinking I need to work out if there should be a separate Java container, or if I have to have Java installed in the main image.

  8. David Platten reporter

    On my Windows system I had to use docker volume ls to see a list of volumes (docker volumes ps doesn’t work).

    This showed that there was a volume called openrem_postgres_data.

    Running docker volume inspect openrem_postgres_data shows me:

        [
            {
                "CreatedAt": "2020-03-05T09:31:02Z",
                "Driver": "local",
                "Labels": {
                    "com.docker.compose.project": "openrem",
                    "com.docker.compose.version": "1.24.1",
                    "com.docker.compose.volume": "postgres_data"
                },
                "Mountpoint": "/var/lib/docker/volumes/openrem_postgres_data/_data",
                "Name": "openrem_postgres_data",
                "Options": null,
                "Scope": "local"
            }
        ]

    However, the above doesn’t show me where this data is stored on my Windows file system.

    I’ve looked into this a bit, and found that my Docker settings are configured to put virtual hard disks on my C:\ drive. However, at this location there is just one very large file (45 GB). I assume that all my Docker stuff is contained within this.

    I think that what we need for the database is a separate local folder that is mounted in the docker container as a volume. It is then easy to back up this local folder, or re-deploy it to another server if required. Or am I missing something?

  9. Ed McDonagh

    Sorry, docker volumes ps was my mistake - docker volume ls is correct.

    Do you think we need access directly to the postgres db folders, or for postgres to have access to a folder (like the imports and logs folders) where backups can be dumped and imported from?

  10. David Platten reporter

    I think that the method doesn’t matter, as long as it’s easy to obtain a backup of the current database, and also easy to restore a database backup to the Dockerised system.
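
    A minimal backup/restore sketch along those lines, assuming the compose service is named db and that the postgres user and database are both called openrem (adjust to match your docker-compose.yml):

```shell
# Dump the database from the db container to a file on the host
docker-compose exec db pg_dump -U openrem openrem > openrem_backup.sql

# Restore the dump into the db container; -T disables the pseudo-TTY
# so that stdin redirection works
docker-compose exec -T db psql -U openrem openrem < openrem_backup.sql
```

    The backup file then lives on the host, so it can be copied off or re-deployed to another server without touching the Docker volume directly.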

  11. Ed McDonagh

    I can’t find a way of running the jodogne/orthanc image and then running the OpenREM scripts in the openrem container. It seems it might be possible to run the docker command from within the container by having a volume that is the docker.sock of the host, but it isn’t clear that this will work on Windows hosts, even if I can make it work!

    So currently it is looking like there will need to be an openrem image that contains everything already included, plus dcmtk and Java and pixelmed and Orthanc. This image is used for the main openrem container, the worker container, the flower container, and as many DICOM store nodes as are required. I’m assuming we can set some environment variables that will dictate which of those containers will have active DICOM Store nodes?
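
    One way to do that - purely illustrative, the variable names below are invented rather than anything OpenREM currently reads - would be per-service environment variables in docker-compose.yml:

```yaml
# Hypothetical fragment: each container built from the all-in-one image
# gets a flag saying whether it should run an active DICOM Store SCP
services:
  openrem:
    environment:
      - DICOM_STORE_ENABLED=false
  store_scp_1:
    environment:
      - DICOM_STORE_ENABLED=true
      - DICOM_STORE_PORT=104
```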

  12. David Platten reporter

    Added restart: always to the components of the yml file. On my Windows computer this causes OpenREM and the associated components to automatically restart when the host computer is rebooted. Confirmed as working on my Windows system using Docker Desktop. References issue #793 [skip ci] as not ready for testing yet.
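
    For reference, the change is just a restart policy on each service in docker-compose.yml (service names here are illustrative):

```yaml
services:
  openrem:
    restart: always
  db:
    restart: always
  orthanc:
    restart: always
```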

    → <<cset 5c8dc9334b81>>

  13. Ed McDonagh

    We can use wget in orthanc to call a Django view to trigger imports. Refs #793 [skip ci] not ready for testing Eg. http://localhost/import/rdsr/?dicom_path=%2Fimports%2FDX-RDSR-Canon_CXDI.dcm successfully causes the referenced RDSR in /imports to be imported!
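
    The %2F sequences in that URL are just URL-encoded slashes; a quick way to build such a trigger URL from a path (a sketch, using sed to encode only the slashes):

```shell
# Encode the slashes in the DICOM file path, then build the trigger URL
path="/imports/DX-RDSR-Canon_CXDI.dcm"
encoded=$(printf '%s' "$path" | sed 's,/,%2F,g')
url="http://localhost/import/rdsr/?dicom_path=$encoded"
echo "$url"
```

    Fetching that URL with wget from inside the Orthanc container then triggers the import, as described above.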

    → <<cset c3c3eacb1460>>

  14. Ed McDonagh

    Works as POST, but need to use CSRF_TRUSTED_ORIGIN rather than @csrf_exempt, and return something other than the homepage. Refs #793 [skip ci] not ready for testing

    → <<cset 63c50b6052ae>>

  15. Ed McDonagh

    Named openrem and orthanc containers. RDSR import from POST works, but can't work out how to properly allow the cross-site without disabling csrf on the view. Refs #793 [skip ci] not ready for testing

    → <<cset a256cc982247>>

  16. Ed McDonagh

    Basic Orthanc setup working. DICOM store RDSR to localhost 4242 is then sent to OpenREM using wget. Need to expand to other extractors, probably replace wget with inbuilt Orthanc HttpPost. Refs #793 [skip ci] not ready for testing

    → <<cset f9eed7f75687>>

  17. Ed McDonagh

    I tried another method, of starting with openjdk:slim then copying in the python:slim container, but that made the image bigger still.

    @David Platten - do you have a dataset to hand to see if the toshiba import works?

  18. Ed McDonagh

    This would be easier to use as a downloadable zip file - but then how would you manage updates? Or a small git repo - but then you’d need git.

    And as it stands, users would have to update the docker compose file with particulars for orthanc if we are allowing users to set up more than one instance.

  19. David Platten reporter

    I think I would favour a small git repository to contain the required files. A user doesn’t necessarily need git to access this: on Bitbucket a user can click on the “Download repository” link to get hold of a zip file containing all the files. Extracting the zip file is much easier than creating a series of files from scratch.

  20. Ed McDonagh

    I had decided on zip, small repo is a good idea I think.

    I am currently working on converting all the settings in the lua script to use environment variables we can set in an env file.

  21. Ed McDonagh

    I’ve updated the Orthanc image so we can feed the Lua script variables in as environment values, and replaced the instructions at with a link to download a zip from bitbucket.

    I haven’t created the small repo as suggested yet, but I will. I need to think through how to have the main repo, presumably auto-building docker images and also working outside docker, plus this little repo with just the contents of the zip file.
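
    As a sketch of what that looks like - the variable name below is illustrative, not necessarily the final one - the Lua script reads its settings from the container environment set in docker-compose.yml:

```yaml
services:
  orthanc:
    environment:
      # Illustrative: the URL the Lua script posts received objects to
      - OPENREM_URL=http://openrem/import/rdsr/
```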

  22. Ed McDonagh

    Enabled setting of lua parameters by env variables. Couldn't get the syntax right for an env file, kept in docker-compose.yml. Refs #793 [skip ci] not ready for testing

    → <<cset 5be36fa2518b>>

  23. Ed McDonagh

    Current status


