OpenREM installation with Docker
For instructions and to install the develop branch of OpenREM, see https://bitbucket.org/openrem/docker/src/develop/
Comments (91)
-
reporter -
I’d like to get the pynetdicom storescp working to a level that we can drop our dependence on Orthanc. I’d be interested in your view on this, but it would be one less docker image to orchestrate.
Having said that it might be that Orthanc is the easy part of the Docker challenge!
-
reporter Using Alpine as a base for the container should result in a smaller footprint:
https://blog.realkinetic.com/building-minimal-docker-containers-for-python-applications-37d0272c52f3
Not sure if the lack of glibc will cause issues with the python packages. If so, maybe https://github.com/sgerrand/alpine-pkg-glibc will be helpful.
-
Don't think Celery needs to be restricted anymore on either platform (in Docker). Refs
#793→ <<cset 4c00f4e3f99e>>
-
Initial sort of working Docker install, lots to do :-) Refs
#793[skip ci] for now as settings.py has been changed.→ <<cset 1dbd4f029e29>>
-
Initial sort of working Docker install, lots to do :-) Refs
#793[skip ci] for now as settings.py has been changed.→ <<cset 093e1d1a3782>>
-
Now installing as -e from code rather than wheel, which makes paths to manage.py a lot easier. Refs
#793[skip ci] Using the beforedist.txt rsync to copy the relevant parts of the repo to the docker/code folder.→ <<cset 00d10e7b5aae>>
-
Updating .gitignore. Refs
#793[skip ci]→ <<cset 223753132dd8>>
-
Starting to build prod files up again after deleting all my work on Friday due to a misplaced *... Refs
#793[skip ci]→ <<cset 6e3d2b30d738>>
-
Moved copy of code into root so manage.py is at the expected level (and we don't have an openrem/openrem path). Refs
#793[skip ci]→ <<cset 74d4beca2e82>>
-
Fixing a couple of mistakes, currently working in a limited fashion. Refs
#793[skip ci]→ <<cset bd8b3975e660>>
-
Using nginx, but static files not currently in the right place. Refs
#793[skip ci]→ <<cset 2afdd647d4d6>>
-
Fixed issues with static files. Refs
#793[skip ci]→ <<cset 37b3d7fde8bb>>
-
Making media volume available to nginx. Refs
#793[skip ci]→ <<cset b5ec096d15e5>>
-
Moved to port 80. Refs
#793[skip ci]→ <<cset d086dac62fa9>>
-
Migrations folder needs to be persistent. Need to bind folder to be able to run openrem_rdsr.py etc on local files. Refs
#793[skip ci]→ <<cset e68257ec5881>>
-
RabbitMQ management status indicator now working. Refs
#793[skip ci]→ <<cset a54ec168b9ac>>
-
Celery now connected, but media paths not working correctly. Refs
#793[skip ci]→ <<cset 1b6388119b72>>
-
Adding pycache to distexclude. Refs
#793[skip ci]→ <<cset ff54d78f2e31>>
-
Celery now working, but files not saved to media yet. Refs
#793[skip ci] not ready for testing→ <<cset 74be5a2bbcdd>>
-
Exports working. Refs
#793[skip ci] not ready for testing→ <<cset 5986f19c5639>>
-
Added flower. Refs
#793[skip ci] not ready for testing→ <<cset a050b4fdf1cf>>
-
Reorganised structure so rsync no longer needed. Need to add copying media and wsgi. Refs
#793[skip ci] not ready for testing→ <<cset 570ea830b7d2>>
-
Removed .prod so that we can use defaults. Refs
#793[skip ci] not ready for testing→ <<cset 11eff71b64a3>>
-
Added moving 0002 and wsgi to Dockerfile, and updated dockerignore. Refs
#793[skip ci] not ready for testing→ <<cset ecf960abb6f8>>
-
Changed scratch to imports, added folder. Added logs folder. Refs
#793[skip ci] not ready for testing→ <<cset bdb044ccfa4c>>
-
Right. Might work now if you want to play along…
- Check out branch issue793reorganisedocker
- Make sure docker and docker-compose are installed
- In the root of the checked out repository (same level as docker-compose.yml) use the following command:
  docker-compose up -d --build
- This should bring up a collection of containers. We then need to do the following to get everything started:
  docker-compose exec openrem python manage.py makemigrations remapp --noinput
  docker-compose exec openrem python manage.py migrate --noinput
  docker-compose exec openrem python manage.py createsuperuser
  docker-compose exec openrem python manage.py collectstatic --noinput --clear
- You should now be able to go to http://localhost/ and add your new superuser to the groups as normal
- If you put some files in the imports folder, we can then import them. The command will depend on the name of the folder your repo is checked out into - for me it is bbOpenREM - you will need to change the name of the container to match, as they adopt the folder name. The ./imports folder is /imports in the openrem container. For example:
  docker exec -it bbopenrem_openrem_1 openrem_rdsr.py /imports/CT-RDSR-Siemens_Flash-QA-DS.dcm
- You should now see that the file has been imported in the web interface; tasks should be working if you export it; RabbitMQ, Celery and Flower should all be working in the tasks page.
- Log files should be found in ./logs
If you do have a play, I’d be very interested to see how you get on.
Remaining off the top of my head:
- DICOM Store SCP
- Creating an image containing all the dependencies to upload to dockerhub
- Creating an image containing everything to deploy to an end user
- Lots of other things probably
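For anyone wanting a feel for what docker-compose up -d --build is orchestrating before checking out the branch, a trimmed sketch of the sort of compose file described in this thread might look like the following. Service names, image tags, paths and ports here are illustrative assumptions - the docker-compose.yml in the repository is authoritative:

```yaml
# Illustrative sketch only - not the repo's actual docker-compose.yml
version: "3"
services:
  db:
    image: postgres:12-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data   # named volume for the database
  rabbitmq:
    image: rabbitmq:3-management
  openrem:
    build: .
    depends_on: [db, rabbitmq]
    volumes:
      - media_volume:/home/app/openrem/media     # shared with nginx
      - ./imports:/imports                       # host folder for files to import
      - ./logs:/logs
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"                                  # site served on port 80
    depends_on: [openrem]
volumes:
  postgres_data:
  media_volume: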
-
Pointing Celery logs and pid to logs folder. Refs
#793[skip ci] not ready for testing→ <<cset 049eab502ae6>>
-
Anyone playing along might like to try again - I’d messed up the logs folder for flower.
Other useful commands:
- docker-compose down to shut down the containers; docker-compose down -v to destroy the volumes too (so you’ll have a fresh database, migrations folder and media folder on the next start)
- docker ps to see running containers; docker ps -a to see all containers. Also docker-compose ps.
- docker-compose logs -f to see the logs from all the containers whizz by; Ctrl-C to quit.
- docker logs container_name_1 -f to see the logs from container_name_1, which takes its name from the folder you start in and the container name, so for me the flower logs are in docker logs bbopenrem_flower_1 -f
-
Giving flower access to the logs folder. Refs
#793[skip ci] not ready for testing→ <<cset cfa078dc6ad6>>
-
reporter Perfect - it works now (on Windows using Docker Desktop).
-
reporter I’ve imported some RDSRs - that worked perfectly too. I used a wildcard to import lots at once, and that worked.
-
reporter @Ed McDonagh where is the database data being kept?
-
reporter Is it set up to work with the legacy Toshiba import?
-
Database data is in the postgres_data volume; /var/lib/postgresql/data inside the db container. You can see inside the volumes by using docker volumes ps to list them, and docker volume inspect volume_name to see where they are on your disk.
I haven’t set up for Toshiba imports yet - missed that off my list of things I need to do.
I was thinking I need to work out if there should be a separate Java container, or if I have to have Java installed in the main image.
-
reporter On my Windows system I had to use docker volume ls to see a list of volumes (docker volumes ps doesn’t work).
This showed that there was a volume called openrem_postgres_data.
Running docker volume inspect openrem_postgres_data shows me:
[
    {
        "CreatedAt": "2020-03-05T09:31:02Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "openrem",
            "com.docker.compose.version": "1.24.1",
            "com.docker.compose.volume": "postgres_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/openrem_postgres_data/_data",
        "Name": "openrem_postgres_data",
        "Options": null,
        "Scope": "local"
    }
]
However, the above doesn’t show me where this data is stored on my Windows file system.
I’ve looked into this a bit (https://stackoverflow.com/questions/43181654/locating-data-volumes-in-docker-desktop-windows), and found that my Docker settings are configured to put virtual hard disks on my C:\ drive. However, at this location there is just one very large file (45 GB). I assume that within this there is all my Docker stuff.
I think that what we need for the database is a separate local folder that is mounted in the docker container as a volume. It is then easy to back up this local folder, or re-deploy it to another server if required. Or am I missing something?
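The separate-local-folder idea would only be a small change in the compose file: a bind mount to a host directory instead of a named volume. A sketch, with an assumed host path - note that on Docker Desktop for Windows the drive has to be shared with Docker for this to work:

```yaml
# Sketch only: bind-mounting a host folder for the database data
services:
  db:
    image: postgres:12-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data   # host folder, easy to back up
```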
-
Sorry, ps was my mistake - it should have been ls.
Do you think we need access directly to the postgres db folders, or for postgres to have access to a folder (like the imports and logs folders) where backups can be dumped and imported from?
-
reporter I think that the method doesn’t matter, as long as it’s easy to obtain a backup of the current database, and also easy to restore a database backup to the Dockerised system.
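As a sketch of the dump-and-restore approach being discussed - the service, user and database names here are assumptions, so check docker-compose.yml for the real ones:

```shell
# Dump the database out of the running db container to a host file
docker-compose exec db pg_dump -U openremuser -d openremdb > backup.sql

# Restore the dump into a fresh database (-T avoids a TTY for piped input)
cat backup.sql | docker-compose exec -T db psql -U openremuser -d openremdb
```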
-
I can’t find a way of running the jodogne/orthanc image and then running the OpenREM scripts in the openrem container. It seems it might be possible to run the docker command from within the container by having a volume that is the docker.sock of the host, but it isn’t clear that this will work on Windows hosts, even if I can make it work!
So currently it is looking like there will need to be an openrem image that contains everything already included, plus dcmtk and Java and pixelmed and Orthanc. This image is used for the main openrem container, the worker container, the flower container, and as many DICOM store nodes as are required. I’m assuming we can set some environment variables that will dictate which of those containers will have active DICOM Store nodes?
-
reporter Added restart: always to the components of the yml file. On my Windows computer this causes OpenREM and the associated components to automatically restart when the host computer is rebooted. Confirmed as working on my Windows system using Docker Desktop. References issue
#793[skip ci] as not ready for testing yet.→ <<cset 5c8dc9334b81>>
-
Changes to reduce the size of the image. Added orthanc to start working out how to make use of it! Refs
#793[skip ci] not ready for testing→ <<cset ed1ac09bf430>>
-
We can use wget in orthanc to call a Django view to trigger imports. Refs
#793[skip ci] not ready for testing Eg. http://localhost/import/rdsr/?dicom_path=%2Fimports%2FDX-RDSR-Canon_CXDI.dcm successfully causes the referenced RDSR in /imports to be imported!→ <<cset c3c3eacb1460>>
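The %2F sequences in that example URL are just URL-encoded slashes - on the Django side the path comes back out of the query string already decoded. A quick standard-library illustration (the URL matches the example above, but the parsing shown is a sketch, not OpenREM's actual view code):

```python
from urllib.parse import parse_qs, urlsplit

url = "http://localhost/import/rdsr/?dicom_path=%2Fimports%2FDX-RDSR-Canon_CXDI.dcm"

# parse_qs percent-decodes the values, so the slashes come back
dicom_path = parse_qs(urlsplit(url).query)["dicom_path"][0]
print(dicom_path)  # /imports/DX-RDSR-Canon_CXDI.dcm
```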
-
Need to use POST rather than GET, so currently working out how to handle the csrf token.
-
Works as POST, but need to use CSRF_TRUSTED_ORIGIN rather than @csrf_exempt, and return something other than the homepage. Refs
#793[skip ci] not ready for testing→ <<cset 63c50b6052ae>>
-
Named openrem and orthanc containers. RDSR import from POST works, but can't work out how to properly allow the cross-site without disabling csrf on the view. Refs
#793[skip ci] not ready for testing→ <<cset a256cc982247>>
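For reference, the Django setting being discussed is CSRF_TRUSTED_ORIGINS (plural). A sketch of the relevant settings.py fragment - the hostname is an assumption based on the container name used in this thread, not the repo's actual settings:

```python
# settings.py sketch (illustrative): trust POSTs coming from the orthanc
# container rather than exempting the import view from CSRF protection.
# Django 2.x takes bare hostnames here; Django 4+ requires a scheme.
CSRF_TRUSTED_ORIGINS = ["orthanc"]
```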
-
Basic Orthanc setup working. DICOM store RDSR to localhost 4242 is then sent to OpenREM using wget. Need to expand to other extractors, probably replace wget with inbuilt Orthanc HttpPost. Refs
#793[skip ci] not ready for testing→ <<cset f9eed7f75687>>
-
Now using Orthanc Lua instead of wget. Took ages to work it out! Refs
#793[skip ci] not ready for testing→ <<cset 4b9ae1bfeeb4>>
-
- changed milestone to 1.0.0
-
assigned issue to
-
Can now send any DICOM objects to Orthanc to import except Toshiba/Java. Changed Orthanc port to 104. Refs
#793[skip ci] not ready for testing→ <<cset 0b69b1622606>>
-
Now includes JRE, pixelmed and dcmtk. Toshiba function not tested. Image now much larger (976 MB). Refs
#793[skip ci] not ready for testing→ <<cset bd53931d98d6>>
-
Was 525 MB I think, so significant increase.
-
Adding correct call hopefully for ct_toshiba. Refs
#793[skip ci] not ready for testing→ <<cset 53e74956b5e5>>
-
I tried another method, of starting with openjdk:slim then copying in the python:slim container, but that made the image bigger still.
@David Platten - do you have a dataset to hand to see if the toshiba import works?
-
Using the osimis orthanc so we can use environment variables to set AET etc. Refs
#793[skip ci] not ready for testing→ <<cset 5a7d70f85e50>>
-
Removed commented line. Refs
#793[skip ci] not ready for testing→ <<cset afc96253ceaf>>
-
If anyone gets a chance, take a look at following the instructions at https://hub.docker.com/r/openrem/openrem
Should enable you to get started without checking out the repo.
-
This would be easier to use as a downloadable zip file - but then how would you manage updates? Or a small git repo - but then you’d need git.
And as it stands, users would have to update the docker compose file with particulars for orthanc if we are allowing users to set up more than one instance.
-
reporter I think I would favour a small git repository to contain the required files. A user doesn’t necessarily need git to access this: on Bitbucket a user can click on the “Download repository” link to get hold of a zip file containing all the files. Extracting the zip file is much easier than creating a series of files from scratch.
-
I had decided on zip, small repo is a good idea I think.
I am currently working on converting all the settings in the lua script to use environment variables we can set in an env file.
-
I’ve updated the Orthanc image so we can feed the Lua script variables in as environment values, and replaced the instructions at https://hub.docker.com/r/openrem/openrem with a link to download a zip from bitbucket.
I haven’t created the small repo as suggested yet, but I will. I need to think through how to have the main repo, presumably auto-building docker images and also working outside docker, plus this little repo with just the contents of the zip file.
-
Added restart policy to orthanc, changed the others to unless-stopped. Refs
#793[skip ci] not ready for testing→ <<cset ad95b0ed3e53>>
-
Enabled setting of lua parameters by env variables. Couldn't get the syntax right for an env file, kept in docker-compose.yml. Refs
#793[skip ci] not ready for testing→ <<cset 5be36fa2518b>>
-
Removed all the files that are now in openrem/nginx, openrem/orthanc and openrem/docker. Refs
#793[skip ci] not ready for testing→ <<cset a6e0cc369fda>>
-
Current status
Images:
- Orthanc: in https://bitbucket.org/openrem/orthanc, commits to that repo on the master branch automatically trigger a build of the image on https://hub.docker.com/r/openrem/orthanc
- Nginx: in https://bitbucket.org/openrem/nginx, commits to that repo on the master branch automatically trigger a build of the image on https://hub.docker.com/r/openrem/nginx
- OpenREM: In the main repo (this one). Need to work out whether to automatically build from the Docker Hub end, based on branch name or tag, or to build based on rules in Bitbucket pipelines which are then pushed to Docker Hub.
Docker-compose:
- In repo https://bitbucket.org/openrem/docker/ with readme for instructions on installation
-
Changes to settings to allow testing to work again. Minor changes to ct_toshiba and import_views. Refs
#793→ <<cset 14e48b7a6ca0>>
-
Adding a docker manual step. Should apply to any branch ending in docker (for now). Refs
#793→ <<cset 1eda94ef19a4>>
-
Missed the dot! Refs
#793→ <<cset 0cdb9a04671c>>
-
So I think this page gives me what I need to set up a docker image workflow for OpenREM: https://think-engineer.com/blog/devops/bitbucket-pipelines-building-publishing-and-re-tagging-docker-images-in-a-ci-cd-workflow
I can put something similar together for this repo, then I need to work out how you do migrations and upgrades…
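Based on that blog post, the pipeline addition could look roughly like this. The step name, repository variables and tag scheme are illustrative assumptions - the real bitbucket-pipelines.yml will differ:

```yaml
# Illustrative bitbucket-pipelines.yml fragment for building and pushing
# the image; $DOCKER_USERNAME / $DOCKER_PASSWORD would be set as secured
# repository variables in Bitbucket
pipelines:
  branches:
    master:
      - step:
          name: Build and push Docker image
          services:
            - docker
          script:
            - docker build -t openrem/openrem:$BITBUCKET_COMMIT .
            - echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
            - docker push openrem/openrem:$BITBUCKET_COMMIT
```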
-
Adding in deploy to docker pipeline workflow. Refs
#793→ <<cset 253b3c82b8ec>>
-
Added pip cache to standard testing and docker cache to docker builds. Refs
#793→ <<cset a1c10cef4a46>>
-
Adding migration file for 0.10 to 1.0. Refs
#793→ <<cset f1d278bd6d9d>>
-
Making 0.10 to 1.0 migration file available. Refs
#793→ <<cset dc37fe124cf8>>
-
Adding additional tag of branch name without commit to make it possible to refer to latest in branch for docker-compose. Refs
#793→ <<cset b9c1061a8c61>>
-
Addressing codacy issues. Should make a smaller image too. Refs
#793→ <<cset e55a80afdb5d>>
-
Setting media and static URLs to match previous. Refs
#793→ <<cset 2eff2eab7b14>>
-
Setting database env variables to match postgres ones we need anyway. Refs
#793→ <<cset 54cab87e651c>>
-
Setting specific versions of pypi libraries for stability. Attempt to install pynetdicom from git as using 1.5 pre-release features. Refs
#793→ <<cset e1202d6beef8>>
-
Changing setup to make use of the requirements.txt. Refs
#793→ <<cset 0340d061ad6c>>
-
Attempt to handle pre-release pynetdicom in tox and Docker. Also hopefully pipeline python will now cache if tox uses same packages? Refs
#793→ <<cset 915dd81c2331>>
-
Didn't work, trying tarball. Refs
#793→ <<cset 9a520a247aaf>>
-
Didn't work, try again. Refs
#793→ <<cset e568f382f0ba>>
-
Works for me locally, let's try pipelines... Refs
#793→ <<cset fee48df19564>>
-
Commented pynetdicom out of requirements for now. Added to install docs (to be updated for docker). Refs
#793→ <<cset 08a1de7c78a9>>
-
Need to install pynetdicom within tox. Refs
#793→ <<cset 09afb9401c1c>>
-
In Docker needs to install without git. Refs
#793→ <<cset f75f5b7f4079>>
-
Hopefully clearing Codacy problems. Refs
#793→ <<cset 7a2a04df81d8>>
-
- changed status to resolved
Merged in issue793Finishingdocker (pull request #361)
Fixes issue
#793, basic Docker setup is now available. Documentation is needed along with testing, especially around upgrades from different versions of Postgres on Windows and Linux→ <<cset fba5fba529cb>>
-
- edited description
-
Added ref #793 and #821 to changes.→ <<cset 10c97d2606d9>>
-
See / edit / add to this new wiki page: https://bitbucket.org/openrem/openrem/wiki/OpenREM%20and%20Docker