I know I need to include 127.0.0.1, but I don’t know what the [::1] means, and I don’t know if I have to leave the existing openrem and nginx there, or replace them with something else.
[::1] is localhost in IPv6.
Having more than you need in there is fine. I agree that the text could be more helpful. What you need (I think) is the server name that clients will be using, but I need to check that it doesn’t also need the container name for the in-Docker networking.
I’ll do some testing.
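For anyone following along: the setting being discussed lives in the environment file, and spare entries are harmless. A sketch of what it might look like - the variable name, separator and server name here are assumptions, so check them against your own .env file:

```
# .env sketch - variable name and host list are assumptions;
# extra entries beyond what clients actually use do no harm
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1] openrem nginx myservername
```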
Running docker-compose up -d results in the following error on my Windows 10 Pro laptop:
D:\docker\OpenREM1.0dev>docker-compose up -d
openrem10dev_broker_1 is up-to-date
openrem-orthanc-1 is up-to-date
openrem-db is up-to-date
openrem10dev_worker_1 is up-to-date
openrem is up-to-date
openrem10dev_flower_1 is up-to-date
Starting openrem-nginx ...
Starting openrem-nginx ... error
ERROR: for openrem-nginx Cannot start service nginx: driver failed programming external connectivity on endpoint openrem-nginx (7df2d279e87d0ba75a56620ca149709c786d3c6afd9738213181520800f1b91f): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
ERROR: for nginx Cannot start service nginx: driver failed programming external connectivity on endpoint openrem-nginx (7df2d279e87d0ba75a56620ca149709c786d3c6afd9738213181520800f1b91f): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
ERROR: Encountered errors while bringing up the project.
I’ve cleaned out my other Docker containers and images, and still get this error.
What happens if you change it to a high port - in docker-compose.yml in the nginx section change the ports from 80:80 to 8080:80
Did it let you bring Orthanc up on port 104? In which case I wonder if you are running IIS or something, and the port is already in use. There’ll be some sort of netstat command you can run to see.
I’ve run netstat (netstat -n -a -o) and found the PID of the process on port 80, but it’s not IIS. It’s ntoskrnl.exe, located in C:\Windows\System32 - a system process that I can’t track down any further.
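In case it helps others, this is roughly the procedure on Windows (the PID in the second command is a placeholder to replace with whatever the first command reports). When the System process itself holds port 80 it is often the http.sys kernel driver listening on behalf of a Windows service:

```
rem Find the PID listening on port 80, then identify the owning process
rem (replace 1234 with the PID from the first command's output)
netstat -ano | findstr :80
tasklist /FI "PID eq 1234"
```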
Changing this bit of the docker-compose.yml from

ports:
  - 80:80

to

ports:
  - 8080:80
And presumably as it was a clash rather than a high-port/low-port thing, it might work on 81 or similar too?
It is a clash, so would work on 81 or similar.
I’m now trying to upgrade using a 0.10.0 backup database.
The command docker cp /path/to/openremdump.bak db_backup/
@Ed McDonagh my Orthanc container won’t start. How do I trouble-shoot it?
PS D:\docker\OpenREM1.0dev> docker container list
CONTAINER ID   IMAGE                          COMMAND                  CREATED      STATUS                            PORTS                                                                  NAMES
158b449c2f5d   nginx:1.17.8-alpine            "nginx -g 'daemon of…"   2 days ago   Up 45 hours                       0.0.0.0:8080->80/tcp                                                   openrem-nginx
af13d80272e1   openrem/openrem:develop        "/home/app/openrem/e…"   3 days ago   Up 45 hours                       8000/tcp                                                               openrem
784f83f47c3f   openrem/openrem:develop        "/home/app/openrem/e…"   3 days ago   Up 45 hours                       5555/tcp                                                               openrem10dev_flower_1
975c6a1415d2   openrem/openrem:develop        "/home/app/openrem/e…"   3 days ago   Up 45 hours                                                                                              openrem10dev_worker_1
b69695875b0b   openrem/orthanc                "/bin/bash -c 'apt-g…"   3 days ago   Restarting (100) 19 seconds ago                                                                          openrem-orthanc-1
5a74ce55d787   postgres:12.0-alpine           "docker-entrypoint.s…"   3 days ago   Up 45 hours                       5432/tcp                                                               openrem-db
70aca4af0b1c   rabbitmq:3-management-alpine   "docker-entrypoint.s…"   3 days ago   Up 45 hours                       4369/tcp, 5671-5672/tcp, 15671-15672/tcp, 15691-15692/tcp, 25672/tcp   openrem10dev_broker_1
PS D:\docker\OpenREM1.0dev>
Get logs for the orthanc container with docker-compose logs -f orthanc_1 and see if that gives you a clue.
The name we use is that of the service we define in the docker-compose.yml file, so orthanc_1.
Swapping in the vanilla osimis/orthanc image in the orthanc_1 section of docker-compose.yml has made it work for me. It must be related to the package update and installation in the openrem/orthanc container. However, the logs indicate that the updating had worked.
Actually, the Dockerfile contents for openrem/orthanc are:

FROM osimis/orthanc:master
RUN apt-get update && apt-get -y install zip unzip
It doesn’t actually tell Orthanc to run, it just runs an apt update and install.
Should there be a CMD in there at the end to specify which command to run within the container?
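For what it’s worth, a derived image inherits the ENTRYPOINT and CMD of its base image unless it overrides them, so a Dockerfile like this should still start Orthanc via whatever osimis/orthanc defines - a sketch:

```dockerfile
# The RUN layer only adds packages; ENTRYPOINT/CMD are inherited
# from osimis/orthanc, so no CMD should be needed for Orthanc to run
FROM osimis/orthanc:master
RUN apt-get update && apt-get -y install zip unzip
```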
Assuming your openrem/orthanc image wasn’t old, it shouldn’t have made any difference. Can you see how old that image is (docker images) - and if it is more than 14 days old can you pull a new one and try it again? (docker pull openrem/orthanc)
The Dockerfile simply adds zip and unzip to the main release. I’m considering removing the functions so it isn’t necessary!
Hi David. Can you pull latest in again, and see if it now works for you? (docker pull openrem/orthanc:latest)
I have triggered a new build, and presumably the Osimis image has been updated because it only used the first two layers from cache. I have no idea what went wrong with the last one, obviously something did!
It now works for me again, specifying openrem/orthanc:latest in the docker-compose.yml file.
This works for me now - thanks.
I’ve set up a test Docker-based OpenREM system on a new server running Windows Server 2019.
At the moment I have installed Docker Desktop.
I’ve created a scheduled task to start Docker Desktop on boot. At the moment this is running as my local user - I need to get it running using the SYSTEM account.
When the system reboots OpenREM comes back to life. However, Celery does not. This is because the celery.pid file still exists in the logs folder.
Is there scope to delete celery.pid as part of the start up of OpenREM?
I have written a basic batch file that deletes logs\celery.pid before Docker Desktop is started. Hopefully this will fix the problem for me.
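As an illustration only - the folder and Docker Desktop paths here are assumptions, not taken from this thread - such a startup batch file might look like:

```
rem Sketch: clear the stale Celery pid file, then launch Docker Desktop.
rem Both paths are assumptions - adjust to your installation.
del /Q D:\docker\OpenREM1.0dev\logs\celery.pid
start "" "C:\Program Files\Docker\Docker\Docker Desktop.exe"
```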
I thought I had dealt with that, but I obviously haven’t.
Can you disable the batch file that deletes the pid file, and modify the docker-compose.yml file as follows:
worker:
  command: celery worker -A openremproject -Q default --logfile=/logs/celery.log
i.e. to remove the --pidfile flag.
Then try again
@Ed McDonagh that worked - thanks. I just have to work out how to get Docker to launch at boot as the SYSTEM user rather than as me (my password will change at some point).
I also think that I should be using Docker Enterprise instead of Docker Desktop - that’s probably the answer.
For my Windows Server 2019 Docker installation, port 80 worked for the webserver.
On Linux there’s a permissions issue once you’ve extracted the develop.zip file. When I tried the docker-compose up -d command, several of the services kept restarting. After I made some very generous changes to the permissions of the folder and all files in it, docker-compose up -d worked OK. I’m not sure what the exact permission requirements are.
Where did you extract the files to?
I created a new Ubuntu user called openrem and added it to the docker and sudo groups (I had to create the docker group first). I then logged in as this user and downloaded and extracted the develop.zip file into the openrem user's home directory.
I perhaps should add that I installed Docker during the Ubuntu Server installation process as a snap.
That is probably relevant, as I don’t think I’ve seen this before. But snap changes everything!
I just thought that using the snap was the easiest way of installing Docker, especially when it was being offered as part of the operating system process.
Absolutely. And we should definitely try it. But we should expect some permission issues etc as it is effectively a locked down container itself!
If I add a user: root line in each service then I don't have any permissions errors when running the snap-based Docker installation. Extract below:
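The extract didn’t survive in this log; the change being described would presumably look something like this fragment of docker-compose.yml (repeated for each service - only two shown here as an example):

```yaml
# Sketch only: add "user: root" under each service definition
openrem:
  user: root
worker:
  user: root
```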
I am running into permission issues with this too when running the docker-compose exec openrem python manage.py migrate remapp --fake command.
The “app” user doesn’t have permission to write files to the logs folder on the host.
Is this with a fresh folder - you don’t have any of the previous attempts’ files or permissions in there?
Yes, a fresh folder - I deleted the old one entirely, and then unzipped the develop.zip file again.
The permissions issue is specific to the logs folder. Allowing write access to all users from the host (sudo chmod 777 ./logs) fixes the problem for the snap and apt versions of Docker.
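A less permissive alternative, assuming the container-side app user maps to UID 1001 on the host (an assumption based on the directory listings in this thread), might be to hand the folder to that UID rather than opening it to everyone:

```shell
# Sketch: give the logs folder to the UID the container writes as,
# instead of chmod 777. UID 1001 is an assumption from the listings.
sudo chown -R 1001:1001 ./logs
```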
And which user owns the files on the Ubuntu side (and which on the Docker container side)?
On the host side:
openrem@openremubuntu:~/openrem-docker-6dda9460edd3$ ls -al ./logs/
total 12
drwxrwxrwx 2 openrem  openrem  4096 Jan 29 10:35 .
drwxrwxr-x 7 openrem  openrem  4096 Jan 29 10:23 ..
-rw-r--r-- 1 dplatten dplatten  708 Jan 29 10:23 celery.log
-rw-rw-r-- 1 openrem  openrem     0 Jan 15 22:55 .gitkeep
-rw-r--r-- 1 dplatten dplatten    0 Jan 29 10:19 openrem_extractor.log
-rw-r--r-- 1 dplatten dplatten    0 Jan 29 10:19 openrem.log
-rw-r--r-- 1 dplatten dplatten    0 Jan 29 10:19 openrem_qr.log
-rw-r--r-- 1 dplatten dplatten    0 Jan 29 10:19 openrem_store.log
-rw-r--r-- 1 dplatten dplatten    0 Jan 29 10:35 testing
On the container side:
openrem@openremubuntu:~/openrem-docker-6dda9460edd3$ docker-compose exec openrem ls -al /logs
total 12
drwxrwxrwx 2 1001 1001 4096 Jan 29 10:35 .
drwxr-xr-x 1 root root 4096 Jan 29 10:23 ..
-rw-rw-r-- 1 1001 1001    0 Jan 15 22:55 .gitkeep
-rw-r--r-- 1 app  app   708 Jan 29 10:23 celery.log
-rw-r--r-- 1 app  app     0 Jan 29 10:19 openrem.log
-rw-r--r-- 1 app  app     0 Jan 29 10:19 openrem_extractor.log
-rw-r--r-- 1 app  app     0 Jan 29 10:19 openrem_qr.log
-rw-r--r-- 1 app  app     0 Jan 29 10:19 openrem_store.log
-rw-r--r-- 1 app  app     0 Jan 29 10:35 testing
Hmm. I was expecting the host side to be owned by the openrem user.
So that is the same as mine now (apart from the openrem user). Was the 777 required? Or was that because the folder is owned by openrem and the files are written by dplatten?
I had to set 777 for the logs folder for it to work. I don’t understand why dplatten is involved. I may reset things and try again.
I expect the user that runs `docker-compose up` will own the files
I’ve just purged all of my containers and volumes, deleted the unzipped folder and started again.
All using the openrem user.
I had to use sudo chmod 777 ./logs again - after the initial docker-compose up -d several of the containers were in a restarting loop.
Once I’d made the chmod change the containers all came up.
However, on the host the log files are all owned by dplatten, not by openrem.
@Ed McDonagh does bringing the logs, db_backup etc folders into the containers require a rebuild of the docker images?
Are you referring to implementing the decision we made to not use bind mounts?
I’ve not worked it all through yet. For containers like nginx, using bind mounts is really useful to add configurations to vanilla upstream images. And for database backups and logs the tools aren’t quite as nice as I’d hoped - there is a cp function in either direction, which I guess could work for the config too, but no rm or similar.
So we might do something like:
Create db backups as before (we are referencing the container internal path as before, but this time it would be in a volume):
docker-compose -f /path/to/docker-compose.yml exec db pg_dump -U openrem_user -d openrem_prod -F c -f /db_backup/openremdump-"$TODAY"
But then you copy it out using
docker cp openrem-db:/db_backup/openremdump-"$TODAY" .
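Stitched together as a runnable sketch - the date format for TODAY is an assumption, and note that docker cp wants an exact path rather than a wildcard:

```shell
#!/bin/bash
# Sketch: back up the OpenREM database from the db container, then
# copy the dump out to the host. User, database and paths are taken
# from earlier in this thread; the TODAY format is an assumption.
TODAY=$(date +%Y-%m-%d)
docker-compose -f /path/to/docker-compose.yml exec db \
    pg_dump -U openrem_user -d openrem_prod -F c \
    -f /db_backup/openremdump-"$TODAY"
docker cp openrem-db:/db_backup/openremdump-"$TODAY" .
```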
Looking at the logs might be:
docker-compose exec openrem ls -lrth /logs
docker-compose exec openrem less /logs/openrem_qr.log
I haven’t tried these, so I’ve probably got the syntax wrong. Just wanted to get this message written before going out for a walk - this is the third attempt I’ve had to get this written - keep getting interrupted and losing the text!
Initial pass at installing or upgrading to docker with no bind volumes and no secrets file. Refs #823 [skip ci] docs only
I haven’t had a chance to test it, so expect mistakes. If you do, let me know!
Hi @David Platten don’t try it! Or if you do, expect it to fail!
Having the Orthanc scripts folder as a non-bind volume is a bit of a pain it turns out. Because the Lua script is referenced in the Orthanc config, the container falls over on startup. But we can’t copy the script in until the container is up.
Or we could use bind folders for things that the containers will read but not write - so Orthanc and Nginx configs in bind mounts, and things that the container will write such as logs, we use a non-bind volume.
What do you think @David Platten ?
I agree @Ed McDonagh - I think that using bind folders where the contents will be read is a good idea. The folders that need to be written to by containers can be kept inside the container.
Restoring config files to bind mounts so they don't need to be copied in. Refs #823 [skip ci] docs only
You are right. For docker-compose we have to use the service name defined in docker-compose.yml (db in this example), whereas for docker we need to use the container name, (openrem-db here).
What is troubling me is that I don’t know how all the containers get the prefix openrem- - will that always be the case?
Some of the containers are named with a prefix of the folder name (like openrem-docker-f4b53e8154b1_broker_1) but others aren’t, and I don’t know why. When I tried to start a second folder’s docker-compose, the broker container came up because it has the folder name in its name, but the others didn’t because of the name conflict.
We could possibly have the instruction as you had, but have a note that if it doesn’t work to check the name of the container?
I’ve only seen the db container as openrem-db. It’s only the worker and broker that I’ve seen with the extra characters in their container names.
On another note, I rebooted my virtualbox-based Ubuntu install with Docker-based OpenREM still running to see what would happen. The system wouldn’t shutdown as it was waiting for postgres to shutdown, but postgres was steadfastly staying put. I think postgres was waiting for all database connections to timeout.
Maybe the solution to this is to suggest to users that they include a script that is run at shutdown or reboot that runs docker-compose down, and also have docker-compose up -d run at boot?
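On Ubuntu, one way to wire that up would be a small systemd unit rather than login scripts - a sketch only, where the WorkingDirectory and the docker-compose binary path are assumptions to adjust for your install (the snap layout in particular will differ):

```ini
# /etc/systemd/system/openrem-docker.service (sketch; paths are assumptions)
[Unit]
Description=OpenREM docker-compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/openrem/openrem-docker
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down
TimeoutStopSec=120

[Install]
WantedBy=multi-user.target
```

After creating it, sudo systemctl enable openrem-docker would run docker-compose up -d at boot and docker-compose down at shutdown.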
Hmm. Something to look out for. I wonder if that is generally the case?
Re the “slug” in container names: using “container_name” in the docker-compose file sorts it:
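The snippet that followed didn’t make it into this log; the change would look something like this in docker-compose.yml (the container name shown is just an example):

```yaml
# Sketch: pinning container_name removes the folder-name "slug",
# but note it also prevents scaling that service
worker:
  container_name: openrem-worker
```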
The worker, flower and broker don’t have specific container_names, hence the “slug”.
Of course - how did I miss that! I’ll add them in next time I’m changing it, to tidy up. And I’ll test that you can still scale up the worker when it has a container name.
Re Postgres blocking the reboot: I forced the virtual machine off, and then restarted it. OpenREM didn’t come back up on reboot. A docker-compose up -d gave an error. However, a docker-compose down followed by a docker-compose up -d worked.
Adding a container name prevents scaling of containers. I think the only container you would scale is the worker container - would you agree?
I presume if you knew what you were doing there may be a case for scaling the database, but it is not simple and if you wanted to do that you’d probably be ok adapting the instructions to do so!
This would leave just the worker with a name prefixed with the folder name.