#215 Merged at ef24d9b
  1. Ed McDonagh

A small change (refs #651) got me talking myself into creating a new doc for installing on Ubuntu!

I’ve created a new OpenREM server from scratch at my institution, and these docs are from my notes on that install. They differ in that they assume a new database rather than migrating as I did.

This is the first install I have done with Orthanc, and the first with properly daemonised Celery and Gunicorn using systemd.

I also needed to work out how to have the different programs writing to the same log files without everything breaking each time a new log was created.
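
One common way to let several daemons keep writing to a shared log across rotations is logrotate's copytruncate mode, which copies the live file and truncates it in place so no process has to reopen its file handle. A minimal sketch, assuming logs live under /var/dose/log (the path and schedule are illustrative, not necessarily what these docs use):

```
# /etc/logrotate.d/openrem - hypothetical example; adjust the glob to
# wherever your OpenREM logs live.
/var/dose/log/*.log {
    weekly
    rotate 26
    compress
    # copy the live log then truncate it in place, so the various
    # daemons can keep writing to the same open file handle
    copytruncate
}
```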

If anyone is able to test them, I’d be very grateful - let me know if you can. Otherwise, I’ll roll it in for the beta 2 release.

Docs are at

Code Quality

Comments (19)

  1. Ed McDonagh author

    I should add that on a fresh system this will fail at the pip install openrem step due to issue ref #656.

    If you add an extra step before installing openrem it will be fine:

    pip install django-debug-toolbar==1.9.1

    This will be fixed in 0.8.1b2

  2. David Platten

    I’ve started to run through the document on an Ubuntu 16.04.5 LTS system. When I run the pynetdicom install command I get the following, which has two “Failed” blocks, and says it both “Successfully built” and “Failed to build” pynetdicom… I haven’t gone any further with the instructions at the moment.

    (veopenrem) dplatten@newton:/var/dose/pixelmed$ pip install
    Collecting pynetdicom-0.8.2b2 from
      Downloading (47kB)
        100% |████████████████████████████████| 51kB 359kB/s 
    Running setup.py (path:/tmp/pip-install-WAxniK/pynetdicom-0.8.2b2/setup.py) egg_info for package pynetdicom-0.8.2b2 produced metadata for project name pynetdicom. Fix your #egg=pynetdicom-0.8.2b2 fragments.
    Requirement already satisfied: pydicom>=0.9.7 in /var/dose/veopenrem/lib/python2.7/site-packages (from pynetdicom) (0.9.9)
    Building wheels for collected packages: pynetdicom, pynetdicom
      Running bdist_wheel for pynetdicom ... done
      Stored in directory: /tmp/pip-ephem-wheel-cache-enRVYc/wheels/0b/9c/cf/dd059310ae37365f09c81b4be9017c27585519d1931d3f0815
      Running bdist_wheel for pynetdicom ... error
  Complete output from command /var/dose/veopenrem/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-WAxniK/pynetdicom/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-AXw0FJ --python-tag cp27:
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
  IOError: [Errno 2] No such file or directory: '/tmp/pip-install-WAxniK/pynetdicom/setup.py'
  Failed building wheel for pynetdicom
  Running setup.py clean for pynetdicom
  Complete output from command /var/dose/veopenrem/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-WAxniK/pynetdicom/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" clean --all:
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
  IOError: [Errno 2] No such file or directory: '/tmp/pip-install-WAxniK/pynetdicom/setup.py'
  Failed cleaning build dir for pynetdicom
    Successfully built pynetdicom
    Failed to build pynetdicom
    Installing collected packages: pynetdicom
    Successfully installed pynetdicom-0.8.2b2
    (veopenrem) dplatten@newton:/var/dose/pixelmed$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 16.04.5 LTS
    Release:    16.04
    Codename:   xenial
    (veopenrem) dplatten@newton:/var/dose/pixelmed$ 
  3. Ed McDonagh author

    Ok, I’ll put in a warning about that - I think I’ve already added one to the normal docs. That is all fine - the key thing is that it finishes with Successfully installed pynetdicom-0.8.2b2.

    It will be good to not have to depend on that decrepit fork of pynetdicom in the future!

    The bottom line is, all is well, carry on!

  4. David Platten

    The file in the document includes the use of LOG_ROOT. However, this isn’t in the file that is included:


    import os
    logfilename = os.path.join(MEDIA_ROOT, "openrem.log")
    qrfilename = os.path.join(MEDIA_ROOT, "openrem_qr.log")
    storefilename = os.path.join(MEDIA_ROOT, "openrem_store.log")
    extractorfilename = os.path.join(MEDIA_ROOT, "openrem_extractor.log")
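
For comparison, a minimal sketch of what the equivalent LOG_ROOT-based settings might look like (the LOG_ROOT and MEDIA_ROOT values here are assumptions for illustration, not the documented defaults):

```python
import os

# Hypothetical locations for illustration only - set these to match
# your own install in local_settings.py.
MEDIA_ROOT = "/var/dose/media"
LOG_ROOT = "/var/dose/log"

# Build each log file path from LOG_ROOT instead of MEDIA_ROOT, so the
# logs can live outside the media directory.
logfilename = os.path.join(LOG_ROOT, "openrem.log")
qrfilename = os.path.join(LOG_ROOT, "openrem_qr.log")
storefilename = os.path.join(LOG_ROOT, "openrem_store.log")
extractorfilename = os.path.join(LOG_ROOT, "openrem_extractor.log")
```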

  5. Ed McDonagh author

    I’ve just pushed an update - give it a few minutes to build then refresh the docs.

    The link was to the release branch, which was deleted when the release was made - I have now changed it to the tag of beta 1. (The tag doesn’t exist until the release is made, so it’s a bit of a chicken-and-egg issue!)

  6. David Platten

    OK. I’ve followed the document to the end, but didn’t do any of the gunicorn or nginx parts (I just used the runserver). All seems to be OK, other than my comments above. Does RabbitMQ require any configuration?

  7. Ed McDonagh author

    I don’t think so - what were you thinking of? Do you have some ideas for docs for managing RabbitMQ tasks?

    1. Tim de Wit

      I don’t have a full answer here, since at times I’m still struggling with this myself - e.g. when a Celery job has crashed and Celery and/or RabbitMQ has to be restarted. In several situations purging the RabbitMQ queue (and deleting the temporary queues) solved the problem. Celery can’t be restarted if there’s still a Celery job running, so in that case you’ll need to purge the RabbitMQ queue, kill the Celery job and restart Celery. Below are some assorted commands that I’ve used, which I hope we can combine into a consistent story if we join forces:

      Enable the rabbitmq_management console, so that queues can be managed in your browser (http://openremserver:15672/#/queues):
      sudo rabbitmq-plugins enable rabbitmq_management

      First you’ll need a user account for logging in:

      sudo rabbitmqctl add_user <username> <password>
      sudo rabbitmqctl set_user_tags <username> administrator
      sudo rabbitmqctl set_permissions -p / <username> ".*" ".*" ".*"

      Or if you only want to use the commandline instead of the web-interface, here are several useful commands:

      sudo rabbitmqadmin list queues name
      sudo rabbitmqadmin delete queue name='queuename'

      Show details for specified queue:
      sudo rabbitmqadmin list queues vhost name node messages message_stats.publish_details.rate

      Reset all queues (note: also resets all users!):

      sudo rabbitmqctl stop_app
      sudo rabbitmqctl reset
      sudo rabbitmqctl start_app

      Check celery queue:

      celery inspect active

      • Create issue for adding celery management docs
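
Pulling the recovery steps above into one sequence, a rough sketch (an untested outline - the queue name "celery" and the systemd unit name are assumptions, so adjust them to your setup):

```shell
# 1. See which tasks Celery thinks are still active.
celery inspect active

# 2. Purge/delete the stuck RabbitMQ queue (default queue name assumed).
sudo rabbitmqadmin delete queue name='celery'

# 3. Kill any stuck Celery worker processes.
sudo pkill -f 'celery worker'

# 4. Restart the daemonised Celery service (unit name is an assumption).
sudo systemctl restart openrem-celery.service
```
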
  8. Ed McDonagh author

    @tcdewit I’m going to pull this in - I’d still appreciate any test runs you can do; I’ll post the new link when I have it :-)

    1. Tim de Wit

      I don’t seem to have access to the above docs (“Permission denied”). Could you please fix this?