customized celery task_id?

Issue #800 resolved
Tim de Wit created an issue

When running multiple tasks (e.g. a historical import of dose info from PACS) it would be extremely helpful if I could somehow identify the “Date from” and/or “Date to” from the task_id. That way, especially when tasks seem to have failed (or take forever to finish), it becomes very easy to repeat the query. This mostly applies to qrscu/movescu.
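
For illustration, a minimal sketch of what I have in mind (not actual OpenREM code; the qrscu task below is just a stand-in): Celery’s apply_async accepts a caller-supplied task_id, so the date range could be encoded in it:

    # Sketch only: encode the date range in a caller-supplied Celery task_id.
    import uuid

    from celery import Celery

    app = Celery("openrem")  # broker configuration omitted for brevity


    @app.task
    def qrscu(date_from, date_to):
        """Stand-in for the real query/retrieve task."""
        ...


    # Build a readable, still-unique task_id from the dates before queuing.
    date_from, date_to = "2019-01-01", "2019-01-01"
    task_id = f"qrscu-{date_from}-{date_to}-{uuid.uuid4().hex[:8]}"
    result = qrscu.apply_async(args=[date_from, date_to], task_id=task_id)
    print(result.id)  # e.g. qrscu-2019-01-01-2019-01-01-1a2b3c4d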

Comments (39)

  1. Tim de Wit reporter

    Just as an example: some movescu tasks have already been running for over a day without any way (known to me) to see their progress. Being able to identify them by date would help a lot (terminate & retry).
    (btw I’m always retrieving 1 day at a time, to prevent missing data because of truncated PACS responses).

  2. Ed McDonagh

    Interesting idea. I wonder if there might be a different way to achieve it though?

    Have you tried looking for the uuid in the log file?

  3. Tim de Wit reporter

    qrscu task_ids could probably be retrieved from the logfile, but movescu tasks don’t seem to contain any reference to the qrscu tasks that spawned them.
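
    As an illustration of the missing link (the task below is a stand-in, not OpenREM’s actual code): if the move task were a bound Celery task, it could log its own uuid together with the query uuid that spawned it:

        # Sketch: a bound task can log its own Celery id and the parent query id.
        import logging

        from celery import Celery

        app = Celery("openrem")
        logger = logging.getLogger(__name__)


        @app.task(bind=True)
        def movescu(self, query_id):
            logger.info("movescu task %s started for query %s", self.request.id, query_id)
            ...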

  4. Ed McDonagh

    So if we can define the UUID used for the various tasks, then we can probably replace the UUID in the tasks list with a link to details of the query, or the export etc. Maybe as a hover-over/pop-up type thing. Or a column in its own right.

    Yes, it might still be a good idea to have multiple queues! It looks like we’ll be sticking with Celery now (but Dockerised on Windows), so maybe we should look at that again.
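
    For reference, a rough sketch of what separate queues could look like with Celery 4 (the task paths and queue names here are assumptions, not a finished design):

        # Sketch: route the query and move tasks to their own queues so a
        # long-running movescu job cannot block everything else.
        from celery import Celery

        app = Celery("openrem")
        app.conf.task_routes = {
            "remapp.netdicom.qrscu.qrscu": {"queue": "query"},
            "remapp.netdicom.qrscu.movescu": {"queue": "move"},
        }

        # Each queue then gets its own worker, e.g.
        #   celery -A openrem worker -Q query -n query@%h
        #   celery -A openrem worker -Q move -n move@%h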

  5. Tim de Wit reporter

    That would probably be possible indeed! Sounds great if we could pull that off!

    As for the multiple queues: the code has changed “quite a bit” since the PR two years ago. It would probably be easier to implement the same thing again in the new version of qrscu.py. I could make the adjustments and create a new PR if you want.

  6. Ed McDonagh

    I agree that it would be better to start again. qrscu.py has changed quite a lot between the current release (0.10) and development, as I’ve been rewriting it to make use of pynetdicom 1.x, and I will be making more changes before the 1.0 release.

    I am not intending to release another version with Python 2.7, so any PRs now need to be against develop with Python 3 etc.

    I don’t know if you are able to set up a development environment with Python 3 to do this work?

  7. Tim de Wit reporter

    wrt queues: I’ll set up a Python dev environment and start working on it next week.

    I’d also be willing to contribute to your pop-up/hover-over idea, but I’m not sure I’m comfortable enough with the code to make the first move. I was thinking: why not reuse the query UUID as the task_id for the corresponding task? On mouse-over we could show the actual query status (the same as on the “query remote server” page, but then also for tasks initiated somewhere else).
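
    A minimal self-contained sketch of what I mean (movescu here is just a stand-in for the real move task):

        # Sketch: launch the move task with the query's UUID as its Celery
        # task_id, so the entry in the task list maps straight back to the query.
        import uuid

        from celery import Celery

        app = Celery("openrem")


        @app.task
        def movescu(query_id):
            ...


        query_uuid = uuid.uuid4()  # in OpenREM this would come from the query record
        movescu.apply_async(args=[str(query_uuid)], task_id=str(query_uuid))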

  8. Ed McDonagh

    What we can’t do is use the same UUID for the query task and the move task, as they need to be unique (naturally!), but we could set the UUID used and store it in the query model in the database in order to identify it later.
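
    Something along these lines, perhaps (the model and field names are made up for the sketch, not the real remapp models):

        # Sketch: keep the Celery task ids on the query record so both the
        # query task and the move task can be identified later.
        import uuid

        from django.db import models


        class DicomQueryRecord(models.Model):  # hypothetical model
            query_id = models.UUIDField(default=uuid.uuid4, editable=False)
            query_task_uuid = models.CharField(max_length=64, blank=True)
            move_task_uuid = models.CharField(max_length=64, blank=True)

        # When queuing, record the id that Celery assigns, e.g.
        #   result = query_task.apply_async(args=[...])
        #   record.query_task_uuid = result.id
        #   record.save(update_fields=["query_task_uuid"])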

  9. Tim de Wit reporter

    Do you happen to have a “one page complete Ubuntu install“ already for the Python 3 environment? It might save me some work… 2 pages are also fine 🙂

  10. Tim de Wit reporter

    You mean the query_id from the dicomquery table? I didn’t realize it was used as the task uuid?

  11. Ed McDonagh

    It isn’t, but we’ll want to be able to identify both the query Celery task and the move Celery task, which will need to have different UUIDs.
