customized celery task_id?
When running multiple tasks (e.g. a historical import of dose info from PACS), it would be extremely helpful if I could somehow identify the “Date from” and/or “Date to” from the task_id. That way, especially when tasks seem to have failed (or take forever to finish), it becomes very easy to repeat the query. This mostly applies to qrscu/movescu.
Comments (39)
-
reporter -
Interesting idea. I wonder if there might be a different way to achieve it though?
Have you tried looking for the uuid in the log file?
-
reporter qrscu task_ids could probably be retrieved from the logfile, but movescu tasks don’t seem to contain any reference to the qrscu tasks that spawned them.
-
reporter It seems to be possible by replacing delay() everywhere with apply_async(), which allows additional parameters like task_id… and queue (https://bitbucket.org/openrem/openrem/pull-requests/67/issue469addbackgroundqueue/diff):
# movescu.delay(str(query.query_id))
movescu.apply_async(kwargs={'query_id': str(query.query_id)}, task_id="{}-{}".format(d.StudyDate, query.query_id))
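A minimal sketch of the idea above: build the task_id from the StudyDate and the query id, so a stuck movescu task can be matched back to its query. The helper name build_task_id is hypothetical, and movescu/query stand in for the real OpenREM objects.

```python
# Hypothetical helper: embed the StudyDate in the Celery task_id so that
# a long-running or failed movescu task can be identified by date.
def build_task_id(study_date, query_id):
    """Combine StudyDate (YYYYMMDD) and the query uuid into one task_id."""
    return "{}-{}".format(study_date, query_id)

# Usage sketch (assumes the surrounding qrscu code, not runnable here):
# movescu.apply_async(
#     kwargs={"query_id": str(query.query_id)},
#     task_id=build_task_id(d.StudyDate, query.query_id),
# )
```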
-
So if we can define the UUID used for the various tasks, then we can probably replace the UUID in the tasks list with a link to details of the query, or the export etc. Maybe as a hover-over/pop-up type thing. Or a column in its own right.
Yes, it might still be a good idea to have multiple queues! It looks like we’ll be sticking with Celery now (but Dockerised on Windows), so maybe we should look at that again.
-
reporter That would probably be possible indeed! Sounds great if we could pull that off!
As for the multiple queues: the code has changed “quite a bit” since the PR 2 years ago. It would probably be easier to implement the same thing again in the new version of qrscu.py. I could make the adjustments and create a new PR if you want.
-
I agree that it would be better to start again. qrscu.py has changed quite a lot between the current release (0.10) and development as I’ve been rewriting to make use of pynetdicom 1.x, and will be making more changes before the 1.0 release.
I am not intending to release another version with python 2.7, so any PRs now need to be against develop with Python 3 etc.
I don’t know if you are able to set up a development environment with Python 3 to do this work?
-
reporter wrt queues: I’ll set up a Python dev environment and start working on it next week.
Would also be willing to contribute to your popup/hover-over idea, but I’m not sure if I’m comfortable enough with the code to make the first move. I was thinking: why not reuse the query uuid as the task_id for the corresponding task? On mouse-over we could show the actual query status (same as on the “query remote server” page, but then also for tasks initiated elsewhere).
-
What we can’t do is use the same uuid for the query task and the move task, as they need to be unique (naturally!), but we could set and store the uuid used in the query model in the database in order to identify it later.
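A hedged sketch of what storing the uuids might look like: generate distinct uuids for the query and move tasks up front, store them on the query record, and pass them to apply_async. The field names query_task_uuid and move_task_uuid are hypothetical, not the actual OpenREM model fields.

```python
import uuid

# Generate explicit, distinct uuids for the two Celery tasks so each can
# be identified later from the tasks list.
query_task_uuid = str(uuid.uuid4())
move_task_uuid = str(uuid.uuid4())  # must differ from the query task's uuid

# Usage sketch (hypothetical field names, assumes a Django query model):
# query.query_task_uuid = query_task_uuid
# query.move_task_uuid = move_task_uuid
# query.save()
# qrscu.apply_async(kwargs={"query_id": str(query.query_id)},
#                   task_id=query_task_uuid)
```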
-
reporter Do you happen to have a “one page complete Ubuntu install” already for the Python 3 environment? Might save me some work… 2 pages are also fine
-
reporter The query_id from the dicomquery table, you mean? I didn’t realize it was used as the task uuid?
-
It isn’t; but we’ll want to be able to identify both the query celery task and the move celery task, which will need to have different uuids.
-
To set up a Python 3 environment, it is similar to the start of https://docs.openrem.org/en/0.10.0-docs/code_running_tests.html, except:
- Use a Python 3 virtual env instead of a Python 2.7 one
- Don’t install pynetdicom as described (it will already be installed in the previous step)
-
Regarding hover-over / popup: we already use something like this on the skin dose maps. It makes use of the static/js/skin-dose-maps/jquery.qtip.min.js JavaScript library. However, we may be better off using a CSS tooltip: https://www.w3schools.com/css/css_tooltip.asp.
-
Initial stab at getting information through to tasks page. Refs #800 <<cset d47b976aa170>>
-
Not pretty, and only done for queries so far, but shows the query parameters and result message in the tasks table. Refs #800 <<cset e9f2e303aae8>>
-
Removing None filters from text. Refs #800 <<cset 2cbc83445164>>
-
Starting to get the export summary prepared. Refs #800 <<cset dd12eed82c4a>>
-
CT CSV query items now displayed. Refs #800 <<cset 789f087ea2dc>>
-
Moved CT csv code to common. Refs #800 <<cset 73029b0b84c4>>
-
Now works for ctxlsx too. Refs #800 <<cset ac7e67b4f707>>
-
Now works for ct_phe_2019 too. Refs #800 <<cset afce835eb580>>
-
Now works for DX exports too. Refs #800 <<cset bd85304104bf>>
-
Now works for MG exports too. Refs #800 <<cset 029ca4d4fdfe>>
-
RF exports now populating tasks source column too. Refs #800 <<cset 4839f0c80356>>
-
Added export filter information to exports page. Refs #800 <<cset 4fdb05a282f5>>
-
Minor change to make ct xlsx the same as all the other exports. Refs #800 <<cset cc0c5c750150>>
-
Deleting commented out code. Refs #800 <<cset 2b452825515e>>
-
Successful Move now has some result information - need to work on error situations. Refs #800 <<cset 0386e933fe1b>>
-
Properly separated query 'stage' and move_summary. Still need to do error situations. Refs #800 <<cset 24500ac63994>>
-
Tidying up dicomviews imports. Refs #800 <<cset 493f1ed8f91d>>
-
More tidying up dicomviews. Refs #800 <<cset ef605ad363aa>>
-
Now handles timeout caused by A-ABORT sent by Orthanc if unknown move destination is sent. Refs #800 <<cset 4fbed6267359>>
-
Now handles failure codes rather than treating them the same as success! Refs #800 <<cset 9ef043e11ddf>>
-
Adding ref #800 to changes <<cset 557e7d222fb1>>
-
Addressing some of the Codacy issues for ref #800 <<cset 669d3e7961a7>>
-
Disabling a couple of the pylint error checks that aren't. Refs #800 <<cset 505a7402f021>>
-
- changed status to resolved
Merged in issue800customceleryID (pull request #351)
Fixes #800 Celery tasks with more information <<cset 3e68e01b1b5a>>
Just as an example: some movescu tasks have already been running for over a day without any way (known to me) to see their progress. Being able to identify them by date would help a lot… (terminate & retry).
(btw I’m always retrieving 1 day at a time, to prevent missing data because of truncated PACS responses).
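The one-day-at-a-time approach described above can be sketched as a simple date iterator; daily_dates is a hypothetical helper, not part of OpenREM.

```python
from datetime import date, timedelta

# Illustrative sketch: split a historical import into single-day queries,
# so each query/move pair covers exactly one StudyDate.
def daily_dates(date_from, date_to):
    """Yield each date from date_from to date_to inclusive."""
    current = date_from
    while current <= date_to:
        yield current
        current += timedelta(days=1)

# Usage sketch:
# for d in daily_dates(date(2019, 1, 1), date(2019, 1, 31)):
#     run one query/move pair for StudyDate d.strftime("%Y%m%d")
```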