Openskin whitelist systems
As discussed in the December meeting, we might want to whitelist systems that are compatible with the current version of openskin.
Comments (13)
-
reporter -
reporter checks overrule option refs #888
→ <<cset b47564b78f79>>
-
reporter Hi @Luuk, I’ve added an “ignore whitelist” option to the Django forms, a fixture dir, and the fixture dir to the settings. It now checks whether the whitelist must be ignored and, if not, it checks for each skin dose calculation whether the system is on the whitelist. If the software version field is left blank, all software versions are allowed.
Is this close to what you had in mind?
Left to do:
- in case we do not want to ignore the whitelist and a system is not found suitable, what can we do to prevent the calculation restarting upon refreshing the page? (e.g. on our server we serialise “empty” structs, i.e. skin_map = {0,0})
- fix the Django tests, because the pipeline builds fail at the moment
- …?
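The check described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual OpenREM code: the function name, the in-memory whitelist, and the example system entries are all made up for clarity.

```python
# Hypothetical sketch of the whitelist check described above; the real
# implementation stores the whitelist in a Django fixture/model.

WHITELIST = [
    # (manufacturer, model_name, software_version); a blank version
    # means every software version of that system is allowed.
    ("ACME", "SkinScan 3000", ""),
    ("Examplecorp", "DoseView", "VB10"),
]


def system_is_whitelisted(manufacturer, model_name, software_version,
                          ignore_whitelist=False):
    """Return True if the skin dose calculation may run for this system."""
    if ignore_whitelist:
        # The "ignore whitelist" option bypasses the check entirely.
        return True
    for wl_manufacturer, wl_model, wl_version in WHITELIST:
        if (manufacturer, model_name) != (wl_manufacturer, wl_model):
            continue
        # A blank whitelisted version allows every software version.
        if not wl_version or wl_version == software_version:
            return True
    return False
```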
-
Just a note: this is the same issue as #883.
-
reporter @Luuk Whoops, I don't know why I could not find it before.
Latest updates:
- I implemented the software version check as you suggested, i.e. equal and newer software versions are allowed.
- Mentioned the overrule option briefly in the docs (perhaps we should elaborate on it?). I think I also want to mention that an advanced user can change this fixture file to add systems that the user trusts (otherwise whitelisting is very binary and limited to the systems we provided), but I do not know how this works with the Docker-installed version. For now I’ve added the fixture dir to the settings file; manage.py loaddata openskin_white_list.json does the trick.
Whenever make_skin_map is called, it checks the system against the whitelisted systems: if no software version is specified in the fixture, all software versions are allowed; otherwise the versions are compared. I’ve separated this part of make_skin_map.py into a separate module to make it a bit more versatile.
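The “equal and newer software versions are allowed” comparison mentioned earlier could look like the sketch below. The dotted-numeric version format is an assumption for illustration; real device version strings (e.g. “VB10”) may need more forgiving parsing.

```python
def version_tuple(version):
    """Parse a dotted numeric version string into a comparable tuple.

    Assumes versions like "1.2.3"; real device version strings may
    need vendor-specific parsing.
    """
    return tuple(int(part) for part in version.split("."))


def version_allowed(system_version, whitelisted_version):
    """Equal and newer software versions are allowed.

    A blank whitelisted version allows every software version,
    matching the behaviour described in the comments above.
    """
    if not whitelisted_version:
        return True
    return version_tuple(system_version) >= version_tuple(whitelisted_version)
```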
As for the pipeline test, it fails. I’m not sure why; everything works fine on my computer, but perhaps this could be due to the fixture not being loaded during the tests. @Ed McDonagh, what do you think?
-
Issue #883 was marked as a duplicate of this issue.
-
The first test that fails @wens is test_skin_map_zee. This just treats OpenSkin as a black box: test_files/RF-RDSR-Siemens-Zee.dcm is imported, tools.make_skin_map.make_skin_map is called on the import, then the saved pickle file is found by reconstructing the path and its content is checked. In this branch, the file is found, but the content is {'skin_map': [0, 0], 'skin_map_version': '0.8'}. This is why the test fails. What do you expect the content to be?
-
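The black-box flow described above (reconstruct the pickle path, load it, check the content) could be sketched like this. The path scheme and function name here are hypothetical; the real test suite may lay files out differently.

```python
import hashlib
import os
import pickle


def load_saved_skin_map(media_root, study_uid):
    """Reconstruct the path of the saved skin-map pickle and return its
    content, mirroring the black-box check described above.

    Assumed layout: MEDIA_ROOT/skin_maps/skin_map_<sha256 of UID>.p
    (the actual path scheme in the codebase may differ).
    """
    digest = hashlib.sha256(study_uid.encode("utf-8")).hexdigest()
    path = os.path.join(media_root, "skin_maps", "skin_map_" + digest + ".p")
    with open(path, "rb") as f:
        return pickle.load(f)
```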
reporter @Ed McDonagh @Luuk Hi Ed and Luuk, that result is exactly what I would expect with no fixture files loaded. I’ve added a few lines to the test file that load the fixtures, but I was wondering whether it would be more logical to do that in the pipeline YML definition.
-
Loading fixtures using class attribute of TestCase. Also loading settings from django.conf. Refs #888
→ <<cset 5efcaa12cc83>>
-
Loading fixtures in the same way in the other test. All tests now passing locally. Refs #888
→ <<cset d0db3f3c1f90>>
-
Existing tests now fixed @wens
-
Not tested yet, but I think this is what is needed for the docs. Refs #888 [skip ci] docs only
→ <<cset 9ab518597752>>
-
reporter codacy issues refs #888
→ <<cset 8050aa0e5c01>>
-
(crude) approach to system white list based on manufacturer, model name and software version, added a check before calculating the skin map, white listed systems should be added to the fixture file. Refs issue #888
→ <<cset 977789d4b2e2>>
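For reference, a Django fixture entry for such a whitelist might look like the sketch below. The app label, model name, and field names are hypothetical; the actual model in the codebase may differ.

```json
[
  {
    "model": "remapp.openskinsafelist",
    "pk": 1,
    "fields": {
      "manufacturer": "Siemens",
      "model_name": "AXIOM-Artis",
      "software_version": ""
    }
  }
]
```

With a blank software_version, every software version of that system is accepted, matching the behaviour described in the comments above.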