API

Issue #262 resolved
Simon Biggs created an issue

I would like to be able to help out in getting an API up and running. I was doing a little bit of reading (example: http://stackoverflow.com/a/7303888/3912576). I noticed that Tastypie is no longer being maintained.

Would I be able to have a look at the Django REST framework and see if I can get a demo API up and running?

Comments (29)

  1. Randle Taylor

    For sure. It's been my intention to rip out any Tastypie code and replace it with DRF, which I'd say is the de facto standard for REST APIs these days. I work with it in my day job and don't have many complaints.

  2. Simon Biggs reporter

    On my end I'll be learning as I go. But more than happy to scrap big chunks if need be.

  3. Randle Taylor

    DRF can return other data formats (e.g. XML) but the default serialization format will definitely be JSON.

  4. Simon Biggs reporter

    Heya Randle,

    I see that this is tagged 0.3.0. Is there a rough API in the works that is currently available?

    I would like to see how plausible embedding scriptedforms within QATrack is.

    I'm keen to try this if it's something you believe might be doable. I have a few nights this week available to attempt a prototype. It would be quite a dive in the deep end for me, and I'm not sure I would actually come out of it with anything to show for it, but I'd like to give it a try...

  5. Randle Taylor

    I've only got the initial sort of boilerplate foundation done in the py34_api branch. Right now you can only retrieve data from the API but can't post data yet. That will be this week's project. An example of retrieving data looks like:

    import requests
    
    root = "http://127.0.0.1:8000/api/"
    token_url = root + "get-token/"
    
    # exchange your username/password for an API token
    response = requests.post(token_url, {'username': 'yourusername', 'password': 'yourpassword'})
    headers = {"Authorization": "Token %s" % response.json()['token']}
    
    # retrieve the available test lists
    url = root + "qa/testlists/"
    response = requests.get(url, headers=headers)
    
    print(response.json())
    

    Once completed, posting new data should end up looking something like this:

    import requests
    
    root = "http://127.0.0.1:8000/api/"
    token_url = root + "get-token/"
    resp = requests.post(token_url, {'username': 'yourusername', 'password': 'yourpassword'})
    headers = {"Authorization": "Token %s" % resp.json()['token']}
    url = root + "qa/testlistinstances/"
    
    data = {
        'unit_test_collection_id': 1234,
        'day': 2,  # only required when performing part of a test list cycle (defaults to 1)
        'work_started': "2018-02-16 15:44",
        'work_completed': "2018-02-16 16:44",  # optional (defaults to current datetime)
        'in_progress': False,
        'test_data': {
            'test_1': {'value': 1234, 'skipped': False, 'comment': "foo bar baz"},
            'test_2': {'value': 1234, 'skipped': False, 'comment': "foo bar baz"},
            # ... more tests ...
            'test_N': {'value': 1234, 'skipped': False, 'comment': "foo bar baz"},
        }
    }
    
    files = {'upload_file': open('file.txt','rb')}
    
    resp = requests.post(url, json=data, files=files, headers=headers)
    print(resp.status_code)  # 200
    print(resp.json())  # {'url': 'http://127.0.0.1:8000/api/qa/testlistinstance/987'}
    
  6. Simon Biggs reporter

    That looks like it could work quite nicely.

    So, I imagine the API is primarily for reading data and submitting data to already-created test lists. That is, if scriptedforms is to be used as an interface, it would need to be an interface to an already-created test list. Is that correct? Or might there be a way to use scriptedforms to facilitate test list creation as well? That seems potentially a little far-fetched, unfortunately.

    What are your thoughts?

  7. Randle Taylor

    Yes, that's correct. My thinking was that, at least initially, people would create the test list and assign it to a unit in the usual way before using it via the API. That said, it's entirely feasible that we could create test lists via the API eventually.

  8. Simon Biggs reporter

    Also, an example of the format that Scripted Forms currently stores is the following:

    {
      "_scriptedforms.__version__": "0.2.1",
      "bye": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": false
      },
      "data[0]": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": 22.0
      },
      "data[1]": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": -82.0
      },
      "data[2]": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": 5.6000000000000005
      },
      "hello": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": true
      },
      "machine": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": "2345"
      },
      "notes": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": "aaa"
      },
      "submit_count >= 10": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": true
      },
      "table": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": {
          "data": [
            {
              "Avg": 1.5,
              "Meas1": 1.0,
              "Meas2": 2.0,
              "Meas3": null,
              "index": "6MV"
            },
            {
              "Avg": 3.3333333333,
              "Meas1": 4.0,
              "Meas2": 5.0,
              "Meas3": 1.0,
              "index": "10MV"
            }
          ],
          "schema": {
            "fields": [
              {
                "name": "index",
                "type": "string"
              },
              {
                "name": "Meas1",
                "type": "number"
              },
              {
                "name": "Meas2",
                "type": "number"
              },
              {
                "name": "Meas3",
                "type": "number"
              },
              {
                "name": "Avg",
                "type": "number"
              }
            ],
            "pandas_version": "0.20.0",
            "primaryKey": [
              "index"
            ]
          }
        }
      },
      "world": {
        "defined": true,
        "signature": null,
        "timestamp": null,
        "userid": null,
        "value": true
      }
    }

    The "defined" could be mapped to skipped within QATrack. The value for most types could be mapped directly to a test. But what about the table type that can be seen above? Is there any way that could map to something within QATrack? The format I've used is a python pandas "tojson" method.

    It's also okay, of course, if the table type isn't possible in this iteration.

  9. Simon Biggs reporter

    Makes sense regarding creating the test list first. It's definitely a good first step that makes the setup usable.

  10. Randle Taylor

    There's nothing currently that would map directly to a table like that... you'd have to split it up into individual tests.
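
    For what it's worth, the split could probably be done mechanically from the table's "data" list. A rough sketch (the test names here are hypothetical):

    # For illustration only: flatten the scriptedforms table value into one
    # result per cell, each of which could map to a simple numerical test.
    def flatten_table(table_value):
        results = {}
        for row in table_value["data"]:
            label = row["index"]  # e.g. "6MV", "10MV"
            for column, value in row.items():
                if column == "index":
                    continue
                test_name = "table_%s_%s" % (label.lower(), column.lower())
                results[test_name] = {"value": value, "skipped": value is None}
        return results

    # e.g. {'table_6mv_meas1': {'value': 1.0, 'skipped': False},
    #       'table_6mv_meas3': {'value': None, 'skipped': True}, ...}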

  11. Simon Biggs reporter

    Yeah, I thought that might be an issue. Hmmm. Thanks Randle.

    I'll see if I can think of any neat solutions.

  12. Simon Biggs reporter

    So I think I would need to make a scriptedforms-qatrack which has a few additions. It would request username and password when the site is first accessed and retrieve the token from QATrack.

    The markdown files defining the forms would need a way to align themselves with the QATrack test list.

    The form should then initially call QATrack and get all of the test names within the test list. It can then present an error at the top of the form declaring each test item for which there is not yet a variable defined within the form. It would also present an error if the variable type chosen for the QATrack tests could not be mapped across.

    Because the form live-updates as the template is edited, you would be able to see which variable names (tests) still need to be added as you build the form.

    Potentially, for some test lists within QATrack you could just add placeholder tests which store the values and leave the Python logic up to scriptedforms. In the future I imagine this could work so that, as the scripted form is being built, placeholder tests within the test list are created and removed as needed via the API.

    Ideally there would be a visual indication on each of the variables within the GUI for those that were actually being sent back to QATrack.

    The last step on the QATrack end would be to define the ability to set an alternative URL for test usage. It would also need to provide a return address as a URL parameter so that scriptedforms knows what URL to send the user back to when the user hits complete. QATrack might even be able to send the API token for the current user along with the URL so that the user doesn't have to log in again within scriptedforms...
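
    A very rough sketch of that "missing tests" check, reusing the token/retrieval pattern from the example above (the test list detail URL and the shape of its response are guesses on my part):

    import requests

    root = "http://127.0.0.1:8000/api/"
    resp = requests.post(root + "get-token/", {'username': 'user', 'password': 'password'})
    headers = {"Authorization": "Token %s" % resp.json()['token']}

    # fetch the test list the form is aligned with and compare its test names
    # against the variables defined in the form template
    test_list = requests.get(root + "qa/testlists/1/", headers=headers).json()
    qatrack_tests = set(test_list.get("tests", []))        # assumed response field
    form_variables = {"machine", "data[0]", "data[1]"}      # parsed from the form template

    missing = qatrack_tests - form_variables
    if missing:
        print("No form variable defined yet for: %s" % ", ".join(sorted(missing)))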

  13. Simon Biggs reporter

    After sleeping on it, I believe I was potentially a little too gung-ho. I think it would be prudent to let everything mature a little bit. As scriptedforms matures I'll keep the QATrack API in the back of my mind, and keep checking up on it.

    Maybe in about 6 months I'll revisit integration.

    Thanks Randle :)

  14. Vincent Leduc

    Hi Randle,

    Very happy to see you've begun tackling the API.

    I assume you are planning to support all types of tests (boolean, simple numerical, etc.)? I'm especially curious about the File Upload type. Would we be able to upload a file as a base64-encoded string, for instance?

    Also, I envision the following use case for the API. A physicist wants to perform the monthly QA on a linac. So she clicks the Perform button for the unit in question, which opens up the page to edit a test list instance associated with the unit test collection for the monthly QA.

    She manually inputs a few results in the form, but eventually comes to a test which is actually performed via external software. The results are to be uploaded to QATrack+ via the API.

    How are the test results to be inserted into the specific test list instance she was editing? It's possible she has not even submitted the form yet.

    It seems, from what I understand, that the API in its soon-to-be state would create a new test list instance each time results are posted? I think ideally there would be a mechanism for mixing manually entered values with API-uploaded values in the same test list instance. This implies it would have to be possible to use the API to upload results for a subset of tests within a test collection, and to select which test list instance the results should belong to.

    Hope I'm making sense. I'm just curious about your thoughts on this, and how you see the API being used. Thanks!

  15. Randle Taylor

    "I assume you are planning to support all types of tests (boolean, simple numerical, etc.)? I'm especially curious about the File Upload type. Would we be able to upload a file as a base64-encoded string, for instance?"

    Yes the hope is I can support all the different test types including file uploads. Generally you won't need to encode files yourself as whatever you are using to POST the data to the server will handle that. I edited the sample of posting data above to show what attaching a file would look like using Python/requests.

    "It seems, from what I understand, that the API in its soon-to-be state would create a new test list instance each time results are posted?"

    Yes that's correct with the caveat that you should also be able to edit existing test list instances.

    "How are the test results to be inserted into the specific test list instance she was editing? It's possible she has not even submitted the form yet."

    I see exactly what you mean, but I think the approach you are proposing, pushing data to an ongoing test list instance, would be difficult to orchestrate.

    Instead I would suggest in this case just using a file upload test to populate your other test values. So rather than your script posting test values directly to QATrack+, it instead writes those test values to a file which the user then uploads while performing the test list. This takes an extra step (because the user has to manually select the file to upload) but it is much much simpler from an implementation perspective (especially since it already works!).
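
    On the external-script side, that could be as simple as dumping the values to a file that the user attaches to the upload test (a minimal sketch; the file name and values are made up):

    import json

    # values measured/calculated by the external software
    results = {
        'output_6mv': 1.002,
        'output_10mv': 0.998,
    }

    # the user then selects this file in the file upload test while performing the test list
    with open("monthly_output_results.json", "w") as f:
        json.dump(results, f)

    The upload test's calculation procedure would then parse that file and populate the other test values, as described above.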

    It also doesn't have to specifically be a file upload. You can for example easily use an external API to get the current air temperature & pressure (probably a bad idea!) using a composite test:

    import requests
    response = requests.get("http://weatherapi.com/api/weather/ON/Port+Elgin/")
    payload = response.json()
    weather = {
        'temperature': payload['temperature'],
        'pressure': payload['pressure'],
    }
    

    I do see the value in your "hybrid" manual/API use case but I'm not sure it justifies the complexity the implementation would require.

    Does that make sense? Thanks for your comments / ideas :)

  16. Vincent Leduc

    Thanks Randle.

    What you're saying does make sense. I agree that there is added complexity to being able to push data to an ongoing test list instance. I was trying to see how those technical obstacles could be dealt with.

    I guess the results pushed via the API could be stored in some kind of staging area within QATrack, waiting to be imported by the user into the ongoing test list. The file upload seems like the best solution, and actually involves pretty much the same number of manual operations as selecting a result from the hypothetical staging area.

  17. Randle Taylor

    Another way you could do it is to push a new (partially completed) test list instance via the API with the "In Progress" flag set; then your user could complete that test list instance via the web interface.
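
    A rough sketch following the earlier example (field names may still change before release):

    import requests

    root = "http://127.0.0.1:8000/api/"
    resp = requests.post(root + "get-token/", {'username': 'user', 'password': 'password'})
    headers = {"Authorization": "Token %s" % resp.json()['token']}

    data = {
        'unit_test_collection_id': 1234,
        'work_started': "2018-02-16 15:44",
        'in_progress': True,  # leave the instance open so it can be completed in the web interface
        'test_data': {
            # only the tests performed by the external system; the rest are
            # filled in manually later (test name here is hypothetical)
            'api_measured_test': {'value': 42.0, 'skipped': False},
        },
    }

    resp = requests.post(root + "qa/testlistinstances/", json=data, headers=headers)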

  18. Vincent Leduc

    Right. Although this implies the user has to remember to first complete the tests pushed through the API, then move on to the rest of the manual tests (which might not work if the various tests must be completed in a different order).

    This makes me think: would it be a good idea to have the ability within QATrack to merge the results of several in-progress test list instances of the same test collection? This would allow the user to complete different parts of the test collection at different times (with or without the API), and combine the results once finished.

    Of course then you'd have to handle conflicts somehow. But in a first iteration, merges could only be allowed if no test results overlap between the test instances. Or, by default, recent results could overwrite older ones.

  19. Randle Taylor

    "This makes me think: would it be a good idea to have the ability within QATrack to merge the results of several in-progress test list instances of the same test collection?"

    I'm not sure I see a lot of value in this. Why not edit the existing in-progress test list instance instead of creating another new in-progress instance?

  20. Randle Taylor

    I've been making pretty good progress on the API (on the py34_api branch). The code is a bit hairy at this point but everything is working well so far. I have to retract my earlier statement about not needing to base64 encode uploads, though, as I forgot it's not easily possible to submit binary files along with JSON data. Base64 encoding works well though.

    Functional example demonstrating file uploads (using text and base64 encoding), adding comments to test instances, adding comments to the test list instance, and adding attachments to the test list:

    import base64
    import requests
    
    root = "http://127.0.0.1:8000/api/"
    token_url = root + "get-token/"
    resp = requests.post(token_url, {'username': 'user', 'password': 'password'})
    headers = {"Authorization": "Token %s" % resp.json()['token']}
    url = root + "qa/testlistinstances/"
    
    
    data = {
        'unit_test_collection': 'http://127.0.0.1:8000/api/qa/unittestcollections/101/',
        'work_completed': '2018-07-25 10:49:47',
        'work_started': '2018-07-25 10:49:00',
        'tests': {
            'simple_numerical_test': {
                'value': 1
            },
            'string_test': {
                'value': 'Pass',
                "skipped": True
            },
            'another_string_test': {
                'value': 'High',
                "comment": "hello d3"
            },
            'upload_text_test': {
                'filename': 'test.txt',
                'value': 'hello text',  # or e.g. open("text_file.txt", "r").read()
                'encoding': 'text'
            },
            'upload_binary_test': {
                'filename': 'image.png',
                'value': base64.b64encode(open("path/to/image.png", 'rb').read()).decode(),
                'encoding': 'base64'
            },
        },
        "attachments": [
            {
                'filename': 'some_report.pdf',
                'value': base64.b64encode(open("/path/to/some_report.pdf", 'rb').read()).decode(),
                'encoding': 'base64'
            },
        ],
    
    }
    
    
    resp = requests.post(url, json=data, headers=headers)
    print(resp.status_code) # 201
    

    I think the last feature I want to implement for this release is editing test list instances, and then more features can be added as demand arises. That will mean the functionality of the API for this release would be:

    • create, read, and edit TestListInstances (i.e. perform QA, the main purpose of the API)
    • read-only API for all other models (Units, TestLists, TestListCycles, UnitTestCollections, etc.)

    Down the line we can add features such as being able to create new test lists, or assigning test lists to units, if there is demand/need for it.
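
    For editing, I'm imagining the usual DRF pattern of a PATCH against the instance's detail URL; a rough sketch only, since the exact payload format isn't settled yet:

    import requests

    root = "http://127.0.0.1:8000/api/"
    resp = requests.post(root + "get-token/", {'username': 'user', 'password': 'password'})
    headers = {"Authorization": "Token %s" % resp.json()['token']}

    # detail URL of an existing test list instance (exact form may differ)
    instance_url = root + "qa/testlistinstances/987/"

    data = {
        'work_completed': '2018-07-25 11:30:00',
        'tests': {
            'simple_numerical_test': {'value': 2},  # corrected value
        },
    }

    resp = requests.patch(instance_url, json=data, headers=headers)
    print(resp.status_code)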

  21. Vincent Leduc

    Hi Randle,

    Thanks for the example! And being able to read all the models will be very neat.

    About the ability to merge test list instances: I agree it's not terribly useful, but it can happen that several colleagues start working on a test list and don't realize there is already one in progress.

    From the API perspective: opening up to external systems can allow a lot of content to flood in. Imagine a physicist repeating a test 5-6 times, each time making adjustments until the tolerance is met, with an external system pushing a test list instance each time he completes the test. He might want to consolidate his results. His test list instances could also include results that he entered manually. These could be merged with the results pushed via the API by merging all the test list instances.

    Mind you, I'm not saying this feature will be useful for most users, or even that it should be implemented, I was just thinking out loud.

  22. Randle Taylor

    Thanks Simon! Still a work in progress but hope they will be useful to everyone...they have not been fun to write 😆
