Ability to use output from earlier test in input to later test

kra created an issue

I have some tests that need to keep state in between them.

For example, I have a HTTP API where the caller POSTs to one URL, gets a generated ID back, and GETs another URL with that ID as an argument for a status update. The GET must be constructed with the output of the POST.

The simplest way to do this might be with a regexp group, where the matched group in a (re) test was available as input to a future test. But I think this will have to be a little more involved than just making the last test's groups available; I can think of things I want to test that would need to remember output from two or more interactions back. Maybe a global dict and named test lines, like:

match previous test
  $ do_setup foo
  You gave me foo, the ID is (.+) (re)
  $ use_id \1
  You used id (.+), which matches foo
match earlier test with setup_id label
  $ setup_id do_setup bar
  You gave me bar, the ID is (.+) (re)
  $ do_something
  OK
  $ use_id setup_id:\1
  You used id (.+), which matches bar
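
To make that concrete, here is a rough, hypothetical sketch of the bookkeeping such a feature would need inside the runner; the names (captures, record, substitute) are made up for illustration and aren't anything cram provides today:

  import re

  # Hypothetical bookkeeping for labeled (re) captures across commands.
  captures = {}   # label ('' means "previous command") -> re.Match

  def record(label, pattern, output):
      # Match a command's output against an (re) expectation and remember
      # the groups under the given label (and as the most recent match).
      m = re.match(pattern, output)
      if m:
          captures[label] = m
          captures[''] = m
      return m is not None

  def substitute(command):
      # Rewrite \1-style references (optionally prefixed with "label:") in
      # a later command using the remembered groups.
      def repl(m):
          label, group = m.group(1) or '', int(m.group(2))
          return captures[label].group(group)
      return re.sub(r'(?:(\w+):)?\\(\d+)', repl, command)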

Maybe the answer is a pyunit test runner? I was thinking about something like that, but I don't want to give up the simplicity of the current format and the auto-patching; they've let me write tests very quickly. Are there any examples of scripting cram.test()?

Comments (5)

  1. kra

    Or here's a random idea, provide a keyword so users can write their own helpers:

    this works
      $ export FOO=bar
      $ echo $FOO
      bar
    a keyword would let us do this
      $ echo $FOO
      $FOO (env)
    

    If we could use the environment when matching output, we could write our own helpers that put command output into environment variables.
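
    For what it's worth, the matching side of an (env) keyword might amount to little more than expanding variable references in the expected line before comparing. A minimal sketch (not cram's actual code, and ignoring how it would hook into the existing matcher):

      import re

      def expand_env(expected, env):
          # Replace $NAME / ${NAME} in an expected-output line with values
          # from the test environment, leaving unknown names untouched.
          def repl(m):
              name = m.group(1) or m.group(2)
              return env.get(name, m.group(0))
          return re.sub(r'\$(?:(\w+)|\{(\w+)\})', repl, expected)

      def env_line_matches(expected, actual, env):
          # After substitution the line is compared literally; combining
          # (env) with (re) would need more thought.
          return expand_env(expected, env) == actual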

  2. Brodie Rao

    (env) sounds interesting, and shouldn't be hard to implement.

    I'm not sure if it would work in your specific case, but you could also redirect a command's output to a file, cat it to confirm the output, and then use that log file elsewhere in the test.

  3. kra

    Here's a proof of concept:

    https://gist.github.com/891679

    Implementing this outside of cram.py was unwieldy, because the test commands can't alter the calling environment - I had to write to a temp file and pull that into the env. It would also probably get awkward wherever shell quoting is needed.

    The example shows that some kind of output parsing is needed. Going down that route, I'm liking your suggestion of just teeing to an output file and having a helper construct the new arguments for the next command instead of mucking with environment variables. I think that unixy philosophy would fit cram well.
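
    As a sketch of that unixy approach (the helper name and the expected output format here are both made up), a small script could pull the generated ID out of a teed log and print it for command substitution:

      #!/usr/bin/env python
      # Hypothetical get_id.py: extract the generated ID from a teed log
      # file and print it, so the next command can use $(...) substitution
      # instead of environment variables.
      import re
      import sys

      def main():
          log = open(sys.argv[1]).read()
          m = re.search(r'the ID is (\S+)', log)   # assumed output format
          if not m:
              sys.exit('no ID found in %s' % sys.argv[1])
          print(m.group(1))

      if __name__ == '__main__':
          main()

    A test would then tee or redirect its output to a log file and run something like use_id "$(get_id.py output.log)" as the next command.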

  4. kra

    Here's a proof of concept from a different angle. The idea is:

    - Tell me what stream to look at (stdin, stdout)

    - Tell me how to turn that string into key-value pairs

    - Put those key-value pairs in the environment

    Caller would supply the key-value helpers.

    Normal usage - pin and txid change with each POST.
      $ $TESTDIR/client.py POST /verify/v1/call phone=5035551212 message=message
      200 OK
      {"stat": "OK", "response": {"pin": ".*", "txid": ".*"}} (re)
    We need that txid for this GET.
      $ $TESTDIR/client.py GET /verify/v1/status txid=XXX
      400 Bad Request
      {"stat": "FAIL", "code": 40003, "message": "Invalid txid"}
    
    Write the environment to /tmp/call_env - we probably want a $SCRATCHDIR.
      $ $TESTDIR/cram_helper.py --envout=/tmp/call_env $TESTDIR/client.py \
      > POST /verify/v1/call phone=5035551212 message=message
      200 OK
      {"stat": "OK", "response": {"pin": ".+", "txid": ".+"}} (re)
    We wrote the stdout output to call_env.
      $ cat /tmp/call_env
      '200 OK\n{"stat": "OK", "response": {"pin": "6813", "txid": "df94beb8-46a9-4af4-99ef-6731d0ae67dc"}}\n'
    Use a test-specific helper to output lines that can be exported into the
    environment (sketched after this transcript). It prints assignment lines
    built from the flattened JSON part of the output:
      $ $TESTDIR/env_munger.py /tmp/call_env
      stat=OK
      response_pin=6813
      response_txid=df94beb8-46a9-4af4-99ef-6731d0ae67dc
    Put variables in the environment with a kluge.  This would be easier in
    cram.py.
      $ for i in `$TESTDIR/env_munger.py /tmp/call_env`; do export $i; done
      $ echo $response_txid
      df94beb8-46a9-4af4-99ef-6731d0ae67dc
    Now we can test with that env var.
      $ $TESTDIR/client.py GET /verify/v1/status txid=$response_txid
      200 OK
      {"stat": "OK", "response": {"info": "Call request initialized", "state": "started", "event": "INITIALIZED"}}
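
    The env_munger.py referenced above isn't shown anywhere in the thread, but for this example it boils down to flattening the JSON body into NAME=value lines. A rough sketch, assuming the captured file holds the status line followed by the JSON body:

      #!/usr/bin/env python
      # Rough sketch of an env_munger.py-style helper: flatten the JSON
      # body of the captured output into NAME=value assignment lines.
      import json
      import sys

      def flatten(obj, prefix=''):
          if isinstance(obj, dict):
              for key, value in obj.items():
                  for line in flatten(value, prefix + key + '_'):
                      yield line
          else:
              yield '%s=%s' % (prefix.rstrip('_'), obj)

      def main():
          lines = open(sys.argv[1]).read().splitlines()
          body = json.loads(lines[-1])    # JSON body follows the "200 OK" line
          for line in flatten(body):
              print(line)

      if __name__ == '__main__':
          main()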
    
  5. kra

    I've refined this and it's pretty small; it's actually fairly doable without any changes to cram. I'm teeing the output as you suggested and using a utility to set env vars from that output.

    Setup:
      $ source $TESTDIR/setup.sh
    Start call to get a txid
      $ $TESTDIR/client.py POST /verify/v1/call phone=5035551212 message=message | \
      > tee call_env
      200 OK
      {"stat": "OK", "response": {"pin": ".+", "txid": ".+"}} (re)
    Write txid to environment
      $ `$TESTDIR/client_to_env.py call_env`
    
    Test: valid txid should display status through ended/COMPLETED.
      $ $TESTDIR/client.py GET /verify/v1/status txid=$response_txid
      200 OK
      {"stat": "OK", "response": {"info": "Call request initialized", "state": "started", "event": "INITIALIZED"}}
    

    client_to_env.py is my test helper. It reads the tee'd output and prints "export foo=bar" lines. I evaluate that output rather than sourcing a shell script because I didn't want to parse JSON in the shell.
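
    A client_to_env.py along those lines could be as small as the following rough sketch (same assumptions as before: a status line followed by a JSON body, and values that contain no whitespace):

      #!/usr/bin/env python
      # Rough sketch of a client_to_env.py-style helper: print one
      # "export NAME=value" assignment per field of the JSON "response",
      # for the test to evaluate with backticks as shown above.
      import json
      import sys

      def main():
          lines = open(sys.argv[1]).read().splitlines()
          response = json.loads(lines[-1]).get('response', {})
          for key, value in response.items():
              # Values are assumed whitespace-free (pins, txids); anything
              # fancier would need eval plus proper shell quoting instead.
              print('export response_%s=%s' % (key, value))

      if __name__ == '__main__':
          main()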

    The output parsing will always be test-specific, but turning it into shell lines and evaluating them is a little clumsy. This looks like it would be simpler if put into cram, since you can give an env dict to the subshell - I see that cram already does this for some specific variables. Maybe there could be a way to indicate what the helper command is - the simplest would be a reserved env var, but there could also be prompt syntax.

      $ CRAM_HELPER=client_to_env.py  # client_to_env.py outputs a dict
      $ test_something   # if CRAM_HELPER is set, it will be used to populate env for this subshell

      $(env) cram_helper.py --foo=bar # $(env) means "run this command before all test commands"
      $ test_something 
      $ test_another_thing
      $(env)   # stop with the helper
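
    A loose sketch of what that hook might look like on cram's side (this is not cram's actual code; the function name and the NAME=value output protocol are assumptions):

      import subprocess

      def helper_env(base_env):
          # If CRAM_HELPER is set, run it and merge its NAME=value output
          # into the environment handed to the next command's subshell.
          env = dict(base_env)
          helper = env.get('CRAM_HELPER')
          if helper:
              out = subprocess.check_output(helper, shell=True, env=env)
              for line in out.decode().splitlines():
                  if '=' in line:
                      name, _, value = line.partition('=')
                      env[name] = value
          return env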
    

    OTOH, the current setup isn't so bad, and the dict-to-strings-to-eval part is the same for every test that needs state, so it can live in a shared module. This looks usable - I just find the string output and evaluation a little icky :)
