A package to help:
* Collect results on different software / hardware platforms and merge them into a unified results database
* Manage and record information that drives a progressive refactor
* Compare results changes along the revision history of some library
* Specify a sequence of well defined app states to record
* Compare images
* Recycle a collection of small demo scripts that exercise a library into a semiautomatic test suite: if the demos' results are deemed correct for revision yyy, and the results are the same at revision zzz, then revision zzz should be as correct as revision yyy
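The correctness-transfer rule in the last item can be sketched in a few lines; the dict layout and function name here are illustrative assumptions, not the package's API:

```python
# Illustrative sketch of the correctness-transfer rule: a demo keeps its
# validated status when its result is byte-identical at a later revision.
# (Names and data shapes are assumptions, not the package's API.)
validated = {"demo_a.py": "snapshot_hash_1"}    # results deemed correct at rev yyy
current = {"demo_a.py": "snapshot_hash_1",      # results captured at rev zzz
           "demo_b.py": "snapshot_hash_9"}

def still_correct(demo, validated, current):
    """True iff the demo was validated and its result is unchanged."""
    return demo in validated and validated[demo] == current.get(demo)

print(still_correct("demo_a.py", validated, current))  # -> True
print(still_correct("demo_b.py", validated, current))  # -> False (never validated)
```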
The current development focus is supporting maintenance and testing for a moderately sized library with a small number of demos (190); here, results mean snapshots of screen renders.
While the design and support emerge from the use cases, the code will be kept very simple to facilitate redesign and refactoring.
Dependencies: PIL / Pillow, six
Requires Python 2.6+; Python 3.3+ is supported from v0.2.
The package will not be released until the code stabilizes.
The recommended installation method is the .pth method:
* hg clone https://bitbucket.org/ccanepa/remembercases somedir
* edit your favorite .pth file (or add a new text file with a .pth extension into site-packages), adding the path to somedir on a new line; save, and you are done.
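What the .pth method relies on: a .pth file in site-packages lists one directory per line, and Python appends each to sys.path at startup. A minimal sketch of building that line ("somedir" stands for wherever the repository was cloned):

```python
# Sketch of the .pth mechanism: compute the single line a .pth file needs
# so Python can import the package from the clone directory.
import os
import site

clone_dir = os.path.abspath("somedir")   # wherever `hg clone` put the repo
pth_line = clone_dir + "\n"              # one absolute path per line in a .pth file
site_dirs = site.getsitepackages()       # candidate locations for the .pth file
print("append %r to a .pth file in %s" % (pth_line, site_dirs[0]))
```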
Milestone 1 (done)
* persist information collected
* support to collect info about the presence or absence of certain strings in the demos' code
* support to run scripts in another process, with a timeout, capturing stderr
* support to specify the desired demo states, and drive the screen capture
* basic annotations support
* first automatic snapshot taken
Milestone 2 (working)
* add testinfo and take snapshots for all static demos
* add testinfo and take snapshots for all non-static, non-interactive demos
* add testinfo and support for keyboard-driven, simple interactive demos
* add support for mouse interactions if feasible
* add support for other user-driven events if possible
* all demos with testinfo; at least 80% of demos with meaningful, repeatable snapshots.
* build support for image comparison, use cases are
- same testbed: exact image comparison (done)
- different testbeds: robust measures for images similarity
* build support to efficiently store snapshots coming from different testbeds
* support querying svn for whether a given demo has changed between revisions n and m
* bonus 1: same for hg
* bonus 2: same for git, bzr
* support to facilitate results validation in a secondary testbed when results have been validated in a primary testbed
* same when the only change is the target library revision
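The "same testbed: exact image comparison" case listed above (already done) can be sketched with Pillow, one of the listed dependencies; the function name is illustrative:

```python
# Sketch of exact (pixel-for-pixel) image comparison using Pillow.
# (Illustrative helper, not the package's actual API.)
from PIL import Image, ImageChops

def images_identical(img_a, img_b):
    """True iff two PIL images match exactly, pixel for pixel."""
    if img_a.size != img_b.size or img_a.mode != img_b.mode:
        return False
    # difference() is all-zero exactly when the images match,
    # and getbbox() returns None for an all-zero image.
    return ImageChops.difference(img_a, img_b).getbbox() is None
```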