Benchmarking made easy
The basic ideas that drove the development:
- Benchmarking and (especially) interpreting benchmark results is always
  a monkey business. The tool should produce raw numbers, letting the
  user apply whichever statistics she needs to make up (desired) results.
- Benchmark results should be kept and managed in a single place so one
  can view and retrieve all past benchmark results in much the same
  way as one can view and retrieve past versions of the software from
  a source code management tool.
- simple - creating a benchmark is as simple as writing a method in a class
- flexible - special set-up and/or warm-up routines can be specified at the benchmark level, as well as a set of parameters to allow fine-grained measurements under different conditions
- batch runner - includes a batch runner allowing one to run benchmarks from the command line or on CI servers such as Jenkins.
- web - comes with a simple web interface to gather and process benchmark results. Example...
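To make the "benchmark is just a method" and set-up/warm-up ideas concrete, here is a minimal sketch in plain Java. This is NOT the tool's real API; the class and method names (SortBenchmark, setUp, benchSort) are hypothetical, and the timing loop only illustrates the shape of a set-up routine, a warm-up phase, and a measured method that yields a raw number.

```java
import java.util.Arrays;
import java.util.Random;

public class SortBenchmark {
    private int[] data;

    // set-up routine: prepare input outside the measured region
    void setUp() {
        Random rnd = new Random(42);
        data = rnd.ints(100_000).toArray();
    }

    // the benchmark itself: one ordinary method returning a raw measurement
    long benchSort() {
        int[] copy = Arrays.copyOf(data, data.length);
        long start = System.nanoTime();
        Arrays.sort(copy);
        return System.nanoTime() - start; // raw nanoseconds, no statistics applied
    }

    public static void main(String[] args) {
        SortBenchmark b = new SortBenchmark();
        b.setUp();
        // warm-up: run a few times so the JIT settles before measuring
        for (int i = 0; i < 3; i++) b.benchSort();
        long elapsed = b.benchSort();
        System.out.println("sort took " + elapsed + " ns");
    }
}
```

A real harness would discover such methods automatically and record each raw timing, leaving any aggregation (mean, median, percentiles) to the user.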
Supported Languages / Runtimes
Planned (would be nice if somebody did it)
- Java (on hold)
...can be found on the wiki.
- Jan Vraný
<jan.vrany [*] fit.cvut.cz>
- Marcel Hlopko
<marcel [*] hlopko.com>