A Python library to build empirical performance models from performance data gathered with ACLib.

Status master branch: Codeship Status for mlindauer/EPM

What are Empirical Performance Models for Algorithm Configuration

This repository contains code to build empirical performance models (EPMs) from data collected by running configurators on ACLib benchmark scenarios. EPMs are regression models that characterize a given algorithm's performance across problem instances and/or parameter settings. One application we pursue here is to construct cheap-to-evaluate surrogate benchmarks by replacing expensive target algorithm runs with EPM predictions. Such benchmark scenarios share the same configuration space as the real benchmark, but are much less expensive to run. They can be useful for development, unit testing, and even for comparing configuration algorithms.
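To make the idea concrete, the following toy sketch shows what an EPM is at its core: a regression model mapping a (configuration, instance features) vector to a predicted runtime. This is not the library's actual model (which is a quantile regression forest from random_forest_run); a trivial 1-nearest-neighbour predictor stands in for the forest here purely for illustration.

```python
# Toy illustration of the EPM idea, NOT the library's actual model:
# a regression model mapping (configuration values + instance features)
# to a predicted runtime.

def fit_epm(X, y):
    """X: list of feature vectors (config values + instance features), y: runtimes."""
    data = list(zip(X, y))

    def predict(x):
        # Predict the runtime of the closest observed (config, instance) point.
        return min(data, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

    return predict

# Toy data: one configuration parameter and one instance feature per point.
model = fit_epm([[0.1, 3.0], [0.9, 7.0]], [12.5, 830.0])
```

A surrogate benchmark then answers "how long would this configuration take on this instance?" by calling such a model instead of running the real solver.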


Requirements

  • numpy >= 1.9.2
  • scipy >= 0.17.0
  • nose >= 1.3.4
  • scikit-learn >= 0.15.2
  • mattdaemon >= 1.1.1

  • pyrfr >= 0.1.0 (see below)

# Installing pyrfr can be a bit tricky as we need an older version. The following usually works:
git clone https://github.com/automl/random_forest_run.git
cd random_forest_run/
git checkout c4e7307
git cherry-pick 0ea8e93
python setup.py install


Optional requirements:

  • matplotlib >= 1.4.3
  • tabulate >= 0.7.5

How to install

git clone git@bitbucket.org:mlindauer/epm.git
cd epm
python setup.py install

Optionally, you can run the tests with python setup.py test

Note: Some tests may fail because they check for exact output, which can differ across numpy/scipy/scikit-learn versions.

How to build EPMs

We provide an example in epm/example with subsampled toy data for the CPLEX RCW scenario. The following guide will walk you through the steps of training an EPM and using it as a benchmark scenario.

[1] Data format

Besides *.json files containing performance data, we also require two more files:

  • param.pcs defines the configuration space with all parameter ranges and potential conditions
  • instance-features.txt lists instance features for each instance
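The following hypothetical reader illustrates one way such an instance-features file could be consumed, assuming a CSV layout with a header row (instance name followed by feature names); the actual ACLib file format may differ in details.

```python
import csv

# Hypothetical reader for an instance-features file, assuming a CSV layout:
# a header row ("instance,feat1,feat2,..."), then one row per instance.
# The real ACLib format may differ; this only illustrates the structure.
def load_instance_features(path):
    with open(path) as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    # Map instance name -> {feature name: numeric value}
    return {r[0]: dict(zip(header[1:], map(float, r[1:]))) for r in body}
```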

[2] Train an EPM.

As a machine learning model, we use quantile regression forests (QRFs) as implemented in random_forest_run. The next command trains a QRF on the data provided in ./CPLEX-RCW-cont_toydata/SMAC.json and saves it as CPLEX-RCW.par10.model.pkl.

The script will load the data and impute values for right-censored data points (runs where we only observed a lower bound on the running time). You can run this command from examples/:

python ../scripts/surrogate/train_surrogate_PAR10.py --cutoff 10000 --pcs CPLEX-RCW-cont_toydata/param.pcs --features CPLEX-RCW-cont_toydata/instance-features.txt --model rfrq --save ./CPLEX-RCW --par 10 CPLEX-RCW-cont_toydata/SMAC.json

NOTE: This might take a while.
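The --par 10 option refers to the PAR10 score, a standard metric in algorithm configuration: a penalized average runtime in which timed-out runs are counted as 10 times the cutoff. A minimal sketch of the metric:

```python
# PAR10 (penalized average runtime with factor 10): runs that hit the
# cutoff are counted as 10x the cutoff instead of their observed time.
def par10(runtimes, cutoff):
    return sum(t if t < cutoff else 10 * cutoff for t in runtimes) / len(runtimes)
```

So with a cutoff of 10000s, one solved run at 100s plus one timeout averages to (100 + 100000) / 2 = 50050.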

[3] Use an EPM as a surrogate.

Next, you can create a daemon and use the trained model as a surrogate benchmark. First, we start the process in the background; second, we send a query to the process. We provide two scripts for this, namely surrogate_daemonizer.py and surrogate_communicator.py.

Once started, the process runs as a daemon and communicates over a TCP/IP connection. It searches for a free port on its own and stores the port number in a file $PORTFILE (unique for each daemon). This file has to be specified beforehand:

export PORTFILE=/tmp/portfile
python ../scripts/surrogate/surrogate_daemonizer.py --pkl ./CPLEX-RCW.par10.model.pkl --dir ./ --pid 123 --start
# Check whether daemon is alive
python ../scripts/surrogate/surrogate_daemonizer.py --pkl ./CPLEX-RCW.par10.model.pkl --dir ./ --pid 123 --status
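The portfile mechanism described above can be sketched as follows. The actual wire protocol is handled by surrogate_communicator.py; this hypothetical client only illustrates reading the daemon's port from $PORTFILE and doing a TCP round trip.

```python
import socket

# Hypothetical client sketch of the portfile mechanism: the daemon writes
# its chosen port number into $PORTFILE, and a client reads it back to
# connect. The real protocol lives in surrogate_communicator.py.
def query_daemon(portfile, message):
    with open(portfile) as f:
        port = int(f.read().strip())
    with socket.create_connection(("localhost", port)) as s:
        s.sendall(message.encode())
        s.shutdown(socket.SHUT_WR)   # signal end of request
        return s.recv(4096).decode()  # sketch: single read of the reply
```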

If that worked, you can look at the daemon output in ./daemon_123.pid.err and query it for the performance of a configuration on an instance:

python ../scripts/surrogate/surrogate_communicator.py <instance_name> <instance_specific_info> <cutoff> <runlength> <seed> [-<param_name> <value>]*

Note: Even slight errors in the query will make the daemon crash, but you will always find an error message in the output file. This is intended to prevent incorrect usage.
Note: By default, the daemon stops itself after idling for 600s.
Note: If the daemon crashed, check the output in ./daemon_123.pid.err. To restart, delete the portfile and all output files, then start the daemon again.

A query would then be for example:

python ../scripts/surrogate/surrogate_communicator.py instances/mip/data/RCW2/RCW-INSTANCES-2/map1027-s200538-b04-h40-n5.lp 0 10000.0 2147483647 -1 -barrier_algorithm 0 -barrier_crossover 0 -barrier_limits_corrections -1 -barrier_limits_growth 1.0E12 -barrier_ordering 0 -barrier_startalg 1 -emphasis_memory no -emphasis_mip 0 -emphasis_numerical no -feasopt_mode 0 -lpmethod 0 -mip_cuts_cliques 0 -mip_cuts_covers 0 -mip_cuts_disjunctive 0 -mip_cuts_flowcovers 0 -mip_cuts_gomory 0 -mip_cuts_gubcovers 0 -mip_cuts_implied 0 -mip_cuts_mcfcut 0 -mip_cuts_mircut 0 -mip_cuts_pathcut 0 -mip_cuts_zerohalfcut 0 -mip_limits_aggforcut 3 -mip_limits_cutpasses 0 -mip_limits_cutsfactor 4.0 -mip_limits_gomorycand 200 -mip_limits_gomorypass 0 -mip_limits_submipnodelim 500 -mip_ordertype 0 -mip_strategy_backtrack 0.9999 -mip_strategy_bbinterval 7 -mip_strategy_branch 0 -mip_strategy_dive 0 -mip_strategy_file 1 -mip_strategy_fpheur 0 -mip_strategy_heuristicfreq 0 -mip_strategy_lbheur no -mip_strategy_nodeselect 1 -mip_strategy_presolvenode 0 -mip_strategy_probe 0 -mip_strategy_rinsheur 0 -mip_strategy_search 0 -mip_strategy_startalgorithm 0 -mip_strategy_subalgorithm 0 -mip_strategy_variableselect 0 -network_netfind 2 -network_pricing 0 -preprocessing_aggregator -1 -preprocessing_boundstrength -1 -preprocessing_coeffreduce 2 -preprocessing_dependency -1 -preprocessing_dual 0 -preprocessing_fill 10 -preprocessing_linear 1 -preprocessing_numpass -1 -preprocessing_reduce 3 -preprocessing_relax -1 -preprocessing_repeatpresolve -1 -preprocessing_symmetry -1 -read_scale 0 -sifting_algorithm 0 -simplex_crash 1 -simplex_dgradient 0 -simplex_limits_perturbation 0 -simplex_limits_singularity 10 -simplex_perturbation_switch no -simplex_pgradient 0 -simplex_pricing 0 -simplex_refactor 0 -simplex_tolerances_markowitz 0.01
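The query layout above follows the usual ACLib target-algorithm call convention: five positional fields (instance, instance-specific info, cutoff, runlength, seed) followed by "-name value" pairs. A hypothetical parser, purely to illustrate that layout:

```python
# Hypothetical parser for the query layout shown above: five positional
# fields, then alternating "-name value" parameter pairs. Illustrative
# only; the real parsing is done inside surrogate_communicator.py.
def parse_query(argv):
    instance, specifics, cutoff, runlength, seed = argv[:5]
    names = [a.lstrip("-") for a in argv[5::2]]   # every other token: -name
    values = argv[6::2]                            # the token after each name
    return instance, float(cutoff), int(seed), dict(zip(names, values))
```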

When finished, you can stop the daemon:

python ../scripts/surrogate/surrogate_daemonizer.py --pkl ./CPLEX-RCW.par10.model.pkl --dir ./ --pid 123 --stop

[4] Use an EPM as a surrogate benchmark for algorithm configuration.

You can now combine the steps above to benchmark algorithm configuration methods: (1) train an EPM, (2) run it in the background as a surrogate benchmark, and (3) make your configurator query the EPM instead of running the real target algorithm.

[5] Use our data.

We provide the data on the accompanying ml4aad website.