
Installation

In the following, we describe how to install PROST on an Ubuntu system. Over the last few years, we have had several Ubuntu versions installed and never encountered any problems with the installation routine, so we assume that it works on any sufficiently recent version. At the moment, no other operating system is supported, but we are working on the integration of a patch that allows PROST to run on Windows (see issue #31 for more information).

PROST is distributed in the hope that it will be useful to anyone, but without any warranty. Even though the source files in the repository do not contain licensing information at the moment, the planner is published under the GNU General Public License 3 (GPL).


Dependencies

  • Mercurial
  • g++
  • make
  • bison
  • flex
  • BuDDy (Ubuntu package libbdd-dev)

All dependencies are installed by the following command:

$ sudo apt-get install mercurial g++ make bison flex libbdd-dev
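
If you want to double-check that the toolchain is available before continuing, each of the following commands should print a version or status string (the exact output depends on your Ubuntu release); dpkg -s is used here only to confirm that the BuDDy development package was installed:

$ hg --version
$ g++ --version
$ make --version
$ dpkg -s libbdd-dev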

Obtaining PROST

The source code is made available in a public Mercurial repository. Clone it into the directory PROST_ROOT with:

$ hg clone http://hg@bitbucket.org/tkeller/prost PROST_ROOT

Compilation

The planner consists of two parts: a parsing component, which can be found in the PROST_ROOT/src/rddl_parser directory, and a search component in PROST_ROOT/src/search. Both components are compiled by typing

$ make

in the respective directory to compile in RELEASE mode, or

$ make debug

for DEBUG mode. If you plan to perform experiments with PROST, please use the RELEASE mode. Both components must be compiled to use PROST.
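
For example, assuming you cloned the planner into PROST_ROOT as described above, both components can be compiled in RELEASE mode with:

$ cd PROST_ROOT/src/rddl_parser
$ make
$ cd ../search
$ make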


Running PROST

PROST is an online planner that interleaves planning and execution, so it requires interaction with a simulation environment. We use the server from the rddlsim project. Please follow the installation instructions by Scott Sanner on the rddlsim project page to install rddlsim in the directory RDDLSIM_ROOT, and create a symlink to it by executing

$ ln -s RDDLSIM_ROOT ./rddlsim

in the PROST_ROOT/testbed folder.
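
Assuming the paths above, you can verify that the link was created correctly; ls -l should show rddlsim pointing to RDDLSIM_ROOT:

$ cd PROST_ROOT/testbed
$ ls -l rddlsim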

Creating this symlink enables the use of the run-server script in the testbed folder. To start the server with all instances from IPPC 2011 or 2014, execute

$ ./run-server

and to start the server with all instances that can be found in the directory BENCHMARK_FOLDER/rddl/, run

$ ./run-server BENCHMARK_FOLDER 2323 30 0 0 0

For instance, to start the server with the instances from IPPC2011, run

$ ./run-server benchmarks/ippc2011

(Please have a look at the rddlsim documentation for information on the remaining parameters).

PROST consists of two components: the parser (in src/rddl_parser) and the search component (in src/search). Both are called sequentially by the plan.py script:

$ ./plan.py INSTANCE_NAME CONFIG [HOSTNAME] [PORT]

Here, INSTANCE_NAME is the name of an instance loaded with rddlsim (e.g., elevators_inst_mdp__1.rddl for the first instance of the elevators domain of IPPC 2011). If you use PROST this way, the parser will create an output file that is passed to the search component. At the end of the run, that output file is deleted.

CONFIG is the configuration you want to use for the search component. To run the PROST planner in the version that was used at IPPC 2011 (IPPC 2014) with a seed of 1, replace CONFIG with "[PROST -s 1 -se [IPPC2011]]" ("[PROST -s 1 -se [IPPC2014]]"). HOSTNAME and PORT are optional and refer to the host and port where rddlsim expects the communication with PROST; by default, these are "localhost" and "2323". If you don't specify these parameters when starting rddlsim, you don't have to specify them for PROST either. For more information on the search parameters of PROST, and to see all search algorithms and heuristics that are implemented, run plan.py without any arguments.
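
For example, the following call is equivalent to relying on the defaults, but passes the hostname and port explicitly (the values shown are just the defaults mentioned above):

$ ./plan.py elevators_inst_mdp__1 "[PROST -s 1 -se [IPPC2011]]" localhost 2323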

To summarize, the following command runs the PROST version that was used at IPPC 2014 on the first instance of the elevators domain seeded with 1:

$ ./plan.py elevators_inst_mdp__1 "[PROST -s 1 -se [IPPC2014]]"
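
Putting the pieces together, a complete session could look as follows. This sketch assumes that rddlsim has been linked into PROST_ROOT/testbed as described above, and that plan.py is called from that same testbed folder; the server runs in one terminal and the planner in a second one:

# Terminal 1: start rddlsim with the IPPC 2011 benchmarks
$ cd PROST_ROOT/testbed
$ ./run-server benchmarks/ippc2011

# Terminal 2: run the IPPC 2011 configuration of PROST with seed 1
# (plan.py is assumed to reside in PROST_ROOT/testbed)
$ cd PROST_ROOT/testbed
$ ./plan.py elevators_inst_mdp__1 "[PROST -s 1 -se [IPPC2011]]"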

Note that there is also a README file in the repository. In case neither this description nor the README file are sufficient for your needs, please do not hesitate to contact us via email.
