Commits

Ruben Martinez-Cantin committed 00e5145

Tuning demos

Files changed (5)

doxygen/demos.dox

 
 \section pydemos Python demos
 
-These demos use the Python interface of the library. They can be found in the \c /python subfolder.
+These demos use the Python interface of the library. They can be found
+in the \c /python subfolder.
 
-Make sure that the interface has been generated and that it can be found in the corresponding path (i.e. PYTHONPATH).
+Make sure that the interface has been generated and that it can be
+found in the corresponding path (i.e. PYTHONPATH).
 
 \subsection pyapidemo Interface test
 
-\b demo_quad provides an simple example (quadratic function). It shows the continuous and discrete cases and it also compares the callback and inheritance interfaces. It is the best starting point to start playing with the Python interface.
+\b demo_quad provides a simple example (quadratic function). It shows
+the continuous and discrete cases and also compares the standard
+(Cython) and the object-oriented interfaces. It is the best starting
+point for playing with the Python interface.
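
The contrast between the two calling styles can be sketched without the library: the standard interface takes a plain function, while the object-oriented one wraps the objective in an overridable method. The class and method names below are illustrative stand-ins, not the actual BayesOpt API:

```python
import numpy as np

def quad(x):
    # quadratic test function: minimum value 0.0 at x = 0.5 in every dimension
    return float(np.sum((np.asarray(x) - 0.5) ** 2))

class QuadProblem:
    # object-oriented style: the objective lives in a method a subclass overrides
    # (hypothetical class; the real module defines its own base class)
    def evaluate_sample(self, x):
        return quad(x)

# both styles compute the same objective for the same input
x = [0.1, 0.9]
assert quad(x) == QuadProblem().evaluate_sample(x)
```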
 
-\b demo_dimscaling shows a 20D quadratic function with different smoothness in each dimension. It also show the speed of the library for <em>high dimensional functions</em>.
+\b demo_dimscaling shows a 20-dimensional quadratic function with
+different smoothness in each dimension. It also shows the speed of the
+library for <em>high-dimensional functions</em>.
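
A stand-in for such a function is easy to write down. The demo's own function and scales live in demo_dimscaling.py; the per-dimension weights below are purely hypothetical, with only the minimiser taken from the demo's printout:

```python
import numpy as np

dim = 20
optimum = np.arange(1, 1 + dim, dtype=float)  # minimiser reported by the demo
scales = np.linspace(1.0, 10.0, dim)          # hypothetical per-dimension smoothness

def quad_nd(x):
    # quadratic bowl whose curvature differs in each dimension
    return float(np.mean(((np.asarray(x) - optimum) / scales) ** 2))

assert quad_nd(optimum) == 0.0
```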
 
-\b demo_distance is equivalent to the demo_quad example, but it includes a penalty term with respect to the distance between the current and previous sample. For example, it can be used to model sampling strategies which includes a mobile agent, like a robotic sensor as seen in \cite Marchant2012.
+\b demo_distance is equivalent to the demo_quad example, but it
+includes a penalty term on the distance between the current and
+previous sample. For example, it can be used to model sampling
+strategies that include a mobile agent, such as a robotic sensor, as
+seen in \cite Marchant2012.
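
The penalised objective has this general shape; the base function and the penalty coefficient below are illustrative choices, not the demo's actual values:

```python
import numpy as np

def quad(x):
    # illustrative base objective
    return float(np.sum((np.asarray(x) - 0.5) ** 2))

def penalized(x, x_prev, func=quad, coef=0.5):
    # base objective plus a cost proportional to the travel distance
    # from the previous sample (coefficient chosen arbitrarily)
    return func(x) + coef * float(np.linalg.norm(np.asarray(x) - np.asarray(x_prev)))

# sampling far from the previous point is more expensive
near = penalized([0.4, 0.4], x_prev=[0.4, 0.5])
far = penalized([0.4, 0.4], x_prev=[0.9, 0.9])
assert far > near
```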
 
 \subsection pyproc Multiprocess demo
 
-\b demo_multiprocess is a simple example that combines BayesOpt with the brilliant multiprocessing library. It shows how simple BayesOpt can be used in a parallelized setup, where one process is dedicated for the BayesOpt and the rests are dedicated to function evaluations.
+\b demo_multiprocess is a simple example that combines BayesOpt with
+the standard Python multiprocessing library. It shows how easily
+BayesOpt can be used in a parallelized setup, where one process is
+dedicated to BayesOpt and the rest are dedicated to function
+evaluations. It also shows that BayesOpt is thread-safe.
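
The division of labour can be sketched with the standard library alone. Here a worker pool plays the role of the evaluation processes, and a fixed candidate list stands in for the points that BayesOpt would propose in the real demo:

```python
from multiprocessing import Pool

def expensive_eval(x):
    # stand-in for a costly objective evaluated in a worker process
    return (x - 0.5) ** 2

if __name__ == "__main__":
    # in the real demo one process runs BayesOpt and proposes the points;
    # here we just evaluate a fixed set of candidates in parallel
    candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
    with Pool(processes=2) as pool:
        values = pool.map(expensive_eval, candidates)
    best = candidates[values.index(min(values))]
    print(best)
```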
 
 \subsection pycam Computer Vision demo
 
-\b demo_cam is a demonstration of the potetial of BayesOpt for parameter tuning. The advantage of using BayesOpt versus traditional strategies is that it only requires knowledge of the <em>desired behavior</em>, while traditional methods for parameter tuning requires deep knowledge of the algorithm and the meaning of the parameters.
+\b demo_cam is a demonstration of the potential of BayesOpt for
+parameter tuning. The advantage of using BayesOpt over traditional
+strategies is that it only requires knowledge of the <em>desired
+behavior</em>, whereas traditional methods for parameter tuning require
+deep knowledge of the algorithm and the meaning of its parameters.
 
-In this case, it takes a simple example (image binarization) and show how a simple behavior (balanced white/black result) matches the result of the adaptive thresholding from Otsu's method -default in SimpleCV-. Besides, it find the optimal with few samples (typically between 10 and 20)
+In this case, it takes a simple example (image binarization) and shows
+how optimizing a simple behavior (a balanced white/black result) can
+match the result of the adaptive thresholding from Otsu's method (the
+default in SimpleCV). Besides, it finds the optimum with few samples
+(typically between 10 and 20). It can also be used as a reference for
+hyperparameter optimization.
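
The tuning objective behind the demo can be sketched in plain NumPy: binarise at a candidate threshold and score how far the white/black split is from 50/50. The synthetic image and the threshold grid are stand-ins for the webcam frames and the search the demo actually performs:

```python
import numpy as np

def balance_cost(threshold, img):
    # fraction of white pixels after binarisation; cost 0.0 is a perfect 50/50 split
    white = float(np.mean(img >= threshold))
    return abs(white - 0.5)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))  # synthetic stand-in for a camera frame

# for this image the midpoint threshold is close to balanced
costs = {t: balance_cost(t, img) for t in (64, 128, 192)}
assert costs[128] < costs[64] and costs[128] < costs[192]
```

The objective encodes only the desired behavior (a balanced result); the optimizer never needs to know what the threshold means internally.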
 
-demo_cam requires SimpleCV and a webcam.
+demo_cam requires SimpleCV and a compatible webcam.
 
 \section matdemos MATLAB/Octave demos
 
-These demos use the Matlab interface of the library. They can be found in the \c /matlab subfolder.
+These demos use the Matlab interface of the library. They can be found
+in the \c /matlab subfolder.
 
-Make sure that the interface has been generated. If the library has been generated as a shared library, make sure that it can be found in the corresponding path (i.e. LD_LIBRARY_PATH in Linux/MacOS) before running MATLAB/Octave.
+Make sure that the interface has been generated. If the library has
+been built as a shared library, make sure that it can be found in the
+corresponding path (e.g., LD_LIBRARY_PATH on Linux/MacOS) before
+running MATLAB/Octave.
 
 \subsection matapidemo Interface test
 
-\b runtest shows the discrete and continuous interface. The objective function can be selected among all the functions defined in the \c /matlab/testfunctions subfolder, which includes a selection of standard test functions for nonlinear optimization:
+\b demo_test shows the discrete and continuous interfaces. The
+objective function can be selected among all the functions defined in
+the \c /matlab/testfunctions subfolder, which includes a selection of
+standard test functions for nonlinear optimization. By default, the
+test runs these functions:
 
-- Quadratic function
 - Branin
+- Quadratic function (discrete version)
+- Hartmann 6D
+
+However, these functions are also available for testing:
+
+- Ackley
+- Camelback
 - Langermann
 - Michalewicz
 - Rosenbrock

examples/bo_compare.cpp

 
   /* Branin */
   log.open("branin.log");
+  par.n_init_samples = 5;
+  par.n_iterations = 190;
 
   for (size_t ii = 0; ii < 10; ++ii)
     {
       par.random_seed = ii;
-      par.n_iterations = 190;
       BraninNormalized branin(par);
       vectord result(2);
 
 
   /* Camel */
   log.open("camel.log");
+  par.n_init_samples = 5;
+  par.n_iterations = 90;
 
   for (size_t ii = 0; ii < 10; ++ii)
     {
       par.random_seed = ii;
-      par.n_iterations = 90;
       ExampleCamelback camel(par);
       vectord result(2);
 
 
   /* Hart */
   log.open("hart.log");
+  par.n_init_samples = 10;
+  par.n_iterations = 190;
 
   for (size_t ii = 0; ii < 10; ++ii)
     {
       par.random_seed = ii;
-      par.n_iterations = 190;
       ExampleHartmann6 hart(par);
       vectord result(6);
 

matlab/demo_test.m

+% 
+% -------------------------------------------------------------------------
+%    This file is part of BayesOpt, an efficient C++ library for 
+%    Bayesian optimization.
+%
+%    Copyright (C) 2011-2014 Ruben Martinez-Cantin <rmcantin@unizar.es>
+%
+%    BayesOpt is free software: you can redistribute it and/or modify it 
+%    under the terms of the GNU General Public License as published by
+%    the Free Software Foundation, either version 3 of the License, or
+%    (at your option) any later version.
+%
+%    BayesOpt is distributed in the hope that it will be useful, but 
+%    WITHOUT ANY WARRANTY; without even the implied warranty of
+%    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+%    GNU General Public License for more details.
+%
+%    You should have received a copy of the GNU General Public License
+%    along with BayesOpt.  If not, see <http://www.gnu.org/licenses/>.
+% ------------------------------------------------------------------------
+%
+clear all, close all
+addpath('testfunctions')
+
+params.n_iterations = 190;
+params.n_init_samples = 10;
+params.crit_name = 'cEI';
+params.surr_name = 'sStudentTProcessNIG';
+params.noise = 1e-6;
+params.kernel_name = 'kMaternARD5';
+params.kernel_hp_mean = [1];
+params.kernel_hp_std = [10];
+params.verbose_level = 1;
+params.log_filename = 'matbopt.log';
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+disp('Continuous optimization');
+fun = 'branin'; n = 2;
+lb = zeros(n,1);
+ub = ones(n,1);
+
+tic;
+bayesoptcont(fun,n,params,lb,ub)
+toc;
+disp('Press ENTER');
+pause;
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+disp('Discrete optimization');
+% We still use branin
+% The set of points must be numDimension x numPoints.
+np = 100;
+xset = repmat((ub-lb),1,np) .* rand(n,np) + repmat(lb,1,np);
+
+tic;
+bayesoptdisc(fun, xset, params)
+toc;
+yset = zeros(np,1);
+for i=1:np
+    yset(i) = feval(fun,xset(:,i));
+end;
+[y_min,id] = min(yset);
+disp('Actual optimal');
+disp(xset(:,id));
+disp(y_min);
+disp('Press ENTER');
+pause;
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+disp('Continuous optimization');
+fun = 'hartmann'; n = 6;
+lb = zeros(n,1);
+ub = ones(n,1);
+
+tic;
+bayesoptcont(fun,n,params,lb,ub)
+toc;

matlab/runtest.m

-% 
-% -------------------------------------------------------------------------
-%    This file is part of BayesOpt, an efficient C++ library for 
-%    Bayesian optimization.
-%
-%    Copyright (C) 2011-2014 Ruben Martinez-Cantin <rmcantin@unizar.es>
-%
-%    BayesOpt is free software: you can redistribute it and/or modify it 
-%    under the terms of the GNU General Public License as published by
-%    the Free Software Foundation, either version 3 of the License, or
-%    (at your option) any later version.
-%
-%    BayesOpt is distributed in the hope that it will be useful, but 
-%    WITHOUT ANY WARRANTY; without even the implied warranty of
-%    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-%    GNU General Public License for more details.
-%
-%    You should have received a copy of the GNU General Public License
-%    along with BayesOpt.  If not, see <http://www.gnu.org/licenses/>.
-% ------------------------------------------------------------------------
-%
-clear all, close all
-addpath('testfunctions')
-
-params.n_iterations = 190;
-params.n_init_samples = 10;
-params.crit_name = 'cEI';
-params.surr_name = 'sStudentTProcessNIG';
-params.noise = 1e-6;
-params.kernel_name = 'kMaternARD5';
-params.kernel_hp_mean = [1];
-params.kernel_hp_std = [10];
-params.verbose_level = 1;
-params.log_filename = 'matbopt.log';
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-disp('Continuous optimization');
-fun = 'branin'; n = 2;
-lb = zeros(n,1);
-ub = ones(n,1);
-
-tic;
-bayesoptcont(fun,n,params,lb,ub)
-toc;
-disp('Press INTRO');
-pause;
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-disp('Discrete optimization');
-% We still use branin
-% The set of points must be numDimension x numPoints.
-np = 100;
-xset = repmat((ub-lb),1,np) .* rand(n,np) - repmat(lb,1,np);
-
-tic;
-bayesoptdisc(fun, xset, params)
-toc;
-yset = zeros(np,1);
-for i=1:np
-    yset(i) = feval(fun,xset(:,i));
-end;
-[y_min,id] = min(yset);
-disp('Actual optimal');
-disp(xset(:,id));
-disp(y_min);
-disp('Press INTRO');
-pause;
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-disp('Continuous optimization');
-fun = 'hartmann'; n = 6;
-lb = zeros(n,1);
-ub = ones(n,1);
-
-tic;
-bayesoptcont(fun,n,params,lb,ub)
-toc;

python/demo_dimscaling.py

 # We decided to change some of them
 params['n_init_samples'] = 150
 params['n_iter_relearn'] = 20
-#params['noise'] = 0.01
-params['kernel_name'] = "kMaternISO3"
+params['noise'] = 1e-10
+params['kernel_name'] = "kMaternISO5"
 params['kernel_hp_mean'] = [1]
 params['kernel_hp_std'] = [5]
 params['surr_name'] = "sStudentTProcessNIG"
 
 print "Global optimal", 0, np.arange(1,1+dim)
 
-print "Distance", math.sqrt(mvalue*dim)
+print "Y Gap", mvalue
+print "X Gap", math.sqrt(mvalue*dim)