Ruben Martinez-Cantin avatar Ruben Martinez-Cantin committed d01a8e5

Improved documentation. Added new kernel functions (poly, RQ)

Files changed (13)

   par.alpha = PRIOR_ALPHA;
   par.beta = PRIOR_BETA;
   par.noise = DEFAULT_NOISE;
-  par.surr_name = "STUDENT_T_PROCESS_JEFFREYS";
+  par.surr_name = "sStudentTProcessJef";
   par.kernel.name = "kSum(kSEISO,kConst)";
   par.mean.name = "mSum(mConst,mConst)";
   par.l_type = L_ML;
   par.mean.coef_std[0] = MEAN_SIGMA;
   par.mean.n_coef = 1;
   par.noise = DEFAULT_NOISE;
-  par.surr_name = "STUDENT_T_PROCESS_JEFFREYS";
+  par.surr_name = "sStudentTProcessJef";
   par.n_iterations = 20;       // Number of iterations
   par.n_init_samples = 20;
   /*******************************************/

app/bo_display.cpp

   parameters.n_init_samples = 10;
   parameters.n_iter_relearn = 0;
   parameters.n_iterations = 300;
-  //parameters.surr_name = "STUDENT_T_PROCESS_NORMAL_INV_GAMMA";
-  parameters.surr_name = "GAUSSIAN_PROCESS_NORMAL";
+  //parameters.surr_name = "sStudentTProcessNIG";
+  parameters.surr_name = "sGaussianProcessNormal";
 
   parameters.crit_name = "cLCB";
   parameters.crit_params[0] = 5;
   parameters.n_crit_params = 1;
 
+  parameters.kernel.name = "kSum(kPoly3,kRQISO)";
   parameters.kernel.hp_mean[0] = 1;
+  parameters.kernel.hp_mean[1] = 1;
+  parameters.kernel.hp_mean[2] = 1;
+  parameters.kernel.hp_mean[3] = 1;
   parameters.kernel.hp_std[0] = 5;
-  parameters.kernel.n_hp = 1;
+  parameters.kernel.hp_std[1] = 5;
+  parameters.kernel.hp_std[2] = 5;
+  parameters.kernel.hp_std[3] = 5;
+  parameters.kernel.n_hp = 4;
 
   parameters.verbose_level = 2;
 
   bopt_params parameters = initialize_parameters_to_default();
   parameters.n_init_samples = 10;
   parameters.n_iterations = 300;
-  parameters.surr_name = "GAUSSIAN_PROCESS_ML";
+  parameters.surr_name = "sGaussianProcessML";
   /*  parameters.kernel.hp_mean[0] = 1.0;
   parameters.kernel.hp_std[0] = 100.0;
   parameters.kernel.n_hp = 1;

doxygen/introduction.dox

 in practice, there might be some mismodelling errors which can lead to
 instability of the recursion if neglected.
 
-\section models Bayesian optimization general model
+\section modbopt Bayesian optimization general model
 
 In order to simplify the description, we are going to use a special
 case of Bayesian optimization model defined previously which

doxygen/models.dox

-/*!
+/*! \page modelopt Models and functions
+
+This library was originally developed as part of a robotics research
+project \cite MartinezCantin09AR \cite MartinezCantin07RSS, where a
+Gaussian process with hyperpriors on the mean and signal covariance
+parameters was used. The metamodel was then constructed using the
+maximum a posteriori (MAP) estimate of the parameters. At that time, it
+only supported one kernel function, one mean function and one criterion.
+
+However, the library has now grown to support many more surrogate
+models, with different distributions (Gaussian processes,
+Student's t processes, etc.), with many kernels and mean
+functions. It also provides different criteria (even some combined
+criteria), so the library can be applied to any problem involving
+bounded optimization, stochastic bandits, active learning for
+regression, etc.
+
+\section surrmod Surrogate models
+
+As seen in Section \ref modbopt, this library implements only one
+general regression model. However, we can assign a set of priors on
+the parameters of the model \f$\mathbf{w}\f$, \f$\sigma_s^2\f$ (the
+kernel hyperparameters will be discussed in Section \ref
+learnmod). Thus, the options are:
+
+\li "sGaussianProcess": a standard Gaussian process where the
+hyperparameters are known.
+\li "sGaussianProcessML": a standard Gaussian process where the
+hyperparameters are estimated directly from data using maximum
+likelihood estimates.
+\li "sGaussianProcessNormal": a Gaussian process with a Normal 
+prior on the mean function parameters \f$\mathbf{w}\f$ and known 
+\f$\sigma_s^2\f$.
+\li "sStudentTProcessJef": in this case we use the Jeffreys prior 
+for \f$\mathbf{w}\f$ and \f$\sigma_s^2\f$. This is a kind of 
+uninformative prior which is invariant to reparametrizations. Once
+we set a prior on \f$\sigma_s^2\f$ the posterior becomes a Student's
+t Process.
+\li "sStudentTProcessNIG": in this case we standard conjugate priors,
+that is, a Normal prior on \f$\mathbf{w}\f$ and a Inverse Gamma on 
+\f$\sigma_s^2\f$. Therefore, the posterior is again a Student's t process.
+
+Gaussian processes are very general models that can achieve good
+performance at a reasonable computational cost. However, Student's t
+processes, thanks to the hierarchical structure of priors, provide an
+even more general setup at a minor extra cost. Furthermore, the
+Student's t distribution is robust to outliers and heavy tails in the
+data.
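+
+As a brief illustrative sketch (not a complete program; the field names
+follow the C examples in this manual and the values are arbitrary), the
+surrogate model is selected by name:
+
+\code
+bopt_params par = initialize_parameters_to_default();
+par.surr_name = "sStudentTProcessJef";  // Jeffreys prior -> Student's t posterior
+par.noise = 0.001;                      // observation noise
+\endcode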
+
+\section kermod Kernel (covariance) models
+
+One of the critical components of Gaussian and Student's t processes
+is the definition of the kernel function, which determines the
+correlation between points in the input space. As a correlation
+function, the kernel must satisfy a set of properties (e.g., being
+positive definite). All the available kernel models and their
+combinations satisfy these restrictions.
+
+The functions with \b "ISO" in their name are \a isotropic function,
+that is, they share a single set of parameters for all the dimensions
+of the input space.
+
+The functions with \b "ARD" in their name use <em>Automatic Relevance
+Determination</em>, that is, they use independent parameters for every
+dimension of the problem. Therefore, they can be use to find the \a
+relevance of the each feature in the input space. In the limit, this
+can be used for feature selection.
+
+\subsection singker Atomic kernels
+\li "kConst": a simple constant function.
+\li "kLinear", "kLinearARD": a linear function.
+\li "kMaternISO1",
+"kMaternISO3","kMaternISO5","kMaternARD1","kMaternARD3","kMaternARD5":
+Matern kernel functions. The number divided by 2 represents the order
+of the function. See \cite Rasmussen:2006 for a description.
+\li "kPolyX": Polynomial kernel function. X is a number 1-6 which
+represents the exponent of the function.
+\li "kSEARD","kSEISO": Squared exponential kernel, also known as
+Gaussian kernel.
+\li "kRQISO": Rational quadratic kernel, also known as Student's t
+kernel.
+
+\subsection combker Binary kernels
+These kernels allow us to combine some of the previous kernels.
+\li "kSum": Sum of kernels.
+\li "kProd": Product of kernels.
+
+Note that the binary kernels only admit two terms. However, we can
+combine them for more complex operations. For example, if we write:
+
+"kSum(kMaternISO3,kSum(kRQISO,kProd(kPoly4,kConst)))"
+
+it represents the expression: Matern(3) + RationalQuadratic + C*Polynomial^4 
+
+In this case, the vector of parameters is split from left to right:
+1 for the Matern function, 2 for the RQ function, 2 for the polynomial
+function and 1 for the constant. If the vector of parameters has more
+or fewer than 6 elements, the system complains.
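+
+As an illustrative sketch of that expression (the hyperparameter values
+are arbitrary), the corresponding configuration could be:
+
+\code
+bopt_params par = initialize_parameters_to_default();
+par.kernel.name = "kSum(kMaternISO3,kSum(kRQISO,kProd(kPoly4,kConst)))";
+// 6 hyperparameters, split from left to right:
+// 1 (Matern) + 2 (RQ) + 2 (Poly4) + 1 (Const)
+for (size_t i = 0; i < 6; ++i)
+  {
+    par.kernel.hp_mean[i] = 1.0;   // prior mean of each hyperparameter
+    par.kernel.hp_std[i]  = 5.0;   // prior standard deviation
+  }
+par.kernel.n_hp = 6;
+\endcode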
+
+\section parmod Parametric (mean) functions
+
+Although the nonparametric process is able to model a large class of
+functions, we can model the expected value of the nonparametric process
+as a parametric function. This parametric model helps to capture
+large offsets and global trends. 
+
+The usage is analogous to the kernel functions.
+
+\li "mZero","mOne","mConst": constant functions. For simplicity and
+because they are largely used, we provide special cases f(x) = 0 and
+f(x) = 1.
+\li "mLinear": linear function.
+\li "mSum": binary function which can be used to combine other functions.
+
+\section critmod Selection criteria
+
+As discussed in \ref introbopt, one of the critical aspects of
+Bayesian optimization is the decision (loss) function. Unfortunately,
+the functions described there cannot be used directly, because they assume
+knowledge of the optimal value \f$x^*\f$. However, we can define proxies for
+those functions.
+
+Some criteria, such as the expected improvement and the lower
+confidence bound, admit an annealed version "cXXXa". In that version,
+the parameter that is used to trade off exploration and exploitation
+changes over time to prioritize exploration at the beginning and
+exploitation at the end.
+
+Many criteria depend on the prediction function, which can be a
+Gaussian or a Student's t distribution, depending on the surrogate
+model. However, the library includes all the criteria for both
+distributions, and the system automatically selects the correct one.
+
+\subsection atomcri Atomic criteria
+
+\li "cEI","cBEI","cEIa": The most extended and reliable algorithm is
+the Expected Improvement algorithm \cite Mockus78. In this case we
+provide the general version from \cite Schonlau98 which includes an
+exponent to trade off exploration and exploitation "cEI". Whe also includes
+a variation from \cite Mockus1989 which add a \a bias or \a threshold
+to the improvement "cBEI".
+\li "cLCB", "cLCBa": Another popular algorithm is the Lower Confidence
+Bound (LCB), or UCB in case of maximization. Introduced by 
+\cite cox1992statistical as Sequential Design for Optimization (SDO).
+\li "cPOI": Probability of improvement, by \cite Kushner:1964
+\li "cExpReturn","cThompsonSampling","cOptimisticSampling": This
+criteria are related with the predicted return of the function. The
+first one is literally the expected return of the function (mean
+value). The second one is based on the Thompson sampling (drawing a
+random sample from the predicted distribution). Finally, the
+optimistic sampling takes the minimum of the other two (mean vs random).
+
+\li "cAopt": This is based on the A-optimality criteria. It is the
+predicted variance at the query point. Thus, this criteria is intended
+for \b exploration of the input space, not for optimization.
+\li "cDistance": This criteria adds a cost to a query point based on
+the distance with respect to the previous evaluation. Combined with other
+criteria functions, it might provide a more realistic setup for certain
+applications \cite Marchant2012
+
+
+\subsection combcri Combined criteria
+
+\li "cSum","cProd": Sum and product of different criteria functions.
+\li "cHedge", "cHedgeRandom": Bandit based selection of the best
+criteria based on the GP-Hedge algorithm \cite Hoffman2011. It
+automatically learns based on the behaviour of the criteria during the
+optimization process. The original version "cHedge" uses the maximum
+expected return as a \a reward for each criteria. We add a variant
+"cHedgeRandom" where the \a reward is defined in terms of Thompson
+sampling.
+
+In this case, the combined criteria admits more that two
+functions. For example:
+
+"cHedge(cSum(cEI,cDistance),cLCB,cPOI,cOptimisticSampling)"
+
+\subsection learnmod Methods for learning the kernel parameters  
+
+As commented before, we consider that the prior on the kernel
+hyperparameters \f$\theta\f$ --if available-- is independent of the other
+variables. Thus, whether we use maximum likelihood,
+maximum a posteriori or a fully Bayesian approach, we need to find the
+likelihood function of the observed data for the parameters. Depending
+on the model, this function will be a multivariate Gaussian
+distribution or a multivariate t distribution. In general, we present
+the likelihood function up to a constant factor, that is, we remove
+the terms independent of \f$\theta\f$ from the log likelihood. In
+practice, whether we use ML or MAP point estimates or a fully Bayesian
+MCMC posterior, the constant factor is not needed.
+
+We are going to consider the following algorithms to learn the kernel
+hyperparameters:
+
+\li Cross-validation (L_LOO): In this case, we try to maximize the
+average predicted log probability using the <em>leave-one-out</em> (LOO)
+strategy. This is sometimes called a pseudo-likelihood.
+
+\li Maximum likelihood (L_ML): For any of the models presented, one
+approach to learn the hyperparameters is to maximize the likelihood of
+all the parameters \f$\mathbf{w}\f$, \f$\sigma_s^2\f$ and
+\f$\theta\f$. Then, the likelihood function is a multivariate Gaussian
+distribution. We can obtain a better estimate if we adjust the number
+of degrees of freedom; this is called <em>restricted maximum
+likelihood</em>. The library automatically selects the restricted
+version if it is suitable.
+
+\li Posterior maximum likelihood (L_MAP): In this case, the likelihood
+function is modified to consider the posterior estimate of
+\f$(\mathbf{w},\sigma_s^2)\f$ based on the different cases defined in
+Section \ref surrmod. The function will then be a
+multivariate Gaussian or t distribution, depending on the kind of
+prior used for \f$\sigma_s^2\f$.
+
+\li Maximum a posteriori (L_ML or L_MAP): We can modify any of the
+previous algorithms by adding a prior distribution \f$p(\theta)\f$. By
+default, we add a normal prior on the kernel hyperparameters. However,
+if the variance of the prior \a hp_std is invalid (<=0), then we
+assume no prior. Since we assume that the hyperparameters are independent,
+we can apply priors selectively only to a small set.
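+
+As a brief sketch (the values are arbitrary), a MAP setup with a normal
+prior on a single kernel hyperparameter could be configured as follows;
+setting the standard deviation to a non-positive value drops the prior:
+
+\code
+bopt_params par = initialize_parameters_to_default();
+par.l_type = L_MAP;
+par.kernel.hp_mean[0] = 1.0;   // prior mean of the hyperparameter
+par.kernel.hp_std[0]  = 5.0;   // prior std; any value <= 0 disables the prior
+par.kernel.n_hp = 1;
+\endcode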
 
 */

doxygen/reference.dox

 /*! \page reference Reference Manual
 \tableofcontents
 
-Originally, it was developed for as part of a robotics research
-project \cite MartinezCantin09AR \cite MartinezCantin07RSS, where a
-Gaussian process with hyperpriors on the mean and signal covariance
-parameters. Then, the metamodel was constructed using the Maximum a
-Posteriory (MAP) of the parameters.
-
-However, the library now has grown to support many more surrogate
-models, with different distributions (Gaussian processes,
-Student's-t processes, etc.), with many kernels and mean
-functions. It also provides different criteria (even some combined
-criteria) so the library can be used to any problem involving some
-bounded optimization, stochastic bandits, active learning for
-regression, etc.
-
 \section testing Testing the installation.
 
 The library includes several test programs that can be found in the \em
 
 \li \b surr_name: Name of the hierarchical surrogate function
 (nonparametric process and the hyperpriors on sigma and w). See
-section ??? for a detailed description. [Default S_GaussianProcess]
+Section \ref surrmod for a detailed description. [Default "sGaussianProcess"]
 \li \b sigma_s: Signal variance (if known) [Default 1.0]
 \li \b noise: Observation noise. For computer simulations or
 deterministic functions, it should be close to 0. However, to avoid
 \li \b alpha, \b beta: Inverse-Gamma prior hyperparameters (if
 applicable) [Default 1.0, 1.0]
 \li \b l_type: Learning method for the kernel hyperparameters. See
-section ??? for a detailed description [Default L_MAP]
+section \ref learnmod for a detailed description. [Default L_MAP]
 
 \subsection critpar Exploration/exploitation parameters
 
 combination of them. It is used to select which points to evaluate for
 each iteration of the optimization process. Could be a combination of
 functions like "cHedge(cEI,cLCB,cPOI,cThompsonSampling)". See section
-??? for the different possibilities. [Default: "cEI]"
+\ref critmod for the different possibilities. [Default: "cEI"]
 \li \b crit_params, \b n_crit_params: Array with the set of parameters
 for the selected criteria. If there are more than one, the parameters
 are split among them according to the number of parameters required
 
 \li \b kernel.name: Name of the kernel function. Could be a
 combination of functions like "kSum(kSEARD,kMaternARD3)". See section
-for the different posibilities. [Default: "kMaternISO3"]
+\ref kermod for the different possibilities. [Default: "kMaternISO3"]
 \li \b kernel.hp_mean, \b kernel.hp_std, \b kernel.n_hp: Kernel
 hyperparameters prior. Any "illegal" standard deviation (std<=0)
 results in a maximum likelihood estimate. Depends on the kernel
 \subsection meanpar Mean function parameters
 
 \li \b mean.name: Name of the mean function. Could be a combination of
-functions like "mSum(mOne, mLinear)". See section for the different
+functions like "mSum(mOne, mLinear)". See section parmod for the different
 posibilities. [Default: "mOne"]
 \li \b mean.coef_mean, \b kernel.coef_std, \b kernel.n_coef: Mean
 function coefficients. The standard deviation is only used if the

include/kernel_atomic.hpp

     };
   };
 
+  /** Polynomial covariance function*/
+  class Polynomial: public AtomicKernel
+  {
+  public:
+    Polynomial(){ mExp = 1; };
 
+    int init(size_t input_dim)
+    {
+      n_params = 2;
+      n_inputs = input_dim;
+      return 0;
+    };
 
+    double operator()( const vectord &x1, const vectord &x2)
+    {
+      // k(x1,x2) = sigma^2 * (c + x1.x2)^d, with sigma = params(0),
+      // c = params(1) and d = mExp.
+      double xx = boost::numeric::ublas::inner_prod(x1,x2); 
+      return params(0)*params(0) * pow(params(1)+xx,
+                                       static_cast<double>(mExp));
+    };
+    double gradient( const vectord &x1, const vectord &x2,
+		     size_t component)
+    {    
+      assert(false); return 0.0;
+    };
+  protected:
+    size_t mExp;
+  };
+
+  class Polynomial2: public Polynomial { public: Polynomial2(){ mExp = 2;};};
+  class Polynomial3: public Polynomial { public: Polynomial3(){ mExp = 3;};};
+  class Polynomial4: public Polynomial { public: Polynomial4(){ mExp = 4;};};
+  class Polynomial5: public Polynomial { public: Polynomial5(){mExp = 5;};};
+  class Polynomial6: public Polynomial { public: Polynomial6(){mExp = 6;};};
 
   /** \brief Square exponential (Gaussian) kernel. Isotropic version. */
   class SEIso: public ISOkernel
     };
   };
 
+
+  /** \brief Rational quadratic (Student's t) kernel. Isotropic version. */
+  class RQIso: public ISOkernel
+  {
+  public:
+    int init(size_t input_dim)
+    {
+      n_params = 2;
+      n_inputs = input_dim;
+      return 0;
+    };
+
+    double operator()( const vectord &x1, const vectord &x2)
+    {
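+      // k(x1,x2) = (1 + r^2/(2*alpha))^(-alpha), with r the weighted
+      // distance between x1 and x2 and alpha = params(1).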
+      double rl = computeWeightedNorm2(x1,x2);
+      double k = rl*rl/(2*params(1));
+      return pow(1+k,-params(1));
+    };
+    double gradient(const vectord &x1, const vectord &x2,
+		    size_t component)
+    {
+      assert(false); return 0.0;
+    };
+  };
+
   //@}
 
 } //namespace bayesopt
   timestamp = {2010.06.18}
 }
 
+@INCOLLECTION{cox1997sdo,
+  author = {Cox, Dennis D and John, Susan},
+  title = {SDO: A statistical method for global optimization},
+  booktitle = {Multidisciplinary Design Optimization: State of the Art},
+  publisher = {SIAM: Philadelphia},
+  year = {1997},
+  pages = {315--329},
+  owner = {rmcantin},
+  timestamp = {2013.05.15}
+}
+
+@INPROCEEDINGS{cox1992statistical,
+  author = {Cox, Dennis D and John, Susan},
+  title = {A statistical method for global optimization},
+  booktitle = {IEEE International Conference on Systems, Man and Cybernetics},
+  year = {1992},
+  pages = {1241--1246},
+  organization = {IEEE},
+  owner = {rmcantin},
+  timestamp = {2013.05.15}
+}
+
 @INPROCEEDINGS{Diaconis1988,
   author = {Persi Diaconis},
   title = {Bayesian Numerical Analysis},
   timestamp = {2010.06.18}
 }
 
+@BOOK{Mockus1989,
+  title = {Bayesian Approach to Global Optimization},
+  publisher = {Kluwer Academic Publishers},
+  year = {1989},
+  editor = {Michiel Hazewinkel},
+  author = {Mockus, Jonas},
+  volume = {37},
+  series = {Mathematics and Its Applications},
+  owner = {rmcantin},
+  timestamp = {2013.05.15}
+}
+
 @ARTICLE{Mockus94,
   author = {Jonas Mockus},
   title = {Application of Bayesian Approach to Numerical Methods of Global and
   timestamp = {2010.06.18}
 }
 
-@ARTICLE{OHagan1992,
+@ARTICLE{O'Hagan1992,
   author = {Anthony O'Hagan},
   title = {Some {B}ayesian Numerical Analysis},
   journal = {Bayesian Statistics},

python/bayesopt.cpp

-/* Generated by Cython 0.16 on Tue May 14 01:43:57 2013 */
+/* Generated by Cython 0.16 on Wed May 15 16:39:41 2013 */
 
 #define PY_SSIZE_T_CLEAN
 #include "Python.h"
 static char __pyx_k__mean_coef_mean[] = "mean_coef_mean";
 static char __pyx_k__n_init_samples[] = "n_init_samples";
 static char __pyx_k__n_iter_relearn[] = "n_iter_relearn";
-static char __pyx_k__GAUSSIAN_PROCESS[] = "GAUSSIAN_PROCESS";
+static char __pyx_k__sGaussianProcess[] = "sGaussianProcess";
 static char __pyx_k__ascontiguousarray[] = "ascontiguousarray";
 static char __pyx_k__initialize_params[] = "initialize_params";
 static char __pyx_k__optimize_discrete[] = "optimize_discrete";
 static PyObject *__pyx_kp_u_6;
 static PyObject *__pyx_kp_u_8;
 static PyObject *__pyx_kp_u_9;
-static PyObject *__pyx_n_s__GAUSSIAN_PROCESS;
 static PyObject *__pyx_n_s__L_MAP;
 static PyObject *__pyx_n_s__RuntimeError;
 static PyObject *__pyx_n_s__ValueError;
 static PyObject *__pyx_n_s__optimize_discrete;
 static PyObject *__pyx_n_s__params;
 static PyObject *__pyx_n_s__range;
+static PyObject *__pyx_n_s__sGaussianProcess;
 static PyObject *__pyx_n_s__s_mu;
 static PyObject *__pyx_n_s__s_theta;
 static PyObject *__pyx_n_s__sigma_s;
   if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__n_iter_relearn), __pyx_int_30) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 159; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
   if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__verbose_level), __pyx_int_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 159; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
   if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__log_filename), ((PyObject *)__pyx_kp_s_1)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 159; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
-  if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__surr_name), ((PyObject *)__pyx_n_s__GAUSSIAN_PROCESS)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 159; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
+  if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__surr_name), ((PyObject *)__pyx_n_s__sGaussianProcess)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 159; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
 
   /* "bayesopt.pyx":167
  *         "log_filename"   : "bayesopt.log" ,
- *         "surr_name" : "GAUSSIAN_PROCESS" ,
+ *         "surr_name" : "sGaussianProcess" ,
  *         "sigma_s"  : 1.0,             # <<<<<<<<<<<<<<
  *         "noise"  : 0.001,
  *         "alpha"  : 1.0,
   __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
 
   /* "bayesopt.pyx":168
- *         "surr_name" : "GAUSSIAN_PROCESS" ,
+ *         "surr_name" : "sGaussianProcess" ,
  *         "sigma_s"  : 1.0,
  *         "noise"  : 0.001,             # <<<<<<<<<<<<<<
  *         "alpha"  : 1.0,
   {&__pyx_kp_u_6, __pyx_k_6, sizeof(__pyx_k_6), 0, 1, 0, 0},
   {&__pyx_kp_u_8, __pyx_k_8, sizeof(__pyx_k_8), 0, 1, 0, 0},
   {&__pyx_kp_u_9, __pyx_k_9, sizeof(__pyx_k_9), 0, 1, 0, 0},
-  {&__pyx_n_s__GAUSSIAN_PROCESS, __pyx_k__GAUSSIAN_PROCESS, sizeof(__pyx_k__GAUSSIAN_PROCESS), 0, 0, 1, 1},
   {&__pyx_n_s__L_MAP, __pyx_k__L_MAP, sizeof(__pyx_k__L_MAP), 0, 0, 1, 1},
   {&__pyx_n_s__RuntimeError, __pyx_k__RuntimeError, sizeof(__pyx_k__RuntimeError), 0, 0, 1, 1},
   {&__pyx_n_s__ValueError, __pyx_k__ValueError, sizeof(__pyx_k__ValueError), 0, 0, 1, 1},
   {&__pyx_n_s__optimize_discrete, __pyx_k__optimize_discrete, sizeof(__pyx_k__optimize_discrete), 0, 0, 1, 1},
   {&__pyx_n_s__params, __pyx_k__params, sizeof(__pyx_k__params), 0, 0, 1, 1},
   {&__pyx_n_s__range, __pyx_k__range, sizeof(__pyx_k__range), 0, 0, 1, 1},
+  {&__pyx_n_s__sGaussianProcess, __pyx_k__sGaussianProcess, sizeof(__pyx_k__sGaussianProcess), 0, 0, 1, 1},
   {&__pyx_n_s__s_mu, __pyx_k__s_mu, sizeof(__pyx_k__s_mu), 0, 0, 1, 1},
   {&__pyx_n_s__s_theta, __pyx_k__s_theta, sizeof(__pyx_k__s_theta), 0, 0, 1, 1},
   {&__pyx_n_s__sigma_s, __pyx_k__sigma_s, sizeof(__pyx_k__sigma_s), 0, 0, 1, 1},

python/bayesopt.pyx

         "n_iter_relearn" : 30,
         "verbose_level"  : 1,
         "log_filename"   : "bayesopt.log" ,
-        "surr_name" : "GAUSSIAN_PROCESS" ,
+        "surr_name" : "sGaussianProcess" ,
         "sigma_s"  : 1.0,
         "noise"  : 0.001,
         "alpha"  : 1.0,

src/kernel_functors.cpp

     registry["kMaternARD3"] = & create_func<MaternARD3>;
     registry["kMaternARD5"] = & create_func<MaternARD5>;
   
+    registry["kPoly1"] = & create_func<Polynomial>;
+    registry["kPoly2"] = & create_func<Polynomial2>;
+    registry["kPoly3"] = & create_func<Polynomial3>;
+    registry["kPoly4"] = & create_func<Polynomial4>;
+    registry["kPoly5"] = & create_func<Polynomial5>;
+    registry["kPoly6"] = & create_func<Polynomial6>;
+
     registry["kSEARD"] = & create_func<SEArd>;
     registry["kSEISO"] = & create_func<SEIso>;
 
+    registry["kRQISO"] = & create_func<RQIso>;
+
     registry["kSum"] = & create_func<KernelSum>;
     registry["kProd"] = & create_func<KernelProd>;
   }

src/nonparametricprocess.cpp

 
     std::string name = parameters.surr_name;
 
-    if (!name.compare("GAUSSIAN_PROCESS"))
+    if (!name.compare("sGaussianProcess"))
       s_ptr = new GaussianProcess(dim,parameters);
-    else  if(!name.compare("GAUSSIAN_PROCESS_ML"))
+    else  if(!name.compare("sGaussianProcessML"))
       s_ptr = new GaussianProcessML(dim,parameters);
-    else  if(!name.compare("GAUSSIAN_PROCESS_NORMAL"))
+    else  if(!name.compare("sGaussianProcessNormal"))
       s_ptr = new GaussianProcessNormal(dim,parameters);
-    else if (!name.compare("STUDENT_T_PROCESS_JEFFREYS"))
+    else if (!name.compare("sStudentTProcessJef"))
       s_ptr = new StudentTProcessNIG(dim,parameters); 
-    else if (!name.compare("STUDENT_T_PROCESS_NORMAL_INV_GAMMA"))
+    else if (!name.compare("sStudentTProcessNIG"))
       s_ptr = new StudentTProcessNIG(dim,parameters); 
     else
       {