
MaxLike

The MaxLike sampler is a wrapper around the SciPy minimize optimization routine, which offers a number of different methods and uses the simplex (Nelder-Mead) algorithm by default.
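
For orientation, a minimal configuration selecting this sampler might look like the sketch below. The option names are the ones documented on this page; the [runtime] layout follows the usual CosmoSIS convention, and the rest of the pipeline setup is assumed to live elsewhere in the same ini file.

    [runtime]
    sampler = maxlike

    [maxlike]
    ; optimization method and stopping criteria (see Options below)
    method = Nelder-Mead
    tolerance = 1e-3
    maxiter = 1000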

WARNING: Numerical optimizers are terrible:

Even in simple likelihood spaces, numerical optimizers are flaky and frequently inaccurate. You should try several different starting points in parameter space and check whether they converge to the same answer (see the sketch below). You should also experiment with the various minimizers listed under Methods to see which suits your needs.
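
Starting points live in the values ini file rather than in the sampler section, so trying a new one just means editing the middle entry of a parameter's range. A hypothetical example, assuming the standard CosmoSIS "min start max" format; the section and parameter names are placeholders:

    [cosmological_parameters]
    ; lower limit, starting value, upper limit
    omega_m = 0.1  0.3  0.6
    ; rerun with e.g. 0.1 0.2 0.6 and 0.1 0.5 0.6 and compare the best fits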


Options

tolerance

Float. Fractional convergence criterion for each parameter. NB: this means different things for different optimization methods. The default of 1e-3 is a reasonable value for the Nelder-Mead method.

maxiter

Integer. Maximum number of algorithm iterations before giving up. Default is 1000.

output_ini

Optional string. Filename for a new values ini file, which will be created at the end of the run. This file will have the same parameter ranges as the input to the sampler, but the parameter starting values will be replaced by the best-fit values that were found.

output_cov

Optional string. Some methods (such as BFGS) generate a covariance matrix at the end of the optimization process. If so, and this parameter is set, the covariance matrix is saved to this filename.

method

See below for a list of supported optimization methods.
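
Putting these together, a fully specified [maxlike] section might look like the following sketch; the output filenames are placeholders to adapt to your own run:

    [maxlike]
    tolerance = 1e-3
    maxiter = 1000
    ; write a new values file with best-fit starting values
    output_ini = values_bestfit.ini
    ; save the covariance matrix, for methods that produce one (e.g. BFGS)
    output_cov = bestfit_covmat.txt
    method = BFGS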


Methods

You can set the "method" option to any of the method names that lead the items below; an example of setting it follows the list.

These notes are adapted from the scipy help page; see there for more information and references.

  • Nelder-Mead Simplex algorithm. It has been successful in many applications, but other algorithms that use first and/or second derivative information may offer better performance and robustness in general.

  • Powell The method used in CosmoMC. A modification of Powell's method, which is a conjugate direction method. It performs sequential one-dimensional minimizations along each vector of the direction set, which is updated at each iteration of the main minimization loop. The function need not be differentiable, and no derivatives are taken.

  • CG A nonlinear conjugate gradient algorithm by Polak and Ribiere, a variant of the Fletcher-Reeves method. Only the first derivatives are used.

  • BFGS Quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno. It uses first derivatives only. BFGS has shown good performance even for non-smooth optimizations. It also returns a covariance matrix, which can be saved with the output_cov option.

  • Newton-CG A Newton-CG algorithm (also known as the truncated Newton method). It uses a CG method to compute the search direction. See also the TNC method for a box-constrained minimization with a similar algorithm.

  • L-BFGS-B The L-BFGS-B algorithm for bound constrained minimization.

  • TNC A truncated Newton algorithm to minimize a function with variables subject to bounds. This algorithm uses gradient information; it is also called Newton Conjugate-Gradient. It differs from the Newton-CG method described above as it wraps a C implementation and allows each variable to be given upper and lower bounds.

  • COBYLA The Constrained Optimization BY Linear Approximation (COBYLA) method. The algorithm is based on linear approximations to the objective function and each constraint. The method wraps a FORTRAN implementation of the algorithm.

  • SLSQP Sequential Least SQuares Programming to minimize a function of several variables with any combination of bounds, equality and inequality constraints.

  • dogleg The dog-leg trust-region algorithm for unconstrained minimization.

  • trust-ncg The Newton conjugate gradient trust-region algorithm for unconstrained minimization.
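
To select one of these, set the method option in the [maxlike] section, giving the name exactly as above. For example, a sketch switching to the bound-constrained L-BFGS-B method:

    [maxlike]
    method = L-BFGS-B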
