
libsgd -- A library for Stochastic Gradient Descent in Machine Learning

Definition of SGD from Wikipedia:

Stochastic gradient descent is a general optimization algorithm, but is typically used to fit the parameters of a machine learning model. In standard (or "batch") gradient descent, the true gradient is used to update the parameters of the model. The true gradient is usually the sum of the gradients caused by each individual training example. The parameter vectors are adjusted by the negative of the true gradient multiplied by a step size. Therefore, batch gradient descent requires one sweep through the training set before any parameters can be changed. In stochastic (or "on-line") gradient descent, the true gradient is approximated by the gradient of the cost function only evaluated on a single training example. The parameters are then adjusted by an amount proportional to this approximate gradient. Therefore, the parameters of the model are updated after each training example. For large data sets, on-line gradient descent can be much faster than batch gradient descent.

Figure: colors show the values predicted by a 3-layer perceptron trained on the target "hills" function, which is drawn as a contour plot.


Use Mercurial to clone this repository or download the latest source snapshot from the default branch, then follow the instructions in the README.txt file. The source allows two installations: a pure C shared library and a Python package with a command-line utility.



Other resources


Feedback is always very much appreciated.