Flexible activation functions for neural networks

This is a small MATLAB library implementing adaptable activation
functions (AFs) for neural networks. Each AF is parameterized as a lookup
table, and its output is computed via cubic spline interpolation over the
four control points closest to the activation value. A preprint is
available on arXiv:

Previous works on this topic can be found in [1, 2].
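As a rough illustration of the interpolation step, the sketch below evaluates a cubic spline segment over four control points. The function name, the choice of a Catmull-Rom basis, and the tension value 0.5 are assumptions for illustration only, not the library's internal implementation:

    % Sketch: cubic (Catmull-Rom) interpolation over four control points.
    % q holds the four control-point values surrounding the activation;
    % u in [0, 1] is the normalized position of the activation between
    % the two central points q(2) and q(3). Illustrative, not library API.
    function y = spline_interp(q, u)
        % Catmull-Rom basis matrix (tension 0.5)
        B = 0.5 * [-1  3 -3  1;
                    2 -5  4 -1;
                   -1  0  1  0;
                    0  2  0  0];
        y = [u^3, u^2, u, 1] * B * q(:);
    end

At u = 0 this returns q(2) and at u = 1 it returns q(3), so consecutive segments of the lookup table join continuously.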

A standard neural network is implemented in the class 'StandardNN', and
the neural network with flexible AFs in 'SplineNN'. As an example, a
spline NN with 4 hidden neurons is constructed as (see the help for more
details):

    n = SplineNN('Spline NN', 4)

while training is performed as:

    n = n.train(X, Y, options)

where {X,Y} are the matrices of input/output samples (one sample per
row), and 'options' is a struct of training parameters, including:

    - 'lambda': regularization factor for the weights.
    - 'lambda_q0': regularization factor for the splines (only SplineNN).
    - 'epochs': number of epochs for training.
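Putting these pieces together, a minimal training call might look as follows. The field names follow the list above; the data and parameter values are random and for illustration only:

    % Sketch: train a spline NN on random regression data.
    X = randn(100, 3);                  % 100 samples, 3 input features
    Y = sum(X, 2) + 0.1*randn(100, 1);  % noisy regression target

    options = struct(...
        'lambda',    1e-3, ...          % weight regularization
        'lambda_q0', 1e-4, ...          % spline regularization (SplineNN only)
        'epochs',    100);              % number of training epochs

    n = SplineNN('Spline NN', 4);
    n = n.train(X, Y, options);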

We provide two different optimization routines, namely conjugate gradient
descent and the ADAM [3] algorithm for mini-batch training. The former is
used by default; to select the latter, pass the optimizer as an additional
parameter during construction:

    n = SplineNN('Spline NN', 4, 'optimizer', AdamOptimizer())

A test script (including a grid-search evaluation over the parameters) for
repeating the experiments in the paper is provided in 'test_networks.m'. A
small unit test suite is also available in the folder 'tests', which can
be run with 'run_tests.m'.

The code is distributed under the BSD-2 license; see the file LICENSE.

The code uses several utility functions from MATLAB Central. Copyright
information and licenses can be found in the 'utils' folder.

The CGOptimizer is adapted from here:


   o If you have any request, bug report, or inquiry, you can contact
     the author at simone [dot] scardapane [at] uniroma1 [dot] it.
   o Additional contact information can also be found on the website of
     the author:

[1] Vecci, L., Piazza, F., & Uncini, A. (1998). Learning and approximation 
    capabilities of adaptive spline activation function neural networks. 
    Neural Networks, 11(2), 259-270.
[2] Guarnieri, S., Piazza, F., & Uncini, A. (1999). Multilayer feedforward 
    networks with adaptive spline activation function. IEEE Transactions on 
    Neural Networks, 10(3), 672-683.
[3] Kingma, D., & Ba, J. (2014). Adam: A method for stochastic optimization. 
    arXiv preprint arXiv:1412.6980.