Distributed Training for Echo State Networks

This code implements a distributed training protocol for a class of recurrent
neural networks known as Echo State Networks (ESNs) [2]. The algorithm is based
on the well-known Alternating Direction Method of Multipliers (ADMM)
optimization procedure [3]. It assumes that the training data is distributed
throughout a network of agents, and it trains an ESN without relying
on a centralized controller. The paper describing the algorithm [1] is
currently in press at Neural Networks, scheduled for publication in the
special issue "Neural Network Learning in Big Data".

If you use this code or any derivatives thereof in your research, please cite
the following paper:

   @article{scardapane2015decentralized,
      author={Scardapane, S. and Wang, D. and Panella, M.},
      journal={Neural Networks},
      title={A decentralized training algorithm for Echo State Networks in distributed big data applications},
      year={2015}
   }

To launch a simulation, run the script 'run_simulation.m'. All the
configuration parameters are specified in the 'config.m' file. Three models
are compared:

   * A centralized ESN (C-ESN), where all training data is first gathered
     by a centralized controller before training.
   * A local ESN (L-ESN), where each node trains only on its local data,
     with no communication between nodes.
   * ADMM-ESN, which is trained in a fully decentralized fashion using
     the ADMM protocol.

The three ESNs share the same parameters. To change the dataset, uncomment
the respective line in the configuration file (lines 16-20).
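The three readouts above differ only in how the ridge regression problem is
solved. A rough, self-contained sketch of that contrast (in Python with
synthetic data rather than the repository's MATLAB code; all names,
dimensions, and parameter values are invented, and the consensus ADMM
updates follow the standard global-consensus formulation of [3]):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: N agents, each with a local batch of (state, target) pairs
# for a shared linear readout of dimension d.
N, d, n_local = 4, 8, 30
w_true = rng.normal(size=d)
H = [rng.normal(size=(n_local, d)) for _ in range(N)]
y = [Hk @ w_true + 0.01 * rng.normal(size=n_local) for Hk in H]
lam = 0.1  # ridge regularization

# C-ESN: all data gathered at one node, single ridge solve.
H_all, y_all = np.vstack(H), np.concatenate(y)
w_central = np.linalg.solve(H_all.T @ H_all + lam * np.eye(d), H_all.T @ y_all)

# L-ESN: every node solves on its local data only (no communication).
w_local = [np.linalg.solve(Hk.T @ Hk + lam * np.eye(d), Hk.T @ yk)
           for Hk, yk in zip(H, y)]

# ADMM-ESN: global-consensus ADMM iterations.
rho = 1.0
w = [np.zeros(d) for _ in range(N)]   # local primal variables
u = [np.zeros(d) for _ in range(N)]   # scaled dual variables
z = np.zeros(d)                       # consensus variable
for _ in range(300):
    # Local w-updates: a ridge solve pulled toward the consensus variable.
    w = [np.linalg.solve(Hk.T @ Hk + rho * np.eye(d),
                         Hk.T @ yk + rho * (z - uk))
         for Hk, yk, uk in zip(H, y, u)]
    # z-update: a network-wide average (computed directly here; in the
    # decentralized setting this is the only step requiring communication).
    z = rho * sum(wk + uk for wk, uk in zip(w, u)) / (lam + N * rho)
    # Dual updates.
    u = [uk + wk - z for wk, uk in zip(w, u)]
```

After convergence, the consensus variable z coincides with the centralized
ridge solution, while the local solutions generally do not.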

The code is distributed under the BSD-2-Clause license; see the LICENSE file.

Parts of the code are based on the Simple ESN Toolbox; copyright
information is given in the respective functions.

The code uses several utility functions from MATLAB Central. Copyright
information and licenses can be found in the 'utils' folder.

The network topology classes in the 'classes' folder are adapted from the
Lynx MATLAB toolbox.

   * If you have any request, bug report, or inquiry, you can contact
     the author at simone [dot] scardapane [at] uniroma1 [dot] it.
   * Additional contact information can also be found on the website of
     the author.

[1] Scardapane, S., Wang, D., & Panella, M. (2015). A decentralized training
    algorithm for Echo State Networks in distributed big data applications. 
    Neural Networks, doi: 10.1016/j.neunet.2015.07.006.
[2] Lukoševičius, M., & Jaeger, H. (2009). Reservoir computing approaches to
    recurrent neural network training. Computer Science Review, 3(3), 127-149.
[3] Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011). 
    Distributed optimization and statistical learning via the alternating 
    direction method of multipliers. Foundations and Trends in Machine 
    Learning, 3(1), 1-122.