Companion Code Version: 1.0
This companion code is for:
 H. Raja and W.U. Bajwa, "Cloud K-SVD: A Collaborative Dictionary Learning Algorithm for Big, Distributed Data," IEEE Trans. Signal Process., vol. 64, no. 1, pp. 173–188, 2016.
 Z. Shakeri, H. Raja, and W.U. Bajwa, "Dictionary learning based nonlinear classifier training from distributed data," in Proc. 2nd IEEE Global Conf. Signal and Information Processing (GlobalSIP'14), Symposium on Network Theory, Atlanta, GA, Dec. 3-5, 2014.
The code is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The detailed legalese as well as a human-readable summary of this license are available at https://creativecommons.org/licenses/by-nc-sa/4.0/
Any part of this code used in your work should be cited as follows:
H. Raja and W.U. Bajwa. "Cloud K-SVD: A Collaborative Dictionary Learning Algorithm for Big, Distributed Data," IEEE Trans. Signal Process., vol. 64, no. 1, pp. 173–188, 2016, Companion Code, ver. 1.0.
Reporting of issues
Any issues in this code should be reported to W.U. Bajwa and H. Raja. Note, however, that this companion code is provided on an "AS IS" basis to support the ideals of reproducible research. As such, no guarantee is made that reported issues will ever be fixed.
Tested computational environments
This code has been tested in the following computational environments. While it may run in other environments as well, we can neither guarantee this nor help you make the code compatible with other environments.
- PC: Windows 7 and Matlab R2013b
- Mac: Not tested
- Linux: Not tested
Dependency on external packages
For the sparse coding step, we use the OMP package by Ron Rubinstein. You need to install this package before executing the scripts; instructions for installing it are provided in the 'Read Me' file within the 'ompbox10' folder. While we have included this package as part of the current release under the folder 'ompbox10', it can also be downloaded from Ron Rubinstein's webpage: http://www.cs.technion.ac.il/~ronrubin
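As a rough illustration (not code taken from the companion scripts; all variable names below are our own), a batch sparse coding call with the OMP package typically looks like this, using the precomputed-Gram form of the omp function:

```matlab
% Illustrative sketch only -- dimensions and names are hypothetical.
n = 20; K = 50; N = 100;                       % signal dim, #atoms, #signals
D = randn(n, K);
D = bsxfun(@rdivide, D, sqrt(sum(D.^2, 1)));   % unit-norm dictionary columns
Y = randn(n, N);                               % data matrix, one signal per column
T = 3;                                         % target sparsity per signal
G = D' * D;                                    % precomputed Gram matrix
Gamma = omp(D' * Y, G, T);                     % K x N sparse coefficient matrix
```

The bsxfun call is used (instead of implicit expansion) so the sketch stays compatible with the Matlab R2013b environment listed above.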
Summary of scripts
Script1_ComparisonForVaryingPowerIterations.m -- Generates synthetic data, distributes it across different nodes/sites, and then learns a dictionary using cloud K-SVD for varying numbers of power iterations. For comparison, a dictionary is also learned using centralized K-SVD, in which case all the data is available at a central location, as well as using only the data locally available at each site.
Script2_ComparisonForVaryingConsensusIterations.m -- Same as Script1, except that the comparison is carried out for varying numbers of consensus iterations rather than power iterations.
Script3_ClassificationUniform.m -- Demonstrates the application of distributed dictionary learning using cloud K-SVD to a classification problem on real data from the MNIST dataset when the data is equally distributed among sites. Furthermore, this script uses cloud K-SVD to implement the D-K-SVD algorithm in a distributed setting for improved classification performance.
Script4_ClassificationImbalance.m -- Demonstrates the robustness of dictionary learning using cloud K-SVD when the quality or quantity of data is not the same across all sites/nodes. In this case, simulations show that using local data alone results in high variability in performance across sites/nodes, whereas cloud K-SVD yields similar performance across sites/nodes.
Script5_PlottingFigure_2_TSP.m -- Used for plotting Figures 2(a)-2(d) in the IEEE Trans. Signal Process. paper listed above.
Script6_PlottingFigure_3_TSP.m -- Used for plotting Figure 3 in the IEEE Trans. Signal Process. paper listed above.
Script7_Plotting_GlobalSIP.m -- Used to generate the figures appearing in the GlobalSIP'14 paper listed above.
Script8_MNIST_Classification.m -- Uses MNIST data to solve a classification problem using representative dictionaries. Results from this script are used to generate the classification plots in Figure 3 of the GlobalSIP'14 paper listed above.