Zoltan Szabo committed f90e2f2

Verification of the H/I/D/C function arguments: unified across different estimators. Further verification (compatibility of ds and Y): added to I-estimators (the estimations themselves have not changed). Some comment unification: carried out.


Files changed (112)

+-Verification of the H/I/D/C function arguments: unified across different estimators. Further verification (compatibility of ds and Y): added to I-estimators (the estimations themselves have not changed). Some comment unification: carried out.
+
 v0.22 (Dec 01, 2012):
 -Cauchy-Schwartz and Euclidean distance based divergence estimators: added; see 'DCS_KDE_iChol_initialization.m', 'DCS_KDE_iChol_estimation.m', 'DED_KDE_iChol_initialization.m', 'DED_KDE_iChol_estimation.m'.
 -Cauchy-Schwartz and Euclidean distance based quadratic mutual information estimators: added; see 'IQMI_CS_KDE_direct_initialization.m', 'IQMI_CS_KDE_direct_estimation.m', 'IQMI_CS_KDE_iChol_initialization.m', 'IQMI_CS_KDE_iChol_estimation.m', 'IQMI_ED_KDE_iChol_initialization.m', 'IQMI_ED_KDE_iChol_estimation.m'.
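
A minimal usage sketch for the newly added estimators (the data matrices, dimensions and sample numbers below are hypothetical; only the usual initialization-then-estimation calling pattern documented in these files is assumed):

    Y1 = randn(3,1000); Y2 = randn(3,1000);      %two samples of a 3-dimensional variable; equal sample numbers
    co = DCS_KDE_iChol_initialization(1);        %mult=1
    D  = DCS_KDE_iChol_estimation(Y1,Y2,co);     %Cauchy-Schwartz divergence estimate

    Y  = randn(2,1000); ds = [1;1];              %two one-dimensional subspaces, sum(ds) = size(Y,1)
    co = IQMI_CS_KDE_direct_initialization(1);
    I  = IQMI_CS_KDE_direct_estimation(Y,ds,co); %Cauchy-Schwartz quadratic mutual information estimate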

code/H_I_D_C/base_estimators/CCE_kNN_k_estimation.m

 function [CE] = CCE_kNN_k_estimation(Y1,Y2,co)
-%Estimates the cross-entropy (CE) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample) using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
+%Estimates the cross-entropy (CE) of Y1 and Y2 using the kNN method (S={k}).
 %
 %We make use of the naming convention 'C<name>_estimation', to ease embedding new cross estimation methods.
 %
-%REFERENCE: Nikolai Leonenko, Luc Pronzato, and Vippal Savani. A class of Renyi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153-2182, 2008.
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: cross estimator object.
+%
+%REFERENCE: 
+%   Nikolai Leonenko, Luc Pronzato, and Vippal Savani. A class of Renyi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153-2182, 2008.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/DBhattacharyya_kNN_k_estimation.m

 function [D] = DBhattacharyya_kNN_k_estimation(Y1,Y2,co)
-%Estimates the Bhattacharyya distance of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
+%Estimates the Bhattacharyya distance of Y1 and Y2 using the kNN method (S={k}).
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
 %REFERENCE: 
-%	Barnabas Poczos and Liang Xiong and Dougal Sutherland and Jeff Schneider. Support Distribution Machines. Technical Report, 2012. "http://arxiv.org/abs/1202.0302"
+%   Barnabas Poczos and Liang Xiong and Dougal Sutherland and Jeff Schneider. Support Distribution Machines. Technical Report, 2012. "http://arxiv.org/abs/1202.0302"
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/DCS_KDE_iChol_estimation.m

 function [D] = DCS_KDE_iChol_estimation(Y1,Y2,co)
-%Estimates the Cauchy-Schwartz divergence using Gaussian KDE (kernel density estimation) and incomplete Cholesky decomposition. The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] must be equal; otherwise their minimum is taken. Cost parameters are provided in the cost object co.
+%Estimates the Cauchy-Schwartz divergence using Gaussian KDE (kernel density estimation) and incomplete Cholesky decomposition. 
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
-%REFERENCE: Sohan Seth and Jose C. Principe. On speeding up computation in information theoretic learning. In International Joint Conference on Neural Networks (IJCNN), pages 2883-2887, 2009.
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] must be equal; otherwise their minimum is taken. 
+%  co: divergence estimator object.
+%
+%REFERENCE: 
+%   Sohan Seth and Jose C. Principe. On speeding up computation in information theoretic learning. In International Joint Conference on Neural Networks (IJCNN), pages 2883-2887, 2009.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/DED_KDE_iChol_estimation.m

 function [D] = DED_KDE_iChol_estimation(Y1,Y2,co)
-%Estimates the Euclidean distance based divergence using Gaussian KDE (kernel density estimation) and incomplete Cholesky decomposition. The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] must be equal; otherwise their minimum is taken. Cost parameters are provided in the cost object co.
+%Estimates the Euclidean distance based divergence using Gaussian KDE (kernel density estimation) and incomplete Cholesky decomposition. 
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
-%REFERENCE: Sohan Seth and Jose C. Principe. On speeding up computation in information theoretic learning. In International Joint Conference on Neural Networks (IJCNN), pages 2883-2887, 2009.
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] must be equal; otherwise their minimum is taken. 
+%  co: divergence estimator object.
+%
+%REFERENCE: 
+%   Sohan Seth and Jose C. Principe. On speeding up computation in information theoretic learning. In International Joint Conference on Neural Networks (IJCNN), pages 2883-2887, 2009.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/DHellinger_kNN_k_estimation.m

 function [D] = DHellinger_kNN_k_estimation(Y1,Y2,co)
-%Estimates the Hellinger distance of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
+%Estimates the Hellinger distance of Y1 and Y2 using the kNN method (S={k}).
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
 %REFERENCE: 
-%	Barnabas Poczos and Liang Xiong and Dougal Sutherland and Jeff Schneider. Support Distribution Machines. Technical Report, 2012. "http://arxiv.org/abs/1202.0302"
+%   Barnabas Poczos and Liang Xiong and Dougal Sutherland and Jeff Schneider. Support Distribution Machines. Technical Report, 2012. "http://arxiv.org/abs/1202.0302"
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/DKL_kNN_k_estimation.m

 function [D] = DKL_kNN_k_estimation(Y1,Y2,co)
-%Estimates the Kullback-Leibler divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
+%Estimates the Kullback-Leibler divergence (D) of Y1 and Y2 using the kNN method (S={k}). 
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
 %REFERENCE: 
 %   Fernando Perez-Cruz. Estimation of Information Theoretic Measures for Continuous Random Variables. Advances in Neural Information Processing Systems (NIPS), pp. 1257-1264, 2008.
 %   Nikolai Leonenko, Luc Pronzato, and Vippal Savani. A class of Renyi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153-2182, 2008.
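
A hedged sketch of the "different sample numbers" note above (data are made up; the initialization function is assumed to follow the 'D<name>_initialization' convention):

    Y1 = randn(2,2000);                   %2000 samples from the first distribution
    Y2 = randn(2,3000) + 1;               %3000 samples from the second distribution; the sample numbers may differ
    co = DKL_kNN_k_initialization(1);
    D  = DKL_kNN_k_estimation(Y1,Y2,co);  %Kullback-Leibler divergence estimate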

code/H_I_D_C/base_estimators/DKL_kNN_kiTi_estimation.m

 function [D] = DKL_kNN_kiTi_estimation(Y1,Y2,co)
-%Estimates the Kullback-Leibler divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
+%Estimates the Kullback-Leibler divergence (D) of Y1 and Y2 using the kNN method (S={k}).
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
 %REFERENCE: 
 %   Quing Wang, Sanjeev R. Kulkarni, and Sergio Verdu. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Transactions on Information Theory, 55:2392-2405, 2009.
 %

code/H_I_D_C/base_estimators/DKL_kNN_kiTi_initialization.m

 function [co] = DKL_kNN_kiTi_initialization(mult)
-%Initialization of the kNN (k-nearest neighbor, S_1={k_1}, S_2={k_2}) based Kullback-Leibler divergence estimator. Here, ki-s depend on the number of samples, they are set in 'DKullback_Leibler_kNN_kiTi_estimation.m'.
+%Initialization of the kNN (k-nearest neighbor, S_1={k_1}, S_2={k_2}) based Kullback-Leibler divergence estimator. Here, ki-s depend on the number of samples, they are set in 'DKL_kNN_kiTi_estimation.m'.
 %
 %Note:
 %   1)The estimator is treated as a cost object (co).

code/H_I_D_C/base_estimators/DL2_kNN_k_estimation.m

 function [D] = DL2_kNN_k_estimation(Y1,Y2,co)
-%Estimates the L2 divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
+%Estimates the L2 divergence (D) of Y1 and Y2 using the kNN method (S={k}).
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
 %REFERENCE: 
 %   Barnabas Poczos, Zoltan Szabo, Jeff Schneider: Nonparametric divergence estimators for Independent Subspace Analysis. EUSIPCO-2011, pages 1849-1853.
 %   Barnabas Poczos, Liang Xiong, Jeff Schneider. Nonparametric Divergence: Estimation with Applications to Machine Learning on Distributions. UAI-2011.

code/H_I_D_C/base_estimators/DMMDonline_estimation.m

 function [D] = DMMDonline_estimation(Y1,Y2,co)
-%Estimates divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample) using the MMD (maximum mean discrepancy) method, online. The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] must be equal. Cost parameters are provided in the cost object co.
+%Estimates divergence (D) of Y1 and Y2 using the MMD (maximum mean discrepancy) method, online. 
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] must be equal; otherwise their minimum is taken.
+%  co: divergence estimator object.
+%
 %REFERENCE: 
-%  Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine  Learning Research 13 (2012) 723-773. See Lemma 14.
+%   Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research 13 (2012) 723-773. See Lemma 14.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/DRenyi_kNN_k_estimation.m

 function [D] = DRenyi_kNN_k_estimation(Y1,Y2,co)
-%Estimates the Renyi divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
+%Estimates the Renyi divergence (D) of Y1 and Y2 using the kNN method (S={k}). 
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
 %REFERENCE: 
 %   Barnabas Poczos, Zoltan Szabo, Jeff Schneider: Nonparametric divergence estimators for Independent Subspace Analysis. EUSIPCO-2011, pages 1849-1853.
 %   Barnabas Poczos, Jeff Schneider: On the Estimation of alpha-Divergences. AISTATS-2011, pages 609-617.

code/H_I_D_C/base_estimators/DTsallis_kNN_k_estimation.m

 function [D] = DTsallis_kNN_k_estimation(Y1,Y2,co)
-%Estimates the Tsallis divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
+%Estimates the Tsallis divergence (D) of Y1 and Y2 using the kNN method (S={k}).
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
 %REFERENCE: 
 %   Barnabas Poczos, Zoltan Szabo, Jeff Schneider: Nonparametric divergence estimators for Independent Subspace Analysis. EUSIPCO-2011, pages 1849-1853.
 %   Barnabas Poczos, Jeff Schneider: On the Estimation of alpha-Divergences. AISTATS-2011, pages 609-617.

code/H_I_D_C/base_estimators/HRenyi_GSF_estimation.m

 function [H] = HRenyi_GSF_estimation(Y,co)
-%Estimates the Renyi entropy (H) of Y (Y(:,t) is the t^th sample)
-%using the GSF method. Cost parameters are provided in the cost object co.
+%Estimates the Renyi entropy (H) of Y using the GSF (geodesic spanning forest) method.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
 %REFERENCE: 
-%     Jose A. Costa, Alfred O. Hero. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. IEEE Transactions on Signal Processing 52(8):2210-2221, 2004.
+%   Jose A. Costa, Alfred O. Hero. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. IEEE Transactions on Signal Processing 52(8):2210-2221, 2004.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/HRenyi_GSF_initialization.m

 %
 %Note:
 %   1)The estimator is treated as a cost object (co).
-%   2) We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
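
A small sketch of how the mult flag is used (hypothetical data; co fields beyond mult are left at their initialized defaults):

    Y   = rand(3,500);
    co1 = HRenyi_GSF_initialization(1);   %mult=1: the multiplicative constant is needed/included
    co0 = HRenyi_GSF_initialization(0);   %mult=0: the multiplicative constant is treated as not needed
    H1  = HRenyi_GSF_estimation(Y,co1);
    H0  = HRenyi_GSF_estimation(Y,co0);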

code/H_I_D_C/base_estimators/HRenyi_MST_estimation.m

 function [H] = HRenyi_MST_estimation(Y,co)
-%Estimates the Renyi entropy (H) of Y (Y(:,t) is the t^th sample)
-%using the minimum spanning tree (MST). Cost parameters are provided in the cost object co.
+%Estimates the Renyi entropy (H) of Y using the minimum spanning tree (MST).
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
 %REFERENCE: 
 %   Joseph E. Yukich. Probability Theory of Classical Euclidean Optimization Problems, Lecture Notes in Mathematics, 1998, vol. 1675.
 %

code/H_I_D_C/base_estimators/HRenyi_kNN_1tok_estimation.m

 function [H] = HRenyi_kNN_1tok_estimation(Y,co)
-%Estimates the Renyi entropy (H) of Y (Y(:,t) is the t^th sample) using the kNN method (S={1,...,k}). Cost parameters are provided in the cost object co.
+%Estimates the Renyi entropy (H) of Y using the kNN method (S={1,...,k}).
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
 %REFERENCE: 
 %   Barnabas Poczos, Andras Lorincz. Independent Subspace Analysis Using k-Nearest Neighborhood Estimates. ICANN-2005, pages 163-168.
 %

code/H_I_D_C/base_estimators/HRenyi_kNN_1tok_initialization.m

 %
 %Note:
 %   1)The estimator is treated as a cost object (co).
-%   2) We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.

code/H_I_D_C/base_estimators/HRenyi_kNN_S_estimation.m

 function [H] = HRenyi_kNN_S_estimation(Y,co)
-%Estimates the Renyi entropy (H) of Y (Y(:,t) is the t^th sample) using the generalized k-nearest neighbor (S\subseteq {1,...,k}) method. Cost parameters are provided in the cost object co.
+%Estimates the Renyi entropy (H) of Y using the generalized k-nearest neighbor (S\subseteq {1,...,k}) method.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
 %REFERENCE: 
 %   David Pal, Barnabas Poczos, Csaba Szepesvari: Estimation of Renyi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs. NIPS-2010, pages 1849-1857.
 %

code/H_I_D_C/base_estimators/HRenyi_kNN_k_estimation.m

 function [H] = HRenyi_kNN_k_estimation(Y,co)
-%Estimates the Renyi entropy (H) of Y (Y(:,t) is the t^th sample)
-%using the kNN method (S={k}). Cost parameters are provided in the cost object co.
+%Estimates the Renyi entropy (H) of Y using the kNN method (S={k}).
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
 %REFERENCE: 
-%   Nikolai Leonenko, Luc Pronzato, and Vippal Savani. A class of Renyi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153–2182, 2008.
+%   Nikolai Leonenko, Luc Pronzato, and Vippal Savani. A class of Renyi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153-2182, 2008.
 %   Joseph E. Yukich. Probability Theory of Classical Euclidean Optimization Problems, Lecture Notes in Mathematics, 1998, vol. 1675.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")

code/H_I_D_C/base_estimators/HRenyi_spacing_E_estimation.m

 function [H] = HRenyi_spacing_E_estimation(Y,co)
-%Estimates the Renyi entropy (H) of Y (Y(:,t) is the t^th sample) using an extension of the 'empiric entropy estimation of order m' method. Cost parameters are provided in the cost object co.
+%Estimates the Renyi entropy (H) of Y using an extension of the 'empiric entropy estimation of order m' method.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods. 
 %
-%REFERENCE:
-%    	Mark P. Wachowiak,  Renata Smolikova, Georgia D. Tourassi, and Adel S. Elmaghraby. Estimation of generalized entropies with sample spacing. Pattern Analysis and Applications 8: 95-101, 2005.
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
+%   Mark P. Wachowiak, Renata Smolikova, Georgia D. Tourassi, and Adel S. Elmaghraby. Estimation of generalized entropies with sample spacing. Pattern Analysis and Applications 8: 95-101, 2005.
 %      
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/HRenyi_spacing_V_estimation.m

 function [H] = HRenyi_spacing_V_estimation(Y,co)
-%Estimates the Renyi entropy (H) of Y (Y(:,t) is the t^th sample) using an extension of Vasicek's spacing method. Cost parameters are provided in the cost object co.
+%Estimates the Renyi entropy (H) of Y using an extension of Vasicek's spacing method.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods. 
 %
-%REFERENCE:
-%    	Mark P. Wachowiak,  Renata Smolikova, Georgia D. Tourassi, and Adel S. Elmaghraby. Estimation of generalized entropies with sample spacing. Pattern Analysis and Applications 8: 95-101, 2005.
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
+%   Mark P. Wachowiak, Renata Smolikova, Georgia D. Tourassi, and Adel S. Elmaghraby. Estimation of generalized entropies with sample spacing. Pattern Analysis and Applications 8: 95-101, 2005.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%co.mult:OK.
+
 [d,num_of_samples] = size(Y);
 
 %verification:

code/H_I_D_C/base_estimators/HRenyi_weightedkNN_estimation.m

 function [H] = HRenyi_weightedkNN_estimation(Y,co)
-%Estimates the Renyi entropy (H) of Y (Y(:,t) is the t^th sample)
-%using the weighted k-nearest neighbor method. Cost parameters are provided in the cost object co.
+%Estimates the Renyi entropy (H) of Y using the weighted k-nearest neighbor method.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
 %REFERENCE: 
 %   Kumar Sricharan and Alfred. O. Hero. Weighted k-NN graphs for Renyi entropy estimation in high dimensions, IEEE Workshop on Statistical Signal Processing (SSP), pages 773-776, 2011. 
-%NOTE:
-%  This code has been written on the basis of implementation kindly provided by Kumar Sricharan.
+%
+%Note: This code has been written on the basis of the implementation kindly provided by Kumar Sricharan.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/HRenyi_weightedkNN_initialization.m

 %
 %Note:
 %   1)The estimator is treated as a cost object (co).
-%   2) We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.

code/H_I_D_C/base_estimators/HShannon_Edgeworth_estimation.m

 function [H] = HShannon_Edgeworth_estimation(Y,co)
-%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using Edgeworth expansion. Cost parameters are provided in the cost object co.
+%Estimates the Shannon entropy (H) of Y using Edgeworth expansion.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
-%REFERENCE: Marc Van Hulle. Edgeworth approximation of multivariate differential entropy. Neural Computation, 17(9), 1903-1910, 2005.
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
+%   Marc Van Hulle. Edgeworth approximation of multivariate differential entropy. Neural Computation, 17(9), 1903-1910, 2005.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/HShannon_Voronoi_estimation.m

 function [H] = HShannon_Voronoi_estimation(Y,co)
-%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using Voronoi regions. Cost parameters are provided in the cost object co.
+%Estimates the Shannon entropy (H) of Y using Voronoi regions.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
-%REFERENCE:
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
 %   Erik Miller. A new class of entropy estimators for multi-dimensional densities. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 297-300, 2003. 
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%co.mult:OK.
+
 [d,num_of_samples] = size(Y);
 %verification:
     if d==1

code/H_I_D_C/base_estimators/HShannon_kNN_k_estimation.m

 function [H] = HShannon_kNN_k_estimation(Y,co)
-%Estimates the Shannon differential entropy (H) of Y (Y(:,t) is the t^th sample)
-%using the kNN method (and neighbors S={k}). Cost parameters are provided in the cost object co.
+%Estimates the Shannon differential entropy (H) of Y using the kNN method (and neighbors S={k}).
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
 %REFERENCE:
-%   M. N. Goria, Nikolai N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. Journal of Nonparametric Statistics, 17: 277–297, 2005. (S={k})
+%   M. N. Goria, Nikolai N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. Journal of Nonparametric Statistics, 17: 277-297, 2005. (S={k})
 %   Harshinder Singh, Neeraj Misra, Vladimir Hnizdo, Adam Fedorowicz and Eugene Demchuk. Nearest neighbor estimates of entropy. American Journal of Mathematical and Management Sciences, 23, 301-321, 2003. (S={k})
-%   L. F. Kozachenko and Nikolai N. Leonenko. A statistical estimate for the entropy of a random vector. Problems of Information Transmission, 23:9–16, 1987. (S={1})
+%   L. F. Kozachenko and Nikolai N. Leonenko. A statistical estimate for the entropy of a random vector. Problems of Information Transmission, 23:9-16, 1987. (S={1})
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/HShannon_spacing_LL_estimation.m

 function [H] = HShannon_spacing_LL_estimation(Y,co)
-%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using Correa's spacing method (locally linear regression). Cost parameters are provided in the cost object co.
+%Estimates the Shannon entropy (H) of Y using Correa's spacing method (locally linear regression).
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods. 
 %
-%REFERENCE:
-%    	Juan C. Correa. A new estimator of entropy. Communications in Statistics - Theory and Methods, Volume 24, Issue 10, pp. 2439-2449, 1995.  
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
+%   Juan C. Correa. A new estimator of entropy. Communications in Statistics - Theory and Methods, Volume 24, Issue 10, pp. 2439-2449, 1995.  
 %      
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%co.mult:OK.
+
 [d,num_of_samples] = size(Y);
 
 %verification:

code/H_I_D_C/base_estimators/HShannon_spacing_V_estimation.m

 function [H] = HShannon_spacing_V_estimation(Y,co)
-%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using Vasicek's spacing method. Cost parameters are provided in the cost object co.
+%Estimates the Shannon entropy (H) of Y using Vasicek's spacing method.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
-%REFERENCE:
-%   Oldrich Vasicek. A test for normality based on sample entropy. Journal of the Royal Statistical Society, Series B, 38(1):54–59, 1976.
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
+%   Oldrich Vasicek. A test for normality based on sample entropy. Journal of the Royal Statistical Society, Series B, 38(1):54-59, 1976.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%co.mult:OK.
+
 [d,num_of_samples] = size(Y);
 
 %verification:

code/H_I_D_C/base_estimators/HShannon_spacing_Vb_estimation.m

 function [H] = HShannon_spacing_Vb_estimation(Y,co)
-%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using Vasicek's spacing method with a bias correction. Cost parameters are provided in the cost object co.
+%Estimates the Shannon entropy (H) of Y using Vasicek's spacing method with a bias correction.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
-%REFERENCE:
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
 %   Bert Van Es. Estimating Functionals Related to a Density by a Class of Statistics Based on Spacings. Scandinavian Journal of Statistics, 19:61-72, 1992.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%co.mult:OK.
+
 [d,num_of_samples] = size(Y);
 
 %verification:

code/H_I_D_C/base_estimators/HShannon_spacing_Vpconst_estimation.m

 function [H] = HShannon_spacing_Vpconst_estimation(Y,co)
-%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using Vasicek's spacing method with piecewise constant correction. Cost parameters are provided in the cost object co.
+%Estimates the Shannon entropy (H) of Y using Vasicek's spacing method with piecewise constant correction.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
-%REFERENCE: Nader Ebrahimi, Kurt Pflughoeft, and Ehsan S. Soofi. Two measures of sample entropy. Statistics and Probability Letters, 20:225-234, 1994.
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
+%   Nader Ebrahimi, Kurt Pflughoeft, and Ehsan S. Soofi. Two measures of sample entropy. Statistics and Probability Letters, 20:225-234, 1994.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%co.mult:OK.
+
 [d,num_of_samples] = size(Y);
 
 %verification:

code/H_I_D_C/base_estimators/HShannon_spacing_Vplin_estimation.m

 function [H] = HShannon_spacing_Vplin_estimation(Y,co)
-%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using Vasicek's spacing method with piecewise linear correction. Cost parameters are provided in the cost object co.
+%Estimates the Shannon entropy (H) of Y using Vasicek's spacing method with piecewise linear correction. 
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
-%REFERENCE: Nader Ebrahimi, Kurt Pflughoeft and Ehsan S. Soofi. Two measures of sample entropy. Statistics and Probability Letters, 20(3):225-234, 1994.
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
+%   Nader Ebrahimi, Kurt Pflughoeft and Ehsan S. Soofi. Two measures of sample entropy. Statistics and Probability Letters, 20(3):225-234, 1994.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%co.mult:OK.
+
 [d,num_of_samples] = size(Y);
 
 %verification:

code/H_I_D_C/base_estimators/HTsallis_kNN_k_estimation.m

 function [H] = HTsallis_kNN_k_estimation(Y,co)
-%Estimates the Tsallis entropy (H) of Y (Y(:,t) is the t^th sample)
-%using the kNN method (S={k}). Cost parameters are provided in the cost object co.
+%Estimates the Tsallis entropy (H) of Y using the kNN method (S={k}).
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
 %
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
 %REFERENCE: 
-%   Nikolai Leonenko, Luc Pronzato, and Vippal Savani. A class of Renyi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153–2182, 2008.
+%   Nikolai Leonenko, Luc Pronzato, and Vippal Savani. A class of Renyi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153-2182, 2008.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/HqRenyi_CDSS_estimation.m

 function [H] = HqRenyi_CDSS_estimation(Y,co)
-%Estimates the quadratic Renyi entropy (H) of Y (Y(:,t) is the t^th sample) based on continuously differentiable sample spacing. Cost parameters are provided in the cost object co.
+%Estimates the quadratic Renyi entropy (H) of Y based on continuously differentiable sample spacing.
 %
 %We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods. 
 %
-%REFERENCE: Umut Ozertem, Ismail Uysal, and Deniz Erdogmus. Continuously differentiable sample-spacing entropy estimation. IEEE Transactions on Neural Networks, 19:1978-1984, 2008.
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
+%   Umut Ozertem, Ismail Uysal, and Deniz Erdogmus. Continuously differentiable sample-spacing entropy estimation. IEEE Transactions on Neural Networks, 19:1978-1984, 2008.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%co.mult:OK.
+
 d = size(Y,1);
 
 %verification:

code/H_I_D_C/base_estimators/IGV_estimation.m

 function [I] = IGV_estimation(Y,ds,co)
 %Estimates the generalized variance (I). 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions. 
-%  co: initialized mutual information estimator object.
-%REFERENCE:
+%  co: mutual information estimator object.
+%
+%REFERENCE: 
 %   Zoltan Szabo and Andras Lorincz. Real and Complex Independent Subspace Analysis by Generalized Variance. ICA Research Network International Workshop (ICARN), pages 85-88, 2006.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 
 %co.mult:OK.
 
-if one_dimensional_problem(ds) && length(ds)==2
-    %initialization:
-        fs = IGV_dependency_functions;
-        num_of_functions = length(fs);
-        num_of_samples = size(Y,2);
-        I = 0;
-    %query for the current working environment:
-        environment_Matlab = working_environment_Matlab;
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+    if ~one_dimensional_problem(ds) || length(ds)~=2
+        error('There must be 2 pieces of one-dimensional subspaces (coordinates) for this estimator.');
+    end
+
+%initialization:
+    fs = IGV_dependency_functions;
+    num_of_functions = length(fs);
+    num_of_samples = size(Y,2);
+    I = 0;
+
+%query for the current working environment:
+    environment_Matlab = working_environment_Matlab;
         
-    %I computation:
-        for k = 1 : num_of_functions %pick the k^th function (fs{k})
-            fY = feval(fs{k},Y).';
-            if strcmp(co.dependency,'cov')
-                c = (fY(:,1)-mean(fY(:,1))).' * (fY(:,2)-mean(fY(:,2))) / (num_of_samples-1);
-                I = I + c.^2; %cov instead of corr
-            else %corr/cor
-                if environment_Matlab%Matlab
-                    I = I + (corr(fY(:,1),fY(:,2))).^2; %corr instead of cov        
-                else%Octave
-                    I = I + (cor(fY(:,1),fY(:,2))).^2; %cor (and not 'corr') instead of cov        
-                end
+%I computation:
+    for k = 1 : num_of_functions %pick the k^th function (fs{k})
+        fY = feval(fs{k},Y).';
+        if strcmp(co.dependency,'cov')
+            c = (fY(:,1)-mean(fY(:,1))).' * (fY(:,2)-mean(fY(:,2))) / (num_of_samples-1);
+            I = I + c.^2; %cov instead of corr
+        else %corr/cor
+            if environment_Matlab%Matlab
+                I = I + (corr(fY(:,1),fY(:,2))).^2; %corr instead of cov        
+            else%Octave
+                I = I + (cor(fY(:,1),fY(:,2))).^2; %cor (and not 'corr') instead of cov        
             end
-        end    
-else
-    error('There must be 2 pieces of one-dimensional subspaces (coordinates) for this estimator.');
-end
+        end
+    end    
+
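
A small illustration of the new ds/Y compatibility check (hypothetical data):

    Y  = rand(2,1000);            %size(Y,1) = 2
    ds = [1;1];                   %sum(ds) = 2 = size(Y,1): accepted
    co = IGV_initialization(1);
    I  = IGV_estimation(Y,ds,co);
    %with ds = [1;2], sum(ds) = 3 ~= size(Y,1), so the estimator now errors:
    %'The subspace dimensions are not compatible with Y.'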

code/H_I_D_C/base_estimators/IGV_initialization.m

 %
 %Note: 
 %   1)The estimator is treated as a cost object (co).
-%	2)We make use of the naming convention 'I<name>_initialization', to ease embedding new mutual information estimation methods.
+%   2)We make use of the naming convention 'I<name>_initialization', to ease embedding new mutual information estimation methods.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
 %OUTPUT:
 %   co: object (structure).
-%REFERENCE: 
-%   Zoltan Szabo, Andras Lorincz: Real and Complex Independent Subspace Analysis by Generalized Variance. ICARN-2006, pages 85-88.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/base_estimators/IHSIC_estimation.m

 function [I] = IHSIC_estimation(Y,ds,co)
 %Estimates mutual information (I) using the HSIC (Hilbert-Schmidt independence criterion) method. 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
-%REFERENCE:
+%  co: mutual information estimator object.
+%
+%REFERENCE: 
 %   Arthur Gretton, Olivier Bousquet, Alexander Smola and Bernhard Scholkopf: Measuring Statistical Dependence with Hilbert-Schmidt Norms. ALT 2005, 63-78.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 
 %co.mult:OK.
 
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+
 %initialization:
     num_of_samples = size(Y,2);
     num_of_comps = length(ds);

code/H_I_D_C/base_estimators/IHoeffding_estimation.m

 function [I] = IHoeffding_estimation(Y,ds,co)
 %Estimates mutual information (I) using the multivariate version of Hoeffding's Phi. 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
-%REFERENCE:
+%  co: mutual information estimator object.
+%
+%REFERENCE: 
 %   Sandra Gaiser, Martin Ruppert, Friedrich Schmid. A multivariate version of Hoeffding's Phi-Square. Journal of Multivariate Analysis. 101 (2010) 2571-2586.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
-if one_dimensional_problem(ds)
-    [d,num_of_samples] = size(Y);
-    U = copula_transformation(Y);
+%co.mult:OK.
 
-    %term1:
-        term1 = Hoeffding_term1(U);
-    %term2:
-        if co.small_sample_adjustment
-            term2 = -2 * sum(prod(1-U.^2-(1-U)/num_of_samples,1)) / (num_of_samples * 2^d);
-        else
-            term2 = -2 * sum(prod(1-U.^2,1)) / (num_of_samples * 2^d);
-        end
-    %term3:    
-        if co.small_sample_adjustment
-            term3 = ( (num_of_samples-1) * (2*num_of_samples-1) / (3*2*num_of_samples^2) )^d;
-        else
-            term3 = 1/3^d;
-        end
-    %I:
-        I = term1 + term2 + term3;        
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+    if ~one_dimensional_problem(ds)
+        error('The subspaces must be one-dimensional for this estimator.');
+    end
+
+[d,num_of_samples] = size(Y);
+U = copula_transformation(Y);
+
+%term1:
+   term1 = Hoeffding_term1(U);
+%term2:
+   if co.small_sample_adjustment
+       term2 = -2 * sum(prod(1-U.^2-(1-U)/num_of_samples,1)) / (num_of_samples * 2^d);
+   else
+       term2 = -2 * sum(prod(1-U.^2,1)) / (num_of_samples * 2^d);
+   end
+%term3:    
+   if co.small_sample_adjustment
+       term3 = ( (num_of_samples-1) * (2*num_of_samples-1) / (3*2*num_of_samples^2) )^d;
+   else
+       term3 = 1/3^d;
+   end
+%I:
+   I = term1 + term2 + term3;        
     
-    if co.mult%multiplicative constant, if needed
-        if co.small_sample_adjustment
-            t1 = sum((1 - [1:num_of_samples-1]/num_of_samples).^d .* (2*[1:num_of_samples-1]-1)) / num_of_samples^2;
-            t2 = -2 * sum( ( (num_of_samples * (num_of_samples-1) - [1:num_of_samples] .* ([1:num_of_samples]-1)) / (2*num_of_samples^2) ).^d ) / num_of_samples;
-            t3 = term3;
-            inv_hd = t1 + t2 + t3;%1/h(d,n)
-        else
-            inv_hd = 2/((d+1)*(d+2)) - factorial(d)/(2^d*prod([0:d]+1/2)) + 1/3^d;%1/h(d)
-        end
-        I = I / inv_hd;
+if co.mult%multiplicative constant, if needed
+    if co.small_sample_adjustment
+        t1 = sum((1 - [1:num_of_samples-1]/num_of_samples).^d .* (2*[1:num_of_samples-1]-1)) / num_of_samples^2;
+        t2 = -2 * sum( ( (num_of_samples * (num_of_samples-1) - [1:num_of_samples] .* ([1:num_of_samples]-1)) / (2*num_of_samples^2) ).^d ) / num_of_samples;
+        t3 = term3;
+        inv_hd = t1 + t2 + t3;%1/h(d,n)
+    else
+        inv_hd = 2/((d+1)*(d+2)) - factorial(d)/(2^d*prod([0:d]+1/2)) + 1/3^d;%1/h(d)
     end
-	I = sqrt(abs(I));
-else
-    error('The subspaces must be one-dimensional for this estimator.');
+    I = I / inv_hd;
 end
+
+I = sqrt(abs(I));
+

code/H_I_D_C/base_estimators/IKCCA_estimation.m

 function [I] = IKCCA_estimation(Y,ds,co)
 %Estimates mutual information (I) using the KCCA (kernel canonical correlation analysis) method. 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
+%  co: mutual information estimator object.
+%
 %REFERENCE:
 %   Zoltan Szabo, Barnabas Poczos, Andras Lorincz: Undercomplete Blind Subspace Deconvolution. Journal of Machine Learning Research 8(May):1063-1095, 2007. (multidimensional case, i.e., ds(j)>=1)
 %   Francis Bach, Michael I. Jordan. Kernel Independent Component Analysis. Journal of Machine Learning Research, 3: 1-48, 2002. (one-dimensional case, i.e., ds(1)=ds(2)=...=ds(end)=1)
 
 %co.mult:OK.
 
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+
 R = compute_matrixR_KCCA_KGV(Y,ds,co);
 
 [temp,eig_min] = eigs(R,1,'SM');%compute the smallest eigenvalue of R

code/H_I_D_C/base_estimators/IKGV_estimation.m

 function [I] = IKGV_estimation(Y,ds,co)
 %Estimates mutual information (I) using the KGV (kernel generalized variance) method. 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
+%  co: mutual information estimator object.
+%
 %REFERENCE:
 %   Zoltan Szabo, Barnabas Poczos, Andras Lorincz: Undercomplete Blind Subspace Deconvolution. Journal of Machine Learning Research 8(May):1063-1095, 2007. (multidimensional case, i.e., ds(j)>=1)
 %   Francis Bach, Michael I. Jordan. Kernel Independent Component Analysis. Journal of Machine Learning Research, 3: 1-48, 2002. (one-dimensional case, i.e., ds(1)=ds(2)=...=ds(end)=1)
 
 %co.mult:OK.        
 
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+
 R = compute_matrixR_KCCA_KGV(Y,ds,co);
 I = -log(det(R)) / 2;
  

code/H_I_D_C/base_estimators/IQMI_CS_KDE_direct_estimation.m

 function [I] = IQMI_CS_KDE_direct_estimation(Y,ds,co)
 %Estimates the Cauchy-Schwartz quadratic mutual information (I) directly based on Gaussian KDE (kernel density estimation). 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
-%REFERENCE:
+%  co: mutual information estimator object.
+%
+%REFERENCE: 
 %   Sohan Seth and Jose C. Principe. On speeding up computation in information theoretic learning. In International Joint Conference on Neural Networks (IJCNN), pages 2883-2887, 2009.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 
 %co.mult:OK.
 
-if one_dimensional_problem(ds) && length(ds)==2
-	I = qmi_cs_2(Y(1:ds(1),:).',Y(ds(1)+1:ds(1)+ds(2),:).',co.sigma);
-else
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+    if ~one_dimensional_problem(ds) || length(ds)~=2
         error('There must be 2 pieces of one-dimensional subspaces (coordinates) for this estimator.');
-end
+    end
 
+I = qmi_cs_2(Y(1:ds(1),:).',Y(ds(1)+1:ds(1)+ds(2),:).',co.sigma);
 
 
+

code/H_I_D_C/base_estimators/IQMI_CS_KDE_iChol_estimation.m

 function [I] = IQMI_CS_KDE_iChol_estimation(Y,ds,co)
 %Estimates the Cauchy-Schwartz quadratic mutual information (I) approximately, applying Gaussian KDE (kernel density estimation) and incomplete Cholesky decomposition. 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
-%REFERENCE:
+%  co: mutual information estimator object.
+%
+%REFERENCE: 
 %   Sohan Seth and Jose C. Principe. On speeding up computation in information theoretic learning. In International Joint Conference on Neural Networks (IJCNN), pages 2883-2887, 2009.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %co.mult:OK.
 
 %verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
     if length(ds)~=2 %M=2
         error('The number of components must be 2 for this estimator.');
     end

code/H_I_D_C/base_estimators/IQMI_ED_KDE_iChol_estimation.m

 function [I] = IQMI_ED_KDE_iChol_estimation(Y,ds,co)
 %Estimates the Euclidean distance based quadratic mutual information (I) approximately, applying Gaussian KDE (kernel density estimation) and incomplete Cholesky decomposition. 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
-%REFERENCE:
-%      Sohan Seth and Jose C. Principe. On speeding up computation in information theoretic learning. In International Joint Conference on Neural Networks (IJCNN), pages 2883-2887, 2009.
+%  co: mutual information estimator object.
+%
+%REFERENCE: 
+%   Sohan Seth and Jose C. Principe. On speeding up computation in information theoretic learning. In International Joint Conference on Neural Networks (IJCNN), pages 2883-2887, 2009.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %co.mult:OK.
 
 %verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
     if length(ds)~=2 %M=2
         error('The number of components must be 2 for this estimator.');
     end

code/H_I_D_C/base_estimators/ISW1_estimation.m

 function [I] = ISW1_estimation(Y,ds,co)
 %Estimates mutual information (I) using Schweizer-Wolff's sigma. 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
+%  co: mutual information estimator object.
+%
 %REFERENCE:
 %  Sergey Kirshner and Barnabas Poczos. ICA and ISA Using Schweizer-Wolff Measure of Dependence. International Conference on Machine Learning (ICML), 464-471, 2008.
 %  B. Schweizer and E. F. Wolff. On Nonparametric Measures of Dependence for Random Variables. The Annals of Statistics 9:879-885, 1981.
 
 %co.mult:OK.
 
-if one_dimensional_problem(ds) && length(ds)==2
-    %initialization:
-        bound = 1;
-        num_of_samples = size(Y,2);
-    if (num_of_samples <= co.max_num_of_samples_without_appr) %directly
-        r = rank_transformation(Y);
-        I = SW_sigma(r,bound);
-    else %approximation on a sparse grid
-        bin_size = ceil(num_of_samples/co.max_num_of_bins);
-        num_of_samples = floor(num_of_samples/bin_size)*bin_size;    
-        r = rank_transformation(Y(:,1:num_of_samples));
-        I = SW_sigma(r,bound,bin_size);
-    end    
-else
-    error('There must be 2 pieces of one-dimensional subspaces (coordinates) for this estimator.');
-end
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+    if ~one_dimensional_problem(ds) || length(ds)~=2
+        error('There must be 2 pieces of one-dimensional subspaces (coordinates) for this estimator.');
+    end
+
+%initialization:
+    bound = 1;
+    num_of_samples = size(Y,2);
+
+if (num_of_samples <= co.max_num_of_samples_without_appr) %directly
+    r = rank_transformation(Y);
+    I = SW_sigma(r,bound);
+else %approximation on a sparse grid
+    bin_size = ceil(num_of_samples/co.max_num_of_bins);
+    num_of_samples = floor(num_of_samples/bin_size)*bin_size;    
+
+    r = rank_transformation(Y(:,1:num_of_samples));
+    I = SW_sigma(r,bound,bin_size);
+end    
+
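
Worked numbers for the sparse-grid branch above (the max_num_of_bins value is hypothetical):

    num_of_samples = 10001; max_num_of_bins = 512;
    bin_size       = ceil(num_of_samples/max_num_of_bins)     %= 20
    num_of_samples = floor(num_of_samples/bin_size)*bin_size  %= 10000, i.e., one trailing sample is dropped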

code/H_I_D_C/base_estimators/ISWinf_estimation.m

 function [I] = ISWinf_estimation(Y,ds,co)
 %Estimates mutual information (I) using Schweizer-Wolff's kappa. 
 %
+%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
+%  co: mutual information estimator object.
+%
 %REFERENCE:
 %  Sergey Kirshner and Barnabas Poczos. ICA and ISA Using Schweizer-Wolff Measure of Dependence. International Conference on Machine Learning (ICML), 464-471, 2008.
 %  B. Schweizer and E. F. Wolff. On Nonparametric Measures of Dependence for Random Variables. The Annals of Statistics 9:879-885, 1981.
 
 %co.mult:OK.
 
-if one_dimensional_problem(ds) && length(ds)==2
-    %initialization:
-        bound = 1;
-        num_of_samples = size(Y,2);
-    if (num_of_samples <= co.max_num_of_samples_without_appr) %directly
-        r = rank_transformation(Y);
-        I = SW_kappa(r,bound);
-    else %approximation on a sparse grid
-        bin_size = ceil(num_of_samples/co.max_num_of_bins);
-        num_of_samples = floor(num_of_samples/bin_size)*bin_size;    
-        r = rank_transformation(Y(:,1:num_of_samples));
-        I = SW_kappa(r,bound,bin_size);
-    end    
-else
-    error('There must be 2 pieces of one-dimensional subspaces (coordinates) for this estimator.');
-end
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+    if ~one_dimensional_problem(ds) || length(ds)~=2
+        error('There must be 2 pieces of one-dimensional subspaces (coordinates) for this estimator.');
+    end
+
+%initialization:
+    bound = 1;
+    num_of_samples = size(Y,2);
+
+if (num_of_samples <= co.max_num_of_samples_without_appr) %directly
+    r = rank_transformation(Y);
+    I = SW_kappa(r,bound);
+else %approximation on a sparse grid
+    bin_size = ceil(num_of_samples/co.max_num_of_bins);
+    num_of_samples = floor(num_of_samples/bin_size)*bin_size;    
+    r = rank_transformation(Y(:,1:num_of_samples));
+    I = SW_kappa(r,bound,bin_size);
+end    
+

code/H_I_D_C/meta_estimators/DJdistance_estimation.m

 function [D_J] = DJdistance_estimation(Y1,Y2,co)
-%Estimates the J-distance of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
-%using the relation: D_J(f_1,f_2) = D(f_1,f_2)+D(f_2,f_1), where D denotes the Kullback-Leibler divergence. The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
+%Estimates the J-distance of Y1 and Y2 using the relation: D_J(f_1,f_2) = D(f_1,f_2)+D(f_2,f_1), where D denotes the Kullback-Leibler divergence.
 %
-%We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
+%Note:
+%   1)We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
+%   2)This is a meta method: the Kullback-Leibler divergence estimator can be arbitrary.
+%
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution.
+%  co: divergence estimator object.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 
 %co.mult:OK.
 
-D_J =  D_estimation(Y1,Y2,co.member_co) + D_estimation(Y2,Y1,co.member_co);
+D_J =  D_estimation(Y1,Y2,co.member_co) + D_estimation(Y2,Y1,co.member_co);
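A minimal usage sketch on toy data, relying only on the DJdistance_initialization/DJdistance_estimation pair appearing in this changeset; note that the two sample sets may have different sizes.

    Y1  = randn(3,2000);                    %samples from the first distribution
    Y2  = randn(3,3000);                    %samples from the second distribution
    co  = DJdistance_initialization(1);     %mult = 1; sets up co.member_co (KL divergence estimator)
    D_J = DJdistance_estimation(Y1,Y2,co);  %D_J(f_1,f_2) = D(f_1,f_2) + D(f_2,f_1)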

code/H_I_D_C/meta_estimators/DJdistance_initialization.m

 %Note:
 %   1)The estimator is treated as a cost object (co).
 %   2)We make use of the naming convention 'D<name>_initialization', to ease embedding new divergence estimation methods.
+%   3)This is a meta method: the Kullback-Leibler divergence estimator can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
     co.mult = mult;
     
 %other fields:
-    co.member_name = 'Renyi_kNN_k'; %you can change it to any Kullback-Leibler entropy estimator; the Renyi divergence (D_{R,alpha}) converges to the Shannon's (D): D_{R,alpha} -> D, as alpha-> 1.
+    co.member_name = 'Renyi_kNN_k'; %you can change it to any Kullback-Leibler divergence estimator; the Renyi divergence (D_{R,alpha}) converges to the Kullback-Leibler divergence (D): D_{R,alpha} -> D, as alpha -> 1.
     co.member_co = D_initialization(co.member_name,mult);

code/H_I_D_C/meta_estimators/DKL_CCE_HShannon_estimation.m

 function [D] = DKL_CCE_HShannon_estimation(Y1,Y2,co)
 %Estimates the Kullback-Leibler divergence (D) using the relation: D(f_1,f_2) = CE(f_1,f_2) - H(f_1). Here D denotes the Kullback-Leibler divergence, CE stands for cross-entropy and H is the Shannon differential entropy. 
-%This is a "meta" method, i.e., the cross-entropy and Shannon entropy estimators can be arbitrary.
 %
-%We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
+%Note:
+%   1)We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
+%   2)This is a meta method: the cross-entropy and Shannon entropy estimators can be arbitrary.
 %
 %INPUT:
 %  Y1: Y1(:,t) is the t^th sample from the first distribution.
-%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
-%  co: mutual information estimator object.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution.
+%  co: divergence estimator object.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
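A hedged sketch of how the D(f_1,f_2) = CE(f_1,f_2) - H(f_1) relation can be realized. C_estimation and H_estimation are assumed generic wrappers analogous to the D_estimation calls seen elsewhere in this changeset, and the member fields (co.CE_member_co, co.H_member_co) are illustrative names only.

    %hedged sketch; wrapper and field names are hypothetical:
    CE = C_estimation(Y1,Y2,co.CE_member_co); %cross-entropy of f_1 w.r.t. f_2
    H  = H_estimation(Y1,co.H_member_co);     %Shannon entropy of f_1
    D  = CE - H;                              %Kullback-Leibler divergence estimate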

code/H_I_D_C/meta_estimators/DKL_CCE_HShannon_initialization.m

 %Note:
 %   1)The estimator is treated as a cost object (co).
 %   2)We make use of the naming convention 'D<name>_initialization', to ease embedding new divergence estimation methods.
+%   3)This is a meta method: the cross-entropy and Shannon differential entropy estimators can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.

code/H_I_D_C/meta_estimators/HRPensemble_estimation.m

 function [H] = HRPensemble_estimation(Y,co)
-%Estimates entropy (H) from the average of H estimations on RP-ed (random projection) groups of samples; this a "meta" method, the applied H estimator can be arbitrary.
+%Estimates entropy (H) from the average of H estimations on RP-ed (random projection) groups of samples.
 %
-%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%Note:
+%   1)We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%   2)This is a meta method: the applied entropy estimator can be arbitrary.
 %
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  co: entropy estimator object.
+%
 %REFERENCE: 
-%	Zoltan Szabo, Andras Lorincz: Fast Parallel Estimation of High Dimensional Information Theoretical Quantities with Low Dimensional Random Projection Ensembles. ICA 2009, pages 146-153.
+%   Zoltan Szabo, Andras Lorincz: Fast Parallel Estimation of High Dimensional Information Theoretical Quantities with Low Dimensional Random Projection Ensembles. ICA 2009, pages 146-153.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
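A hedged sketch of the RP-ensemble idea. The group size, the reduced dimension and the member estimator are illustrative field names (co.group_size, co.dim_RP, co.member_co), and H_estimation is assumed to be a generic entropy wrapper analogous to D_estimation.

    %hedged sketch; field names are hypothetical:
    [d,T] = size(Y);
    g = co.group_size;                                           %number of samples per group
    num_of_groups = floor(T/g);
    H = 0;
    for k = 1 : num_of_groups
        R = randn(co.dim_RP,d) / sqrt(co.dim_RP);                %random projection matrix
        H = H + H_estimation(R*Y(:,(k-1)*g+1:k*g),co.member_co); %entropy of the RP-ed group
    end
    H = H / num_of_groups;                                       %average of the group-wise estimates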

code/H_I_D_C/meta_estimators/HRPensemble_initialization.m

 function [co] = HRPensemble_initialization(mult)
-%Initialization of the "meta" entropy estimator. The estimator is:
-%   1)constructed from the average of entropy estimations on groups of RP-ed (random projection) samples.
-%   2)treated as a cost object (co). 
+%Initialization of the "meta" RP-ensemble entropy estimator. The estimator is constructed from the average of entropy estimations on groups of RP-ed (random projection) samples.
 %
-%Here, we make use of the naming convention: 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%Note:
+%    1)The estimator is treated as a cost object (co). 
+%    2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%    3)This is a meta method: the applied entropy estimator can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
 %OUTPUT:
-%   co: object (structure).
+%   co: cost object (structure).
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/meta_estimators/HShannon_DKL_N_estimation.m

 function [H] = HShannon_DKL_N_estimation(Y,co)
-%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using the relation H(Y) = H(G) - D(Y,G), where G is Gaussian [N(E(Y),cov(Y)] and D is the Kullback-Leibler divergence.
-%This is a "meta" method, i.e., the Kullback-Leibler divergence estimator can be arbitrary.
+%Estimates the Shannon entropy (H) of Y using the relation H(Y) = H(G) - D(Y,G), where G is Gaussian [N(E(Y),cov(Y))] and D is the Kullback-Leibler divergence.
 %
-%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%Note:
+%   1)We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%   2)This is a meta method: the Kullback-Leibler divergence estimator can be arbitrary.
 %
-%REFERENCE: Quing Wang, Sanjeev R. Kulkarni, and Sergio Verdu. Universal estimation of information measures for analog sources. Foundations And Trends In Communications And Information Theory, 5:265-353, 2009.
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
+%
+%REFERENCE: 
+%   Qing Wang, Sanjeev R. Kulkarni, and Sergio Verdu. Universal estimation of information measures for analog sources. Foundations and Trends in Communications and Information Theory, 5:265-353, 2009.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
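A hedged sketch of the H(Y) = H(G) - D(Y,G) relation. The entropy of the fitted Gaussian is available in closed form, G is sampled from N(E(Y),cov(Y)) (mvnrnd needs the Statistics Toolbox), and co.member_co is an illustrative name for the embedded Kullback-Leibler divergence estimator.

    %hedged sketch; co.member_co is a hypothetical field name:
    [d,T] = size(Y);
    m = mean(Y,2);
    C = cov(Y');                                   %sample covariance
    H_G = d/2*log(2*pi*exp(1)) + 1/2*log(det(C));  %entropy of N(E(Y),cov(Y)), closed form
    G = mvnrnd(m',C,T)';                           %samples from the fitted Gaussian
    H = H_G - D_estimation(Y,G,co.member_co);      %H(Y) = H(G) - D(Y,G)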

code/H_I_D_C/meta_estimators/HShannon_DKL_N_initialization.m

 %Note:
 %   1)The estimator is treated as a cost object (co).
 %   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%   3)This is a meta method: the Kullback-Leibler divergence estimator can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
+%OUTPUT:
+%   co: cost object (structure).
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/meta_estimators/HShannon_DKL_U_estimation.m

 function [H] = HShannon_DKL_U_estimation(Y,co)
-%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using the relation H(Y) = -D(Y',U) + log(\prod_i(b_i-a_i)), where Y\in[a,b] = \times_{i=1}^d[a_i,b_i], D is the Kullback-Leibler divergence, Y' = linearly transformed version of Y to [0,1]^d, and U is the uniform distribution on [0,1]^d.
-%This is a "meta" method, i.e., the Kullback-Leibler divergence estimator can be arbitrary.
+%Estimates the Shannon entropy (H) of Y using the relation H(Y) = -D(Y',U) + log(\prod_i(b_i-a_i)), where Y\in[a,b] = \times_{i=1}^d[a_i,b_i], D is the Kullback-Leibler divergence, Y' = linearly transformed version of Y to [0,1]^d, and U is the uniform distribution on [0,1]^d.
 %
-%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%Note:
+%   1)We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%   2)This is a meta method: the Kullback-Leibler divergence estimator can be arbitrary.
+%
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
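A hedged sketch of the H(Y) = -D(Y',U) + log(\prod_i(b_i-a_i)) relation, with [a_i,b_i] estimated by the coordinate-wise sample range; co.member_co is an illustrative name for the embedded Kullback-Leibler divergence estimator.

    %hedged sketch; co.member_co is a hypothetical field name:
    a = min(Y,[],2);
    b = max(Y,[],2);
    Y2 = bsxfun(@rdivide,bsxfun(@minus,Y,a),b-a);  %Y' = Y transformed linearly to [0,1]^d
    U = rand(size(Y2));                            %uniform samples on [0,1]^d
    H = -D_estimation(Y2,U,co.member_co) + sum(log(b-a));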

code/H_I_D_C/meta_estimators/HShannon_DKL_U_initialization.m

 %Note:
 %   1)The estimator is treated as a cost object (co).
 %   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%   3)This is a meta method: the Kullback-Leibler divergence estimator can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
+%OUTPUT:
+%   co: cost object (structure).
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/meta_estimators/HTsallis_HRenyi_estimation.m

 function [H] = HTsallis_HRenyi_estimation(Y,co)
-%Estimates the Tsallis entropy (H) of Y (Y(:,t) is the t^th sample)
-%using via the Renyi entropy and the H_{T,alpha} = (e^{H_{R,alpha}(1-alpha)} - 1) / (1-alpha) relation. Cost parameters are provided in the cost object co.
+%Estimates the Tsallis entropy (H) of Y via the Renyi entropy and the H_{T,alpha} = (e^{H_{R,alpha}(1-alpha)} - 1) / (1-alpha) relation.
 %
-%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%Note:
+%   1)We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%   2)This is a meta method: the Renyi entropy estimator can be arbitrary.
+%
+%INPUT:
+%   Y: Y(:,t) is the t^th sample.
+%  co: entropy estimator object.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
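A hedged sketch of the Renyi-to-Tsallis conversion; co.alpha and co.member_co are illustrative field names, and H_estimation is assumed to be a generic entropy wrapper analogous to D_estimation.

    %hedged sketch; wrapper and field names are hypothetical:
    H_R = H_estimation(Y,co.member_co);              %Renyi entropy of order alpha
    H = (exp(H_R*(1-co.alpha)) - 1) / (1-co.alpha);  %H_{T,alpha} = (e^{H_{R,alpha}(1-alpha)} - 1)/(1-alpha)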

code/H_I_D_C/meta_estimators/HTsallis_HRenyi_initialization.m

 %Note:
 %   1)The estimator is treated as a cost object (co).
 %   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%   3)This is a meta method: the Renyi entropy estimator can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.

code/H_I_D_C/meta_estimators/Hcomplex_estimation.m

 function [H] = Hcomplex_estimation(Y,co)
-%Estimates complex entropy (H) from a real-valued (vector) entropy estimator; this a "meta" method, the applied estimator can be arbitrary.
+%Estimates complex entropy (H) using a real-valued (vector) entropy estimator.
 %
-%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%Note:
+%   1)We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%   2)This is a meta method: the applied estimator can be arbitrary.
 %
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.

code/H_I_D_C/meta_estimators/Hcomplex_initialization.m

 function [co] = Hcomplex_initialization(mult)
-%Initialization of the "meta" complex entropy estimator. The estimator is
-%   1)constructed from a real-valued (vector) entropy estimator.
-%   2)treated as a cost object (co). 
+%Initialization of the "meta" complex entropy estimator. The estimator is constructed from a real-valued (vector) entropy estimator.
 %
-%Here, we make use of the naming convention: 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%Note:
+%   1)The estimator is treated as a cost object (co). 
+%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%   3)This is a meta method: the real-valued (vector) entropy estimator can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
 %OUTPUT:
-%   co: object (structure).
+%   co: cost object (structure).
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/meta_estimators/Hensemble_estimation.m

 function [H] = Hensemble_estimation(Y,co)
-%Estimates entropy (H) from the average of entropy estimations on groups of samples; this a "meta" method, the applied entropy estimator can be arbitrary.
+%Estimates entropy (H) from the average of entropy estimations on groups of samples.
 %
-%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%Note:
+%   1)We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%   2)This is a meta method: the applied entropy estimator can be arbitrary.
 %
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  co: entropy estimator object.
+%
 %REFERENCE: 
-%  Jan Kybic: High-dimensional mutual information estimation for image registration. ICIP 2004, pages 17791782.
+%   Jan Kybic: High-dimensional mutual information estimation for image registration. ICIP 2004, pages 1779-1782.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
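A minimal usage sketch on toy data, using the Hensemble_initialization/Hensemble_estimation pair documented in this changeset.

    Y  = randn(3,6000);                %toy sample: Y(:,t) is the t^th observation
    co = Hensemble_initialization(1);  %mult = 1
    H  = Hensemble_estimation(Y,co);   %average of entropy estimates on sample groups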

code/H_I_D_C/meta_estimators/Hensemble_initialization.m

 function [co] = Hensemble_initialization(mult)
-%Initialization of the "meta" entropy estimator. The
-%estimator is:
-%   1)constructed from the average of entropy estimations on groups of samples.
-%   2)treated as a cost object (co). 
+%Initialization of the "meta" ensemble entropy estimator. The estimator is constructed from the average of entropy estimations on groups of samples.
 %
-%Here, we make use of the naming convention: 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%   3)This is a meta method: the applied entropy estimator can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
 %OUTPUT:
-%   co: object (structure).
+%   co: cost object (structure).
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/meta_estimators/IL2_DL2_estimation.m

 function [I] = IL2_DL2_estimation(Y,ds,co)
-%Estimates L2 mutual information (I) making use of an(y) estimator for L2 divergence; co is the cost object.
-%This is a  "meta" method, using the relation: I(y^1,...,y^M) = D(f_y,\prod_{m=1}^M f_{y^m}).
+%Estimates L2 mutual information (I) based on L2 divergence. The estimation is carried out according to the relation: I(y^1,...,y^M) = D(f_y,\prod_{m=1}^M f_{y^m}).
 %
-%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%Note:
+%   1)We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%   2)This is a meta method: the L2 divergence estimator can be arbitrary. 
 %
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
 %  co: mutual information estimator object.
+%
 %REFERENCE:
 %   Barnabas Poczos, Zoltan Szabo, Jeff Schneider: Nonparametric divergence estimators for Independent Subspace Analysis. EUSIPCO-2011, pages 1849-1853.
 %
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+
 [Y1,Y2] = div_sample_generation(Y,ds);
-I = D_estimation(Y1,Y2,co.member_co);
+I = D_estimation(Y1,Y2,co.member_co);
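A minimal usage sketch on toy data with two two-dimensional subspaces, using the IL2_DL2_initialization/IL2_DL2_estimation pair documented in this changeset.

    ds = [2;2];                      %two 2-dimensional subspaces
    Y  = randn(4,5000);              %Y(:,t) is the t^th sample; size(Y,1) = sum(ds)
    co = IL2_DL2_initialization(1);  %mult = 1; sets up co.member_co (L2 divergence estimator)
    I  = IL2_DL2_estimation(Y,ds,co);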

code/H_I_D_C/meta_estimators/IL2_DL2_initialization.m

 function [co] = IL2_DL2_initialization(mult)
-%Initialization of the "meta" L2 mutual information estimator, which is 
-%   1)based on an(y) estimator for L2 divergence,
-%   2)is treated as a cost object (co). 
+%Initialization of the "meta" L2 mutual information estimator based on L2 divergence. 
 %Mutual information is estimated using the relation: I(y^1,...,y^M) = D(f_y,\prod_{m=1}^M f_{y^m}).
 %
-%Here, we make use of the naming convention: 'I<name>_initialization', to ease embedding new mutual information estimation methods.
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We make use of the naming convention 'I<name>_initialization', to ease embedding new mutual information estimation methods.
+%   3)This is a meta method: the L2 divergence estimator can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
 %OUTPUT:
-%   co: object (structure).
+%   co: cost object (structure).
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/meta_estimators/IMMD_DMMD_estimation.m

 function [I] = IMMD_DMMD_estimation(Y,ds,co)
 %Estimates mutual information (I) using the MMD (maximum mean discrepancy)
 %method: I(Y_1,...,Y_d) = MMD(P_Z,P_U), where (i) Z =[F_1(Y_1);...;F_d(Y_d)] is the copula transformation of Y; F_i is the cdf of Y_i, (ii) P_U is the uniform distribution on [0,1]^d, (iii) dim(Y_1) = ... = dim(Y_d) = 1.
-%This is a "meta" method, i.e., the MMD estimator can be arbitrary.
+%
+%Note:
+%   1)We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%   2)This is a meta method: the MMD estimator can be arbitrary.
 %
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
-%  co: initialized mutual information estimator object.
+%  co: mutual information estimator object.
+%
 %REFERENCE:
-%  Barnabas Poczos, Zoubin Ghahramani, Jeff Schneider. Copula-based Kernel Dependency Measures, ICML-2012.
+%   Barnabas Poczos, Zoubin Ghahramani, Jeff Schneider. Copula-based Kernel Dependency Measures, ICML-2012.
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
-if one_dimensional_problem(ds)
-    Z = copula_transformation(Y);
-    U = rand(size(Z));
-    I = D_estimation(Z,U,co.member_co);
-else
-    error('The subspaces must be one-dimensional for this estimator.');
-end
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+    if ~one_dimensional_problem(ds)
+        error('The subspaces must be one-dimensional for this estimator.');
+    end
+
+Z = copula_transformation(Y);
+U = rand(size(Z));
+I = D_estimation(Z,U,co.member_co);
+
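For orientation, a hedged sketch of the copula transformation used above, built from coordinate-wise empirical cdfs; the actual copula_transformation helper of the package may differ in its handling of ties and normalization.

    %hedged sketch of an empirical copula transformation:
    [d,T] = size(Y);
    Z = zeros(d,T);
    for k = 1 : d
        [~,idx] = sort(Y(k,:));
        Z(k,idx) = (1:T) / T;  %empirical cdf values F_k(Y_k) at the sample points
    end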

code/H_I_D_C/meta_estimators/IMMD_DMMD_initialization.m

 function [co] = IMMD_DMMD_initialization(mult)
-%Initialization of the "meta" mutual information estimator, which is 
-%   1)based on an(y) estimator for MMD (maximum mean discrepancy),
-%   2)is treated as a cost object (co). 
+%Initialization of the "meta" mutual information estimator based on MMD (maximum mean discrepancy).
 %Mutual information is estimated using the relation: I(Y_1,...,Y_d) = MMD(P_Z,P_U), where (i) Z =[F_1(Y_1);...;F_d(Y_d)] is the
 %copula transformation of Y; F_i is the cdf of Y_i, (ii) P_U is the uniform distribution on [0,1]^d, (iii) dim(Y_1) = ... = dim(Y_d) = 1.
 %
-%Here, we make use of the naming convention: 'I<name>_initialization', to ease embedding new mutual information estimation methods.
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We make use of the naming convention: 'I<name>_initialization', to ease embedding new mutual information estimation methods.
+%   3)This is a meta method: the MMD estimator can be arbitrary. 
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
 %OUTPUT:
-%   co: object (structure).
+%   co: cost object (structure).
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_C/meta_estimators/IRenyi_DRenyi_estimation.m

 function [I] = IRenyi_DRenyi_estimation(Y,ds,co)
-%Estimates Renyi mutual information (I) making use of an(y) estimator for Renyi divergence; co is the cost object.
-%This is a  "meta" method, using the relation: I(y^1,...,y^M) = D(f_y,\prod_{m=1}^M f_{y^m}).
+%Estimates Renyi mutual information (I) based on Renyi divergence. The estimation is carried out according to the relation: I(y^1,...,y^M) = D(f_y,\prod_{m=1}^M f_{y^m}).
 %
-%We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%Note:
+%   1)We make use of the naming convention 'I<name>_estimation', to ease embedding new mutual information estimation methods.
+%   2)This is a meta method: the Renyi divergence estimator can be arbitrary. 
 %
 %INPUT:
 %   Y: Y(:,t) is the t^th sample.
 %  ds: subspace dimensions.
 %  co: mutual information estimator object.
+%
 %REFERENCE:
 %   Barnabas Poczos, Zoltan Szabo, Jeff Schneider: Nonparametric divergence estimators for Independent Subspace Analysis. EUSIPCO-2011, pages 1849-1853.
 %   Barnabas Poczos, Jeff Schneider: On the Estimation of alpha-Divergences. AISTATS-2011, pages 609-617.
 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+%verification:
+    if sum(ds) ~= size(Y,1);
+        error('The subspace dimensions are not compatible with Y.');
+    end
+
 [Y1,Y2] = div_sample_generation(Y,ds);
-I = D_estimation(Y1,Y2,co.member_co);
+I = D_estimation(Y1,Y2,co.member_co);

code/H_I_D_C/meta_estimators/IRenyi_DRenyi_initialization.m

 function [co] = IRenyi_DRenyi_initialization(mult)
-%Initialization of the "meta" Renyi mutual information estimator, which is 
-%   1)based on an(y) estimator for Renyi divergence,
-%   2)is treated as a cost object (co). 
+%Initialization of the "meta" Renyi mutual information estimator based on Renyi divergence.
 %Mutual information is estimated using the relation: I(y^1,...,y^M) = D(f_y,\prod_{m=1}^M f_{y^m}).
 %
-%Here, we make use of the naming convention: 'I<name>_initialization', to ease embedding new mutual information estimation methods.
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We make use of the naming convention 'I<name>_initialization', to ease embedding new mutual information estimation methods.
+%   3)This is a meta method: the Renyi divergence estimator can be arbitrary.
 %
 %INPUT:
 %   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.