Commits

Zoltan Szabo committed 30943a7

(i) Two Shannon entropy estimators based on the distance (KL divergence) from the uniform/Gaussian distributions, (ii) Shannon entropy estimator based on Voronoi regions: added; see 'HShannon_DKL_U_initialization.m', 'HShannon_DKL_U_estimation.m', 'HShannon_DKL_N_initialization.m', 'HShannon_DKL_N_estimation.m', 'HShannon_Voronoi_initialization.m', 'HShannon_Voronoi_estimation.m'. The documentation of the ITE package has been moved to 'https://bitbucket.org/szzoli/ite/downloads': Downloads.
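A minimal usage sketch of the new estimators (assuming the ITE code directory is on the Matlab/Octave path; Y below is hypothetical test data, one sample per column):

    Y = randn(3,2000);                       %hypothetical data: d=3, Y(:,t) is the t^th sample
    co = HShannon_DKL_N_initialization(1);   %Shannon entropy via KL divergence from the Gaussian
    H_N = HShannon_DKL_N_estimation(Y,co);
    co = HShannon_DKL_U_initialization(1);   %Shannon entropy via KL divergence from the uniform
    H_U = HShannon_DKL_U_estimation(Y,co);
    co = HShannon_Voronoi_initialization(1); %Voronoi region based Shannon entropy (requires d>=2)
    H_V = HShannon_Voronoi_estimation(Y,co);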

  • Parent commits d6b3e54
  • Tags release-0.20

Comments (0)

Files changed (67)

File CHANGELOG.txt

+v0.20 (Nov 19, 2012):
+-Two Shannon entropy estimators based on the distance (KL divergence) from the uniform/Gaussian distributions: added; see 'HShannon_DKL_U_initialization.m', 'HShannon_DKL_U_estimation.m', 'HShannon_DKL_N_initialization.m', 'HShannon_DKL_N_estimation.m'.
+-Shannon entropy estimator based on Voronoi regions: added; see 'HShannon_Voronoi_initialization.m', 'HShannon_Voronoi_estimation.m'.
+-The documentation of the ITE package has been moved to 'https://bitbucket.org/szzoli/ite/downloads': Downloads. (Its history unnecessarily increased the size of the Mercurial repository.) For further details, see 'doc:ITE_documentation.txt'.
+-Some code refactoring: (i) divergence estimators: notation (X,Y) -> (Y1,Y2); (ii) disp('Error:...')/disp('Warning:...') -> error('...')/warning('...'); (iii) 'Renyi_CDSS' renamed to 'qRenyi_CDSS'.
+
 v0.19 (Nov 16, 2012):
 -2 k-nearest neighbor based Kullback-Leibler divergence estimators: added. See 'DKL_kNN_k_initialization.m', 'DKL_kNN_k_estimation.m', 'DKL_kNN_kiTi_initialization.m', 'DKL_kNN_kiTi_estimation.m'.
 -compute_CDSS.cpp: 'sqrt(T)' -> 'sqrt(double(T))', to increase compatibility with compilers.

File README.md

 ITE offers solution methods for 
 
 - Independent Subspace Analysis (ISA) and 
-- its extensions to different linear-, controlled-, post nonlinear-, complex valued-, partially observed systems, as well as to systems
-with nonparametric source dynamics. 
+- its extensions to different linear-, controlled-, post nonlinear-, complex valued-, partially observed models, as well as to systems with nonparametric source dynamics. 
 
 Note: 
 
 - become a [Follower](https://bitbucket.org/szzoli/ite/follow) to be always up-to-date with ITE.
 - if you have an entropy, mutual information, divergence estimator/subtask solver with a GPLv3(>=)-compatible license that you would like to be embedded into ITE, feel free to contact me.
 
+**Download** the latest release: 
+
+- code: [zip](https://bitbucket.org/szzoli/ite/downloads/ITE-0.20_code.zip), [tar.bz2](https://bitbucket.org/szzoli/ite/downloads/ITE-0.20_code.tar.bz2), 
+- [documentation (pdf)](https://bitbucket.org/szzoli/ite/downloads/ITE-0.20_documentation.pdf).
+
+

File code/H_I_D/base_estimators/DBhattacharyya_kNN_k_estimation.m

-function [D] = DBhattacharyya_kNN_k_estimation(X,Y,co)
-%Estimates the Bhattacharyya distance of X and Y (X(:,t), Y(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in X [=size(X,2)] and Y [=size(Y,2)] can be different. Cost parameters are provided in the cost object co.
+function [D] = DBhattacharyya_kNN_k_estimation(Y1,Y2,co)
+%Estimates the Bhattacharyya distance of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
+%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
 
 %co.mult:OK.
 
-if size(X,1)~=size(Y,1)
-    disp('Error: the dimension of X and Y must be equal.');
-end
+%verification:
+    if size(Y1,1)~=size(Y2,1)
+        error('The dimension of Y1 and Y2 must be equal.');
+    end
 
 %D_ab (Bhattacharyya coefficient):
 	if co.p %[p(x)dx]
-		D_ab = estimate_Dtemp2(X,Y,co);
+		D_ab = estimate_Dtemp2(Y1,Y2,co);
 	else %[q(x)dx]
-		D_ab = estimate_Dtemp2(Y,X,co);
+		D_ab = estimate_Dtemp2(Y2,Y1,co);
 	end
 
 %D = -log(D_ab);%theoretically

File code/H_I_D/base_estimators/DHellinger_kNN_k_estimation.m

-function [D] = DHellinger_kNN_k_estimation(X,Y,co)
-%Estimates the Hellinger distance of X and Y (X(:,t), Y(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in X [=size(X,2)] and Y [=size(Y,2)] can be different. Cost parameters are provided in the cost object co.
+function [D] = DHellinger_kNN_k_estimation(Y1,Y2,co)
+%Estimates the Hellinger distance of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
+%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
 
 %co.mult:OK.
 
-if size(X,1)~=size(Y,1)
-    disp('Error: the dimension of X and Y must be equal.');
-end
+%verification:
+    if size(Y1,1)~=size(Y2,1)
+        error('The dimension of Y1 and Y2 must be equal.');
+    end
 
 %D_ab (Bhattacharyya coefficient):
 	if co.p %[p(x)dx]
-		D_ab = estimate_Dtemp2(X,Y,co);
+		D_ab = estimate_Dtemp2(Y1,Y2,co);
 	else %[q(x)dx]
-		D_ab = estimate_Dtemp2(Y,X,co);
+		D_ab = estimate_Dtemp2(Y2,Y1,co);
 	end
 
 %D = sqrt(1-D_ab);%theoretically
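The Bhattacharyya and Hellinger estimators above are driven by the same estimated Bhattacharyya coefficient D_ab; in equation form (a sketch of the relations stated in the '%theoretically' comments, with p and q the densities of Y1 and Y2):

    BC = \int \sqrt{p(x)*q(x)} dx,   D_Bhattacharyya = -log(BC),   D_Hellinger = \sqrt{1-BC},

where D_ab is the estimate of BC returned by estimate_Dtemp2.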

File code/H_I_D/base_estimators/DKL_kNN_k_estimation.m

-function [D] = DKL_kNN_k_estimation(X,Y,co)
-%Estimates the Kullback-Leibler divergence (D) of X and Y (X(:,t), Y(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in X [=size(X,2)] and Y [=size(Y,2)] can be different. Cost parameters are provided in the cost object co.
+function [D] = DKL_kNN_k_estimation(Y1,Y2,co)
+%Estimates the Kullback-Leibler divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
+%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
 
 %co.mult:OK.
 
-[dX,num_of_samplesX] = size(X);
-[dY,num_of_samplesY] = size(Y);
+[dY1,num_of_samplesY1] = size(Y1);
+[dY2,num_of_samplesY2] = size(Y2);
 
-if dX~=dY
-    disp('Error: the dimension of X and Y must be equal.');
-else
-    d = dX;
-    squared_distancesXX = kNN_squared_distances(X,X,co,1);
-    squared_distancesYX = kNN_squared_distances(Y,X,co,0);
-    dist_k_XX = sqrt(squared_distancesXX(end,:));
-    dist_k_YX = sqrt(squared_distancesYX(end,:));
-    D = d * mean(log(dist_k_YX./dist_k_XX)) + log(num_of_samplesY/(num_of_samplesX-1));
-end
+%verification:
+    if dY1~=dY2
+        error('The dimension of Y1 and Y2 must be equal.');
+    end
+
+d = dY1;
+squared_distancesY1Y1 = kNN_squared_distances(Y1,Y1,co,1);
+squared_distancesY2Y1 = kNN_squared_distances(Y2,Y1,co,0);
+dist_k_Y1Y1 = sqrt(squared_distancesY1Y1(end,:));
+dist_k_Y2Y1 = sqrt(squared_distancesY2Y1(end,:));
+D = d * mean(log(dist_k_Y2Y1./dist_k_Y1Y1)) + log(num_of_samplesY2/(num_of_samplesY1-1));
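For reference, the last line above written as a formula (with T1 = num_of_samplesY1, T2 = num_of_samplesY2, rho_k(t) = dist_k_Y1Y1(t) the k-NN distance of the t^th Y1 sample within Y1, and nu_k(t) = dist_k_Y2Y1(t) its k-NN distance to Y2):

    D_hat = (d/T1) * \sum_{t=1}^{T1} log( nu_k(t) / rho_k(t) ) + log( T2/(T1-1) ).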

File code/H_I_D/base_estimators/DKL_kNN_kiTi_estimation.m

-function [D] = DKL_kNN_kiTi_estimation(X,Y,co)
-%Estimates the Kullback-Leibler divergence (D) of X and Y (X(:,t), Y(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in X [=size(X,2)] and Y [=size(Y,2)] can be different. Cost parameters are provided in the cost object co.
+function [D] = DKL_kNN_kiTi_estimation(Y1,Y2,co)
+%Estimates the Kullback-Leibler divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
+%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
 
 %co.mult:OK.
 
-[dX,num_of_samplesX] = size(X);
-[dY,num_of_samplesY] = size(Y);
+[dY1,num_of_samplesY1] = size(Y1);
+[dY2,num_of_samplesY2] = size(Y2);
 
-if dX~=dY
-    disp('Error: the dimension of X and Y must be equal.');
-else
-    d = dX;
-    k1 = floor(sqrt(num_of_samplesX));
-    k2 = floor(sqrt(num_of_samplesY));
+%verification:
+    if dY1~=dY2
+        error('The dimension of Y1 and Y2 must be equal.');
+    end
+
+d = dY1;
+k1 = floor(sqrt(num_of_samplesY1));
+k2 = floor(sqrt(num_of_samplesY2));
     
-    co.k = k1;
-    squared_distancesXX = kNN_squared_distances(X,X,co,1);
+co.k = k1;
+squared_distancesY1Y1 = kNN_squared_distances(Y1,Y1,co,1);
     
-    co.k = k2;
-    squared_distancesYX = kNN_squared_distances(Y,X,co,0);
+co.k = k2;
+squared_distancesY2Y1 = kNN_squared_distances(Y2,Y1,co,0);
     
-    dist_k_XX = sqrt(squared_distancesXX(end,:));
-    dist_k_YX = sqrt(squared_distancesYX(end,:));
-    D = d * mean(log(dist_k_YX./dist_k_XX)) + log( k1/k2 * num_of_samplesY/(num_of_samplesX-1) );
-end
+dist_k_Y1Y1 = sqrt(squared_distancesY1Y1(end,:));
+dist_k_Y2Y1 = sqrt(squared_distancesY2Y1(end,:));
+D = d * mean(log(dist_k_Y2Y1./dist_k_Y1Y1)) + log( k1/k2 * num_of_samplesY2/(num_of_samplesY1-1) );
+

File code/H_I_D/base_estimators/DL2_kNN_k_estimation.m

-function [D] = DL2_kNN_k_estimation(X,Y,co)
-%Estimates the L2 divergence (D) of X and Y (X(:,t), Y(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in X [=size(X,2)] and Y [=size(Y,2)] can be different. Cost parameters are provided in the cost object co.
+function [D] = DL2_kNN_k_estimation(Y1,Y2,co)
+%Estimates the L2 divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
+%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
 
 %co.mult:OK.
 
-[dY,num_of_samplesY] = size(Y);
-[dX,num_of_samplesX] = size(X);
+[dY2,num_of_samplesY2] = size(Y2);
+[dY1,num_of_samplesY1] = size(Y1);
 
-if dX~=dY
-    disp('Error: the dimension of X and Y must be equal.');
-else
-    d = dX;
-end
+%verification:
+    if dY1~=dY2
+        error('The dimension of Y1 and Y2 must be equal.');
+    end
 
+d = dY1;
 c = volume_of_the_unit_ball(d);%volume of the d-dimensional unit ball
 
-squared_distancesXX = kNN_squared_distances(X,X,co,1);
-squared_distancesYX = kNN_squared_distances(Y,X,co,0);
-dist_k_XX = sqrt(squared_distancesXX(end,:));
-dist_k_YX = sqrt(squared_distancesYX(end,:));
+squared_distancesY1Y1 = kNN_squared_distances(Y1,Y1,co,1);
+squared_distancesY2Y1 = kNN_squared_distances(Y2,Y1,co,0);
+dist_k_Y1Y1 = sqrt(squared_distancesY1Y1(end,:));
+dist_k_Y2Y1 = sqrt(squared_distancesY2Y1(end,:));
 
-term1 = mean(dist_k_XX.^(-d)) * (co.k-1) / ((num_of_samplesX-1)*c);
-term2 = mean(dist_k_YX.^(-d)) * 2 * (co.k-1) / (num_of_samplesY*c);
-term3 = mean((dist_k_XX.^d) ./ (dist_k_YX.^(2*d))) *  (num_of_samplesX - 1) * (co.k-2) * (co.k-1) / (num_of_samplesY^2*c*co.k);
+term1 = mean(dist_k_Y1Y1.^(-d)) * (co.k-1) / ((num_of_samplesY1-1)*c);
+term2 = mean(dist_k_Y2Y1.^(-d)) * 2 * (co.k-1) / (num_of_samplesY2*c);
+term3 = mean((dist_k_Y1Y1.^d) ./ (dist_k_Y2Y1.^(2*d))) *  (num_of_samplesY1 - 1) * (co.k-2) * (co.k-1) / (num_of_samplesY2^2*c*co.k);
 L2 = term1-term2+term3;
 %D = sqrt(L2);%theoretically
 D = sqrt(abs(L2));%due to the finite number of samples L2 can be negative. In this case: sqrt(L2) is complex; to avoid such values we take abs().

File code/H_I_D/base_estimators/DMMDonline_estimation.m

-function [D] = DMMDonline_estimation(X,Y,co)
-%Estimates divergence (D) of X and Y (X(:,t), Y(:,t) is the t^th sample) using the MMD (maximum mean discrepancy) method, online. The number of samples in X [=size(X,2)] and Y [=size(Y,2)] must be equal. Cost parameters are provided in the cost object co.
+function [D] = DMMDonline_estimation(Y1,Y2,co)
+%Estimates divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample) using the MMD (maximum mean discrepancy) method, online. The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] must be equal. Cost parameters are provided in the cost object co.
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
 %co.mult:OK.
 
 %verification:
-    [dX,num_of_samplesX] = size(X);
-    [dY,num_of_samplesY] = size(Y);
-    %size(X) must be equal to size(Y):
-        if num_of_samplesX~=num_of_samplesY
-            disp('Warning: there must be equal number of samples in X and Y. Minimum of the sample numbers has been taken.');
+    [dY1,num_of_samplesY1] = size(Y1);
+    [dY2,num_of_samplesY2] = size(Y2);
+    %size(Y1) must be equal to size(Y2):
+        if num_of_samplesY1~=num_of_samplesY2
+            warning('There must be an equal number of samples in Y1 and Y2. The minimum of the sample numbers has been taken.');
         end
-        if dX~=dY
-            disp('Error: the dimension of X and Y must be equal.');
+        if dY1~=dY2
+            error('The dimension of Y1 and Y2 must be equal.');
         end
-    num_of_samples = min(num_of_samplesX,num_of_samplesY);        
+    num_of_samples = min(num_of_samplesY1,num_of_samplesY2);        
     %Number of samples must be even:
         if ~all_even(num_of_samples)
-            disp('Warning: the number of samples must be even, the last sample is discarded.');
+            warning('The number of samples must be even; the last sample is discarded.');
             num_of_samples = num_of_samples - 1;
         end
         
     odd_indices = [1:2:num_of_samples];
     even_indices = [2:2:num_of_samples];
     
-%Xi,Xj,Yi,Yj:
-    Xi = X(:,odd_indices);
-    Xj = X(:,even_indices);
-    Yi = Y(:,odd_indices);
-    Yj = Y(:,even_indices);
+%Y1i,Y1j,Y2i,Y2j:
+    Y1i = Y1(:,odd_indices);
+    Y1j = Y1(:,even_indices);
+    Y2i = Y2(:,odd_indices);
+    Y2j = Y2(:,even_indices);
 
-D = (K(Xi,Xj,co) + K(Yi,Yj,co) - K(Xi,Yj,co) - K(Xj,Yi,co)) / (num_of_samples/2);
+D = (K(Y1i,Y1j,co) + K(Y2i,Y2j,co) - K(Y1i,Y2j,co) - K(Y1j,Y2i,co)) / (num_of_samples/2);
 
 %-----------------------------
 function [s] = K(U,V,co)
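Transcribing the final D line into a formula (with T = num_of_samples, and assuming the truncated subfunction K(U,V,co) sums the kernel values k(u_t,v_t) over corresponding column pairs):

    D_hat = (2/T) * \sum_{i=1}^{T/2} [ k(y1_{2i-1},y1_{2i}) + k(y2_{2i-1},y2_{2i}) - k(y1_{2i-1},y2_{2i}) - k(y1_{2i},y2_{2i-1}) ],

i.e., the online estimator averages the usual MMD U-statistic core over disjoint sample pairs, which is what makes it linear in the sample number.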

File code/H_I_D/base_estimators/DRenyi_kNN_k_estimation.m

-function [D] = DRenyi_kNN_k_estimation(X,Y,co)
-%Estimates the Renyi divergence (D) of X and Y (X(:,t), Y(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in X [=size(X,2)] and Y [=size(Y,2)] can be different. Cost parameters are provided in the cost object co.
+function [D] = DRenyi_kNN_k_estimation(Y1,Y2,co)
+%Estimates the Renyi divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
+%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
 
 %co.mult:OK.
 
-if size(X,1)~=size(Y,1)
-    disp('Error: the dimension of X and Y must be equal.');
-end
+%verification:
+    if size(Y1,1)~=size(Y2,1)
+        error('The dimension of Y1 and Y2 must be equal.');
+    end
 
-Dtemp1 = estimate_Dtemp1(X,Y,co);
+Dtemp1 = estimate_Dtemp1(Y1,Y2,co);
 D = log(Dtemp1) / (co.alpha-1);

File code/H_I_D/base_estimators/DTsallis_kNN_k_estimation.m

-function [D] = DTsallis_kNN_k_estimation(X,Y,co)
-%Estimates the Tsallis divergence (D) of X and Y (X(:,t), Y(:,t) is the t^th sample)
-%using the kNN method (S={k}). The number of samples in X [=size(X,2)] and Y [=size(Y,2)] can be different. Cost parameters are provided in the cost object co.
+function [D] = DTsallis_kNN_k_estimation(Y1,Y2,co)
+%Estimates the Tsallis divergence (D) of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
+%using the kNN method (S={k}). The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
 
 %co.mult:OK.
 
-if size(X,1)~=size(Y,1)
-    disp('Error: the dimension of X and Y must be equal.');
-end
+%verification:
+    if size(Y1,1)~=size(Y2,1)
+        error('The dimension of Y1 and Y2 must be equal.');
+    end
 
-Dtemp1 = estimate_Dtemp1(X,Y,co);
+Dtemp1 = estimate_Dtemp1(Y1,Y2,co);
 D = (Dtemp1-1)/(co.alpha-1);
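The Renyi estimator above and this Tsallis estimator plug the same quantity Dtemp1 into two different formulas; Dtemp1 is presumably a k-NN based estimate of \int p^{alpha}(x) q^{1-alpha}(x) dx (estimate_Dtemp1 itself is not shown in this changeset):

    D_Renyi = log(Dtemp1) / (alpha-1),   D_Tsallis = (Dtemp1-1) / (alpha-1).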

File code/H_I_D/base_estimators/HRenyi_CDSS_estimation.m

-function [H] = HRenyi_CDSS_estimation(Y,co)
-%Estimates the quadratic Renyi entropy (H) of Y (Y(:,t) is the t^th sample) based on continuously differentiable sample spacing. Cost parameters are provided in the cost object co.
-%
-%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods. 
-%
-%REFERENCE: Umut Ozertem, Ismail Uysal, and Deniz Erdogmus. Continuously differentiable sample-spacing entropy estimation. IEEE Transactions on Neural Networks, 19:1978-1984, 2008.
-%
-%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
-%
-%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-d = size(Y,1);
-if d~=1
-    disp('Error: samples must be one-dimensional for this estimator.');
-else
-    Y_sorted = sort(Y);
-    H = compute_CDSS(Y_sorted);
-end

File code/H_I_D/base_estimators/HRenyi_CDSS_initialization.m

-function [co] = HRenyi_CDSS_initialization(mult)
-%Initialization of the quadratic (i.e., co.alpha = 2: fixed) Renyi entropy (H) estimator based on continuously differentiable sample spacing.
-%
-%Note:
-%   1)The estimator is treated as a cost object (co).
-%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
-%
-%INPUT:
-%   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
-%OUTPUT:
-%   co: cost object (structure).
-%
-%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
-%
-%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-%mandatory fields:
-    co.name = 'Renyi_CDSS';
-    co.mult = mult;   

File code/H_I_D/base_estimators/HRenyi_GSF_estimation.m

             load(FN,'constant');
             H = log( L / (constant*num_of_samples^co.alpha) ) / (1-co.alpha);
         else
-            disp('Error: the file containing the additive constant does not exist. You can precompute the additive constant via estimate_HRenyi_constant.m.');
+            error('The file containing the additive constant does not exist. You can precompute the additive constant via estimate_HRenyi_constant.m.');
         end
     else %estimation up to an additive constant
         H = log( L / num_of_samples^co.alpha ) / (1-co.alpha);

File code/H_I_D/base_estimators/HRenyi_MST_estimation.m

             load(FN,'constant');
             H = log( L / (constant*num_of_samples^co.alpha) ) / (1-co.alpha);
         else
-            disp('Error: the file containing the additive constant does not exist. You can precompute the additive constant via estimate_HRenyi_constant.m.');
+            error('The file containing the additive constant does not exist. You can precompute the additive constant via estimate_HRenyi_constant.m.');
         end
     else %estimation up to an additive constant
         H = log( L / num_of_samples^co.alpha ) / (1-co.alpha);

File code/H_I_D/base_estimators/HRenyi_kNN_1tok_estimation.m

             load(FN,'constant');
             H = log( L / (constant*num_of_samples^co.alpha) ) / (1-co.alpha);
         else
-            disp('Error: the file containing the additive constant does not exist. You can precompute the additive constant via estimate_HRenyi_constant.m.');
+            error('The file containing the additive constant does not exist. You can precompute the additive constant via estimate_HRenyi_constant.m.');
         end
     else %estimation up to an additive constant
         H = log( L / num_of_samples^co.alpha ) / (1-co.alpha);

File code/H_I_D/base_estimators/HRenyi_kNN_S_estimation.m

             load(FN,'constant');
             H = log( L / (constant*num_of_samples^co.alpha) ) / (1-co.alpha);
         else
-            disp('Error: the file containing the additive constant does not exist. You can precompute the additive constant via estimate_HRenyi_constant.m.');
+            error('The file containing the additive constant does not exist. You can precompute the additive constant via estimate_HRenyi_constant.m.');
         end
     else %estimation up to an additive constant
         H = log( L / num_of_samples^co.alpha ) / (1-co.alpha);

File code/H_I_D/base_estimators/HRenyi_spacing_E_estimation.m

 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
 [d,num_of_samples] = size(Y);
-if d~=1
-    disp('Error: samples must be one-dimensional for this estimator.');
-else
-    m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
-    Y_sorted = sort(Y);
-    Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
-    d1 = diff(Y_sorted)/2; %(y_(k)-y_(k-1))/2
+
+%verification:
+    if d~=1
+        error('The samples must be one-dimensional for this estimator.');
+    end
     
-    diff1 = ev(d1,1,m-1,m);
-    diff2 = ev(d1,0,num_of_samples-m,m) + ev(d1,m,num_of_samples,m);
-    diff3 = ev(d1,num_of_samples+1-m,num_of_samples-1,m);
+m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
+Y_sorted = sort(Y);
+Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
+d1 = diff(Y_sorted)/2; %(y_(k)-y_(k-1))/2
     
-    dm = Y_sorted([1+m:num_of_samples+m]+m) - Y_sorted([1-m:num_of_samples-m]+m);
-    dm_inv = 2./dm;%2/[y_(k+m)-y_(k-m)]
+diff1 = ev(d1,1,m-1,m);
+diff2 = ev(d1,0,num_of_samples-m,m) + ev(d1,m,num_of_samples,m);
+diff3 = ev(d1,num_of_samples+1-m,num_of_samples-1,m);
     
-    idiff1 = cumsum(ev(dm_inv,1-m,-1,m));
-    idiff2_temp = ev(dm_inv,1-m,num_of_samples-m,m);%L-sum
-    idiff2 = Lsum(idiff2_temp,m);
-    idiff3 = fliplr(cumsum(fliplr( ev(dm_inv,num_of_samples+2-2*m,num_of_samples-m,m) )));
+dm = Y_sorted([1+m:num_of_samples+m]+m) - Y_sorted([1-m:num_of_samples-m]+m);
+dm_inv = 2./dm;%2/[y_(k+m)-y_(k-m)]
+    
+idiff1 = cumsum(ev(dm_inv,1-m,-1,m));
+idiff2_temp = ev(dm_inv,1-m,num_of_samples-m,m);%L-sum
+idiff2 = Lsum(idiff2_temp,m);
+idiff3 = fliplr(cumsum(fliplr( ev(dm_inv,num_of_samples+2-2*m,num_of_samples-m,m) )));
 
-    term1 = sum(diff1.*(idiff1.^co.alpha));
-    term2 = sum(diff2.*(idiff2.^co.alpha));
-    term3 = sum(diff3.*(idiff3.^co.alpha));
+term1 = sum(diff1.*(idiff1.^co.alpha));
+term2 = sum(diff2.*(idiff2.^co.alpha));
+term3 = sum(diff3.*(idiff3.^co.alpha));
     
-    H = term1 + term2 + term3;
-    H = log(H/num_of_samples^co.alpha) / (1-co.alpha);
+H = term1 + term2 + term3;
+H = log(H/num_of_samples^co.alpha) / (1-co.alpha);
     
-end
-
 %-------------------
 function [v] = ev(v0,start_idx,end_idx,off)
 %Extraction of the [start_idx:end_idx] coordinates from vector v0, with

File code/H_I_D/base_estimators/HRenyi_spacing_V_estimation.m

 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
 [d,num_of_samples] = size(Y);
-if d~=1
-    disp('Error: samples must be one-dimensional for this estimator.');
-else
-    m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
-    Y_sorted = sort(Y);
-    Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
-    diffs = Y_sorted(2*m+1:num_of_samples+2*m) - Y_sorted(1:num_of_samples);
-    H = log(mean((num_of_samples / (2*m) * diffs).^(1-co.alpha))) / (1-co.alpha);
-end
+
+%verification:
+    if d~=1
+        error('The samples must be one-dimensional for this estimator.');
+    end
+
+m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
+Y_sorted = sort(Y);
+Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
+diffs = Y_sorted(2*m+1:num_of_samples+2*m) - Y_sorted(1:num_of_samples);
+H = log(mean((num_of_samples / (2*m) * diffs).^(1-co.alpha))) / (1-co.alpha);
+
 
   
 

File code/H_I_D/base_estimators/HShannon_Voronoi_estimation.m

+function [H] = HShannon_Voronoi_estimation(Y,co)
+%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using Voronoi regions. Cost parameters are provided in the cost object co.
+%
+%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%
+%REFERENCE:
+%   Erik Miller. A new class of entropy estimators for multi-dimensional densities. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 297-300, 2003. 
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+[d,num_of_samples] = size(Y);
+%verification:
+    if d==1
+        error('The dimension of the samples must be >=2 for this estimator.');
+    end
+
+[V,C] = voronoin(Y.');
+num_of_unbounded_regions = 0;
+H = 0;
+for k = 1 : length(C)
+    if isempty(find(C{k}==1)) %bounded region
+        %[c,volume] = convhulln(V(C{k},:)); 
+        [c,volume] = convhulln(V(C{k},:),{'Qx'}); 
+        H = H + log(num_of_samples * volume);
+    else %unbounded region
+	num_of_unbounded_regions = num_of_unbounded_regions + 1;
+    end
+end
+H = H / (num_of_samples-num_of_unbounded_regions);  
+
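In equation form, the loop above computes (with T = num_of_samples, V_t the Voronoi cell of the t^th sample, and the sum running over the samples whose cell is bounded):

    H_hat = 1/(T - T_unbounded) * \sum_{t: V_t bounded} log( T * vol(V_t) ),

matching the assumption of the initialization below that the distribution is uniform within each Voronoi region.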

File code/H_I_D/base_estimators/HShannon_Voronoi_initialization.m

+function [co] = HShannon_Voronoi_initialization(mult)
+%Initialization of the Voronoi region based Shannon differential entropy (H) estimator. The method assumes uniform distribution in each Voronoi region.
+%
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%
+%INPUT:
+%   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
+%OUTPUT:
+%   co: cost object (structure).
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%mandatory fields:
+    co.name = 'Shannon_Voronoi';
+    co.mult = mult;   
+            

File code/H_I_D/base_estimators/HShannon_spacing_LL_estimation.m

 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
 [d,num_of_samples] = size(Y);
-if d~=1
-    disp('Error: samples must be one-dimensional for this estimator.');
-else
-    m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
-    Y_sorted = sort(Y);
-    Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
-    b = colfilt(Y_sorted,[1,2*m+1],'sliding',@locally_linear_regression);
-    b = b(m+1:end-m);%discard the superfluous values
-    H = -mean(log(b/num_of_samples));
-end
 
+%verification:
+    if d~=1
+        error('The samples must be one-dimensional for this estimator.');
+    end
+
+m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
+Y_sorted = sort(Y);
+Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
+b = colfilt(Y_sorted,[1,2*m+1],'sliding',@locally_linear_regression);
+b = b(m+1:end-m);%discard the superfluous values
+H = -mean(log(b/num_of_samples));

File code/H_I_D/base_estimators/HShannon_spacing_V_estimation.m

 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
 [d,num_of_samples] = size(Y);
-if d~=1
-    disp('Error: samples must be one-dimensional for this estimator.');
-else
-    m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
-    Y_sorted = sort(Y);
-    Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
-    diffs = Y_sorted(2*m+1:num_of_samples+2*m) - Y_sorted(1:num_of_samples);
-    H = mean(log (num_of_samples / (2*m) * diffs));
-end
+
+%verification:
+    if d~=1
+        error('The samples must be one-dimensional for this estimator.');
+    end
+
+m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
+Y_sorted = sort(Y);
+Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
+diffs = Y_sorted(2*m+1:num_of_samples+2*m) - Y_sorted(1:num_of_samples);
+H = mean(log (num_of_samples / (2*m) * diffs));
+
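Written out, this is the Vasicek-type spacing estimate (transcribed from the code; y_(t) denotes the t^th order statistic, extended at the boundaries by y_(1) and y_(T) via the repmat padding):

    H_hat = (1/T) * \sum_{t=1}^{T} log( T/(2m) * (y_(t+m) - y_(t-m)) ),   m = floor(sqrt(T)).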
 
   
 

File code/H_I_D/base_estimators/HShannon_spacing_Vb_estimation.m

 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
 [d,num_of_samples] = size(Y);
-if d~=1
-    disp('Error: samples must be one-dimensional for this estimator.');
-else
-    m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m: can also be fixed
-    Y_sorted = sort(Y);
-    diffs = Y_sorted(1+m:num_of_samples) - Y_sorted(1:num_of_samples-m);
-    b = sum(1./[m:num_of_samples]) + log(m/(num_of_samples+1));%bias correction
-    H = mean(log((num_of_samples+1)/m*diffs)) + b;
-end
 
-  
+%verification:
+    if d~=1
+        error('The samples must be one-dimensional for this estimator.');
+    end
 
+m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m: can also be fixed
+Y_sorted = sort(Y);
+diffs = Y_sorted(1+m:num_of_samples) - Y_sorted(1:num_of_samples-m);
+b = sum(1./[m:num_of_samples]) + log(m/(num_of_samples+1));%bias correction
+H = mean(log((num_of_samples+1)/m*diffs)) + b;

File code/H_I_D/base_estimators/HShannon_spacing_Vpconst_estimation.m

 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
 [d,num_of_samples] = size(Y);
-if d~=1
-    disp('Error: samples must be one-dimensional for this estimator.');
-else
-    %as a base estimator:
-        m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty
-        Y_sorted = sort(Y);
-        Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
-        diffs = Y_sorted(2*m+1:num_of_samples+2*m) - Y_sorted(1:num_of_samples);
-        c = [ones(1,m),2*ones(1,num_of_samples-2*m),ones(1,m)]; %piecewise constant correction
-        H = mean(log (num_of_samples / m * diffs./c));
-    %as a meta estimator:
-        %HV = H_estimation(Y,co.member_co);
-        %H = HV + 2*m/num_of_samples * log(2);
-end
 
-  
+%verification:
+    if d~=1
+        error('The samples must be one-dimensional for this estimator.');
+    end
+    
+%as a base estimator:
+    m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty
+    Y_sorted = sort(Y);
+    Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
+    diffs = Y_sorted(2*m+1:num_of_samples+2*m) - Y_sorted(1:num_of_samples);
+    c = [ones(1,m),2*ones(1,num_of_samples-2*m),ones(1,m)]; %piecewise constant correction
+    H = mean(log (num_of_samples / m * diffs./c));
+%as a meta estimator:
+    %HV = H_estimation(Y,co.member_co);
+    %H = HV + 2*m/num_of_samples * log(2);
 

File code/H_I_D/base_estimators/HShannon_spacing_Vplin_estimation.m

 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
 [d,num_of_samples] = size(Y);
-if d~=1
-    disp('Error: samples must be one-dimensional for this estimator.');
-else
-    m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
-    Y_sorted = sort(Y);
-    Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
-    diffs = Y_sorted(2*m+1:num_of_samples+2*m) - Y_sorted(1:num_of_samples);
-    c = [1+([1:m]-1)/m,2*ones(1,num_of_samples-2*m),1+(num_of_samples-[num_of_samples-m+1:num_of_samples])/m]; %piecewise linear correction
-    H = mean(log (num_of_samples / m * diffs./c));
-end
+
+%verification:
+    if d~=1
+        error('The samples must be one-dimensional for this estimator.');
+    end
+
+m = floor(sqrt(num_of_samples));%m/num_of_samples->0, m,num_of_samples->infty; m<num_of_samples/2
+Y_sorted = sort(Y);
+Y_sorted = [repmat(Y_sorted(1),1,m),Y_sorted,repmat(Y_sorted(end),1,m)];
+diffs = Y_sorted(2*m+1:num_of_samples+2*m) - Y_sorted(1:num_of_samples);
+c = [1+([1:m]-1)/m,2*ones(1,num_of_samples-2*m),1+(num_of_samples-[num_of_samples-m+1:num_of_samples])/m]; %piecewise linear correction
+H = mean(log (num_of_samples / m * diffs./c));
+
 
   
 

File code/H_I_D/base_estimators/HqRenyi_CDSS_estimation.m

+function [H] = HqRenyi_CDSS_estimation(Y,co)
+%Estimates the quadratic Renyi entropy (H) of Y (Y(:,t) is the t^th sample) based on continuously differentiable sample spacing. Cost parameters are provided in the cost object co.
+%
+%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods. 
+%
+%REFERENCE: Umut Ozertem, Ismail Uysal, and Deniz Erdogmus. Continuously differentiable sample-spacing entropy estimation. IEEE Transactions on Neural Networks, 19:1978-1984, 2008.
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+d = size(Y,1);
+
+%verification:
+    if d~=1
+        error('The samples must be one-dimensional for this estimator.');
+    end
+
+Y_sorted = sort(Y);
+H = compute_CDSS(Y_sorted);
+

File code/H_I_D/base_estimators/HqRenyi_CDSS_initialization.m

+function [co] = HqRenyi_CDSS_initialization(mult)
+%Initialization of the quadratic (i.e., co.alpha = 2: fixed) Renyi entropy (H) estimator based on continuously differentiable sample spacing.
+%
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%
+%INPUT:
+%   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
+%OUTPUT:
+%   co: cost object (structure).
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%mandatory fields:
+    co.name = 'qRenyi_CDSS';
+    co.mult = mult;   
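A hypothetical quick check of the renamed estimator (the CDSS method requires one-dimensional samples, cf. the verification block above):

    Y = randn(1,5000);                  %hypothetical d=1 data
    co = HqRenyi_CDSS_initialization(1);
    H2 = HqRenyi_CDSS_estimation(Y,co); %quadratic (alpha=2) Renyi entropy estimate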

File code/H_I_D/base_estimators/IHoeffding_estimation.m

     end
 	I = sqrt(abs(I));
 else
-    disp('Error: the subspaces must be one-dimensional for this estimator.');
+    error('The subspaces must be one-dimensional for this estimator.');
 end

File code/H_I_D/base_estimators/ISW1_estimation.m

         I = SW_sigma(r,bound,bin_size);
     end    
 else
-    disp('Error: there must be 2 pieces of one-dimensional subspaces (coordinates) for this estimator.');
+    error('There must be two one-dimensional subspaces (coordinates) for this estimator.');
 end

File code/H_I_D/base_estimators/ISWinf_estimation.m

         I = SW_kappa(r,bound,bin_size);
     end    
 else
-    disp('Error: there must be 2 pieces of one-dimensional subspaces (coordinates) for this estimator.');
+    error('There must be two one-dimensional subspaces (coordinates) for this estimator.');
 end

File code/H_I_D/co_initialization.m

     case {'sumH'}
         co = H_initialization(cost_name,mult);
     otherwise
-        disp('Error: cost type=?');
+        error('cost type=?');
 end

File code/H_I_D/meta_estimators/DJdistance_estimation.m

-function [D_J] = DJdistance_estimation(X,Y,co)
-%Estimates the J-distance of X and Y (X(:,t), Y(:,t) is the t^th sample)
-%using the relation: D_J(f_1,f_2) = D(f_1,f_2)+D(f_2,f_1), where D denotes the Kullback-Leibler divergence. The number of samples in X [=size(X,2)] and Y [=size(Y,2)] can be different. Cost parameters are provided in the cost object co.
+function [D_J] = DJdistance_estimation(Y1,Y2,co)
+%Estimates the J-distance of Y1 and Y2 (Y1(:,t), Y2(:,t) is the t^th sample)
+%using the relation: D_J(f_1,f_2) = D(f_1,f_2)+D(f_2,f_1), where D denotes the Kullback-Leibler divergence. The number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different. Cost parameters are provided in the cost object co.
 %
 %We make use of the naming convention 'D<name>_estimation', to ease embedding new divergence estimation methods.
 %
 
 %co.mult:OK.
 
-D_J =  D_estimation(X,Y,co.member_co) + D_estimation(Y,X,co.member_co);
+D_J =  D_estimation(Y1,Y2,co.member_co) + D_estimation(Y2,Y1,co.member_co);

File code/H_I_D/meta_estimators/HShannon_DKL_N_estimation.m

+function [H] = HShannon_DKL_N_estimation(Y,co)
+%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using the relation H(Y) = H(G) - D(Y,G), where G is Gaussian [N(E(Y),cov(Y))] and D is the Kullback-Leibler divergence.
+%This is a "meta" method, i.e., the Kullback-Leibler divergence estimator can be arbitrary.
+%
+%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%
+%REFERENCE: Qing Wang, Sanjeev R. Kulkarni, and Sergio Verdu. Universal estimation of information measures for analog sources. Foundations And Trends In Communications And Information Theory, 5:265-353, 2009.
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%co.mult:OK.
+
+[d,num_of_samples] = size(Y); %dimension, number of samples
+
+%estimate the mean and covariance of Y:
+    m = mean(Y,2);
+    C = cov(Y.');
+    
+%entropy of N(m,C):
+    H_normal = 1/2 * log( (2*pi*exp(1))^d * det(C) );
+
+%generate samples from N(m,C):
+    R = chol(C); %R'*R=C
+    Y_normal = R.' * randn(d,num_of_samples) + m;    
+    
+H = H_normal - D_estimation(Y,Y_normal,co.member_co); 
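The identity used above, in equation form (G = N(m,C) matched to the empirical mean and covariance of Y; D(Y,G) is then estimated by the plugged-in KL estimator from the Cholesky-generated Gaussian sample Y_normal):

    H(Y) = H(G) - D(Y,G),   H(G) = 1/2 * log( (2*pi*e)^d * det(C) ).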

File code/H_I_D/meta_estimators/HShannon_DKL_N_initialization.m

+function [co] = HShannon_DKL_N_initialization(mult)
+%Initialization of the "meta" Shannon entropy estimator defined according to the relation H(Y) = H(G) - D(Y,G), where G is Gaussian [N(E(Y),cov(Y))] and D is the Kullback-Leibler divergence.
+%
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%
+%INPUT:
+%   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%mandatory fields:
+    co.name = 'Shannon_DKL_N';
+    co.mult = mult;
+    
+%other fields:
+    co.member_name = 'KL_kNN_k'; %you can change it to any Kullback-Leibler divergence estimator
+    co.member_co = D_initialization(co.member_name,mult);

File code/H_I_D/meta_estimators/HShannon_DKL_U_estimation.m

+function [H] = HShannon_DKL_U_estimation(Y,co)
+%Estimates the Shannon entropy (H) of Y (Y(:,t) is the t^th sample) using the relation H(Y) = -D(Y',U) + log(\prod_i(b_i-a_i)), where Y\in[a,b] = \times_{i=1}^d[a_i,b_i], D is the Kullback-Leibler divergence, Y' = linearly transformed version of Y to [0,1]^d, and U is the uniform distribution on [0,1]^d.
+%This is a "meta" method, i.e., the Kullback-Leibler divergence estimator can be arbitrary.
+%
+%We make use of the naming convention 'H<name>_estimation', to ease embedding new entropy estimation methods.
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%co.mult:OK.
+
+[d,num_of_samples] = size(Y); %dimension, number of samples
+
+%estimate the support of y (a,b):
+    a = min(Y,[],2);
+    b = max(Y,[],2);
+    
+%transform y to [0,1]^d:
+    Y2 = bsxfun(@plus,bsxfun(@rdivide,Y,b-a),a./(a-b));
+    
+%generate samples from U[0,1]^d:
+    U = rand(d,num_of_samples);    
+    
+H = -D_estimation(Y2,U,co.member_co) + log(prod(b-a)); 
+
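Similarly, in equation form (a_i = min_t Y(i,t), b_i = max_t Y(i,t); the bsxfun line realizes the coordinatewise rescaling Y'_i = (Y_i - a_i)/(b_i - a_i) to [0,1]^d):

    H(Y) = -D(Y',U[0,1]^d) + log( \prod_{i=1}^d (b_i - a_i) ).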

File code/H_I_D/meta_estimators/HShannon_DKL_U_initialization.m

+function [co] = HShannon_DKL_U_initialization(mult)
+%Initialization of the "meta" Shannon entropy estimator defined according to the relation H(Y) = -D(Y',U) + log(\prod_i(b_i-a_i)), where Y\in[a,b] = \times_{i=1}^d[a_i,b_i], D is the Kullback-Leibler divergence, Y' = linearly transformed version of Y to [0,1]^d, and U is the uniform distribution on [0,1]^d.
+%
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We make use of the naming convention 'H<name>_initialization', to ease embedding new entropy estimation methods.
+%
+%INPUT:
+%   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' no.
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%mandatory fields:
+    co.name = 'Shannon_DKL_U';
+    co.mult = mult;
+    
+%other fields:
+    co.member_name = 'KL_kNN_k'; %you can change it to any Kullback-Leibler divergence estimator
+    co.member_co = D_initialization(co.member_name,mult);

File code/H_I_D/meta_estimators/IMMD_DMMD_estimation.m

     U = rand(size(Z));
     I = D_estimation(Z,U,co.member_co);
 else
-    disp('Error: the subspaces must be one-dimensional for this estimator.');
+    error('The subspaces must be one-dimensional for this estimator.');
 end

File code/H_I_D/meta_estimators/IRenyi_HRenyi_estimation.m

     Z = copula_transformation(Y);
     I_alpha = -H_estimation(Z,co.member_co);
 else
-    disp('Error: the subspaces must be one-dimensional for this estimator.');
+    error('The subspaces must be one-dimensional for this estimator.');
 end

File code/H_I_D/utilities/I_similarity_matrix.m

 %Verify that S_ij >=0 (negative values may appear, e.g., in case of (i) an entropy estimator with an additive constant, or (ii) a small number of samples).
     nonnegativeS = ( sum(sum(S>=0)) == prod(size(S)) );
     if ~nonnegativeS
-        disp('Warning: similarity matrix S contains negative elements.');
-        disp('Possible reasons: entropy estimator with an additive constant or small number of samples.');
+        warning('Similarity matrix S contains negative elements. Possible reasons: entropy estimator with an additive constant or small number of samples.');
         disp('We apply the correction: S -> S - min_ij(S).');
         S = S - min(min(S));
     end

File code/H_I_D/utilities/compute_MST.m

     case 'pmtk3_Kruskal'
         [temp,L] = minSpanTreeKruskal(W);
     otherwise
-         disp('Error: MST method=?');
+         error('MST method=?');
 end

File code/H_I_D/utilities/div_sample_generation.m

 
 %verification [sum(ds)=size(Z,1)]:
     if sum(ds)~=size(Z,1)
-        disp('Error: sum(ds) should be equal to size(Z,1).');
+        error('sum(ds) should be equal to size(Z,1).');
     end
     
 %initialization:

File code/H_I_D/utilities/estimate_HRenyi_constant.m

         FN = filename_of_HRenyi_constant(d,co);
         save(FN,'constant','num_of_samples','num_of_MC_runs','co');
     otherwise
-        disp('Error: cost name=?');
+        error('cost name=?');
 end

File code/H_I_D/utilities/filename_of_HRenyi_constant.m

     case 'Renyi_GSF'
         FN = strcat('constant_H',co.name,'_d',num2str(d),'_k',num2str(co.k),'_alpha',num2str(co.alpha),'.mat');%co.kNNmethod (and its parameters), co.GSFmethod are not saved to the filename
     otherwise
-        disp('Error: cost name=?');
+        error('cost name=?');
 end
 
 function [str] = Sstr(ks)

File code/H_I_D/utilities/kNN_squared_distances.m

             [indices,squared_distances] = ann_octave(Y,Q,co,Y_equals_to_Q);
         end
     otherwise
-        disp('Error: kNN method=?');
+        error('kNN method=?');
 end

File code/IPA/data_generation/datasets/generate_symbols.m

                    'theta','iota','kappa','lambda','mu','nu','xi','o','pi',...
                    'rho','sigma','tau','upsilon','phi','chi','psi','omega'};  %English(26) + Greek alphabet(24)
     otherwise
-        disp('Error: data type=?');
+        error('data type=?');
 end

File code/IPA/data_generation/datasets/sample_subspaces.m

                     if num_of_comps == length(de)
                         e = sample_subspaces_multiD_geom(num_of_samples,de);
                     else
-                        disp('Error: number of components is not compatible with the given data type.')
+                        error('The number of components is not compatible with the given data type.')
                     end
                 else
                     de = de_separate_multiD_spherical(data_type);%test for 'multiD-spherical' and 'multiD1-D2-...-DM-spherical'
                         if num_of_comps == length(de)
                             e = sample_subspaces_multiD_spherical(num_of_samples,de);
                         else
-                            disp('Error: number of components is not compatible with the given data type.')
+                            error('The number of components is not compatible with the given data type.')
                         end
                     else
-                        disp('Error: data type=?');
+                        error('data type=?');
                     end
                 end
             end

File code/IPA/data_generation/datasets/sample_subspaces_ikeda.m

 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
-if num_of_comps ~= 2
-    disp('Error: number of components must be 2.');
-else
-    e1 = sample_subspaces_ikeda_map(num_of_samples,20,20,0.9994);
-    e2 = sample_subspaces_ikeda_map(num_of_samples,-100,30,0.998);
-    e = [e1;e2];
-    de = ones(num_of_comps,1) * 2;
-end
+%verification:
+    if num_of_comps ~= 2
+        error('The number of components must be 2.');
+    end
+    
+e1 = sample_subspaces_ikeda_map(num_of_samples,20,20,0.9994);
+e2 = sample_subspaces_ikeda_map(num_of_samples,-100,30,0.998);
+e = [e1;e2];
+de = ones(num_of_comps,1) * 2;
 
 %--------------------
 function [e] = sample_subspaces_ikeda_map(num_of_samples,x0,y0,lam)

File code/IPA/data_generation/models/generate_complex_ISA.m

 %
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
+
 %source(e); below [2d_1,...2d_M]-dimensional real subspaces are generated, and then transformed to the complex domain:
     [e_real,de_real] = sample_subspaces(data_type,num_of_comps,num_of_samples);
-    if all_even(de_real)
-        e = R2C_vector(e_real);
-        de = de_real / 2;
-    else
-        disp('Error: the associated real valued subspaces must be even dimensional!');
-    end
+    %verification:
+        if ~all_even(de_real)
+            error('The associated real valued subspaces must be even dimensional.');
+        end
+    e = R2C_vector(e_real);
+    de = de_real / 2;
     
 %mixing matrix(A):
     D = sum(de);%dimension of the hidden source

File code/IPA/demos/estimate_AR.m

                         [x_innovation_hat,Fx_hat] = estimate_AR_NIW(x,L);
                     case {'subspace','subspace-LL','LL'}
                         [x_innovation_hat,Fx_hat] = estimate_AR_E4(x,L,method);
-                    otherwise
-                        disp('Error: ARfit method=?');
                 end
             if num_of_Ls>1%compute SBC, and store the estimation if it is the best seen so far
                 SBC_L = compute_SBC(L,Lmax,num_of_samples,x_innovation_hat);
                 [siglev,x_innovation_hat] = arres(w_temp, Fx_hat, x.');
                 x_innovation_hat = x_innovation_hat.';
         otherwise
-            disp('Error: AR identification method=?');
+            error('AR identification method=?');
     end
 disp('ARfit: ready.');    

File code/IPA/demos/estimate_ARX.m

                 Fx_hat = M(:,1:Dx*Lx);
                 Bx_hat = M(:,Dx*Lx+1:end);
         otherwise
-            disp('Error: ARX method=?');
+            error('ARX method=?');
     end
     disp('ARX identification: ready.');

File code/IPA/demos/estimate_ICA.m

                                      'stabilization','on','maxNumIterations',3000); 
         W_hat = W_ICA * W_whiten;
     otherwise
-        disp('Error: ICA method=?');
+        error('ICA method=?');
 end
 
 disp('ICA estimation: ready.');

File code/IPA/demos/estimate_ISA.m

 %   de_hat: in case of known subspace dimensions ('unknown_dimensions = 0') de_hat = de; else it contains the estimated subspace dimensions; ordered increasingly.
 %REFERENCE:
 %   Zoltan Szabo, Barnabas Poczos, Andras Lorincz: Undercomplete Blind Subspace Deconvolution. Journal of Machine Learning Research 8(May):1063-1095, 2007. (proof; sufficient conditions for the ISA separation theorem)
-%   Jean-Francois Cardoso. Multidimensional independent component analysis. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 1941–1944, 1998. (conjecture)
-%   Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle. Fetal electrocardiogram extraction by source subspace separation. In IEEE SP/Athos Workshop on Higher-Order Statistics, pages 134–138, 1995. ('conjecture')
+%   Jean-Francois Cardoso. Multidimensional independent component analysis. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 1941–1944, 1998. (conjecture)
+%   Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle. Fetal electrocardiogram extraction by source subspace separation. In IEEE SP/Athos Workshop on Higher-Order Statistics, pages 134–138, 1995. ('conjecture')
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
                 [de_hat,per] = sort_subspaces_dimensions(de_hat);%sort the subspaces in increasing order with respect to their dimensions.
                 perm_ICA = perm_ICA(per);%apply permutation 'perm_ICA' and then permutation 'per'
             else
-                disp('Error: cost type=?');
+                error('cost type=?');
             end
         otherwise
-            disp('Error: unknown dimensions=?');
+            error('unknown dimensions=?');
     end
 
 %estimated ISA source(e_hat), ISA demixing matrix (W_hat):        

File code/IPA/demos/estimate_clustering_UD1_S.m

 %   ClusterIndicators: cluster indicator matrix; ClusterIndicators(i,j)=1 <=> the i^th coordinate belongs to the j^th group; 0 otherwise.            
 %REFERENCE:
 %   'NCut':
-%       Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22 (8), 888-905, 2000.
+%       Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22 (8), 888-905, 2000.
 %       Normalized Cut Segmentation Code, Timothee Cour, Stella Yu, Jianbo Shi. Copyright 2004 University of Pennsylvania, Computer and Information Science Department.
 %   'SP1,2,3': Ulrike von Luxburg. A Tutorial on Spectral Clustering. Statistics and Computing 17 (4), 2007.
-%   'SP2': Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22 (8), 888-905, 2000.
-%   'SP3': Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: analysis and an algorithm. Advances in Neural Information Processing Systems (NIPS), pp. 849-856, 2002.
+%   'SP2': Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22 (8), 888-905, 2000.
+%   'SP3': Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: analysis and an algorithm. Advances in Neural Information Processing Systems (NIPS), pp. 849-856, 2002.
 
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
         if environment_Matlab%Matlab
             ClusterIndicators = ncutW(S,num_of_comps);
         else%Octave
-            disp('Error: we are working in Octave environment. => NCut: not available.');
+            error('We are working in the Octave environment. => NCut: not available.');
         end
     case {'SP1','SP2','SP3'}%'SP2'='NCut' but they are different implementations ('SP2':purely Matlab)
         method = str2num(opt_type(3));%'SP1'->1; 'SP2'->2; 'SP3'->3
         ClusterIndicators = SpectralClustering(S,num_of_comps,method);
     otherwise
-        disp('Error: optimization type=?');        
+        error('optimization type=?');
 end
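
As a reminder of the cluster indicator format documented in this file's header (ClusterIndicators(i,j)=1 iff the i^th coordinate belongs to the j^th group), a tiny sketch building such a matrix from an illustrative label vector:

```matlab
%labels(i) = j <=> the i^th coordinate belongs to the j^th group (toy values).
labels = [1 1 2 3 2];
ClusterIndicators = zeros(length(labels),max(labels));
ClusterIndicators(sub2ind(size(ClusterIndicators),1:length(labels),labels)) = 1;
```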

File code/IPA/demos/estimate_complex_ICA.m

             [e_hat,W_ICA] = cfastica_public(x,defl);
             W_hat = W_ICA * W_whiten;
     otherwise
-        disp('Error: Complex ICA method=?');
+        error('Complex ICA method=?');
 end
 
 disp('Complex ICA estimation: ready.');

File code/IPA/demos/estimate_complex_ISA_C.m

                 [de_hat,per] = sort_subspaces_dimensions(de_hat);%sort the subspaces in increasing order with respect to their dimensions.
                 perm_ICA = perm_ICA(per);%apply permutation 'perm_ICA' and then permutation 'per'
             else
-                disp('Error: cost type=?');
+                error('cost type=?');
             end
         otherwise
-            disp('Error: unknown dimensions=?');
+            error('unknown dimensions=?');
     end
     
 %estimated ISA source(e_hat), ISA demixing matrix (W_hat):        

File code/IPA/demos/estimate_fAR.m

         V = x(:,L+1:end);
         E = V - recursive_Nadaraya_Watson_estimator(U,V,fARmethod_parameters);
     otherwise
-        disp('Error: fAR fit method=?');
+        error('fAR fit method=?');
 end

File code/IPA/demos/estimate_gaussianization.m

         case 'rank'
             c = gaussianizationmethod_parameters.c;
         otherwise
-            disp('Error: Gaussianization method=?');
+            error('Gaussianization method=?');
     end
 
 %initialization:
 disp('Gaussianization: started.');
 
 for k = 1 : D%gaussianize every row of s separately
-    switch method
-        case 'rank'
-            %r:=rank(s_k):
-                [s2,I] = sort(s(k,:)); [s3,r] = sort(I);
-            s_transformed(k,:) = norminv(c*r/(num_of_samples+1)+(1-c)/2,0,1);
-        otherwise
-            disp('Error: Gaussianization method=?');
-    end
+    %r:=rank(s_k):
+        [s2,I] = sort(s(k,:)); [s3,r] = sort(I);
+        s_transformed(k,:) = norminv(c*r/(num_of_samples+1)+(1-c)/2,0,1);
 end
 
 disp('Gaussianization: ready.');
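
The loop above maps every coordinate through its empirical ranks to Gaussian quantiles. A standalone sketch of one such row transform (norminv needs the Statistics Toolbox in Matlab, or the statistics package in Octave; the sample s_k is a toy input):

```matlab
%Rank-based gaussianization of one row s_k (1 x T); c in (0,1] shrinks the
%ranks away from the boundaries before inverting the standard normal CDF.
T = 1000; c = 0.97;
s_k = randn(1,T).^3;                           %toy non-Gaussian sample
[s2,I] = sort(s_k); [s3,r] = sort(I);          %r := rank(s_k)
s_gauss = norminv(c*r/(T+1) + (1-c)/2, 0, 1);
```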

File code/IPA/demos/estimate_mAR.m

                         [x_innovation_hat,Fx_hat] = estimate_mAR_NIW(x,L);
                     case {'subspace','subspace-LL','LL'}
                         [x_innovation_hat,Fx_hat] = estimate_mAR_E4(x,L,method);
-                    otherwise
-                        disp('Error: mARfit method=?');
                 end
             if num_of_Ls>1%compute SBC, and store the estimation if it is the best seen so far
                 SBC_L = compute_SBC(L,Lmax,num_of_samples,x_innovation_hat);
                 [siglev,x_innovation_hat] = arres(w_temp, Fx_hat, x.');
                 x_innovation_hat = x_innovation_hat.';
         otherwise
-            disp('Error: mAR identification method=?');
+            error('mAR identification method=?');
     end
 disp('mARfit: ready.');    

File code/IPA/optimization/clustering_UD0.m

 %   cost_type: 
 %       i)Values: 'I','sumH', 'sum-I','Irecursive', 'Ipairwise' or 'Ipairwise1d'.
 %       ii)Special cost types/schemes allow for more efficient optimization procedures. 
-%   cost_name: depending on entropy/mutual information based ISA formulation [see below A)-F)], it can take the values:
-%   i)entropy:
-%       a)base methods: 'Shannon_kNN_k', 'Renyi_kNN_k', 'Renyi_kNN_1tok', 'Renyi_kNN_S', 'Renyi_weightedkNN', 'Renyi_MST', 'Renyi_GSF', 'Tsallis_kNN_k', 'Shannon_Edgeworth'.
-%       b)meta methods: 'complex', 'ensemble', 'RPensemble', 'Tsallis_HRenyi'.
-%   ii)mutual information:
-%       a)base methods: 'GV', 'KCCA', 'KGV', 'HSIC', 'Hoeffding', 'SW1', 'SWinf'.
-%       b)meta methods: 
-%           -Hilbert transformation based method: 'complex',
-%           -divergence based techniques: 'L2_DL2', 'Renyi_DRenyi', 'Tsallis_DTsallis', 'MMD_DMMD',
-%           -entropy based techniques (copula): 'Renyi_HRenyi', 
-%           -entropy based techniques (de Morgan's law): 'Shannon_HShannon'. 
+%   cost_name: depends on the applied ISA formulation [see below A)-F)]. For the names (cost_name) of the available entropy/mutual information estimators, please see the accompanying documentation.
 %--------
 %   Introduction: The ISA problem consists of the minimization: J(W) = I(y^1,...,y^M) ->
 %   min_W, where W is orthogonal (the latter can be assumed w.l.o.g.=whitening). Provided that the ISA separation theorem holds
                 S = I_similarity_matrix(s_ICA,co); %similarity matrix
                 perm_ICA = clustering_UD0_greedy_pairadditive_wrt_coordinates(S,ds);
             otherwise 
-                disp('Error: cost type=?');
+                error('cost type=?');
         end
     case 'CE'
         switch cost_type
                 S = I_similarity_matrix(s_ICA,co); %similarity matrix
                 perm_ICA = clustering_UD0_CE_pairadditive_wrt_coordinates(S,ds);
             otherwise 
-               disp('Error: optimization type=?');
+               error('optimization type=?');
         end   
     case 'exhaustive' %can be useful for small dimensions (small sum(ds)), or for cost verification
         switch cost_type
                 S = I_similarity_matrix(s_ICA,co); %similarity matrix
                 perm_ICA = clustering_UD0_exhaustive_pairadditive_wrt_coordinates(S,ds);
             otherwise 
-                disp('Error: cost type=?');
+                error('cost type=?');
         end
     otherwise 
-        disp('Error: optimization type=?');
+        error('optimization type=?');
 end
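
Written out, the ISA objective from this file's header comment is the minimization below; since whitening keeps H(y) constant over orthogonal W, the mutual information cost and the sum-of-entropies cost ('sumH') share the same minimizers (a standard identity, given here only as a gloss):

```latex
J(W) = I\left(y^1,\ldots,y^M\right)
     = \sum_{m=1}^{M} H\left(y^m\right) - H(y)
     \to \min_{W:\,WW^{T}=I}.
```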

File code/IPA/optimization/cost_additive_wrt_subspaces_one_subspace.m

     case 'sum-I'
         cost = -I_estimation(Y,ones(size(Y,1),1),co); %I(y_1,...,y_d)
     otherwise
-        disp('Error: cost type=?');
+        error('cost type=?');
 end

File code/IPA/optimization/cost_general.m

     case 'Irecursive'
         cost = cost_Irecursive(Y,ds,co);
     otherwise
-        disp('Error: cost type=?');
+        error('cost type=?');
 end

File code/IPA/optimization/set_mult.m

     case 'Irecursive'
         mult = 1;
     otherwise
-        disp('Error: cost type=?');
+        error('cost type=?');
 end
 
 %-----------------------------

File code/ITE_install.m

             if ~exist('shared/downloaded/ARfit')
                 %mkdir('shared/downloaded/ARfit');%Matlab
                 if ~mkdir('shared/downloaded') || ~mkdir('shared/downloaded/ARfit');%Octave
-                    disp('Error: making directory for ARfit failed.');
+                    error('Making directory for ARfit: failed.');
                 end
             end
         unzip(FN,strcat(ITE_code_dir, '/shared/downloaded/ARfit'));
         delete(FN);%delete the .zip file
         disp('ARfit package: downloading, extraction: ready.');
     else
-        disp('Error: ARfit package could not been downloaded.');
+        error('The ARfit package could not be downloaded.');
     end    
 end
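
The installer's download-and-extract pattern, condensed into a sketch; the URL is a placeholder (ITE_install.m holds the real one), and per the installer's own comments a single mkdir suffices in Matlab while Octave needs the parent directory created first:

```matlab
url = 'http://example.com/arfit.zip';            %hypothetical placeholder URL
[FN,ok] = urlwrite(url,'arfit.zip');             %download; ok = 1 on success
if ok
    if ~exist('shared/downloaded/ARfit','dir')
        mkdir('shared/downloaded/ARfit');        %Matlab creates the parent too
    end
    unzip(FN,'shared/downloaded/ARfit');
    delete(FN);                                  %delete the .zip file
else
    error('The ARfit package could not be downloaded.');
end
```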
 

File code/shared/Amari_index_ISA.m

 %   amari_ind = Amari_index_ISA([5 -3 4; 3 5/2 5/2; 4 5/2 5/2],[1,2],'uniform',2); %result:1.
 %REFERENCE:
 %  	Zoltan Szabo, Barnabas Poczos: Nonparametric Independent Process Analysis. EUSIPCO-2011, pages 1718-1722. (general d_m-dimensional subspaces, ISA).
-%   Shun-ichi Amari, Andrzej Cichocki, and Howard H. Yang. A new learning algorithm for blind signal separation. NIPS '96, pages 757-763. (one-dimensional case, i.e., ICA).
+%   Shun-ichi Amari, Andrzej Cichocki, and Howard H. Yang. A new learning algorithm for blind signal separation. NIPS '96, pages 757-763. (one-dimensional case, i.e., ICA).
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %
         case 'subspace-dim-proportional'
             V = 1./(ds*ds.'); %V_{ij} = 1 / (ds(i)*ds(j))
         otherwise 
-            disp('Error: weighting scheme=?');
+            error('weighting scheme=?');
     end
 G_shrinked_weighted = G_shrinked .* V; %G_shrinked_weighted=[g^{ij}]
 amari_ind = Amari_index(G_shrinked_weighted);
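
After weighting, Amari_index is applied to the shrunken matrix. For reference, a sketch of one standard Amari index normalization, which is zero exactly for a scaled permutation matrix; ITE's Amari_index.m may scale differently:

```matlab
%Standard Amari index of a K x K matrix G; 0 iff G is a scaled permutation.
function r = Amari_index_sketch(G)
    K = size(G,1); G = abs(G);
    r = sum(sum(G,2)./max(G,[],2) - 1) + sum(sum(G,1)./max(G,[],1) - 1);
    r = r / (2*K*(K-1));
end
```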

File code/shared/MA_polynomial.m

                     H(:,(k-1)*De+1:k*De) = Hz{k} * H0;
                 end
         else
-            disp('Error: dim(x) must be equal to dim(e), i.e., we are focusing on complete systems!');
+            error('dim(x) must be equal to dim(e), i.e., we are focusing on complete systems!');
         end
     otherwise 
-        disp('Error: MA type=?');
+        error('MA type=?');
 end

File doc/ITE_documentation.pdf

Binary file removed.

File doc/ITE_documentation.txt

+From release 0.20, the documentation is available at "https://bitbucket.org/szzoli/ite/downloads": Downloads:
+1)'ITE-<release>_documentation.pdf': documentation of a given release of the toolbox.
+2)'ITE-latest_documentation.pdf': temporary documentation, compatible with the current source.
+
+Note: 2) exists only if 1) does not cover the latest source.
+
+