Commits

Zoltan Szabo committed 2c2556b

Jensen-Tsallis divergence and Bregman distance estimation: added; see 'DJensenTsallis_HTsallis_initialization.m', 'DJensenTsallis_HTsallis_estimation.m', 'DBregman_kNN_k_initialization.m' and 'DBregman_kNN_k_estimation.m'.
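
For orientation, a minimal usage sketch of the two new estimators, assuming ITE's generic D_initialization / D_estimation wrappers (the ones used for the other divergence estimators) and placeholder d x T sample matrices Y1, Y2:

    Y1 = randn(3,2000); Y2 = randn(3,3000);                %toy samples from the two distributions
    co_JT = D_initialization('JensenTsallis_HTsallis',1);  %Jensen-Tsallis divergence estimator (meta)
    D_JT = D_estimation(Y1,Y2,co_JT);
    co_B = D_initialization('Bregman_kNN_k',1);             %kNN-based Bregman distance estimator (base)
    D_B = D_estimation(Y1,Y2,co_B);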

Files changed (25)

+v0.38 (June 1, 2013):
+-Jensen-Tsallis divergence estimation: added; see 'DJensenTsallis_HTsallis_initialization.m' and 'DJensenTsallis_HTsallis_estimation.m'.
+-Bregman distance estimation: added; see 'DBregman_kNN_k_initialization.m' and 'DBregman_kNN_k_estimation.m'.
+
 v0.37 (May 12, 2013):
 -K divergence estimation: added; see 'DK_DKL_initialization.m' and 'DK_DKL_estimation.m'.
 -L divergence estimation: added; see 'DL_DKL_initialization.m' and 'DL_DKL_estimation.m'.
 
 - `entropy (H)`: Shannon entropy, Rényi entropy, Tsallis entropy (Havrda and Charvát entropy), complex entropy,
 - `mutual information (I)`: generalized variance, kernel canonical correlation analysis, kernel generalized variance, Hilbert-Schmidt independence criterion, Shannon mutual information, L2 mutual information, Rényi mutual information, Tsallis mutual information, copula-based kernel dependency, multivariate version of Hoeffding's Phi, Schweizer-Wolff's sigma and kappa, complex mutual information, Cauchy-Schwartz quadratic mutual information, Euclidean distance based quadratic mutual information, distance covariance, distance correlation, approximate correntropy independence measure,
-- `divergence (D)`: Kullback-Leibler divergence (relative entropy, I directed divergence), L2 divergence, Rényi divergence, Tsallis divergence, Hellinger distance, Bhattacharyya distance, maximum mean discrepancy (kernel distance, an integral probability metric), J-distance (symmetrised Kullback-Leibler divergence, J divergence), Cauchy-Schwartz divergence, Euclidean distance based divergence, energy distance (specially the Cramer-Von Mises distance), Jensen-Shannon divergence, Jensen-Rényi divergence, K divergence, L divergence,
+- `divergence (D)`: Kullback-Leibler divergence (relative entropy, I directed divergence), L2 divergence, Rényi divergence, Tsallis divergence, Hellinger distance, Bhattacharyya distance, maximum mean discrepancy (kernel distance), J-distance (symmetrised Kullback-Leibler divergence, J divergence), Cauchy-Schwartz divergence, Euclidean distance based divergence, energy distance (in particular the Cramer-Von Mises distance), Jensen-Shannon divergence, Jensen-Rényi divergence, K divergence, L divergence, certain f-divergences (Csiszár-Morimoto divergence, Ali-Silvey distance), non-symmetric Bregman distance (Bregman divergence), Jensen-Tsallis divergence,
 - `association measures (A)`, including `measures of concordance`: multivariate extensions of Spearman's rho (Spearman's rank correlation coefficient, grade correlation coefficient), correntropy, centered correntropy, correntropy coefficient, correntropy induced metric, centered correntropy induced metric, multivariate extension of Blomqvist's beta (medial correlation coefficient), multivariate conditional version of Spearman's rho, lower/upper tail dependence via conditional Spearman's rho,
 - `cross quantities (C)`: cross-entropy.
 
 
 **Download** the latest release: 
 
-- code: [zip](https://bitbucket.org/szzoli/ite/downloads/ITE-0.37_code.zip), [tar.bz2](https://bitbucket.org/szzoli/ite/downloads/ITE-0.37_code.tar.bz2), 
-- [documentation (pdf)](https://bitbucket.org/szzoli/ite/downloads/ITE-0.37_documentation.pdf).
+- code: [zip](https://bitbucket.org/szzoli/ite/downloads/ITE-0.38_code.zip), [tar.bz2](https://bitbucket.org/szzoli/ite/downloads/ITE-0.38_code.tar.bz2), 
+- [documentation (pdf)](https://bitbucket.org/szzoli/ite/downloads/ITE-0.38_documentation.pdf).
 
 

code/H_I_D_A_C/base_estimators/DBregman_kNN_k_estimation.m

+function [D] = DBregman_kNN_k_estimation(Y1,Y2,co)
+%Estimates the Bregman distance (D) of Y1 and Y2 using the kNN method (S={k}). 
+%
+%We use the naming convention 'D<name>_estimation' to ease embedding new divergence estimation methods.
+%
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
+%REFERENCE: 
+%   Nikolai Leonenko, Luc Pronzato, and Vippal Savani. A class of Renyi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153-2182, 2008.
+%   Imre Csiszar. Generalized projections for non-negative functions. Acta Mathematica Hungarica, 68:161-185, 1995.
+%   Lev M. Bregman. The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7:200-217, 1967.
+
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%co.mult:OK.
+
+[dY1,num_of_samplesY1] = size(Y1);
+[dY2,num_of_samplesY2] = size(Y2);
+
+%verification:
+    if dY1~=dY2
+        error('The dimension of the samples in Y1 and Y2 must be equal.');
+    end
+
+I_alpha_Y1 = estimate_Ialpha(Y1,co);
+I_alpha_Y2 = estimate_Ialpha(Y2,co);
+D_temp3 = estimate_Dtemp3(Y1,Y2,co);
+    
+D = I_alpha_Y2 + I_alpha_Y1 / (co.alpha-1) - co.alpha/(co.alpha-1) * D_temp3;
+
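
Read together with 'estimate_Ialpha' (which presumably estimates \int f^alpha(u)du, as in the Renyi/Tsallis kNN estimators) and 'estimate_Dtemp3' (which, per its header, estimates \int p(u)q^{alpha-1}(u)du), the last line appears to implement the non-symmetric Bregman distance in the form below; this is a reconstruction from the code, not a formula quoted from the ITE documentation:

    D_{B,alpha}(f_1,f_2) = \int [ f_2^alpha(u) + 1/(alpha-1)*f_1^alpha(u) - alpha/(alpha-1)*f_1(u)*f_2^{alpha-1}(u) ] du,

estimated as D = I_alpha_Y2 + I_alpha_Y1/(alpha-1) - alpha/(alpha-1)*D_temp3, with Y1 ~ f_1 and Y2 ~ f_2.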

code/H_I_D_A_C/base_estimators/DBregman_kNN_k_initialization.m

+function [co] = DBregman_kNN_k_initialization(mult)
+%Initialization of the kNN (k-nearest neighbor, S={k}) based Bregman distance estimator.
+%
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We use the naming convention 'D<name>_initialization' to ease embedding new divergence estimation methods.
+%
+%INPUT:
+%   mult: is the multiplicative constant relevant (needed) in the estimation? '=1' means yes, '=0' means no.
+%OUTPUT:
+%   co: cost object (structure).
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%mandatory fields:
+    co.name = 'Bregman_kNN_k';
+    co.mult = mult;
+    
+%other fields:
+    %Possibilities for 'co.kNNmethod' (see 'kNN_squared_distances.m'): 
+        %I: 'knnFP1': fast pairwise distance computation and C++ partial sort; parameter: co.k.                
+        %II: 'knnFP2': fast pairwise distance computation; parameter: co.k. 						
+        %III: 'knnsearch' (Matlab Statistics Toolbox): parameters: co.k, co.NSmethod ('kdtree' or 'exhaustive').
+        %IV: 'ANN' (approximate nearest neighbor); parameters: co.k, co.epsi. 
+		%I:
+            co.kNNmethod = 'knnFP1';
+            co.k = 3;%k-nearest neighbors				
+		%II:
+            %co.kNNmethod = 'knnFP2';
+            %co.k = 3;%k-nearest neighbors				
+        %III:
+            %co.kNNmethod = 'knnsearch';
+            %co.k = 3;%k-nearest neighbors
+            %co.NSmethod = 'kdtree';
+        %IV:
+            %co.kNNmethod = 'ANN';
+            %co.k = 3;%k-nearest neighbors
+            %co.epsi = 0; %=0: exact kNN; >0: approximate kNN, where the returned (not squared) distances can exceed the true distance by at most a factor of (1+epsi).
+    co.alpha = 0.95; %assumption: not equal to 1
+    
+%initialize the ann wrapper in Octave, if needed:
+    initialize_Octave_ann_wrapper_if_needed(co.kNNmethod);

code/H_I_D_A_C/base_estimators/DHellinger_kNN_k_estimation.m

 %  co: divergence estimator object.
 %
 %REFERENCE: 
-%   Barnabas Poczos and Liang Xiong and Dougal Sutherland and Jeff Schneider. Support Distribution Machines. Technical Report, 2012. "http://arxiv.org/abs/1202.0302"
+%   Barnabas Poczos and Liang Xiong and Dougal Sutherland and Jeff Schneider. Support Distribution Machines. Technical Report, 2012. "http://arxiv.org/abs/1202.0302" (estimation)
 %
 %Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
 %

code/H_I_D_A_C/base_estimators/DRenyi_kNN_k_initialization.m

             %co.k = 3;%k-nearest neighbors
             %co.epsi = 0; %=0: exact kNN; >0: approximate kNN, the true (not squared) distances can not exceed the real distance more than a factor of (1+epsi).
             
-    co.alpha = 0.99; %The Renyi divergence (D_{R,alpha}) equals to the Kullback-Leibler divergence (D) in limit: D_{R,alpha} -> D, as alpha -> 1.
+    co.alpha = 0.99; %alpha \ne 1. The Renyi divergence (D_{R,alpha}) equals the Kullback-Leibler divergence (D) in the limit: D_{R,alpha} -> D, as alpha -> 1.
 
 %initialize the ann wrapper in Octave, if needed:
     initialize_Octave_ann_wrapper_if_needed(co.kNNmethod);

code/H_I_D_A_C/base_estimators/DTsallis_kNN_k_initialization.m

             %co.k = 3;%k-nearest neighbors
             %co.epsi = 0; %=0: exact kNN; >0: approximate kNN, the true (not squared) distances can not exceed the real distance more than a factor of (1+epsi).
 				
-    co.alpha = 0.99; %The Tsallis divergence (D_{T,alpha}) equals to the Kullback-Leibler divergence (D) in limit: D_{T,alpha} -> D, as alpha -> 1.
+    co.alpha = 0.99; %alpha \ne 1. The Tsallis divergence (D_{T,alpha}) equals the Kullback-Leibler divergence (D) in the limit: D_{T,alpha} -> D, as alpha -> 1.
 
 %initialize the ann wrapper in Octave, if needed:
     initialize_Octave_ann_wrapper_if_needed(co.kNNmethod);

code/H_I_D_A_C/base_estimators/HRenyi_GSF_initialization.m

     %Method to compute the geodesic spanning forest:
         co.GSFmethod = 'MatlabBGL_Kruskal';
 
-    co.alpha = 0.99; %The Renyi entropy (H_{R,alpha}) equals to the Shannon differential entropy (H) in limit: H_{R,alpha} -> H, as alpha -> 1.
+    co.alpha = 0.99; %alpha \ne 1. The Renyi entropy (H_{R,alpha}) equals the Shannon differential entropy (H) in the limit: H_{R,alpha} -> H, as alpha -> 1.
     co.additive_constant_is_relevant = 0; %1:additive constant is relevant (you can precompute it via 'estimate_HRenyi_constant.m'), 0:not relevant
     
 %initialize the ann wrapper in Octave, if needed:

code/H_I_D_A_C/base_estimators/HRenyi_MST_initialization.m

     co.mult = mult;
     
 %other fields:
-    co.alpha = 0.99; %The Renyi entropy (H_{R,alpha}) equals to the Shannon differential entropy (H) in limit: H_{R,alpha} -> H, as alpha -> 1.
+    co.alpha = 0.99; %alpha \ne 1. The Renyi entropy (H_{R,alpha}) equals the Shannon differential entropy (H) in the limit: H_{R,alpha} -> H, as alpha -> 1.
     %Possibilites for the MST (minimum spanning tree) method:
         co.MSTmethod = 'MatlabBGL_Prim';
         %co.MSTmethod = 'MatlabBGL_Kruskal';

code/H_I_D_A_C/base_estimators/HRenyi_kNN_1tok_initialization.m

             %co.k = 3;%k-nearest neighbors
             %co.epsi = 0; %=0: exact kNN; >0: approximate kNN, the true (not squared) distances can not exceed the real distance more than a factor of (1+epsi).
 
-    co.alpha = 0.99; %The Renyi entropy (H_{R,alpha}) equals to the Shannon differential entropy (H) in limit: H_{R,alpha} -> H, as alpha -> 1.
+    co.alpha = 0.99; %alpha \ne 1. The Renyi entropy (H_{R,alpha}) equals the Shannon differential entropy (H) in the limit: H_{R,alpha} -> H, as alpha -> 1.
     co.additive_constant_is_relevant = 0; %1:additive constant is relevant (you can precompute it via 'estimate_HRenyi_constant.m'), 0:not relevant
     
 %initialize the ann wrapper in Octave, if needed:

code/H_I_D_A_C/base_estimators/HRenyi_kNN_S_initialization.m

             %co.k = [1,2,4];%=S: nearest neighbor set
             %co.epsi = 0; %=0: exact kNN; >0: approximate kNN, the true (not squared) distances can not exceed the real distance more than a factor of (1+epsi).
 				
-    co.alpha = 0.99; %The Renyi entropy (H_{R,alpha) equals to the Shannon differential entropy (H) in limit: H_{R,alpha} -> H, as alpha -> 1.
+    co.alpha = 0.99; %alpha \ne 1. The Renyi entropy (H_{R,alpha}) equals the Shannon differential entropy (H) in the limit: H_{R,alpha} -> H, as alpha -> 1.
     co.additive_constant_is_relevant = 0; %1:additive constant is relevant (you can precompute it via 'estimate_HRenyi_constant.m'), 0:not relevant    
     
 %initialize the ann wrapper in Octave, if needed:

code/H_I_D_A_C/base_estimators/HRenyi_kNN_k_initialization.m

             %co.k = 3;%k-nearest neighbors
             %co.epsi = 0; %=0: exact kNN; >0: approximate kNN, the true (not squared) distances can not exceed the real distance more than a factor of (1+epsi).
             
-    co.alpha = 0.95; %The Renyi entropy (H_{R,alpha}) equals to the Shannon differential entropy (H) in limit: H_{R,alpha} -> H, as alpha -> 1.
+    co.alpha = 0.95; %alpha \ne 1. The Renyi entropy (H_{R,alpha}) equals the Shannon differential entropy (H) in the limit: H_{R,alpha} -> H, as alpha -> 1.
     
 %initialize the ann wrapper in Octave, if needed:
     initialize_Octave_ann_wrapper_if_needed(co.kNNmethod);

code/H_I_D_A_C/base_estimators/HRenyi_spacing_E_initialization.m

     co.mult = mult;   
 
 %other fields:
-    co.alpha = 0.99; %The Renyi entropy (H_{R,alpha}) equals to the Shannon differential entropy (H) in limit: H_{R,alpha} -> H, as alpha -> 1.
+    co.alpha = 0.99; %alpha \ne 1. The Renyi entropy (H_{R,alpha}) equals the Shannon differential entropy (H) in the limit: H_{R,alpha} -> H, as alpha -> 1.
             

code/H_I_D_A_C/base_estimators/HRenyi_spacing_V_initialization.m

     co.mult = mult;   
 
 %other fields:
-    co.alpha = 0.99; %The Renyi entropy (H_{R,alpha}) equals to the Shannon differential entropy (H) in limit: H_{R,alpha} -> H, as alpha -> 1.
+    co.alpha = 0.99; %alpha \ne 1. The Renyi entropy (H_{R,alpha}) equals the Shannon differential entropy (H) in the limit: H_{R,alpha} -> H, as alpha -> 1.
             

code/H_I_D_A_C/base_estimators/HRenyi_weightedkNN_initialization.m

     co.mult = mult;
     
 %other fields:
-    co.alpha = 0.95; %The Renyi entropy (H_{R,alpha}) equals to the Shannon differential entropy (H) in limit: H_{R,alpha} -> Shannon=H, as alpha -> 1.
+    co.alpha = 0.95; %alpha \ne 1. The Renyi entropy (H_{R,alpha}) equals the Shannon differential entropy (H) in the limit: H_{R,alpha} -> H, as alpha -> 1.
     %Possibilities for 'co.kNNmethod' (see 'kNN_squared_distances.m'): 
         %I: 'knnFP1': fast pairwise distance computation and C++ partial sort; parameter: co.k.
         %II: 'knnFP2': fast pairwise distance computation; parameter: co.k. 												 		

code/H_I_D_A_C/base_estimators/HTsallis_kNN_k_initialization.m

             %co.k = 3;%k-nearest neighbors
             %co.epsi = 0; %=0: exact kNN; >0: approximate kNN, the true (not squared) distances can not exceed the real distance more than a factor of (1+epsi).
             
-    co.alpha = 0.95; %The Tsallis entropy (H_{T,alpha}) equals to the Shannon differential (H) entropy in limit: H_{T,alpha} -> H, as alpha -> 1.
+    co.alpha = 0.95; %alpha \ne 1. The Tsallis entropy (H_{T,alpha}) equals the Shannon differential entropy (H) in the limit: H_{T,alpha} -> H, as alpha -> 1.
     
 %initialize the ann wrapper in Octave, if needed:
     initialize_Octave_ann_wrapper_if_needed(co.kNNmethod);

code/H_I_D_A_C/meta_estimators/DJensenRenyi_HRenyi_estimation.m

 function [D_JR] = DJensenRenyi_HRenyi_estimation(Y1,Y2,co)
 %Estimates the Jensen-Renyi divergence of Y1 and Y2 using the relation: 
-%D_JR(f_1,f_2) = H_{R,alpha}(w1*y^1+w2*y^2) - [w1*H_{R,alpha}(y^1) + w2*H_{R,alpha}(y^2)], where y^i has density f_i (i=1,2), w1*y^1+w2*y^2 is the mixture distribution of y^1 and y^2 with w1,w2 positive weights, and H_{R,alpha} denotes the Renyi entropy.
+%D_JR(f_1,f_2) = D_{JR,alpha}(f_1,f_2) = H_{R,alpha}(w1*y^1+w2*y^2) - [w1*H_{R,alpha}(y^1) + w2*H_{R,alpha}(y^2)], where y^i has density f_i (i=1,2), w1*y^1+w2*y^2 is the mixture distribution of y^1 and y^2 with w1,w2 positive weights, and H_{R,alpha} denotes the Renyi entropy.
 %
 %Note:
 %   1)We use the naming convention 'D<name>_estimation' to ease embedding new divergence estimation methods.

code/H_I_D_A_C/meta_estimators/DJensenRenyi_HRenyi_initialization.m

 function [co] = DJensenRenyi_HRenyi_initialization(mult)
 %Initialization of the Jensen-Renyi divergence estimator, defined according to the relation:
-%D_JR(f_1,f_2) = H_{R,alpha}(w1*y^1+w2*y^2) - [w1*H_{R,alpha}(y^1) + w2*H_{R,alpha}(y^2)], where y^i has density f_i (i=1,2), w1*y^1+w2*y^2 is the mixture distribution of y^1 and y^2 with w1,w2 positive weights, and H_{R,alpha} denotes the Renyi entropy.
+%D_JR(f_1,f_2) = D_{JR,alpha}(f_1,f_2) = H_{R,alpha}(w1*y^1+w2*y^2) - [w1*H_{R,alpha}(y^1) + w2*H_{R,alpha}(y^2)], where y^i has density f_i (i=1,2), w1*y^1+w2*y^2 is the mixture distribution of y^1 and y^2 with w1,w2 positive weights, and H_{R,alpha} denotes the Renyi entropy.
 %
 %Note:
 %   1)The estimator is treated as a cost object (co).
     co.mult = mult;
     
 %other fields:
-    co.alpha = 0.95; 
+    co.alpha = 0.95; %alpha \ne 1. 
     
     co.w = [1/2,1/2]; %assumption: co.w(i)>0, sum(co.w)=1
     co.member_name = 'Renyi_kNN_k'; %you can change it to any Renyi entropy estimator

code/H_I_D_A_C/meta_estimators/DJensenTsallis_HTsallis_estimation.m

+function [D_JT] = DJensenTsallis_HTsallis_estimation(Y1,Y2,co)
+%Estimates the Jensen-Tsallis divergence of Y1 and Y2 using the relation: 
+%D_JT(f_1,f_2) = D_{JT,alpha}(f_1,f_2) = H_{T,alpha}((y^1+y^2)/2) - [1/2*H_{T,alpha}(y^1) + 1/2*H_{T,alpha}(y^2)], where y^i has density f_i (i=1,2), (y^1+y^2)/2 is the mixture distribution of y^1 and y^2 with 1/2-1/2 weights, and H_{T,alpha} denotes the Tsallis entropy.
+%
+%Note:
+%   1)We use the naming convention 'D<name>_estimation' to ease embedding new divergence estimation methods.
+%   2)This is a meta method: the Tsallis entropy estimator can be arbitrary.
+%
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution.
+%  co: divergence estimator object.
+%
+%REFERENCE:
+%  J. Burbea and C.R. Rao. On the convexity of some divergence measures based on entropy functions. IEEE Transactions on Information Theory, 28:489-495, 1982.
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%co.mult:OK.
+
+%verification:
+    if size(Y1,1)~=size(Y2,1)
+        error('The dimension of the samples in Y1 and Y2 must be equal.');
+    end
+
+w = [1/2,1/2];
+mixtureY = mixture_distribution(Y1,Y2,w);
+D_JT =  H_estimation(mixtureY,co.member_co) - (w(1)*H_estimation(Y1,co.member_co) + w(2)*H_estimation(Y2,co.member_co));

code/H_I_D_A_C/meta_estimators/DJensenTsallis_HTsallis_initialization.m

+function [co] = DJensenTsallis_HTsallis_initialization(mult)
+%Initialization of the Jensen-Tsallis divergence estimator, defined according to the relation:
+%D_JT(f_1,f_2) = D_{JT,alpha}(f_1,f_2) = H_{T,alpha}((y^1+y^2)/2) - [1/2*H_{T,alpha}(y^1) + 1/2*H_{T,alpha}(y^2)], where y^i has density f_i (i=1,2), (y^1+y^2)/2 is the mixture distribution of y^1 and y^2 with 1/2-1/2 weights, and H_{T,alpha} denotes the Tsallis entropy.
+%
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We use the naming convention 'D<name>_initialization' to ease embedding new divergence estimation methods.
+%   3)This is a meta method: the Tsallis entropy estimator can be arbitrary.
+%
+%INPUT:
+%   mult: is the multiplicative constant relevant (needed) in the estimation? '=1' means yes, '=0' means no.
+%OUTPUT:
+%   co: cost object (structure).
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%mandatory fields:
+    co.name = 'JensenTsallis_HTsallis';
+    co.mult = mult;
+    
+%other fields:
+    co.alpha = 0.95; %alpha \ne 1
+    
+    co.member_name = 'Tsallis_kNN_k'; %you can change it to any Tsallis entropy estimator
+    co.member_co = H_initialization(co.member_name,mult);
+    
+    co.member_co.alpha = co.alpha; %automatism for setting the parameters (co.alpha) of member_co (co_2) in a <=2 deep meta construction (co_1 -> co_2); otherwise, please set the parameters (co.alpha) in the member_co-s.

code/H_I_D_A_C/meta_estimators/HTsallis_HRenyi_initialization.m

     co.mult = mult;
     
 %other fields:
-    co.alpha = 0.95; 
+    co.alpha = 0.95; %alpha \ne 1.
 
     co.member_name = 'Renyi_kNN_k'; %you can change it to any Renyi entropy estimator
     co.member_co = H_initialization(co.member_name,mult);

code/H_I_D_A_C/meta_estimators/IRenyi_DRenyi_initialization.m

 %mandatory fields:
     co.name = 'Renyi_DRenyi';
     co.mult = mult;
-    co.alpha = 0.95;
 	
 %other fields:    
+    co.alpha = 0.95; %alpha \ne 1.
+
     co.member_name = 'Renyi_kNN_k'; %you can change it to any Renyi divergence estimator
     co.member_co = D_initialization(co.member_name,mult);
 

code/H_I_D_A_C/meta_estimators/IRenyi_HRenyi_initialization.m

 %mandatory fields:
     co.name = 'Renyi_HRenyi';
     co.mult = mult;
-    co.alpha = 0.99;
     
 %other fields:    
+    co.alpha = 0.99; %alpha \ne 1.
+
     co.member_name = 'Renyi_kNN_k'; %you can change it to any Renyi entropy estimator 
     co.member_co = H_initialization(co.member_name,mult);
 

code/H_I_D_A_C/meta_estimators/ITsallis_DTsallis_initialization.m

 %mandatory fields:
     co.name = 'Tsallis_DTsallis';
     co.mult = mult;
-    co.alpha = 0.99;
 	
 %other fields:    
+    co.alpha = 0.99; %alpha \ne 1.
+
     co.member_name = 'Tsallis_kNN_k'; %you can change it to any Tsallis divergence estimator
     co.member_co = D_initialization(co.member_name,mult);
 

code/H_I_D_A_C/utilities/estimate_Dtemp3.m

+function [Dtemp3] = estimate_Dtemp3(Y1,Y2,co)
+%Estimates Dtemp3 = \int p(u)q^{a-1}(u)du; used in Bregman distance computation.
+%
+%INPUT:
+%   Y1: Y1(:,t) is the t^th sample from the first distribution (Y1~p).
+%   Y2: Y2(:,t) is the t^th sample from the second distribution (Y2~q).
+%  co: cost object (structure).
+%
+%Copyright (C) 2012 Zoltan Szabo ("http://nipg.inf.elte.hu/szzoli", "szzoli (at) cs (dot) elte (dot) hu")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%initialization:
+    [dY1,num_of_samplesY1] = size(Y1);
+    [dY2,num_of_samplesY2] = size(Y2);
+    
+%verification:
+    if dY1~=dY2
+        error('The dimension of the samples in Y1 and Y2 must be equal.');
+    end
+    
+d = dY1; %=dY2
+a = co.alpha;
+k = co.k;
+
+squared_distancesY1Y2 = kNN_squared_distances(Y2,Y1,co,0);
+V = volume_of_the_unit_ball(d);
+Ca = ( gamma(k)/gamma(k+1-a) ); %C^a
+
+Dtemp3 = num_of_samplesY2^(1-a) * Ca * V^(1-a) * mean(squared_distancesY1Y2(co.k,:).^(d*(1-a)/2)); %the /2 in the exponent compensates for the distances being squared
+
+
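
Written out, the last line of this file computes (a restatement of the code in formula form, with N = size(Y1,2), M = size(Y2,2), rho_k(t) the k-th nearest-neighbor distance of Y1(:,t) among the columns of Y2, V = volume_of_the_unit_ball(d) and k = co.k):

    Dtemp3 = M^{1-alpha} * [gamma(k)/gamma(k+1-alpha)] * V^{1-alpha} * (1/N) * sum_{t=1..N} rho_k(t)^{d*(1-alpha)},

i.e. the Leonenko-style kNN plug-in form also used by the other kNN-based estimators referenced in this commit.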