Commits

Zoltán Szabó committed 1c1d0a1

MMD (maximum mean discrepancy) estimation based on U- and V-statistics: incomplete Cholesky decomposition based accelerations added; see 'DMMD_Ustat_iChol_initialization.m', 'DMMD_Ustat_iChol_estimation.m', 'DMMD_Vstat_iChol_initialization.m', 'DMMD_Vstat_iChol_estimation.m'. 'quick_test_Dequality.m': updated with the new estimators. Refactorization, documentation upgrade: performed (see CHANGELOG.txt).

Files changed (46)

+v0.49 (Dec 1, 2013):
+
+New:
+====
+-MMD (maximum mean discrepancy) estimation based on U- and V-statistics: incomplete Cholesky decomposition based accelerations added; see 'DMMD_Ustat_iChol_initialization.m', 'DMMD_Ustat_iChol_estimation.m', 'DMMD_Vstat_iChol_initialization.m', 'DMMD_Vstat_iChol_estimation.m'. 'quick_test_Dequality.m': updated with the new estimators.
+
+Refactorization, documentation upgrade:
+=======================================
+-Analytical values in the quick tests: relocated to separate files; see the 'analytical_values' directory.
+-Documentation: 
+--Hyperrefs: activated to improve navigation in the doc; direct jumps to the definitions of the information theoretical quantities: added in Tables 3-12 (first columns) and in Fig. 1.
+--Fig. 1: the "I_{dCov} <- I_{HSIC}" and the "I_{\chi^2} <- D_{\chi^2}" arrows pointed in the opposite direction; typo corrected.
+--The embedded NCut package has a new homepage; the documentation has been updated accordingly.
+
 v0.48 (Nov 11, 2013):
 -Sharma-Mittal entropy estimation based on (i) k-nearest neighbors (S={k}), and (ii) maximum likelihood estimation (MLE) + analytical value in the exponential family: added; see 'HSharmaM_kNN_k_initialization.m', 'HSharmaM_kNN_k_estimation.m', 'HSharmaM_expF_initialization.m', 'HSharmaM_expF_estimation.m'.
 -Quick test for the Sharma-Mittal entropy estimators: added; see 'quick_test_HSharmaM.m'.
 
 **Download** the latest release: 
 
-- code: [zip](https://bitbucket.org/szzoli/ite/downloads/ITE-0.48_code.zip), [tar.bz2](https://bitbucket.org/szzoli/ite/downloads/ITE-0.48_code.tar.bz2), 
-- [documentation (pdf)](https://bitbucket.org/szzoli/ite/downloads/ITE-0.48_documentation.pdf).
+- code: [zip](https://bitbucket.org/szzoli/ite/downloads/ITE-0.49_code.zip), [tar.bz2](https://bitbucket.org/szzoli/ite/downloads/ITE-0.49_code.tar.bz2), 
+- [documentation (pdf)](https://bitbucket.org/szzoli/ite/downloads/ITE-0.49_documentation.pdf).
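
For orientation, here is a minimal usage sketch of the new accelerated MMD estimators via the toolbox's estimator-dispatch convention. The data, shift, and field overrides are illustrative, and `D_estimation` is assumed to be the dispatcher matching `D_initialization` (the constructor used in the quick tests below):

```matlab
%Minimal usage sketch (illustrative data and parameters; D_estimation is assumed
%to be the dispatcher matching D_initialization):
Y1 = randn(3,2000);          %samples from the first distribution (one column per sample)
Y2 = randn(3,3000) + 0.5;    %samples from the second distribution (sample sizes may differ)

co = D_initialization('MMD_Ustat_iChol',1,{'sigma',0.5,'eta',1e-2}); %override the defaults
D  = D_estimation(Y1,Y2,co); %MMD estimate (U-statistic, incomplete Cholesky)
```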
 
 

code/estimators/base_estimators/DMMD_Ustat_iChol_estimation.m

+function [D] = DMMD_Ustat_iChol_estimation(Y1,Y2,co)
+%function [D] = DMMD_Ustat_iChol_estimation(Y1,Y2,co)
+%Estimates divergence (D) of Y1 and Y2 using the MMD (maximum mean discrepancy) method, applying U-statistics and incomplete Cholesky approximation. 
+%
+%We use the naming convention 'D<name>_estimation' to ease embedding new divergence estimation methods.
+%
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
+%REFERENCE: 
+%   Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research 13 (2012) 723-773.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%co.mult:OK. The information theoretical quantity of interest can be (and is!) estimated exactly [co.mult=1]; the computational complexity of the estimation is essentially the same as that of the 'up to multiplicative constant' case [co.mult=0].
+
+[dY1,num_of_samplesY1] = size(Y1);
+[dY2,num_of_samplesY2] = size(Y2);
+
+%verification:
+    if dY1~=dY2
+        error('The dimension of the samples in Y1 and Y2 must be equal.');
+    end
+
+switch co.kernel
+    case 'RBF'
+       %incomplete Cholesky decomposition => L:
+          num_of_samples = num_of_samplesY1 + num_of_samplesY2;
+          [L,p] = chol_gauss([Y1,Y2],co.sigma,num_of_samples*co.eta);
+          [temp,p] = sort(p); %p:=inverse of p
+          L = L(p,:);
+          %num_of_samples, size(L,2),
+    
+       %L->D:    
+          L1 = L(1:num_of_samplesY1,:);
+          L2 = L(num_of_samplesY1+1:end,:);
+          e1L1 = sum(L1,1);%row vector
+          e2L2 = sum(L2,1);%row vector    
+    
+          %U-statistic: drop the diagonal (t1=t2) kernel terms and normalize by n*(n-1) => unbiased MMD^2 estimate:
+          term1 = (e1L1 * e1L1.' - sum(sum(L1.^2))) / (num_of_samplesY1*(num_of_samplesY1-1));
+          term2 = (e2L2 * e2L2.' - sum(sum(L2.^2))) / (num_of_samplesY2*(num_of_samplesY2-1));
+          term3 = -2 * e1L1 * e2L2.' / (num_of_samplesY1*num_of_samplesY2);
+    
+       D = sqrt(abs(term1+term2+term3)); %abs(): to avoid 'sqrt(negative)' values
+   otherwise
+       error('Kernel=?');
+end
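
The point of the incomplete Cholesky step: `chol_gauss` returns a tall-thin L with the pooled Gram matrix K ≈ L*L.', so the pairwise kernel sums above collapse to inner products of r-dimensional row sums instead of an n x n Gram matrix. A small self-check sketch; the kernel scaling inside `chol_gauss`, k(x,y) = exp(-||x-y||^2/(2*sigma^2)), is an assumption here:

```matlab
%Low-rank self-check: L*L.' should reproduce the Gaussian Gram matrix
%(the kernel scaling below is an assumption about chol_gauss):
X = randn(2,500); sigma = 1; eta = 1e-6;
[L,p] = chol_gauss(X,sigma,size(X,2)*eta);
[temp,p] = sort(p); L = L(p,:);  %undo the pivoting, as in the estimators above

D2 = bsxfun(@plus,sum(X.^2,1).',sum(X.^2,1)) - 2*(X.'*X); %pairwise squared distances
K = exp(-D2/(2*sigma^2));        %exact Gram matrix
max(max(abs(K - L*L.')))         %should be small (the decomposition stops once the residual trace drops below size(X,2)*eta)
```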

code/estimators/base_estimators/DMMD_Ustat_iChol_initialization.m

+function [co] = DMMD_Ustat_iChol_initialization(mult,post_init)
+%function [co] = DMMD_Ustat_iChol_initialization(mult)
+%function [co] = DMMD_Ustat_iChol_initialization(mult,post_init)
+%Initialization of the MMD (maximum mean discrepancy) divergence estimator, applying U-statistics and incomplete Cholesky approximation.
+%
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We use the naming convention 'D<name>_initialization' to ease embedding new divergence estimation methods.
+%
+%INPUT:
+%   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' means no.
+%   post_init: {field_name1,field_value1,field_name2,field_value2,...}; cell array containing the names and the values of the cost object fields that are to be used
+%   (instead of their default values). For further details, see 'post_initialization.m'.
+%OUTPUT:
+%   co: cost object (structure).
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%mandatory fields (following the template structure of the estimators to make uniform usage of the estimators possible):
+    co.name = 'MMD_Ustat_iChol';
+    co.mult = mult;
+    
+%other fields:   
+      co.kernel = 'RBF';
+      co.sigma = 0.1; %std
+      co.eta = 1e-1; %precision in the incomplete Cholesky decomposition
+    
+%post initialization (put it _before_ initialization of the members in case of a meta estimator):    
+    if nargin==2 %there are given (name,value) cost object fields
+        co = post_initialization(co,post_init);
+    end            
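
Example of the post_init mechanism documented above: any default field can be overridden at construction time via a (field_name,field_value) cell array (the values are illustrative):

```matlab
%Override the default bandwidth and Cholesky precision (illustrative values):
co = DMMD_Ustat_iChol_initialization(1,{'sigma',0.5,'eta',1e-2});
%resulting fields: co.kernel = 'RBF', co.sigma = 0.5, co.eta = 0.01
```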

code/estimators/base_estimators/DMMD_Vstat_iChol_estimation.m

+function [D] = DMMD_Vstat_iChol_estimation(Y1,Y2,co)
+%function [D] = DMMD_Vstat_iChol_estimation(Y1,Y2,co)
+%Estimates divergence (D) of Y1 and Y2 using the MMD (maximum mean discrepancy) method, applying V-statistics and incomplete Cholesky approximation. 
+%
+%We use the naming convention 'D<name>_estimation' to ease embedding new divergence estimation methods.
+%
+%INPUT:
+%  Y1: Y1(:,t) is the t^th sample from the first distribution.
+%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
+%  co: divergence estimator object.
+%
+%REFERENCE: 
+%   Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research 13 (2012) 723-773.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%co.mult:OK. The information theoretical quantity of interest can be (and is!) estimated exactly [co.mult=1]; the computational complexity of the estimation is essentially the same as that of the 'up to multiplicative constant' case [co.mult=0].
+
+[dY1,num_of_samplesY1] = size(Y1);
+[dY2,num_of_samplesY2] = size(Y2);
+
+%verification:
+    if dY1~=dY2
+        error('The dimension of the samples in Y1 and Y2 must be equal.');
+    end
+
+switch co.kernel
+    case 'RBF'
+       %incomplete Cholesky decomposition => L:
+          num_of_samples = num_of_samplesY1 + num_of_samplesY2;
+          [L,p] = chol_gauss([Y1,Y2],co.sigma,num_of_samples*co.eta);
+          [temp,p] = sort(p); %p:=inverse of p
+          L = L(p,:);
+          %num_of_samples, size(L,2),
+    
+       %L->D:    
+          e1L1 = sum(L(1:num_of_samplesY1,:),1);%row vector
+          e2L2 = sum(L(num_of_samplesY1+1:end,:),1);%row vector    
+    
+          %V-statistic: keep all (t1,t2) pairs, including t1=t2, and normalize by n^2 => biased but nonnegative MMD^2 estimate:
+          term1 = e1L1 * e1L1.' / (num_of_samplesY1^2);
+          term2 = e2L2 * e2L2.' / (num_of_samplesY2^2);
+          term3 = -2 * e1L1 * e2L2.' / (num_of_samplesY1*num_of_samplesY2);
+    
+        D = sqrt(abs(term1+term2+term3)); %abs(): to avoid 'sqrt(negative)' values
+   otherwise
+        error('Kernel=?');
+end
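
The only difference from the U-statistic variant is the treatment of the diagonal (t1 = t2) kernel terms: the V-statistic keeps them and normalizes by n^2 (biased, but the MMD^2 estimate is nonnegative), while the U-statistic drops them and normalizes by n(n-1) (unbiased, but the estimate can dip below zero, hence the abs() guard). A side-by-side sketch, under the same dispatcher assumption as above:

```matlab
%U- vs V-statistic on identically distributed samples (illustrative):
Y1 = randn(2,1000); Y2 = randn(2,1000); %equal distributions => MMD ~ 0
co_U = D_initialization('MMD_Ustat_iChol',1);
co_V = D_initialization('MMD_Vstat_iChol',1);
[D_estimation(Y1,Y2,co_U), D_estimation(Y1,Y2,co_V)] %both should be close to 0
```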

code/estimators/base_estimators/DMMD_Vstat_iChol_initialization.m

+function [co] = DMMD_Vstat_iChol_initialization(mult,post_init)
+%function [co] = DMMD_Vstat_iChol_initialization(mult)
+%function [co] = DMMD_Vstat_iChol_initialization(mult,post_init)
+%Initialization of the MMD (maximum mean discrepancy) divergence estimator, applying V-statistics and incomplete Cholesky approximation.
+%
+%Note:
+%   1)The estimator is treated as a cost object (co).
+%   2)We use the naming convention 'D<name>_initialization' to ease embedding new divergence estimation methods.
+%
+%INPUT:
+%   mult: is a multiplicative constant relevant (needed) in the estimation; '=1' means yes, '=0' means no.
+%   post_init: {field_name1,field_value1,field_name2,field_value2,...}; cell array containing the names and the values of the cost object fields that are to be used
+%   (instead of their default values). For further details, see 'post_initialization.m'.
+%OUTPUT:
+%   co: cost object (structure).
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+%mandatory fields (following the template structure of the estimators to make uniform usage of the estimators possible):
+    co.name = 'MMD_Vstat_iChol';
+    co.mult = mult;
+    
+%other fields:   
+      co.kernel = 'RBF';
+      co.sigma = 0.1; %std
+      co.eta = 1e-1; %precision in the incomplete Cholesky decomposition
+    
+%post initialization (put it _before_ initialization of the members in case of a meta estimator):    
+    if nargin==2 %there are given (name,value) cost object fields
+        co = post_initialization(co,post_init);
+    end            

code/estimators/meta_estimators/DEnergyDist_DMMD_estimation.m

 %
 %REFERENCE:
 %   Dino Sejdinovic, Arthur Gretton, Bharath Sriperumbudur, and Kenji Fukumizu. Hypothesis testing using pairwise distances and associated kernels. International Conference on Machine Learning (ICML), pages 1111-1118, 2012. (semimetric space; energy distance <=> MMD, with a suitable kernel)
-%   Russell Lyons. Distance covariance in metric spaces. Annals of Probability, 2012. (To appear. http://php.indiana.edu/~rdlyons/pdf/dcov.pdf; http://arxiv.org/abs/1106.5758; energy distance, metric space of negative type; pre-equivalence to MMD).
+%   Russell Lyons. Distance covariance in metric spaces. Annals of Probability, 41:3284-3305, 2013. (energy distance, metric space of negative type; pre-equivalence to MMD).
 %   Gabor J. Szekely and Maria L. Rizzo. A new test for multivariate normality. Journal of Multivariate Analysis, 93:58-80, 2005. (energy distance; metric space of negative type)
 %   Gabor J. Szekely and Maria L. Rizzo. Testing for equal distributions in high dimension. InterStat, 5, 2004. (energy distance; R^d)
 

code/estimators/quick_tests/analytical_values/analytical_value_CCE.m

+function [CE] = analytical_value_CCE(distr1,distr2,par1,par2)
+%function [CE] = analytical_value_CCE(distr1,distr2,par1,par2)
+%Analytical value (CCE) of the cross-entropy for the given distributions. See also 'quick_test_CCE.m'.
+%
+%INPUT:
+%   distr1 : name of distribution-1; 'normal'.
+%   distr2 : name of distribution-2; 'normal'.
+%   par1   : parameters of distribution-1 (structure),
+%            distr1 = 'normal': par1.e = expectation1, par1.cov = covariance matrix1.
+%   par2   : parameters of distribution-2 (structure),
+%            distr2 = 'normal': par2.e = expectation2, par2.cov = covariance matrix2.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal')
+    %covariance matrices, expectations:
+        cov1 = par1.cov;
+        cov2 = par2.cov;
+        e1 = par1.e;
+        e2 = par2.e;
+    d = size(cov1,1);
+            
+    invC2 = inv(cov2);
+    diffe = e1 - e2;
+    CE = 1/2 * ( d*log(2*pi) + log(det(cov2)) + trace(invC2*cov1) + diffe.'*invC2*diffe );
+else
+    error('Distribution=?');                 
+end        
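
The closed form above is the Gaussian cross-entropy CE(p1,p2) = E_{x~p1}[-log p2(x)]; a quick Monte Carlo sanity check (illustrative, self-contained):

```matlab
%Monte Carlo check of the closed-form Gaussian cross-entropy (illustrative):
d = 2; e1 = randn(d,1); e2 = randn(d,1);
A1 = randn(d); cov1 = A1*A1.'; A2 = randn(d); cov2 = A2*A2.';
par1.e = e1; par1.cov = cov1; par2.e = e2; par2.cov = cov2;

X = A1*randn(d,1e6) + repmat(e1,1,1e6); %X ~ N(e1,cov1)
diffX = bsxfun(@minus,X,e2);
logp2 = -1/2*( d*log(2*pi) + log(det(cov2)) + sum(diffX.*(cov2\diffX),1) ); %log N(e2,cov2) at the samples
[mean(-logp2), analytical_value_CCE('normal','normal',par1,par2)] %should nearly agree
```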

code/estimators/quick_tests/analytical_values/analytical_value_DBregman.m

+function [D] = analytical_value_DBregman(distr1,distr2,alpha_D,par1,par2)
+%function [D] = analytical_value_DBregman(distr1,distr2,alpha_D,par1,par2)
+%Analytical value (D) of the Bregman divergence for the given distributions. See also 'quick_test_DBregman.m'.
+%
+%INPUT:
+%   distr1 : name of distribution-1; 'uniform'.
+%   distr2 : name of distribution-2; 'uniform'.
+%   alpha_D: parameter of the Bregman divergence.
+%   par1   : parameters of distribution-1 (structure),
+%            distr1 = 'uniform': par1.a = a in U[0,a].
+%   par2   : parameters of distribution-2 (structure),
+%            distr2 = 'uniform': par2.a = b in U[0,b].
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'uniform') && strcmp(distr2,'uniform')
+    a = par1.a;
+    b = par2.a;
+    D = -1/(alpha_D-1)*prod(b)^(1-alpha_D) + 1/(alpha_D-1) * prod(a)^(1-alpha_D);
+else
+    error('Distribution=?');                 
+end

code/estimators/quick_tests/analytical_values/analytical_value_DChiSquare.m

+function [D] = analytical_value_DChiSquare(distr1,distr2,par1,par2)
+%function [D] = analytical_value_DChiSquare(distr1,distr2,par1,par2)
+%Analytical value (D) of the chi^2 divergence for the given distributions. See also 'quick_test_DChiSquare.m'.
+%
+%INPUT:
+%   distr1 : name of distribution-1; 'uniform'.
+%   distr2 : name of distribution-2; 'uniform'.
+%   par1   : parameters of distribution-1 (structure),
+%            distr1 = 'uniform': par1.a = a in U[0,a].
+%   par2   : parameters of distribution-2 (structure),
+%            distr2 = 'uniform': par2.a = b in U[0,b].
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'uniform') && strcmp(distr2,'uniform')
+    a = par1.a;
+    b = par2.a;
+    D = prod(b)/prod(a) - 1;
+else
+    error('Distribution=?'); 
+end
+
+

code/estimators/quick_tests/analytical_values/analytical_value_DJensen_Renyi.m

+function D = analytical_value_DJensen_Renyi(distr1,distr2,w_D,par1,par2)
+%function D = analytical_value_DJensen_Renyi(distr1,distr2,w_D,par1,par2)
+%Analytical value (D) of the Jensen-Renyi divergence for the given distributions. See also 'quick_test_DJensen_Renyi.m'.
+%
+%INPUT:
+%   distr1 : name of distribution-1; 'normal'.
+%   distr2 : name of distribution-2; 'normal'.
+%   w_D    : weight used in the Jensen-Renyi divergence.
+%   par1   : parameters of distribution-1 (structure),
+%            distr1 = 'normal': par1.m = mean1, par1.s = covariance matrix1 (s1^2xI).
+%   par2   : parameters of distribution-2 (structure),
+%            distr2 = 'normal': par2.m = mean2, par2.s = covariance matrix2 (s2^2xI).
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.
+    m1 = par1.m; s1 = par1.s;
+    m2 = par2.m; s2 = par2.s;
+            
+    ms = [m1,m2];
+    ss = [s1,s2];
+    term1 = compute_H2(w_D,ms,ss);
+    term2 = w_D(1) * compute_H2(1,m1,s1) + w_D(2) * compute_H2(1,m2,s2);
+    D  = term1 - term2; %H2(\sum_i w_i Y_i) - \sum_i w_i H2(Y_i), where H2 is the quadratic Renyi entropy
+else
+    error('Distribution=?'); 
+end

code/estimators/quick_tests/analytical_values/analytical_value_DKL.m

+function [D] = analytical_value_DKL(distr1,distr2,par1,par2)
+%function [D] = analytical_value_DKL(distr1,distr2,par1,par2)
+%Analytical value (D) of the Kullback-Leibler divergence for the given distributions. See also 'quick_test_DKL.m'.
+%
+%INPUT:
+%   distr1 : name of distribution-1; 'normal'.
+%   distr2 : name of distribution-2; 'normal'.
+%   par1   : parameters of distribution-1 (structure),
+%            distr1 = 'normal': par1.e = expectation1, par1.cov = covariance matrix1.
+%   par2   : parameters of distribution-2 (structure),
+%            distr2 = 'normal': par2.e = expectation2, par2.cov = covariance matrix2.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal')
+    %covariance matrices, expectations:
+        e1 = par1.e; cov1 = par1.cov;
+        e2 = par2.e; cov2 = par2.cov;
+    d = length(e1);
+                
+    diffe = e1 - e2;
+    invSigma2 = inv(cov2);
+    D = 1/2 * ( log(det(cov2)/det(cov1)) + trace(invSigma2*cov1)  + diffe.' * invSigma2 * diffe - d);
+else
+    error('Distribution=?'); 
+end
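
The expression is consistent with the identity D_KL(p1||p2) = CE(p1,p2) - H(p1), which can be cross-checked directly with the other analytical-value helpers added in this commit (sketch):

```matlab
%Consistency check: KL = cross-entropy minus Shannon entropy (illustrative):
d = 3; par1.e = randn(d,1); A = randn(d); par1.cov = A*A.';
par2.e = randn(d,1); B = randn(d); par2.cov = B*B.';
parH.e = par1.e; parH.cov_mtx = par1.cov; %input convention of analytical_value_HShannon

D  = analytical_value_DKL('normal','normal',par1,par2);
CE = analytical_value_CCE('normal','normal',par1,par2);
H  = analytical_value_HShannon('normal',parH);
abs(D - (CE - H)) %should be ~0 (up to rounding)
```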

code/estimators/quick_tests/analytical_values/analytical_value_DL2.m

+function [D] = analytical_value_DL2(distr1,distr2,par1,par2)
+%function [D] = analytical_value_DL2(distr1,distr2,par1,par2)
+%Analytical value (D) of the L2-divergence for the given distributions. See also 'quick_test_DL2.m'.
+%
+%INPUT:
+%   distr1 : name of distribution-1; 'uniform'.
+%   distr2 : name of distribution-2; 'uniform'.
+%   par1   : parameters of distribution-1 (structure),
+%            distr1 = 'uniform': par1.a = a in U[0,a].
+%   par2   : parameters of distribution-2 (structure),
+%            distr2 = 'uniform': par2.a = b in U[0,b].
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'uniform') && strcmp(distr2,'uniform')
+    a = par1.a;
+    b = par2.a;
+    
+    D = sqrt(1/prod(a) - 1/prod(b)); %a<=b (coordinate-wise) => the argument is nonnegative
+else
+    error('Distribution=?');                 
+end
+
+
+

code/estimators/quick_tests/analytical_values/analytical_value_DRenyi.m

+function [D] = analytical_value_DRenyi(distr1,distr2,alpha_D,par1,par2)
+%function [D] = analytical_value_DRenyi(distr1,distr2,alpha_D,par1,par2)
+%Analytical value (D) of the Renyi divergence for the given distributions. See also 'quick_test_DRenyi.m'.
+%
+%INPUT:
+%   distr1 : name of distribution-1; 'normal'.
+%   distr2 : name of distribution-2; 'normal'.
+%  alpha_D : parameter of Renyi divergence.
+%   par1   : parameters of distribution-1 (structure),
+%            distr1 = 'normal': par1.e = expectation1, par1.cov = covariance matrix1.
+%   par2   : parameters of distribution-2 (structure),
+%            distr2 = 'normal': par2.e = expectation2, par2.cov = covariance matrix2.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal')
+    %covariance matrices, expectations:
+        e1 = par1.e; cov1 = par1.cov;
+        e2 = par2.e; cov2 = par2.cov;
+        
+    mixcov = alpha_D * cov2 + (1-alpha_D) * cov1;
+    diffe = e1 - e2;
+    %Renyi divergence (Manuel Gil. On Renyi Divergence Measures for Continuous Alphabet Sources. Phd Thesis, Queen’s University, 2011):
+        D = alpha_D * ( 1/2 *diffe.' * inv(mixcov) * diffe  - 1 / (2*alpha_D*(alpha_D-1)) * log( abs(det(mixcov)) / (det(cov1)^(1-alpha_D) * det(cov2)^(alpha_D)) )); 
+    %Kullback-Leibler divergence (verification, in the alpha_D->1 limit: KL div. = Renyi div.):
+        %d = size(cov1,1);
+        %D = 1/2 * ( log(det(cov2)/det(cov1)) + trace(inv(cov2)*cov1)  + diffe.' * inv(cov2) * diffe - d);
+else
+    error('Distribution=?'); 
+end
+
+
+
+
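
The commented block above is the alpha_D -> 1 limit; the two closed forms can be cross-checked numerically via the KL helper (sketch):

```matlab
%alpha_D -> 1: the Renyi divergence recovers the KL divergence (illustrative):
d = 2; par1.e = randn(d,1); A = randn(d); par1.cov = A*A.';
par2.e = randn(d,1); B = randn(d); par2.cov = B*B.';
[analytical_value_DRenyi('normal','normal',1-1e-5,par1,par2), ...
 analytical_value_DKL('normal','normal',par1,par2)] %should nearly agree
```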

code/estimators/quick_tests/analytical_values/analytical_value_HPhi.m

+function [H] = analytical_value_HPhi(distr,Hpar,par)
+%function [H] = analytical_value_HPhi(distr,Hpar,par)
+%Analytical value (H) of the Phi-entropy for the given distribution. See also 'quick_test_HPhi.m'.
+%
+%INPUT:
+%   distr  : name of the distribution; 'uniform'.
+%   par    : parameters of the distribution (structure),
+%            distr = 'uniform': par.a, par.b <- U[a,b].
+%   Hpar   : parameter of the Phi-entropy. 
+%            In case of Phi = @(t)(t.^c_H): Hpar.c_H.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+switch distr
+    case 'uniform'
+           a = par.a; b = par.b;        
+           c_H = Hpar.c_H;
+           H = 1 / (b-a)^c_H; 
+    otherwise
+        error('Distribution=?');                 
+end

code/estimators/quick_tests/analytical_values/analytical_value_HRenyi.m

+function [H] = analytical_value_HRenyi(distr,alpha_H,par)
+%function [H] = analytical_value_HRenyi(distr,alpha_H,par)
+%Analytical value (H) of the Renyi entropy for the given distribution. See also 'quick_test_HRenyi.m'.
+%
+%INPUT:
+%   distr  : name of the distribution; 'uniform' or 'normal'.
+%   alpha_H: parameter of the Renyi entropy.
+%   par    : parameters of the distribution (structure),
+%            distr = 'uniform': par.a, par.b, par.A <- AxU[a,b],
+%            distr = 'normal': par.cov_mtx = covariance matrix.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+switch distr
+    case 'uniform' %we also applied the transformation rule of Renyi entropy
+        a = par.a; b = par.b; A = par.A;
+        H = log(prod(b-a))  + log(abs(det(A)));
+    case 'normal' %Kai-Sheng Song. Renyi information, loglikelihood and an intrinsic distribution measure. Journal of Statistical Planning and Inference 93 (2001) 51-69.
+        cov_mtx = par.cov_mtx;
+        d = size(cov_mtx,1);
+        
+        H = log( (2*pi)^(d/2) * sqrt(abs(det(cov_mtx))) ) - d * log(alpha_H) / 2 / (1-alpha_H);
+    otherwise
+        error('Distribution=?');                 
+end            
+

code/estimators/quick_tests/analytical_values/analytical_value_HShannon.m

+function [H] = analytical_value_HShannon(distr,par)
+%function [H] = analytical_value_HShannon(distr,par)
+%Analytical value (H) of the Shannon entropy for the given distribution. See also 'quick_test_HShannon.m'.
+%
+%INPUT:
+%   distr  : name of the distribution; 'uniform' or 'normal'.
+%   par    : parameters of the distribution (structure),
+%            distr = 'uniform': par.a, par.b, par.A <- AxU[a,b],
+%            distr = 'normal': par.cov_mtx = covariance matrix.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+switch distr 
+    case 'uniform' %we also applied the transformation rule of Shannon entropy
+        a = par.a; b = par.b; A = par.A;
+        H = log(prod(b-a)) + log(abs(det(A))); 
+    case 'normal'   
+        cov_mtx = par.cov_mtx;
+        d = size(cov_mtx,1);
+        
+        H =  1/2 * log( (2*pi*exp(1))^d * det(cov_mtx) ); %equivalently: H = 1/2 * log(det(cov_mtx)) + d/2*log(2*pi) + d/2
+    otherwise
+        error('Distribution=?');
+end  
+        

code/estimators/quick_tests/analytical_values/analytical_value_HSharmaM.m

 function [H] = analytical_value_HSharmaM(distr,alpha_H,beta_H,par)
 %function [H] = analytical_value_HSharmaM(distr,alpha_H,beta_H,par)
-%Analytical value (H) of the Sharma-Mittal entropy for the given distribution.
+%Analytical value (H) of the Sharma-Mittal entropy for the given distribution. See also 'quick_test_HSharmaM.m'.
 %
 %INPUT:
 %   distr  : name of the distribution; 'normal'.
 %You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
 
 switch distr
-    case 'normal' %[Frank Nielsen and Richard Nock. A closed-form expression for the Sharma-Mittal entropy of exponential families. Journal of Physics A: Mathematical and Theoretical, 45:032003, 2012.]
+    case 'normal' %Frank Nielsen and Richard Nock. A closed-form expression for the Sharma-Mittal entropy of exponential families. Journal of Physics A: Mathematical and Theoretical, 45:032003, 2012.
         cov_mtx = par.cov_mtx;
         d = size(cov_mtx,1); %=size(cov_mtx,2)
         H = ( ( (2*pi)^(d/2) * sqrt(abs(det(cov_mtx))) )^(1-beta_H) / alpha_H^( d*(1-beta_H) / (2*(1-alpha_H)) )  - 1 ) / (1-beta_H);

code/estimators/quick_tests/analytical_values/analytical_value_HTsallis.m

+function [H] = analytical_value_HTsallis(distr,alpha_H,par)
+%function [H] = analytical_value_HTsallis(distr,alpha_H,par)
+%Analytical value (H) of the Tsallis entropy for the given distribution. See also 'quick_test_HTsallis.m'.
+%
+%INPUT:
+%   distr  : name of the distribution; 'uniform' or 'normal'.
+%   alpha_H: parameter of the Tsallis entropy.
+%   par    : parameters of the distribution (structure),
+%            distr = 'uniform': par.a, par.b, par.A <- AxU[a,b],
+%            distr = 'normal': par.cov_mtx = covariance matrix.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+   
+switch distr
+    case 'uniform'
+        a = par.a; b = par.b; A = par.A;
+        
+        H = log(prod(b-a))  + log(abs(det(A))); %Renyi entropy; we also applied the transformation rule of Renyi entropy
+        H = ( exp((1-alpha_H)*H) - 1 ) / (1-alpha_H); %Renyi entropy -> Tsallis entropy
+        %II (assumption: A=I):
+           %H = (prod(b-a)^(1-alpha_H) - 1) / (1-alpha_H);
+    case 'normal'
+        cov_mtx = par.cov_mtx;
+        d = size(cov_mtx,1);
+        
+        %I:
+            H = log( (2*pi)^(d/2) * sqrt(abs(det(cov_mtx))) ) - d * log(alpha_H) / 2 / (1-alpha_H); %Renyi entropy: Kai-Sheng Song. Renyi information, loglikelihood and an intrinsic distribution measure. Journal of Statistical Planning and Inference 93 (2001) 51-69.
+            H = ( exp((1-alpha_H)*H) - 1 ) / (1-alpha_H); %Renyi entropy -> Tsallis entropy
+        %II:
+            %H = ( ( (2*pi)^(d/2) * sqrt(abs(det(cov_mtx))) )^(1-alpha_H)  / (alpha_H^(d/2)) - 1 ) / (1-alpha_H);
+    otherwise
+        error('Distribution=?');                 
+end            
+        
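
Both branches use the same Renyi -> Tsallis conversion, H_T = (exp((1-alpha)*H_R) - 1)/(1-alpha); it can be checked against the direct closed form II (sketch):

```matlab
%Renyi -> Tsallis conversion vs the direct closed form II (illustrative):
alpha_H = 0.7; d = 2; A = randn(d); par.cov_mtx = A*A.';
HR  = analytical_value_HRenyi('normal',alpha_H,par);
HT1 = ( exp((1-alpha_H)*HR) - 1 ) / (1-alpha_H);
HT2 = ( ( (2*pi)^(d/2) * sqrt(abs(det(par.cov_mtx))) )^(1-alpha_H) / alpha_H^(d/2) - 1 ) / (1-alpha_H);
abs(HT1 - HT2) %should be ~0
```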

code/estimators/quick_tests/analytical_values/analytical_value_IRenyi.m

+function [I] = analytical_value_IRenyi(distr,alpha_I,par)
+%function [I] = analytical_value_IRenyi(distr,alpha_I,par)
+%Analytical value (I) of the Renyi mutual information for the given distribution. See also 'quick_test_IRenyi.m'.
+%
+%INPUT:
+%   distr  : name of the distribution; 'normal'.
+%   alpha_I: parameter of the Renyi mutual information.
+%   par    : parameters of the distribution (structure),
+%            distr = 'normal': par.cov_mtx = covariance matrix.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+    
+switch distr
+    case 'normal'
+        cov_mtx = par.cov_mtx;
+        t1 = -alpha_I / 2 * log(det(cov_mtx));
+        t2 = -(1-alpha_I) / 2 * log(prod(diag(cov_mtx)));
+        t3 = log( det( alpha_I*inv(cov_mtx) + (1-alpha_I)*diag(1./diag(cov_mtx)) ) ) / 2;
+        I = 1 / (alpha_I-1) * (t1+t2-t3);            
+    otherwise
+        error('Distribution=?');            
+end
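
As alpha_I -> 1 the formula reduces to the Shannon mutual information of a Gaussian, -1/2*log(det(C)/prod(diag(C))); a quick numerical check (sketch):

```matlab
%alpha_I -> 1: Renyi mutual information vs the Shannon MI of a Gaussian (illustrative):
d = 3; A = randn(d); C = A*A.'; par.cov_mtx = C;
I1 = analytical_value_IRenyi('normal',1-1e-5,par);
I2 = -1/2 * log( det(C) / prod(diag(C)) ); %Shannon mutual information
[I1, I2] %should nearly agree
```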

code/estimators/quick_tests/analytical_values/analytical_value_KEJR1.m

+function [K] = analytical_value_KEJR1(distr1,distr2,u_K,par1,par2)
+%function [K] = analytical_value_KEJR1(distr1,distr2,u_K,par1,par2)
+%Analytical value (K) of the Jensen-Renyi kernel-1 for the given distributions. See also 'quick_test_KEJR1.m'.
+%
+%INPUT:
+%   distr1 : name of the distribution-1; 'normal'.
+%   distr2 : name of the distribution-2; 'normal'.
+%   u_K    : parameter of the Jensen-Renyi kernel-1 (alpha_K = 2: fixed).
+%   par1   : parameters of the distribution (structure),
+%            distr1 = 'normal': par1.m, par1.s <- N(m1,s1^2xI).
+%   par2   : parameters of the distribution (structure),
+%            distr2 = 'normal': par2.m, par2.s <- N(m2,s2^2xI).
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.
+    m1 = par1.m; s1 = par1.s;
+    m2 = par2.m; s2 = par2.s;
+                
+    ms = [m1,m2];
+    ss = [s1,s2];
+    H = compute_H2([1/2,1/2],ms,ss);%quadratic Renyi entropy
+    K = exp(-u_K*H);
+else
+    error('Distribution=?');                 
+end
+            

code/estimators/quick_tests/analytical_values/analytical_value_KEJR2.m

+function [K] = analytical_value_KEJR2(distr1,distr2,u_K,par1,par2)
+%function [K] = analytical_value_KEJR2(distr1,distr2,u_K,par1,par2)
+%Analytical value (K) of the Jensen-Renyi kernel-2 for the given distributions. See also 'quick_test_KEJR2.m'.
+%
+%INPUT:
+%   distr1 : name of the distribution-1; 'normal'.
+%   distr2 : name of the distribution-2; 'normal'.
+%   u_K    : parameter of the Jensen-Renyi kernel-2 (alpha_K = 2: fixed).
+%   par1   : parameters of the distribution (structure),
+%            distr1 = 'normal': par1.m, par1.s <- N(m1,s1^2xI).
+%   par2   : parameters of the distribution (structure),
+%            distr2 = 'normal': par2.m, par2.s <- N(m2,s2^2xI).
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009. 
+    m1 = par1.m; s1 = par1.s;
+    m2 = par2.m; s2 = par2.s;
+    
+    w = [1/2;1/2];%parameter of Jensen-Renyi divergence, fixed
+    ms = [m1,m2];
+    ss = [s1,s2];
+    term1 = compute_H2(w,ms,ss);
+    term2 = w(1) * compute_H2(1,m1,s1) + w(2) * compute_H2(1,m2,s2);
+    D  = term1 - term2; %H2(\sum_i w_i Y_i) - \sum_i w_i H2(Y_i), where H2 is the quadratic Renyi entropy
+    
+    K = exp(-u_K*D);
+else
+    error('Distribution=?');                 
+end
+            

code/estimators/quick_tests/analytical_values/analytical_value_KEJT1.m

+function [K] = analytical_value_KEJT1(distr1,distr2,u_K,par1,par2)
+%function [K] = analytical_value_KEJT1(distr1,distr2,u_K,par1,par2)
+%Analytical value (K) of the Jensen-Tsallis kernel-1 for the given distributions. See also 'quick_test_KEJT1.m'.
+%
+%INPUT:
+%   distr1 : name of the distribution-1; 'normal'.
+%   distr2 : name of the distribution-2; 'normal'.
+%   u_K    : parameter of the Jensen-Tsallis kernel-1 (w = [1/2;1/2]: fixed).
+%   par1   : parameters of the distribution (structure),
+%            distr1 = 'normal': par1.m, par1.s <- N(m1,s1^2xI).
+%   par2   : parameters of the distribution (structure),
+%            distr2 = 'normal': par2.m, par2.s <- N(m2,s2^2xI).
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal')
+    m1 = par1.m; s1 = par1.s;
+    m2 = par2.m; s2 = par2.s;
+    %analytical value of Renyi entropy (Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.):
+        ms = [m1,m2];
+        ss = [s1,s2];
+        HR = compute_H2([1/2,1/2],ms,ss);%quadratic Renyi entropy
+    %quadratic Renyi entropy -> quadratic Tsallis entropy:
+        HT = 1 - exp(-HR);
+    K = exp(-u_K*HT);
+else
+    error('Distribution=?');                 
+end
+
+    
+

code/estimators/quick_tests/analytical_values/analytical_value_KEJT2.m

+function [K] = analytical_value_KEJT2(distr1,distr2,u_K,par1,par2)
+%function [K] = analytical_value_KEJT2(distr1,distr2,u_K,par1,par2)
+%Analytical value (K) of the Jensen-Tsallis kernel-2 for the given distributions. See also 'quick_test_KEJT2.m'.
+%
+%INPUT:
+%   distr1 : name of the distribution-1; 'normal'.
+%   distr2 : name of the distribution-2; 'normal'.
+%   u_K    : parameter of the Jensen-Tsallis kernel-2 (w = [1/2;1/2]: fixed).
+%   par1   : parameters of the distribution (structure),
+%            distr1 = 'normal': par1.m, par1.s <- N(m1,s1^2xI).
+%   par2   : parameters of the distribution (structure),
+%            distr2 = 'normal': par2.m, par2.s <- N(m2,s2^2xI).
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal')
+    m1 = par1.m; s1 = par1.s;
+    m2 = par2.m; s2 = par2.s;
+    %analytical value of Jensen-Renyi divergence (Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009):
+        w = [1/2;1/2]; %parameter of the Jensen-Tsallis divergence, fixed
+        ms = [m1,m2];
+        ss = [s1,s2];
+        term1 = 1 - exp(-compute_H2(w,ms,ss)); %quadratic Renyi entropy -> quadratic Tsallis entropy
+        term2 = w(1) * (1-exp(-compute_H2(1,m1,s1))) + w(2) * (1-exp(-compute_H2(1,m2,s2))); %quadratic Renyi entropy -> quadratic Tsallis entropy
+        D  = term1 - term2; %T2(\sum_i w_i Y_i) - \sum_i w_i T2(Y_i), where T2 is the quadratic Tsallis entropy
+    %analytical value of Jensen-Tsallis kernel-2:
+        K = exp(-u_K*D);
+else
+    error('Distribution=?');                 
+end

code/estimators/quick_tests/analytical_values/analytical_value_Kexpected.m

+function [K] = analytical_value_Kexpected(distr1,distr2,sigma_K,par1,par2)
+%function [K] = analytical_value_Kexpected(distr1,distr2,sigma_K,par1,par2)
+%Analytical value (K) of the expected kernel for the given distributions. See also 'quick_test_Kexpected.m'.
+%
+%INPUT:
+%   distr1 : name of the distribution-1; 'normal'.
+%   distr2 : name of the distribution-2; 'normal'.
+%   sigma_K: parameter of the expected kernel.
+%   par1   : parameters of the distribution (structure),
+%            distr1 = 'normal': par1.e = expectation1, par1.cov = covariance matrix1.
+%   par2   : parameters of the distribution (structure),
+%            distr2 = 'normal': par2.e = expectation2, par2.cov = covariance matrix2.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Scholkopf. Learning from distributions via support measure machines. In Advances in Neural Information Processing Systems (NIPS), pages 10-18, 2011.
+    e1 = par1.e; cov1 = par1.cov;
+    e2 = par2.e; cov2 = par2.cov;
+    d = length(e1);
+    
+    gam = 1/sigma_K^2;
+    K = exp(-1/2*((e1-e2).'*inv(cov1+cov2+eye(d)/gam)*(e1-e2))) / sqrt(abs(det(gam*cov1+gam*cov2+eye(d))));
+else
+    error('Distribution=?');                 
+end

code/estimators/quick_tests/analytical_values/analytical_values_KPP.m

+function [K] = analytical_values_KPP(distr1,distr2,rho_K,par1,par2)
+%function [K] = analytical_values_KPP(distr1,distr2,rho_K,par1,par2)
+%Analytical value (K) of the probability product kernel for the given distributions. See also 'quick_test_KPP.m'.
+%
+%INPUT:
+%   distr1 : name of the distribution-1; 'normal'.
+%   distr2 : name of the distribution-2; 'normal'.
+%   rho_K  : parameter of the probability product kernel.
+%   par1   : parameters of the distribution (structure),
+%            distr1 = 'normal': par1.e = expectation1, par1.cov = covariance matrix1.
+%   par2   : parameters of the distribution (structure),
+%            distr2 = 'normal': par2.e = expectation2, par2.cov = covariance matrix2.
+
+%Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
+%
+%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
+%
+%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
+%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+%
+%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
+%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+%
+%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
+
+if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Tony Jebara, Risi Kondor, and Andrew Howard. Probability product kernels. Journal of Machine Learning Research, 5:819-844, 2004
+    e1 = par1.e; cov1 = par1.cov;
+    e2 = par2.e; cov2 = par2.cov;
+    d = length(e1);
+    
+    %inv12:
+        inv1 = inv(cov1);
+        inv2 = inv(cov2);
+        inv12 = inv(inv1+inv2);
+    e12 = inv1 * e1 + inv2 * e2;
+    K = (2*pi)^((1-2*rho_K)*d/2) * rho_K^(-d/2) * abs(det(inv12))^(1/2) * abs(det(cov1))^(-rho_K/2) * abs(det(cov2))^(-rho_K/2) * exp(-rho_K/2*(e1.'*inv1*e1 + e2.'*inv2*e2 - e12.'*inv12*e12));
+else
+    error('Distribution=?');                 
+end

code/estimators/quick_tests/quick_test_CCE.m

 %function [] = quick_test_CCE()
-%Quick test for cross-entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for cross-entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_CCE.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
                 Y1 = A1 * randn(d,num_of_samples_max) + repmat(e1,1,num_of_samples_max); %A1xN(0,I)+e1
                 Y2 = A2 * randn(d,num_of_samples_max) + repmat(e2,1,num_of_samples_max); %A2xN(0,I)+e2
             %analytical value of cross-entropy:
-                invC2 = inv(cov2);
-                diffe = e1 - e2;
-                CE = 1/2 * ( d*log(2*pi) + log(det(cov2)) + trace(invC2*cov1) + diffe.'*invC2*diffe );
+                par1.e = e1; par1.cov = cov1;
+                par2.e = e2; par2.cov = cov2;
+                CE = analytical_value_CCE(distr,distr,par1,par2);
         otherwise
             error('Distribution=?');
     end   

code/estimators/quick_tests/quick_test_DBregman.m

 %function [] = quick_test_DBregman()
-%Quick test for Bregman divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, uniform variables are considered.
+%Quick test for Bregman divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, uniform variables are considered. See also 'analytical_value_DBregman.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
             a = b.*rand(d,1);
             Y1 = rand(d,num_of_samples_max) .* repmat(a,1,num_of_samples_max); %~U[0,a]
             Y2 = rand(d,num_of_samples_max) .* repmat(b,1,num_of_samples_max); %U[0,b], a<=b (coordinate-wise) => Y1<<Y2
-            %analytical value (of Bregman divergence):      
-                D = -1/(alpha_D-1)*prod(b)^(1-alpha_D) + 1/(alpha_D-1) * prod(a)^(1-alpha_D);
+            %analytical value (of Bregman divergence):  
+                par1.a = a;
+                par2.a = b;
+                D = analytical_value_DBregman(distr,distr,alpha_D,par1,par2);
         otherwise
             error('Distribution=?');                 
    end

code/estimators/quick_tests/quick_test_DChiSquare.m

 %function [] = quick_test_DChiSquare()
-%Quick test for (Pearson) chi^2 divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, uniform variables are considered.
+%Quick test for (Pearson) chi^2 divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, uniform variables are considered. See also 'analytical_value_DChiSquare.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
             Y1 = rand(d,num_of_samples_max) .* repmat(a,1,num_of_samples_max); %~U[0,a]
             Y2 = rand(d,num_of_samples_max) .* repmat(b,1,num_of_samples_max); %U[0,b], a<=b (coordinate-wise) => Y1<<Y2
             %analytical value (of chi^2 divergence):
-                D = prod(b)/prod(a) - 1;
+                par1.a = a;
+                par2.a = b;
+                D = analytical_value_DChiSquare(distr,distr,par1,par2);
         otherwise
             error('Distribution=?');                 
    end
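
Sketch of the relocated chi^2 value (uniform case from the removed line; field names taken from the call site):

```matlab
function [D] = analytical_value_DChiSquare(distr1,distr2,par1,par2)
%Analytical Pearson chi^2 divergence of U[0,a] and U[0,b], a<=b coordinate-wise; sketch only.
    if strcmp(distr1,'uniform') && strcmp(distr2,'uniform')
        D = prod(par2.a)/prod(par1.a) - 1;
    else
        error('Distribution=?');
    end
```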

code/estimators/quick_tests/quick_test_DJensen_Renyi.m

 %function [] = quick_test_DJensen_Renyi()
-%Quick test for Jensen-Renyi divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for Jensen-Renyi divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_DJensen_Renyi.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
     distr = 'normal'; %fixed
     d = 2; %dimension of the distribution
     num_of_samples_v = [100:500:12*1000]; %sample numbers used for estimation
-    w = [1/3,2/3]; %weight used in the Jensen-Renyi divergence
+    w_D = [1/3,2/3]; %weight used in the Jensen-Renyi divergence
     %method, meta:    
         cost_name = 'JensenRenyi_HRenyi';
     
 %initialization:
     L = length(num_of_samples_v);
-    co = D_initialization(cost_name,1,{'alpha',2,'w',w}); %{'alpha',2}:set the 'alpha' field to 2, and w
+    co = D_initialization(cost_name,1,{'alpha',2,'w',w_D}); %{'alpha',2,'w',w_D}: set the 'alpha' field to 2 and the 'w' field to w_D
     num_of_samples_max = num_of_samples_v(end);
     D_hat_v = zeros(L,1); %vector of the estimated Jensen-Renyi divergences
 
                 Y1 = randn(d,num_of_samples_max) * s1 + repmat(m1,1,num_of_samples_max);
                 Y2 = randn(d,num_of_samples_max) * s2 + repmat(m2,1,num_of_samples_max);
 
-  	   %analytical value of Jensen-Renyi divergence (ref.: Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.)
-	        ms = [m1,m2];
-	        ss = [s1,s2];
-	        term1 = compute_H2(w,ms,ss);
-	        term2 = w(1) * compute_H2(1,m1,s1) + w(2) * compute_H2(1,m2,s2);
-	        D  = term1 - term2; %H2(\sum_i wi Yi) - \sum_i w_i H2(Yi), where H2 is the quadratic Renyi entropy
+            %analytical value of Jensen-Renyi divergence:
+            par1.m = m1; par1.s = s1;
+            par2.m = m2; par2.s = s2;
+            D = analytical_value_DJensen_Renyi(distr,distr,w_D,par1,par2);
         otherwise
            error('Distribution=?');                 
     end
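
A sketch of the relocated Jensen-Renyi divergence value, reusing the toolbox's existing 'compute_H2' helper (quadratic Renyi entropy of a Gaussian mixture), exactly as the removed lines did; fields 'm' and 's' follow the call site:

```matlab
function [D] = analytical_value_DJensen_Renyi(distr1,distr2,w_D,par1,par2)
%Analytical Jensen-Renyi divergence (alpha = 2) of two normals; sketch only.
%Ref.: Fei Wang et al. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. MICCAI, 12:648-655, 2009.
    if strcmp(distr1,'normal') && strcmp(distr2,'normal')
        ms = [par1.m,par2.m];
        ss = [par1.s,par2.s];
        term1 = compute_H2(w_D,ms,ss);
        term2 = w_D(1)*compute_H2(1,par1.m,par1.s) + w_D(2)*compute_H2(1,par2.m,par2.s);
        D = term1 - term2; %H2(\sum_i w_i Y_i) - \sum_i w_i H2(Y_i), H2: quadratic Renyi entropy
    else
        error('Distribution=?');
    end
```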

code/estimators/quick_tests/quick_test_DKL.m

 %function [] = quick_test_DKL()
-%Quick test for Kullback-Leibler divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for Kullback-Leibler divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_DKL.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
                 Y1 = A1 * randn(d,num_of_samples_max) + repmat(e1,1,num_of_samples_max); %A1xN(0,I)+e1
                 Y2 = A2 * randn(d,num_of_samples_max) + repmat(e2,1,num_of_samples_max); %A2xN(0,I)+e2
             %analytical value of Kullback-Leibler divergence:
-                diffe = e1 - e2;
-                invSigma2 = inv(cov2);
-                D = 1/2 * ( log(det(cov2)/det(cov1)) + trace(invSigma2*cov1)  + diffe.' * invSigma2 * diffe - d);
+                par1.e = e1; par1.cov = cov1;
+                par2.e = e2; par2.cov = cov2;
+                D = analytical_value_DKL(distr,distr,par1,par2);
         otherwise
             error('Distribution=?');            
     end
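
The removed closed form, relocated; a sketch under the same normal-normal assumption:

```matlab
function [D] = analytical_value_DKL(distr1,distr2,par1,par2)
%Analytical Kullback-Leibler divergence of two normals; sketch only.
    if strcmp(distr1,'normal') && strcmp(distr2,'normal')
        d = length(par1.e);
        diffe = par1.e - par2.e;
        invSigma2 = inv(par2.cov);
        D = 1/2 * ( log(det(par2.cov)/det(par1.cov)) + trace(invSigma2*par1.cov) + diffe.'*invSigma2*diffe - d );
    else
        error('Distribution=?');
    end
```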

code/estimators/quick_tests/quick_test_DL2.m

 %function [] = quick_test_DL2()
-%Quick test for L2-divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, uniform variables are considered.
+%Quick test for L2-divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, uniform variables are considered. See also 'analytical_value_DL2.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
             b = a .* rand(d,1);
             Y1 = rand(d,num_of_samples_max) .* repmat(a,1,num_of_samples_max); %~U[0,a]
             Y2 = rand(d,num_of_samples_max) .* repmat(b,1,num_of_samples_max); %U[0,b], b<=a (coordinate-wise) => Y2<<Y1
-            %analytical value:    
-                D = sqrt(1/prod(b) - 1/prod(a));
+            %analytical value: 
+                par1.a = a; par2.a = b;
+                D = analytical_value_DL2(distr,distr,par1,par2);
         otherwise
             error('Distribution=?');                
     end
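
Sketch of the relocated L2-divergence value (uniform case; note that here b <= a coordinate-wise, i.e., Y2 << Y1):

```matlab
function [D] = analytical_value_DL2(distr1,distr2,par1,par2)
%Analytical L2 divergence of U[0,a] and U[0,b], b<=a coordinate-wise; sketch only.
    if strcmp(distr1,'uniform') && strcmp(distr2,'uniform')
        D = sqrt(1/prod(par2.a) - 1/prod(par1.a));
    else
        error('Distribution=?');
    end
```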

code/estimators/quick_tests/quick_test_DRenyi.m

 %function [] = quick_test_DRenyi()
-%Quick test for Renyi divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for Renyi divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_DRenyi.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
                 Y1 = A1 * randn(d,num_of_samples_max) + repmat(e1,1,num_of_samples_max); %A1xN(0,I)+e1
                 Y2 = A2 * randn(d,num_of_samples_max) + repmat(e2,1,num_of_samples_max); %A2xN(0,I)+e2
             %analytical value of Renyi divergence:
-                mixcov = alpha_D * cov2 + (1-alpha_D) * cov1;
-                diffe = e1 - e2;
-                %Renyi divergence (Ref.: Manuel Gil. On Renyi Divergence Measures for Continuous Alphabet Sources. Phd Thesis, Queen’s University, 2011.):
-                    D = alpha_D * ( 1/2 *diffe.' * inv(mixcov) * diffe  - 1 / (2*alpha_D*(alpha_D-1)) * log( abs(det(mixcov)) / (det(cov1)^(1-alpha_D) * det(cov2)^(alpha_D)) )); 
-                %Kullback-Leibler divergence (verification, in the alpha_D->1 limit: KL div. = Renyi div.):
-                    %D = 1/2 * ( log(det(cov2)/det(cov1)) + trace(inv(cov2)*cov1)  + diffe.' * inv(cov2) * diffe - d),
+                par1.e = e1; par1.cov = cov1;
+                par2.e = e2; par2.cov = cov2;
+                D = analytical_value_DRenyi(distr,distr,alpha_D,par1,par2);
         otherwise
             error('Distribution=?');            
     end
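
The relocated Renyi divergence value, sketched from the removed lines (normal case; ref.: Manuel Gil. On Renyi Divergence Measures for Continuous Alphabet Sources. PhD Thesis, Queen's University, 2011):

```matlab
function [D] = analytical_value_DRenyi(distr1,distr2,alpha_D,par1,par2)
%Analytical Renyi divergence of two normals; sketch only.
    if strcmp(distr1,'normal') && strcmp(distr2,'normal')
        mixcov = alpha_D*par2.cov + (1-alpha_D)*par1.cov;
        diffe = par1.e - par2.e;
        D = alpha_D * ( 1/2*diffe.'*inv(mixcov)*diffe - 1/(2*alpha_D*(alpha_D-1)) * log( abs(det(mixcov)) / (det(par1.cov)^(1-alpha_D) * det(par2.cov)^alpha_D) ) );
    else
        error('Distribution=?');
    end
```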

code/estimators/quick_tests/quick_test_Dequality.m

              %cost_name = 'Bregman_kNN_k';
              %cost_name = 'symBregman_kNN_k';
              %cost_name = 'ChiSquare_kNN_k';
+             %cost_name = 'MMD_Ustat_iChol';
+             %cost_name = 'MMD_Vstat_iChol';
+
         %meta:
              %cost_name = 'Jdistance';        
              %cost_name = 'KL_CCE_HShannon';

code/estimators/quick_tests/quick_test_HPhi.m

 %function [] = quick_test_HPhi()
-%Quick test for Phi-entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, uniform variables are considered.
+%Quick test for Phi-entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, uniform variables are considered. See also 'analytical_value_HPhi.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
 %parameters:
     distr = 'uniform'; %fixed
     %Phi (in the Phi-entropy):
-        c = 2; %>=1; c is also used in the analytical expression: 'H = ...'
-        Phi = @(t)(t.^c);
+        c_H = 2; %>=1; c_H is also used in the analytical expression (see 'analytical_value_HPhi.m')
+        Phi_H = @(t)(t.^c_H);
     num_of_samples_v = [1000:1000:100*1000]; %sample numbers used for estimation
     %estimator (of Phi-entropy), base:
         %base:
     d = 1;%dimension of the distribution
     num_of_samples_max = num_of_samples_v(end);
     L = length(num_of_samples_v);
-    co = H_initialization(cost_name,1,{'Phi',Phi}); %{'Phi',Phi}: we also set the Phi function here
+    co = H_initialization(cost_name,1,{'Phi',Phi_H}); %{'Phi',Phi_H}: we also set the Phi function here
     H_hat_v = zeros(L,1);%vector of estimated entropies
     
 %distr -> samples (Y), analytical formula for the entropy (H):
             %generate samples:
                 Y =  (b-a) * rand(1,num_of_samples_max) + a;
             %analytical value of the Phi-entropy:
-                H = 1 / (b-a)^c; 
+                par.a = a; par.b = b;
+                Hpar.c_H = c_H;
+                H = analytical_value_HPhi(distr,Hpar,par);
         otherwise
             error('Distribution=?');
     end  
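
Sketch of the relocated Phi-entropy value; the 'Hpar' struct bundles the Phi-specific parameter c_H (Phi(t) = t^c_H), as the call site suggests, and the uniform distribution is 1-dimensional on [a,b]:

```matlab
function [H] = analytical_value_HPhi(distr,Hpar,par)
%Analytical Phi-entropy of U[a,b] (d = 1) for Phi(t) = t^c_H; sketch only.
    if strcmp(distr,'uniform')
        H = 1 / (par.b - par.a)^Hpar.c_H;
    else
        error('Distribution=?');
    end
```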

code/estimators/quick_tests/quick_test_HRenyi.m

 %function [] = quick_test_HRenyi()
-%Quick test for Renyi entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal/uniform variables are considered.
+%Quick test for Renyi entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal/uniform variables are considered. See also 'analytical_value_HRenyi.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
                 %A = eye(d);%do not transform the data
             %generate samples:
                 Y =  A * (rand(d,num_of_samples_max) .* repmat(b-a,1,num_of_samples_max) + repmat(a,1,num_of_samples_max));
-             %analytical value of the Renyi entropy; we also applied the transformation rule of Renyi entropy:
-                H = log(prod(b-a))  + log(abs(det(A)));
+             %analytical value of the Renyi entropy:
+                par.a = a; par.b = b; par.A = A;
+                H = analytical_value_HRenyi(distr,alpha_H,par);
         case 'normal'
             %expectation:
                 e = rand(d,1);
                 cov_mtx = A * A.';
             %generate samples:
                 Y = A * randn(d,num_of_samples_max) + repmat(e,1,num_of_samples_max); %AxN(0,I)+e
-            %analytical value of Renyi entropy [Kai-Sheng Song. Renyi information, loglikelihood and an intrinsic distribution measure. Journal of Statistical Planning and Inference 93 (2001) 51-69.]:
-                H = log( (2*pi)^(d/2) * sqrt(abs(det(cov_mtx))) ) - d * log(alpha_H) / 2 / (1-alpha_H); 
+            %analytical value of Renyi entropy:
+                par.cov_mtx = cov_mtx;
+                H = analytical_value_HRenyi(distr,alpha_H,par);
         otherwise
             error('Distribution=?');                 
     end
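
Sketch of the relocated Renyi entropy values, merging the two removed branches (uniform: transformation rule of the Renyi entropy applied to Y = A*U[a,b]; normal: Kai-Sheng Song. Renyi information, loglikelihood and an intrinsic distribution measure. Journal of Statistical Planning and Inference 93 (2001) 51-69):

```matlab
function [H] = analytical_value_HRenyi(distr,alpha_H,par)
%Analytical Renyi entropy; sketch covering only the cases of this quick test.
    switch distr
        case 'uniform' %Y = A*(U[a,b]); transformation rule applied; the value is alpha-independent here
            H = log(prod(par.b-par.a)) + log(abs(det(par.A)));
        case 'normal'
            d = size(par.cov_mtx,1);
            H = log( (2*pi)^(d/2) * sqrt(abs(det(par.cov_mtx))) ) - d*log(alpha_H)/(2*(1-alpha_H));
        otherwise
            error('Distribution=?');
    end
```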

code/estimators/quick_tests/quick_test_HShannon.m

 %function [] = quick_test_HShannon()
-%Quick test for Shannon entropy estimators (over the real field): analytical expression vs estimated value as a function of the sample number. In the test, normal/uniform variables are considered.
+%Quick test for Shannon entropy estimators (over the real field): analytical expression vs estimated value as a function of the sample number. In the test, normal/uniform variables are considered. See also 'analytical_value_HShannon.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
                 %A = eye(d);%do not transform the data
             %generate samples:
                 Y =  A * (rand(d,num_of_samples_max) .* repmat(b-a,1,num_of_samples_max) + repmat(a,1,num_of_samples_max));
-            %analytical value of the Shannon entropy; we also applied the transformation rule of Shannon entropy:
-                H = log(prod(b-a)) + log(abs(det(A))); 
+            %analytical value of the Shannon entropy:
+                par.a = a; par.b = b; par.A = A;
+                H = analytical_value_HShannon(distr,par);
         case 'normal'
             %expectation:
                 e = rand(d,1);
             %generate samples:
                 Y = A * randn(d,num_of_samples_max) + repmat(e,1,num_of_samples_max); %AxN(0,I)+e
             %analytical value of Shannon entropy:
+                par.e = e; par.cov_mtx = cov_mtx;
+                H = analytical_value_HShannon(distr,par);
-                H =  1/2 * log( (2*pi*exp(1))^d * det(cov_mtx) ); %equals to: H = 1/2 * log(det(cov_mtx)) + d/2*log(2*pi) + d/2
         otherwise
             error('Distribution=?');
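
Sketch of the relocated Shannon entropy values, matching the two removed branches:

```matlab
function [H] = analytical_value_HShannon(distr,par)
%Analytical Shannon entropy; sketch covering only the cases of this quick test.
    switch distr
        case 'uniform' %Y = A*(U[a,b]); transformation rule of the Shannon entropy applied
            H = log(prod(par.b-par.a)) + log(abs(det(par.A)));
        case 'normal'  %equals to: H = 1/2*log(det(C)) + d/2*log(2*pi) + d/2
            d = size(par.cov_mtx,1);
            H = 1/2 * log( (2*pi*exp(1))^d * det(par.cov_mtx) );
        otherwise
            error('Distribution=?');
    end
```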

code/estimators/quick_tests/quick_test_HSharmaM.m

 %function [] = quick_test_HSharmaM()
-%Quick test for Sharma-Mittal entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for Sharma-Mittal entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_HSharmaM.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %

code/estimators/quick_tests/quick_test_HTsallis.m

 %function [] = quick_test_HTsallis()
-%Quick test for Tsallis entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal/uniform variables are considered.
+%Quick test for Tsallis entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal/uniform variables are considered. See also 'analytical_value_HTsallis.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
             %generate samples:
                 Y =  A * (rand(d,num_of_samples_max) .* repmat(b-a,1,num_of_samples_max) + repmat(a,1,num_of_samples_max));
              %analytical value of the Tsallis entropy:
-                %I:
-                    H = log(prod(b-a))  + log(abs(det(A))); %Renyi entropy; we also applied the transformation rule of Renyi entropy
-                    H = ( exp((1-alpha_H)*H) - 1 ) / (1-alpha_H); %Renyi entropy -> Tsallis entropy
-                %II (assumption: A=I):
-                    %H = (prod(b-a)^(1-alpha_H) - 1) / (1-alpha_H);
+                par.a = a; par.b = b; par.A = A;
+                H = analytical_value_HTsallis(distr,alpha_H,par);
         case 'normal'
             %expectation:
                 e = rand(d,1);
             %generate samples:
                 Y = A * randn(d,num_of_samples_max) + repmat(e,1,num_of_samples_max); %AxN(0,I)+e
             %analytical value of Tsallis entropy:
-                %I:
-                    H = log( (2*pi)^(d/2) * sqrt(abs(det(cov_mtx))) ) - d * log(alpha_H) / 2 / (1-alpha_H); %Renyi entropy: Kai-Sheng Song. Renyi information, loglikelihood and an intrinsic distribution measure. Journal of Statistical Planning and Inference 93 (2001) 51-69.
-                    H = ( exp((1-alpha_H)*H) - 1 ) / (1-alpha_H); %Renyi entropy -> Tsallis entropy
-                %II:
-                    %H = ( ( (2*pi)^(d/2) * sqrt(abs(det(cov_mtx))) )^(1-alpha_H)  / (alpha_H^(d/2)) - 1 ) / (1-alpha_H);
+                par.cov_mtx = cov_mtx;
+                H = analytical_value_HTsallis(distr,alpha_H,par);
         otherwise
             error('Distribution=?');                 
     end
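
Sketch of the relocated Tsallis entropy values; both removed branches first computed the Renyi entropy and then converted it, and the sketch keeps that route:

```matlab
function [H] = analytical_value_HTsallis(distr,alpha_H,par)
%Analytical Tsallis entropy via the Renyi entropy; sketch only.
    switch distr
        case 'uniform' %Renyi entropy of Y = A*(U[a,b]); transformation rule applied
            H = log(prod(par.b-par.a)) + log(abs(det(par.A)));
        case 'normal'  %Renyi entropy of N(e,C) [Kai-Sheng Song, JSPI 93 (2001) 51-69]
            d = size(par.cov_mtx,1);
            H = log( (2*pi)^(d/2) * sqrt(abs(det(par.cov_mtx))) ) - d*log(alpha_H)/(2*(1-alpha_H));
        otherwise
            error('Distribution=?');
    end
    H = ( exp((1-alpha_H)*H) - 1 ) / (1-alpha_H); %Renyi entropy -> Tsallis entropy
```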

code/estimators/quick_tests/quick_test_IRenyi.m

 %function [] = quick_test_IRenyi()
-%Quick test for Renyi mutual information estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for Renyi mutual information estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_IRenyi.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
     
 %parameters:
     distr = 'normal'; %fixed
-    alpha = 0.7; %parameter of Renyi mutual information, \ne 1
+    alpha_I = 0.7; %parameter of Renyi mutual information, \ne 1
     d = 2; %>=2; dimension of the distribution
     num_of_samples_v = [100:500:10*1000]; %sample numbers used for estimation
     %estimator (of Renyi mutual information), meta:
 %initialization:
     num_of_samples_max = num_of_samples_v(end);
     L = length(num_of_samples_v);
-    co = I_initialization(cost_name,1,{'alpha',alpha}); %{'alpha',alpha_H}: set the 'alpha' field
+    co = I_initialization(cost_name,1,{'alpha',alpha_I}); %{'alpha',alpha_I}: set the 'alpha' field
     I_hat_v = zeros(L,1); %vector of estimated Renyi mutual information
     
 %distr, d -> samples (Y), analytical formula for the Renyi mutual information (I):
             %generate samples:
                 Y = A * randn(d,num_of_samples_max) + repmat(e,1,num_of_samples_max); %AxN(0,I)+e
             %analytical value of Renyi mutual information:
-                t1 = -alpha / 2 * log(det(cov_mtx));
-                t2 = -(1-alpha) / 2 * log(prod(diag(cov_mtx)));
-                t3 = log( det( alpha*inv(cov_mtx) + (1-alpha)*diag(1./diag(cov_mtx)) ) ) / 2;
-                I = 1 / (alpha-1) * (t1+t2-t3);            
+                par.cov_mtx = cov_mtx;
+                I = analytical_value_IRenyi(distr,alpha_I,par);
         otherwise
             error('Distribution=?');            
     end
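
The relocated Renyi mutual information value, sketched from the removed lines (normal case; the 'cov_mtx' field follows the call site):

```matlab
function [I] = analytical_value_IRenyi(distr,alpha_I,par)
%Analytical Renyi mutual information of a normal variable with covariance C; sketch only.
    if strcmp(distr,'normal')
        C = par.cov_mtx;
        t1 = -alpha_I/2 * log(det(C));
        t2 = -(1-alpha_I)/2 * log(prod(diag(C)));
        t3 = log( det( alpha_I*inv(C) + (1-alpha_I)*diag(1./diag(C)) ) ) / 2;
        I = 1/(alpha_I-1) * (t1+t2-t3);
    else
        error('Distribution=?');
    end
```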

code/estimators/quick_tests/quick_test_KEJR1.m

 %function [] = quick_test_KEJR1()
-%Quick test for Jensen-Renyi kernel-1 estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for Jensen-Renyi kernel-1 estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_KEJR1.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
     num_of_samples_max = num_of_samples_v(end);
     L = length(num_of_samples_v);
     %initialize cost object, set the 'alpha' and 'u' parameters:
-        co = K_initialization(cost_name,1,{'alpha',alpha_K,'u',u_K}); %{'alpha',alpha_K,'u',u_K}: set the 'alpha' field
+        co = K_initialization(cost_name,1,{'alpha',alpha_K,'u',u_K}); %{'alpha',alpha_K,'u',u_K}: set the 'alpha' and 'u' fields
     K_hat_v = zeros(L,1); %vector of estimated kernel values
     
 %distr, d -> samples (Y1,Y2), analytical formula for the Jensen-Renyi kernel (K):
                    m2 = randn(d,1);  s2 = rand;
                 Y1 = randn(d,num_of_samples_max) * s1 + repmat(m1,1,num_of_samples_max);
                 Y2 = randn(d,num_of_samples_max) * s2 + repmat(m2,1,num_of_samples_max);
-            %analytical value of Renyi entropy (ref.: Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.)
-    	        ms = [m1,m2];
-    	        ss = [s1,s2];
-                H = compute_H2([1/2,1/2],ms,ss);%quadratic Renyi entropy
-            K = exp(-u_K*H);
+            %analytical value of Jensen-Renyi kernel-1:
+                par1.m = m1; par1.s = s1;
+                par2.m = m2; par2.s = s2;
+                K = analytical_value_KEJR1(distr,distr,u_K,par1,par2);
         otherwise
             error('Distribution=?');                 
     end
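
Sketch of the relocated Jensen-Renyi kernel-1 value, again via the toolbox's 'compute_H2' as in the removed lines:

```matlab
function [K] = analytical_value_KEJR1(distr1,distr2,u_K,par1,par2)
%Analytical Jensen-Renyi kernel-1 (alpha = 2) of two normals; sketch only.
    if strcmp(distr1,'normal') && strcmp(distr2,'normal')
        H = compute_H2([1/2,1/2],[par1.m,par2.m],[par1.s,par2.s]); %quadratic Renyi entropy of the mixture
        K = exp(-u_K*H);
    else
        error('Distribution=?');
    end
```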

code/estimators/quick_tests/quick_test_KEJR2.m

 %function [] = quick_test_KEJR2()
-%Quick test for exponentiated Jensen-Renyi kernel-2 estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for exponentiated Jensen-Renyi kernel-2 estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_KEJR2.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
     distr = 'normal'; %fixed
     d = 1; %dimension of the distribution
     num_of_samples_v = [100:500:12*1000]; %sample numbers used for estimation
-    u = 1; %>0, parameter of the Jensen-Renyi kernel
+    u_K = 1; %>0, parameter of the Jensen-Renyi kernel
     %method, meta:    
     	cost_name = 'EJR2_DJR';
         
 %initialization:
     L = length(num_of_samples_v);
-    co = K_initialization(cost_name,1,{'alpha',2,'u',u}); %{'alpha',2,'u',u}:set the 'alpha' and 'u' fields. Note: there exist explicit formula in case of alpha = 2 for Jensen-Renyi divergence => to the Jensen-Renyi kernel too.
+    co = K_initialization(cost_name,1,{'alpha',2,'u',u_K}); %{'alpha',2,'u',u_K}: set the 'alpha' and 'u' fields. Note: for alpha = 2 there exists an explicit formula for the Jensen-Renyi divergence => for the Jensen-Renyi kernel too.
     num_of_samples_max = num_of_samples_v(end);
     K_hat_v = zeros(L,1); %vector of the estimated Jensen-Renyi kernel
     
                    m2 = randn(d,1);  s2 = rand;
                 Y1 = randn(d,num_of_samples_max) * s1 + repmat(m1,1,num_of_samples_max);
                 Y2 = randn(d,num_of_samples_max) * s2 + repmat(m2,1,num_of_samples_max);
-
-  	   %analytical value of Jensen-Renyi divergence (ref.: Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.)
-                w = [1/2;1/2];%parameter of Jensen-Renyi divergence, fixed
-	        ms = [m1,m2];
-	        ss = [s1,s2];
-	        term1 = compute_H2(w,ms,ss);
-	        term2 = w(1) * compute_H2(1,m1,s1) + w(2) * compute_H2(1,m2,s2);
-	        D  = term1 - term2; %H2(\sum_i wi Yi) - \sum_i w_i H2(Yi), where H2 is the quadratic Renyi entropy
-        %analytical value of Jensen-Renyi kernel-2:
-            K = exp(-u*D);
+            %analytical value of Jensen-Renyi kernel-2:
+                par1.m = m1; par1.s = s1;
+                par2.m = m2; par2.s = s2;
+                K = analytical_value_KEJR2(distr,distr,u_K,par1,par2);
         otherwise
             error('Distribution=?');                 
     end
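
Sketch of the relocated Jensen-Renyi kernel-2 value (uniform weights w = [1/2,1/2] and alpha = 2, as in the removed lines):

```matlab
function [K] = analytical_value_KEJR2(distr1,distr2,u_K,par1,par2)
%Analytical exponentiated Jensen-Renyi kernel-2 of two normals; sketch only.
    if strcmp(distr1,'normal') && strcmp(distr2,'normal')
        w = [1/2,1/2];
        term1 = compute_H2(w,[par1.m,par2.m],[par1.s,par2.s]);
        term2 = w(1)*compute_H2(1,par1.m,par1.s) + w(2)*compute_H2(1,par2.m,par2.s);
        K = exp(-u_K*(term1-term2)); %K = exp(-u*D_JR)
    else
        error('Distribution=?');
    end
```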

code/estimators/quick_tests/quick_test_KEJT1.m

 %function [] = quick_test_KEJT1()
-%Quick test for Jensen-Tsallis kernel-1 estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for Jensen-Tsallis kernel-1 estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_KEJT1.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
     L = length(num_of_samples_v);
     alpha_K = 2; %\in [0,2]\{1}; parameter of the Jensen-Tsallis kernel; fixed; for alpha_K = 2 we have explicit formula for the Renyi-entropy => for the Jensen-Tsallis kernel    
     %initialize cost object, set the 'alpha' and 'u' parameters:
-        co = K_initialization(cost_name,1,{'alpha',alpha_K,'u',u_K}); %{'alpha',alpha_K,'u',u_K}: set the 'alpha' field
+        co = K_initialization(cost_name,1,{'alpha',alpha_K,'u',u_K}); %{'alpha',alpha_K,'u',u_K}: set the 'alpha' and 'u' fields
     K_hat_v = zeros(L,1); %vector of estimated kernel values
     
 %distr, d -> samples (Y1,Y2), analytical formula for the Jensen-Tsallis kernel (K):
                    m2 = randn(d,1);  s2 = rand;
                 Y1 = randn(d,num_of_samples_max) * s1 + repmat(m1,1,num_of_samples_max);
                 Y2 = randn(d,num_of_samples_max) * s2 + repmat(m2,1,num_of_samples_max);
-            %analytical value of Renyi entropy (ref.: Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.)
-    	        ms = [m1,m2];
-    	        ss = [s1,s2];
-                HR = compute_H2([1/2,1/2],ms,ss);%quadratic Renyi entropy
-            %quadratic Renyi entropy -> quadratic Tsallis entropy:
-                HT = 1- exp(-HR);
-            K = exp(-u_K*HT);
+            %analytical value of Jensen-Tsallis kernel-1:
+                par1.m = m1; par1.s = s1;
+                par2.m = m2; par2.s = s2;
+                K = analytical_value_KEJT1(distr,distr,u_K,par1,par2);
         otherwise
             error('Distribution=?');                 
     end
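
Sketch of the relocated Jensen-Tsallis kernel-1 value; the removed lines converted the quadratic Renyi entropy of the mixture to the quadratic Tsallis entropy before exponentiating:

```matlab
function [K] = analytical_value_KEJT1(distr1,distr2,u_K,par1,par2)
%Analytical Jensen-Tsallis kernel-1 (alpha = 2) of two normals; sketch only.
    if strcmp(distr1,'normal') && strcmp(distr2,'normal')
        HR = compute_H2([1/2,1/2],[par1.m,par2.m],[par1.s,par2.s]); %quadratic Renyi entropy of the mixture
        HT = 1 - exp(-HR); %quadratic Renyi -> quadratic Tsallis entropy
        K = exp(-u_K*HT);
    else
        error('Distribution=?');
    end
```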

code/estimators/quick_tests/quick_test_KEJT2.m

 %function [] = quick_test_KEJT2()
-%Quick test for exponentiated Jensen-Tsallis kernel-2 estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for exponentiated Jensen-Tsallis kernel-2 estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_KEJT2.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
     distr = 'normal'; %fixed
     d = 1; %dimension of the distribution
     num_of_samples_v = [100:500:12*1000]; %sample numbers used for estimation
-    u = 0.8; %>0, parameter of the Jensen-Tsallis kernel
+    u_K = 0.8; %>0, parameter of the Jensen-Tsallis kernel
     %estimator, meta:    
     	cost_name = 'EJT2_DJT';
         
 %initialization:
     L = length(num_of_samples_v);
-    co = K_initialization(cost_name,1,{'alpha',2,'u',u}); %{'alpha',2,'u',u}:set the 'alpha' and 'u' fields. Note: there exist explicit formula in case of alpha = 2 for Jensen-Renyi divergence => to the Jensen-Renyi kernel too.
+    co = K_initialization(cost_name,1,{'alpha',2,'u',u_K}); %{'alpha',2,'u',u_K}: set the 'alpha' and 'u' fields. Note: for alpha = 2 there exists an explicit formula for the Jensen-Tsallis divergence => for the Jensen-Tsallis kernel too.
     num_of_samples_max = num_of_samples_v(end);
     K_hat_v = zeros(L,1); %vector of the estimated Jensen-Tsallis kernel
     
                    m2 = randn(d,1);  s2 = rand;
                 Y1 = randn(d,num_of_samples_max) * s1 + repmat(m1,1,num_of_samples_max);
                 Y2 = randn(d,num_of_samples_max) * s2 + repmat(m2,1,num_of_samples_max);
-
-  	   %analytical value of Jensen-Renyi divergence (ref.: Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.)
-            w = [1/2;1/2];%parameter of Jensen-Tsallis divergence, fixed
-	        ms = [m1,m2];
-	        ss = [s1,s2];
-	        term1 = 1 - exp(-compute_H2(w,ms,ss));%quadratic Renyi entropy -> quadratic Tsallis entropy
-	        term2 = w(1) * (1-exp(-compute_H2(1,m1,s1))) + w(2) * (1-exp(-compute_H2(1,m2,s2)));%quadratic Renyi entropy -> quadratic Tsallis entropy
-	        D  = term1 - term2; %H2(\sum_i wi Yi) - \sum_i w_i H2(Yi), where H2 is the quadratic Tsallis entropy
-        %analytical value of Jensen-Tsallis kernel-2:
-            K = exp(-u*D);
+            %analytical value of Jensen-Tsallis kernel-2:
+                par1.m = m1; par1.s = s1;
+                par2.m = m2; par2.s = s2;
+                K = analytical_value_KEJT2(distr,distr,u_K,par1,par2);
         otherwise
             error('Distribution=?');                 
     end
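
Sketch of the relocated Jensen-Tsallis kernel-2 value (uniform weights, alpha = 2; the Renyi -> Tsallis conversion is applied term-wise as in the removed lines):

```matlab
function [K] = analytical_value_KEJT2(distr1,distr2,u_K,par1,par2)
%Analytical exponentiated Jensen-Tsallis kernel-2 of two normals; sketch only.
    if strcmp(distr1,'normal') && strcmp(distr2,'normal')
        w = [1/2,1/2];
        term1 = 1 - exp(-compute_H2(w,[par1.m,par2.m],[par1.s,par2.s]));
        term2 = w(1)*(1-exp(-compute_H2(1,par1.m,par1.s))) + w(2)*(1-exp(-compute_H2(1,par2.m,par2.s)));
        K = exp(-u_K*(term1-term2)); %K = exp(-u*D_JT)
    else
        error('Distribution=?');
    end
```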

code/estimators/quick_tests/quick_test_KPP.m

 %function [] = quick_test_KPP()
-%Quick test for probability product kernel estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. Note: specially, for rho=1/2 we get the Bhattacharyya kernel.
+%Quick test for probability product kernel estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. Note: in particular, for rho=1/2 we get the Bhattacharyya kernel. See also 'analytical_value_KPP.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
     num_of_samples_v = [1000:1000:50*1000]; %sample numbers used for estimation 
     %estimator, base:
         cost_name = 'PP_kNN_k';
-    rho = 0.9; %parameter of the distribution kernel, >0
+    rho_K = 0.9; %parameter of the distribution kernel, >0
     
 %initialization:    
     L = length(num_of_samples_v);    
-    co = K_initialization(cost_name,1,{'rho',rho,'a',rho-1,'b',rho}); %{'rho',rho,'a',rho-1,'b','rho'}: set the 'rho', 'a' and 'b' fields
+    co = K_initialization(cost_name,1,{'rho',rho_K,'a',rho_K-1,'b',rho_K}); %{'rho',rho_K,'a',rho_K-1,'b',rho_K}: set the 'rho', 'a' and 'b' fields
     num_of_samples_max = num_of_samples_v(end);  
     K_hat_v = zeros(L,1); %vector of estimated probability kernel values
 
             %generate samples:
                 Y1 = A1 * randn(d,num_of_samples_max) + repmat(e1,1,num_of_samples_max); %A1xN(0,I)+e1
                 Y2 = A2 * randn(d,num_of_samples_max) + repmat(e2,1,num_of_samples_max); %A2xN(0,I)+e2            
-            %analytical value of probability product kernel (Ref.: Tony Jebara, Risi Kondor, and Andrew Howard. Probability product kernels. Journal of Machine Learning Research, 5:819-844, 2004).
-                %inv12:
-                    inv1 = inv(cov1);
-                    inv2 = inv(cov2);
-                    inv12 = inv(inv1+inv2);
-                e12 = inv1 * e1 + inv2 * e2;
-                K = (2*pi)^((1-2*rho)*d/2) * rho^(-d/2) * abs(det(inv12))^(1/2) * abs(det(cov1))^(-rho/2) * abs(det(cov2))^(-rho/2) * exp(-rho/2*(e1.'*inv1*e1 + e2.'*inv2*e2 - e12.'*inv12*e12));
+            %analytical value of probability product kernel:
+                par1.e = e1; par1.cov = cov1;
+                par2.e = e2; par2.cov = cov2;
+                K = analytical_value_KPP(distr,distr,rho_K,par1,par2);
         otherwise
             error('Distribution=?');                 
     end

code/estimators/quick_tests/quick_test_Kexpected.m

 %function [] = quick_test_Kexpected()
-%Quick test for expected kernel estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered.
+%Quick test for expected kernel estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_Kexpected.m'.
 
 %Copyright (C) 2013 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
 %parameters:  
     distr = 'normal'; %fixed
     d = 1; %dimension of the distribution
+    sigma_K = 1; %width (std) parameter of the Gaussian kernel
     num_of_samples_v = [1000:1000:15*1000]; %sample numbers used for estimation
     %estimator (of the expected kernel), base:
         cost_name = 'expected';
 %initialization:    
     L = length(num_of_samples_v);    
     num_of_samples_max = num_of_samples_v(end);   
-    co = K_initialization(cost_name,1);
+    co = K_initialization(cost_name,1,{'sigma',sigma_K}); %{'sigma',sigma_K}: set the 'sigma' field
     K_hat_v = zeros(L,1); %vector of the estimated expected kernel values
     
 %distr, d -> samples (Y1,Y2), analytical formula for the expected kernel:
                 Y1 = A1 * randn(d,num_of_samples_max) + repmat(e1,1,num_of_samples_max); %A1xN(0,I)+e1
                 Y2 = A2 * randn(d,num_of_samples_max) + repmat(e2,1,num_of_samples_max); %A2xN(0,I)+e2
             %analytical value for the expected kernel:
-            %Ref.:  Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Scholkopf. Learning from distributions via support measure machines. In Advances in Neural Information Processing Systems (NIPS), pages 10-18, 2011.
-                gam = 1/co.sigma^2;
-                K = exp(-1/2*((e1-e2).'*inv(cov1+cov2+eye(d)/gam)*(e1-e2))) / sqrt(abs(det(gam*cov1+gam*cov2+eye(d))));
+                par1.e = e1; par1.cov = cov1;
+                par2.e = e2; par2.cov = cov2;
+                K = analytical_value_Kexpected(distr,distr,sigma_K,par1,par2);
         otherwise
             error('Distribution=?');
     end
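
Sketch of the relocated expected kernel value (normal case; ref.: Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Scholkopf. Learning from distributions via support measure machines. NIPS, pages 10-18, 2011); 'sigma_K' is the Gaussian kernel width, and the signature follows the call site:

```matlab
function [K] = analytical_value_Kexpected(distr1,distr2,sigma_K,par1,par2)
%Analytical expected (Gaussian) kernel of two normals; sketch only.
    if strcmp(distr1,'normal') && strcmp(distr2,'normal')
        d = length(par1.e);
        gam = 1/sigma_K^2;
        diffe = par1.e - par2.e;
        K = exp(-1/2 * (diffe.' * inv(par1.cov+par2.cov+eye(d)/gam) * diffe)) / sqrt(abs(det(gam*par1.cov + gam*par2.cov + eye(d))));
    else
        error('Distribution=?');
    end
```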