Zoltán Szabó / ITE

Commits

Zoltán Szabó committed f556188

Distribution regression (supervised entropy learning, aerosol optical depth prediction based on satellite images): added. MMD distance computation based on U-statistics, expected kernel: upgraded to cover 7 new kernel families.
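The upgraded estimator referenced here is the unbiased (U-statistic) MMD of Gretton et al.; as a sketch, the formula in 'DMMD_Ustat_estimation.m' translates to the following minimal NumPy version (RBF kernel only; function and variable names are illustrative, and samples are stored one per column as in ITE):

```python
import numpy as np

def mmd_ustat(Y1, Y2, sigma=1.0):
    """Unbiased (U-statistic) MMD estimate with an RBF kernel.
    Y1, Y2: (d, n1) and (d, n2) sample matrices (columns = samples)."""
    def gram(A, B):
        # pairwise squared Euclidean distances, then Gaussian kernel
        sq = (np.sum(A**2, 0)[:, None] + np.sum(B**2, 0)[None, :]
              - 2 * A.T @ B)
        return np.exp(-sq / (2 * sigma**2))

    n1, n2 = Y1.shape[1], Y2.shape[1]
    k11, k22, k12 = gram(Y1, Y1), gram(Y2, Y2), gram(Y1, Y2)
    # U-statistic: the diagonal (i = j) terms are excluded within each sample
    term1 = (k11.sum() - np.trace(k11)) / (n1 * (n1 - 1))
    term2 = (k22.sum() - np.trace(k22)) / (n2 * (n2 - 1))
    term3 = -2 * k12.sum() / (n1 * n2)
    return np.sqrt(abs(term1 + term2 + term3))  # abs(): guard sqrt(negative)
```

For two samples from the same distribution the estimate is close to zero; it grows once the distributions separate.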

  • Parent commits 27e44d4
  • Branches default
  • Tags release-0.56


Files changed (150)

File CHANGELOG.txt

+v0.56 (March 27, 2014):
+-Distribution regression (supervised entropy learning, aerosol optical depth prediction based on satellite images): added; see the new directory 'tests_distribution_regression' in the 'quick_tests' folder.
+-MMD distance computation based on U-statistics, expected kernel: upgraded to cover new kernels (exponential, Cauchy, student, Matern, polynomial, rational quadratic, inverse multiquadratic); see 'DMMD_Ustat_initialization.m', 'DMMD_Ustat_estimation.m', 'Kexpected_initialization.m', 'Kexpected_estimation.m'.
+-New references: added; see 'Kexpected_estimation.m', 'DMMD_online_estimation.m', 'DMMD_Ustat_estimation.m', 'DMMD_Ustat_iChol_estimation.m', 'DMMD_Vstat_estimation.m', 'DMMD_Vstat_iChol_estimation.m', 'IHSIC_estimation.m'.
+-Quick tests: grouped; see the new 'tests_analytical_vs_estimation', 'tests_other_consistency' and 'tests_image_registration' directories in the 'quick_tests' folder.
+
 v0.55 (March 7, 2014):
-
 -Shannon entropy and cross-entropy estimation based on maximum likelihood estimation + analytical formula in the chosen exponential family: added; see 'HShannon_expF_initialization.m', 'HShannon_expF_estimation.m', 'CCE_expF_initialization', 'CCE_expF_estimation.m'.
 -Quick tests: updated with the new estimators, see 'quick_test_HShannon.m', 'quick_test_Himreg.m' and 'quick_test_CCE.m'.
 -Citing information (JMLR page numbers): updated in the documentation.
 
 v0.54 (Feb 24, 2014):
-
 -Renyi entropy estimation based on maximum likelihood estimation + analytical formula in the exponential family: added; see 'HRenyi_expF_initialization.m', 'HRenyi_expF_estimation.m'.
 -Tsallis entropy estimation based on MLE + analytical formula in the exponential family: added; see 'HTsallis_expF_initialization.m', 'HTsallis_expF_estimation.m'.
 -Quick tests: updated according to the new estimators; see 'quick_test_HRenyi.m', 'quick_test_HTsallis.m', 'quick_test_Himreg.m'.
 
 v0.53 (Feb 2, 2014):
-
 -f-divergence estimation based on second-order Taylor expansion + Pearson chi square divergence: added; see 'Df_DChiSquare_initialization.m', 'Df_DChiSquare_estimation.m'.
 -Shannon mutual information estimation based on KL divergence: added; see 'IShannon_DKL_initialization.m', 'IShannon_DKL_estimation.m'. 
 -Quick tests updated with the new estimators; see 'quick_test_IShannon.m', 'quick_test_Iimreg.m', 'quick_test_Iindependence.m', 'quick_test_Dequality.m'.
 -ARfit download: updated to the new 'http://clidyn.ethz.ch/arfit/arfit.zip' url, see 'ITE_install.m'.
 
 v0.52 (Jan 9, 2014):
-
 -Sharma-Mittal divergence estimation: 
 i) added using (i) maximum likelihood + analytical formula in the exponential family, (ii) k-nearest neighbors; see 'DSharmaM_expF_initialization.m', 'DSharmaM_expF_estimation.m', 'DSharmaM_kNN_k_initialization.m', 'DSharmaM_kNN_k_estimation.m'. 
 ii) quick test: added; see 'quick_test_DSharmaM.m'.

File README.md

 ITE offers 
 
 - solution methods for (i) Independent Subspace Analysis (ISA), and (ii) its extensions to different linear-, controlled-, post nonlinear-, complex valued-, partially observed models, as well as to systems with nonparametric source dynamics, 
-- several consistency tests (analytical vs estimated value), and
-- a further demonstration in image registration.
+- several consistency tests (analytical vs estimated value),
+- illustrations for information theoretical image registration, and
+- distribution regression including (i) supervised entropy learning and (ii) aerosol optical depth prediction based on satellite images.
 
 * * *
 
 
 **Download** the latest release: 
 
-- code: [zip](https://bitbucket.org/szzoli/ite/downloads/ITE-0.55_code.zip), [tar.bz2](https://bitbucket.org/szzoli/ite/downloads/ITE-0.55_code.tar.bz2), 
-- [documentation (pdf)](https://bitbucket.org/szzoli/ite/downloads/ITE-0.55_documentation.pdf).
+- code: [zip](https://bitbucket.org/szzoli/ite/downloads/ITE-0.56_code.zip), [tar.bz2](https://bitbucket.org/szzoli/ite/downloads/ITE-0.56_code.tar.bz2), 
+- documentation: [pdf](https://bitbucket.org/szzoli/ite/downloads/ITE-0.56_documentation.pdf).

File code/ITE_install.m

         compile_KDP = 1; %1=compile, 0=do not compile  
     %download:
 		download_ARfit = 1;%1=download+extract,0=do not download
+        download_MISR1 = 1;%1=download+extract,0=do not download, aerosol optical depth (AOD) prediction: MISR1 dataset
 	delete_Ncut_in_Octave = 1; %1=yes, 0=no
 
 disp('Installation: started.');
             disp('Directory ARfit: already exists.');
         end
     end
-
+    
+%download and extract the AOD: MISR1 dataset to '/shared/downloaded/MISR1' (and create the directory), if needed:  
+    if download_MISR1 %create 'shared/downloaded/MISR1', if needed; download MIR_datasets.zip, extract, delete .zip:
+        if ~exist('shared/downloaded/MISR1','dir')
+            %mkdir:
+                if ~exist('shared/downloaded','dir')
+                    if ~mkdir('shared/downloaded')
+                        error('Making directory for AOD:MISR1 failed.');
+                    end
+                end
+                if ~mkdir('shared/downloaded/MISR1')
+                    error('Making directory for AOD:MISR1 failed.');
+                end
+            %download MIR_datasets.zip, extract, delete .zip:
+                disp('AOD:MISR1 dataset: downloading, extraction: started.');
+                    [FN,status] = urlwrite('http://www.dabi.temple.edu/~vucetic/data/MIR_datasets.zip','MIR_datasets.zip');
+                    if status %downloading: successful
+                        unzip(FN,strcat(ITE_code_dir, '/shared/downloaded/MISR1'));
+                        delete(FN);%delete the .zip file    
+                        %delete the unused parts; keep only 'MISR1.mat' and 'README.txt':
+                            delete(strcat(ITE_code_dir, '/shared/downloaded/MISR1/MIR datasets/Gaussian.m'));
+                            delete(strcat(ITE_code_dir, '/shared/downloaded/MISR1/MIR datasets/MISR2.mat'));
+                            delete(strcat(ITE_code_dir, '/shared/downloaded/MISR1/MIR datasets/MODIS.mat'));
+                            delete(strcat(ITE_code_dir, '/shared/downloaded/MISR1/MIR datasets/Outlier1.m'));
+                            delete(strcat(ITE_code_dir, '/shared/downloaded/MISR1/MIR datasets/Outlier2.m'));
+                        %move 'MISR1.mat' and 'README.txt' to '/MISR1', delete 'MIR datasets/':
+                            movefile(strcat(ITE_code_dir, '/shared/downloaded/MISR1/MIR datasets/MISR1.mat'),strcat(ITE_code_dir, '/shared/downloaded/MISR1/'));
+                            movefile(strcat(ITE_code_dir, '/shared/downloaded/MISR1/MIR datasets/README.txt'),strcat(ITE_code_dir, '/shared/downloaded/MISR1/'));
+                            rmdir(strcat(ITE_code_dir, '/shared/downloaded/MISR1/MIR datasets/'));
+                    else
+                        error('The AOD:MISR1 dataset could not be downloaded.');
+                    end    
+                disp('AOD:MISR1 dataset: downloading, extraction: ready.');   
+        else
+            disp('Directory MISR1: already exists.');
+        end
+    end
+    
 %add 'ITE_code_dir' with its subfolders to the Matlab PATH:
     addpath(genpath(ITE_code_dir));
     if environment_Matlab
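The MISR1 branch above follows an extract-then-prune pattern: unzip the archive, keep only 'MISR1.mat' and 'README.txt', and remove the emptied 'MIR datasets/' directory. A hedged, self-contained Python sketch of the same logic (file names taken from the diff; the function name is illustrative, and a local archive stands in for the real download):

```python
import os
import shutil
import zipfile

def extract_keep(zip_path, dest, keep=('MISR1.mat', 'README.txt')):
    """Extract zip_path into dest, then keep only the files in `keep`,
    flattened into dest; delete everything else (mirrors the MISR1
    branch of ITE_install.m)."""
    os.makedirs(dest, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
    for root, _dirs, files in os.walk(dest):
        for f in files:
            src = os.path.join(root, f)
            if f in keep:
                target = os.path.join(dest, f)
                if src != target:
                    shutil.move(src, target)  # flatten into dest
            else:
                os.remove(src)  # prune unused dataset files
    # remove now-empty subdirectories (e.g. 'MIR datasets/'):
    for root, dirs, _files in os.walk(dest, topdown=False):
        for d in dirs:
            p = os.path.join(root, d)
            if not os.listdir(p):
                os.rmdir(p)
```

The real installer additionally handles the download itself ('urlwrite') and reports progress; only the pruning logic is sketched here.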

File code/estimators/base_estimators/DMMD_Ustat_estimation.m

 %
 %REFERENCE: 
 %   Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research 13 (2012) 723-773.
+%   Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004. (mean embedding)
 
 %Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
                 kY1Y1 = exp(-kY1Y1/(2*co.sigma^2));
                 kY2Y2 = exp(-kY2Y2/(2*co.sigma^2));
                 kY1Y2 = exp(-kY1Y2/(2*co.sigma^2));
-        case 'linear'
-            kY1Y1 = Y1.' * Y1;
-            kY2Y2 = Y2.' * Y2;
-            kY1Y2 = Y1.' * Y2;
+        case 'exponential'
+                kY1Y2 = sqdistance(Y1,Y2);
+                kY1Y1 = sqdistance(Y1,Y1);
+                kY2Y2 = sqdistance(Y2,Y2);
+                kY1Y2 = exp(-sqrt(kY1Y2)/(2*co.sigma^2)); %compared to RBF: ||.||_2^2 -> ||.||_2
+                kY1Y1 = exp(-sqrt(kY1Y1)/(2*co.sigma^2)); %compared to RBF: ||.||_2^2 -> ||.||_2
+                kY2Y2 = exp(-sqrt(kY2Y2)/(2*co.sigma^2)); %compared to RBF: ||.||_2^2 -> ||.||_2
+        case 'Cauchy'
+                kY1Y2 = sqdistance(Y1,Y2);
+                kY1Y1 = sqdistance(Y1,Y1);
+                kY2Y2 = sqdistance(Y2,Y2);
+                kY1Y2 = 1./(1+kY1Y2/co.sigma^2); 
+                kY1Y1 = 1./(1+kY1Y1/co.sigma^2); 
+                kY2Y2 = 1./(1+kY2Y2/co.sigma^2); 
+        case 'student'
+                kY1Y2 = sqdistance(Y1,Y2);
+                kY1Y1 = sqdistance(Y1,Y1);
+                kY2Y2 = sqdistance(Y2,Y2);
+                kY1Y2 = 1./(1+kY1Y2.^(co.d/2)); 
+                kY1Y1 = 1./(1+kY1Y1.^(co.d/2)); 
+                kY2Y2 = 1./(1+kY2Y2.^(co.d/2)); 
+        case 'Matern3p2'
+                kY1Y2 = sqrt(sqdistance(Y1,Y2));
+                kY1Y1 = sqrt(sqdistance(Y1,Y1));
+                kY2Y2 = sqrt(sqdistance(Y2,Y2));
+                kY1Y2 = ( 1 + sqrt(3) * kY1Y2 / co.l ) .* exp( -sqrt(3) * kY1Y2 / co.l );
+                kY1Y1 = ( 1 + sqrt(3) * kY1Y1 / co.l ) .* exp( -sqrt(3) * kY1Y1 / co.l );
+                kY2Y2 = ( 1 + sqrt(3) * kY2Y2 / co.l ) .* exp( -sqrt(3) * kY2Y2 / co.l );
+        case 'Matern5p2'
+                kY1Y2 = sqdistance(Y1,Y2);
+                kY1Y1 = sqdistance(Y1,Y1);
+                kY2Y2 = sqdistance(Y2,Y2);
+                kY1Y2 = ( 1 + sqrt(5) * sqrt(kY1Y2) / co.l + 5 * kY1Y2 / (3*co.l^2) ) .* exp( -sqrt(5) * sqrt(kY1Y2) / co.l );
+                kY1Y1 = ( 1 + sqrt(5) * sqrt(kY1Y1) / co.l + 5 * kY1Y1 / (3*co.l^2) ) .* exp( -sqrt(5) * sqrt(kY1Y1) / co.l );
+                kY2Y2 = ( 1 + sqrt(5) * sqrt(kY2Y2) / co.l + 5 * kY2Y2 / (3*co.l^2) ) .* exp( -sqrt(5) * sqrt(kY2Y2) / co.l );
+        case 'poly2'
+                kY1Y2 = (Y1.' * Y2 + co.c).^2;
+                kY1Y1 = (Y1.' * Y1 + co.c).^2;
+                kY2Y2 = (Y2.' * Y2 + co.c).^2;
+        case 'poly3'
+                kY1Y2 = (Y1.' * Y2 + co.c).^3;
+                kY1Y1 = (Y1.' * Y1 + co.c).^3;
+                kY2Y2 = (Y2.' * Y2 + co.c).^3;
+        case 'ratquadr' 
+                kY1Y2 = sqdistance(Y1,Y2);
+                kY1Y1 = sqdistance(Y1,Y1);
+                kY2Y2 = sqdistance(Y2,Y2);
+                kY1Y2 = 1 - kY1Y2 ./ (kY1Y2 + co.c);
+                kY1Y1 = 1 - kY1Y1 ./ (kY1Y1 + co.c);
+                kY2Y2 = 1 - kY2Y2 ./ (kY2Y2 + co.c);
+        case 'invmquadr'
+                kY1Y2 = sqdistance(Y1,Y2);
+                kY1Y1 = sqdistance(Y1,Y1);
+                kY2Y2 = sqdistance(Y2,Y2);
+                kY1Y2 = 1 ./ sqrt(kY1Y2 + co.c^2);
+                kY1Y1 = 1 ./ sqrt(kY1Y1 + co.c^2);
+                kY2Y2 = 1 ./ sqrt(kY2Y2 + co.c^2);
         otherwise
             error('Kernel=?');
     end
 term2 = sum(sum(kY2Y2)) / (num_of_samplesY2*(num_of_samplesY2-1));
 term3 = -2 * sum(sum(kY1Y2)) / (num_of_samplesY1*num_of_samplesY2);
 
-D = sqrt(abs(term1+term2+term3)); %abs(): to avoid 'sqrt(negative)' values
+D = sqrt(abs(term1+term2+term3)); %abs(): to avoid 'sqrt(negative)' values
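Each new `case` above maps pairwise squared distances to kernel values. As a sanity check, a NumPy transcription of a few of the new branches (formulas copied from the diff; the function name and default parameters are illustrative):

```python
import numpy as np

def kernel_on_sqdist(sq, name, sigma=1.0, c=1.0, l=1.0):
    """Map a matrix of squared Euclidean distances to kernel values,
    following the new cases in DMMD_Ustat_estimation.m."""
    if name == 'exponential':
        # compared to RBF: ||.||_2^2 -> ||.||_2
        return np.exp(-np.sqrt(sq) / (2 * sigma**2))
    if name == 'Cauchy':
        return 1.0 / (1.0 + sq / sigma**2)
    if name == 'Matern3p2':
        r = np.sqrt(sq)
        return (1 + np.sqrt(3) * r / l) * np.exp(-np.sqrt(3) * r / l)
    if name == 'ratquadr':
        return 1.0 - sq / (sq + c)
    if name == 'invmquadr':
        return 1.0 / np.sqrt(sq + c**2)
    raise ValueError('Kernel=?')
```

All of these are radial: at zero distance the exponential, Cauchy, Matern and rational quadratic kernels equal 1, and the values decay as the distance grows (the inverse multiquadratic peaks at 1/c instead).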

File code/estimators/base_estimators/DMMD_Ustat_iChol_estimation.m

 %
 %REFERENCE: 
 %   Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research 13 (2012) 723-773.
+%   Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004. (mean embedding)
 
 %Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %

File code/estimators/base_estimators/DMMD_Ustat_initialization.m

     co.mult = mult;
     
 %other fields:
-    %Kernel choice; possibilities: 'linear', 'RBF' (Gaussian):
-        %I (RBF):
-            co.kernel = 'RBF';
-            co.sigma = 0.01; %std in the RBF kernel
-        %II (linear):
-            %co.kernel = 'linear';
-            
+    %Kernel choice; possibilities: 'RBF', 'exponential', 'Cauchy', 'student', 'Matern3p2', 'Matern5p2', 'poly2', 'poly3', 'ratquadr', 'invmquadr':
+    %I (RBF):
+        co.kernel = 'RBF';
+        co.sigma = 0.01; %std in the RBF kernel
+    %II (exponential):
+        %co.kernel = 'exponential';
+        %co.sigma = 1; %std in the exponential kernel
+    %III (Cauchy):
+        %co.kernel = 'Cauchy';
+        %co.sigma = 1; %scale in the Cauchy kernel
+    %IV (student):
+        %co.kernel = 'student';
+        %co.d = 1; %exponent in the student kernel
+    %V (Matern, 3/2):
+        %co.kernel = 'Matern3p2'; %3p2 = 3/2
+        %co.l = 1; %l>0
+    %VI (Matern, 5/2):
+        %co.kernel = 'Matern5p2'; %5p2 = 5/2
+        %co.l = 1; %l>0
+    %VII (poly2):
+        %co.kernel = 'poly2';
+        %co.c = 1;
+    %VIII (poly3):
+        %co.kernel = 'poly3';
+        %co.c = 1;
+    %IX (rational quadratic):
+        %co.kernel = 'ratquadr';
+        %co.c = 1;
+    %X (inverse multiquadratic):
+        %co.kernel = 'invmquadr';
+        %co.c = 1;
+
 %post initialization (put it _before_ initialization of the members in case of a meta estimator):    
     if nargin==2 %there are given (name,value) cost object fields
         co = post_initialization(co,post_init);

File code/estimators/base_estimators/DMMD_Vstat_estimation.m

 %
 %REFERENCE: 
 %   Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research 13 (2012) 723-773.
+%   Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004. (mean embedding)
 
 %Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %

File code/estimators/base_estimators/DMMD_Vstat_iChol_estimation.m

 %
 %REFERENCE: 
 %   Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research 13 (2012) 723-773.
+%   Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004. (mean embedding)
 
 %Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %

File code/estimators/base_estimators/DMMD_online_estimation.m

 %
 %REFERENCE: 
 %   Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research 13 (2012) 723-773. See Lemma 14.
+%   Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004. (mean embedding)
 
 %Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
 function [s] = K_linear(U,V)
 %Computes \sum_i kernel(U(:,i),V(:,i)) in case of a linear kernel
 
-s = sum(dot(U,V));
+s = sum(dot(U,V));

File code/estimators/base_estimators/IHSIC_estimation.m

 %
 %REFERENCE: 
 %   Arthur Gretton, Olivier Bousquet, Alexander Smola and Bernhard Scholkopf. Measuring Statistical Dependence with Hilbert-Schmidt Norms. International Conference on Algorithmic Learning Theory (ALT), 63-78, 2005.
+%   Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004. (mean embedding)
 
 %Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %

File code/estimators/base_estimators/Kexpected_estimation.m

 %  co: estimator object of a kernel on distributions.
 %
 %REFERENCE: 
+%   Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723–773, 2012.
 %   Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Scholkopf. Learning from distributions via support measure machines. In Advances in Neural Information Processing Systems (NIPS), pages 10-18, 2011.
+%   Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004. (mean embedding)
+%   Thomas Gartner, Peter A. Flach, Adam Kowalczyk, and Alexander Smola. Multi-instance kernels. In International Conference on Machine Learning (ICML), pages 179–186, 2002. (multi-instance/set/ensemble kernel)
 
 %Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
 %
             kY1Y2 = sqdistance(Y1,Y2);
         %distance(i,j) ->  kernel(i,j):
             kY1Y2 = exp(-kY1Y2/(2*co.sigma^2));
+    case 'exponential'
+        %pairwise distances:
+            kY1Y2 = sqdistance(Y1,Y2);
+        %distance(i,j) ->  kernel(i,j):
+            kY1Y2 = exp(-sqrt(kY1Y2)/(2*co.sigma^2)); %compared to RBF: ||.||_2^2 -> ||.||_2
+    case 'Cauchy'
+        %pairwise distances:
+            kY1Y2 = sqdistance(Y1,Y2);
+        %distance(i,j) ->  kernel(i,j):
+            kY1Y2 = 1./(1+kY1Y2/co.sigma^2); 
+    case 'student'
+        %pairwise distances:
+            kY1Y2 = sqdistance(Y1,Y2);
+        %distance(i,j) ->  kernel(i,j):
+            kY1Y2 = 1./(1+kY1Y2.^(co.d/2)); 
+    case 'Matern3p2'
+        %pairwise distances:
+            kY1Y2 = sqrt(sqdistance(Y1,Y2));
+        %distance(i,j) ->  kernel(i,j):
+            kY1Y2 = ( 1 + sqrt(3) * kY1Y2 / co.l ) .* exp( -sqrt(3) * kY1Y2 / co.l );
+    case 'Matern5p2'
+        %pairwise distances:
+            kY1Y2 = sqdistance(Y1,Y2);
+        %distance(i,j) ->  kernel(i,j):
+            kY1Y2 = ( 1 + sqrt(5) * sqrt(kY1Y2) / co.l + 5 * kY1Y2 / (3*co.l^2) ) .* exp( -sqrt(5) * sqrt(kY1Y2) / co.l );
+    case 'poly2'
+            kY1Y2 = (Y1.' * Y2 + co.c).^2;
+    case 'poly3'
+            kY1Y2 = (Y1.' * Y2 + co.c).^3;
+    case 'ratquadr' 
+            %pairwise distances:
+                kY1Y2 = sqdistance(Y1,Y2);
+            kY1Y2 = 1 - kY1Y2 ./ (kY1Y2 + co.c);
+    case 'invmquadr'
+            %pairwise distances:
+                kY1Y2 = sqdistance(Y1,Y2);
+            kY1Y2 = 1 ./ sqrt(kY1Y2 + co.c^2);
     otherwise
         error('Kernel=?');
 end
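'Kexpected_estimation.m' evaluates the expected (mean-embedding / set) kernel between two empirical distributions: the mean of the base-kernel Gram matrix over all cross pairs. A minimal NumPy sketch for the RBF case (function name illustrative, column-per-sample convention as in ITE):

```python
import numpy as np

def expected_kernel(Y1, Y2, sigma=1.0):
    """Expected/set kernel between two empirical distributions:
    mean of the RBF Gram matrix over all cross pairs.
    Y1, Y2: (d, n1) and (d, n2) sample matrices (columns = samples)."""
    # pairwise squared Euclidean distances between the two sample sets:
    sq = (np.sum(Y1**2, 0)[:, None] + np.sum(Y2**2, 0)[None, :]
          - 2 * Y1.T @ Y2)
    return np.exp(-sq / (2 * sigma**2)).mean()
```

For a bounded base kernel like the RBF this value lies in (0, 1], reaching 1 only when both samples collapse onto the same point.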

File code/estimators/base_estimators/Kexpected_initialization.m

     co.mult = mult;
     
 %other fields:
-   co.kernel = 'RBF'; %Gaussian
-   co.sigma = 1; %std in the RBF kernel
-   
+   %I (RBF):
+       co.kernel = 'RBF'; %Gaussian
+       co.sigma = 1; %std in the RBF kernel
+   %II (exponential):
+       %co.kernel = 'exponential';
+       %co.sigma = 1; %std in the exponential kernel
+   %III (Cauchy):
+       %co.kernel = 'Cauchy';
+       %co.sigma = 1; %scale in the Cauchy kernel
+   %IV (student):
+       %co.kernel = 'student';
+       %co.d = 1; %exponent in the student kernel
+   %V (Matern, 3/2):
+       %co.kernel = 'Matern3p2'; %3p2 = 3/2
+       %co.l = 1; %l>0
+   %VI (Matern, 5/2):
+       %co.kernel = 'Matern5p2'; %5p2 = 5/2
+       %co.l = 1; %l>0
+   %VII (poly2):
+       %co.kernel = 'poly2';
+       %co.c = 1;
+   %VIII (poly3):
+       %co.kernel = 'poly3';
+       %co.c = 1;
+   %IX (rational quadratic):
+       %co.kernel = 'ratquadr';
+       %co.c = 1;
+   %X (inverse multiquadratic):
+       %co.kernel = 'invmquadr';
+       %co.c = 1;
+
 %post initialization (put it _before_ initialization of the members in case of a meta estimator):    
     if nargin==2 %there are given (name,value) cost object fields
         co = post_initialization(co,post_init);

File code/estimators/quick_tests/analytical_values/analytical_value_CCE.m

-function [CE] = analytical_value_CCE(distr1,distr2,par1,par2)
-%function [CE] = analytical_value_CCE(distr1,distr2,par1,par2)
-%Analytical value (CCE) of the cross-entropy for the given distributions. See also 'quick_test_CCE.m'.
-%
-%INPUT:
-%   distr1 : name of distribution-1.
-%   distr2 : name of distribution-2.
-%   par1   : parameters of distribution-1 (structure).
-%   par2   : parameters of distribution-2 (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean = mean1, par1.cov = covariance matrix1; par2.mean = mean2, par2.cov = covariance matrix2.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal')
-    %covariance matrices, expectations:
-        C1 = par1.cov;
-        C2 = par2.cov;
-        m1 = par1.mean;
-        m2 = par2.mean;
-    d = size(C1,1);
-            
-    invC2 = inv(C2);
-    diffm = m1 - m2;
-    CE = 1/2 * ( d*log(2*pi) + log(det(C2)) + trace(invC2*C1) + diffm.'*invC2*diffm );
-else
-    error('Distribution=?');                 
-end        

File code/estimators/quick_tests/analytical_values/analytical_value_DBregman.m

-function [D] = analytical_value_DBregman(distr1,distr2,alpha_D,par1,par2)
-%function [D] = analytical_value_DBregman(distr1,distr2,alpha_D,par1,par2)
-%Analytical value (D) of the Bregman divergence for the given distributions. See also 'quick_test_DBregman.m'.
-%
-%INPUT:
-%   distr1 : name of distribution-1.
-%   distr2 : name of distribution-2.
-%   alpha_D: parameter of the Bregman divergence.
-%   par1   : parameters of distribution-1 (structure).
-%   par2   : parameters of distribution-2 (structure).
-%
-%If (distr1,distr2) = ('uniform','uniform'): par1.a = a in U[0,a], par2.a = b in U[0,b].
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'uniform') && strcmp(distr2,'uniform')
-    a = par1.a;
-    b = par2.a;
-    D = -1/(alpha_D-1)*prod(b)^(1-alpha_D) + 1/(alpha_D-1) * prod(a)^(1-alpha_D);
-else
-    error('Distribution=?');                 
-end

File code/estimators/quick_tests/analytical_values/analytical_value_DChiSquare.m

-function [D] = analytical_value_DChiSquare(distr1,distr2,par1,par2)
-%function [D] = analytical_value_DChiSquare(distr1,distr2,par1,par2)
-%Analytical value (D) of the chi^2 divergence for the given distributions. See also 'quick_test_DChiSquare.m'.
-%
-%INPUT:
-%   distr1 : name of distribution1.
-%   distr2 : name of distribution2.
-%   par1   : parameters of distribution1 (structure).
-%   par2   : parameters of distribution2 (structure).
-%
-%If (distr1,distr2) = ('uniform', 'uniform'): par1.a = a in U[0,a], par2.a = b in U[0,b],
-%                     ('normalI', 'normalI')  : par1.mean = mean1, par2.mean = mean2.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'uniform') && strcmp(distr2,'uniform')
-    a = par1.a;
-    b = par2.a;
-    D = prod(b)/prod(a) - 1;
-elseif strcmp(distr1,'normalI') && strcmp(distr2,'normalI') %Frank Nielsen. Closed-form information-theoretic divergences for statistical mixtures. In International Conference on Pattern Recognition (ICPR), pages 1723-1726, 2012.
-    m1 = par1.mean;
-    m2 = par2.mean;
-    diffm = m2 - m1;
-    D = exp(diffm.' * diffm) - 1;  
-
-else
-    error('Distribution=?'); 
-end
-
-

File code/estimators/quick_tests/analytical_values/analytical_value_DJensen_Renyi.m

-function D = analytical_value_DJensen_Renyi(distr1,distr2,w_D,par1,par2)
-%function D = analytical_value_DJensen_Renyi(distr1,distr2,w_D,par1,par2)
-%Analytical value (D) of the Jensen-Renyi divergence for the given distributions. See also 'quick_test_DJensen_Renyi.m'.
-%
-%INPUT:
-%   distr1 : name of distribution-1.
-%   distr2 : name of distribution-2.
-%   w_D    : weight used in the Jensen-Renyi divergence.
-%   par1   : parameters of distribution-1 (structure).
-%   par2   : parameters of distribution-2 (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean = mean1, par1.std = covariance matrix1 (s1^2xI); par2.mean = mean2, par2.std = covariance matrix2 (s2^2xI).
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.
-    m1 = par1.mean; s1 = par1.std;
-    m2 = par2.mean; s2 = par2.std;
-            
-    ms = [m1,m2];
-	ss = [s1,s2];
-	term1 = compute_H2(w_D,ms,ss);
-	term2 = w_D(1) * compute_H2(1,m1,s1) + w_D(2) * compute_H2(1,m2,s2);
-	D  = term1 - term2; %H2(\sum_i wi Yi) - \sum_i w_i H2(Yi), where H2 is the quadratic Renyi entropy
-else
-    error('Distribution=?'); 
-end

File code/estimators/quick_tests/analytical_values/analytical_value_DKL.m

-function [D] = analytical_value_DKL(distr1,distr2,par1,par2)
-%function [D] = analytical_value_DKL(distr1,distr2,par1,par2)
-%Analytical value (D) of the Kullback-Leibler divergence for the given distributions. See also 'quick_test_DKL.m'.
-%
-%INPUT:
-%   distr1 : name of distribution-1.
-%   distr2 : name of distribution-2.
-%   par1   : parameters of distribution-1 (structure).
-%   par2   : parameters of distribution-2 (structure).
-%
-%If (distr1,distr2) = ('normal', 'normal'): par1.mean = mean1, par1.cov = covariance matrix1; par2.mean = mean2, par2.cov = covariance matrix2.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal')
-    %covariance matrices, expectations:
-        m1 = par1.mean; C1 = par1.cov;
-        m2 = par2.mean; C2 = par2.cov;
-    d = length(m1);
-                
-    diffm = m1 - m2;
-    invSigma2 = inv(C2);
-    D = 1/2 * ( log(det(C2)/det(C1)) + trace(invSigma2*C1)  + diffm.' * invSigma2 * diffm - d);
-else
-    error('Distribution=?'); 
-end
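The file deleted above held the closed-form KL divergence between Gaussians used by the consistency tests. For reference, a direct Python transcription of the expression it computed (function name illustrative):

```python
import numpy as np

def kl_normal(m1, C1, m2, C2):
    """KL(N(m1,C1) || N(m2,C2)): the closed form from analytical_value_DKL.m."""
    d = len(m1)
    diff = m1 - m2
    inv2 = np.linalg.inv(C2)
    return 0.5 * (np.log(np.linalg.det(C2) / np.linalg.det(C1))
                  + np.trace(inv2 @ C1)
                  + diff @ inv2 @ diff - d)
```

Two quick consequences of the formula: the divergence vanishes for identical Gaussians, and for unit-variance 1D Gaussians a unit mean shift gives exactly 1/2.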

File code/estimators/quick_tests/analytical_values/analytical_value_DL2.m

-function [D] = analytical_value_DL2(distr1,distr2,par1,par2)
-%function [D] = analytical_value_DL2(distr1,distr2,par1,par2)
-%Analytical value (D) of the L2-divergence for the given distributions. See also 'quick_test_DL2.m'.
-%
-%INPUT:
-%   distr1 : name of distribution-1.
-%   distr2 : name of distribution-2.
-%   par1   : parameters of distribution-1 (structure).
-%   par2   : parameters of distribution-2 (structure).
-%
-%If (distr1,distr2) = ('uniform','uniform'): par1.a = a in U[0,a], par2.a = b in U[0,b].
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'uniform') && strcmp(distr2,'uniform')
-    a = par1.a;
-    b = par2.a;
-    
-    D = sqrt(1/prod(b) - 1/prod(a));   
-else
-    error('Distribution=?');                 
-end
-
-
-
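The L2 formula above can likewise be checked in 1-D with pure Python. Note that `sqrt(1/prod(b) - 1/prod(a))` is only real-valued when the support of the second uniform is contained in the first (component-wise b <= a); the sketch below (names mine, not part of ITE) makes that assumption explicit:

```python
import math

def l2_uniform(a, b):
    # 1-D case of D = sqrt(1/prod(b) - 1/prod(a)); the closed form implicitly
    # assumes U[0,b] is contained in U[0,a] (b <= a), else the radicand is negative
    return math.sqrt(1.0 / b - 1.0 / a)

print(l2_uniform(2.0, 1.0))  # → sqrt(1 - 1/2) ≈ 0.7071
```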

File code/estimators/quick_tests/analytical_values/analytical_value_DRenyi.m

-function [D] = analytical_value_DRenyi(distr1,distr2,alpha_D,par1,par2)
-%function [D] = analytical_value_DRenyi(distr1,distr2,alpha_D,par1,par2)
-%Analytical value (D) of the Renyi divergence for the given distributions. See also 'quick_test_DRenyi.m'.
-%
-%INPUT:
-%   distr1 : name of distribution-1.
-%   distr2 : name of distribution-2.
-%  alpha_D : parameter of Renyi divergence.
-%   par1   : parameters of distribution-1 (structure).
-%   par2   : parameters of distribution-2 (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean = mean1, par1.cov = covariance matrix1; par2.mean = mean2, par2.cov = covariance matrix2.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal')
-    %covariance matrices, expectations:
-        m1 = par1.mean; C1 = par1.cov;
-        m2 = par2.mean; C2 = par2.cov;
-        
-    mixC = alpha_D * C2 + (1-alpha_D) * C1;
-    diffm = m1 - m2;
-    %Renyi divergence (Manuel Gil. On Renyi Divergence Measures for Continuous Alphabet Sources. Phd Thesis, Queen’s University, 2011):
-        D = alpha_D * ( 1/2 *diffm.' * inv(mixC) * diffm  - 1 / (2*alpha_D*(alpha_D-1)) * log( abs(det(mixC)) / (det(C1)^(1-alpha_D) * det(C2)^(alpha_D)) )); 
-    %Kullback-Leibler divergence (verification, in the alpha_D->1 limit: KL div. = Renyi div.):
-        %d = size(C1,1);
-        %D = 1/2 * ( log(det(C2)/det(C1)) + trace(inv(C2)*C1)  + diffm.' * inv(C2) * diffm - d);
-else
-    error('Distribution=?'); 
-end
-
-
-
-
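The commented-out KL block in the hunk above hints at the standard consistency check: Renyi divergence tends to KL divergence as alpha -> 1. A pure-Python 1-D sketch of both closed forms (function names mine, not part of ITE):

```python
import math

def renyi_div_normal(alpha, m1, s1, m2, s2):
    # 1-D specialization of the Gil (2011) formula above; mixC = alpha*C2 + (1-alpha)*C1
    mix = alpha * s2**2 + (1 - alpha) * s1**2
    dm = m1 - m2
    log_term = math.log(mix / ((s1**2) ** (1 - alpha) * (s2**2) ** alpha))
    return alpha * (0.5 * dm**2 / mix - log_term / (2 * alpha * (alpha - 1)))

def kl_normal(m1, s1, m2, s2):
    # the commented KL verification from the MATLAB code, in 1-D
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# alpha -> 1 recovers the Kullback-Leibler divergence
print(abs(renyi_div_normal(1.00001, 1.0, 1.0, 0.0, 2.0)
          - kl_normal(1.0, 1.0, 0.0, 2.0)) < 1e-3)  # → True
```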

File code/estimators/quick_tests/analytical_values/analytical_value_DSharmaM.m

-function [D] = analytical_value_DSharmaM(distr1,distr2,alpha_D,beta_D,par1,par2)
-%function [D] = analytical_value_DSharmaM(distr1,distr2,alpha_D,beta_D,par1,par2)
-%Analytical value (D) of the Sharma-Mittal divergence for the given distributions. See also 'quick_test_DSharmaM.m'.
-%
-%INPUT:
-%   distr1  : name of distribution-1.
-%   distr2  : name of distribution-2.
-%   alpha_D : parameter of the Sharma-Mittal divergence; alpha > 0, not 1.
-%   beta_D  : parameter of the Sharma-Mittal divergence; beta not 1.
-%   par1    : parameters of distribution-1 (structure).
-%   par2    : parameters of distribution-2 (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean = mean1, par1.cov = covariance matrix1; par2.mean = mean2, par2.cov = covariance matrix2.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Frank Nielsen and Richard Nock. A closed-form expression for the Sharma-Mittal entropy of exponential families. Journal of Physics A: Mathematical and Theoretical, 45:032003, 2012.
-    %covariance matrices, expectations:
-        m1 = par1.mean; C1 = par1.cov;
-        m2 = par2.mean; C2 = par2.cov;
-    C = inv(alpha_D * inv(C1) + (1-alpha_D) * inv(C2));
-    diffm = m1 - m2;
-    
-    %Jensen difference divergence, C:
-        J = ( log( abs(det(C1))^alpha_D * abs(det(C2))^(1-alpha_D) / abs(det(C)) ) + alpha_D * (1-alpha_D) * diffm.' * inv(C) * diffm ) / 2;
-        C = exp(-J);
-        
-    D = ( C^((1-beta_D)/(1-alpha_D)) - 1 )/ (beta_D - 1);
-    
- else
-    error('Distribution=?'); 
-end
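The Sharma-Mittal divergence above goes through the Jensen difference divergence J; a 1-D pure-Python sketch (names mine, not part of ITE) confirms that identical Gaussians give zero divergence:

```python
import math

def sharma_mittal_div_normal(alpha, beta, m1, s1, m2, s2):
    # 1-D specialization of the Nielsen & Nock (2012) closed form above
    C = 1.0 / (alpha / s1**2 + (1 - alpha) / s2**2)
    dm = m1 - m2
    J = (math.log((s1**2) ** alpha * (s2**2) ** (1 - alpha) / C)
         + alpha * (1 - alpha) * dm**2 / C) / 2          # Jensen difference divergence
    rho = math.exp(-J)
    return (rho ** ((1 - beta) / (1 - alpha)) - 1) / (beta - 1)

print(sharma_mittal_div_normal(0.7, 1.5, 0.0, 1.0, 0.0, 1.0))  # → 0.0 (identical Gaussians)
```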

File code/estimators/quick_tests/analytical_values/analytical_value_HPhi.m

-function [H] = analytical_value_HPhi(distr,Hpar,par)
-%function [H] = analytical_value_HPhi(distr,Hpar,par)
-%Analytical value (H) of the Phi-entropy for the given distribution. See also 'quick_test_HPhi.m'.
-%
-%INPUT:
-%   distr  : name of the distribution.
-%   par    : parameters of the distribution (structure).
-%   Hpar   : parameter of the Phi-entropy. 
-%            In case of Phi = @(t)(t.^c_H): Hpar.c_H.
-%
-%If distr = 'uniform': par.a, par.b in U[a,b].
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-switch distr
-    case 'uniform'
-           a = par.a; b = par.b;        
-           c_H = Hpar.c_H;
-           H = 1 / (b-a)^c_H; 
-    otherwise
-        error('Distribution=?');                 
-end
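For the uniform case above, the closed form H = 1/(b-a)^c_H is consistent with reading the Phi-entropy as the expectation of Phi over the density, i.e. the integral of Phi(f(y))*f(y) dy; that reading (and the function name below) is my assumption, not stated in the ITE code:

```python
def phi_entropy_uniform(a, b, c):
    # Phi(t) = t^c over U[a,b] with density f = 1/(b-a):
    # int Phi(f(y)) f(y) dy = (b-a) * (1/(b-a))^(c+1) = 1/(b-a)^c
    return 1.0 / (b - a) ** c

print(phi_entropy_uniform(0.0, 2.0, 2.0))  # → 0.25
```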

File code/estimators/quick_tests/analytical_values/analytical_value_HRenyi.m

-function [H] = analytical_value_HRenyi(distr,alpha_H,par)
-%function [H] = analytical_value_HRenyi(distr,alpha_H,par)
-%Analytical value (H) of the Renyi entropy for the given distribution. See also 'quick_test_HRenyi.m'.
-%
-%INPUT:
-%   distr  : name of the distribution.
-%   alpha_H: parameter of the Renyi entropy.
-%   par    : parameters of the distribution (structure).
-%
-%If distr = 'uniform': par.a, par.b, par.A <- AxU[a,b],
-%           'normal' : par.cov = covariance matrix.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-switch distr
-    case 'uniform' %we also applied the transformation rule of Renyi entropy
-        a = par.a; b = par.b; A = par.A;
-        H = log(prod(b-a))  + log(abs(det(A)));
-    case 'normal' %Kai-Sheng Song. Renyi information, loglikelihood and an intrinsic distribution measure. Journal of Statistical Planning and Inference 93 (2001) 51-69.
-        C = par.cov;
-        d = size(C,1);
-        
-        H = log( (2*pi)^(d/2) * sqrt(abs(det(C))) ) - d * log(alpha_H) / 2 / (1-alpha_H);
-    otherwise
-        error('Distribution=?');                 
-end            
-
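A quick consistency check on the Song (2001) formula above: the Renyi entropy of a Gaussian converges to its Shannon entropy as alpha -> 1. Pure-Python 1-D sketch (names mine, not part of ITE):

```python
import math

def renyi_entropy_normal(alpha, s):
    # 1-D case of the Song (2001) formula above (C = s^2, d = 1)
    return math.log(math.sqrt(2 * math.pi) * s) - math.log(alpha) / (2 * (1 - alpha))

def shannon_entropy_normal(s):
    # H = 1/2 * log(2*pi*e*s^2), the alpha -> 1 limit
    return 0.5 * math.log(2 * math.pi * math.e * s**2)

print(abs(renyi_entropy_normal(1.000001, 2.0) - shannon_entropy_normal(2.0)) < 1e-5)  # → True
```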

File code/estimators/quick_tests/analytical_values/analytical_value_HShannon.m

-function [H] = analytical_value_HShannon(distr,par)
-%function [H] = analytical_value_HShannon(distr,par)
-%Analytical value (H) of the Shannon entropy for the given distribution. See also 'quick_test_HShannon.m'.
-%
-%INPUT:
-%   distr  : name of the distribution.
-%   par    : parameters of the distribution (structure).
-%
-%If distr = 'uniform': par.a, par.b, par.A <- AxU[a,b],
-%           'normal' : par.cov = covariance matrix.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-switch distr 
-    case 'uniform' %we also applied the transformation rule of Shannon entropy
-        a = par.a; b = par.b; A = par.A;
-        H = log(prod(b-a)) + log(abs(det(A))); 
-    case 'normal'   
-        C = par.cov;
-        d = size(C,1);
-        
-        H =  1/2 * log( (2*pi*exp(1))^d * det(C) ); %equals to: H = 1/2 * log(det(C)) + d/2*log(2*pi) + d/2
-    otherwise
-        error('Distribution=?');
-end  
-        
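The "transformation rule" mentioned in the uniform branch above is H(A*X) = H(X) + log|det A|; in 1-D it is easy to verify directly in pure Python (names mine, not part of ITE):

```python
import math

def shannon_entropy_scaled_uniform(a, b, A):
    # 1-D case of the formula above: H(A * U[a,b]) = log(b - a) + log|A|,
    # using the entropy transformation rule H(A*X) = H(X) + log|det A|
    return math.log(b - a) + math.log(abs(A))

print(abs(shannon_entropy_scaled_uniform(0.0, 2.0, 3.0) - math.log(6.0)) < 1e-12)  # → True
```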

File code/estimators/quick_tests/analytical_values/analytical_value_HSharmaM.m

-function [H] = analytical_value_HSharmaM(distr,alpha_H,beta_H,par)
-%function [H] = analytical_value_HSharmaM(distr,alpha_H,beta_H,par)
-%Analytical value (H) of the Sharma-Mittal entropy for the given distribution. See also 'quick_test_HSharmaM.m'.
-%
-%INPUT:
-%   distr  : name of the distribution.
-%   par    : parameters of the distribution (structure).
-%   alpha_H: parameter of the Sharma-Mittal entropy; alpha: >0, not 1.
-%   beta_H : parameter of the Sharma-Mittal entropy; beta: not 1.
-%
-%If distr = 'normal': par.cov = covariance matrix.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-switch distr
-    case 'normal' %Frank Nielsen and Richard Nock. A closed-form expression for the Sharma-Mittal entropy of exponential families. Journal of Physics A: Mathematical and Theoretical, 45:032003, 2012.
-        C = par.cov;
-        d = size(C,1); %=size(C,2)
-        H = ( ( (2*pi)^(d/2) * sqrt(abs(det(C))) )^(1-beta_H) / alpha_H^( d*(1-beta_H) / (2*(1-alpha_H)) )  - 1 ) / (1-beta_H);
-    otherwise
-        error('Distribution=?');                 
-end
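A useful check on the Nielsen-Nock closed form above: at beta = alpha the Sharma-Mittal entropy reduces to the Tsallis entropy (the "II" closed form in 'analytical_value_HTsallis.m'). Pure-Python 1-D sketch (names mine, not part of ITE):

```python
import math

def sharma_mittal_entropy_normal(alpha, beta, s):
    # 1-D case of the Nielsen & Nock (2012) formula above (C = s^2, d = 1)
    z = math.sqrt(2 * math.pi) * s
    return (z ** (1 - beta) / alpha ** ((1 - beta) / (2 * (1 - alpha))) - 1) / (1 - beta)

def tsallis_entropy_normal(alpha, s):
    # closed-form Tsallis entropy (variant "II" in analytical_value_HTsallis.m), 1-D
    z = math.sqrt(2 * math.pi) * s
    return (z ** (1 - alpha) / alpha ** 0.5 - 1) / (1 - alpha)

# beta = alpha reduces Sharma-Mittal to Tsallis
print(abs(sharma_mittal_entropy_normal(0.8, 0.8, 1.5) - tsallis_entropy_normal(0.8, 1.5)) < 1e-12)  # → True
```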

File code/estimators/quick_tests/analytical_values/analytical_value_HTsallis.m

-function [H] = analytical_value_HTsallis(distr,alpha_H,par)
-%function [H] = analytical_value_HTsallis(distr,alpha_H,par)
-%Analytical value (H) of the Tsallis entropy for the given distribution. See also 'quick_test_HTsallis.m'.
-%
-%INPUT:
-%   distr  : name of the distribution.
-%   alpha_H: parameter of the Tsallis entropy.
-%   par    : parameters of the distribution (structure).
-%
-%If distr = 'uniform': par.a, par.b, par.A <- AxU[a,b],
-%           'normal' : par.cov = covariance matrix.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-   
-switch distr
-    case 'uniform'
-        a = par.a; b = par.b; A = par.A;
-        
-        H = log(prod(b-a))  + log(abs(det(A))); %Renyi entropy; we also applied the transformation rule of Renyi entropy
-        H = ( exp((1-alpha_H)*H) - 1 ) / (1-alpha_H); %Renyi entropy -> Tsallis entropy
-        %II (assumption: A=I):
-           %H = (prod(b-a)^(1-alpha_H) - 1) / (1-alpha_H);
-    case 'normal'
-        C = par.cov;
-        d = size(C,1);
-        
-        %I:
-            H = log( (2*pi)^(d/2) * sqrt(abs(det(C))) ) - d * log(alpha_H) / 2 / (1-alpha_H); %Renyi entropy: Kai-Sheng Song. Renyi information, loglikelihood and an intrinsic distribution measure. Journal of Statistical Planning and Inference 93 (2001) 51-69.
-            H = ( exp((1-alpha_H)*H) - 1 ) / (1-alpha_H); %Renyi entropy -> Tsallis entropy
-        %II:
-            %H = ( ( (2*pi)^(d/2) * sqrt(abs(det(C))) )^(1-alpha_H)  / (alpha_H^(d/2)) - 1 ) / (1-alpha_H);
-    otherwise
-        error('Distribution=?');                 
-end            
-        
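The hunk above computes the Tsallis entropy along route "I" (Renyi closed form, then the Renyi -> Tsallis conversion) and notes the direct route "II" in a comment. Both routes agree, which a 1-D pure-Python sketch confirms (names mine, not part of ITE):

```python
import math

def renyi_entropy_normal(alpha, s):
    # Song (2001) closed form, 1-D
    return math.log(math.sqrt(2 * math.pi) * s) - math.log(alpha) / (2 * (1 - alpha))

def tsallis_from_renyi(alpha, hr):
    # the Renyi -> Tsallis conversion used in route "I" above
    return (math.exp((1 - alpha) * hr) - 1) / (1 - alpha)

def tsallis_entropy_normal(alpha, s):
    # the direct closed form from the commented route "II" above
    return ((math.sqrt(2 * math.pi) * s) ** (1 - alpha) / alpha ** 0.5 - 1) / (1 - alpha)

# routes I and II agree
print(abs(tsallis_from_renyi(0.6, renyi_entropy_normal(0.6, 2.0))
          - tsallis_entropy_normal(0.6, 2.0)) < 1e-12)  # → True
```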

File code/estimators/quick_tests/analytical_values/analytical_value_IRenyi.m

-function [I] = analytical_value_IRenyi(distr,alpha_I,par)
-%function [I] = analytical_value_IRenyi(distr,alpha_I,par)
-%Analytical value (I) of the Renyi mutual information for the given distribution. See also 'quick_test_IRenyi.m'.
-%
-%INPUT:
-%   distr  : name of the distribution.
-%   alpha_I: parameter of the Renyi mutual information.
-%   par    : parameters of the distribution (structure).
-%
-%If distr = 'normal': par.cov = covariance matrix.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-    
-switch distr
-    case 'normal'
-        C = par.cov;
-        t1 = -alpha_I / 2 * log(det(C));
-        t2 = -(1-alpha_I) / 2 * log(prod(diag(C)));
-        t3 = log( det( alpha_I*inv(C) + (1-alpha_I)*diag(1./diag(C)) ) ) / 2;
-        I = 1 / (alpha_I-1) * (t1+t2-t3);            
-    otherwise
-        error('Distribution=?');            
-end
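For a bivariate Gaussian with unit variances and correlation rho, the t1/t2/t3 terms above can be written out by hand, and the alpha -> 1 limit should recover the Shannon mutual information -log(1 - rho^2)/2. A pure-Python sketch of that specialization (names mine, not part of ITE):

```python
import math

def renyi_mi_bivariate_normal(alpha, rho):
    # the formula above specialized to C = [[1, rho], [rho, 1]]
    det_c = 1 - rho**2
    t1 = -alpha / 2 * math.log(det_c)
    t2 = 0.0                          # unit variances: log(prod(diag(C))) = 0
    a = alpha / det_c + (1 - alpha)   # diagonal entry of alpha*inv(C) + (1-alpha)*I
    b = -alpha * rho / det_c          # off-diagonal entry
    t3 = math.log(a**2 - b**2) / 2
    return (t1 + t2 - t3) / (alpha - 1)

# alpha -> 1 recovers the Shannon mutual information -log(1 - rho^2)/2
print(abs(renyi_mi_bivariate_normal(1.000001, 0.5) + 0.5 * math.log(0.75)) < 1e-3)  # → True
```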

File code/estimators/quick_tests/analytical_values/analytical_value_IShannon.m

-function [I] = analytical_value_IShannon(distr,par)
-%function [I] = analytical_value_IShannon(distr,par)
-%Analytical value (I) of the Shannon mutual information for the given distribution. See also 'quick_test_IShannon.m'.
-%
-%INPUT:
-%   distr  : name of the distribution.
-%   par    : parameters of the distribution (structure).
-%
-%If distr = 'normal': par.ds = vector of component dimensions; par.cov = (joint) covariance matrix.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-    
-switch distr
-    case 'normal'
-        ds = par.ds;
-        C = par.cov;
-        
-        cum_ds = cumsum([1;ds(1:end-1)]);%1,d_1+1,d_1+d_2+1,...,d_1+...+d_{M-1}+1 = starting indices of the subspaces (M=number of subspaces).
-        I = 1;
-        for m = 1 : length(ds)
-            iv = [cum_ds(m):cum_ds(m)+ds(m)-1];
-            I = I * det(C(iv,iv));
-        end
-        I = log(I/det(C)) / 2;        
-    otherwise
-        error('Distribution=?');            
-end
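In the simplest non-trivial case (two 1-D components, ds = [1;1], unit variances, correlation rho), the loop above collapses to a one-liner; a pure-Python sketch (name mine, not part of ITE):

```python
import math

def shannon_mi_bivariate_normal(rho):
    # the formula above with ds = [1;1] and C = [[1, rho], [rho, 1]]:
    # I = log( det(C_11)*det(C_22) / det(C) ) / 2 = log(1/(1 - rho^2)) / 2
    return 0.5 * math.log(1.0 / (1.0 - rho**2))

print(shannon_mi_bivariate_normal(0.0))  # → 0.0 (independent components)
```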

File code/estimators/quick_tests/analytical_values/analytical_value_KEJR1.m

-function [K] = analytical_value_KEJR1(distr1,distr2,u_K,par1,par2)
-%function [K] = analytical_value_KEJR1(distr1,distr2,u_K,par1,par2)
-%Analytical value (K) of the Jensen-Renyi kernel-1 for the given distributions. See also 'quick_test_KEJR1.m'.
-%
-%INPUT:
-%   distr1 : name of the distribution-1.
-%   distr2 : name of the distribution-2.
-%   u_K    : parameter of the Jensen-Renyi kernel-1 (alpha_K = 2: fixed).
-%   par1   : parameters of the distribution (structure).
-%   par2   : parameters of the distribution (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean, par1.std <- N(m1,s1^2xI); par2.mean, par2.std <- N(m2,s2^2xI).
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.
-    m1 = par1.mean; s1 = par1.std;
-    m2 = par2.mean; s2 = par2.std;
-                
-    ms = [m1,m2];
-    ss = [s1,s2];
-    H = compute_H2([1/2,1/2],ms,ss);%quadratic Renyi entropy
-    K = exp(-u_K*H);
-else
-    error('Distribution=?');                 
-end
-            
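The kernel above delegates to 'compute_H2', ITE's quadratic Renyi entropy of a Gaussian mixture. A stand-in for that helper in pure Python, 1-D (my own implementation of the standard Gaussian product integral, assumed to match compute_H2's semantics; names mine, not part of ITE):

```python
import math

def mixture_quadratic_renyi_entropy(w, ms, ss):
    # H2 = -log int p(x)^2 dx for a 1-D Gaussian mixture p = sum_i w_i N(m_i, s_i^2),
    # using int N(x;m_i,s_i^2) N(x;m_j,s_j^2) dx = N(m_i - m_j; 0, s_i^2 + s_j^2)
    acc = 0.0
    for wi, mi, si in zip(w, ms, ss):
        for wj, mj, sj in zip(w, ms, ss):
            v = si**2 + sj**2
            acc += wi * wj * math.exp(-(mi - mj)**2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return -math.log(acc)

def jensen_renyi_kernel1(u, m1, s1, m2, s2):
    # K = exp(-u * H2) for the equal-weight two-component mixture, as above
    return math.exp(-u * mixture_quadratic_renyi_entropy([0.5, 0.5], [m1, m2], [s1, s2]))

# identical components: the mixture is a single N(0,1), so H2 = log(2*sqrt(pi))
print(abs(jensen_renyi_kernel1(1.0, 0.0, 1.0, 0.0, 1.0)
          - 1.0 / (2.0 * math.sqrt(math.pi))) < 1e-12)  # → True
```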

File code/estimators/quick_tests/analytical_values/analytical_value_KEJR2.m

-function [K] = analytical_value_KEJR2(distr1,distr2,u_K,par1,par2)
-%function [K] = analytical_value_KEJR2(distr1,distr2,u_K,par1,par2)
-%Analytical value (K) of the Jensen-Renyi kernel-2 for the given distributions. See also 'quick_test_KEJR2.m'.
-%
-%INPUT:
-%   distr1 : name of the distribution-1.
-%   distr2 : name of the distribution-2.
-%   u_K    : parameter of the Jensen-Renyi kernel-2 (alpha_K = 2: fixed).
-%   par1   : parameters of the distribution (structure).
-%   par2   : parameters of the distribution (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean, par1.std <- N(m1,s1^2xI); par2.mean, par2.std <- N(m2,s2^2xI).
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009. 
-    m1 = par1.mean; s1 = par1.std;
-    m2 = par2.mean; s2 = par2.std;
-    
-    w = [1/2;1/2];%parameter of Jensen-Renyi divergence, fixed
-    ms = [m1,m2];
-    ss = [s1,s2];
-    term1 = compute_H2(w,ms,ss);
-	term2 = w(1) * compute_H2(1,m1,s1) + w(2) * compute_H2(1,m2,s2);
-	D  = term1 - term2; %H2(\sum_i wi Yi) - \sum_i w_i H2(Yi), where H2 is the quadratic Renyi entropy
-    
-    K = exp(-u_K*D);
-else
-    error('Distribution=?');                 
-end
-            

File code/estimators/quick_tests/analytical_values/analytical_value_KEJT1.m

-function [K] = analytical_value_KEJT1(distr1,distr2,u_K,par1,par2)
-%function [K] = analytical_value_KEJT1(distr1,distr2,u_K,par1,par2)
-%Analytical value (K) of the Jensen-Tsallis kernel-1 for the given distributions. See also 'quick_test_KEJT1.m'.
-%
-%INPUT:
-%   distr1 : name of the distribution-1.
-%   distr2 : name of the distribution-2.
-%   u_K    : parameter of the Jensen-Tsallis kernel-1 (w = [1/2;1/2]: fixed).
-%   par1   : parameters of the distribution (structure).
-%   par2   : parameters of the distribution (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean, par1.std <- N(m1,s1^2xI); par2.mean, par2.std <- N(m2,s2^2xI).
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal')
-    m1 = par1.mean; s1 = par1.std;
-    m2 = par2.mean; s2 = par2.std;
-    %analytical value of Renyi entropy (Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009.):
-        ms = [m1,m2];
-        ss = [s1,s2];
-        HR = compute_H2([1/2,1/2],ms,ss);%quadratic Renyi entropy
-    %quadratic Renyi entropy -> quadratic Tsallis entropy:
-        HT = 1 - exp(-HR);
-    K = exp(-u_K*HT);
-else
-    error('Distribution=?');                 
-end
-
-    
-

File code/estimators/quick_tests/analytical_values/analytical_value_KEJT2.m

-function [K] = analytical_value_KEJT2(distr1,distr2,u_K,par1,par2)
-%function [K] = analytical_value_KEJT2(distr1,distr2,u_K,par1,par2)
-%Analytical value (K) of the Jensen-Tsallis kernel-2 for the given distributions. See also 'quick_test_KEJT2.m'.
-%
-%INPUT:
-%   distr1 : name of the distribution-1.
-%   distr2 : name of the distribution-2.
-%   u_K    : parameter of the Jensen-Tsallis kernel-2 (w = [1/2;1/2]: fixed).
-%   par1   : parameters of the distribution (structure).
-%   par2   : parameters of the distribution (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean, par1.std <- N(m1,s1^2xI); par2.mean, par2.std <- N(m2,s2^2xI).
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal')
-    m1 = par1.mean; s1 = par1.std;
-    m2 = par2.mean; s2 = par2.std;
-    %analytical value of Jensen-Renyi divergence (Fei Wang, Tanveer Syeda-Mahmood, Baba C. Vemuri, David Beymer, and Anand Rangarajan. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Medical Image Computing and Computer-Assisted Intervention, 12: 648–655, 2009):
-        w = [1/2;1/2];%parameter of Jensen-Tsallis divergence, fixed
-	    ms = [m1,m2];
-	    ss = [s1,s2];
-	    term1 = 1 - exp(-compute_H2(w,ms,ss));%quadratic Renyi entropy -> quadratic Tsallis entropy
-	    term2 = w(1) * (1-exp(-compute_H2(1,m1,s1))) + w(2) * (1-exp(-compute_H2(1,m2,s2)));%quadratic Renyi entropy -> quadratic Tsallis entropy
-	    D  = term1 - term2; %H2(\sum_i wi Yi) - \sum_i w_i H2(Yi), where H2 is the quadratic Tsallis entropy
-    %analytical value of Jensen-Tsallis kernel-2:
-        K = exp(-u_K*D);
-else
-    error('Distribution=?');                 
-end

File code/estimators/quick_tests/analytical_values/analytical_value_Kexpected.m

-function [K] = analytical_value_Kexpected(distr1,distr2,sigma_K,par1,par2)
-%function [K] = analytical_value_Kexpected(distr1,distr2,sigma_K,par1,par2)
-%Analytical value (K) of the expected kernel for the given distributions. See also 'quick_test_Kexpected.m'.
-%
-%INPUT:
-%   distr1 : name of the distribution-1.
-%   distr2 : name of the distribution-2.
-%   sigma_K: parameter of the expected kernel.
-%   par1   : parameters of the distribution (structure).
-%   par2   : parameters of the distribution (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean = mean1, par1.cov = covariance matrix1; par2.mean = mean2, par2.cov = covariance matrix2.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Scholkopf. Learning from distributions via support measure machines. In Advances in Neural Information Processing Systems (NIPS), pages 10-18, 2011.
-    m1 = par1.mean; C1 = par1.cov;
-    m2 = par2.mean; C2 = par2.cov;
-    d = length(m1);
-    
-    gam = 1/sigma_K^2;
-    diffm = m1 - m2; 
-    K = exp(-1/2*(diffm.'*inv(C1+C2+eye(d)/gam)*diffm)) / sqrt(abs(det(gam*C1+gam*C2+eye(d))));
-else
-    error('Distribution=?');                 
-end
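In 1-D the Muandet et al. closed form above simplifies nicely, and the degenerate zero-variance case (Dirac measures) should recover the plain Gaussian kernel between the means. A pure-Python sketch (name mine, not part of ITE):

```python
import math

def expected_gauss_kernel(m1, s1, m2, s2, sigma):
    # 1-D case of the Muandet et al. closed form above (gam = 1/sigma^2)
    dm = m1 - m2
    return (math.exp(-dm**2 / (2 * (s1**2 + s2**2 + sigma**2)))
            / math.sqrt((s1**2 + s2**2) / sigma**2 + 1.0))

# zero-variance inputs (Dirac measures) recover the plain Gaussian kernel k(m1, m2)
print(abs(expected_gauss_kernel(1.0, 0.0, 0.0, 0.0, 2.0) - math.exp(-1.0 / 8.0)) < 1e-12)  # → True
```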

File code/estimators/quick_tests/analytical_values/analytical_values_KPP.m

-function [K] = analytical_values_KPP(distr1,distr2,rho_K,par1,par2)
-%function [K] = analytical_values_KPP(distr1,distr2,rho_K,par1,par2)
-%Analytical value (K) of the probability product kernel for the given distributions. See also 'quick_test_KPP.m'.
-%
-%INPUT:
-%   distr1 : name of the distribution-1.
-%   distr2 : name of the distribution-2.
-%   rho_K  : parameter of the probability product kernel.
-%   par1   : parameters of the distribution (structure).
-%   par2   : parameters of the distribution (structure).
-%
-%If (distr1,distr2) = ('normal','normal'): par1.mean = mean1, par1.cov = covariance matrix1; par2.mean = mean2, par2.cov = covariance matrix2.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-if strcmp(distr1,'normal') && strcmp(distr2,'normal') %Tony Jebara, Risi Kondor, and Andrew Howard. Probability product kernels. Journal of Machine Learning Research, 5:819-844, 2004.
-    m1 = par1.mean; C1 = par1.cov;
-    m2 = par2.mean; C2 = par2.cov;
-    d = length(m1);
-    
-    %inv12:
-        inv1 = inv(C1);
-        inv2 = inv(C2);
-        inv12 = inv(inv1+inv2);
-    m12 = inv1 * m1 + inv2 * m2;
-    K = (2*pi)^((1-2*rho_K)*d/2) * rho_K^(-d/2) * abs(det(inv12))^(1/2) * abs(det(C1))^(-rho_K/2) * abs(det(C2))^(-rho_K/2) * exp(-rho_K/2*(m1.'*inv1*m1 + m2.'*inv2*m2 - m12.'*inv12*m12));
-else
-    error('Distribution=?');                 
-end
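The Jebara-Kondor-Howard closed form above can be checked against a direct numerical integration of K_rho(p,q) = ∫ p(x)^rho q(x)^rho dx in one dimension. A sketch, with an illustrative rho and hypothetical Gaussian parameters:

```python
import numpy as np

def prob_product_kernel_normal(m1, C1, m2, C2, rho):
    # Closed form of the probability product kernel for p = N(m1,C1),
    # q = N(m2,C2): K_rho(p,q) = integral of p^rho * q^rho.
    d = len(m1)
    inv1, inv2 = np.linalg.inv(C1), np.linalg.inv(C2)
    inv12 = np.linalg.inv(inv1 + inv2)
    m12 = inv1 @ m1 + inv2 @ m2
    return ((2 * np.pi) ** ((1 - 2 * rho) * d / 2) * rho ** (-d / 2)
            * np.sqrt(abs(np.linalg.det(inv12)))
            * abs(np.linalg.det(C1)) ** (-rho / 2)
            * abs(np.linalg.det(C2)) ** (-rho / 2)
            * np.exp(-rho / 2 * (m1 @ inv1 @ m1 + m2 @ inv2 @ m2
                                 - m12 @ inv12 @ m12)))

# 1-d sanity check against a fine Riemann sum of p^rho * q^rho.
rho = 0.7
m1, C1 = np.array([0.2]), np.array([[1.0]])
m2, C2 = np.array([-0.1]), np.array([[0.5]])
K = prob_product_kernel_normal(m1, C1, m2, C2, rho)

x = np.linspace(-15.0, 15.0, 200_001)
p = np.exp(-(x - m1[0])**2 / (2 * C1[0, 0])) / np.sqrt(2 * np.pi * C1[0, 0])
q = np.exp(-(x - m2[0])**2 / (2 * C2[0, 0])) / np.sqrt(2 * np.pi * C2[0, 0])
K_num = np.sum(p**rho * q**rho) * (x[1] - x[0])
```

For rho = 1 the kernel reduces to the expected likelihood kernel N(m1; m2, C1+C2), which gives a second quick consistency check.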

File code/estimators/quick_tests/quick_test_Aequality.m

-%function [] = quick_test_Aequality()
-%Quick test for 'A(y^1,y^2)=0/1 if y^1=y^2'. Here 'y^1=y^2' is meant in distribution/realization. See also 'quick_test_Aindependence.m'. In the test, normal/uniform variables are considered.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-%clear start:
-    clear all; close all;
-
-%parameters:    
-    distr = 'uniform'; %possibilities: 'uniform', 'normal'
-    num_of_samples_v = [100:500:3*1000]; %sample numbers used for estimation
-    %estimator (of association):
-        %base:
-            cost_name = 'CorrEntr_KDE_direct';        
-            %cost_name = 'CorrEntrCoeff_KDE_direct'; %computationally intensive
-            %cost_name = 'CorrEntrCoeff_KDE_iChol';
-        %meta:
-            %cost_name = 'CCIM'; %metric
-            %cost_name = 'CIM';  %metric
-
-%initialization:    
-    num_of_samples_max = num_of_samples_v(end);
-    L = length(num_of_samples_v);
-    ds = [1;1];
-    co = A_initialization(cost_name,1);
-    A_hat_v = zeros(L,1);%estimated association values
-
-%generate samples (Y=[Y1,Y2]), analytical value (A):
-    %Y1:
-        switch distr
-            case 'uniform'
-                Y1 = rand(1,num_of_samples_max);
-            case 'normal'
-                Y1 = randn(1,num_of_samples_max);
-            otherwise
-                error('Distribution=?');
-        end
-    %Y2:
-        switch cost_name
-            case {'CIM','CCIM'}
-                Y2 = 0.99 * Y1;
-                %Y2 = 1 * Y1;
-                A = 0; %if Y1=Y2
-            case 'CorrEntr_KDE_direct'
-                Y2 = 0.9 *Y1;
-                %Y2 = Y1;
-                A = 1; %if Y1=Y2 
-            case {'CorrEntrCoeff_KDE_direct','CorrEntrCoeff_KDE_iChol'}
-                switch distr
-                    case 'uniform'
-                        Y2 = rand(1,num_of_samples_max);
-                    case 'normal'
-                        Y2 = randn(1,num_of_samples_max);
-                end
-                A = 0; %if Y1=distr=Y2
-            otherwise
-                error('Method=?');
-        end
-    Y = [Y1;Y2];
-    
-%estimation:
-    Tk = 0;%index of the sample number examined
-    for num_of_samples = num_of_samples_v
-        Tk = Tk + 1;
-        A_hat_v(Tk) = A_estimation(Y(:,1:num_of_samples),ds,co);
-        disp(strcat('Tk=',num2str(Tk),'/',num2str(L)));
-    end
-
-%plot:
-    plot(num_of_samples_v,A_hat_v,'r',num_of_samples_v,A*ones(L,1),'g');
-    legend({'estimation','analytical value'});
-    xlabel('Number of samples');
-    ylabel('Association');   
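The property this script probes for correntropy, A(y^1,y^2)=1 when y^1=y^2, is easy to see directly: the sample correntropy is the mean Gaussian kernel of the coordinate-wise differences, which is identically 1 when the two samples coincide. A minimal NumPy sketch with an illustrative bandwidth (this is not ITE's 'CorrEntr_KDE_direct' estimator, just the plain sample statistic):

```python
import numpy as np

def correntropy(y1, y2, sigma=1.0):
    # Sample correntropy with a Gaussian kernel:
    # V(y1,y2) = mean_t exp(-(y1_t - y2_t)^2 / (2*sigma^2)).
    return np.mean(np.exp(-(y1 - y2) ** 2 / (2 * sigma ** 2)))

rng = np.random.default_rng(1)
y1 = rng.standard_normal(10_000)

V_equal = correntropy(y1, y1)        # identical samples -> exactly 1
V_close = correntropy(y1, 0.9 * y1)  # nearly identical -> close to 1
```

This mirrors why the script uses Y2 = 0.9*Y1 as a "nearly equal" probe: the association stays close to, but below, its maximum of 1.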

File code/estimators/quick_tests/quick_test_Aindependence.m

-%function [] = quick_test_Aindependence()
-%Quick test for 'A(y^1,...,y^M)=0 if y^m-s are independent'. See also 'quick_test_Aequality.m'. In the test, normal/uniform variables are considered.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-%clear start:
-    clear all; close all;
-
-%parameters:   
-    distr = 'normal'; %possibilities: 'uniform', 'normal' 
-    M = 2; %number of components
-    num_of_samples_v = [1000:1000:30*1000]; %sample numbers used for estimation
-    %estimator (of association):
-        %base:
-            cost_name = 'Spearman1';  %M>=2
-            %cost_name = 'Spearman2'; %M>=2
-            %cost_name = 'Spearman3'; %M>=2   
-            %cost_name = 'Spearman4'; %M>=2          
-            %cost_name = 'CCorrEntr_KDE_iChol'; %M=2
-            %cost_name = 'CCorrEntr_KDE_Lapl';  %M=2
-            %cost_name = 'Blomqvist';   %M>=2
-            %cost_name = 'Spearman_lt'; %M>=2
-            %cost_name = 'Spearman_ut'; %M>=2
-        %meta:
-            %cost_name = 'Spearman_L'; %M>=2
-            %cost_name = 'Spearman_U'; %M>=2
-            
-%initialization:
-    num_of_samples_max = num_of_samples_v(end);
-    L = length(num_of_samples_v);
-    co = A_initialization(cost_name,1);
-    A_hat_v = zeros(L,1); %estimated association values
-    ds = ones(M,1);
-    
-%generate samples (Y), analytical value (A): 
-    %Y:
-       switch distr
-           case 'uniform'
-               Y = rand(M,num_of_samples_max);
-           case 'normal'
-               Y = randn(M,num_of_samples_max);
-           otherwise
-               error('Distribution=?');            
-       end
-    A = 0;
-
-%estimation:
-    Tk = 0;%index of the sample number examined
-    for num_of_samples = num_of_samples_v
-        Tk = Tk + 1;
-        A_hat_v(Tk) = A_estimation(Y(:,1:num_of_samples),ds,co);
-        disp(strcat('Tk=',num2str(Tk),'/',num2str(L)));        
-    end
-
-%plot:
-    plot(num_of_samples_v,A_hat_v,'r',num_of_samples_v,A*ones(L,1),'g');
-    legend({'estimation','analytical value'});
-    xlabel('Number of samples');
-    ylabel('Association');    
-
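The bivariate (M=2) case of the property tested here, Spearman's rho vanishing under independence, can be reproduced in a few lines. A rank-based sketch (not ITE's 'Spearman1' implementation; sample size and seed are illustrative):

```python
import numpy as np

def spearman_rho(y1, y2):
    # Spearman's rank correlation: Pearson correlation of the ranks
    # (argsort of argsort gives 0-based ranks for distinct values).
    r1 = np.argsort(np.argsort(y1))
    r2 = np.argsort(np.argsort(y2))
    return np.corrcoef(r1, r2)[0, 1]

rng = np.random.default_rng(2)
n = 30_000
y1, y2 = rng.random(n), rng.random(n)  # independent uniforms

rho_hat = spearman_rho(y1, y2)  # converges to 0 under independence
```

The standard error of rho_hat is roughly 1/sqrt(n), so at n = 30,000 the estimate sits well within a few hundredths of zero.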

File code/estimators/quick_tests/quick_test_CCE.m

-%function [] = quick_test_CCE()
-%Quick test for cross-entropy estimators: analytical expression vs estimated value as a function of the sample number. In the test, normal variables are considered. See also 'analytical_value_CCE.m'.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.
-%
-%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
-%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
-%
-%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-%
-%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.
-
-%clear start:
-    clear all; close all;
-    
-%parameters:
-    distr = 'normal'; %fixed
-    d = 1; %dimension of the distribution
-    num_of_samples_v = [100:500:12*1000]; %sample numbers used for estimation
-    %estimator (of cross-entropy), base:
-        cost_name = 'CE_kNN_k'; %d>=1
-        %cost_name = 'CE_expF'; %d>=1
-    
-%initialization:
-    num_of_samples_max = num_of_samples_v(end);
-    L = length(num_of_samples_v);
-    co = C_initialization(cost_name,1);    
-    CE_hat_v = zeros(L,1); %vector of estimated cross-entropies
-
-%distr, d -> samples (Y1,Y2), analytical formula for the cross-entropy (CE):
-    switch distr
-        case 'normal'
-            %mean:
-                m2 = rand(d,1);
-                m1 = m2;            
-            %(random) linear transformation applied to the data:
-                A2 = rand(d);
-                A1 = rand * A2; %(e2,A2) => (e1,A1) choice guarantees Y1<<Y2 (in practice, too)                
-            %covariance matrix:
-                C1 = A1 * A1.';
-                C2 = A2 * A2.';
-            %generate samples:
-                Y1 = A1 * randn(d,num_of_samples_max) + repmat(m1,1,num_of_samples_max); %A1xN(0,I)+m1
-                Y2 = A2 * randn(d,num_of_samples_max) + repmat(m2,1,num_of_samples_max); %A2xN(0,I)+m2
-            %analytical value of cross-entropy:
-                par1.mean = m1; par1.cov = C1;
-                par2.mean = m2; par2.cov = C2;
-                CE = analytical_value_CCE(distr,distr,par1,par2);
-        otherwise
-            error('Distribution=?');
-    end   
-    
-%estimation:
-    Tk = 0;%index of the sample number examined
-    for num_of_samples = num_of_samples_v
-        Tk = Tk + 1;
-        CE_hat_v(Tk) = C_estimation(Y1(:,1:num_of_samples),Y2(:,1:num_of_samples),co);
-        disp(strcat('Tk=',num2str(Tk),'/',num2str(L)));
-    end
-    
-%plot:
-    plot(num_of_samples_v,CE_hat_v,'r',num_of_samples_v,CE*ones(L,1),'g');
-    legend({'estimation','analytical value'});
-    xlabel('Number of samples');
-    ylabel('Cross-entropy');  
-
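The analytical value this test compares against ('analytical_value_CCE' for two normals) is the Gaussian cross-entropy H(p,q) = -E_p[log q]. A NumPy sketch of the closed form with a Monte Carlo check, using hypothetical parameters:

```python
import numpy as np

def gauss_cross_entropy(m1, C1, m2, C2):
    # H(p,q) for p = N(m1,C1), q = N(m2,C2):
    # 0.5 * (d*log(2*pi) + log det C2 + tr(C2^{-1} C1)
    #        + (m1-m2)' C2^{-1} (m1-m2))
    d = len(m1)
    diff = m1 - m2
    C2inv = np.linalg.inv(C2)
    return 0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(C2))
                  + np.trace(C2inv @ C1) + diff @ C2inv @ diff)

rng = np.random.default_rng(3)
d = 2
m1, m2 = np.zeros(d), np.array([0.5, -0.2])
C1, C2 = np.eye(d), 2.0 * np.eye(d)
CE = gauss_cross_entropy(m1, C1, m2, C2)

# Monte Carlo check: -mean log q(Y1) over Y1 ~ N(m1, C1).
y = rng.multivariate_normal(m1, C1, 200_000)
diffs = y - m2
logq = (-0.5 * np.sum(diffs @ np.linalg.inv(C2) * diffs, axis=1)
        - 0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(C2))))
CE_mc = -np.mean(logq)
```

Note that H(p,q) >= H(p) with equality iff p = q, which is why the script's (m1,C1) close to (m2,C2) setup keeps the two curves comparable on one plot.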

File code/estimators/quick_tests/quick_test_DBregman.m

-%function [] = quick_test_DBregman()
-%Quick test for Bregman divergence estimators: analytical expression vs estimated value as a function of the sample number. In the test, uniform variables are considered. See also 'analytical_value_DBregman.m'.
-
-%Copyright (C) 2012-2014 Zoltan Szabo ("http://www.gatsby.ucl.ac.uk/~szabo/", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
-%
-%This file is part of the ITE (Information Theoretical Estimators) toolbox.