ITE / code / estimators / meta_estimators / KJT_HJT_estimation.m

function [K] = KJT_HJT_estimation(Y1,Y2,co)
%function [K] = KJT_HJT_estimation(Y1,Y2,co)
%Estimates the Jensen-Tsallis kernel of two distributions from which we have samples (Y1 and Y2) using the relation: K_JT(f_1,f_2) = log_{alpha}(2) - T_alpha(f_1,f_2), where (i) log_{alpha} is the alpha-logarithm, (ii) T_alpha is the Jensen-Tsallis alpha-difference (that can be expressed in terms of the Tsallis entropy).
%   1)We use the naming convention 'K<name>_estimation' to ease embedding new kernels on distributions.
%   2)This is a meta method: the Tsallis entropy estimator can be arbitrary.
%  Y1: Y1(:,t) is the t^th sample from the first distribution.
%  Y2: Y2(:,t) is the t^th sample from the second distribution. Note: the number of samples in Y1 [=size(Y1,2)] and Y2 [=size(Y2,2)] can be different.
%  co: estimator object of a kernel on distributions.
%   Andre F. T. Martins, Noah A. Smith, Eric P. Xing, Pedro M. Q. Aguiar, and Mario A. T. Figueiredo. Nonextensive information theoretical kernels on measures. Journal of Machine Learning Research, 10:935-975, 2009.
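%
%Example (a sketch following the toolbox's usual initialization convention; the cost name 'JT_HJT' and the K_initialization interface are assumptions, verify them against your ITE version):
%   co = K_initialization('JT_HJT',1); %initialize the kernel estimator (mult=1)
%   Y1 = randn(3,1000); Y2 = randn(3,2000); %samples from the two distributions; sample numbers may differ
%   K = KJT_HJT_estimation(Y1,Y2,co);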

%Copyright (C) 2013 Zoltan Szabo ("", "zoltan (dot) szabo (at) gatsby (dot) ucl (dot) ac (dot) uk")
%This file is part of the ITE (Information Theoretical Estimators) Matlab/Octave toolbox.
%ITE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by
%the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
%This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
%MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
%You should have received a copy of the GNU General Public License along with ITE. If not, see <http://www.gnu.org/licenses/>.

%co.mult:OK. The information theoretical quantity of interest can be (and is!) estimated exactly [co.mult=1]; the computational complexity of the estimation is essentially the same as that of the 'up to multiplicative constant' case [co.mult=0].

    if size(Y1,1)~=size(Y2,1)
        error('The dimension of the samples in Y1 and Y2 must be equal.');
    end

    a = co.alpha; %alpha

    %alpha-logarithm of 2:
    log_alpha_2 = (2^(1-a)-1) / (1-a);

    %T (=Jensen-Tsallis alpha-difference), expressed via the Tsallis entropy (H) of the mixture and of the components:
    w = [1/2,1/2]; %fixed, uniform weights
    mixtureY = mixture_distribution(Y1,Y2,w);
    T = H_estimation(mixtureY,co.member_co) - (w(1)^a * H_estimation(Y1,co.member_co) + w(2)^a * H_estimation(Y2,co.member_co));

    K = log_alpha_2 - T;