Can you please confirm my understanding that there are two differences between GCCA/DIABLO and multi-block PLS (MB-PLS)?

1. In GCCA/DIABLO, the sum of covariances being maximized includes covariances among the Xk scores/components themselves, in addition to the covariances between the Xk scores/components and the Y scores/components, whereas this is not the case in MB-PLS.
2. In GCCA the user must supply a design matrix, whereas in MB-PLS there are "super-weights" (or "block weights"): parameters estimated by the algorithm so as to maximize the weighted sum of covariances between the Xk scores/components and the Y scores/components, the weights being those block weights.
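To make sure I am describing the two criteria correctly, here is a toy sketch of the difference between the objectives as I understand them (plain NumPy, not mixOmics code; the score vectors, design matrix, and super-weights are illustrative values, not quantities estimated by either algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # samples

# Toy centered score vectors for two X-blocks (t1, t2) and the Y block (u)
t1 = rng.standard_normal(n); t1 -= t1.mean()
t2 = rng.standard_normal(n); t2 -= t2.mean()
u  = rng.standard_normal(n); u  -= u.mean()

def cov(a, b):
    """Sample covariance of two centered score vectors."""
    return float(a @ b) / (len(a) - 1)

# GCCA/DIABLO-style criterion: a design matrix C, supplied by the user,
# says which pairs of blocks are connected; the objective sums the
# covariances over all connected pairs, so X-X terms can enter.
blocks = [t1, t2, u]
C = np.array([[0, 1, 1],   # t1 connected to t2 and u
              [1, 0, 1],   # t2 connected to t1 and u
              [1, 1, 0]])  # u  connected to t1 and t2
gcca_obj = sum(C[j, k] * cov(blocks[j], blocks[k])
               for j in range(3) for k in range(j + 1, 3))

# MB-PLS-style criterion: only X-block-to-Y covariances appear, each
# weighted by a super-weight a_k that the algorithm estimates
# (fixed here only to show the shape of the objective).
a = np.array([0.6, 0.8])  # illustrative super-weights, ||a|| = 1
mbpls_obj = a[0] * cov(t1, u) + a[1] * cov(t2, u)

print(gcca_obj, mbpls_obj)
```

The point of the sketch is just that the GCCA objective contains the extra cov(t1, t2) term whenever the design matrix connects the two X-blocks, while the MB-PLS objective never does.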
If this is correct, could you please explain the advantages of GCCA over MB-PLS, and why you implemented GCCA rather than MB-PLS in mixOmics?
The reason I ask is that an obvious drawback of GCCA, given the differences above, seems to be that the Xk scores/components obtained with GCCA should explain/predict Y less well than those obtained with MB-PLS. Incidentally, is this also why, in the DIABLO article, the classification performance of DIABLO is not as good as that of the elastic net?
Thanks in advance, Arnaud