Looks OK to me. It seems to work (judging from this patch, my workstation, and datura) for configurations using MKL for fftw3, Debian packages, and self-building.
Should one add MPI_LIBS to FFTW3_LIBS? HDF5, which can also optionally use MPI, does so (I added that at one point). I think this is required for utilities that want to link against FFTW3: they don't benefit from the thorn dependency tracking and can only use FFTW3_LIBS.
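To make that concrete, here is a minimal sketch of what the FFTW3 detection script could do, modeled on the HDF5 handling; the variable values are placeholders for whatever the detection actually finds, not real output:

```shell
# Sketch only: mirror what the HDF5 detection script does and append
# the MPI libraries, so utilities that link only against FFTW3_LIBS
# (and bypass thorn dependency tracking) still resolve MPI symbols.
FFTW3_LIBS="fftw3"   # assumed detected value
MPI_LIBS="mpi"       # assumed value provided by the MPI detection

if [ -n "${MPI_LIBS}" ]; then
    FFTW3_LIBS="${FFTW3_LIBS} ${MPI_LIBS}"
fi
echo "${FFTW3_LIBS}"
```

With the placeholder values above this yields `fftw3 mpi`, i.e. the MPI libraries simply ride along in FFTW3_LIBS.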
Assuming the thorn detects an external fftw library that was not built with MPI: what should happen then? It seems that with the current patch libfftw3_mpi(.so) would be added as a library without checking that a) it is needed (it was not so far) and b) it actually exists.
In fact, I do get the expected linker error with the patch: cannot find -lfftw3_mpi
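One way to avoid that linker error would be a link probe before adding the library. A rough sketch (the probe logic and the `CC` fallback are my assumptions, not part of the patch):

```shell
# Sketch only: try to link a trivial program against -lfftw3_mpi and
# record whether that succeeds, instead of assuming the library exists.
CC=${CC:-cc}
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF
if ${CC} conftest.c -lfftw3_mpi -o conftest 2>/dev/null; then
    HAVE_FFTW3_MPI=yes
else
    HAVE_FFTW3_MPI=no    # e.g. libfftw3-dev installed without the MPI bindings
fi
rm -f conftest.c conftest
echo "HAVE_FFTW3_MPI=${HAVE_FFTW3_MPI}"
```

Only when the probe reports `yes` would fftw3_mpi be appended to FFTW3_LIBS.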
MPI bindings for fftw3 might need to be installed separately; e.g., I have libfftw3-dev installed, but not libfftw3-mpi-dev.
We would need to warn users if the ET were to require MPI bindings for fftw (which it would if we assume them to be present whenever MPI is used).
Another issue might be that the installed MPI bindings for fftw might not be linked against the version of MPI that Cactus is configured to use; we would have to check for that. libfftw3-mpi on my system seems to use openmpi, which happens to be the MPI I also use for Cactus. The fftw3-mpi package for Ubuntu also uses openmpi, while the simfactory configuration for Ubuntu advises users to install mpich2 (though I checked that using openmpi would also work).
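A quick manual check of which MPI implementation a system libfftw3_mpi was linked against is possible with ldd; this sketch assumes the Debian/Ubuntu multiarch install path:

```shell
# Sketch only: list the MPI runtime libraries that libfftw3_mpi pulls in
# (e.g. libmpi.so from openmpi vs. libmpich.so), to compare against the
# MPI that Cactus is configured to use. The path is an assumption.
LIB=/usr/lib/x86_64-linux-gnu/libfftw3_mpi.so
if [ -e "${LIB}" ]; then
    FFTW3_MPI_DEPS=$(ldd "${LIB}" | grep -i mpi || true)
else
    FFTW3_MPI_DEPS="(libfftw3_mpi not installed)"
fi
echo "${FFTW3_MPI_DEPS}"
```

Automating this reliably is harder, since the library names differ between MPI implementations and distributions.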
We could of course only add the MPI bindings in the case where FFTW is built by Cactus, but then thorns shouldn't rely on them being present, which would defeat the purpose, I would guess.
All these complications make me wonder whether this change is worth the trouble. The MPI bindings might be, but what would they be used for?
For simplicity I would go the same route that we are taking for HDF5:
- enable using MPI features if they are there
- if we build FFTW3 ourselves and we have MPI, enable MPI features (I think we don't do this for HDF5, but we should)
- allow linking against a non-MPI FFTW3 system install for simplicity, which will break if a thorn needs a parallel FFT
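Putting those three points together, the configure-time decision could look roughly like this; all variable names and values here are illustrative assumptions, not existing Cactus variables:

```shell
# Sketch only: HDF5-style policy for FFTW3.
FFTW3_DIR="/usr"       # assumed: external FFTW3 install was detected
HAVE_MPI=yes           # assumed: Cactus is configured with MPI
HAVE_FFTW3_MPI=no      # assumed: result of a link probe for libfftw3_mpi

if [ "${FFTW3_DIR}" = "BUILD" ] && [ "${HAVE_MPI}" = "yes" ]; then
    # We build FFTW3 ourselves and MPI is available: enable MPI features.
    FFTW3_EXTRA_CONFIGURE_FLAGS="--enable-mpi"
elif [ "${HAVE_FFTW3_MPI}" = "yes" ]; then
    # System install with MPI bindings: use them if they are there.
    FFTW3_LIBS="fftw3_mpi fftw3"
else
    # Serial-only system install: allowed, but a thorn needing a
    # parallel FFT will fail to link.
    FFTW3_LIBS="fftw3"
fi
echo "${FFTW3_LIBS}"
```

With the assumed values above (external serial install), the result is the serial-only `fftw3` case.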
Yes, we should add the MPI libraries explicitly to the FFTW3 libraries, as Roland describes.
If FFTW3 uses the wrong MPI version -- that's tough; it is difficult to detect, and such inconsistencies can exist for all libraries that Cactus uses (not just for MPI). The way out is to require users to remedy this in their option list.
This change allows parallel FFTs. Without this, only process-local FFTs are possible, which is limiting. I'd assume that most times you want an FFT, you'll have a uniform grid, and if you are running on multiple processes you will need the parallel version.
We currently have thorns in the ET that use FFTW3 (PITTNullCode/SphericalHarmonicRecon), so whatever we set up should not break them. My experience with codes using FFTW (SphericalHarmonicRecon, SpEC) was that they did process-local FFTs of fairly small size, not one large multi-process FFT, so I would not force all FFTW installations to offer fftw_mpi in order to support the thorn's current use plus future use on large arrays. Possibly a switch named FFTW3_ENABLE_MPI, analogous to HDF5_ENABLE_CXX, would make sense? If it is set, we either build fftw3 with MPI enabled or add the MPI libs to its linking libraries.
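Such a switch might look like this in a user's option list; FFTW3_ENABLE_MPI is only the proposal from this thread, not an option that exists today:

```
# Hypothetical option-list fragment (FFTW3_ENABLE_MPI does not exist yet):
FFTW3_DIR        = BUILD
FFTW3_ENABLE_MPI = yes    # build fftw3 with --enable-mpi, or, for a
                          # system install, add fftw3_mpi to FFTW3_LIBS
```

Leaving it unset would preserve today's serial-only behavior for thorns like SphericalHarmonicRecon.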