example is failing?
The README example should identify which datacube it refers to. I have assumed it is DATACUBE_HDFS_v1p0.fits; however, after

`ln -s DATACUBE_HDFS_v1p0.fits datacube.fits`

cutting and pasting the example fails:
```
lsd_cc_spatial.py --input=datacube.fits --SHDU=1 --NHDU=4 --gaussian --lambda0=7050 -p0=0.836 -p1=-4.4295e-3 --output=spat_c_datacube.fits
lsd_cc_spatial.py version 1.0.2
lsd_cc_spatial.py run on datacubes in inputfile: datacube.fits (flux in HDU: 1, variance in HDU: 4)
lsd_cc_spatial.py: Using no mask cube!
lsd_cc_spatial.py: PSF shape model = Gaussian
lsd_cc_spatial.py: PSF lambda dependence via polynomial approximation using the coefficients [p_0,p_1,p_2] [0.836, -0.0044295, 0.0]
lsd_cc_spatial.py: Zero-Wavelength in Polynomial: 7050.0 Angstrom
lsd_cc_spatial.py: Using 8 parallel threads
lsd_cc_spatial.py: Spatial covolution using Fast Fourier transform!
datacube.fits: Reading in the Data Cube... (HDU1) (0.006s)
No mask is given... Performing all operations on umasked cube. (File datacube.fits, HDU 1)... (20.062s)
datacube.fits: Threaded Filtering starts...(20.062s)
datacube.fits ... Filter window PSFs are Gaussians ...
datacube.fits: Creating the wavelength dependent PSF filter for 3641 datacube layers. (20.062s)
datacube.fits: Using polynomial fit to approximate wavelength dependence of PSF FWHM ...
datacube.fits: Coefficients of the polynomial [p_0,p_1,p_2]=[0.836, -0.0044295, 0.0] (plate scale: 0.2 arcsec^2 per spaxel)
datacube.fits: Average size of the filter windows 35^2 px. (28.865s)
datacube.fits: Thread 1: Working on wavelength layers from #1 to #455
datacube.fits: Thread 2: Working on wavelength layers from #456 to #910
datacube.fits: Thread 3: Working on wavelength layers from #911 to #1365
datacube.fits: Thread 4: Working on wavelength layers from #1366 to #1820
datacube.fits: Thread 5: Working on wavelength layers from #1821 to #2275
datacube.fits: Thread 6: Working on wavelength layers from #2276 to #2730
datacube.fits: Thread 7: Working on wavelength layers from #2731 to #3185
datacube.fits: Thread 8: Working on wavelength layers from #3186 to #3642
Traceback (most recent call last):
File "/data2/teuben/LSDCat/lsdcat/lsd_cc_spatial.py", line 332, in <module>
method='fft')
File "/data2/teuben/LSDCat/lsdcat/lib/spatial_smooth_lib.py", line 76, in spatial_filter_parallel
result.append(r.get())
File "/usr/lib/python2.7/multiprocessing/pool.py", line 567, in get
raise self._value
ValueError: could not broadcast input array from shape (0) into shape (331,326)
```
(Trying `--threads 4` failed in the same way; `--threads 1` failed in a different way.)
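For reference, the `ValueError` at the bottom of the traceback is NumPy's generic message for assigning an empty array into a fixed-size slice. A minimal sketch that reproduces the same class of error (this is not LSDCat code; the 331x326 shape is taken from the traceback, and the exact wording of the message varies slightly between NumPy versions):

```python
import numpy as np

# One spatial layer of the cube, shaped as in the traceback (331 x 326 spaxels).
layer = np.zeros((331, 326))

# Assigning a zero-length result into the full slice raises
# "ValueError: could not broadcast input array from shape (0) into shape (331,326)".
layer[:, :] = np.empty(0)
```

So the error suggests that at least one per-layer filtering result came back empty, rather than a problem with reading the cube itself.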
Comments (8)
-
repo owner - I will investigate, assuming that `DATACUBE_HDFS_v1p0.fits` (symlinked to `datacube.fits` above) is from http://muse-vlt.eu/science/hdfs-v1-0/ . Thanks for reporting.
reporter - I tried triple quotes... how do you get the text to not be formatted?
-
reporter - So yes, this is the datacube from the HDFS that is available at http://data.muse-vlt.eu/HDFS/v1.0/DATACUBE_HDFS_v1p0.fits.gz
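For anyone reproducing this, a short sketch of fetching that cube and setting up the `datacube.fits` link the example expects (illustrative only; `wget`, `gunzip` and `ln -s` on the command line do the same job, and the compressed cube is a large, GB-scale download):

```python
import gzip
import os
import shutil
import urllib.request

URL = "http://data.muse-vlt.eu/HDFS/v1.0/DATACUBE_HDFS_v1p0.fits.gz"

# Download the compressed cube (large file, this takes a while).
urllib.request.urlretrieve(URL, "DATACUBE_HDFS_v1p0.fits.gz")

# Decompress it next to the download.
with gzip.open("DATACUBE_HDFS_v1p0.fits.gz", "rb") as src, \
        open("DATACUBE_HDFS_v1p0.fits", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Create the symlink used in the example call.
if not os.path.exists("datacube.fits"):
    os.symlink("DATACUBE_HDFS_v1p0.fits", "datacube.fits")
```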
-
repo owner - (comment deleted)
-
repo owner - So it turns out that there is a typo in the example. In a previous version the coefficients had to be given in units of `arcsec` and `arcsec/nm`, but now they are supplied in `arcsec` and `arcsec/Angstrom`. Hence the example call should be modified to: `lsd_cc_spatial.py --input=datacube.fits --SHDU=1 --NHDU=2 --gaussian --lambda0=7050 -p0=0.836 -p1=-4.4295e-5 --output=spat_c_datacube.fits`
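As a quick sanity check on why the old value crashes, one can evaluate the PSF FWHM polynomial described in the log output, presumably of the form FWHM(lam) = p0 + p1*(lam - 7050) + p2*(lam - 7050)^2 in arcsec. A rough sketch (not LSDCat code; the 4750-9350 Angstrom range is only an assumed approximation of the MUSE wavelength coverage):

```python
# FWHM(lam) = p0 + p1*(lam - lambda0) + p2*(lam - lambda0)**2, in arcsec,
# with lambda0 = 7050 Angstrom (as printed in the log above).
def fwhm(lam, p0, p1, p2=0.0, lambda0=7050.0):
    return p0 + p1 * (lam - lambda0) + p2 * (lam - lambda0) ** 2

for lam in (4750.0, 7050.0, 9350.0):  # assumed approximate MUSE coverage
    wrong = fwhm(lam, 0.836, -4.4295e-3)   # value from the failing example
    right = fwhm(lam, 0.836, -4.4295e-5)   # corrected value
    print("%6.0f A   wrong p1: %+8.3f arcsec   corrected p1: %+6.3f arcsec"
          % (lam, wrong, right))

# wrong p1:     about +11.0 arcsec at 4750 A and -9.4 arcsec at 9350 A (unphysical)
# corrected p1: about +0.94 arcsec at 4750 A and +0.73 arcsec at 9350 A (plausible seeing)
```

A negative or wildly varying FWHM would also be consistent with the empty filter result hinted at by the broadcast error above.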
-
reporter - Yes, I confirm it now runs for me. By default it seems to take over my whole machine (in my case 8 threads), bringing it to a grinding halt. Perhaps a better default would be Nthreadmax - 1; I don't know what the industry standard is here. Also, on the web page where the example is described, it would be useful to tell users roughly how long it takes on a typical 2016-vintage i7. On my laptop (i7-3630QM CPU @ 2.40 GHz) it took 7.5 minutes.
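On the default thread count, a common convention is to leave one logical core free so the machine stays responsive. A small sketch of that idea (illustrative only; this is not how lsd_cc_spatial.py currently picks its default):

```python
import multiprocessing

def default_thread_count():
    """Use all logical CPUs but one, and never fewer than 1."""
    return max(1, multiprocessing.cpu_count() - 1)

# On the reporter's i7-3630QM (4 cores / 8 threads) this would choose 7
# instead of the 8 threads shown in the log above.
print(default_thread_count())
```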
-
reporter - As I can not edit the post above, here is the output pre-formatted (use three backticks before and after text that needs to be pre-formatted).
repo owner - changed status to resolved. OK.