# LSDCat - *Line Source Detection and Cataloguing Tool* #

![LSDCat Logo](./doc/lsd_cat.jpg)

The manual is work in progress. While all the basics of LSDCat are covered, some of the more advanced features are not yet documented. Feedback is welcome.

**Table of Contents**

[TOC]

## Requirements ##

LSDCat runs on Python 2.7. In order to use LSDCat you need the following 3rd-party Python libraries installed on your system.

* astropy (>= 1.0.1) - http://www.astropy.org/
* NumPy (>= 1.10) - http://www.numpy.org/
* SciPy (>= 0.17) - http://www.scipy.org/

## Install ##

In your `$HOME` (or wherever you want to have LSDCat installed) do:

```
git clone https://Knusper2000@bitbucket.org/Knusper2000/lsdcat.git
```

Then set up your `PYTHONPATH` to contain the LSDCat library directory `lsdcat/lib`. For example, if you have installed LSDCat in `$HOME` and use bash, put in your `.bashrc`

```
export PYTHONPATH=${PYTHONPATH}:${HOME}/lsdcat/lib/
```

To have the executables available anywhere on your system, configure the system's `$PATH` variable accordingly. Again, assuming you have installed LSDCat in your `$HOME` and you are using `bash`, add to your `.bashrc`:

```
export PATH=${PATH}:${HOME}/lsdcat/
```

If you want to use the additional tools that are shipped with LSDCat (see below), put

```
export PATH=${PATH}:${HOME}/lsdcat/tools/
```

in your `.bashrc`. Users of other shells (e.g., `csh` or `tcsh`) have to follow [a different procedure](https://kb.iu.edu/d/acar).

## License ##

LSDCat is licensed under a [three-clause BSD license](http://choosealicense.com/licenses/bsd-3-clause/). For details see the file `LICENSE` in the LSDCat repository.

## Acknowledging / Citing LSDCat ##

If your research benefits from the use of LSDCat we ask you to cite the LSDCat paper:

E. C. Herenz & L. Wisotzki 2017, A&A 602, A111
ADS: https://ui.adsabs.harvard.edu/#abs/2017A%26A...602A.111H
DOI: https://doi.org/10.1051/0004-6361/201629507 (open access)

We also have a record in the Astrophysics Source Code Library: http://ascl.net/1703.011

## Contact ##

For bug reports or feature requests, please use the [issue tracker provided by Bitbucket](https://bitbucket.org/Knusper2000/lsdcat/issues?status=new&status=open). Other questions via email to `christian.herenz <at> astro.su.se`.

## Documentation ##

### Overview ###

LSDCat is a conceptually simple but robust and efficient detection package for emission lines in wide-field IFS datacubes. The detection utilises a 3D matched-filtering approach for compact single emission line objects. Furthermore, the software measures fluxes and extents of detected lines. LSDCat is implemented in Python, with a focus on fast processing of the large data volumes of typical wide-field IFS datacubes.

The following flowchart illustrates the processing steps of LSDCat from an input datacube to a catalogue of positions, shape parameters and fluxes of emission line sources.

![LSDCat Flowchart](./doc/lsd_cat_flow.png)

Each of the processing steps has an associated LSDCat routine:

- Spatial filtering: `lsd_cc_spatial.py`
- Spectral filtering: `lsd_cc_spectral.py`
- Thresholding: `lsd_cat.py`
- Measurements: `lsd_cat_measure.py`

A complete description of the algorithms can be found in the LSDCat paper (Herenz & Wisotzki 2017). Here we will describe how to use these tools on wide-field IFS data. Moreover, we also provide some tools for working with IFS datacubes in the LSDCat context. These tools are in the folder `./tools/`. A brief overview of their functionality is given at the end of the documentation in the section "Additional Tools".

### Input data format ###

LSDCat works with IFS datacubes stored as FITS files.
A FITS file storing a datacube is assumed to contain two header-data units (HDUs), one HDU for the flux values and another one for the associated variances.

### Matched filtering ###

We utilise a 3D matched-filtering approach in LSDCat to obtain a robust detection statistic for isolated emission line sources in wide-field IFS datacubes. Matched filtering transforms the input datacube by convolving it with a template that matches the expected signal of an emission line in the datacube. LSDCat is primarily designed for the search for faint compact emission line sources. For those sources it is a reasonable assumption that their spatial and spectral properties are independent. Therefore, we can perform the 3D convolution as two successive convolutions, one in each spectral layer and one along the spectral direction for each spaxel.

#### Spatial filtering ####

For the convolution in each spectral layer LSDCat offers the choice between a circular Gaussian profile and a Moffat profile. Both functions are commonly used as an approximation of the seeing-induced point spread function (PSF) in ground-based optical and near-IR observations. The parameter used to characterise the PSF is its full width at half maximum (FWHM). The PSF FWHM depends on wavelength. In LSDCat the wavelength dependency has to be supplied via the coefficients of a polynomial

```
FWHM(lambda) = p0 + p1 * (lambda - lambda0) + p2 * (lambda - lambda0)^2
```

Here the unit of the wavelength is Angstrom, and the FWHM is in arcseconds. The polynomial coefficients are thus in units of arcseconds/(Angstrom)^n, where n is the order of the coefficient. You need to specify these coefficients, as well as the zero-point `lambda0`, in order to run LSDCat. In the LSDCat paper we describe several ways to determine suitable coefficients for your datacubes.

Additionally, the Moffat function includes a second parameter, &beta;, that parameterises the kurtosis of the PSF. Usually, &beta; does not depend on wavelength.
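The polynomial is simple enough to evaluate directly. Below is a minimal sketch; the function name `fwhm_arcsec` is ours, and the coefficient values are merely the example values used later in this manual, not recommended defaults:

```python
def fwhm_arcsec(lam, p0=0.836, p1=-4.4295e-3, p2=0.0, lam0=7050.0):
    """Evaluate FWHM(lambda) = p0 + p1*(lambda - lambda0) + p2*(lambda - lambda0)^2.

    Wavelengths are in Angstrom, the result is in arcseconds.  The
    coefficient values are illustrative example values only.
    """
    dl = lam - lam0
    return p0 + p1 * dl + p2 * dl ** 2

# At the zero-point lambda0 the FWHM is simply p0:
print(fwhm_arcsec(7050.0))  # 0.836
```

At the blue and red ends of the wavelength range the polynomial should still return a physically sensible (positive) FWHM; this is worth checking when you determine the coefficients for your own datacube.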
If the Moffat function is chosen to characterise the PSF, you also have to specify &beta;.

Spatial filtering in LSDCat is performed by the routine `lsd_cc_spatial.py`. In principle, the spatial and spectral filtering operations can be carried out in either order without changing the final result. However, if you want to mask out regions for the matched filtering, you have to begin with `lsd_cc_spatial.py`, since this routine offers the option to utilise a mask (see below).

##### Usage #####

The routine `lsd_cc_spatial.py` has the following call signature:

```
lsd_cc_spatial.py [-h] -i INPUT [-o OUTPUT] [-S SHDU] [-N NHDU] [--std]
                  [--ignorenoise] [-m MASK] [-M MHDU] [-P PIXSCALE]
                  [-t THREADS] [-b BETA] [--gaussian] [-p0 P0] [-p1 P1]
                  [-p2 P2] [-p3 P3] [--lambda0 LAMBDA0] [-T TRUNCCONSTANT]
                  [--memmap]
```

All parameters and switches within brackets are optional. If they are not supplied, they are set to a default value.

- `-h`: Shows a long help message.
- `-i INPUT`, or `--input INPUT`: Name of the input FITS file containing the flux (and variance) datacube.
- `-o OUTPUT`, or `--output OUTPUT`: Name of the output FITS file. The output FITS file will contain 2 HDUs: HDU 0 stores the filtered signal and HDU 1 the propagated variances. [Default: `spatial_smoothed_+INPUT`, i.e. `spatial_smoothed_` will be prepended to the input file name.]
- `-S SHDU`, or `--SHDU SHDU`: HDU number (0-indexed) or name in the input FITS file containing the flux data. [Default: 0]
- `-N NHDU`, or `--NHDU NHDU`: HDU number (0-indexed) or name in the input FITS file containing the variance data. [Default: 1]
- `--std`: Some noise cubes contain standard deviations rather than variances (e.g. in KMOS). If set, the input noise cube will be squared to convert it into variances.
- `--ignorenoise`: Switch to not propagate the variance. If set, the output FITS file will contain only 1 HDU that stores the filtered signal.
- `-m MASK`, or `--mask MASK`: Name of a FITS file containing a mask. [Default: none]
- `-M MHDU`, or `--MHDU MHDU`: HDU number (0-indexed) or name of the mask within the MASK file. The mask is supposed to be a 2D array, of the same spatial dimensions as the datacube, containing only ones and zeros. Spaxels corresponding to zero-valued pixels in the mask will be set to zero before and after the spatial convolution operation. [Default: 1]
- `-P PIXSCALE`, or `--pixscale PIXSCALE`: Size of a spaxel in arcseconds. [Default: 0.2]
- `-t THREADS`, or `--threads THREADS`: Number of CPU cores used in parallel operation. [Default: all available CPU cores]
- `-b BETA`, or `--beta BETA`: &beta; parameter of the Moffat profile. [Default: 3.5]
- `--gaussian`: Switch to use a Gaussian profile instead of the default Moffat profile as the spatial filter profile. The &beta; parameter will be ignored in that case.
- `-p0 P0`: 0th-order coefficient (in arcseconds) of the polynomial approximating the PSF FWHM-lambda dependency. [Default: 0.8]
- `-p1 P1`: 1st-order polynomial coefficient (in arcseconds/Angstrom). [Default: 0.8]
- `-p2 P2`: 2nd-order polynomial coefficient (in arcseconds/Angstrom^2). [Default: 0]
- `--lambda0 LAMBDA0`: Zero-point of the polynomial, in Angstrom. [Default: 7050 Angstrom]
- `-T TRUNCCONSTANT`, or `--truncconstant TRUNCCONSTANT`: Parameter controlling the truncation of the filter window: the filter is truncated at T*WIDTH-PARAM, where WIDTH-PARAM = sigma for the Gaussian and FWHM for the Moffat. [Default: 8]

##### Example usage #####

In the following example we want to apply the spatial filtering to a FITS file `datacube.fits` that contains the flux datacube in HDU 1 and a variance datacube in HDU 4. Furthermore, we have determined that the wavelength dependency of the FWHM can be modelled by the above polynomial with `p0=0.836` arcsec and `p1=-4.4295e-3` arcsec/Angstrom at `lambda0=7050` Angstrom. Furthermore, we want to use a 2D Gaussian as a model for the PSF.
```
lsd_cc_spatial.py --input=datacube.fits --SHDU=1 --NHDU=4 --gaussian --lambda0=7050 -p0=0.836 -p1=-4.4295e-3 --output=spat_c_datacube.fits
```

This command will produce a FITS file `spat_c_datacube.fits` that contains the filtered data in HDU 0 and the propagated variances in HDU 1.

#### Spectral filtering ####

In LSDCat we adopt as spectral template a simple 1D Gaussian, whose width is parameterised by the FWHM in velocity. The 1D Gaussian is an adequate model for the emission lines of unresolved distant galaxies, where often no spatial disentanglement between ordered and unordered motions is possible.

Spectral filtering in LSDCat is performed by the routine `lsd_cc_spectral.py`.

##### Usage #####

The routine `lsd_cc_spectral.py` has the following call signature:

```
lsd_cc_spectral.py [-h] -i INPUT [-F FWHM] [-o OUTPUT] [-S SHDU] [-N NHDU]
                   [-t THREADS] [--ignorenoise] [--cunit3 CUNIT3]
                   [--nanfile NANFILE] [--nanhdu NANHDU]
```

All parameters and switches within brackets are optional. If they are not supplied, they are set to a default value.

- `-h`: Shows a long help message.
- `-i INPUT`, or `--input INPUT`: Name of the input FITS file containing the flux (and variance) datacube.
- `-o OUTPUT`, or `--output OUTPUT`: Name of the output FITS file. The output FITS file will contain 2 HDUs: HDU 0 stores the filtered signal and HDU 1 the propagated variances. [Default: `wavelength_smooth_+INPUT`, i.e. `wavelength_smooth_` will be prepended to the input file name.]
- `-S SHDU`, or `--SHDU SHDU`: HDU number (0-indexed) or name in the input FITS file containing the flux data. [Default: 0]
- `-N NHDU`, or `--NHDU NHDU`: HDU number (0-indexed) or name in the input FITS file containing the variance data. [Default: 1]
- `-t THREADS`, or `--threads THREADS`: Number of CPU cores used in parallel operation. [Default: all available CPU cores]
- `--ignorenoise`: Switch to not propagate the variance. If set, the output FITS file will contain only 1 HDU that stores the filtered signal.
- `--cunit3 CUNIT3`: Specify the wavelength unit ('Angstrom' or 'nm'). [Default: value from the FITS header.]
- `--nanfile NANFILE`: Name of a FITS file that contains a 2D image in `--nanhdu` (see below) of the same spatial dimensions as the input cube. Spectra corresponding to NaNs in this image will be ignored in the filtering. [Default: None]
- `--nanhdu NANHDU`: Number or name of the HDU (0-indexed) of the FITS file specified in `--nanfile` where the 2D image is stored. [Default: 4]
- `-F FWHM`, or `--FWHM FWHM`: Specify the FWHM of the Gaussian line template in km/s. [Default: 300 km/s]

In the context of MUSE IFS data the `--nanfile` option is especially useful if the datacubes contain a pointing that was observed with a position angle (PA) significantly different from 0 deg or 90 deg. This is because the MUSE pipeline samples each observation onto a rectangular grid where one spatial axis runs from south to north and the other from west to east. Hence, when the PA is 45 deg, 50% of the spaxels within the FITS file will be empty. The NaN mask allows these spaxels to be ignored in the spectral filtering.

##### Example usage #####

We now want to apply the spectral filtering with a Gaussian of `FWHM = 250 km/s` to the FITS file produced by the spatial filtering routine in the example above. Since in this case the default values of `--SHDU` and `--NHDU` are already correct, the command to run the spectral filtering is:

```
lsd_cc_spectral.py --input=spat_c_datacube.fits --FWHM=250 --output=cc_datacube.fits
```

The resulting FITS file `cc_datacube.fits` contains 2 HDUs: HDU 0 contains the filtered signal and HDU 1 the corresponding propagated variances. Using the tool `s2n-cube.py` you can now create the S/N cube (see Eq. 15 in the LSDCat paper).
```
s2n-cube.py --input=cc_datacube.fits --output=s2n_datacube.fits
```

This command produces a single-HDU FITS file `s2n_datacube.fits` containing the S/N cube. This step is not strictly necessary, as all the following LSDCat routines can create the S/N cube on the fly. However, it is convenient to have this cube for visual inspection (e.g. using [QFitsView](http://ascl.net/1210.019)). Moreover, this single-HDU S/N cube FITS file can also be read by all the following LSDCat routines. For more information on `s2n-cube.py` see the section "Additional Tools" at the end of the documentation.

### Emission line source detection ###

LSDCat detects emission lines by thresholding in the S/N cube, which results from dividing the matched-filtered signal by the square root of the propagated variances. Because of the matched filtering, the values in the S/N cube translate into a probability of rejecting the null hypothesis that no emission line is present at a given position in the datacube. This is commonly referred to as the detection significance of a source. However, in a strict mathematical sense this direct translation is only valid for sources that are exactly described by the matched-filtering template. Nevertheless, the matched filtering performed above always reduces high-frequency noise while enhancing sources that are similar to the filter template. In the LSDCat paper we quantified the loss of S/N as a function of source-filter mismatch for PSF-like Gaussian emission lines. There we showed that mismatches of the order of 20% between template and signal result in an essentially insignificant reduction of S/N.

The principal input parameter for the emission line source detection is the detection threshold (`THRESH`). The above-mentioned relation between threshold and null-hypothesis rejection probability is only valid if the input variance datacube contains a realistic estimate of the true noise.
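Conceptually, the S/N cube that is thresholded here reduces to a single array operation on the matched-filtered data. A minimal numpy sketch (the `s2n_cube` helper is ours for illustration, not the actual `s2n-cube.py` implementation):

```python
import numpy as np

def s2n_cube(filtered, variance):
    """Divide the matched-filtered data by the square root of the
    propagated variance (cf. Eq. 15 of the LSDCat paper).
    Conceptual sketch only, not the actual tool code."""
    return filtered / np.sqrt(variance)

# Tiny synthetic cube: a single bright voxel on a flat noise floor.
filtered = np.zeros((5, 4, 4))
filtered[2, 1, 1] = 10.0
variance = np.full((5, 4, 4), 4.0)

sn = s2n_cube(filtered, variance)
print(sn.max())  # 5.0
```

In the real tools the two input arrays are the HDUs written by `lsd_cc_spectral.py`, and empty spaxels (NaNs) need extra care.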
We recommend choosing the detection threshold as the point of diminishing returns, after a visual check of the S/N cube and the distribution of values within it. A detection threshold lower than this point will produce a large increase in spurious detections with only a small compensatory increase in genuine emission lines.

Thresholding and the construction of the source catalogue are performed by the routine `lsd_cat.py`. This routine collects all 3D clusters of neighbouring voxels above the detection threshold. For each of these clusters the coordinates of the S/N peak (`X_PEAK_SN`, `Y_PEAK_SN`, `Z_PEAK_SN`), its value (`DETSN_MAX`), and the number of voxels above the detection threshold (`NPIX`) are stored. In this catalogue each entry also gets assigned a unique identifier (a so-called running ID) `I`. Moreover, LSDCat can also assign to each entry an integer object identifier `ID`: multiple detections at a similar spatial position (within a small search radius, see the description of the `--radius` parameter below) get assigned the same object identifier. However, it needs to be checked afterwards whether spatial superpositions sharing the same object identifier are single real objects or two emission line objects at different redshifts. The resulting catalogue table is written to disk as a FITS table and an ASCII table. All pixel coordinates in this output catalogue are 0-indexed.

#### Usage ####

The routine `lsd_cat.py` has the following call signature:

```
lsd_cat.py [-h] -i INPUT [-S SHDU] [-N NHDU] [-e EXPMAP] [-t THRESH]
           [-c CATALOG] [--tabvalues TABVALUES] [-r RADIUS]
           [--borderdist BORDERDIST] [--clobber]
```

All parameters and switches within brackets are optional. If they are not supplied, they are set to a default value.

- `-h`: Shows a long help message.
- `-i INPUT`, or `--input INPUT`: Name of the input FITS file containing either the detection significances or the matched-filtered data and propagated variances.
- `-S SHDU`, or `--SHDU SHDU`: HDU number (0-indexed) or name in the input FITS file that contains the detection significances or the matched-filtered data. If no `--NHDU` (see below) is supplied, we assume that the cube in this HDU is S/N; otherwise we assume that it contains the matched-filtered data. [Default: 0]
- `-N NHDU`, or `--NHDU NHDU`: HDU number (0-indexed) or name in the filtered-cube FITS file that contains the propagated variances of the matched-filtering operation. [Default: not set; in this case the datacube in `--SHDU` is interpreted as S/N]
- `-e EXPMAP`, or `--expmap EXPMAP`: FITS file containing a 2D array storing, for each spaxel, the number of exposed voxels. The tool `fov_map_from_expcube.py` can create such a map. This exposure map is required if the output parameter `BORDER` is requested in the `--tabvalues` option (see below). [Default: None]
- `-t THRESH`, or `--thresh THRESH`: Detection threshold. [Default: 8.0]
- `-c CATALOG`, or `--catalog CATALOG`: Filename of the output catalogue. Two files will be written to disc: an ASCII catalogue with the specified filename, and a FITS table with `.fits` appended to this filename. [Default: `catalog_+INPUT+.cat`, where `INPUT` is the name of the input FITS file.]
- `--tabvalues TABVALUES`: Comma-separated list of columns to be written to the output catalogue. See below for a list of supported values. [Default: `I,ID,X_PEAK_SN,Y_PEAK_SN,Z_PEAK_SN,DETSN_MAX`]
- `-r RADIUS`, or `--radius RADIUS`: Grouping radius in arcsec. Detections at similar spatial positions within this search radius get assigned the same `ID`. [Default: 0.8 arcsec]
- `--spaxscale`: Spatial extent of a spatial pixel in arcsec. [Default: 0.8 arcsec]
- `--borderdist BORDERDIST`: Flag a detection in the catalogue (column `BORDER`) if it is less than `BORDERDIST` pixels away from the field-of-view border. Only has an effect if the field `BORDER` is requested in `--tabvalues` and an `--expmap` is supplied. [Default: 10]
- `--clobber`: Overwrite already existing output files. USE WITH CAUTION AS THIS MAY OVERWRITE YOUR RESULTS!

The following columns of the output catalogue can be requested with the `--tabvalues` option:

- `I`: Running ID. Unique integer for each detection in the output catalogue.
- `ID`: Object ID. Unique integer for groups of detections within the search radius set with the parameter `--radius` above.
- `X_PEAK_SN`, `Y_PEAK_SN`, and `Z_PEAK_SN`: Position of maximum S/N of a detection in voxel coordinates.
- `RA_PEAK_SN`, `DEC_PEAK_SN`, and `LAMBDA_PEAK_SN`: Position of maximum S/N of a detection in physical world coordinates (right ascension, declination, and wavelength). For this it is required that the input datacube HDU contains [world-coordinate system header information](http://dx.doi.org/10.1051/0004-6361:20021326).
- `NPIX`: Number of voxels above the detection threshold constituting the detection.
- `DETSN_MAX`: S/N value at `X_PEAK_SN,Y_PEAK_SN,Z_PEAK_SN` (formal detection significance).
- `BORDER`: Binary flag. Set to 1 if an object is near a field-of-view border, or 0 otherwise. (The distance to the border is set via the `--borderdist` parameter.)
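The clustering of neighbouring voxels above the threshold can be illustrated with a few lines of scipy. This is a conceptual sketch only, not the actual `lsd_cat.py` implementation; the `detect` helper and its return format are ours:

```python
import numpy as np
from scipy import ndimage

def detect(sn_cube, thresh=8.0):
    """Sketch of the thresholding step: label 3D clusters of neighbouring
    voxels above the detection threshold and report, for each cluster,
    the S/N-peak position, its value, and the number of voxels (NPIX).
    Illustration only -- not the actual lsd_cat.py code."""
    labels, n = ndimage.label(sn_cube > thresh)
    detections = []
    for i in range(1, n + 1):
        voxels = (labels == i)
        # Peak position within this cluster:
        z, y, x = np.unravel_index(
            np.argmax(np.where(voxels, sn_cube, -np.inf)), sn_cube.shape)
        detections.append({'I': i,
                           'X_PEAK_SN': int(x), 'Y_PEAK_SN': int(y),
                           'Z_PEAK_SN': int(z),
                           'DETSN_MAX': sn_cube[z, y, x],
                           'NPIX': int(voxels.sum())})
    return detections

sn = np.zeros((10, 8, 8))
sn[3, 2, 2] = 12.0
sn[3, 2, 3] = 9.0   # adjacent voxel -> same cluster as above
sn[7, 5, 5] = 8.5   # a second, single-voxel detection
print(detect(sn))
```

The grouping of detections into objects via the `--radius` parameter is a separate step applied to the resulting peak positions.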
#### Example usage ####

We now create a catalogue from the `s2n_datacube.fits` S/N datacube FITS file from the previous example, using a detection threshold of 8:

```
lsd_cat.py -i s2n_datacube.fits -t 8
```

The header of the resulting ASCII catalogue `catalog_s2n_datacube.fits.cat` then looks as follows:

```
# Catalog of detections in s2n_datacube.fits
# S/N HDU = 0
# Cross correlation polynomial coefficents: p0=0.836 " p1=-4.4295e-05 "/nm p2=0.0 "/nm^2
# Cross correlation velocity: 250.0 km/s
# Threshold: 8.0
# Detections: 211
# Spatial grouping radius: 0.8"
# Generated: 2016-06-24 20:45:45
# Tool: lsd_cat.py version 1.0.4
# Command: lsd_cat.py -i s2n_datacube.fits -t 8
# 1: I
# 2: ID
# 3: X_PEAK_SN
# 4: Y_PEAK_SN
# 5: Z_PEAK_SN
# 6: NPIX
# 7: DETSN_MAX
```

Note that all the parameters that were used in the matched-filtering procedure are also stored in the header of the ASCII file. Similarly, all this information is stored as key-value pairs in the header of the FITS output table.

### Source parameterisation / measurements ###

LSDCat provides a set of basic parameters for each detection. The parameters are chosen to be robust and independent of a specific scientific application. A detailed description of the available parameters is given in the LSDCat paper. For more complex measurements, involving e.g. fitting of the sources' flux distributions, the LSDCat measurement capability can serve as a starting point.

Source parameterisation is performed by the routine `lsd_cat_measure.py`. As input this routine requires the output catalogue from the detection routine, the matched-filtered data including the propagated variances, and the original data. The main input parameter influencing the behaviour of the source parameterisation routine is the analysis threshold (`THRESHANA`). This additional threshold must be smaller than or equal to the detection threshold. The role of the analysis threshold in the calculation of the various parameters is explained in detail in the LSDCat paper.
There we also give guidelines for choosing its value.

#### Usage ####

The routine `lsd_cat_measure.py` has the following call signature:

```
lsd_cat_measure.py [-h] -ic INPUTCAT -ta THRESHANA [--tabvalues TABVALUES]
                   -f FLUXCUBE [--fhdu FHDU] [--ferrhdu FERRHDU]
                   -ff FILTEREDFLUXCUBE [--ffhdu FFHDU] [--fferhdu FFERHDU]
                   [-ffsn SNCUBE] [--ffsnhdu FFSNHDU] [--rmin RMIN]
                   [--rmax RMAX] [-c CATALOG] [--clobber]
```

All parameters and switches within brackets are optional. If they are not supplied, they are set to a default value.

- `-h`: Shows a long help message.
- `-ic INPUTCAT`, or `--inputcat INPUTCAT`: Input LSDCat catalogue from `lsd_cat.py` (FITS file). This catalogue must contain the fields `I,X_PEAK,Y_PEAK,Z_PEAK`.
- `-ta THRESHANA`, or `--threshana THRESHANA`: Analysis threshold.
- `--tabvalues TABVALUES`: Comma-separated list of values to be written to the output catalogue. See below for a list of supported values. [Default: all values.]
- `-f FLUXCUBE`, or `--fluxcube FLUXCUBE`: FITS file containing the flux (+ variance) datacube. It is recommended to use a continuum-subtracted cube (e.g. median-filter subtracted).
- `--fhdu FHDU`: HDU name (or number) in `FLUXCUBE` containing the (continuum-subtracted) flux. [Default: `MEDFILTER-SUBTRACTED_DATA`]
- `--ferrhdu FERRHDU`: HDU name (or number) in `FLUXCUBE` containing the variances. [Default: `EFF_STAT`]
- `-ff FILTEREDFLUXCUBE`, or `--filteredfluxcube FILTEREDFLUXCUBE`: FITS file containing the datacube and propagated variances after matched filtering (i.e. after running `lsd_cc_spatial.py` and `lsd_cc_spectral.py` on `FLUXCUBE`).
- `--ffhdu FFHDU`: HDU name (or number) in `FILTEREDFLUXCUBE` that contains the filtered flux data. [Default: `FILTERED_DATA`]
- `--fferhdu FFERHDU`: HDU name (or number) in `FILTEREDFLUXCUBE` that contains the propagated variances after matched filtering. [Default: `FILTERED_STAT`]
- `-ffsn SNCUBE`, or `--sncube SNCUBE`: FITS file containing the S/N cube. (This cube can be generated with `./tools/s2n-cube.py` from `FILTEREDFLUXCUBE`.
If not supplied, it will be generated on the fly; generating it in advance, however, saves computing time over multiple runs.) [Default: None.]
- `--ffsnhdu FFSNHDU`: HDU name (or number) in `SNCUBE` that contains the S/N data. [Default: `SIGNALTONOISE`]
- `-t THRESH`, or `--thresh THRESH`: The detection threshold that was used in the `lsd_cat.py` run. Normally this value is not required, as it will be read from the FITS header of `INPUTCAT`. However, older versions of `lsd_cat.py` did not write this header entry, hence this option ensures compatibility with those versions. [Default: use the value in the FITS header of the catalogue.]
- `--rmin RMIN`: Minimum radius for the flux extraction aperture (in units of isophotal radii). [Default: 3]
- `--rmax RMAX`: Maximum radius for the flux extraction aperture (in units of isophotal radii). This radius also defines the area that is used to calculate the Kron radius. [Default: 6]
- `-c CATALOG`, or `--catalog CATALOG`: Filename of the FITS table containing the output catalogue with the columns defined in `TABVALUES`. [Default: `INPUTCAT`+`_fluxes.fits`]
- `--clobber`: Overwrite an already existing output file! Use with caution, as this may overwrite your results.

The following columns of the output catalogue can be requested with the `--tabvalues` option (see the LSDCat paper for a detailed description of the parameters):

- `X_SN`, `Y_SN`, `Z_SN`: 3D S/N-weighted centroid in voxel coordinates (0-indexed).
- `RA_SN`, `DEC_SN`, `LAMBDA_SN`: 3D S/N-weighted centroid in physical world coordinates.
- `X_FLUX`, `Y_FLUX`, `Z_FLUX`: 3D flux-weighted centroid in voxel coordinates (0-indexed).
- `RA_FLUX`, `DEC_FLUX`, `LAMBDA_FLUX`: 3D flux-weighted centroid in physical world coordinates.
- `X_SFLUX`, `Y_SFLUX`, `Z_SFLUX`: 3D filtered-flux-weighted centroid in voxel coordinates (0-indexed).
- `RA_SFLUX`, `DEC_SFLUX`, `LAMBDA_SFLUX`: 3D filtered-flux-weighted centroid in physical world coordinates.
- `X_1MOM`, `Y_1MOM`: 1st central moments in the flux-filtered narrow band (pixel coordinates).
- `RA_1MOM`, `DEC_1MOM`: 1st central moments in the flux-filtered narrow band (world coordinates).
- `Z_NB_MIN`, `Z_NB_MAX`: Narrow-band boundary coordinates (layer coordinates).
- `LAMBDA_NB_MIN`, `LAMBDA_NB_MAX`: Narrow-band boundary coordinates (wavelength).
- `X_2MOM`, `Y_2MOM`, `XY_2MOM`: Second central moments in the flux-filtered narrow band.
- `SIGMA_ISO`: Isophotal radius, i.e., the square root of `(X_2MOM + Y_2MOM)/2`.
- `F_KRON`, `F_2KRON`, `F_3KRON`, `F_4KRON`: Flux within 1, 2, 3, or 4 Kron radii between `Z_NB_MIN` and `Z_NB_MAX`.
- `F_KRON_ERR`, `F_2KRON_ERR`, `F_3KRON_ERR`, `F_4KRON_ERR`: Propagated errors on the flux measurements.
- `Z_DIFF_FLAG`: Flag indicating that the maximum/minimum narrow-band window was used.
- `RKRON`: Kron radius.
- `RKRON_FLAG`: Flag indicating that the minimum radius was reached in the `RKRON` calculation.

#### Example usage ####

TBD

## Advanced Usage ##

In the examples above we presented only the most simple way of using LSDCat. Here we provide an introduction, by example, to the more advanced LSDCat features.

### Subtraction of continuum objects prior to matched filtering ###

It is strongly recommended to subtract objects that have detectable continuum signal within the datacube. One possibility to remove continuum signal is to create a datacube median-filtered in the spectral direction and to subtract this median-filtered version from the original datacube. For this operation we include the tool `median-filter-cube.py` in the `./tools/` folder (see the documentation in the "Additional tools" section). The following image shows a white-light image of a 1h MUSE cube before (left) and after (right) the application of `median-filter-cube.py` with the width parameter set to `-W 151`.

![Median Filter Example](./doc/median_filter_example.png)

### Utilising a spatial mask in the matched filtering process ###

In some fields it might be beneficial to mask out certain regions of the datacube for better results with LSDCat.
For example, bright stars or quasars are not well subtracted from the datacube with the `median-filter-cube.py` tool. Moreover, in some cases systematic noise residuals might be present near the borders of the datacube. To overcome these issues, a mask can be used in the matched filtering process.

As an example we show here a white-light image of a MUSE datacube with 2 very bright continuum objects and some systematic residuals near the borders. Using the software [SAO ds9](http://ds9.si.edu/site/Home.html), regions were drawn that should be excluded from the matched filtering process.

![ds9 region mask example](./doc/ds9.png)

Using the Python library [pyregion](https://github.com/astropy/pyregion) these regions can be converted into a binary pixel mask. If the region is saved as `mask_region.reg` and your datacube (here called `cube_with_wl_image_in_hdu4.fits`) contains a white-light image, then the conversion can be done as follows:

```python
from astropy.io import fits
import pyregion

wl_image = fits.getdata('cube_with_wl_image_in_hdu4.fits', 4)
wl_header = fits.getheader('cube_with_wl_image_in_hdu4.fits', 4)
mask_regions = pyregion.open('mask_region.reg').as_imagecoord(wl_header)
mask = mask_regions.get_mask(shape=wl_image.shape)
mask = ~mask
fits.writeto('mask.fits', mask.astype('int'))
```

![Mask image example](doc/mask_image.png)

This mask can now be utilised in the matched filtering with `lsd_cc_spatial.py`, e.g.

```
lsd_cc_spatial.py -i median_filtered_cube_with_wl_image_in_hdu4.fits -m mask.fits
```

### Ignoring empty pixels in matched filtering process ###

TBD

### Scripting ###

When working with many datacubes it becomes useful to write little shell scripts that contain all the LSDCat commands in order, with their relevant parameters. Such a script in the folder of the original data also serves the purpose of documenting the whole procedure. Below is an example of this type of script for one datacube of the MUSE-Wide survey:

```bash
#!/bin/bash

# PSF parameters
field_id=01
p0=0.836496041376
p1=-4.42958396561e-05

# PATH + filenames - note how the filenames are generated here from field_id
field_path=/store/collab/herenz/musewide_candels_lsdcat_v1.0/
input_cube=${field_path}DATACUBE_candels-cdfs-${field_id}_v1.0.fits
effnoise_file=${field_path}EFFNOISE_5px_candels-cdfs-${field_id}_v1.0.fits
input_cube_base=`basename ${input_cube}`

echo Field ID: $field_id
echo p0: $p0
echo p1: $p1

# median filtering
med_filt_output=${output_dir}median_filtered_${input_cube_base}
med_filt_com="median-filter-cube.py ${input_cube} --signalHDU=1 --varHDU=2 --num_cpu=48 --width=151 \
--output=${med_filt_output}"
echo ${med_filt_com}
${med_filt_com}

# applying effective noise
apply_eff_noise_com="apply_eff_noise.py ${med_filt_output} ${effnoise_file} --NHDU=1 --blowup --rsexp \
--output=${med_filt_output}_effnoised.fits"
echo ${apply_eff_noise_com}
${apply_eff_noise_com}

# spatial cross-correlation
med_filt_output_base=`basename ${med_filt_output} .fits`
spat_cced_out=${output_dir}spat_cced_${med_filt_output_base}_effnoised.fits
lsd_cc_spat_com="lsd_cc_spatial.py --input=${med_filt_output}_effnoised.fits --SHDU=0 --NHDU=4 \
--threads=48 --gaussian --lambda0=7050 -p0=${p0} -p1=${p1} --output=${spat_cced_out}"
echo ${lsd_cc_spat_com}
${lsd_cc_spat_com}

# spectral cross-correlation
spat_cced_out_base=`basename ${spat_cced_out}`
spec_cced_out=${output_dir}spec_cced_${spat_cced_out_base}
lsd_cc_spec_com="lsd_cc_spectral.py --input=${spat_cced_out} --threads=48 --FWHM=250 --SHDU=0 --NHDU=1 \
--output=${spec_cced_out}"
echo ${lsd_cc_spec_com}
${lsd_cc_spec_com}

# creation of S/N cube
s2n_com="s2n-cube.py --input=${spec_cced_out} \
--output=${output_dir}s2n_opt_v250_candels-cdfs-${field_id}_v0.2.fits --nanmask=${input_cube} \
--nanmaskhdu=3"
echo ${s2n_com}
${s2n_com}
```

This script first utilises `median-filter-cube.py` to remove continuum signal from the datacube; then a custom MUSE-Wide routine is called to correct the pipeline
propagated variance (see the MUSE-Wide paper - Herenz et al., in prep.). Next the matched filtering is performed with the routines `lsd_cc_spatial.py` and `lsd_cc_spectral.py`, where for the spatial part the polynomial parameters are given at the top of the script. Lastly, the S/N cube is generated from the output of the matched filtering routines.

## Additional tools ##

The following set of additional tools is shipped with LSDCat in the `./tools/` sub-folder. These scripts provide convenience functions to pre- or post-process datacubes within the context of an LSDCat run.

- `s2n-cube.py`: Create a signal-to-noise datacube from a FITS file containing a signal and a noise HDU.
- `median-filter-cube.py`: Subtract a version of the datacube that has been median-filtered in the spectral direction. Can be used to remove sources that have significant detectable continuum signal within the datacube.
- `fov_map_from_expcube.py`: Creates an exposure map image from an exposure map datacube. An exposure map datacube contains in every voxel the number of exposures that went into this voxel, while an exposure map image contains the number of exposures for each spatial pixel. Such a map can be used, e.g., to identify regions without any exposures (e.g. field borders).

### s2n-cube.py ###

Call signature:

```
s2n-cube.py [-h] -i INPUT [-n NANMASK] [--nanmaskhdu NANMASKHDU] [-o OUTPUT]
            [-S SHDU] [-N NHDU] [--sigma] [--clobber] [--float64]
```

Description:

- `-h`: Shows a long help message.

### median-filter-cube.py ###

Call signature:

```
median-filter-cube.py [-h] [-S SIGNALHDU] [-V VARHDU] [-o OUTPUT] [-W WIDTH]
                      [-t NUM_CPU] [--memmap] fitscube
```

Description:

- `-h`: Shows a long help message.
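The continuum-subtraction idea behind `median-filter-cube.py` can be sketched with scipy. This is a conceptual illustration under our own assumptions; the `subtract_spectral_median` helper is not the tool's actual implementation:

```python
import numpy as np
from scipy.ndimage import median_filter

def subtract_spectral_median(cube, width=151):
    """Subtract a running median along the spectral (first) axis of the
    cube.  The wide median window removes slowly varying continuum while
    leaving narrow emission lines almost untouched.
    (Illustration only, not the actual median-filter-cube.py code.)"""
    continuum = median_filter(cube, size=(width, 1, 1), mode='nearest')
    return cube - continuum

# A flat continuum plus one narrow emission line: after subtraction only
# the line survives.
cube = np.ones((301, 2, 2))
cube[150, 0, 0] += 5.0
residual = subtract_spectral_median(cube)
print(residual[150, 0, 0])  # 5.0
```

The width parameter (here the default `-W 151` value used in the scripting example) must be large compared to the expected line widths, otherwise the filter starts to eat into the emission lines themselves.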