
Binospec Data Reduction Pipeline

Pipeline Description and Installation

The binospec pipeline is written in IDL and requires IDL 8.x to run, ideally on a machine with at least 16 GB of RAM. To install the code, first download the binospec repository. With git, you can do:

git clone

You will also need to download the mmirs pipeline, though you only need its 'deps' folder, which contains various utilities also needed for binospec:

git clone

  • The 'binospec' and 'deps' folders both need to be in your IDL path.
  • You must set the shell environment variable BINO_PIPELINE_PATH to point to the binospec/pipeline subdirectory of the code.

Calling Sequence and Options

The main pipeline driver is called binospec_quickreduce. Below is a typical calling sequence with some of the most common options highlighted.

prefix = '/data/bino/reductions/2018.0612/cattarget_1659/'
rawdir = prefix + 'crunched/'
outdir = prefix + 'reduced/'
logfile = outdir + 'logfile.txt'
sci1  = rawdir + ['sci_img_2018.0612.064247.fits', 'sci_img_2018.0612.070335.fits', 'sci_img_2018.0612.072424.fits'] ;science exposures
arc1  = rawdir + ['sci_img_2018.0612.073102.fits', 'sci_img_2018.0612.073651.fits']                                  ;comparison arc raw files
flat1 = rawdir + ['sci_img_2018.0612.073901.fits', 'sci_img_2018.0612.074104.fits', 'sci_img_2018.0612.074307.fits'] ;flat field raw files
skyflat=rawdir + ['sci_img_2018.0612.012835.fits', 'sci_img_2018.0612.013106.fits', 'sci_img_2018.0612.013252.fits'] ;typically only included in 1000l observations

binospec_quickreduce, sci1, flat = flat1, skyflat=skyflat, arc = arc1, tmpdir = outdir, $
    bright = 1, $        ;; bright targets? 0 = No
    /barycorr, $         ;; perform barycentric correction
    /extract, $          ;; perform extraction
    /extr_opt, $         ;; perform optimal extraction
    extapw = 4, $        ;; FWHM of a Gaussian for optimal extraction (4 = 1 arcsec)
    extr_detect = 1, $   ;; detect targets in slits for extraction
    extr_estimate = 0, $ ;; empirically estimate extraction profile (use only for relatively bright targets)
    /abscal, $           ;; absolute flux calibration
    oh = 0, $            ;; use OH lines to build the wavelength solution (0 = off)
    /skysubtarget, $     ;; do additional sky subtraction in linearized data
    /split1d, $          ;; generate individual files for each 1D spectrum, IRAF-readable format
    /sub_sc_sci, $       ;; mask CCD charge trap defect regions
    /skylinecorr, wl_skyline = 557.734d ;; use sky line to perform illumination correction

Some initial suggestions for how to set these parameters include:

  • The "bright" keyword should typically be set to 1, unless it is known that the slits are very short or tightly packed, or the targets are faint and will largely lack detectable continuum.
  • The /oh keyword should be set for any configurations where the central wavelength is longer than about 6000 Å; set oh=0 for most configurations with the 1000l grating and for some bluer 600-line configurations. It should also be set to zero for any short exposures (shorter than ~120 s).
  • The extr_detect keyword should usually be set (extr_detect = 1).
  • The extr_estimate keyword should usually be left unset (extr_estimate = 0), unless the targets are bright stars, including but not limited to standard-star observations. If there are multiple objects visible in a longslit, you can limit the extraction window to only the desired object by setting n_apwmax=X, where X is an integer number of FWHM for the size of the extraction window; try 3. (It is actually 3 FWHM on each side of the object, so the total width would be 6 FWHM.)
  • The /skysubtarget keyword should be set (for now), except when there are bright continuum sources in short slits, or when faint extended sources fill most of the slit.
  • extapw is the extraction window in pixels. The default of 4 corresponds to 1 arcsec and is generally a good starting point, but if conditions were particularly bad you can increase it to 5 or 6 as desired.
  • The /skylinecorr keyword uses a specific sky line to perform an illumination correction. When set (usually a good idea), you must also provide, via the wl_skyline keyword, the wavelength of the bright sky line closest to the center of the wavelength coverage of the data. The list of tested lines (in nm) is:

    435.8335, 557.734, 630.0304, 686.3951, 734.0881, 799.3327, 846.5353, 888.5843, 950.2808

  • In addition, the pipeline can be run with the "/series" option set to reduce each science exposure individually without coadding, for the case of a time-series observation. When setting /series, please also set the keyword skysubalgorithm=2, which will enable a basic local sky subtraction in each slit (the more advanced sky subtraction is not yet available in series mode). Two additional keywords control the number of adjacent images to use for cosmic-ray cleaning (n_img_clean=3), and the number of adjacent images to stack if desired (n_stack=1).
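
The extraction-window arithmetic above is simple enough to script as a quick sanity check. In the Python sketch below, the helper names are hypothetical, and the pixel scale of 0.25 arcsec per pixel is inferred from the statement that extapw = 4 corresponds to 1 arcsec.

```python
ARCSEC_PER_PIXEL = 0.25  # assumption: inferred from "extapw = 4 -> 1 arcsec"

def extapw_to_arcsec(extapw):
    """Gaussian FWHM for optimal extraction, converted from pixels to arcsec."""
    return extapw * ARCSEC_PER_PIXEL

def total_window_fwhm(n_apwmax):
    """Total extraction window in FWHM: n_apwmax on each side of the object."""
    return 2 * n_apwmax

print(extapw_to_arcsec(4))   # 1.0 arcsec (the default)
print(total_window_fwhm(3))  # 6 FWHM total for the suggested n_apwmax=3
```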

Reduced Data Products

MMT users should be able to download their reduced data directly from the MMT scheduler website. If you have arrived at this page after receiving an email notification that your data is available to download, please note the following:

You will have one set of data products for each queue catalog target that was observed. If a target was observed on multiple nights, you will receive multiple sets of reduced data, one per night. Any co-adding across nights must currently be performed by the user on the reduced files.

We provide multiple data products, both 2D and extracted 1D spectra with and without pseudo-flux calibration, as described below. If you choose to run or re-run the pipeline yourself, you will see other intermediate-step output files as well, but the files listed below represent the most useful final data products.

  1. Co-added, sky-subtracted, rectified/linearized 2D spectra and their uncertainties (indicated with "err" in the filename), stored in the following multi-extension FITS files:

    • Flat fielded in total counts but not flux calibrated
      • obj_counts_slits_lin.fits
      • obj_counts_err_slits_lin.fits
    • Corrected for blaze function and flux-calibrated (not accounting for sky or slit losses, flux hitting the detector only)
      • obj_abs_slits_lin.fits
      • obj_abs_err_slits_lin.fits

    Every slit is stored in a separate FITS extension. The FITS header for each extension includes wavelength WCS information, as well as SLIT* keywords providing information about the target and slit geometry. By default we apply a barycentric correction to all wavelength solutions. These files can be displayed using ds9 as follows: ds9 -mecube obj_counts_slits_lin.fits

  2. Quick-look versions of the results in counts, with all slits displayed together as they appeared on the slit mask, shifted so that the wavelengths are aligned (a column of constant X pixel corresponds to constant wavelength). These files have two FITS extensions, one for Binospec side A and one for side B.

    • obj_counts_qlook.fits
  3. Co-added, sky-subtracted 1D spectra and their uncertainties, stored as a multi-extension FITS file indicated with "extr" in the filename:

    • Flat fielded in total counts but not flux calibrated.
      • obj_counts_slits_extr.fits
    • Corrected for blaze function and relatively flux calibrated (throughput-corrected).
      • obj_abs_slits_extr.fits

    These files contain one spectrum per row in the 1st FITS extension. The spectra are aligned on a common wavelength scale, and the corresponding WCS coordinates are provided in the header. The 2nd FITS extension contains the uncertainty for each spectrum, the 3rd contains the sky spectrum that was subtracted, and the 4th and 5th extensions contain FITS binary tables with target information from the mask design (for mask sides A and B, respectively).

  4. 1D spectra split into individual FITS files, one spectrum per file. They are distributed as tar files whose contents are the 1D files, with file names derived from the object names submitted with the mask design. These files have FITS header keywords that improve compatibility with IRAF.

    • obj_counts_1D.tar
    • obj_abs_1D.tar

Every FITS file produced by the pipeline contains a SOFTWARE keyword in the primary HDU giving the pipeline version used to reduce the dataset. Should you have any questions or feedback related to the data reduction, please include the pipeline version in the correspondence.