Sam Skillman committed a2b3521 Merge

Merging yt into stable.


Files changed (159)

.bzrignore

-__config__.py
-build
-hdf5.cfg
-setuptools-0.6c9-py2.5.egg
 build
 yt.egg-info
 __config__.py
+freetype.cfg
+hdf5.cfg
+png.cfg
+yt/frontends/ramses/_ramses_reader.cpp
+yt/utilities/amr_utils.c
+yt/utilities/kdtree/forthonf2c.h
+yt/utilities/libconfig_wrapper.c
 syntax: glob
 *.pyc
 .*.swp
                                 Oliver Hahn (ohahn@stanford.edu)
                                 John ZuHone (jzuhone@cfa.harvard.edu)
                                 Chris Malone (cmalone@mail.astro.sunysb.edu)
+                                Cameron Hummels (chummels@astro.columbia.edu)
+                                Stefan Klemer (sklemer@phys.uni-goettingen.de)
+                                Andrew Myers (atmyers@astro.berkeley.edu)
 
 We also include the Delaunay Triangulation module written by Robert Kern of
 Enthought, the cmdln.py module by Trent Mick, and the progressbar module by
 Nilton Volpato.  The PasteBin interface code (as well as the PasteBin itself)
 was written by the Pocoo collective (pocoo.org).  The RamsesRead++ library was
-developed by Oliver Hahn.  Large parts of this code were guided by discussions
-with Tom Abel, Ralf Kaehler, Mike Norman and Greg Bryan.
+developed by Oliver Hahn.  yt also includes a slightly-modified version of
+libconfig (http://www.hyperrealm.com/libconfig/) and an unmodified version of
+several routines from HEALpix (http://healpix.jpl.nasa.gov/).
+
+Large parts of development of yt were guided by discussions with Tom Abel, Ralf
+Kaehler, Mike Norman and Greg Bryan.
 
 Thanks to everyone for all your contributions!

File contents unchanged.

 
 You can also download a copy of the documentation and unzip it right here:
 
-wget http://yt.enzotools.org/doc/docs_html.zip
-unzip docs_html.zip
+wget http://yt.enzotools.org/doc/download.zip
+unzip download.zip
 
 Then open index.html with your favorite web browser, and be off!

doc/how_to_develop_yt.txt

       classes for data regions, covering grids, time series, and so on.  This
       also includes derived fields and derived quantities.
 
+   astro_objects
+      This is where all objects that represent astrophysical objects should
+      live -- for instance, galaxies, halos, disks, clusters and so on.  These
+      can be expressive, provide astrophysical analysis functionality and will
+      in general be more user-modifiable, user-tweakable, and much less of a
+      black box than data_objects.
+
    analysis_modules
       This is where all mechanisms for processing data live.  This includes
       things like clump finding, halo profiling, halo finding, and so on.  This

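The astro_objects entry above describes a new layer of user-facing astrophysical objects built on top of data_objects.  A minimal, hypothetical sketch of what such a class might look like follows; the Galaxy class, its method, and the dataset name are illustrative assumptions, not part of this changeset or the existing yt API.

    # Hypothetical sketch only -- not actual yt API.  It illustrates the kind
    # of thin, user-tweakable wrapper around a data object that astro_objects
    # is meant to hold.
    from yt.mods import *

    class Galaxy(object):
        def __init__(self, pf, center, radius):
            self.pf = pf
            self.center = center
            self.radius = radius
            # The underlying data_objects sphere; analysis operates on this.
            self.sphere = pf.h.sphere(center, radius)

        def baryonic_mass(self):
            # Example derived quantity; assumes the "CellMassMsun" field exists.
            return self.sphere["CellMassMsun"].sum()

    pf = load("data0005")                    # placeholder dataset name
    gal = Galaxy(pf, [0.5, 0.5, 0.5], 0.05)  # center and radius in code units
    print gal.baryonic_mass()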
doc/install_script.sh

 INST_PNG=1      # Install a local libpng?  Same things apply as with zlib.
 INST_FTYPE=1    # Install FreeType2 locally?
 INST_ENZO=0     # Clone a copy of Enzo?
-INST_FORTHON=1
+INST_SQLITE3=1  # Install a local version of SQLite3?
+INST_FORTHON=1  # Install Forthon?
 
 # If you've got YT some other place, set this to point to it.
 YT_DIR=""
 MPL_SUPP_CFLAGS=""
 MPL_SUPP_CXXFLAGS=""
 
+# If you want to spawn multiple Make jobs, here's the place to set the
+# arguments.  For instance, "-j4"
+MAKE_PROCS=""
+
 #------------------------------------------------------------------------------#
 #                                                                              #
 # Okay, the script starts here.  Feel free to play with it, but hopefully      #
 get_willwont ${INST_FTYPE}
 echo "be installing freetype2"
 
+printf "%-15s = %s so I " "INST_SQLITE3" "${INST_SQLITE3}"
+get_willwont ${INST_SQLITE3}
+echo "be installing SQLite3"
+
 printf "%-15s = %s so I " "INST_FORTHON" "${INST_FORTHON}"
 get_willwont ${INST_FORTHON}
 echo "be installing Forthon (for Halo Finding, etc)"
     cd $1
     if [ ! -z `echo $1 | grep h5py` ]
     then
-	    echo "${DEST_DIR}/bin/python2.6 setup.py configure --hdf5=${HDF5_DIR}"
-	    ( ${DEST_DIR}/bin/python2.6 setup.py configure --hdf5=${HDF5_DIR} 2>&1 ) 1>> ${LOG_FILE} || do_exit
+	    echo "${DEST_DIR}/bin/python2.7 setup.py configure --hdf5=${HDF5_DIR}"
+	    ( ${DEST_DIR}/bin/python2.7 setup.py configure --hdf5=${HDF5_DIR} 2>&1 ) 1>> ${LOG_FILE} || do_exit
     fi
     shift
-    ( ${DEST_DIR}/bin/python2.6 setup.py build   $* 2>&1 ) 1>> ${LOG_FILE} || do_exit
-    ( ${DEST_DIR}/bin/python2.6 setup.py install    2>&1 ) 1>> ${LOG_FILE} || do_exit
+    ( ${DEST_DIR}/bin/python2.7 setup.py build   $* 2>&1 ) 1>> ${LOG_FILE} || do_exit
+    ( ${DEST_DIR}/bin/python2.7 setup.py install    2>&1 ) 1>> ${LOG_FILE} || do_exit
     touch done
     cd ..
 }
 if [ -z "$HDF5_DIR" ]
 then
     echo "Downloading HDF5"
-    get_enzotools hdf5-1.6.9.tar.gz
+    get_enzotools hdf5-1.8.6.tar.gz
 fi
 
 [ $INST_ZLIB -eq 1 ] && get_enzotools zlib-1.2.3.tar.bz2 
 [ $INST_BZLIB -eq 1 ] && get_enzotools bzip2-1.0.5.tar.gz
 [ $INST_PNG -eq 1 ] && get_enzotools libpng-1.2.43.tar.gz
 [ $INST_FTYPE -eq 1 ] && get_enzotools freetype-2.4.4.tar.gz
-get_enzotools Python-2.6.3.tgz
+[ $INST_SQLITE3 -eq 1 ] && get_enzotools sqlite-autoconf-3070500.tar.gz
+get_enzotools Python-2.7.1.tgz
 get_enzotools numpy-1.5.1.tar.gz
 get_enzotools matplotlib-1.0.0.tar.gz
-get_enzotools mercurial-1.7.3.tar.gz
+get_enzotools mercurial-1.8.1.tar.gz
 get_enzotools ipython-0.10.tar.gz
 get_enzotools h5py-1.2.0.tar.gz
 get_enzotools Cython-0.14.tar.gz
 get_enzotools Forthon-0.8.4.tar.gz
-get_enzotools yt.hg
+get_enzotools ext-3.3.2.zip
+get_enzotools ext-slate-110328.zip
 
 if [ $INST_BZLIB -eq 1 ]
 then
         [ ! -e libpng-1.2.43 ] && tar xfz libpng-1.2.43.tar.gz
         echo "Installing PNG"
         cd libpng-1.2.43
-        ( ./configure CFLAGS=-I${DEST_DIR}/include --prefix=${DEST_DIR}/ 2>&1 ) 1>> ${LOG_FILE} || do_exit
+        ( ./configure CPPFLAGS=-I${DEST_DIR}/include CFLAGS=-I${DEST_DIR}/include --prefix=${DEST_DIR}/ 2>&1 ) 1>> ${LOG_FILE} || do_exit
         ( make install 2>&1 ) 1>> ${LOG_FILE} || do_exit
         touch done
         cd ..
 
 if [ -z "$HDF5_DIR" ]
 then
-    if [ ! -e hdf5-1.6.9/done ]
+    if [ ! -e hdf5-1.8.6/done ]
     then
-        [ ! -e hdf5-1.6.9 ] && tar xfz hdf5-1.6.9.tar.gz
+        [ ! -e hdf5-1.8.6 ] && tar xfz hdf5-1.8.6.tar.gz
         echo "Installing HDF5"
-        cd hdf5-1.6.9
+        cd hdf5-1.8.6
         ( ./configure --prefix=${DEST_DIR}/ --enable-shared 2>&1 ) 1>> ${LOG_FILE} || do_exit
-        ( make install 2>&1 ) 1>> ${LOG_FILE} || do_exit
+        ( make ${MAKE_PROCS} install 2>&1 ) 1>> ${LOG_FILE} || do_exit
         touch done
         cd ..
     fi
 fi
 export HDF5_API=16
 
-if [ ! -e Python-2.6.3/done ]
+if [ $INST_SQLITE3 -eq 1 ]
+then
+    if [ ! -e sqlite-autoconf-3070500/done ]
+    then
+        [ ! -e sqlite-autoconf-3070500 ] && tar xfz sqlite-autoconf-3070500.tar.gz
+        echo "Installing SQLite3"
+        cd sqlite-autoconf-3070500
+        ( ./configure --prefix=${DEST_DIR}/ 2>&1 ) 1>> ${LOG_FILE} || do_exit
+        ( make ${MAKE_PROCS} install 2>&1 ) 1>> ${LOG_FILE} || do_exit
+        touch done
+        cd ..
+    fi
+fi
+
+if [ ! -e Python-2.7.1/done ]
 then
     echo "Installing Python.  This may take a while, but don't worry.  YT loves you."
-    [ ! -e Python-2.6.3 ] && tar xfz Python-2.6.3.tgz
-    cd Python-2.6.3
+    [ ! -e Python-2.7.1 ] && tar xfz Python-2.7.1.tgz
+    cd Python-2.7.1
     ( ./configure --prefix=${DEST_DIR}/ 2>&1 ) 1>> ${LOG_FILE} || do_exit
 
-    ( make 2>&1 ) 1>> ${LOG_FILE} || do_exit
+    ( make ${MAKE_PROCS} 2>&1 ) 1>> ${LOG_FILE} || do_exit
     ( make install 2>&1 ) 1>> ${LOG_FILE} || do_exit
+    ( ln -sf ${DEST_DIR}/bin/python2.7 ${DEST_DIR}/bin/pyyt 2>&1 ) 1>> ${LOG_FILE}
     touch done
     cd ..
 fi
 
-export PYTHONPATH=${DEST_DIR}/lib/python2.6/site-packages/
+export PYTHONPATH=${DEST_DIR}/lib/python2.7/site-packages/
 
 if [ $INST_HG -eq 1 ]
 then
     echo "Installing Mercurial."
-    do_setup_py mercurial-1.7.3
+    do_setup_py mercurial-1.8.1
     export HG_EXEC=${DEST_DIR}/bin/hg
 else
     # We assume that hg can be found in the path.
     elif [ ! -e yt-hg ] 
     then
         YT_DIR="$PWD/yt-hg/"
+        ( ${HG_EXEC} --debug clone http://hg.enzotools.org/yt-supplemental/ 2>&1 ) 1>> ${LOG_FILE}
        # Recently the hg server has had some issues with timeouts, so we
        # now do a single full clone of the repository.
-        ( ${HG_EXEC} --debug clone -r0 http://hg.enzotools.org/yt/ ./yt-hg 2>&1 ) 1>> ${LOG_FILE}
-        # Now we unbundle our previously downloaded bundle of changesets.
-        # This bundle has been created to include most of the recent
-        # changesets, which should avoid any problematic timeouts.
-        ( ${HG_EXEC} -R ${YT_DIR} unbundle yt.hg 2>&1 ) 1>> ${LOG_FILE}
-        # Now we pull new changes
-        ( ${HG_EXEC} -R ${YT_DIR} pull 2>&1 ) 1>> ${LOG_FILE}
+        ( ${HG_EXEC} --debug clone http://hg.enzotools.org/yt/ ./yt-hg 2>&1 ) 1>> ${LOG_FILE}
         # Now we update to the branch we're interested in.
         ( ${HG_EXEC} -R ${YT_DIR} up -C ${BRANCH} 2>&1 ) 1>> ${LOG_FILE}
     elif [ -e yt-hg ] 
 unset LDFLAGS 
 
 echo "Installing distribute"
-( ${DEST_DIR}/bin/python2.6 ${YT_DIR}/distribute_setup.py 2>&1 ) 1>> ${LOG_FILE} || do_exit
+( ${DEST_DIR}/bin/python2.7 ${YT_DIR}/distribute_setup.py 2>&1 ) 1>> ${LOG_FILE} || do_exit
 
 echo "Installing pip"
-( ${DEST_DIR}/bin/easy_install-2.6 pip 2>&1 ) 1>> ${LOG_FILE} || do_exit
+( ${DEST_DIR}/bin/easy_install-2.7 pip 2>&1 ) 1>> ${LOG_FILE} || do_exit
 
 do_setup_py numpy-1.5.1 ${NUMPY_ARGS}
 
 [ $INST_PNG -eq 1 ] && echo $PNG_DIR > png.cfg
 [ $INST_FTYPE -eq 1 ] && echo $FTYPE_DIR > freetype.cfg
 [ $INST_FORTHON -eq 1 ] && ( ( cd yt/utilities/kdtree && FORTHON_EXE=${DEST_DIR}/bin/Forthon make 2>&1 ) 1>> ${LOG_FILE} )
-( ${DEST_DIR}/bin/python2.6 setup.py develop 2>&1 ) 1>> ${LOG_FILE} || do_exit
+( ${DEST_DIR}/bin/python2.7 setup.py develop 2>&1 ) 1>> ${LOG_FILE} || do_exit
 touch done
 cd $MY_PWD
 
     cd $MY_PWD
 fi
 
-echo
-echo
-echo "========================================================================"
-echo
-echo "yt is now installed in $DEST_DIR ."
-echo "To run from this new installation, the a few variables need to be"
-echo "prepended with the following information:"
-echo
-echo "YT_DEST         => $DEST_DIR"
-echo "PATH            => $DEST_DIR/bin/"
-echo "PYTHONPATH      => $DEST_DIR/lib/python2.6/site-packages/"
-echo "LD_LIBRARY_PATH => $DEST_DIR/lib/"
-echo
-echo "For interactive data analysis and visualization, we recommend running"
-echo "the IPython interface, which will become more fully featured with time:"
-echo
-echo "$DEST_DIR/bin/iyt"
-echo
-echo "For command line analysis run:"
-echo
-echo "$DEST_DIR/bin/yt"
-echo
-echo "Note of interest: this installation will use the directory:"
-echo "    $YT_DIR"
-echo "as the source for all the YT code.  This means you probably shouldn't"
-echo "delete it, but on the plus side, any changes you make there are"
-echo "automatically propagated."
-if [ $INST_HG -eq 1 ]
+# Now we open up the ext repository
+if [ ! -e ext-3.3.2/done ]
 then
-  echo
-  echo "Mercurial has also been installed:"
-  echo
-  echo "$DEST_DIR/bin/hg"
-  echo
+    ( unzip -o ext-3.3.2.zip 2>&1 ) 1>> ${LOG_FILE} || do_exit
+    ( echo "Symlinking ext-3.3.2 as ext-resources" 2>&1 ) 1>> ${LOG_FILE}
+    ln -sf ext-3.3.2 ext-resources
+    touch ext-3.3.2/done
 fi
-if [ $INST_ENZO -eq 1 ]
+
+# Now we open up the ext theme
+if [ ! -e ext-slate-110328/done ]
 then
-  echo "Enzo has also been checked out, but not built."
-  echo
-  echo "$DEST_DIR/src/enzo-hg-stable"
-  echo
-  echo "The value of YT_DEST can be used as an HDF5 installation location."
-  echo "Questions about Enzo should be directed to the Enzo User List."
-  echo
+    ( unzip -o ext-slate-110328.zip 2>&1 ) 1>> ${LOG_FILE} || do_exit
+    ( echo "Symlinking ext-slate-110328 as ext-theme" 2>&1 ) 1>> ${LOG_FILE}
+    ln -sf ext-slate-110328 ext-theme
+    touch ext-slate-110328/done
 fi
-echo
-echo "For support, see one of the following websites:"
-echo
-echo "    http://yt.enzotools.org/wiki/"
-echo "    http://yt.enzotools.org/doc/"
-echo
-echo "Please also join the mailing list:"
-echo 
-echo "    http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org"
-echo
-echo "========================================================================"
-echo
-echo "Oh, look at me, still talking when there's science to do!"
-echo "Good luck, and email the user list if you run into any problems."
+
+if [ -e $HOME/.matplotlib/fontList.cache ] && \
+   ( grep -q python2.6 $HOME/.matplotlib/fontList.cache )
+then
+    echo "WARNING WARNING WARNING WARNING WARNING WARNING WARNING"
+    echo "*******************************************************"
+    echo
+    echo "  You likely need to remove your old fontList.cache!"
+    echo "  You can do this with this command:"
+    echo ""
+    echo "  rm $HOME/.matplotlib/fontList.cache"
+    echo
+    echo "*******************************************************"
+fi
+
+function print_afterword
+{
+    echo
+    echo
+    echo "========================================================================"
+    echo
+    echo "yt is now installed in $DEST_DIR ."
+    echo "To run from this new installation, the a few variables need to be"
+    echo "prepended with the following information:"
+    echo
+    echo "YT_DEST         => $DEST_DIR"
+    echo "PATH            => $DEST_DIR/bin/"
+    echo "PYTHONPATH      => $DEST_DIR/lib/python2.7/site-packages/"
+    echo "LD_LIBRARY_PATH => $DEST_DIR/lib/"
+    echo
+    echo "For interactive data analysis and visualization, we recommend running"
+    echo "the IPython interface, which will become more fully featured with time:"
+    echo
+    echo "$DEST_DIR/bin/iyt"
+    echo
+    echo "For command line analysis run:"
+    echo
+    echo "$DEST_DIR/bin/yt"
+    echo
+    echo "To bootstrap a development environment for yt, run:"
+    echo 
+    echo "$DEST_DIR/bin/yt bootstrap_dev"
+    echo
+    echo "Note of interest: this installation will use the directory:"
+    echo "    $YT_DIR"
+    echo "as the source for all the YT code.  This means you probably shouldn't"
+    echo "delete it, but on the plus side, any changes you make there are"
+    echo "automatically propagated."
+    if [ $INST_HG -eq 1 ]
+    then
+      echo
+      echo "Mercurial has also been installed:"
+      echo
+      echo "$DEST_DIR/bin/hg"
+      echo
+    fi
+    if [ $INST_ENZO -eq 1 ]
+    then
+      echo "Enzo has also been checked out, but not built."
+      echo
+      echo "$DEST_DIR/src/enzo-hg-stable"
+      echo
+      echo "The value of YT_DEST can be used as an HDF5 installation location."
+      echo "Questions about Enzo should be directed to the Enzo User List."
+      echo
+    fi
+    echo
+    echo "For support, see the website and join the mailing list:"
+    echo
+    echo "    http://yt.enzotools.org/"
+    echo "    http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org"
+    echo
+    echo "========================================================================"
+    echo
+    echo "Oh, look at me, still talking when there's science to do!"
+    echo "Good luck, and email the user list if you run into any problems."
+}
+
+print_afterword
+print_afterword >> ${LOG_FILE}

scripts/eyt

-#!python
-
-from yt.mods import *
-import os
-namespace = locals().copy()
-
-doc = """\
-
-Welcome to Enzo-embedded yt!
-
-The different processors are accessible via the 'mec' variable.  To get grid
-data, try using the get_grid_field function.  When done, be sure to kill the
-processes with 'mec.kill()'!
-
-Information about the mec variable, an instance of MultiEngineClient, can be
-found in the IPython documentation:
-
-http://ipython.scipy.org/doc/manual/html/parallel/parallel_multiengine.html
-
-You can use the '%px' command to issue commands on all the engines
-simultaneously.
-
-"""
-
-import IPython.Shell
-
-if "DISPLAY" in os.environ:
-    try:
-        ip_shell = IPython.Shell.IPShellMatplotlibWX(user_ns=namespace)
-    except ImportError:
-        ip_shell = IPython.Shell.IPShellMatplotlib(user_ns=namespace)
-else:
-    ip_shell = IPython.Shell.IPShellMatplotlib(user_ns=namespace)
-
-ip = ip_shell.IP.getapi()
-
-import os   
-import glob
-import itertools
-
-ip = ip_shell.IP.getapi()
-ip.ex("from yt.mods import *")
-from IPython.kernel import client
-
-class YTClient(object):
-    mec = None
-
-    def __init__(self):
-        self.refresh()
-
-    def eval(self, varname, targets = None):
-        """
-        This function pulls anything from the remote host, but it will overwrite
-        any variable named __tmp.  This is to get around nested variables and
-        properties on the remote host.
-        """
-        self.mec.execute("__tmp = %s" % varname, targets=targets)
-        result = self.mec.pull("__tmp", targets=targets)
-        return result
-
-    def get_grid_field(self, grid_index, field_name, raw=False):
-        """
-        Return the numpy array representing a piece of field information.
-        Note that *grid_index* is the actual index into the array, which is ID-1.
-
-        If *raw* is set to True, then only raw original fields from the hierarchy
-        are returned.  This will include ghost zones, and derived fields are
-        inaccessible.
-        """
-        proc = int(self.enzo.hierarchy_information["GridProcs"][grid_index])
-        if not raw: # go through yt
-            result = self.eval("pf.h.grids[%s]['%s']" % (
-                        grid_index, field_name), [proc])[0]
-        else: # go through enzo module
-            result = self.eval("enzo.grid_data[%s + 1]['%s']" % (
-                        grid_index, field_name), [proc])[0].swapaxes(0,2)
-        return result
-
-    def refresh(self):
-        if self.mec is not None: self.mec.kill()
-        self.mec = client.MultiEngineClient()
-        self.mec.activate()
-        # there are some blocks in hierarchy instantiation, so
-        # we pre-instantiate
-        self.mec.execute("pf.h") 
-        self.enzo = enzo_module_proxy(self)
-        self.pf = EnzoStaticOutputProxy(ytc=self)
-        ip.to_user_ns(dict(
-            mec=self.mec, ytc=self, pf = self.pf))
-
-class enzo_module_proxy(object):
-    def __init__(self, ytc):
-        self.hierarchy_information = ytc.eval("enzo.hierarchy_information", [0])[0]
-        self.conversion_factors = ytc.eval("enzo.conversion_factors", [0])[0]
-        self.yt_parameter_file = ytc.eval("enzo.yt_parameter_file", [0])[0]
-
-from yt.lagos import EnzoStaticOutputInMemory, EnzoHierarchyInMemory
-from yt.lagos.HierarchyType import _data_style_funcs
-from yt.lagos.DataReadingFuncs import BaseDataQueue
-
-class EnzoHierarchyProxy(EnzoHierarchyInMemory):
-    _data_style = 'proxy'
-    def _setup_field_lists(self):
-        self.field_list = self.parameter_file.ytc.eval("pf.h.field_list", [0])[0]
-
-    def _obtain_enzo(self):
-        return self.parameter_file.ytc.enzo
-
-class EnzoStaticOutputProxy(EnzoStaticOutputInMemory):
-    _data_style = 'proxy'
-    _hierarchy_class = EnzoHierarchyProxy
-
-    def __init__(self, *args, **kwargs):
-        self.ytc = kwargs.pop("ytc")
-        EnzoStaticOutputInMemory.__init__(self, *args, **kwargs)
-
-    def _obtain_enzo(self):
-        return self.ytc.enzo
-
-def _read_proxy_slice(self, grid, field, axis, coord):
-    data = ytc.get_grid_field(grid.id - 1, field, raw=True)
-    sl = [slice(3,-3), slice(3,-3), slice(3,-3)]
-    sl[axis] = slice(coord + 3, coord + 4)
-    sl = tuple(reversed(sl))
-    return data[sl].swapaxes(0,2)
-
-class DataQueueProxy(BaseDataQueue):
-    def __init__(self, ghost_zones = 3):
-        self.my_slice = (slice(ghost_zones, -ghost_zones),
-                         slice(ghost_zones, -ghost_zones),
-                         slice(ghost_zones, -ghost_zones))
-        BaseDataQueue.__init__(self)
-
-    def _read_set(self, grid, field):
-        data = ytc.get_grid_field(grid.id - 1, field, raw=True)
-        return data[self.my_slice]
-
-    def modify(self, field):
-        return field.swapaxes(0,2)
-
-def proxy_exception(*args, **kwargs):
-    return KeyError
-
-# things like compare buffers over time
-
-_data_style_funcs['proxy'] = \
-    (None, None, None, _read_proxy_slice, proxy_exception, DataQueueProxy)
-
-ytc = YTClient()
-mec = ytc.mec
-
-ip_shell.mainloop(sys_exit=1,banner=doc)
 #!python
-import os
+import os, re
 from yt.mods import *
+from yt.data_objects.data_containers import AMRData
 namespace = locals().copy()
 
 doc = """\
 
 #main()
 
+
+# Now we add some tab completers, in the vein of:
+# http://pymel.googlecode.com/svn/trunk/tools/ipymel.py
+# We'll start with some fields.
+
+def yt_fieldname_completer(self, event):
+    """Match dictionary completions"""
+    #print "python_matches", event.symbol
+    #text = event.symbol # Not sure why this no longer works
+    text = event.line
+    #print repr(text)
+    # Another option, seems to work great. Catches things like ''.<tab>
+    #print repr(text), dir(text)
+    #m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
+    m = re.match(r"(\S+(\.\w+)*)\[[\'\\\"](\w*)$", text)
+
+    if not m:
+        raise IPython.ipapi.TryNext 
+    
+    expr, attr = m.group(1, 3)
+    #print "COMPLETING ON ", expr, attr
+    #print type(self.Completer), dir(self.Completer)
+    #print self.Completer.namespace
+    #print self.Completer.global_namespace
+    try:
+        obj = eval(expr, self.Completer.namespace)
+    except:
+        try:
+            obj = eval(expr, self.Completer.global_namespace)
+        except:
+            raise IPython.ipapi.TryNext 
+        
+    if isinstance(obj, (AMRData, ) ):
+        #print "COMPLETING ON THIS THING"
+        all_fields = [f for f in sorted(
+                obj.pf.h.field_list + obj.pf.h.derived_field_list)]
+        #matches = self.Completer.python_matches(text)
+        #print "RETURNING ", all_fields
+        return all_fields
+
+
+    raise IPython.ipapi.TryNext 
+
+ip.set_hook('complete_command', yt_fieldname_completer , re_key = ".*" )
+
 ip_shell.mainloop(sys_exit=1,banner=doc)
-import os, os.path
+import os, os.path, glob
 import sys
 import time
 import subprocess
 import distribute_setup
 distribute_setup.use_setuptools()
 
+from numpy.distutils.misc_util import appendpath
+from numpy.distutils import log
+
+DATA_FILES_HTML = glob.glob('yt/gui/reason/html/*.html')
+DATA_FILES_JS   = glob.glob('yt/gui/reason/html/js/*.js')
+DATA_FILES_PNG  = glob.glob('yt/gui/reason/html/images/*.png') \
+                + glob.glob('yt/gui/reason/html/images/*.ico')
+
+# Verify that we have Cython installed
+try:
+    import Cython
+except ImportError as e:
+    print "Received error on importing Cython:"
+    print e
+    print "Now attempting to install Cython"
+    import pip
+    rv = pip.main(["install",
+              "http://yt.enzotools.org/dependencies/Cython-latest.tar.gz"])
+    if rv == 1:
+        print "Unable to install Cython.  Please report this bug to yt-users."
+        sys.exit(1)
+
+######
+# This next bit comes from Matthew Brett, to get Cython working with NumPy
+# distutils.  I added a bit to get C++ Cython working.
+from os.path import join as pjoin, dirname
+from distutils.dep_util import newer_group
+from distutils.errors import DistutilsError
+
+
+def generate_a_pyrex_source(self, base, ext_name, source, extension):
+    ''' Monkey patch for numpy build_src.build_src method
+
+    Uses Cython instead of Pyrex.
+
+    Assumes Cython is present
+    '''
+    if self.inplace:
+        target_dir = dirname(base)
+    else:
+        target_dir = appendpath(self.build_src, dirname(base))
+    if extension.language == "c++":
+        cplus = True
+        file_ext = ".cpp"
+    else:
+        cplus = False
+        file_ext = ".c"
+    target_file = pjoin(target_dir, ext_name + file_ext)
+    depends = [source] + extension.depends
+    if self.force or newer_group(depends, target_file, 'newer'):
+        import Cython.Compiler.Main
+        log.info("cythonc:> %s" % (target_file))
+        self.mkpath(target_dir)
+        options = Cython.Compiler.Main.CompilationOptions(
+            defaults=Cython.Compiler.Main.default_options,
+            include_path=extension.include_dirs,
+            language=extension.language, cplus = cplus,
+            output_file=target_file)
+        cython_result = Cython.Compiler.Main.compile(source,
+                                                   options=options)
+        if cython_result.num_errors != 0:
+            raise DistutilsError("%d errors while compiling %r with Cython" \
+                  % (cython_result.num_errors, source))
+    return target_file
+
+
+from numpy.distutils.command import build_src
+build_src.build_src.generate_a_pyrex_source = generate_a_pyrex_source
+# End snippet
+######
+
 import setuptools
 
-DATA_FILES = []
-VERSION = "2.0stable"
+VERSION = "2.1stable"
 
 if os.path.exists('MANIFEST'): os.remove('MANIFEST')
 
         url = "http://yt.enzotools.org/",
         license="GPL-3",
         configuration=configuration,
-        data_files=DATA_FILES,
         zip_safe=False,
-        package_data = {'': ['*.so'], }
+        data_files = [('yt/gui/reason/html/', DATA_FILES_HTML),
+                      ('yt/gui/reason/html/js/', DATA_FILES_JS),
+                      ('yt/gui/reason/html/images/', DATA_FILES_PNG)],
         )
     return
 

yt/analysis_modules/api.py

     Halo, \
     HOPHalo, \
     parallelHOPHalo, \
+    LoadedHalo, \
     FOFHalo, \
     HaloList, \
     HOPHaloList, \
     FOFHaloList, \
     parallelHOPHaloList, \
+    LoadedHaloList, \
     GenericHaloFinder, \
     parallelHF, \
     HOPHaloFinder, \
     FOFHaloFinder, \
-    HaloFinder
+    HaloFinder, \
+    LoadHaloes
 
 from .halo_mass_function.api import \
     HaloMassFcn, \

yt/analysis_modules/halo_finding/api.py

     Halo, \
     HOPHalo, \
     parallelHOPHalo, \
+    LoadedHalo, \
     FOFHalo, \
     HaloList, \
     HOPHaloList, \
     FOFHaloList, \
     parallelHOPHaloList, \
+    LoadedHaloList, \
     GenericHaloFinder, \
     parallelHF, \
     HOPHaloFinder, \
     FOFHaloFinder, \
-    HaloFinder
+    HaloFinder, \
+    LoadHaloes

yt/analysis_modules/halo_finding/halo_objects.py

 import numpy as na
 import random
 import sys
+from collections import defaultdict
 
 from yt.funcs import *
 
         self._max_dens = halo_list._max_dens
         self.id = id
         self.data = halo_list._data_source
+        self.pf = self.data.pf
         if indices is not None:
             self.indices = halo_list._base_indices[indices]
         else:
             return None
         self.bin_count = bins
         # Cosmology
-        h = self.data.pf.hubble_constant
-        Om_matter = self.data.pf.omega_matter
-        z = self.data.pf.current_redshift
+        h = self.pf.hubble_constant
+        Om_matter = self.pf.omega_matter
+        z = self.pf.current_redshift
+        period = self.pf.domain_right_edge - \
+            self.pf.domain_left_edge
+        cm = self.pf["cm"]
+        thissize = max(self.size, self.indices.size)
         rho_crit_now = 1.8788e-29 * h**2.0 * Om_matter # g cm^-3
         Msun2g = 1.989e33
         rho_crit = rho_crit_now * ((1.0 + z)**3.0)
-        
         # Get some pertinent information about the halo.
         self.mass_bins = na.zeros(self.bin_count+1, dtype='float64')
-        dist = na.empty(self.indices.size, dtype='float64')
+        dist = na.empty(thissize, dtype='float64')
         cen = self.center_of_mass()
-        period = self.data.pf.domain_right_edge - \
-            self.data.pf.domain_left_edge
         mark = 0
         # Find the distances to the particles. I don't like this much, but I
         # can't see a way to eliminate a loop like this, either here or in
         # Calculate the over densities in the bins.
         self.overdensity = self.mass_bins * Msun2g / \
         (4./3. * math.pi * rho_crit * \
-        (self.radial_bins * self.data.pf["cm"])**3.0)
+        (self.radial_bins * cm)**3.0)
         
 
 class HOPHalo(Halo):
         r"""Not implemented."""
         return self.center_of_mass()
 
+class LoadedHalo(Halo):
+    def __init__(self, pf, id, size=None, CoM=None,
+        max_dens_point=None, group_total_mass=None, max_radius=None, bulk_vel=None,
+        rms_vel=None, fnames=None):
+        self.pf = pf
+        self.id = id
+        self.size = size
+        self.CoM = CoM
+        self.max_dens_point = max_dens_point
+        self.group_total_mass = group_total_mass
+        self.max_radius = max_radius
+        self.bulk_vel = bulk_vel
+        self.rms_vel = rms_vel
+        # fnames = the names of the h5 files that have particle data for this halo
+        self.fnames = fnames
+        self.bin_count = None
+        self.overdensity = None
+        self.saved_fields = {}
+        self.particle_mask = None
+        self.ds_sort = None
+        self.indices = na.array([]) # Never used for a LoadedHalo.
+
+    def __getitem__(self, key):
+        # This function will try to get particle data in one of three ways,
+        # in descending preference.
+        # 1. From saved_fields, e.g. we've already got it.
+        # 2. From the halo h5 files off disk.
+        # 3. Use the unique particle indexes of the halo to select a missing
+        # field from an AMR Sphere.
+        try:
+            # We've already got it.
+            return self.saved_fields[key]
+        except KeyError:
+            # Gotta go get it from the halo h5 files.
+            field_data = self._get_particle_data(self.id, self.fnames,
+                self.size, key)
+            #if key == 'particle_position_x': field_data = None
+            if field_data is not None:
+                self.saved_fields[key] = field_data
+                return self.saved_fields[key]
+            else:
+                # Dynamically create the masking array for particles, and get
+                # the data using standard yt methods. The 1.05 is there to
+                # account for possible silliness having to do with whether
+                # the maximum density or center of mass was used to calculate
+                # the maximum radius.
+                ds = self.pf.h.sphere(self.CoM, 1.05 * self.max_radius)
+                if self.particle_mask is None:
+                    pid = self.__getitem__('particle_index')
+                    sp_pid = ds['particle_index']
+                    self.ds_sort = sp_pid.argsort()
+                    sp_pid = sp_pid[self.ds_sort]
+                    # The result of searchsorted is an array with the positions
+                    # of the indexes in pid as they are in sp_pid. This is
+                    # because each element of pid is in sp_pid only once.
+                    self.particle_mask = na.searchsorted(sp_pid, pid)
+                # We won't store this field below in saved_fields because
+                # that would mean keeping two copies of it, one in the yt
+                # machinery and one here.
+                return ds[key][self.ds_sort][self.particle_mask]
+
+    def _get_particle_data(self, halo, fnames, size, field):
+        # Given a list of file names, a halo, its size, and the desired field,
+        # this returns the particle data for that halo.
+        # First get the list of fields from the first file. Not all fields
+        # are saved all the time (e.g. creation_time, particle_type).
+        mylog.info("Getting field %s from hdf5 halo particle files." % field)
+        f = h5py.File(fnames[0])
+        fields = f["Halo%08d" % halo].keys()
+        # If we don't have this field, we can give up right now.
+        if field not in fields: return None
+        if field == 'particle_index' or field == 'particle_type':
+            # the only integer field
+            field_data = na.empty(size, dtype='int64')
+        else:
+            field_data = na.empty(size, dtype='float64')
+        f.close()
+        offset = 0
+        for fname in fnames:
+            f = h5py.File(fname)
+            this = f["Halo%08d" % halo][field][:]
+            s = this.size
+            field_data[offset:offset+s] = this
+            offset += s
+            f.close()
+        return field_data
+        
+    def center_of_mass(self):
+        r"""Calculate and return the center of mass.
+
+        The center of mass of the halo is directly calculated and returned.
+        
+        Examples
+        --------
+        >>> com = halos[0].center_of_mass()
+        """
+        return self.CoM
+    
+    def maximum_density_location(self):
+        r"""Return the location HOP identified as maximally dense.
+        
+        Return the location HOP identified as maximally dense.
+
+        Examples
+        --------
+        >>> max_dens_loc = halos[0].maximum_density_location()
+        """
+        return self.max_dens_point[1:]
+
+    def maximum_density(self):
+        r"""Return the HOP-identified maximum density.
+
+        Return the HOP-identified maximum density.
+
+        Examples
+        --------
+        >>> max_dens = halos[0].maximum_density()
+        """
+        return self.max_dens_point[0]
+
+    def total_mass(self):
+        r"""Returns the total mass in solar masses of the halo.
+        
+        Returns the total mass in solar masses of just the particles in the
+        halo.
+
+        Examples
+        --------
+        >>> halos[0].total_mass()
+        """
+        return self.group_total_mass
+
+    def bulk_velocity(self):
+        r"""Returns the mass-weighted average velocity in cm/s.
+
+        This calculates and returns the mass-weighted average velocity of just
+        the particles in the halo in cm/s.
+        
+        Examples
+        --------
+        >>> bv = halos[0].bulk_velocity()
+        """
+        return self.bulk_vel
+
+    def rms_velocity(self):
+        r"""Returns the mass-weighted RMS velocity for the halo
+        particles in cgs units.
+
+        Calculate and return the mass-weighted RMS velocity for just the
+        particles in the halo.  The bulk velocity of the halo is subtracted
+        before computation.
+        
+        Examples
+        --------
+        >>> rms_vel = halos[0].rms_velocity()
+        """
+        return self.rms_vel
+
+    def maximum_radius(self):
+        r"""Returns the maximum radius in the halo for all particles,
+        either from the point of maximum density or from the
+        center of mass.
+
+        The maximum radius from the most dense point is calculated.  This
+        accounts for periodicity.
+        
+        Parameters
+        ----------
+        center_of_mass : bool
+            True chooses the center of mass when calculating the maximum radius.
+            False chooses from the maximum density location for HOP halos
+            (it has no effect for FOF halos).
+            Default = True.
+        
+        Examples
+        --------
+        >>> radius = halos[0].maximum_radius()
+        """
+        return self.max_radius
+
 class HaloList(object):
 
     _fields = ["particle_position_%s" % ax for ax in 'xyz']
         """
         HaloList.write_out(self, filename)
 
+class LoadedHaloList(HaloList):
+    _name = "Loaded"
+    
+    def __init__(self, pf, basename):
+        self.pf = pf
+        self._groups = []
+        self.basename = basename
+        self._retrieve_halos()
+    
+    def _retrieve_halos(self):
+        # First get the halo particulars.
+        lines = file("%s.out" % self.basename)
+        # The location of particle data for each halo.
+        locations = self._collect_halo_data_locations()
+        halo = 0
+        for line in lines:
+            # Skip the comment lines at top.
+            if line[0] == "#": continue
+            line = line.split()
+            # get the particle data
+            size = int(line[2])
+            fnames = locations[halo]
+            # Everything else
+            CoM = na.array([float(line[7]), float(line[8]), float(line[9])])
+            max_dens_point = na.array([float(line[3]), float(line[4]),
+                float(line[5]), float(line[6])])
+            group_total_mass = float(line[1])
+            max_radius = float(line[13])
+            bulk_vel = na.array([float(line[10]), float(line[11]),
+                float(line[12])])
+            rms_vel = float(line[14])
+            self._groups.append(LoadedHalo(self.pf, halo, size, CoM,
+                max_dens_point, group_total_mass, max_radius, bulk_vel,
+                rms_vel, fnames))
+            halo += 1
+    
+    def _collect_halo_data_locations(self):
+        # The halos are listed in order in the file.
+        lines = file("%s.txt" % self.basename)
+        locations = []
+        for line in lines:
+            line = line.split()
+            locations.append(line[1:])
+        lines.close()
+        return locations
+
 class parallelHOPHaloList(HaloList,ParallelAnalysisInterface):
     _name = "parallelHOP"
     _halo_class = parallelHOPHalo
             if not self._is_mine(halo): continue
             halo.write_particle_list(f)
 
+    def dump(self, basename="HopAnalysis"):
+        r"""Save the full halo data to disk.
+        
+        This function will save the halo data in such a manner that it can be
+        easily re-loaded later using `LoadHaloes`.  This is similar in concept
+        to pickling the data, but outputs the data in the already-established
+        data formats.  The simple halo data is written to a text file
+        (e.g. "HopAnalysis.out") using write_out(), and the particle data to
+        hdf5 files (e.g. "HopAnalysis.h5") using write_particle_lists().
+        
+        Parameters
+        ----------
+        basename : String
+            The base name for the files the data will be written to. Default = 
+            "HopAnalysis".
+        
+        Examples
+        --------
+        >>> halos.dump("MyHalos")
+        """
+        self.write_out("%s.out" % basename)
+        self.write_particle_lists(basename)
+        self.write_particle_lists_txt(basename)
+
 class parallelHF(GenericHaloFinder, parallelHOPHaloList):
     def __init__(self, pf, subvolume=None,threshold=160, dm_only=True, \
         resize=True, rearrange=True,\
         self._join_halolists()
 
 HaloFinder = HOPHaloFinder
+
+class LoadHaloes(GenericHaloFinder, LoadedHaloList):
+    def __init__(self, pf, basename):
+        r"""Load the full halo data into memory.
+        
+        This function takes the output of `GenericHaloFinder.dump` and
+        re-establishes the list of halos in memory. This enables the full set
+        of halo analysis features without running the halo finder again. To
+        be precise, the particle data for each halo is only read in when
+        necessary, so examining a single halo will not require as much memory
+        as is required for halo finding.
+        
+        Parameters
+        ----------
+        basename : String
+            The base name of the files that will be read in. This should match
+            what was used when `GenericHaloFinder.dump` was called. Default =
+            "HopAnalysis".
+        
+        Examples
+        --------
+        >>> pf = load("data0005")
+        >>> halos = LoadHaloes(pf, "HopAnalysis")
+        """
+        self.basename = basename
+        LoadedHaloList.__init__(self, pf, self.basename)
+
+
+        

yt/analysis_modules/halo_mass_function/halo_mass_function.py

         bins = na.logspace(self.log_mass_min,
             self.log_mass_max,self.num_sigma_bins)
         avgs = (bins[1:]+bins[:-1])/2.
-        dis, bins = na.histogram(self.haloes,bins,new=True)
+        dis, bins = na.histogram(self.haloes,bins)
         # add right to left
         for i,b in enumerate(dis):
             dis[self.num_sigma_bins-i-3] += dis[self.num_sigma_bins-i-2]

yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py

         return of_child_from_me, of_mine_from_me
 
 class EnzoFOFMergerBranch(object):
-    def __init__(self, tree, output_num, halo_id, max_children):
+    def __init__(self, tree, output_num, halo_id, max_children,
+                 min_relation=0.25):
         self.output_num = output_num
         self.halo_id = halo_id
         self.npart = tree.relationships[output_num][halo_id]["NumberOfParticles"]
         for k in sorted_keys:
             if not str(k).isdigit(): continue
             v = tree.relationships[output_num][halo_id][k]
-            if v[1] != 0.0 and halo_count < max_children:
+            if v[1] > min_relation and halo_count < max_children:
                 halo_count += 1
                 self.children.append((k,v[1],v[2]))
                 if v[1] > max_relationship:
                     this_halos.append(c[0])
             self.filter_small_halos(this, min_particles)
 
+    def get_massive_progenitors(self, halonum, min_relation=0.25):
+        r"""Returns a list of the most massive progenitor halos.
+
+        This routine walks down the tree, following the most massive
+        progenitor on each node.
+
+        Parameters
+        ----------
+        halonum : int
+            Halo number at the last output to trace.
+
+        Output
+        ------
+        output : dict
+            Dictionary of redshifts, cycle numbers, and halo numbers
+            of the most massive progenitor.  keys = {redshift, cycle,
+            halonum}
+        """
+        output = {"redshift": [], "cycle": [], "halonum": []}
+        # First (lowest redshift) node in tree
+        halo0 = halonum
+        for cycle in sorted(self.numbers, reverse=True):
+            if cycle not in self.relationships: break
+            if halo0 not in self.relationships[cycle]: break
+            node = self.relationships[cycle][halo0]
+            output["redshift"].append(self.redshifts[cycle])
+            output["cycle"].append(cycle)
+            output["halonum"].append(halo0)
+            # Find progenitor
+            max_rel = 0.0
+            for k,v in node.items():
+                if not str(k).isdigit(): continue
+                if v[1] > max_rel and v[1] > min_relation:
+                    halo0 = k
+                    max_rel = v[1]
+        return output
+
     def print_tree(self):
         r"""Prints the merger tree to stdout.
         """

yt/analysis_modules/halo_profiler/multi_halo_profiler.py

     load
 from yt.data_objects.profiles import \
     BinnedProfile1D, EmptyProfileData
-from yt.analysis_modules.halo_finding.api import \
-    HaloFinder
+from yt.analysis_modules.halo_finding.api import *
 from .halo_filters import \
     VirialFilter
 from yt.data_objects.field_info_container import \
     parallel_root_only
 from yt.visualization.fixed_resolution import \
     FixedResolutionBuffer
-from yt.visualization.plot_collection import \
-    PlotCollection
+from yt.visualization.image_writer import write_image
 
 PROFILE_RADIUS_THRESHOLD = 2
 
 class HaloProfiler(ParallelAnalysisInterface):
     "Radial profiling, filtering, and projections for halos in cosmological simulations."
-    def __init__(self, dataset, halos='multiple', halo_list_file='HopAnalysis.out', halo_list_format='yt_hop',
-                 halo_finder_function=HaloFinder, halo_finder_args=None, halo_finder_kwargs=None,
-                 use_density_center=False, density_center_exponent=1.0, use_field_max_center=None,
+    def __init__(self, dataset, output_dir=None,
+                 halos='multiple', halo_list_file='HopAnalysis.out', 
+                 halo_list_format='yt_hop', halo_finder_function=parallelHF, 
+                 halo_finder_args=None, 
+                 halo_finder_kwargs=dict(threshold=160.0, safety=1.5, 
+                                         dm_only=False, resize=True, 
+                                         fancy_padding=True, rearrange=True),
+                 use_density_center=False, density_center_exponent=1.0,
+                 use_field_max_center=None,
                  halo_radius=0.1, radius_units='1', n_profile_bins=50,
                  profile_output_dir='radial_profiles', projection_output_dir='projections',
                  projection_width=8.0, projection_width_units='mpc', project_at_level='max',
                  velocity_center=['bulk', 'halo'], filter_quantities=['id','center']):
         """
         Initialize a HaloProfiler object.
+        :param output_dir (str): if specified, all output will be put into this path instead of 
+               in the dataset directories.  Default: None.
         :param halos (str): "multiple" for profiling more than one halo.  In this mode halos are read in 
                from a list or identified with a halo finder.  In "single" mode, the one and only halo 
                center is identified automatically as the location of the peak in the density field.  
         """
 
         self.dataset = dataset
-
+        self.output_dir = output_dir
         self.profile_output_dir = profile_output_dir
         self.projection_output_dir = projection_output_dir
         self.n_profile_bins = n_profile_bins
         self.filtered_halos = []
         self._projection_halo_list = []
 
+        # Create output directory if specified
+        if self.output_dir is not None:
+            self.__check_directory(self.output_dir)
+
         # Set halo finder function and parameters, if needed.
         self.halo_finder_function = halo_finder_function
         self.halo_finder_args = halo_finder_args
         # dictionary: a dictionary containing fields and their corresponding columns.
         self.halo_list_file = halo_list_file
         if halo_list_format == 'yt_hop':
-            self.halo_list_format = {'id':0, 'mass':1, 'center':[7, 8, 9], 'velocity':[10, 11, 12], 'r_max':13}
+            self.halo_list_format = {'id':0, 'mass':1, 'np': 2, 
+                                     'center':[7, 8, 9], 'velocity':[10, 11, 12], 'r_max':13}
         elif halo_list_format == 'enzo_hop':
             self.halo_list_format = {'id':0, 'center':[4, 5, 6]}
         elif halo_list_format == 'p-groupfinder':
         self.density_center_exponent = density_center_exponent
         if self.use_density_center:
             def _MatterDensityXTotalMass(field, data):
-                return na.power((data['Matter_Density'] * data['TotalMassMsun']), self.density_center_exponent)
+                return na.power((data['Matter_Density'] * data['TotalMassMsun']), 
+                                self.density_center_exponent)
             def _Convert_MatterDensityXTotalMass(data):
                 return 1
             add_field("MatterDensityXTotalMass", units=r"",
 
         self.profile_fields.append({'field':field, 'weight_field':weight_field, 'accumulation':accumulation})
 
-    def add_projection(self, field, weight_field=None):
+    def add_projection(self, field, weight_field=None, cmap='algae'):
         "Add a field for projection."
 
-        self.projection_fields.append({'field':field, 'weight_field':weight_field})
+        self.projection_fields.append({'field':field, 'weight_field':weight_field, 
+                                       'cmap': cmap})
 
     @parallel_blocking_call
     def make_profiles(self, filename=None, prefilters=None, **kwargs):
         # Add profile fields necessary for calculating virial quantities.
         if virial_filter: self._check_for_needed_profile_fields()
 
-        outputDir = "%s/%s" % (self.pf.fullpath, self.profile_output_dir)
-        self.__check_directory(outputDir)
+        # Create output directory.
+        if self.output_dir is not None:
+            self.__check_directory("%s/%s" % (self.output_dir, self.pf.directory))
+            my_output_dir = "%s/%s/%s" % (self.output_dir, self.pf.directory, 
+                                          self.profile_output_dir)
+        else:
+            my_output_dir = "%s/%s" % (self.pf.fullpath, self.profile_output_dir)
+        self.__check_directory(my_output_dir)
 
         # Profile all halos.
         for halo in self._get_objs('all_halos', round_robin=True):
 
             if filter_result and len(self.profile_fields) > 0:
 
-                profile_filename = "%s/Halo_%04d_profile.dat" % (outputDir, halo['id'])
+                profile_filename = "%s/Halo_%04d_profile.dat" % (my_output_dir, halo['id'])
 
                 profiledHalo = self._get_halo_profile(halo, profile_filename, virial_filter=virial_filter)
 
             return
 
         # Set resolution for fixed resolution output.
-        if save_cube:
-            if self.project_at_level == 'max':
-                proj_level = self.pf.h.max_level
-            else:
-                proj_level = int(self.project_at_level)
-            proj_dx = self.pf.units[self.projection_width_units] / self.pf.parameters['TopGridDimensions'][0] / \
-                (self.pf.parameters['RefineBy']**proj_level)
-            projectionResolution = int(self.projection_width / proj_dx)
+        if self.project_at_level == 'max':
+            proj_level = self.pf.h.max_level
+        else:
+            proj_level = int(self.project_at_level)
+        proj_dx = self.pf.units[self.projection_width_units] / self.pf.parameters['TopGridDimensions'][0] / \
+            (self.pf.parameters['RefineBy']**proj_level)
+        projectionResolution = int(self.projection_width / proj_dx)
 
-        outputDir = "%s/%s" % (self.pf.fullpath, self.projection_output_dir)
-        self.__check_directory(outputDir)
+        # Create output directory.
+        if self.output_dir is not None:
+            self.__check_directory("%s/%s" % (self.output_dir, self.pf.directory))
+            my_output_dir = "%s/%s/%s" % (self.output_dir, self.pf.directory, 
+                                          self.projection_output_dir)
+        else:
+            my_output_dir = "%s/%s" % (self.pf.fullpath, self.projection_output_dir)
+        self.__check_directory(my_output_dir)
 
         center = [0.5 * (self.pf.parameters['DomainLeftEdge'][w] + self.pf.parameters['DomainRightEdge'][w])
                   for w in range(self.pf.parameters['TopGridRank'])]
             # Make projections.
             if not isinstance(axes, types.ListType): axes = list([axes])
             for w in axes:
-                # Create a plot collection.
-                pc = PlotCollection(self.pf, center=center)
+                projections = []
                 # YT projections do not follow the right-hand rule.
                 coords = range(3)
                 del coords[w]
                 y_axis = coords[1]
 
                 for hp in self.projection_fields:
-                    pc.add_projection(hp['field'], w, weight_field=hp['weight_field'], data_source=region)
+                    projections.append(self.pf.h.proj(w, hp['field'], 
+                                                      weight_field=hp['weight_field'], 
+                                                      data_source=region, center=halo['center'],
+                                                      serialize=False))
                 
                 # Set x and y limits, shift image if it overlaps domain boundary.
                 if need_per:
                     pw = self.projection_width/self.pf.units[self.projection_width_units]
-                    shift_projections(self.pf, pc, halo['center'], center, w)
+                    #shift_projections(self.pf, projections, halo['center'], center, w)
                     # Projection has now been shifted to center of box.
                     proj_left = [center[x_axis]-0.5*pw, center[y_axis]-0.5*pw]
                     proj_right = [center[x_axis]+0.5*pw, center[y_axis]+0.5*pw]
                     proj_left = [leftEdge[x_axis], leftEdge[y_axis]]
                     proj_right = [rightEdge[x_axis], rightEdge[y_axis]]
 
-                pc.set_xlim(proj_left[0], proj_right[0])
-                pc.set_ylim(proj_left[1], proj_right[1])
+                # Save projection data to hdf5 file.
+                if save_cube or save_images:
+                    axis_labels = ['x', 'y', 'z']
 
-                # Save projection data to hdf5 file.
-                if save_cube:
-                    axis_labels = ['x', 'y', 'z']
-                    dataFilename = "%s/Halo_%04d_%s_data.h5" % \
-                            (outputDir, halo['id'], axis_labels[w])
-                    mylog.info("Saving projection data to %s." % dataFilename)
+                    if save_cube:
+                        dataFilename = "%s/Halo_%04d_%s_data.h5" % \
+                            (my_output_dir, halo['id'], axis_labels[w])
+                        mylog.info("Saving projection data to %s." % dataFilename)
+                        output = h5py.File(dataFilename, "a")
 
-                    output = h5py.File(dataFilename, "a")
                     # Create fixed resolution buffer for each projection and write them out.
                     for e, hp in enumerate(self.projection_fields):
-                        frb = FixedResolutionBuffer(pc.plots[e].data, (proj_left[0], proj_right[0], 
-                                                                       proj_left[1], proj_right[1]),
-                                                          (projectionResolution, projectionResolution),
-                                                          antialias=False)
+                        frb = FixedResolutionBuffer(projections[e], (proj_left[0], proj_right[0], 
+                                                                     proj_left[1], proj_right[1]),
+                                                    (projectionResolution, projectionResolution),
+                                                    antialias=False)
                         dataset_name = "%s_%s" % (hp['field'], hp['weight_field'])
-                        if dataset_name in output.listnames(): del output[dataset_name]
-                        output.create_dataset(dataset_name, data=frb[hp['field']])
-                    output.close()
+                        if save_cube:
+                            if dataset_name in output.listnames(): del output[dataset_name]
+                            output.create_dataset(dataset_name, data=frb[hp['field']])
 
-                if save_images:
-                    pc.save("%s/Halo_%04d" % (outputDir, halo['id']), force_save=True)
+                        if save_images:
+                            filename = "%s/Halo_%04d_%s_%s.png" % (my_output_dir, halo['id'], 
+                                                                   dataset_name, axis_labels[w])
+                            write_image(na.log10(frb[hp['field']]), filename, cmap_name=hp['cmap'])
+
+                    if save_cube: output.close()
 
             del region
 
         if filename is None:
             filename = self.halo_list_file
 
-        hopFile = "%s/%s" % (self.pf.fullpath, filename)
+        if self.output_dir is not None:
+            self.__check_directory("%s/%s" % (self.output_dir, self.pf.directory))
+            hop_file = "%s/%s/%s" % (self.output_dir, self.pf.directory, filename)
+        else:
+            hop_file = "%s/%s" % (self.pf.fullpath, filename)
 
-        if not(os.path.exists(hopFile)):
-            mylog.info("Hop file not found, running hop to get halos.")
-            self._run_hop(hopFile)
+        if not(os.path.exists(hop_file)):
+            mylog.info("Halo finder file not found, running halo finder to get halos.")
+            self._run_hop(hop_file)
 
-        self.all_halos = self._read_halo_list(hopFile)
+        self.all_halos = self._read_halo_list(hop_file)
 
     def _read_halo_list(self, listFile):
         """
             return None
 
     @parallel_blocking_call
-    def _run_hop(self, hopFile):
+    def _run_hop(self, hop_file):
         "Run hop to get halos."
 
         hop_results = self.halo_finder_function(self.pf, *self.halo_finder_args, **self.halo_finder_kwargs)
-        hop_results.write_out(hopFile)
+        hop_results.write_out(hop_file)
 
         del hop_results
         self.pf.h.clear_all_data()
         fid.close()
 
     @parallel_root_only
-    def __check_directory(self, outputDir):
-        if (os.path.exists(outputDir)):
-            if not(os.path.isdir(outputDir)):
-                mylog.error("Output directory exists, but is not a directory: %s." % outputDir)
-                raise IOError(outputDir)
+    def __check_directory(self, my_output_dir):
+        if (os.path.exists(my_output_dir)):
+            if not(os.path.isdir(my_output_dir)):
+                mylog.error("Output directory exists, but is not a directory: %s." % my_output_dir)
+                raise IOError(my_output_dir)
         else:
-            os.mkdir(outputDir)
+            os.mkdir(my_output_dir)
 
-def shift_projections(pf, pc, oldCenter, newCenter, axis):
+def shift_projections(pf, projections, oldCenter, newCenter, axis):
     """
     Shift projection data around.
     This is necessary when projecting a periodic region.
     del offset[axis]
     del width[axis]
 
-    for plot in pc.plots:
+    for plot in projections:
         # Get name of data field.
         other_fields = {'px':True, 'py':True, 'pdx':True, 'pdy':True, 'weight_field':True}
-        for pfield in plot.data.data.keys():
+        for pfield in plot.data.keys():
             if not(other_fields.has_key(pfield)):
                 field = pfield
                 break
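
For reference, a minimal standalone sketch of the FixedResolutionBuffer/write_image pattern used in the profiler hunks above; the dataset name DD0010/DD0010 and the Density field are hypothetical, while the imports and call signatures follow the ones appearing in this changeset:

    import numpy as na
    from yt.convenience import load
    from yt.visualization.fixed_resolution import FixedResolutionBuffer
    from yt.visualization.image_writer import write_image

    pf = load("DD0010/DD0010")                    # hypothetical dataset
    proj = pf.h.proj(0, "Density", "Density")     # x-axis, density-weighted projection
    # Pixelize the projection over (x_min, x_max, y_min, y_max) in domain units.
    frb = FixedResolutionBuffer(proj, (0.25, 0.75, 0.25, 0.75), (512, 512),
                                antialias=False)
    # Images are written from log10 of the buffer, as in the profiler code above.
    write_image(na.log10(frb["Density"]), "Halo_0000_Density_Density_x.png",
                cmap_name="algae")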

yt/analysis_modules/level_sets/contour_finder.py

                     s1.update(joins.pop(k2))
                     s1.update([k2])
                     updated += 1
-    return joins
+    tr = []
+    for k in joins.keys():
+        v = joins.pop(k)
+        tr.append((k, na.array(list(v), dtype="int64")))
+    return tr
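
For clarity, coalesce_join_tree now returns a list of (contour_id, int64 array) pairs rather than a dict of sets; a purely illustrative example of the new layout (values invented):

    [(3,  na.array([3, 7, 9],  dtype="int64")),
     (12, na.array([12, 15], dtype="int64"))]

This is the form consumed by amr_utils.update_joins in identify_contours below.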
 
 def identify_contours(data_source, field, min_val, max_val,
                           cached_fields=None):
         total_contours += na.unique(grid["tempContours"][grid["tempContours"] > -1]).size
         new_contours = na.unique(grid["tempContours"][grid["tempContours"] > -1]).tolist()
         tree += zip(new_contours, new_contours)
+    tree = set(tree)
     pbar.finish()
     pbar = get_pbar("Calculating joins ", len(data_source._grids))
     grid_set = set()
         cg = grid.retrieve_ghost_zones(1, "tempContours", smoothed=False)
         grid_set.update(set(cg._grids))
         fd = cg["tempContours"].astype('int64')
-        tree += amr_utils.construct_boundary_relationships(fd)
+        boundary_tree = amr_utils.construct_boundary_relationships(fd)
+        tree.update(((a, b) for a, b in boundary_tree))
     pbar.finish()
-    sort_new = na.array(list(set(tree)), dtype='int64')
+    sort_new = na.array(list(tree), dtype='int64')
     mylog.info("Coalescing %s joins", sort_new.shape[0])
     joins = coalesce_join_tree(sort_new)
+    #joins = [(i, na.array(list(j), dtype="int64")) for i, j in sorted(joins.items())]
     pbar = get_pbar("Joining ", len(joins))
     # This process could and should be done faster
-    for i, new in enumerate(sorted(joins.keys())):
-        pbar.update(i)
-        old_set = joins[new]
-        for old in old_set:
-            if old == new: continue
-            i1 = (data_source["tempContours"] == old)
-            data_source["tempContours"][i1] = new
+    print "Joining..."
+    t1 = time.time()
+    ff = data_source["tempContours"].astype("int64")
+    amr_utils.update_joins(joins, ff)
+    data_source["tempContours"] = ff.astype("float64")
+    #for i, new in enumerate(sorted(joins.keys())):
+    #    pbar.update(i)
+    #    old_set = joins[new]
+    #    for old in old_set:
+    #        if old == new: continue
+    #        i1 = (data_source["tempContours"] == old)
+    #        data_source["tempContours"][i1] = new
+    t2 = time.time()
+    print "Finished joining in %0.2e seconds" % (t2-t1)
     pbar.finish()
     data_source._flush_data_to_grids("tempContours", -1, dtype='int64')
     del data_source.data["tempContours"] # Force a reload from the grids
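
The amr_utils.update_joins call replaces the removed per-contour Python loop. A rough pure-NumPy equivalent of what it does with the (new_id, old_ids) pairs from coalesce_join_tree (a sketch for illustration, not the actual Cython implementation):

    import numpy as na

    def update_joins_py(joins, contour_ids):
        # joins: list of (new_id, int64 array of old contour ids) pairs
        # contour_ids: flat int64 array of per-cell contour labels
        for new, old_set in joins:
            for old in old_set:
                if old == new:
                    continue
                contour_ids[contour_ids == old] = new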

yt/analysis_modules/light_cone/light_cone.py

 from yt.config import ytcfg
 from yt.convenience import load
 from yt.utilities.cosmology import Cosmology
-from yt.visualization.plot_collection import \
-    PlotCollection
+from yt.visualization.image_writer import write_image
 
 from .common_n_volume import commonNVolume
 from .halo_mask import light_cone_halo_map, \
     light_cone_halo_mask
-from .light_cone_projection import LightConeProjection
+from .light_cone_projection import _light_cone_projection
 
 class LightCone(EnzoSimulation):
     def __init__(self, EnzoParameterFile, initial_redshift=1.0, 
                  final_redshift=0.0, observer_redshift=0.0,
                  field_of_view_in_arcminutes=600.0, image_resolution_in_arcseconds=60.0, 
                  use_minimum_datasets=True, deltaz_min=0.0, minimum_coherent_box_fraction=0.0,
-                 output_dir='LC', output_prefix='LightCone', **kwargs):
+                 set_parameters=None, output_dir='LC', output_prefix='LightCone', **kwargs):
         """
         Initialize a LightCone object.
         :param initial_redshift (float): the initial (highest) redshift for the light cone.  Default: 1.0.
                the projection axis and center.  This was invented to allow light cones with thin slices to 
                sample coherent large scale structure, but in practice does not work so well.  Try setting 
                this parameter to 1 and see what happens.  Default: 0.0.
+        :param set_parameters (dict): dictionary of parameters to attach to pf.parameters.  Default: None.
         :param output_dir (str): the directory in which images and data files will be written.  Default: 'LC'.
         :param output_prefix (str): the prefix of all images and data files.  Default: 'LightCone'.
         """
         self.use_minimum_datasets = use_minimum_datasets
         self.deltaz_min = deltaz_min
         self.minimum_coherent_box_fraction = minimum_coherent_box_fraction
+        self.set_parameters = set_parameters
         self.output_dir = output_dir
         self.output_prefix = output_prefix
 
             del halo_mask_cube
 
     def project_light_cone(self, field, weight_field=None, apply_halo_mask=False, node=None,
-                           save_stack=True, save_slice_images=False, flatten_stack=False, photon_field=False,
-                           add_redshift_label=False, **kwargs):
+                           save_stack=True, save_slice_images=False, cmap_name='algae', 
+                           flatten_stack=False, photon_field=False):
         """
         Create projections for light cone, then add them together.
         :param weight_field (str): the weight field of the projection.  This has the same meaning as in standard 
         :param save_stack (bool): if True, the unflattened light cone data including each individual slice is written to 
                an hdf5 file.  Default: True.
         :param save_slice_images (bool): save images for each individual projection slice.  Default: False.
+        :param cmap_name (str): color map for images.  Default: 'algae'.
         :param flatten_stack (bool): if True, the light cone stack is continually flattened each time a slice is added 
                in order to save memory.  This is generally not necessary.  Default: False.
         :param photon_field (bool): if True, the projection data for each slice is decremented by 4 Pi R^2, where R 
                 name = "%s%s_%s_%04d_%04d" % (self.output_dir, self.output_prefix,
                                               node, q, len(self.light_cone_solution))
             output['object'] = load(output['filename'])
-            frb = LightConeProjection(output, field, self.pixels, weight_field=weight_field,
-                                      save_image=save_slice_images,
-                                      name=name, node=node, **kwargs)
+            if self.set_parameters is not None:
+                output['object'].parameters.update(self.set_parameters)
+            frb = _light_cone_projection(output, field, self.pixels, 
+                                         weight_field=weight_field, node=node)
             if ytcfg.getint("yt", "__parallel_rank") == 0:
+                if save_slice_images:
+                    write_image(na.log10(frb[field]), "%s_%s.png" % (name, field), cmap_name=cmap_name)
+
                 if photon_field:
                     # Decrement the flux by the luminosity distance. Assume field in frb is in erg/s/cm^2/Hz
                     co = Cosmology(HubbleConstantNow = (100.0 * self.enzoParameters['CosmologyHubbleConstantNow']),
                     mylog.info("Distance to slice = %e" % dL)
                     frb[field] *= factor #in erg/s/cm^2/Hz on observer's image plane.
 
-            if ytcfg.getint("yt", "__parallel_rank") == 0:
                 if weight_field is not None:
                     # Data come back normalized by the weight field.
                     # Undo that so it can be added up for the light cone.
                     if weight_field is not None:
                         self.projection_weight_field_stack = [sum(self.projection_weight_field_stack)]
 
-            # Delete the plot collection now that the frb is deleted.
-            del output['pc']
-
             # Unless this is the last slice, delete the dataset object.
             # The last one will be saved to make the plot collection.
             if (q < len(self.light_cone_solution) - 1):
             # but replace the data with the full light cone projection data.
             frb.data[field] = lightConeProjection
 
+            # Write image.
+            if save_slice_images:
+                write_image(na.log10(frb[field]), "%s_%s.png" % (filename, field), cmap_name=cmap_name)
+
             # Write stack to hdf5 file.
             if save_stack:
                 self._save_light_cone_stack(field=field, weight_field=weight_field, filename=filename)
                 else:
                     mylog.error("No halo mask loaded, call get_halo_mask.")
 
-            # Make a plot collection for the light cone projection.
-            center = [0.5 * (self.light_cone_solution[-1]['object'].parameters['DomainLeftEdge'][w] + 
-                             self.light_cone_solution[-1]['object'].parameters['DomainRightEdge'][w])
-                      for w in range(self.light_cone_solution[-1]['object'].parameters['TopGridRank'])]
-            pc = PlotCollection(self.light_cone_solution[-1]['object'], center=center)
-            pc.add_fixed_resolution_plot(frb, field, **kwargs)
-            pc.save(filename)
-
-            # Return the plot collection so the user can remake the plot if they want.
-            return pc
-
     def rerandomize_light_cone_solution(self, newSeed, recycle=True, filename=None):
         """
         When making a projection for a light cone, only randomizations along the line of sight make any 

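A hedged usage sketch of the updated LightCone interface, now driven without a PlotCollection; the import path, Enzo parameter file name, set_parameters key, random seed, and the calculate_light_cone_solution call are assumptions not shown in this diff:

    from yt.analysis_modules.light_cone.api import LightCone  # assumed import path

    lc = LightCone("32Mpc_32.enzo",                    # hypothetical Enzo parameter file
                   initial_redshift=0.4, final_redshift=0.0,
                   observer_redshift=0.0,
                   field_of_view_in_arcminutes=600.0,
                   image_resolution_in_arcseconds=60.0,
                   set_parameters={"RefineBy": 2},     # hypothetical; attached to each pf.parameters
                   output_dir="LC", output_prefix="LightCone")
    lc.calculate_light_cone_solution(seed=123456789)   # assumed solution-generating call
    lc.project_light_cone("Density", weight_field=None,
                          save_stack=True, save_slice_images=True, cmap_name="algae")
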
yt/analysis_modules/light_cone/light_cone_projection.py

 from yt.config import ytcfg
 from yt.visualization.fixed_resolution import \
     FixedResolutionBuffer
-from yt.visualization.plot_collection import \
-    PlotCollection
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     parallel_blocking_call
 
 @parallel_blocking_call
-def LightConeProjection(lightConeSlice, field, pixels, weight_field=None, 
-                        save_image=False, name="", node=None, field_cuts=None,
-                        add_redshift_label=False, **kwargs):
+def _light_cone_projection(lightConeSlice, field, pixels, weight_field=None, 
+                           save_image=False, node=None, field_cuts=None):
     "Create a single projection to be added into the light cone stack."
 
     # Use some projection parameters to seed random number generator to make unique node name.
 
     mylog.info("Making projection at z = %f from %s." % (lightConeSlice['redshift'], lightConeSlice['filename']))
 
-    region_center = [0.5 * (lightConeSlice['object'].parameters['DomainRightEdge'][q] +
-                            lightConeSlice['object'].parameters['DomainLeftEdge'][q]) \
-                         for q in range(lightConeSlice['object'].parameters['TopGridRank'])]
-
-    # Make the plot collection and put it in the slice so we can delete it cleanly in the same scope 
-    # as where the frb will be deleted.
-    lightConeSlice['pc'] = PlotCollection(lightConeSlice['object'], center=region_center)
+    region_center = [0.5 * (lightConeSlice['object'].domain_right_edge[q] +
+                            lightConeSlice['object'].domain_left_edge[q]) \
+                         for q in range(lightConeSlice['object'].dimensionality)]
 
     # 1. The Depth Problem
     # Use coordinate field cut in line of sight to cut projection to proper depth.
         these_field_cuts.append(cut_mask)
 
     # Make projection.
-    lightConeSlice['pc'].add_projection(field, lightConeSlice['ProjectionAxis'], 
-                                        weight_field=weight_field, 
-                                        field_parameters=dict(field_cuts=these_field_cuts, 
-                                                              node_name=node_name),
-                                        **kwargs)
+    proj = lightConeSlice['object'].h.proj(lightConeSlice['ProjectionAxis'], field, 
+                                           weight_field, center=region_center, 
+                                           field_cuts=these_field_cuts, node_name=node_name)
 
     # If parallel: all the processes have the whole projection object, but we only need to do the tiling, shifting, and cutting once.
     if ytcfg.getint("yt", "__parallel_rank") == 0:
         # Tile projection to specified width.
 
         # Original projection data.
-        original_px = copy.deepcopy(lightConeSlice['pc'].plots[0].data['px'])
-        original_py = copy.deepcopy(lightConeSlice['pc'].plots[0].data['py'])
-        original_pdx = copy.deepcopy(lightConeSlice['pc'].plots[0].data['pdx'])
-        original_pdy = copy.deepcopy(lightConeSlice['pc'].plots[0].data['pdy'])
-        original_field = copy.deepcopy(lightConeSlice['pc'].plots[0].data[field])
-        original_weight_field = copy.deepcopy(lightConeSlice['pc'].plots[0].data['weight_field'])
+        original_px = copy.deepcopy(proj['px'])
+        original_py = copy.deepcopy(proj['py'])
+        original_pdx = copy.deepcopy(proj['pdx'])
+        original_pdy = copy.deepcopy(proj['pdy'])
+        original_field = copy.deepcopy(proj[field])
+        original_weight_field = copy.deepcopy(proj['weight_field'])
 
         # Copy original into offset positions to make tiles.
         for x in range(int(na.ceil(lightConeSlice['WidthBoxFraction']))):
             for y in range(int(na.ceil(lightConeSlice['WidthBoxFraction']))):
                 if ((x + y) > 0):
-                    lightConeSlice['pc'].plots[0].data['px'] = \
-                        na.concatenate([lightConeSlice['pc'].plots[0].data['px'], original_px+x])
-                    lightConeSlice['pc'].plots[0].data['py'] = \
-                        na.concatenate([lightConeSlice['pc'].plots[0].data['py'], original_py+y])
-                    lightConeSlice['pc'].plots[0].data['pdx'] = \
-                        na.concatenate([lightConeSlice['pc'].plots[0].data['pdx'], original_pdx])
-                    lightConeSlice['pc'].plots[0].data['pdy'] = \
-                        na.concatenate([lightConeSlice['pc'].plots[0].data['pdy'], original_pdy])
-                    lightConeSlice['pc'].plots[0].data[field] = \
-                        na.concatenate([lightConeSlice['pc'].plots[0].data[field], original_field])
-                    lightConeSlice['pc'].plots[0].data['weight_field'] = \
-                        na.concatenate([lightConeSlice['pc'].plots[0].data['weight_field'], 
-                                        original_weight_field])
+                    proj['px'] = na.concatenate([proj['px'], original_px+x])
+                    proj['py'] = na.concatenate([proj['py'], original_py+y])
+                    proj['pdx'] = na.concatenate([proj['pdx'], original_pdx])
+                    proj['pdy'] = na.concatenate([proj['pdy'], original_pdy])
+                    proj[field] = na.concatenate([proj[field], original_field])
+                    proj['weight_field'] = na.concatenate([proj['weight_field'], 
+                                                           original_weight_field])
 
         # Delete originals.
         del original_px
         del offset[lightConeSlice['ProjectionAxis']]
 
         # Shift x and y positions.
-        lightConeSlice['pc'].plots[0]['px'] -= offset[0]
-        lightConeSlice['pc'].plots[0]['py'] -= offset[1]
+        proj['px'] -= offset[0]
+        proj['py'] -= offset[1]
 
         # Wrap off-edge cells back around to other side (periodic boundary conditions).
-        lightConeSlice['pc'].plots[0]['px'][lightConeSlice['pc'].plots[0]['px'] < 0] += \
-            na.ceil(lightConeSlice['WidthBoxFraction'])
-        lightConeSlice['pc'].plots[0]['py'][lightConeSlice['pc'].plots[0]['py'] < 0] += \
-            na.ceil(lightConeSlice['WidthBoxFraction'])
+        proj['px'][proj['px'] < 0] += na.ceil(lightConeSlice['WidthBoxFraction'])
+        proj['py'][proj['py'] < 0] += na.ceil(lightConeSlice['WidthBoxFraction'])
 
         # After shifting, some cells have fractional coverage on both sides of the box.
         # Find those cells and make copies to be placed on the other side.
 
         # Cells hanging off the right edge.
-        add_x_right = lightConeSlice['pc'].plots[0]['px'] + \
-            0.5 * lightConeSlice['pc'].plots[0]['pdx'] > na.ceil(lightConeSlice['WidthBoxFractio