Overview
========

What this does and how to do it
===============================

This example sets up an MPI inter-communicator to pass data back and forth
between a running Enzo process and a running yt process.

To start it, you must first have the MPI name server running.  For OpenMPI,
this looks like:

    $ ompi-server --no-daemonize -d -r ompi_server.txt

Now you can start Enzo (compiled with python-yes) like this:

    $ mpirun -np N --ompi-server file:ompi_server.txt ./enzo.exe -d AMRCosmologySimulation.enzo

where N is the number of processors you want to run on.  If you look at the
user_script.py file, you can see that the inter-communicator is only used by
the root processor.
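
To make that pattern concrete, here is a minimal sketch (in modern mpi4py,
not the actual user_script.py) of a root-only publish-and-accept step on the
Enzo side; the service name "enzo_yt" and the payload are illustrative
assumptions:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    if comm.rank == 0:
        # Open a port and publish it with the name server started by
        # ompi-server, then wait for the yt client to connect.
        port = MPI.Open_port()
        MPI.Publish_name("enzo_yt", port)      # hypothetical service name
        intercomm = MPI.COMM_SELF.Accept(port)

        # Ship a (made-up) data product across the inter-communicator.
        intercomm.send({"time": 0.0, "density": [1.0, 2.0, 4.0]},
                       dest=0, tag=0)

        intercomm.Disconnect()
        MPI.Unpublish_name("enzo_yt", port)
        MPI.Close_port(port)

Accepting over MPI.COMM_SELF keeps the handshake local to the root processor,
so the other Enzo ranks never block on the connection.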

Once Enzo is up and running (and you have seen yt import), run this in
another window, or on another host with a shared file system:

    $ mpirun -np 1 --ompi-server file:ompi_server.txt python2.7 yt_recv.py --parallel

Now your yt window will update a pylab plot while your Enzo window continues
running Enzo.  Instead of the file, you can also pass the TCP URI reported by
ompi-server directly, if you like.
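
For reference, here is a matching sketch of the receiving side (again with
the assumed service name "enzo_yt" and payload layout; the real yt_recv.py
may differ):

    from mpi4py import MPI
    import pylab

    # The service name must match what the Enzo-side script published.
    port = MPI.Lookup_name("enzo_yt")
    intercomm = MPI.COMM_SELF.Connect(port)

    # Receive one (made-up) data product and plot it; in practice this
    # would sit in a loop driven by Enzo's timesteps.
    data = intercomm.recv(source=0, tag=0)
    pylab.plot(data["density"])
    pylab.savefig("latest.png")

    intercomm.Disconnect()

For MPI.Lookup_name to find the published name, this script has to be
launched with the same --ompi-server argument shown above.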

Why is this cool?
=================

Soon we will be able to dynamically connect, disconnect, and reconnect
processes to running Enzo jobs.  Enzo will be able to fire and forget some
data products, which are then handled by separate communicators and by yt
itself.

Ultimately, this will be used for much more flexible in situ visualization.
Most interestingly, this model will also allow Enzo to run in the background
while visualization and interactive exploration occur in another window.