Commits

Kacper Kowalik committed 1cc371e

Fix some typoes

Comments (0)

Files changed (26)

source/advanced/creating_datatypes.rst

 
 The three-dimensional datatypes in yt follow a fairly simple protocol.  The
 basic principle is that if you want to define a region in space, that region
-must be identifable from some sort of cut applied against the cells --
-typically, in yt, this is done by examining the geomery.  (The
+must be identifiable from some sort of cut applied against the cells --
+typically, in yt, this is done by examining the geometry.  (The
 :class:`yt.data_objects.data_containers.ExtractedRegionBase` type is a notable
 exception to this, as it is defined as a subset of an existing data object.)
 

source/advanced/developing.rst

 
    $ hg help COMMANDNAME
 
-It also adds the URL-specifer ``bb://USERNAME/reponame`` for convenience; this
+It also adds the URL-specifier ``bb://USERNAME/reponame`` for convenience; this
 means you can reference ``sskory/yt`` to see Stephen's yt fork, for instance.
 
 The most fun of these commands are:
    $ python setup.py develop
 
 If you want to accept the changeset or reject it (if you have sufficient
-priveleges) or comment on it, you can do so from its pull request webpage.
+privileges) or comment on it, you can do so from its pull request webpage.
 
 How To Read The Source Code
 ---------------------------
  * Do not use nested classes unless you have a very good reason to, such as
    requiring a namespace or class-definition modification.  Classes should live
    at the top level.  ``__metaclass__`` is exempt from this.
- * Do not use unecessary parenthesis in conditionals.  ``if((something) and
+ * Do not use unnecessary parenthesis in conditionals.  ``if((something) and
    (something_else))`` should be rewritten as ``if something and
    something_else``.  Python is more forgiving than C.
  * Avoid copying memory when possible. For example, don't do ``a =

source/advanced/parallel_computation.rst

 
 To run scripts in parallel, you must first install
 `mpi4py <http://code.google.com/p/mpi4py>`_.
-Instructions for doing so are provided on the mpipy website.  Once that has
+Instructions for doing so are provided on the mpi4py website.  Once that has
 been accomplished, you're all done!  You just need to launch your scripts with
 ``mpirun`` (or equivalent) and signal to YT
 that you want to run them in parallel.
 ++++++++++++++++++
 
 The alternative to spatial decomposition is a simple round-robin of the grids.
-This process alows YT to pool data access to a given Enzo data file, which
+This process allows YT to pool data access to a given Enzo data file, which
 ultimately results in faster read times and better parallelism.
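
A round-robin assignment like the one described can be sketched in plain Python (this is an illustrative sketch, not yt's internal code):

```python
# Hypothetical round-robin work assignment: each of `size` tasks takes
# every size-th grid, starting at its own rank.
def round_robin(grids, rank, size):
    return grids[rank::size]

grids = list(range(10))  # stand-ins for grid objects
parts = [round_robin(grids, r, 4) for r in range(4)]
# Together the four tasks cover every grid exactly once.
```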
 
 The following operations use grid decomposition:
 
 In a fashion similar to grid decomposition, computation can be parallelized
 over objects. This is especially useful for
-`embarrasingly parallel <http://en.wikipedia.org/wiki/Embarrassingly_parallel>`_
-tasks where the items to be worked on can be split into seperate chunks and
+`embarrassingly parallel <http://en.wikipedia.org/wiki/Embarrassingly_parallel>`_
+tasks where the items to be worked on can be split into separate chunks and
 saved to a list. The list is then split up and each MPI task performs parts of
 it independently.
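
A minimal sketch of splitting such a list into contiguous per-task chunks (hypothetical helper, not the yt implementation):

```python
def split_list(items, rank, size):
    # Contiguous chunks; any remainder is spread over the first tasks.
    q, r = divmod(len(items), size)
    start = rank * q + min(rank, r)
    stop = start + q + (1 if rank < r else 0)
    return items[start:stop]

# Ten work items split across three "MPI tasks":
chunks = [split_list(list(range(10)), r, 3) for r in range(3)]
```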
 
 
 
 You can also request a fixed number of processors to calculate each
-angular momenum vector.  For example, this script will calculate each angular
+angular momentum vector.  For example, this script will calculate each angular
 momentum vector using a workgroup of four processors.
 
 .. code-block:: python
  * Projections: projections are parallelized utilizing a quad-tree approach.
    Data is loaded for each processor, typically by a process that consolidates
    open/close/read operations, and each grid is then iterated over and cells
-   are deposited into a data structure that stores values correpsonding to
+   are deposited into a data structure that stores values corresponding to
    positions in the two-dimensional plane.  This provides excellent load
    balancing, and in serial is quite fast.  However, as of yt 2.3, the
    operation by which quadtrees are joined across processors scales poorly;
   * If you are using object-based parallelism but doing CPU-intensive computations
     on each object, you may find that setting ``num_procs`` equal to the 
     number of processors per compute node can lead to significant speedups.
-    By default, most mpi implimentations will assign tasks to processors on a
+    By default, most MPI implementations will assign tasks to processors on a
     'by-slot' basis, so this setting will tell yt to do computations on a single
     object using only the processors on a single compute node.  A nice application
     for this type of parallelism is calculating a list of derived quantities for 

source/advanced/plugin_file.rst

        return load(RUNDIR + fn)
 
 In this case, we've written ``load_run`` to look in a specific directory to see
-if it can find an output with the given name.  So now we can write scritps that
+if it can find an output with the given name.  So now we can write scripts that
 use this function:
 
 .. code-block:: python

source/advanced/reason_architecture.rst

 
 The ExtJS philosophy of separating models, views and controllers works quite
 well for what Reason is designed to do.  While more detailed discussions of
-this can be found in the ExtJS4 docuemntation, for our purposes this separates
+this can be found in the ExtJS4 documentation, for our purposes this separates
 into:
 
  * Model: In our situation, the model is typically the same as a store.  This
    is the fundamental data object being passed around.  For the purposes of using
    and developing Reason, the main data objects are the cells, the parameter
-   files and attendent yt data objects, the payloads from the server (including
+   files and attendant yt data objects, the payloads from the server (including
    things like new images to display in the plot windows) and the widgets.
    Each of these has a store, and each can be operated on and viewed in
    different ways.
 and some slice parameters, and then creates a widget payload describing the
 plot window that uses that slice.  This plot window knows how to deliver
 payloads back to the clientside web browser, which are threaded through to the
-isntance of the PlotWindow controller's ``applyPayload`` mechanism.  In this
+instance of the PlotWindow controller's ``applyPayload`` mechanism.  In this
 way, the Javascript code can call methods in Python, which can then supply
 results that are rich in data.
 
 
    reason.server.execute(code, false);
 
-To call a function implemneted on the ExtDirectREPL, replace ``method`` and
+To call a function implemented on the ExtDirectREPL, replace ``method`` and
 supply the first argument as a string of the method name.  Widget creation
 should be done in this manner on the server-side.
 

source/analysis_modules/absorption_spectrum.rst

 Inclusion of the peculiar velocity requires setting **get_los_velocity** to True in 
 the call to :meth:`make_light_ray`.
 
-The spectrum generator will output a file containing the wavelenght and normalized flux.  
+The spectrum generator will output a file containing the wavelength and normalized flux.  
 It will also output a text file listing all important lines.
 
 .. image:: _images/spectrum_full.png

source/analysis_modules/ellipsoid_analysis.rst

 To use the ellipsoid container to get field information, you
 will have to first determine the ellipsoid's parameters.  This can be
 done with the haloes obtained from halo finding, but essentially it
-takes the informaton:
+takes the information:
 
   #. Center position x,y,z
   #. List of particles position x,y,z

source/analysis_modules/halo_mass_function.rst

 
 When an analytical fit is desired, in nearly all cases several cosmological
 parameters will need to be specified by hand. These parameters are not
-stored with Enzo datsets. In the case where both the haloes and an analytical
+stored with Enzo datasets. In the case where both the haloes and an analytical
 fit are desired, the analysis is instantiated as below.
 ``sigma8input``, ``primordial_index`` and ``omega_baryon0`` should be set to
 the same values as

source/analysis_modules/light_ray_generator.rst

 .. sectionauthor:: Britton Smith <brittonsmith@gmail.com>
 
 Light rays are similar to light cones (:ref:`light-cone-generator`) in how  
-they stack mulitple datasets together to span a redshift interval.  Unlike 
+they stack multiple datasets together to span a redshift interval.  Unlike 
 light cones, which stack randomly oriented projections from each 
 dataset to create synthetic images, light rays use thin pencil beams to 
 simulate QSO sight lines.
    the cookbook for an example.  Default: False.
 
  * **nearest_halo_fields** (*list*): A list of fields to be calculated for the 
-   halos nearest to every lixel in the ray.  Default: None.
+   halos nearest to every pixel in the ray.  Default: None.
 
  * **halo_profiler_parameters** (*dict*): A dictionary of parameters to be 
    passed to the HaloProfiler to create the appropriate data used to get 

source/analysis_modules/running_halofinder.rst

 of parallelization described above can be quite effective, it has its limits.
 In particular
 for highly unbalanced datasets, where most of the particles are in a single
-part of the simulational volume, it can become impossible to subdivide the
+part of the simulation volume, it can become impossible to subdivide the
 volume sufficiently to fit a subvolume into a single node's memory.
 
 Parallel HOP is designed to be parallel at all levels of operation. There is
     matter when building haloes. Default=True.
   * ``resize``, True/False: Parallel HOP can load-balance the particles, such that
     each subvolume has the same number of particles.
-    In general, this option is a good idea for simulational volumes
+    In general, this option is a good idea for simulation volumes
     smaller than about 300 Mpc/h, and absolutely required for those under
     100 Mpc/h. For larger volumes the particles are distributed evenly enough
     that this option is unnecessary. Default=True.
 groups in six phase-space dimensions and one time dimension, which 
 allows for robust (grid-independent, shape-independent, and noise-
 resilient) tracking of substructure. The code is prepackaged with yt, 
-but separetely available <http://code.google.com/p/rockstar>. The lead 
+but separately available at http://code.google.com/p/rockstar. The lead 
 developer is Peter Behroozi, and the methods are described in `Behroozi
 et al. 2011 <http://rockstar.googlecode.com/files/rockstar_ap101911.pdf>`_.
 

source/analysis_modules/sunrise_export.rst

 .. sectionauthor:: Christopher Moody <cemoody@ucsc.edu>
 .. versionadded:: 1.8
 
-The yt-Sunrise exporter essentially takes grid cell data and translates it into a binary octree format, atatches star particles, and saves the output to a FITS file Sunrise can read. For every cell, the gas mass, metals mass (a fraction of which is later assumed to be in the form of dust), and the temperature are saved. Star particles are defined entirely by their mass, position, metallicity, and a 'radius.' This guide outlines the steps to exporting the data, troubleshoots common problems, and reviews recommended sanity checks. 
+The yt-Sunrise exporter essentially takes grid cell data and translates it into a binary octree format, attaches star particles, and saves the output to a FITS file Sunrise can read. For every cell, the gas mass, metals mass (a fraction of which is later assumed to be in the form of dust), and the temperature are saved. Star particles are defined entirely by their mass, position, metallicity, and a 'radius.' This guide outlines the steps to exporting the data, troubleshoots common problems, and reviews recommended sanity checks. 
 
 Simple Export
 -------------
 Sanity Check: Young Stars
 -------------------------
 
-Young stars are treated in a special way in Sunrise. Stars under 10Myr do not emit in the normal fashion; instead they are replaced with MAPPINGS III particles that emulate the emission characteristics of star forming clusters. Among other things this involves a calculation of the local pressure, P/k, which Sunrise reports for debugging purposes and is something you should also check. 
+Young stars are treated in a special way in Sunrise. Stars under 10 Myr do not emit in the normal fashion; instead they are replaced with MAPPINGS III particles that emulate the emission characteristics of star forming clusters. Among other things this involves a calculation of the local pressure, P/k, which Sunrise reports for debugging purposes and is something you should also check. 
 
-The code snippet below finds the location of every star under 10Myr and looks up the cell containing it:
+The code snippet below finds the location of every star under 10 Myr and looks up the cell containing it:
 
 .. code-block:: python
 
 .. code-block:: python
 
 	def _Pk(field,data):
-	    #calculate pressure over boltzmann constant: P/k=(n/V)T
+	    #calculate pressure over Boltzmann's constant: P/k=(n/V)T
 	    #Local stellar ISM values are ~16500 Kcm^-3
 	    vol = data['CellVolumeCode'].astype('float64')*data.pf['cm']**3.0 #volume in cm
 	    m_g = data["CellMassMsun"]*1.988435e33 #mass of H in g

source/analysis_modules/two_point_functions.rst

 Examples of two point functions are structure functions and two-point
 correlation functions.
 It can analyze the entire simulation, or a small rectangular subvolume.
-The results can be output in convenient text forma and in efficient
+The results can be output in convenient text format and in efficient
 HDF5 files.
 
 Requirements
     # Specify the dataset on which we want to base our work.
     pf = load('data0005')
     
-    # We work in simulational units, these are for conversion.
+    # We work in simulation units; these are for conversion.
     vol_conv = pf['cm'] ** 3
     sm = pf.h.get_smallest_dx()**3
     

source/analyzing/generating_processed_data.rst

 
 
 This presents something of a challenge for visualization, as it will require
-the transformation of a variable mesh of points consistening of positions and
+the transformation of a variable mesh of points consisting of positions and
 sizes into a fixed-size array that appears like an image.  This process is that
 of pixelization, which ``yt`` handles transparently internally.  You can access
 this functionality by constructing a
    ray = pf.h.ray(  (0.3, 0.5, 0.9), (0.1, 0.8, 0.5) )
    print ray["Density"]
 
-The points are ordered, but the ray is also traversing cells of varyins length,
+The points are ordered, but the ray is also traversing cells of varying length,
 as well as taking a varying distance to cross each cell.  To determine the
 distance traveled by the ray within each cell (for instance, for integration)
 the field ``dt`` is available; this field will sum to 1.0, as the ray's path

source/analyzing/objects.rst

 
    sp = pf.h.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
 
-and then look at the temperaturre of its cells within it via:
+and then look at the temperature of its cells within it via:
 
 .. code-block:: python
 
 Querying and Subselecting Objects
 ---------------------------------
 
-Often when analyzing astrophysical objects, only regions that posess certain
+Often when analyzing astrophysical objects, only regions that possess certain
 properties are of interest.  For instance, in a global galactic disk
 simulation, perhaps you want to examine only the cold gas, or only the hot gas.
 These criteria can be applied to selections of data in yt using one of two
 
    dense = sp.cut_region([ "grid['dens'] > 1e5", "grid['temp'] < 200" ])
 
-Each of the criteria will be evaluted independently and the resulting
+Each of the criteria will be evaluated independently and the resulting
 intersection will be returned.
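
In plain Python (with hypothetical data, not yt's internals), the evaluate-independently-then-intersect behavior looks like:

```python
# Two independent criteria; a cell is kept only if it passes both.
cells = [
    {"dens": 2e5, "temp": 150},  # passes both
    {"dens": 5e4, "temp": 100},  # fails the density cut
    {"dens": 3e5, "temp": 300},  # fails the temperature cut
]
criteria = [lambda c: c["dens"] > 1e5, lambda c: c["temp"] < 200]
# Evaluate each criterion on every cell, then intersect the index sets.
passing = [set(i for i, c in enumerate(cells) if crit(c)) for crit in criteria]
selected = set.intersection(*passing)
```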
 
 .. _extracting-connected-sets:
 To use this, call
 :meth:`~yt.data_objects.data_containers.AMR3DData.extract_connected_sets` on
 any 3D data object.  This requests a field, the number of levels of level sets to
-extract, the min and the max value beween which sets will be identified, and
+extract, the min and the max value between which sets will be identified, and
 whether or not to conduct it in log space.
 
 .. code-block:: python

source/analyzing/time_series_analysis.rst

 :class:`~yt.data_objects.time_series.TimeSeriesData` works in parallel by
 default (see :ref:`parallel-computation`), so you can use a ``TimeSeriesData``
 object to quickly and easily parallelize your analysis.  Since doing the same
-analysis task on many simulation outputs is 'embarrasingly' parallel, this
+analysis task on many simulation outputs is 'embarrassingly' parallel, this
 naturally allows for almost arbitrary speedup - limited only by the number of
 available processors and the number of simulation outputs.
 

source/configuration.rst

 * ``loadfieldplugins`` (default: ``'True'``): Do we want to load the plugin file?
 * ``logfile`` (default: ``'False'``): Should we output to a log file in the
   filesystem?
-* ``loglevel`` (default: ``'20'``): What is the threshold (0 to 50) for outputing
+* ``loglevel`` (default: ``'20'``): What is the threshold (0 to 50) for outputting
   log files?
 * ``maximumstoredpfs`` (default: ``'500'``): How many parameter files should be
   tracked between sessions?

source/cookbook/constructing_data_objects.rst

 ~~~~~~~~~~~~~~~~~~~~
 
 Below shows the creation of a number of boolean data objects, which are built
-upon previously-defined data objects. The boolean data ojbects can be used like
+upon previously-defined data objects. The boolean data objects can be used like
 any other, except for a few cases.  Please see :ref:`boolean_data_objects` for
 more information.
 

source/cookbook/simple_plots.rst

 
 This demonstrates how to make a phase plot.  Phase plots can be thought of as
 two-dimensional histograms, where the value is either the weighted-average or
-the total accumualtion in a cell.
+the total accumulation in a cell.
 
 .. yt_cookbook:: simple_phase.py
 

source/orientation/first_steps.rst

 available to you.  We now use the ``load`` function, which accepts a filename
 and then attempts to guess the file type.  If it's able to figure out what kind
 of simulation output it is, it will parse the parameter file to determine a
-numebr of components, and then return to you an object.  The convention in the
+number of components, and then return to you an object.  The convention in the
 yt documentation is to call this object ``pf`` but you can call it whatever you
 like.  We'll load up the dataset that comes with yt::
 
 This function can accept a few more arguments, but this covers the essentials.
 We supply it a center and a radius.  The radius is specified in the units
 native to the simulation domain; this is not terribly useful, so we have used a
-unit conversion to convert from 10 kiloparseccs into the native simulation
+unit conversion to convert from 10 kiloparsecs into the native simulation
 units.  You can do this with a number of different units (all of which are
 actually listed in the ``print_stats`` output) and the opposite (multiplication
 by the conversion factor) works for conversion from code units back to a

source/orientation/making_plots.rst

    >>> slc.save('zoom')
 
 This will save a new plot to disk with a different filename - prepended with
-'zoom' instead of the name of the parmaeter file. If you want to set the width
+'zoom' instead of the name of the parameter file. If you want to set the width
 manually, you can do that as well. For example, the following sequence of
 commands will create a slice, set the width of the plot to 10 kiloparsecs, and
 save it to disk.
 There are a number of annotations available.  The full list is available in
 :ref:`callbacks`.
 
-Projectiions
+Projections
 ^^^^^^^^^^^^
 
 It can be limiting to only look at slices through 3D data.  In most cases, doing
 This allows, for instance, the calculation of the average Temperature as a
 function of Density and velocity.  Or, it allows the distribution of all the
 mass as a function of Density and Temperature.  I have used phase plots to good
-effect to show the variation of chemical quantites as a function of spatial and
+effect to show the variation of chemical quantities as a function of spatial and
 angular distribution, for instance.  There are several ways to create a phase
 plot; we'll actually show the most flexible method, which uses data containers.
 There's a convenience function that takes a center and a radius and makes one

source/orientation/python_introduction.rst

    >>> print "Pi is precisely %0.2f" % (3.1415926)
 
 This took the number we fed it (3.1415926) and printed it out as a floating
-point number with two decimel places.  Now let's try something a bit different
+point number with two decimal places.  Now let's try something a bit different
 -- let's print out both the name of the number and its value.::
 
    >>> print "%s is precisely %0.2f" % ("pi", 3.1415926)
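
For reference, the same format specifiers work unchanged with Python 3's print function (the examples above use Python 2 print statements):

```python
# %-style formatting: %s substitutes a string, %0.2f a float rounded
# to two decimal places.
message = "%s is precisely %0.2f" % ("pi", 3.1415926)
print(message)               # pi is precisely 3.14
print("%0.4f" % 3.1415926)   # 3.1416 (note the rounding)
```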
    >>> print a + 5.1
    >>> print a / 2.0
 
-Because of a historial aversion to floating point division in Python (which is
+Because of a historical aversion to floating point division in Python (which is
 now changing) it's always safest to ensure that either the numerator or the
 denominator is a floating point number.
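
A quick sketch of the safe pattern, which behaves the same in Python 2 and Python 3:

```python
a = 7
true_quotient = a / 2.0    # float denominator forces true division: 3.5
floor_quotient = a // 2    # explicit floor division is always available: 3
```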
 
 As a brief aside, the case/switch statement in Python is typically executed
 using an if/elif/else block; this can be done using more complicated
 dictionary-type statements with functions, but that typically only adds
-unncessary complexity.
+unnecessary complexity.
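
A hypothetical sketch of a case/switch-style dispatch written as an if/elif/else block:

```python
def handle(command):
    # Reads like a case statement, with no extra machinery needed.
    if command == "start":
        return "starting"
    elif command == "stop":
        return "stopping"
    else:
        return "unknown command"
```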
 
 For a simple example of how to do an if/else statement, we'll return to the
 idea of iterating over a loop of numbers.  We'll use the ``%`` operator, which
 sequence!)
 
 To get started with array operations, let's first import the NumPy library.
-This is the first time we've seen an import in this orientiation, so we'll
+This is the first time we've seen an import in this orientation, so we'll
 dwell for a moment on what this means.  When a library is imported, it is read
 from disk, the functions are loaded into memory, and they are made available
 to the user.  So when we execute::

source/visualizing/image_panner.rst

 The mechanism for creating images from reduced 2D AMR data in yt is to feed
 that data through a pixelization routine, which takes a flattened array of
 pixel locations, widths and values and returns a 2D buffer of data suitable for
-plotitng as an image.  This enables us to pan and zoom virtually at will, with
+plotting as an image.  This enables us to pan and zoom virtually at will, with
 no extra cost from re-projecting the simulation.
 
 Typically, this is all hidden from the user: the plot interface mostly just
 The creation of windowed image panners is slightly different than standard
 image panners: each one is explicitly told how big the final image is, and
 how big their portion is, and the index at which they should start.  Finally,
-ths list is supplied to an instance of a controller object.
+this list is supplied to an instance of a controller object.
 
 When the following code is executed:
 

source/visualizing/manual_plotting.rst

 
 For slices and projects, ``yt`` provides a manual plotting interface based on
 the :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer` (hereafter
-referred to as FRB) object. Desipte its somewhat unwieldy name, at its heart, an
+referred to as FRB) object. Despite its somewhat unwieldy name, at its heart, an
 FRB is a very simple object: it's essentially a window into your data: you give
 it a center and a width or a left and right edge, and an image resolution, and
 the FRB returns a fully pixelized image. The simplest way to
 Line Plots
 ----------
 
-This is perhaps the simplest thing to do. ``yt`` provides a number of one dimensional objects, and these return a 1-D numpy array of their contents with direct dictionary acess. As a simple example, take a :class:`~yt.data_objects.data_containers.AMROrthoRayBase` object, which can be created from a hierarchy by calling ``pf.h.ortho_ray(axis, center)``. 
+This is perhaps the simplest thing to do. ``yt`` provides a number of one dimensional objects, and these return a 1-D numpy array of their contents with direct dictionary access. As a simple example, take a :class:`~yt.data_objects.data_containers.AMROrthoRayBase` object, which can be created from a hierarchy by calling ``pf.h.ortho_ray(axis, center)``. 
 
 .. code-block:: python
 

source/visualizing/plots.rst

 :class:`~yt.visualization.plot_window.PlotWindow` interface is useful
 for taking a quick look at simulation outputs.  The ``PlotCollection``
 interface is best for in-depth analysis of a simulation, allowing a
-collection of plots to be set up, rendered, and saved simultaenously.
+collection of plots to be set up, rendered, and saved simultaneously.
 Below we will summarize how to use both plotting interfaces.
 
 .. _simple-inspection:
 :class:`~yt.visualization.plot_window.ProjectionPlot` for the full
 class description.
 
-Off Axis SLices
+Off Axis Slices
 ~~~~~~~~~~~~~~~
 
 Off axis slice plots can be generated in much the same way as
 :class:`~yt.data_objects.data_containers.AMRCuttingPlaneBase` to slice
 through simulation domains at an arbitrary oblique angle.  A
 :class:`~yt.visualization.plot_window.OffAxisSlicePlot` can be
-instantiated by specifying a parmaeter file, the normal to the cutting
+instantiated by specifying a parameter file, the normal to the cutting
 plane, and the name of the fields to plot.  For example:
 
 .. code-block:: python
    pf = load("RedshiftOutput0005")
    L = [1,1,0] # vector normal to cutting plane
    north_vector = [1,-1,0]
-   cut = OffAxisSlicePlot(pf,L,'Density',north_vector=noth_vector)
+   cut = OffAxisSlicePlot(pf,L,'Density',north_vector=north_vector)
    cut.save()
 
 creates an off-axis slice in the plane perpendicular to ``L``,
 ``PlotCollection`` holds a collection of plots that get created,
 rendered and saved to disk all at once.
 :class:`~yt.visualization.plot_collection.PlotCollection` is
-instantiated qith a given parameter file and (optionally) a center.
+instantiated with a given parameter file and (optionally) a center.
 
 At this point we can add various plots to our plot collection.  For starters,
 we'll take a look at creating images.  Images can be created from data in a
 for :class:`~yt.visualization.plot_collection.PlotCollection`.
 
 In addition, and as discussed in :ref:`callbacks`, a number of modifications
-can be made to a plot.  This includes velocit vectors, contours, particle
+can be made to a plot.  This includes velocity vectors, contours, particle
 plotting, and so on.
 
 

source/visualizing/streamlines.rst

 Running in Parallel
 --------------------
 
-The integration of the streamline paths is "embarassingly" parallelized by
+The integration of the streamline paths is "embarrassingly" parallelized by
 splitting the streamlines up between the processors.  Upon completion,
 each processor has access to all of the streamlines through the use of
 a reduction operation.
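
The pattern can be sketched in plain Python, simulating the tasks in one process (a real run would use an MPI gather; the integrator here is a stand-in):

```python
def integrate(seed):
    # Stand-in for integrating one streamline from its seed point.
    return [seed, seed + 0.5, seed + 1.0]

seeds = [0.0, 1.0, 2.0, 3.0]
size = 2
# Each "task" integrates its share of the streamlines...
partials = [[integrate(s) for s in seeds[rank::size]] for rank in range(size)]
# ...and the reduction gathers every streamline onto every task.
all_streamlines = [line for part in partials for line in part]
```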

source/visualizing/volume_rendering.rst

 
 The camera interface allows the user to move the camera about the domain, as
 well as providing interfaces for zooming in and out.  Furthermore, ``yt`` now
-includes a steroscopic camera
+includes a stereoscopic camera
 (:class:`~yt.visualization.volume_rendering.camera.StereoPairCamera`).
 
 Much like most data objects, the