Commits

Mike Bayer committed 2a8966c

whitespace removal

Files changed (251)

 
 Major SQLAlchemy features include:
 
-* An industrial strength ORM, built 
+* An industrial strength ORM, built
   from the core on the identity map, unit of work,
   and data mapper patterns.   These patterns
-  allow transparent persistence of objects 
+  allow transparent persistence of objects
   using a declarative configuration system.
   Domain models
   can be constructed and manipulated naturally,
   and changes are synchronized with the
   current transaction automatically.
 * A relationally-oriented query system, exposing
-  the full range of SQL's capabilities 
-  explicitly, including joins, subqueries, 
-  correlation, and most everything else, 
+  the full range of SQL's capabilities
+  explicitly, including joins, subqueries,
+  correlation, and most everything else,
   in terms of the object model.
-  Writing queries with the ORM uses the same 
-  techniques of relational composition you use 
+  Writing queries with the ORM uses the same
+  techniques of relational composition you use
   when writing SQL.  While you can drop into
   literal SQL at any time, it's virtually never
   needed.
-* A comprehensive and flexible system 
+* A comprehensive and flexible system
   of eager loading for related collections and objects.
   Collections are cached within a session,
-  and can be loaded on individual access, all 
+  and can be loaded on individual access, all
   at once using joins, or by query per collection
   across the full result set.
-* A Core SQL construction system and DBAPI 
+* A Core SQL construction system and DBAPI
   interaction layer.  The SQLAlchemy Core is
   separate from the ORM and is a full database
   abstraction layer in its own right, and includes
-  an extensible Python-based SQL expression 
-  language, schema metadata, connection pooling, 
+  an extensible Python-based SQL expression
+  language, schema metadata, connection pooling,
   type coercion, and custom types.
-* All primary and foreign key constraints are 
+* All primary and foreign key constraints are
   assumed to be composite and natural.  Surrogate
-  integer primary keys are of course still the 
+  integer primary keys are of course still the
   norm, but SQLAlchemy never assumes or hardcodes
   to this model.
 * Database introspection and generation.  Database
   schemas can be "reflected" in one step into
   Python structures representing database metadata;
-  those same structures can then generate 
+  those same structures can then generate
   CREATE statements right back out - all within
   the Core, independent of the ORM.
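 
 As a minimal, hedged sketch of the reflection feature above (the database file
 and table name are hypothetical)::
 
     from sqlalchemy import create_engine, MetaData
 
     engine = create_engine("sqlite:///some.db")
     meta = MetaData()
 
     # reflect the existing schema into Python metadata in one step
     meta.reflect(bind=engine)
 
     # the reflected Table objects can then emit CREATE statements
     # right back out via meta.create_all(), independent of the ORM
     users_table = meta.tables['users']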
 
   that should be fully exposed.   SQLAlchemy's
   ORM provides an open-ended set of patterns
   that allow a developer to construct a custom
-  mediation layer between a domain model and 
+  mediation layer between a domain model and
   a relational schema, turning the so-called
   "object relational impedance" issue into
   a distant memory.
   of both the object model as well as the relational
   schema.   SQLAlchemy only provides the means
   to automate the execution of these decisions.
-* With SQLAlchemy, there's no such thing as 
-  "the ORM generated a bad query" - you 
-  retain full control over the structure of 
+* With SQLAlchemy, there's no such thing as
+  "the ORM generated a bad query" - you
+  retain full control over the structure of
   queries, including how joins are organized,
-  how subqueries and correlation is used, what 
+  how subqueries and correlation are used, and what
   columns are requested.  Everything SQLAlchemy
   does is ultimately the result of a developer-
   initiated decision.
 * Don't use an ORM if the problem doesn't need one.
   SQLAlchemy consists of a Core and separate ORM
   component.   The Core offers a full SQL expression
-  language that allows Pythonic construction 
+  language that allows Pythonic construction
   of SQL constructs that render directly to SQL
   strings for a target database, returning
   result sets that are essentially enhanced DBAPI
   the start and end of a series of operations.
 * Never render a literal value in a SQL statement.
   Bound parameters are used to the greatest degree
-  possible, allowing query optimizers to cache 
+  possible, allowing query optimizers to cache
   query plans effectively and making SQL injection
   attacks a non-issue.
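 
 A minimal sketch of the bound-parameter style (the table and value are
 hypothetical)::
 
     from sqlalchemy import create_engine, text
 
     engine = create_engine("sqlite://")
     conn = engine.connect()
 
     # "ed" travels as a bound parameter; it is never rendered
     # into the SQL string itself
     conn.execute(text("SELECT * FROM users WHERE name = :name"), name="ed")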
 
 Installation / Requirements
 ---------------------------
 
-Full documentation for installation is at 
+Full documentation for installation is at
 `Installation <http://www.sqlalchemy.org/docs/intro.html#installation>`_.
 
 Getting Help / Development / Bug reporting

doc/build/builder/builders.py

         builder.config.html_context['site_base'] = builder.config['site_base']
 
         self.lookup = TemplateLookup(directories=builder.config.templates_path,
-            #format_exceptions=True, 
+            #format_exceptions=True,
             imports=[
                 "from builder import util"
             ]
 
         # RTD layout
         if rtd:
-            # add variables if not present, such 
+            # add variables if not present, such
             # as if local test of READTHEDOCS variable
             if 'MEDIA_URL' not in context:
                 context['MEDIA_URL'] = "http://media.readthedocs.org/"
             'sqlpopup':[
                 (
                     r'(.*?\n)((?:PRAGMA|BEGIN|SELECT|INSERT|DELETE|ROLLBACK|COMMIT|ALTER|UPDATE|CREATE|DROP|PRAGMA|DESCRIBE).*?(?:{stop}\n?|$))',
-                    bygroups(using(PythonConsoleLexer), Token.Sql.Popup), 
+                    bygroups(using(PythonConsoleLexer), Token.Sql.Popup),
                     "#pop"
                 )
             ],
             'opensqlpopup':[
                 (
                     r'.*?(?:{stop}\n*|$)',
-                    Token.Sql, 
+                    Token.Sql,
                     "#pop"
                 )
             ]
             'sqlpopup':[
                 (
                     r'(.*?\n)((?:PRAGMA|BEGIN|SELECT|INSERT|DELETE|ROLLBACK|COMMIT|ALTER|UPDATE|CREATE|DROP|PRAGMA|DESCRIBE).*?(?:{stop}\n?|$))',
-                    bygroups(using(PythonLexer), Token.Sql.Popup), 
+                    bygroups(using(PythonLexer), Token.Sql.Popup),
                     "#pop"
                 )
             ],
             'opensqlpopup':[
                 (
                     r'.*?(?:{stop}\n*|$)',
-                    Token.Sql, 
+                    Token.Sql,
                     "#pop"
                 )
             ]

doc/build/conf.py

 
 site_base = "http://www.sqlalchemy.org"
 
-# arbitrary number recognized by builders.py, incrementing this 
+# arbitrary number recognized by builders.py, incrementing this
 # will force a rebuild
 build_number = 3
 

doc/build/copyright.rst

 
 This is the MIT license: `<http://www.opensource.org/licenses/mit-license.php>`_
 
-Copyright (c) 2005-2012 Michael Bayer and contributors. 
+Copyright (c) 2005-2012 Michael Bayer and contributors.
 SQLAlchemy is a trademark of Michael Bayer.
 
 Permission is hereby granted, free of charge, to any person obtaining a copy of this

doc/build/core/connections.rst

 connection is retrieved from the connection pool at the point at which
 :class:`.Connection` is created.
 
-The returned result is an instance of :class:`.ResultProxy`, which 
+The returned result is an instance of :class:`.ResultProxy`, which
 references a DBAPI cursor and provides a largely compatible interface
 with that of the DBAPI cursor.   The DBAPI cursor will be closed
-by the :class:`.ResultProxy` when all of its result rows (if any) are 
+by the :class:`.ResultProxy` when all of its result rows (if any) are
 exhausted.  A :class:`.ResultProxy` that returns no rows, such as that of
-an UPDATE statement (without any returned rows), 
+an UPDATE statement (without any returned rows),
 releases cursor resources immediately upon construction.
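 
 For example, a rough sketch (assuming a :class:`.Connection` named ``conn``
 and a hypothetical ``users`` table)::
 
     result = conn.execute("select username from users")
     for row in result:
         print "username:", row['username']
     # all rows are now exhausted - the DBAPI cursor has been closed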
 
 When the :meth:`~.Connection.close` method is called, the referenced DBAPI
 of weakref callbacks - *never* the ``__del__`` method) - however it's never a
 good idea to rely upon Python garbage collection to manage resources.
 
-Our example above illustrated the execution of a textual SQL string. 
-The :meth:`~.Connection.execute` method can of course accommodate more than 
+Our example above illustrated the execution of a textual SQL string.
+The :meth:`~.Connection.execute` method can of course accommodate more than
 that, including the variety of SQL expression constructs described
 in :ref:`sqlexpression_toplevel`.
 
 Using Transactions
 ==================
 
-.. note:: 
+.. note::
 
-  This section describes how to use transactions when working directly 
+  This section describes how to use transactions when working directly
   with :class:`.Engine` and :class:`.Connection` objects. When using the
   SQLAlchemy ORM, the public API for transaction control is via the
   :class:`.Session` object, which makes usage of the :class:`.Transaction`
 transaction is in progress. The detection is based on the presence of the
 ``autocommit=True`` execution option on the statement.   If the statement
 is a text-only statement and the flag is not set, a regular expression is used
-to detect INSERT, UPDATE, DELETE, as well as a variety of other commands 
+to detect INSERT, UPDATE, DELETE, as well as a variety of other commands
 for a particular backend::
 
     conn = engine.connect()
     conn.execute("INSERT INTO users VALUES (1, 'john')")  # autocommits
 
 The "autocommit" feature is only in effect when no :class:`.Transaction` has
-otherwise been declared.   This means the feature is not generally used with 
-the ORM, as the :class:`.Session` object by default always maintains an 
+otherwise been declared.   This means the feature is not generally used with
+the ORM, as the :class:`.Session` object by default always maintains an
 ongoing :class:`.Transaction`.
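 
 As a hedged sketch, the option can be enabled on an individual statement
 (the statement text is hypothetical, and ``conn`` is assumed from the
 example above)::
 
     from sqlalchemy import text
 
     # mark this textual statement to autocommit upon execution
     stmt = text("UPDATE users SET name = 'ed'").execution_options(autocommit=True)
     conn.execute(stmt)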
 
 Full control of the "autocommit" behavior is available using the generative
 :class:`.Connection`.  This was illustrated using the :meth:`~.Engine.execute` method
 of :class:`.Engine`.
 
-In addition to "connectionless" execution, it is also possible 
-to use the :meth:`~.Executable.execute` method of 
+In addition to "connectionless" execution, it is also possible
+to use the :meth:`~.Executable.execute` method of
 any :class:`.Executable` construct, which is a marker for SQL expression objects
 that support execution.   The SQL expression object itself references an
 :class:`.Engine` or :class:`.Connection` known as the **bind**, which it uses
 on the expression itself, utilizing the fact that either an
 :class:`~sqlalchemy.engine.base.Engine` or
 :class:`~sqlalchemy.engine.base.Connection` has been *bound* to the expression
-object (binding is discussed further in 
+object (binding is discussed further in
 :ref:`metadata_toplevel`):
 
 .. sourcecode:: python+sql
     call_operation3(conn)
     conn.close()
 
-Calling :meth:`~.Connection.close` on the "contextual" connection does not release 
+Calling :meth:`~.Connection.close` on the "contextual" connection does not release
 its resources until all other usages of that resource are closed as well, and
 until any ongoing transactions are rolled back or committed.
 

doc/build/core/engines.rst

 Supported Databases
 ====================
 
-SQLAlchemy includes many :class:`~sqlalchemy.engine.base.Dialect` implementations for various 
-backends; each is described as its own package in the :ref:`sqlalchemy.dialects_toplevel` package.  A 
+SQLAlchemy includes many :class:`~sqlalchemy.engine.base.Dialect` implementations for various
+backends; each is described as its own package in the :ref:`sqlalchemy.dialects_toplevel` package.  A
 SQLAlchemy dialect always requires that an appropriate DBAPI driver is installed.
 
-The table below summarizes the state of DBAPI support in SQLAlchemy 0.7.  The values 
+The table below summarizes the state of DBAPI support in SQLAlchemy 0.7.  The values
 translate as:
 
 * yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.
 :class:`.Engine` per database established within an
 application, rather than creating a new one for each connection.
 
-.. note:: 
+.. note::
 
    :class:`.QueuePool` is not used by default for SQLite engines.  See
    :ref:`sqlite_toplevel` for details on SQLite connection pool usage.
 namespace of SA loggers that can be turned on is as follows:
 
 * ``sqlalchemy.engine`` - controls SQL echoing.  Set to ``logging.INFO`` for SQL query output, ``logging.DEBUG`` for query + result set output.
-* ``sqlalchemy.dialects`` - controls custom logging for SQL dialects.  See the documentation of individual dialects for details. 
+* ``sqlalchemy.dialects`` - controls custom logging for SQL dialects.  See the documentation of individual dialects for details.
 * ``sqlalchemy.pool`` - controls connection pool logging.  Set to ``logging.INFO`` or lower to log connection pool checkouts/checkins.
 * ``sqlalchemy.orm`` - controls logging of various ORM functions.  Set to ``logging.INFO`` for information on mapper configurations.
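 
 For example, a minimal sketch using the standard ``logging`` module::
 
     import logging
 
     logging.basicConfig()
     # log all SQL statements emitted by the engine
     logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)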
 
 
    The SQLAlchemy :class:`.Engine` conserves Python function call overhead
    by only emitting log statements when the current logging level is detected
-   as ``logging.INFO`` or ``logging.DEBUG``.  It only checks this level when 
-   a new connection is procured from the connection pool.  Therefore when 
+   as ``logging.INFO`` or ``logging.DEBUG``.  It only checks this level when
+   a new connection is procured from the connection pool.  Therefore when
    changing the logging configuration for an already-running application, any
    :class:`.Connection` that's currently active, or more commonly a
    :class:`~.orm.session.Session` object that's active in a transaction, won't log any
-   SQL according to the new configuration until a new :class:`.Connection` 
-   is procured (in the case of :class:`~.orm.session.Session`, this is 
+   SQL according to the new configuration until a new :class:`.Connection`
+   is procured (in the case of :class:`~.orm.session.Session`, this is
    after the current transaction ends and a new one begins).

doc/build/core/event.rst

 Events
 ======
 
-SQLAlchemy includes an event API which publishes a wide variety of hooks into 
+SQLAlchemy includes an event API which publishes a wide variety of hooks into
 the internals of both SQLAlchemy Core and ORM.
 
 .. versionadded:: 0.7
 specific types of events, which may specify alternate interfaces for the given event function, or provide
 instructions regarding secondary event targets based on the given target.
 
-The name of an event and the argument signature of a corresponding listener function is derived from 
+The name of an event and the argument signature of a corresponding listener function are derived from
 a class-bound specification method, attached to a marker class that's described in the documentation.
 For example, the documentation for :meth:`.PoolEvents.connect` indicates that the event name is ``"connect"``
 and that a user-defined listener function should receive two positional arguments::
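 
     from sqlalchemy import event
     from sqlalchemy.pool import Pool
 
     # a hedged sketch of such a listener - the body is illustrative only
     def my_on_connect(dbapi_con, connection_record):
         print "new DBAPI connection:", dbapi_con
 
     event.listen(Pool, 'connect', my_on_connect)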

doc/build/core/interfaces.rst

 
 .. module:: sqlalchemy.interfaces
 
-This section describes the class-based core event interface introduced in 
+This section describes the class-based core event interface introduced in
 SQLAlchemy 0.5.  The ORM analogue is described at :ref:`dep_interfaces_orm_toplevel`.
 
 .. deprecated:: 0.7

doc/build/core/internals.rst

 Core Internals
 ==============
 
-Some key internal constructs are listed here.   
+Some key internal constructs are listed here.
 
 .. currentmodule: sqlalchemy
 

doc/build/core/pooling.rst

 .. module:: sqlalchemy.pool
 
 A connection pool is a standard technique used to maintain
-long running connections in memory for efficient re-use, 
+long-running connections in memory for efficient re-use,
 as well as to provide
 management for the total number of connections an application
 might use simultaneously.
 maintain a "pool" of active database connections in memory which are
 reused across requests.
 
-SQLAlchemy includes several connection pool implementations 
+SQLAlchemy includes several connection pool implementations
 which integrate with the :class:`.Engine`.  They can also be used
 directly for applications that want to add pooling to an otherwise
 plain DBAPI approach.
 All SQLAlchemy pool implementations have in common
 that none of them "pre create" connections - all implementations wait
 until first use before creating a connection.   At that point, if
-no additional concurrent checkout requests for more connections 
+no additional concurrent checkout requests for more connections
 are made, no additional connections are created.   This is why it's perfectly
 fine for :func:`.create_engine` to default to using a :class:`.QueuePool`
 of size five without regard to whether or not the application really needs five connections
 queued up - the pool would only grow to that size if the application
-actually used five connections concurrently, in which case the usage of a 
+actually used five connections concurrently, in which case the usage of a
 small pool is an entirely appropriate default behavior.
 
 Switching Pool Implementations
 
     from sqlalchemy.pool import NullPool
     engine = create_engine(
-              'postgresql+psycopg2://scott:tiger@localhost/test', 
+              'postgresql+psycopg2://scott:tiger@localhost/test',
               poolclass=NullPool)
 
 Using a Custom Connection Function
 ----------------------------------
 
-All :class:`.Pool` classes accept an argument ``creator`` which is 
+All :class:`.Pool` classes accept an argument ``creator`` which is
 a callable that creates a new connection.  :func:`.create_engine`
 accepts this function to pass onto the pool via an argument of
 the same name::
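 
     # a hedged sketch: any callable that returns a DBAPI connection
     # can serve as the creator
     import sqlite3
 
     from sqlalchemy import create_engine
 
     def get_conn():
         return sqlite3.connect(':memory:')
 
     engine = create_engine('sqlite://', creator=get_conn)
 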
     cursor.execute("select foo")
 
 The purpose of the transparent proxy is to intercept the ``close()`` call,
-such that instead of the DBAPI connection being closed, its returned to the 
+such that instead of the DBAPI connection being closed, it's returned to the
 pool::
 
     # "close" the connection.  Returns
     # it to the pool.
     conn.close()
 
-The proxy also returns its contained DBAPI connection to the pool 
+The proxy also returns its contained DBAPI connection to the pool
 when it is garbage collected,
 though it's not deterministic in Python that this occurs immediately (though
 it is typical with CPython).
 -----------
 
 Connection pools support an event interface that allows hooks to execute
-upon first connect, upon each new connection, and upon checkout and 
+upon first connect, upon each new connection, and upon checkout and
 checkin of connections.   See :class:`.PoolEvents` for details.
 
 Dealing with Disconnects
 ------------------------
 
-The connection pool has the ability to refresh individual connections as well as 
+The connection pool has the ability to refresh individual connections as well as
 its entire set of connections, setting the previously pooled connections as
-"invalid".   A common use case is allow the connection pool to gracefully recover 
+"invalid".   A common use case is allow the connection pool to gracefully recover
 when the database server has been restarted, and all previously established connections
 are no longer functional.   There are two approaches to this.
 
 Disconnect Handling - Optimistic
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-The most common approach is to let SQLAlchemy handle disconnects as they 
-occur, at which point the pool is refreshed.   This assumes the :class:`.Pool` 
-is used in conjunction with a :class:`.Engine`.  The :class:`.Engine` has 
+The most common approach is to let SQLAlchemy handle disconnects as they
+occur, at which point the pool is refreshed.   This assumes the :class:`.Pool`
+is used in conjunction with an :class:`.Engine`.  The :class:`.Engine` has
 logic which can detect disconnection events and refresh the pool automatically.
 
 When the :class:`.Connection` attempts to use a DBAPI connection, and an
         if e.connection_invalidated:
             print "Connection was invalidated!"
 
-    # after the invalidate event, a new connection 
+    # after the invalidate event, a new connection
     # starts with a new Pool
     c = e.connect()
     c.execute("SELECT * FROM table")
 
 The above example illustrates that no special intervention is needed, the pool
 continues normally after a disconnection event is detected.   However, an exception is
-raised.   In a typical web application using an ORM Session, the above condition would 
+raised.   In a typical web application using an ORM Session, the above condition would
 correspond to a single request failing with a 500 error, then the web application
 continuing normally beyond that.   Hence the approach is "optimistic" in that frequent
 database restarts are not anticipated.
 Setting Pool Recycle
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-An additional setting that can augment the "optimistic" approach is to set the 
+An additional setting that can augment the "optimistic" approach is to set the
 pool recycle parameter.   This parameter prevents the pool from using a particular
 connection that has passed a certain age, and is appropriate for database backends
 such as MySQL that automatically close connections that have been stale after a particular
 Disconnect Handling - Pessimistic
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-At the expense of some extra SQL emitted for each connection checked out from the pool, 
-a "ping" operation established by a checkout event handler 
+At the expense of some extra SQL emitted for each connection checked out from the pool,
+a "ping" operation established by a checkout event handler
 can detect an invalid connection before it's used::
 
     from sqlalchemy import exc
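     from sqlalchemy import event
     from sqlalchemy.pool import Pool
 
     # a hedged sketch of the checkout "ping" handler described above
     @event.listens_for(Pool, "checkout")
     def ping_connection(dbapi_con, con_record, con_proxy):
         cursor = dbapi_con.cursor()
         try:
             cursor.execute("SELECT 1")
         except:
             # tell the pool to discard this connection and try again
             raise exc.DisconnectionError()
         cursor.close()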
 
 Above, the :class:`.Pool` object specifically catches :class:`~sqlalchemy.exc.DisconnectionError` and attempts
 to create a new DBAPI connection, up to three times, before giving up and then raising
-:class:`~sqlalchemy.exc.InvalidRequestError`, failing the connection.   This recipe will ensure 
+:class:`~sqlalchemy.exc.InvalidRequestError`, failing the connection.   This recipe will ensure
 that a new :class:`.Connection` will succeed even if connections
 in the pool have gone stale, provided that the database server is actually running.   The expense
 is that of an additional execution performed per checkout.   When using the ORM :class:`.Session`,
 above also works with straight connection pool usage, that is, even if no :class:`.Engine` were
 involved.
 
-The event handler can be tested using a script like the following, restarting the database 
+The event handler can be tested using a script like the following, restarting the database
 server at the point at which the script pauses for input::
 
     from sqlalchemy import create_engine

doc/build/core/schema.rst

 constructs, the ability to alter those constructs, usually via the ALTER statement
 as well as other database-specific constructs, is outside of the scope of SQLAlchemy
 itself.  While it's easy enough to emit ALTER statements and similar by hand,
-such as by passing a string to :meth:`.Connection.execute` or by using the 
-:class:`.DDL` construct, it's a common practice to automate the maintenance of 
+such as by passing a string to :meth:`.Connection.execute` or by using the
+:class:`.DDL` construct, it's a common practice to automate the maintenance of
 database schemas in relation to application code using schema migration tools.
 
 There are two major migration tools available for SQLAlchemy:
 * `Alembic <http://alembic.readthedocs.org>`_ - Written by the author of SQLAlchemy,
   Alembic features a highly customizable environment and a minimalistic usage pattern,
   supporting such features as transactional DDL, automatic generation of "candidate"
-  migrations, an "offline" mode which generates SQL scripts, and support for branch 
+  migrations, an "offline" mode which generates SQL scripts, and support for branch
   resolution.
 * `SQLAlchemy-Migrate <http://code.google.com/p/sqlalchemy-migrate/>`_ - The original
   migration tool for SQLAlchemy, SQLAlchemy-Migrate is widely used and continues
-  under active development.   SQLAlchemy-Migrate includes features such as 
-  SQL script generation, ORM class generation, ORM model comparison, and extensive 
+  under active development.   SQLAlchemy-Migrate includes features such as
+  SQL script generation, ORM class generation, ORM model comparison, and extensive
   support for SQLite migrations.
 
 .. _metadata_binding:
 The :class:`.Table` is the SQLAlchemy Core construct that allows one to define
 table metadata, which among other things can be used by the SQLAlchemy ORM
 as a target to map a class.  The :ref:`Declarative <declarative_toplevel>`
-extension allows the :class:`.Table` object to be created automatically, given 
+extension allows the :class:`.Table` object to be created automatically, given
 the contents of the table primarily as a mapping of :class:`.Column` objects.
 
 To apply table-level constraint objects such as :class:`.ForeignKeyConstraint`
-to a table defined using Declarative, use the ``__table_args__`` attribute, 
+to a table defined using Declarative, use the ``__table_args__`` attribute,
 described at :ref:`declarative_table_args`.
 
 Constraints API
     CREATE INDEX idx_col34 ON mytable (col3, col4){stop}
 
 Note in the example above, the :class:`.Index` construct is created
-externally to the table which it corresponds, using :class:`.Column` 
+externally to the table to which it corresponds, using :class:`.Column`
 objects directly.  :class:`.Index` also supports
-"inline" definition inside the :class:`.Table`, using string names to 
+"inline" definition inside the :class:`.Table`, using string names to
 identify columns::
 
     meta = MetaData()
 
     event.listen(
         users,
-        "after_create", 
+        "after_create",
         AddConstraint(constraint)
     )
     event.listen(
     DROP TABLE users{stop}
 
 The real usefulness of the above becomes clearer once we illustrate the :meth:`.DDLEvent.execute_if`
-method.  This method returns a modified form of the DDL callable which will 
+method.  This method returns a modified form of the DDL callable which will
 filter on criteria before responding to a received event.   It accepts a
 parameter ``dialect``, which is the string name of a dialect or a tuple of such,
 which will limit the execution of the item to just those dialects.  It also
-accepts a ``callable_`` parameter which may reference a Python callable which will 
+accepts a ``callable_`` parameter which may reference a Python callable which will
 be invoked upon event reception, returning ``True`` or ``False`` indicating if
 the event should proceed.
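 
 A hedged sketch, reusing ``users``, ``constraint`` and ``AddConstraint`` from
 the listing shown earlier::
 
     event.listen(
         users,
         "after_create",
         # only emit the constraint when running on Postgresql
         AddConstraint(constraint).execute_if(dialect='postgresql')
     )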
 

doc/build/core/tutorial.rst

     ()
     COMMIT
 
-.. note:: 
+.. note::
 
     Users familiar with the syntax of CREATE TABLE may notice that the
     VARCHAR columns were generated without a length; on SQLite and Postgresql,
     ('jack@msn.com', 'jack@yahoo.com')
     {stop}[(1, u'jack', u'Jack Jones')]
 
-Note that the :class:`.Alias` construct generated the names ``addresses_1`` and 
+Note that the :class:`.Alias` construct generated the names ``addresses_1`` and
 ``addresses_2`` in the final SQL result.  The generation of these names is determined
 by the position of the construct within the statement.   If we created a query using
-only the second ``a2`` alias, the name would come out as ``addresses_1``.  The 
-generation of the names is also *deterministic*, meaning the same SQLAlchemy 
-statement construct will produce the identical SQL string each time it is 
+only the second ``a2`` alias, the name would come out as ``addresses_1``.  The
+generation of the names is also *deterministic*, meaning the same SQLAlchemy
+statement construct will produce the identical SQL string each time it is
 rendered for a particular dialect.
 
 Since on the outside, we refer to the alias using the :class:`.Alias` construct
 Transforming a Statement
 ------------------------
 
-We've seen how methods like :meth:`.Select.where` and :meth:`._SelectBase.order_by` are 
+We've seen how methods like :meth:`.Select.where` and :meth:`._SelectBase.order_by` are
 part of the so-called *Generative* family of methods on the :func:`.select` construct,
 where one :func:`.select` copies itself to return a new one with modifications.
 SQL constructs also support another form of generative behavior which is
 
     >>> s = select([users.c.id, func.row_number().over(order_by=users.c.name)])
     >>> print s # doctest: +NORMALIZE_WHITESPACE
-    SELECT users.id, row_number() OVER (ORDER BY users.name) AS anon_1 
+    SELECT users.id, row_number() OVER (ORDER BY users.name) AS anon_1
     FROM users
 
 Unions and Other Set Operations
     {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>
 
     >>> # with binds, you can also update many rows at once
-    {sql}>>> conn.execute(u, 
+    {sql}>>> conn.execute(u,
     ...     {'oldname':'jack', 'newname':'ed'},
     ...     {'oldname':'wendy', 'newname':'mary'},
     ...     {'oldname':'jim', 'newname':'jake'},
 which updates one table at a time, but can reference additional tables in an additional
 "FROM" clause that can then be referenced in the WHERE clause directly.   On MySQL,
 multiple tables can be embedded into a single UPDATE statement separated by a comma.
-The SQLAlchemy :func:`.update` construct supports both of these modes 
+The SQLAlchemy :func:`.update` construct supports both of these modes
 implicitly, by specifying multiple tables in the WHERE clause::
 
     stmt = users.update().\
 
 The resulting SQL from the above statement would render as::
 
-    UPDATE users SET name=:name FROM addresses 
-    WHERE users.id = addresses.id AND 
+    UPDATE users SET name=:name FROM addresses
+    WHERE users.id = addresses.id AND
     addresses.email_address LIKE :email_address_1 || '%%'
 
 When using MySQL, columns from each table can be assigned to in the
 
     stmt = users.update().\
             values({
-                users.c.name:'ed wood', 
+                users.c.name:'ed wood',
                 addresses.c.email_address:'ed.wood@foo.com'
             }).\
             where(users.c.id==addresses.c.id).\
 
 The tables are referenced explicitly in the SET clause::
 
-    UPDATE users, addresses SET addresses.email_address=%s, 
-            users.name=%s WHERE users.id = addresses.id 
+    UPDATE users, addresses SET addresses.email_address=%s,
+            users.name=%s WHERE users.id = addresses.id
             AND addresses.email_address LIKE concat(%s, '%%')
 
-SQLAlchemy doesn't do anything special when these constructs are used on 
+SQLAlchemy doesn't do anything special when these constructs are used on
 a non-supporting database.  The ``UPDATE FROM`` syntax is generated by default
 when multiple tables are present, and the statement will be rejected
 by the database if this syntax is not supported.

doc/build/core/types.rst

 
 Each dialect provides the full set of typenames supported by
 that backend within its `__all__` collection, so that a simple
-`import *` or similar will import all supported types as 
+`import *` or similar will import all supported types as
 implemented for that backend::
 
     from sqlalchemy.dialects.postgresql import *
                Column('inetaddr', INET)
     )
 
-Where above, the INTEGER and VARCHAR types are ultimately from 
+Where above, the INTEGER and VARCHAR types are ultimately from
 sqlalchemy.types, and INET is specific to the Postgresql dialect.
 
 Some dialect level types have the same name as the SQL standard type,
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 A frequent need is to force the "string" version of a type, that is
-the one rendered in a CREATE TABLE statement or other SQL function 
+the one rendered in a CREATE TABLE statement or other SQL function
 like CAST, to be changed.   For example, an application may want
 to force the rendering of ``BINARY`` for all platforms
-except for one, in which is wants ``BLOB`` to be rendered.  Usage 
+except for one, in which it wants ``BLOB`` to be rendered.  Usage
 of an existing generic type, in this case :class:`.LargeBinary`, is
 preferred for most use cases.  But to control
 types more accurately, a compilation directive that is per-dialect
         return "BLOB"
 
 The above code allows the usage of :class:`.types.BINARY`, which
-will produce the string ``BINARY`` against all backends except SQLite, 
+will produce the string ``BINARY`` against all backends except SQLite,
 in which case it will produce ``BLOB``.
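 
 A hedged reconstruction of such a directive might look like::
 
     from sqlalchemy.ext.compiler import compiles
     from sqlalchemy.types import BINARY
 
     @compiles(BINARY, "sqlite")
     def compile_binary_sqlite(type_, compiler, **kw):
         # render BLOB in place of BINARY on SQLite only
         return "BLOB"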
 
-See the section :ref:`type_compilation_extension`, a subsection of 
+See the section :ref:`type_compilation_extension`, a subsection of
 :ref:`sqlalchemy.ext.compiler_toplevel`, for additional examples.
 
 Augmenting Existing Types
 is that it is intended to deal *only* with Python ``unicode`` objects
 on the Python side, meaning values passed to it as bind parameters
 must be of the form ``u'some string'`` if using Python 2 and not 3.
-The encoding/decoding functions it performs are only to suit what the 
+The encoding/decoding functions it performs are only to suit what the
 DBAPI in use requires, and are primarily a private implementation detail.
 
-The use case of a type that can safely receive Python bytestrings, 
+The use case of a type that can safely receive Python bytestrings,
 that is strings that contain non-ASCII characters and are not ``u''``
 objects in Python 2, can be achieved using a :class:`.TypeDecorator`
 which coerces as needed::
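 
     from sqlalchemy.types import TypeDecorator, Unicode
 
     # a hedged sketch of such a coercing type
     class CoerceUTF8(TypeDecorator):
         """Safely coerce Python bytestrings to Unicode
         before passing off to the database."""
 
         impl = Unicode
 
         def process_bind_param(self, value, dialect):
             if isinstance(value, str):
                 value = value.decode('utf-8')
             return value
 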
 Backend-agnostic GUID Type
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Receives and returns Python uuid() objects.  Uses the PG UUID type 
+Receives and returns Python uuid() objects.  Uses the PG UUID type
 when using Postgresql, CHAR(32) on other backends, storing them
-in stringified hex format.   Can be modified to store 
+in stringified hex format.   Can be modified to store
 binary in CHAR(16) if desired::
 
     from sqlalchemy.types import TypeDecorator, CHAR
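     from sqlalchemy.dialects.postgresql import UUID
     import uuid
 
     # a hedged, abbreviated sketch of the recipe described above
     class GUID(TypeDecorator):
         impl = CHAR
 
         def load_dialect_impl(self, dialect):
             if dialect.name == 'postgresql':
                 return dialect.type_descriptor(UUID())
             else:
                 return dialect.type_descriptor(CHAR(32))
 
         def process_bind_param(self, value, dialect):
             if value is None:
                 return value
             elif dialect.name == 'postgresql':
                 return str(value)
             else:
                 # store as stringified hex
                 return "%.32x" % uuid.UUID(str(value)).int
 
         def process_result_value(self, value, dialect):
             if value is None:
                 return value
             return uuid.UUID(value)
 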
 ~~~~~~~~~~~~~~~~~~
 
 The :class:`.UserDefinedType` class is provided as a simple base class
-for defining entirely new database types.   Use this to represent native 
+for defining entirely new database types.   Use this to represent native
 database types not known by SQLAlchemy.   If only Python translation behavior
 is needed, use :class:`.TypeDecorator` instead.
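 
 A minimal hedged sketch (the type name is hypothetical)::
 
     from sqlalchemy import types
 
     class MyNativeType(types.UserDefinedType):
         def get_col_spec(self):
             # rendered into CREATE TABLE
             return "MYNATIVETYPE"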
 

doc/build/dialects/drizzle.rst

             DECIMAL, DOUBLE, ENUM, FLOAT, INT, INTEGER,
             NUMERIC, TEXT, TIME, TIMESTAMP, VARBINARY, VARCHAR
 
-Types which are specific to Drizzle, or have Drizzle-specific 
+Types which are specific to Drizzle, or have Drizzle-specific
 construction arguments, are as follows:
 
 .. currentmodule:: sqlalchemy.dialects.drizzle

doc/build/dialects/index.rst

 ========
 
 The **dialect** is the system SQLAlchemy uses to communicate with various types of DBAPIs and databases.
-A compatibility chart of supported backends can be found at :ref:`supported_dbapis`.  The sections that 
+A compatibility chart of supported backends can be found at :ref:`supported_dbapis`.  The sections that
 follow contain reference documentation and notes specific to the usage of each backend, as well as notes
 for the various DBAPIs.
 

doc/build/dialects/mssql.rst

         SMALLINT, SMALLMONEY, SQL_VARIANT, TEXT, TIME, \
         TIMESTAMP, TINYINT, UNIQUEIDENTIFIER, VARBINARY, VARCHAR
 
-Types which are specific to SQL Server, or have SQL Server-specific 
+Types which are specific to SQL Server, or have SQL Server-specific
 construction arguments, are as follows:
 
 .. currentmodule:: sqlalchemy.dialects.mssql

doc/build/dialects/mysql.rst

             NUMERIC, NVARCHAR, REAL, SET, SMALLINT, TEXT, TIME, TIMESTAMP, \
             TINYBLOB, TINYINT, TINYTEXT, VARBINARY, VARCHAR, YEAR
 
-Types which are specific to MySQL, or have MySQL-specific 
+Types which are specific to MySQL, or have MySQL-specific
 construction arguments, are as follows:
 
 .. currentmodule:: sqlalchemy.dialects.mysql

doc/build/dialects/oracle.rst

                 NUMBER, NVARCHAR, NVARCHAR2, RAW, TIMESTAMP, VARCHAR, \
                 VARCHAR2
 
-Types which are specific to Oracle, or have Oracle-specific 
+Types which are specific to Oracle, or have Oracle-specific
 construction arguments, are as follows:
 
 .. currentmodule:: sqlalchemy.dialects.oracle

doc/build/dialects/postgresql.rst

         MACADDR, NUMERIC, REAL, SMALLINT, TEXT, TIME, TIMESTAMP, \
         UUID, VARCHAR
 
-Types which are specific to PostgreSQL, or have PostgreSQL-specific 
+Types which are specific to PostgreSQL, or have PostgreSQL-specific
 construction arguments, are as follows:
 
 .. currentmodule:: sqlalchemy.dialects.postgresql

doc/build/index.rst

 
 A high level view and getting set up.
 
-:ref:`Overview <overview>` | 
+:ref:`Overview <overview>` |
 :ref:`Installation Guide <installation>` |
 :ref:`Migration from 0.6 <migration>`
 
 ===============
 
 The breadth of SQLAlchemy's SQL rendering engine, DBAPI
-integration, transaction integration, and schema description services 
+integration, transaction integration, and schema description services
 is documented here.  In contrast to the ORM's domain-centric mode of usage, the SQL Expression Language provides a schema-centric usage paradigm.
 
 * **Read this first:**
   :ref:`Database Introspection (Reflection) <metadata_reflection>` |
   :ref:`Insert/Update Defaults <metadata_defaults>` |
   :ref:`Constraints and Indexes <metadata_constraints>` |
-  :ref:`Using Data Definition Language (DDL) <metadata_ddl>` 
+  :ref:`Using Data Definition Language (DDL) <metadata_ddl>`
 
 * **Datatypes:**
-  :ref:`Overview <types_toplevel>` | 
-  :ref:`Generic Types <types_generic>` | 
+  :ref:`Overview <types_toplevel>` |
+  :ref:`Generic Types <types_generic>` |
   :ref:`SQL Standard Types <types_sqlstandard>` |
   :ref:`Vendor Specific Types <types_vendor>` |
   :ref:`Building Custom Types <types_custom>` |
-  :ref:`API <types_api>` 
+  :ref:`API <types_api>`
 
 * **Extending the Core:**
   :doc:`SQLAlchemy Events <core/event>` |

doc/build/intro.rst

 * **Plain Python Distutils** - SQLAlchemy can be installed with a clean
   Python install using the services provided via `Python Distutils <http://docs.python.org/distutils/>`_,
   using the ``setup.py`` script. The C extensions as well as Python 3 builds are supported.
-* **Standard Setuptools** - When using `setuptools <http://pypi.python.org/pypi/setuptools/>`_, 
+* **Standard Setuptools** - When using `setuptools <http://pypi.python.org/pypi/setuptools/>`_,
   SQLAlchemy can be installed via ``setup.py`` or ``easy_install``, and the C
   extensions are supported.  setuptools is not supported on Python 3 at the time
 of this writing.
-* **Distribute** - With `distribute <http://pypi.python.org/pypi/distribute/>`_, 
+* **Distribute** - With `distribute <http://pypi.python.org/pypi/distribute/>`_,
   SQLAlchemy can be installed via ``setup.py`` or ``easy_install``, and the C
   extensions as well as Python 3 builds are supported.
 * **pip** - `pip <http://pypi.python.org/pypi/pip/>`_ is an installer that
   rides on top of ``setuptools`` or ``distribute``, replacing the usage
   of ``easy_install``.  It is often preferred for its simpler mode of usage.
 
-.. note:: 
+.. note::
 
    It is strongly recommended that either ``setuptools`` or ``distribute`` be installed.
    Python's built-in ``distutils`` lacks many widely used installation features.
 Install via easy_install or pip
 -------------------------------
 
-When ``easy_install`` or ``pip`` is available, the distribution can be 
+When ``easy_install`` or ``pip`` is available, the distribution can be
 downloaded from PyPI and installed in one step::
 
     easy_install SQLAlchemy
 
     python setup.py --without-cextensions install
 
-.. note:: 
+.. note::
 
    The ``--without-cextensions`` flag is available **only** if ``setuptools``
    or ``distribute`` is installed.  It is not available on a plain Python ``distutils``

doc/build/orm/collections.rst

 
     jack.posts.append(Post('new post'))
 
-Since the read side of the dynamic relationship always queries the 
-database, changes to the underlying collection will not be visible 
-until the data has been flushed.  However, as long as "autoflush" is 
-enabled on the :class:`.Session` in use, this will occur 
-automatically each time the collection is about to emit a 
+Since the read side of the dynamic relationship always queries the
+database, changes to the underlying collection will not be visible
+until the data has been flushed.  However, as long as "autoflush" is
+enabled on the :class:`.Session` in use, this will occur
+automatically each time the collection is about to emit a
 query.
 
 To place a dynamic relationship on a backref, use the :func:`~.orm.backref`
     class Post(Base):
         __table__ = posts_table
 
-        user = relationship(User, 
+        user = relationship(User,
                     backref=backref('posts', lazy='dynamic')
                 )
 
 Note that eager/lazy loading options cannot be used in conjunction with dynamic relationships at this time.
 
-.. note:: 
+.. note::
 
    The :func:`~.orm.dynamic_loader` function is essentially the same
    as :func:`~.orm.relationship` with the ``lazy='dynamic'`` argument specified.
 Setting Noload
 ---------------
 
-A "noload" relationship never loads from the database, even when 
+A "noload" relationship never loads from the database, even when
 accessed.   It is configured using ``lazy='noload'``::
 
     class MyClass(Base):
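         __tablename__ = 'some_table'
         id = Column(Integer, primary_key=True)
 
         # a hedged completion of the elided example - the collection
         # is configured with lazy='noload'
         children = relationship("MyOtherClass", lazy='noload')
 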
     class MyClass(Base):
         __tablename__ = 'mytable'
         id = Column(Integer, primary_key=True)
-        children = relationship("MyOtherClass", 
-                        cascade="all, delete-orphan", 
+        children = relationship("MyOtherClass",
+                        cascade="all, delete-orphan",
                         passive_deletes=True)
 
     class MyOtherClass(Base):
         __tablename__ = 'myothertable'
         id = Column(Integer, primary_key=True)
-        parent_id = Column(Integer, 
+        parent_id = Column(Integer,
                     ForeignKey('mytable.id', ondelete='CASCADE')
                         )
 
 Dictionary Collections
 -----------------------
 
-A little extra detail is needed when using a dictionary as a collection. 
+A little extra detail is needed when using a dictionary as a collection.
 This is because objects are always loaded from the database as lists, and a key-generation
 strategy must be available to populate the dictionary correctly.  The
 :func:`.attribute_mapped_collection` function is by far the most common way
     class Item(Base):
         __tablename__ = 'item'
         id = Column(Integer, primary_key=True)
-        notes = relationship("Note", 
-                    collection_class=attribute_mapped_collection('keyword'), 
+        notes = relationship("Note",
+                    collection_class=attribute_mapped_collection('keyword'),
                     cascade="all, delete-orphan")
 
     class Note(Base):
     >>> item.notes.items()
     {'a': <__main__.Note object at 0x2eaaf0>}
 
-:func:`.attribute_mapped_collection` will ensure that 
+:func:`.attribute_mapped_collection` will ensure that
 the ``.keyword`` attribute of each ``Note`` complies with the key in the
 dictionary.   For example, when assigning to ``Item.notes``, the dictionary
 key we supply must match that of the actual ``Note`` object::
 
     item = Item()
     item.notes = {
-                'a': Note('a', 'atext'), 
+                'a': Note('a', 'atext'),
                 'b': Note('b', 'btext')
             }
 
 The attribute which :func:`.attribute_mapped_collection` uses as a key
 does not need to be mapped at all!  Using a regular Python ``@property`` allows virtually
-any detail or combination of details about the object to be used as the key, as 
+any detail or combination of details about the object to be used as the key, as
 below when we establish it as a tuple of ``Note.keyword`` and the first ten letters
 of the ``Note.text`` field::
 
     class Item(Base):
         __tablename__ = 'item'
         id = Column(Integer, primary_key=True)
-        notes = relationship("Note", 
-                    collection_class=attribute_mapped_collection('note_key'), 
+        notes = relationship("Note",
+                    collection_class=attribute_mapped_collection('note_key'),
                     backref="item",
                     cascade="all, delete-orphan")
 
     class Item(Base):
         __tablename__ = 'item'
         id = Column(Integer, primary_key=True)
-        notes = relationship("Note", 
-                    collection_class=column_mapped_collection(Note.__table__.c.keyword), 
+        notes = relationship("Note",
+                    collection_class=column_mapped_collection(Note.__table__.c.keyword),
                     cascade="all, delete-orphan")
 
 as well as :func:`.mapped_collection` which is passed any callable function.
     class Item(Base):
         __tablename__ = 'item'
         id = Column(Integer, primary_key=True)
-        notes = relationship("Note", 
-                    collection_class=mapped_collection(lambda note: note.text[0:10]), 
+        notes = relationship("Note",
+                    collection_class=mapped_collection(lambda note: note.text[0:10]),
                     cascade="all, delete-orphan")
 
 Dictionary mappings are often combined with the "Association Proxy" extension to produce
-streamlined dictionary views.  See :ref:`proxying_dictionaries` and :ref:`composite_association_proxy` 
+streamlined dictionary views.  See :ref:`proxying_dictionaries` and :ref:`composite_association_proxy`
 for examples.
 
 .. autofunction:: attribute_mapped_collection
 
    For the first use case, the :func:`.orm.validates` decorator is by far
    the simplest way to intercept incoming values in all cases for the purposes
-   of validation and simple marshaling.  See :ref:`simple_validators` 
+   of validation and simple marshaling.  See :ref:`simple_validators`
    for an example of this.
 
    For the second use case, the :ref:`associationproxy_toplevel` extension is a
    unaffected and avoids the need to carefully tailor collection behavior on a
    method-by-method basis.
 
-   Customized collections are useful when the collection needs to 
-   have special behaviors upon access or mutation operations that can't 
+   Customized collections are useful when the collection needs to
+   have special behaviors upon access or mutation operations that can't
    otherwise be modeled externally to the collection.   They can of course
    be combined with the above two approaches.
 
             MappedCollection.__init__(self, keyfunc=lambda node: node.name)
             OrderedDict.__init__(self, *args, **kw)
 
-When subclassing :class:`.MappedCollection`, user-defined versions 
+When subclassing :class:`.MappedCollection`, user-defined versions
 of ``__setitem__()`` or ``__delitem__()`` should be decorated
 with :meth:`.collection.internally_instrumented`, **if** they call down
 to those same methods on :class:`.MappedCollection`.  This is because the methods
                                         collection
 
     class MyMappedCollection(MappedCollection):
-        """Use @internally_instrumented when your methods 
+        """Use @internally_instrumented when your methods
         call down to already-instrumented methods.
 
         """
 
 .. note::
 
-   Due to a bug in MappedCollection prior to version 0.7.6, this 
+   Due to a bug in MappedCollection prior to version 0.7.6, this
    workaround usually needs to be called before a custom subclass
    of :class:`.MappedCollection` which uses :meth:`.collection.internally_instrumented`
    can be used::
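 
     # a hedged sketch of the workaround referenced above; note that
     # _instrument_class is a private API
     from sqlalchemy.orm.collections import _instrument_class, MappedCollection
     _instrument_class(MappedCollection)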

doc/build/orm/events.rst

     The event supersedes the previous system of "extension" classes.
 
 For an introduction to the event API, see :ref:`event_toplevel`.  Non-ORM events
-such as those regarding connections and low-level statement execution are described in 
+such as those regarding connections and low-level statement execution are described in
 :ref:`core_event_toplevel`.
 
 Attribute Events

doc/build/orm/extensions/associationproxy.rst

 
 ``associationproxy`` is used to create a read/write view of a
 target attribute across a relationship.  It essentially conceals
-the usage of a "middle" attribute between two endpoints, and 
+the usage of a "middle" attribute between two endpoints, and
 can be used to cherry-pick fields from a collection of
 related objects or to reduce the verbosity of using the association
 object pattern.   Applied creatively, the association proxy allows
-the construction of sophisticated collections and dictionary 
+the construction of sophisticated collections and dictionary
 views of virtually any geometry, persisted to the database using
 standard, transparently configured relational patterns.
 
 
 The :class:`.AssociationProxy` object produced by the :func:`.association_proxy` function
 is an instance of a `Python descriptor <http://docs.python.org/howto/descriptor.html>`_.
-It is always declared with the user-defined class being mapped, regardless of 
+It is always declared with the user-defined class being mapped, regardless of
 whether Declarative or classical mappings via the :func:`.mapper` function are used.
 
-The proxy functions by operating upon the underlying mapped attribute 
+The proxy functions by operating upon the underlying mapped attribute
 or collection in response to operations, and changes made via the proxy are immediately
 apparent in the mapped attribute, as well as vice versa.   The underlying
 attribute remains fully accessible.
 The example works here because we have designed the constructor for ``Keyword``
 to accept a single positional argument, ``keyword``.   For those cases where a
 single-argument constructor isn't feasible, the association proxy's creational
-behavior can be customized using the ``creator`` argument, which references a 
+behavior can be customized using the ``creator`` argument, which references a
 callable (i.e. Python function) that will produce a new object instance given the
 singular argument.  Below we illustrate this using a lambda as is typical::
 
         # ...
 
         # use Keyword(keyword=kw) on append() events
-        keywords = association_proxy('kw', 'keyword', 
+        keywords = association_proxy('kw', 'keyword',
                         creator=lambda kw: Keyword(keyword=kw))
 
 The ``creator`` function accepts a single argument in the case of a list-
 regular use.
 
 Suppose our ``userkeywords`` table above had additional columns
-which we'd like to map explicitly, but in most cases we don't 
+which we'd like to map explicitly, but in most cases we don't
 require direct access to these attributes.  Below, we illustrate
-a new mapping which introduces the ``UserKeyword`` class, which 
+a new mapping which introduces the ``UserKeyword`` class, which
 is mapped to the ``userkeywords`` table illustrated earlier.
 This class adds an additional column ``special_key``, a value which
 we occasionally want to access, but not in the usual case.   We
 create an association proxy on the ``User`` class called
 ``keywords``, which will bridge the gap from the ``user_keywords``
-collection of ``User`` to the ``.keyword`` attribute present on each 
+collection of ``User`` to the ``.keyword`` attribute present on each
 ``UserKeyword``::
 
     from sqlalchemy import Column, Integer, String, ForeignKey
         special_key = Column(String(50))
 
         # bidirectional attribute/collection of "user"/"user_keywords"
-        user = relationship(User, 
-                    backref=backref("user_keywords", 
+        user = relationship(User,
+                    backref=backref("user_keywords",
                                     cascade="all, delete-orphan")
                 )
 
         def __repr__(self):
             return 'Keyword(%s)' % repr(self.keyword)
 
-With the above configuration, we can operate upon the ``.keywords`` 
+With the above configuration, we can operate upon the ``.keywords``
 collection of each ``User`` object, and the usage of ``UserKeyword``
 is concealed::
 
     >>> user = User('log')
     >>> for kw in (Keyword('new_from_blammo'), Keyword('its_big')):
     ...     user.keywords.append(kw)
-    ... 
+    ...
     >>> print(user.keywords)
     [Keyword('new_from_blammo'), Keyword('its_big')]
 
 The ``UserKeyword`` association object has two attributes here which are populated;
 the ``.keyword`` attribute is populated directly as a result of passing
 the ``Keyword`` object as the first argument.   The ``.user`` argument is then
-assigned as the ``UserKeyword`` object is appended to the ``User.user_keywords`` 
+assigned as the ``UserKeyword`` object is appended to the ``User.user_keywords``
 collection, where the bidirectional relationship configured between ``User.user_keywords``
 and ``UserKeyword.user`` results in a population of the ``UserKeyword.user`` attribute.
 The ``special_key`` argument above is left at its default value of ``None``.
 
-For those cases where we do want ``special_key`` to have a value, we 
+For those cases where we do want ``special_key`` to have a value, we
 create the ``UserKeyword`` object explicitly.  Below we assign all three
 attributes, where the assignment of ``.user`` has the effect of the ``UserKeyword``
 being appended to the ``User.user_keywords`` collection::
 
 The association proxy can proxy to dictionary based collections as well.   SQLAlchemy
 mappings usually use the :func:`.attribute_mapped_collection` collection type to
-create dictionary collections, as well as the extended techniques described in 
+create dictionary collections, as well as the extended techniques described in
 :ref:`dictionary_collections`.
 
 The association proxy adjusts its behavior when it detects the usage of a
 always, this creation function defaults to the constructor of the intermediary
 class, and can be customized using the ``creator`` argument.
 
-Below, we modify our ``UserKeyword`` example such that the ``User.user_keywords`` 
+Below, we modify our ``UserKeyword`` example such that the ``User.user_keywords``
 collection will now be mapped using a dictionary, where the ``UserKeyword.special_key``
 argument will be used as the key for the dictionary.   We then apply a ``creator``
 argument to the ``User.keywords`` proxy so that these values are assigned appropriately
         # proxy to 'user_keywords', instantiating UserKeyword
         # assigning the new key to 'special_key', values to
         # 'keyword'.
-        keywords = association_proxy('user_keywords', 'keyword', 
+        keywords = association_proxy('user_keywords', 'keyword',
                         creator=lambda k, v:
                                     UserKeyword(special_key=k, keyword=v)
                     )
         # bidirectional user/user_keywords relationships, mapping
         # user_keywords with a dictionary against "special_key" as key.
         user = relationship(User, backref=backref(
-                        "user_keywords", 
+                        "user_keywords",
                         collection_class=attribute_mapped_collection("special_key"),
                         cascade="all, delete-orphan"
                         )
 
 Given our previous examples of proxying from relationship to scalar
 attribute, proxying across an association object, and proxying dictionaries,
-we can combine all three techniques together to give ``User`` 
-a ``keywords`` dictionary that deals strictly with the string value 
+we can combine all three techniques together to give ``User``
+a ``keywords`` dictionary that deals strictly with the string value
 of ``special_key`` mapped to the string ``keyword``.  Both the ``UserKeyword``
 and ``Keyword`` classes are entirely concealed.  This is achieved by building
 an association proxy on ``User`` that refers to an association proxy
         id = Column(Integer, primary_key=True)
         name = Column(String(64))
 
-        # the same 'user_keywords'->'keyword' proxy as in 
+        # the same 'user_keywords'->'keyword' proxy as in
         # the basic dictionary example
         keywords = association_proxy(
-                    'user_keywords', 
-                    'keyword', 
+                    'user_keywords',
+                    'keyword',
                     creator=lambda k, v:
                                 UserKeyword(special_key=k, keyword=v)
                     )
     class UserKeyword(Base):
         __tablename__ = 'user_keyword'
         user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
-        keyword_id = Column(Integer, ForeignKey('keyword.id'), 
+        keyword_id = Column(Integer, ForeignKey('keyword.id'),
                                                         primary_key=True)
         special_key = Column(String)
         user = relationship(User, backref=backref(
-                "user_keywords", 
+                "user_keywords",
                 collection_class=attribute_mapped_collection("special_key"),
                 cascade="all, delete-orphan"
                 )
         # 'kw'
         kw = relationship("Keyword")
 
-        # 'keyword' is changed to be a proxy to the 
+        # 'keyword' is changed to be a proxy to the
         # 'keyword' attribute of 'Keyword'
         keyword = association_proxy('kw', 'keyword')
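 
 With the composite proxy in place, ``User.keywords`` reads and writes
 plain strings end to end.  A minimal sketch, assuming the mapping
 above::
 
     >>> user = User(name='log')
     >>> user.keywords['sk1'] = 'kw1'
     >>> user.keywords['sk1']
     'kw1'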
 
 
 One caveat with our example above is that because ``Keyword`` objects are created
 for each dictionary set operation, the example fails to maintain uniqueness for
-the ``Keyword`` objects on their string name, which is a typical requirement for 
-a tagging scenario such as this one.  For this use case the recipe 
+the ``Keyword`` objects on their string name, which is a typical requirement for
+a tagging scenario such as this one.  For this use case the recipe
 `UniqueObject <http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject>`_, or
 a comparable creational strategy, is
 recommended; it applies a "lookup first, then create" strategy to the constructor
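 
 A sketch of that "lookup first, then create" idea, using a
 hypothetical ``unique_keyword()`` helper rather than the recipe's
 exact mechanics::
 
     def unique_keyword(session, name):
         # return the existing Keyword for 'name', else create one;
         # Keyword(name) assumes the constructor from the examples above
         kw = session.query(Keyword).filter_by(keyword=name).first()
         if kw is None:
             kw = Keyword(name)
             session.add(kw)
         return kw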
 a "nested" EXISTS clause, such as in our basic association object example::
 
     >>> print(session.query(User).filter(User.keywords.any(keyword='jek')))
-    SELECT user.id AS user_id, user.name AS user_name 
-    FROM user 
-    WHERE EXISTS (SELECT 1 
-    FROM user_keyword 
-    WHERE user.id = user_keyword.user_id AND (EXISTS (SELECT 1 
-    FROM keyword 
+    SELECT user.id AS user_id, user.name AS user_name
+    FROM user
+    WHERE EXISTS (SELECT 1
+    FROM user_keyword
+    WHERE user.id = user_keyword.user_id AND (EXISTS (SELECT 1
+    FROM keyword
     WHERE keyword.id = user_keyword.keyword_id AND keyword.keyword = :keyword_1)))
 
 For a proxy to a scalar attribute, ``__eq__()`` is supported::
 
     >>> print(session.query(UserKeyword).filter(UserKeyword.keyword == 'jek'))
     SELECT user_keyword.*
-    FROM user_keyword 
-    WHERE EXISTS (SELECT 1 
-        FROM keyword 
+    FROM user_keyword
+    WHERE EXISTS (SELECT 1
+        FROM keyword
         WHERE keyword.id = user_keyword.keyword_id AND keyword.keyword = :keyword_1)
 
 and ``.contains()`` is available for a proxy to a scalar collection::
 
     >>> print(session.query(User).filter(User.keywords.contains('jek')))
     SELECT user.*
-    FROM user 
-    WHERE EXISTS (SELECT 1 
-    FROM userkeywords, keyword 
-    WHERE user.id = userkeywords.user_id 
-        AND keyword.id = userkeywords.keyword_id 
+    FROM user
+    WHERE EXISTS (SELECT 1
+    FROM userkeywords, keyword
+    WHERE user.id = userkeywords.user_id
+        AND keyword.id = userkeywords.keyword_id
         AND keyword.keyword = :keyword_1)
 
 :class:`.AssociationProxy` can be used with :meth:`.Query.join` somewhat manually
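 
 For instance, the proxy's ``attr`` accessor expands into the two
 attributes needed for the two joins.  A sketch, assuming the
 association object mapping above::
 
     session.query(User).join(*User.keywords.attr).\
         filter(Keyword.keyword == 'jek')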

doc/build/orm/extensions/declarative.rst

 ===========
 
 .. automodule:: sqlalchemy.ext.declarative
- 
+
 API Reference
 -------------
 

doc/build/orm/inheritance.rst

 ability to load elements "polymorphically", meaning that a single query can
 return objects of multiple types.
 
-.. note:: 
+.. note::
 
    This section currently uses classical mappings to illustrate inheritance
    configurations, and will soon be updated to standardize on Declarative.
             self.manager_data = manager_data
         def __repr__(self):
             return (
-                self.__class__.__name__ + " " + 
+                self.__class__.__name__ + " " +
                 self.name + " " + self.manager_data
             )
 
             self.engineer_info = engineer_info
         def __repr__(self):
             return (
-                self.__class__.__name__ + " " + 
+                self.__class__.__name__ + " " +
                 self.name + " " + self.engineer_info
             )
 
 child tables instead of using a foreign key::
 
     engineers = Table('engineers', metadata,
-       Column('employee_id', Integer, 
-                        ForeignKey('employees.employee_id'), 
+       Column('employee_id', Integer,
+                        ForeignKey('employees.employee_id'),
                         primary_key=True),
        Column('engineer_info', String(50)),
     )
 
     managers = Table('managers', metadata,
-       Column('employee_id', Integer, 
-                        ForeignKey('employees.employee_id'), 
+       Column('employee_id', Integer,
+                        ForeignKey('employees.employee_id'),
                         primary_key=True),
        Column('manager_data', String(50)),
     )
 
 .. sourcecode:: python+sql
 
-    mapper(Employee, employees, polymorphic_on=employees.c.type, 
+    mapper(Employee, employees, polymorphic_on=employees.c.type,
                                 polymorphic_identity='employee')
-    mapper(Engineer, engineers, inherits=Employee, 
+    mapper(Engineer, engineers, inherits=Employee,
                                 polymorphic_identity='engineer')
-    mapper(Manager, managers, inherits=Employee, 
+    mapper(Manager, managers, inherits=Employee,
                                 polymorphic_identity='manager')
 
 And that's it. Querying against ``Employee`` will return a combination of
 .. sourcecode:: python+sql
 
     {opensql}
-    SELECT employees.employee_id AS employees_employee_id, 
+    SELECT employees.employee_id AS employees_employee_id,
         employees.name AS employees_name, employees.type AS employees_type
     FROM employees
     []
 .. sourcecode:: python+sql
 
     {opensql}
-    SELECT managers.employee_id AS managers_employee_id, 
+    SELECT managers.employee_id AS managers_employee_id,
         managers.manager_data AS managers_manager_data
     FROM managers
     WHERE ? = managers.employee_id
     [5]
-    SELECT engineers.employee_id AS engineers_employee_id, 
+    SELECT engineers.employee_id AS engineers_employee_id,
         engineers.engineer_info AS engineers_engineer_info
     FROM engineers
     WHERE ? = engineers.employee_id
 
     query.all()
     {opensql}
-    SELECT employees.employee_id AS employees_employee_id, 
-        engineers.employee_id AS engineers_employee_id, 
-        managers.employee_id AS managers_employee_id, 
-        employees.name AS employees_name, 
-        employees.type AS employees_type, 
-        engineers.engineer_info AS engineers_engineer_info, 
+    SELECT employees.employee_id AS employees_employee_id,
+        engineers.employee_id AS engineers_employee_id,
+        managers.employee_id AS managers_employee_id,
+        employees.name AS employees_name,
+        employees.type AS employees_type,
+        engineers.engineer_info AS engineers_engineer_info,
         managers.manager_data AS managers_manager_data
-    FROM employees 
-        LEFT OUTER JOIN engineers 
-        ON employees.employee_id = engineers.employee_id 
-        LEFT OUTER JOIN managers 
+    FROM employees
+        LEFT OUTER JOIN engineers
+        ON employees.employee_id = engineers.employee_id
+        LEFT OUTER JOIN managers
         ON employees.employee_id = managers.employee_id
     []
 
 
     # custom selectable
     query.with_polymorphic(
-                [Engineer, Manager], 
+                [Engineer, Manager],
                 employees.outerjoin(managers).outerjoin(engineers)
             )
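 
 Once the subclasses are present in the query, criteria against their
 columns can be combined freely.  A sketch, assuming ``or_`` is
 imported from ``sqlalchemy``::
 
     query.with_polymorphic([Engineer, Manager]).\
         filter(or_(Engineer.engineer_info == 'someinfo',
                    Manager.manager_data == 'somedata'))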
 
 
 .. sourcecode:: python+sql
 
-    mapper(Employee, employees, polymorphic_on=employees.c.type, 
-                                polymorphic_identity='employee', 
+    mapper(Employee, employees, polymorphic_on=employees.c.type,
+                                polymorphic_identity='employee',
                                 with_polymorphic='*')
-    mapper(Engineer, engineers, inherits=Employee, 
+    mapper(Engineer, engineers, inherits=Employee,
                                 polymorphic_identity='engineer')
-    mapper(Manager, managers, inherits=Employee, 
+    mapper(Manager, managers, inherits=Employee,
                                 polymorphic_identity='manager')
 
 The above mapping will produce a query similar to that of
 classes - it also has to be called at the outset of a query.
 
 For total control of how :class:`.Query` joins along inheritance relationships,
-use the :class:`.Table` objects directly and construct joins manually.  For example, to 
+use the :class:`.Table` objects directly and construct joins manually.  For example, to
 query the name of employees with a particular criterion::
 
     session.query(Employee.name).\
 
     session.query(Company).\
         join(
-            (employees.outerjoin(engineers).outerjoin(managers), 
+            (employees.outerjoin(engineers).outerjoin(managers),
             Company.employees)
         ).\
         filter(
-            or_(Engineer.engineer_info=='someinfo', 
+            or_(Engineer.engineer_info=='someinfo',
                 Manager.manager_data=='somedata')
         )
 
 
     session.query(Company).filter(
         exists([1],
-            and_(Engineer.engineer_info=='someinfo', 
+            and_(Engineer.engineer_info=='someinfo',
                 employees.c.company_id==companies.c.company_id),
             from_obj=employees.join(engineers)
         )
 
     employee_mapper = mapper(Employee, employees_table,
         polymorphic_on=employees_table.c.type, polymorphic_identity='employee')
-    manager_mapper = mapper(Manager, inherits=employee_mapper, 
+    manager_mapper = mapper(Manager, inherits=employee_mapper,
                                         polymorphic_identity='manager')
-    engineer_mapper = mapper(Engineer, inherits=employee_mapper, 
+    engineer_mapper = mapper(Engineer, inherits=employee_mapper,
                                         polymorphic_identity='engineer')
 
 Note that the mappers for the derived classes Manager and Engineer omit the
         'engineer': engineers_table
     }, 'type', 'pjoin')
 
-    employee_mapper = mapper(Employee, employees_table, 
-                                        with_polymorphic=('*', pjoin), 
-                                        polymorphic_on=pjoin.c.type, 
+    employee_mapper = mapper(Employee, employees_table,
+                                        with_polymorphic=('*', pjoin),
+                                        polymorphic_on=pjoin.c.type,
                                         polymorphic_identity='employee')
-    manager_mapper = mapper(Manager, managers_table, 
-                                        inherits=employee_mapper, 
-                                        concrete=True, 
+    manager_mapper = mapper(Manager, managers_table,
+                                        inherits=employee_mapper,
+                                        concrete=True,
                                         polymorphic_identity='manager')
-    engineer_mapper = mapper(Engineer, engineers_table, 
-                                        inherits=employee_mapper, 
-                                        concrete=True, 
+    engineer_mapper = mapper(Engineer, engineers_table,
+                                        inherits=employee_mapper,
+                                        concrete=True,
                                         polymorphic_identity='engineer')
 
 Upon select, the polymorphic union produces a query like this:
 
     session.query(Employee).all()
     {opensql}
-    SELECT pjoin.type AS pjoin_type, 
-            pjoin.manager_data AS pjoin_manager_data, 
+    SELECT pjoin.type AS pjoin_type,
+            pjoin.manager_data AS pjoin_manager_data,
             pjoin.employee_id AS pjoin_employee_id,
             pjoin.name AS pjoin_name, pjoin.engineer_info AS pjoin_engineer_info
     FROM (
-        SELECT employees.employee_id AS employee_id, 
+        SELECT employees.employee_id AS employee_id,
             CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS name,
             CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type
         FROM employees
     UNION ALL
-        SELECT managers.employee_id AS employee_id, 
+        SELECT managers.employee_id AS employee_id,
             managers.manager_data AS manager_data, managers.name AS name,
             CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type
         FROM managers
     UNION ALL
-        SELECT engineers.employee_id AS employee_id, 
+        SELECT engineers.employee_id AS employee_id,
             CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS name,
             engineers.engineer_info AS engineer_info, 'engineer' AS type
         FROM engineers
         Column('company_id', Integer, ForeignKey('companies.id'))
     )
 
-    mapper(Employee, employees_table, 
-                    with_polymorphic=('*', pjoin), 
-                    polymorphic_on=pjoin.c.type, 
+    mapper(Employee, employees_table,
+                    with_polymorphic=('*', pjoin),
+                    polymorphic_on=pjoin.c.type,
                     polymorphic_identity='employee')
 
-    mapper(Manager, managers_table, 
-                    inherits=employee_mapper, 
-                    concrete=True, 
+    mapper(Manager, managers_table,
+                    inherits=employee_mapper,
+                    concrete=True,
                     polymorphic_identity='manager')
 
-    mapper(Engineer, engineers_table, 
-                    inherits=employee_mapper, 
-                    concrete=True, 
+    mapper(Engineer, engineers_table,
+                    inherits=employee_mapper,
+                    concrete=True,
                     polymorphic_identity='engineer')
 
     mapper(Company, companies, properties={
             'some_c':relationship(C, back_populates='many_a')
     })
     mapper(C, c_table, properties={
-        'many_a':relationship(A, collection_class=set, 
+        'many_a':relationship(A, collection_class=set,
                                     back_populates='some_c'),
     })
 

doc/build/orm/loading.rst

 .. sourcecode:: python+sql
 
     {sql}>>> jack.addresses
-    SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, 
+    SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address,
     addresses.user_id AS addresses_user_id
     FROM addresses
     WHERE ? = addresses.user_id
     [5]
     {stop}[<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
 
-The one case where SQL is not emitted is for a simple many-to-one relationship, when 
+The one case where SQL is not emitted is for a simple many-to-one relationship, when
 the related object can be identified by its primary key alone and that object is already
 present in the current :class:`.Session`.
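 
 For example, once ``jack`` is present in the session, the many-to-one
 from one of his addresses is satisfied from the identity map without
 emitting SQL.  A sketch, assuming the usual ``User``/``Address``
 mapping::
 
     >>> address = jack.addresses[0]
     >>> address.user is jack    # located by primary key, no SELECT emitted
     True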
 
 
     {sql}>>> jack = session.query(User).\
     ... options(subqueryload('addresses')).\
-    ... filt