Mike Bayer committed 594122d

trailing whitespace bonanza

Files changed (32)

doc/build/core/connections.rst

 connection is retrieved from the connection pool at the point at which
 :class:`.Connection` is created.
 
-The returned result is an instance of :class:`.ResultProxy`, which 
+The returned result is an instance of :class:`.ResultProxy`, which
 references a DBAPI cursor and provides a largely compatible interface
 with that of the DBAPI cursor.   The DBAPI cursor will be closed
-by the :class:`.ResultProxy` when all of its result rows (if any) are 
+by the :class:`.ResultProxy` when all of its result rows (if any) are
 exhausted.  A :class:`.ResultProxy` that returns no rows, such as that of
-an UPDATE statement (without any returned rows), 
+an UPDATE statement (without any returned rows),
 releases cursor resources immediately upon construction.
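
 The pattern above can be sketched minimally as follows, using an
 in-memory SQLite database purely for illustration::

     from sqlalchemy import create_engine

     engine = create_engine("sqlite://")
     connection = engine.connect()
     result = connection.execute("select 'hello' as greeting")

     for row in result:   # DBAPI cursor is released once rows are exhausted
         print(row['greeting'])

     connection.close()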
 
 When the :meth:`~.Connection.close` method is called, the referenced DBAPI
 of weakref callbacks - *never* the ``__del__`` method) - however it's never a
 good idea to rely upon Python garbage collection to manage resources.
 
-Our example above illustrated the execution of a textual SQL string. 
-The :meth:`~.Connection.execute` method can of course accommodate more than 
+Our example above illustrated the execution of a textual SQL string.
+The :meth:`~.Connection.execute` method can of course accommodate more than
 that, including the variety of SQL expression constructs described
 in :ref:`sqlexpression_toplevel`.
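
 As a brief sketch, a ``text()`` construct with a bound parameter may be
 passed in place of a plain string::

     from sqlalchemy import text

     result = conn.execute(
                 text("select * from users where id=:user_id"),
                 user_id=5)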
 
 Using Transactions
 ==================
 
-.. note:: 
+.. note::
 
-  This section describes how to use transactions when working directly 
+  This section describes how to use transactions when working directly
   with :class:`.Engine` and :class:`.Connection` objects. When using the
   SQLAlchemy ORM, the public API for transaction control is via the
   :class:`.Session` object, which makes usage of the :class:`.Transaction`
 transaction is in progress. The detection is based on the presence of the
 ``autocommit=True`` execution option on the statement.   If the statement
 is a text-only statement and the flag is not set, a regular expression is used
-to detect INSERT, UPDATE, DELETE, as well as a variety of other commands 
+to detect INSERT, UPDATE, DELETE, as well as a variety of other commands
 for a particular backend::
 
     conn = engine.connect()
     conn.execute("INSERT INTO users VALUES (1, 'john')")  # autocommits
 
 The "autocommit" feature is only in effect when no :class:`.Transaction` has
-otherwise been declared.   This means the feature is not generally used with 
-the ORM, as the :class:`.Session` object by default always maintains an 
+otherwise been declared.   This means the feature is not generally used with
+the ORM, as the :class:`.Session` object by default always maintains an
 ongoing :class:`.Transaction`.
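
 As a sketch, the execution option may also be applied to a statement
 explicitly, rather than relying upon the regular expression detection
 described previously::

     from sqlalchemy import text

     conn = engine.connect()
     conn.execute(
         text("update users set name='ed'").
             execution_options(autocommit=True))   # autocommits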
 
 Full control of the "autocommit" behavior is available using the generative
 :class:`.Connection`.  This was illustrated using the :meth:`~.Engine.execute` method
 of :class:`.Engine`.
 
-In addition to "connectionless" execution, it is also possible 
-to use the :meth:`~.Executable.execute` method of 
+In addition to "connectionless" execution, it is also possible
+to use the :meth:`~.Executable.execute` method of
 any :class:`.Executable` construct, which is a marker for SQL expression objects
 that support execution.   The SQL expression object itself references an
 :class:`.Engine` or :class:`.Connection` known as the **bind**, which it uses
 on the expression itself, utilizing the fact that either an
 :class:`~sqlalchemy.engine.base.Engine` or
 :class:`~sqlalchemy.engine.base.Connection` has been *bound* to the expression
-object (binding is discussed further in 
+object (binding is discussed further in
 :ref:`metadata_toplevel`):
 
 .. sourcecode:: python+sql
     call_operation3(conn)
     conn.close()
 
-Calling :meth:`~.Connection.close` on the "contextual" connection does not release 
+Calling :meth:`~.Connection.close` on the "contextual" connection does not release
 its resources until all other usages of that resource are closed as well, including
 that any ongoing transactions are rolled back or committed.
 
       """
 
 If the dialect is providing support for a particular DBAPI on top of
-an existing SQLAlchemy-supported database, the name can be given 
+an existing SQLAlchemy-supported database, the name can be given
 including a database-qualification.  For example, if ``FooDialect``
 were in fact a MySQL dialect, the entry point could be established like this::
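
     # a sketch of the setup.py entry point; "foodialect" is a
     # hypothetical package providing the dialect class
     entry_points="""
     [sqlalchemy.dialects]
     mysql.foodialect = foodialect.dialect:FooDialect
     """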
 

doc/build/core/engines.rst

 Supported Databases
 ====================
 
-SQLAlchemy includes many :class:`~sqlalchemy.engine.base.Dialect` implementations for various 
-backends; each is described as its own package in the :ref:`sqlalchemy.dialects_toplevel` package.  A 
+SQLAlchemy includes many :class:`~sqlalchemy.engine.base.Dialect` implementations for various
+backends; each is described as its own package in the :ref:`sqlalchemy.dialects_toplevel` package.  A
 SQLAlchemy dialect always requires that an appropriate DBAPI driver is installed.
 
-The table below summarizes the state of DBAPI support in SQLAlchemy 0.7.  The values 
+The table below summarizes the state of DBAPI support in SQLAlchemy 0.7.  The values
 translate as:
 
 * yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.
 :class:`.Engine` per database established within an
 application, rather than creating a new one for each connection.
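
 For example, a typical module-level setup (the URL here is arbitrary)::

     from sqlalchemy import create_engine

     # created once, at module level, and shared application-wide
     engine = create_engine("postgresql://scott:tiger@localhost/mydatabase")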
 
-.. note:: 
+.. note::
 
    :class:`.QueuePool` is not used by default for SQLite engines.  See
    :ref:`sqlite_toplevel` for details on SQLite connection pool usage.
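
 For other backends, pool sizing can be tuned via ``create_engine()``
 arguments; a brief sketch with arbitrary values::

     engine = create_engine("mysql://scott:tiger@localhost/test",
                                 pool_size=10, max_overflow=20)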
 namespace of SA loggers that can be turned on is as follows:
 
 * ``sqlalchemy.engine`` - controls SQL echoing.  Set to ``logging.INFO`` for SQL query output, ``logging.DEBUG`` for query + result set output.
-* ``sqlalchemy.dialects`` - controls custom logging for SQL dialects.  See the documentation of individual dialects for details. 
+* ``sqlalchemy.dialects`` - controls custom logging for SQL dialects.  See the documentation of individual dialects for details.
 * ``sqlalchemy.pool`` - controls connection pool logging.  Set to ``logging.INFO`` or lower to log connection pool checkouts/checkins.
 * ``sqlalchemy.orm`` - controls logging of various ORM functions.  Set to ``logging.INFO`` for information on mapper configurations.
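
 A minimal sketch of enabling SQL echoing through the standard ``logging``
 module::

     import logging

     logging.basicConfig()
     logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)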
 
 
    The SQLAlchemy :class:`.Engine` conserves Python function call overhead
    by only emitting log statements when the current logging level is detected
-   as ``logging.INFO`` or ``logging.DEBUG``.  It only checks this level when 
-   a new connection is procured from the connection pool.  Therefore when 
+   as ``logging.INFO`` or ``logging.DEBUG``.  It only checks this level when
+   a new connection is procured from the connection pool.  Therefore when
    changing the logging configuration for an already-running application, any
    :class:`.Connection` that's currently active, or more commonly a
    :class:`~.orm.session.Session` object that's active in a transaction, won't log any
-   SQL according to the new configuration until a new :class:`.Connection` 
-   is procured (in the case of :class:`~.orm.session.Session`, this is 
+   SQL according to the new configuration until a new :class:`.Connection`
+   is procured (in the case of :class:`~.orm.session.Session`, this is
    after the current transaction ends and a new one begins).

doc/build/core/internals.rst

 Core Internals
 ==============
 
-Some key internal constructs are listed here.   
+Some key internal constructs are listed here.
 
 .. currentmodule: sqlalchemy
 

doc/build/intro.rst

 * **Plain Python Distutils** - SQLAlchemy can be installed with a clean
   Python install using the services provided via `Python Distutils <http://docs.python.org/distutils/>`_,
   using the ``setup.py`` script. The C extensions as well as Python 3 builds are supported.
-* **Standard Setuptools** - When using `setuptools <http://pypi.python.org/pypi/setuptools/>`_, 
+* **Standard Setuptools** - When using `setuptools <http://pypi.python.org/pypi/setuptools/>`_,
   SQLAlchemy can be installed via ``setup.py`` or ``easy_install``, and the C
   extensions are supported.  setuptools is not supported on Python 3 at the time
   of this writing.
-* **Distribute** - With `distribute <http://pypi.python.org/pypi/distribute/>`_, 
+* **Distribute** - With `distribute <http://pypi.python.org/pypi/distribute/>`_,
   SQLAlchemy can be installed via ``setup.py`` or ``easy_install``, and the C
   extensions as well as Python 3 builds are supported.
 * **pip** - `pip <http://pypi.python.org/pypi/pip/>`_ is an installer that
   rides on top of ``setuptools`` or ``distribute``, replacing the usage
   of ``easy_install``.  It is often preferred for its simpler mode of usage.
 
-.. note:: 
+.. note::
 
    It is strongly recommended that either ``setuptools`` or ``distribute`` be installed.
    Python's built-in ``distutils`` lacks many widely used installation features.
 Install via easy_install or pip
 -------------------------------
 
-When ``easy_install`` or ``pip`` is available, the distribution can be 
+When ``easy_install`` or ``pip`` is available, the distribution can be
 downloaded from PyPI and installed in one step::
 
     easy_install SQLAlchemy
 
     python setup.py --without-cextensions install
 
-.. note:: 
+.. note::
 
    The ``--without-cextensions`` flag is available **only** if ``setuptools``
    or ``distribute`` is installed.  It is not available on a plain Python ``distutils``

doc/build/orm/collections.rst

 
     jack.posts.append(Post('new post'))
 
-Since the read side of the dynamic relationship always queries the 
-database, changes to the underlying collection will not be visible 
-until the data has been flushed.  However, as long as "autoflush" is 
-enabled on the :class:`.Session` in use, this will occur 
-automatically each time the collection is about to emit a 
+Since the read side of the dynamic relationship always queries the
+database, changes to the underlying collection will not be visible
+until the data has been flushed.  However, as long as "autoflush" is
+enabled on the :class:`.Session` in use, this will occur
+automatically each time the collection is about to emit a
 query.
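
 As a brief sketch, assuming the ``Post`` mapping includes a ``headline``
 column::

     jack.posts.append(Post('new post'))

     # autoflush emits the pending Post before this query runs
     posts = jack.posts.filter(Post.headline == 'new post').all()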
 
 To place a dynamic relationship on a backref, use the :func:`~.orm.backref`
     class Post(Base):
         __table__ = posts_table
 
-        user = relationship(User, 
+        user = relationship(User,
                     backref=backref('posts', lazy='dynamic')
                 )
 
 Note that eager/lazy loading options cannot be used in conjunction with dynamic relationships at this time.
 
-.. note:: 
+.. note::
 
    The :func:`~.orm.dynamic_loader` function is essentially the same
    as :func:`~.orm.relationship` with the ``lazy='dynamic'`` argument specified.
 Setting Noload
 ---------------
 
-A "noload" relationship never loads from the database, even when 
+A "noload" relationship never loads from the database, even when
 accessed.   It is configured using ``lazy='noload'``::
 
     class MyClass(Base):
     class MyClass(Base):
         __tablename__ = 'mytable'
         id = Column(Integer, primary_key=True)
-        children = relationship("MyOtherClass", 
-                        cascade="all, delete-orphan", 
+        children = relationship("MyOtherClass",
+                        cascade="all, delete-orphan",
                         passive_deletes=True)
 
     class MyOtherClass(Base):
         __tablename__ = 'myothertable'
         id = Column(Integer, primary_key=True)
-        parent_id = Column(Integer, 
+        parent_id = Column(Integer,
                     ForeignKey('mytable.id', ondelete='CASCADE')
                         )
 
 Dictionary Collections
 -----------------------
 
-A little extra detail is needed when using a dictionary as a collection. 
+A little extra detail is needed when using a dictionary as a collection.
 This is because objects are always loaded from the database as lists, and a key-generation
 strategy must be available to populate the dictionary correctly.  The
 :func:`.attribute_mapped_collection` function is by far the most common way
     class Item(Base):
         __tablename__ = 'item'
         id = Column(Integer, primary_key=True)
-        notes = relationship("Note", 
-                    collection_class=attribute_mapped_collection('keyword'), 
+        notes = relationship("Note",
+                    collection_class=attribute_mapped_collection('keyword'),
                     cascade="all, delete-orphan")
 
     class Note(Base):
     >>> item.notes.items()
     {'a': <__main__.Note object at 0x2eaaf0>}
 
-:func:`.attribute_mapped_collection` will ensure that 
+:func:`.attribute_mapped_collection` will ensure that
 the ``.keyword`` attribute of each ``Note`` complies with the key in the
 dictionary.   For example, when assigning to ``Item.notes``, the dictionary
 key we supply must match that of the actual ``Note`` object::
 
     item = Item()
     item.notes = {
-                'a': Note('a', 'atext'), 
+                'a': Note('a', 'atext'),
                 'b': Note('b', 'btext')
             }
 
 The attribute which :func:`.attribute_mapped_collection` uses as a key
 does not need to be mapped at all!  Using a regular Python ``@property`` allows virtually
-any detail or combination of details about the object to be used as the key, as 
+any detail or combination of details about the object to be used as the key, as
 below when we establish it as a tuple of ``Note.keyword`` and the first ten letters
 of the ``Note.text`` field::
 
     class Item(Base):
         __tablename__ = 'item'
         id = Column(Integer, primary_key=True)
-        notes = relationship("Note", 
-                    collection_class=attribute_mapped_collection('note_key'), 
+        notes = relationship("Note",
+                    collection_class=attribute_mapped_collection('note_key'),
                     backref="item",
                     cascade="all, delete-orphan")
 
     class Item(Base):
         __tablename__ = 'item'
         id = Column(Integer, primary_key=True)
-        notes = relationship("Note", 
-                    collection_class=column_mapped_collection(Note.__table__.c.keyword), 
+        notes = relationship("Note",
+                    collection_class=column_mapped_collection(Note.__table__.c.keyword),
                     cascade="all, delete-orphan")
 
 as well as :func:`.mapped_collection` which is passed any callable function.
     class Item(Base):
         __tablename__ = 'item'
         id = Column(Integer, primary_key=True)
-        notes = relationship("Note", 
-                    collection_class=mapped_collection(lambda note: note.text[0:10]), 
+        notes = relationship("Note",
+                    collection_class=mapped_collection(lambda note: note.text[0:10]),
                     cascade="all, delete-orphan")
 
 Dictionary mappings are often combined with the "Association Proxy" extension to produce
-streamlined dictionary views.  See :ref:`proxying_dictionaries` and :ref:`composite_association_proxy` 
+streamlined dictionary views.  See :ref:`proxying_dictionaries` and :ref:`composite_association_proxy`
 for examples.
 
 .. autofunction:: attribute_mapped_collection
 
    For the first use case, the :func:`.orm.validates` decorator is by far
    the simplest way to intercept incoming values in all cases for the purposes
-   of validation and simple marshaling.  See :ref:`simple_validators` 
+   of validation and simple marshaling.  See :ref:`simple_validators`
    for an example of this.
 
    For the second use case, the :ref:`associationproxy_toplevel` extension is a
    unaffected and avoids the need to carefully tailor collection behavior on a
    method-by-method basis.
 
-   Customized collections are useful when the collection needs to 
-   have special behaviors upon access or mutation operations that can't 
+   Customized collections are useful when the collection needs to
+   have special behaviors upon access or mutation operations that can't
    otherwise be modeled externally to the collection.   They can of course
    be combined with the above two approaches.
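
 As a short sketch, a plain ``list`` subclass can serve as a customized
 collection, with its mutation methods instrumented automatically; the
 class and attribute names below are hypothetical::

     class ReceivingList(list):
         def append(self, item):
             # special behavior upon mutation
             item.received = True
             super(ReceivingList, self).append(item)

     class Warehouse(Base):
         __tablename__ = 'warehouse'
         id = Column(Integer, primary_key=True)
         items = relationship("Item", collection_class=ReceivingList)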
 
             MappedCollection.__init__(self, keyfunc=lambda node: node.name)
             OrderedDict.__init__(self, *args, **kw)
 
-When subclassing :class:`.MappedCollection`, user-defined versions 
+When subclassing :class:`.MappedCollection`, user-defined versions
 of ``__setitem__()`` or ``__delitem__()`` should be decorated
 with :meth:`.collection.internally_instrumented`, **if** they call down
 to those same methods on :class:`.MappedCollection`.  This is because the methods
                                         collection
 
     class MyMappedCollection(MappedCollection):
-        """Use @internally_instrumented when your methods 
+        """Use @internally_instrumented when your methods
         call down to already-instrumented methods.
 
         """
 
 .. note::
 
-   Due to a bug in MappedCollection prior to version 0.7.6, this 
+   Due to a bug in MappedCollection prior to version 0.7.6, this
    workaround usually needs to be called before a custom subclass
    of :class:`.MappedCollection` which uses :meth:`.collection.internally_instrumented`
    can be used::

doc/build/orm/inheritance.rst

 In joined table inheritance, each class along a particular class's list of
 parents is represented by a unique table. The total set of attributes for a
 particular instance is represented as a join along all tables in its
-inheritance path. Here, we first define the ``Employee`` class. 
+inheritance path. Here, we first define the ``Employee`` class.
 This table will contain a primary key column (or columns), and a column
 for each attribute that's represented by ``Employee``. In this case it's just
 ``name``::
 The mapped table also has a column called ``type``.   The purpose of
 this column is to act as the **discriminator**, and stores a value
 which indicates the type of object represented within the row. The column may
-be of any datatype, though string and integer are the most common. 
+be of any datatype, though string and integer are the most common.
 
 The discriminator column is only needed if polymorphic loading is
 desired, as is usually the case.   It is not strictly necessary that
-it be present directly on the base mapped table, and can instead be defined on a 
-derived select statement that's used when the class is queried; 
+it be present directly on the base mapped table, and can instead be defined on a
+derived select statement that's used when the class is queried;
 however, this is a much more sophisticated configuration scenario.
 
 The mapping receives additional arguments via the ``__mapper_args__``
-dictionary.   Here the ``type`` column is explicitly stated as the 
+dictionary.   Here the ``type`` column is explicitly stated as the
 discriminator column, and the **polymorphic identity** of ``employee``
 is also given; this is the value that will be
 stored in the polymorphic discriminator column for instances of this
         }
 
 It is standard practice that the same column is used for both the role
-of primary key as well as foreign key to the parent table, 
+of primary key as well as foreign key to the parent table,
 and that the column is also named the same as that of the parent table.
 However, both of these practices are optional.  Separate columns may be used for
 primary key and parent-relationship, the column may be named differently than
     One natural effect of the joined table inheritance configuration is that the
     identity of any mapped object can be determined entirely from the base table.
     This has obvious advantages, so SQLAlchemy always considers the primary key
-    columns of a joined inheritance class to be those of the base table only. 
+    columns of a joined inheritance class to be those of the base table only.
     In other words, the ``id``
     columns of both the ``engineer`` and ``manager`` tables are not used to locate
     ``Engineer`` or ``Manager`` objects - only the value in
 .. sourcecode:: python+sql
 
     {opensql}
-    SELECT employee.id AS employee_id, 
+    SELECT employee.id AS employee_id,
         employee.name AS employee_name, employee.type AS employee_type
     FROM employee
     []
 .. sourcecode:: python+sql
 
     {opensql}
-    SELECT manager.id AS manager_id, 
+    SELECT manager.id AS manager_id,
         manager.manager_data AS manager_manager_data
     FROM manager
     WHERE ? = manager.id
     [5]
-    SELECT engineer.id AS engineer_id, 
+    SELECT engineer.id AS engineer_id,
         engineer.engineer_info AS engineer_engineer_info
     FROM engineer
     WHERE ? = engineer.id
 
     query = session.query(eng_plus_manager)
 
-The above produces a query which joins the ``employee`` table to both the 
+The above produces a query which joins the ``employee`` table to both the
 ``engineer`` and ``manager`` tables like the following:
 
 .. sourcecode:: python+sql
 
     query.all()
     {opensql}
-    SELECT employee.id AS employee_id, 
-        engineer.id AS engineer_id, 
-        manager.id AS manager_id, 
-        employee.name AS employee_name, 
-        employee.type AS employee_type, 
-        engineer.engineer_info AS engineer_engineer_info, 
+    SELECT employee.id AS employee_id,
+        engineer.id AS engineer_id,
+        manager.id AS manager_id,
+        employee.name AS employee_name,
+        employee.type AS employee_type,
+        engineer.engineer_info AS engineer_engineer_info,
         manager.manager_data AS manager_manager_data
-    FROM employee 
-        LEFT OUTER JOIN engineer 
-        ON employee.id = engineer.id 
-        LEFT OUTER JOIN manager 
+    FROM employee
+        LEFT OUTER JOIN engineer
+        ON employee.id = engineer.id
+        LEFT OUTER JOIN manager
         ON employee.id = manager.id
     []
 
 The entity returned by :func:`.orm.with_polymorphic` is an :class:`.AliasedClass`
 object, which can be used in a :class:`.Query` like any other alias, including
-named attributes for those attributes on the ``Employee`` class.   In our 
+named attributes for those attributes on the ``Employee`` class.   In our
 example, ``eng_plus_manager`` becomes the entity that we use to refer to the
-three-way outer join above.  It also includes namespaces for each class named 
-in the list of classes, so that attributes specific to those subclasses can be 
+three-way outer join above.  It also includes namespaces for each class named
+in the list of classes, so that attributes specific to those subclasses can be
 called upon as well.   The following example illustrates calling upon attributes
 specific to ``Engineer`` as well as ``Manager`` in terms of ``eng_plus_manager``::
 
     eng_plus_manager = with_polymorphic(Employee, [Engineer, Manager])
     query = session.query(eng_plus_manager).filter(
                     or_(
-                        eng_plus_manager.Engineer.engineer_info=='x', 
+                        eng_plus_manager.Engineer.engineer_info=='x',
                         eng_plus_manager.Manager.manager_data=='y'
                     )
                 )
     engineer = Engineer.__table__
     entity = with_polymorphic(
                 Employee,
-                [Engineer, Manager], 
+                [Engineer, Manager],
                 employee.outerjoin(manager).outerjoin(engineer)
             )
 
 +++++++++++++++++++++++++++++++++++++++++++++
 
 The ``with_polymorphic`` functions work fine for
-simplistic scenarios.   However, direct control of table rendering 
+simplistic scenarios.   However, direct control of table rendering
 is sometimes called for, such as when one wants to
 render only the subclass table and not the parent table.
 
-This use case can be achieved by using the mapped :class:`.Table` 
-objects directly.   For example, to 
+This use case can be achieved by using the mapped :class:`.Table`
+objects directly.   For example, to
 query the name of employees with particular criterion::
 
     engineer = Engineer.__table__
         id = Column(Integer, primary_key=True)
         name = Column(String(50))
 
-        employees = relationship("Employee", 
+        employees = relationship("Employee",
                         backref='company',
                         cascade='all, delete-orphan')
 
 function to create a polymorphic selectable::
 
     manager_and_engineer = with_polymorphic(
-                                Employee, [Manager, Engineer], 
+                                Employee, [Manager, Engineer],
                                 aliased=True)
 
     session.query(Company).\
         join(manager_and_engineer, Company.employees).\
         filter(
-            or_(manager_and_engineer.Engineer.engineer_info=='someinfo', 
+            or_(manager_and_engineer.Engineer.engineer_info=='someinfo',
                 manager_and_engineer.Manager.manager_data=='somedata')
         )
 
 with the polymorphic construct::
 
     manager_and_engineer = with_polymorphic(
-                                Employee, [Manager, Engineer], 
+                                Employee, [Manager, Engineer],
                                 aliased=True)
 
     session.query(Company).\
         join(Company.employees.of_type(manager_and_engineer)).\
         filter(
-            or_(manager_and_engineer.Engineer.engineer_info=='someinfo', 
+            or_(manager_and_engineer.Engineer.engineer_info=='someinfo',
                 manager_and_engineer.Manager.manager_data=='somedata')
         )
 
 
     session.query(Company).filter(
         exists([1],
-            and_(Engineer.engineer_info=='someinfo', 
+            and_(Engineer.engineer_info=='someinfo',
                 employees.c.company_id==companies.c.company_id),
             from_obj=employees.join(engineers)
         )
 
 The :func:`.joinedload` and :func:`.subqueryload` options also support
 paths which make use of :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type`.
-Below we load ``Company`` rows while eagerly loading related ``Engineer`` 
+Below we load ``Company`` rows while eagerly loading related ``Engineer``
 objects, querying the ``employee`` and ``engineer`` tables simultaneously::
 
     session.query(Company).\
-        options(subqueryload_all(Company.employees.of_type(Engineer), 
+        options(subqueryload_all(Company.employees.of_type(Engineer),
                         Engineer.machines))
 
 .. versionadded:: 0.8
     :func:`.joinedload` and :func:`.subqueryload` support
-    paths that are qualified with 
+    paths that are qualified with
     :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type`.
 
 Single Table Inheritance
         }
 
 Note that the mappers for the derived classes Manager and Engineer omit the
-``__tablename__``, indicating they do not have a mapped table of 
+``__tablename__``, indicating they do not have a mapped table of
 their own.
 
 .. _concrete_inheritance:
 .. note::
 
     this section is currently using classical mappings.  The
-    Declarative system fully supports concrete inheritance 
+    Declarative system fully supports concrete inheritance
     however.   See the links below for more information on using
     declarative with concrete table inheritance.
 
         'engineer': engineers_table
     }, 'type', 'pjoin')
 
-    employee_mapper = mapper(Employee, employees_table, 
-                                        with_polymorphic=('*', pjoin), 
-                                        polymorphic_on=pjoin.c.type, 
+    employee_mapper = mapper(Employee, employees_table,
+                                        with_polymorphic=('*', pjoin),
+                                        polymorphic_on=pjoin.c.type,
                                         polymorphic_identity='employee')
-    manager_mapper = mapper(Manager, managers_table, 
-                                        inherits=employee_mapper, 
-                                        concrete=True, 
+    manager_mapper = mapper(Manager, managers_table,
+                                        inherits=employee_mapper,
+                                        concrete=True,
                                         polymorphic_identity='manager')
-    engineer_mapper = mapper(Engineer, engineers_table, 
-                                        inherits=employee_mapper, 
-                                        concrete=True, 
+    engineer_mapper = mapper(Engineer, engineers_table,
+                                        inherits=employee_mapper,
+                                        concrete=True,
                                         polymorphic_identity='engineer')
 
 Upon select, the polymorphic union produces a query like this:
 
     session.query(Employee).all()
     {opensql}
-    SELECT pjoin.type AS pjoin_type, 
-            pjoin.manager_data AS pjoin_manager_data, 
+    SELECT pjoin.type AS pjoin_type,
+            pjoin.manager_data AS pjoin_manager_data,
             pjoin.employee_id AS pjoin_employee_id,
     pjoin.name AS pjoin_name, pjoin.engineer_info AS pjoin_engineer_info
     FROM (
-        SELECT employees.employee_id AS employee_id, 
+        SELECT employees.employee_id AS employee_id,
             CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS name,
             CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type
         FROM employees
     UNION ALL
-        SELECT managers.employee_id AS employee_id, 
+        SELECT managers.employee_id AS employee_id,
             managers.manager_data AS manager_data, managers.name AS name,
             CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type
         FROM managers
     UNION ALL
-        SELECT engineers.employee_id AS employee_id, 
+        SELECT engineers.employee_id AS employee_id,
             CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS name,
         engineers.engineer_info AS engineer_info, 'engineer' AS type
         FROM engineers
         Column('company_id', Integer, ForeignKey('companies.id'))
     )
 
-    mapper(Employee, employees_table, 
-                    with_polymorphic=('*', pjoin), 
-                    polymorphic_on=pjoin.c.type, 
+    mapper(Employee, employees_table,
+                    with_polymorphic=('*', pjoin),
+                    polymorphic_on=pjoin.c.type,
                     polymorphic_identity='employee')
 
-    mapper(Manager, managers_table, 
-                    inherits=employee_mapper, 
-                    concrete=True, 
+    mapper(Manager, managers_table,
+                    inherits=employee_mapper,
+                    concrete=True,
                     polymorphic_identity='manager')
 
-    mapper(Engineer, engineers_table, 
-                    inherits=employee_mapper, 
-                    concrete=True, 
+    mapper(Engineer, engineers_table,
+                    inherits=employee_mapper,
+                    concrete=True,
                     polymorphic_identity='engineer')
 
     mapper(Company, companies, properties={
             'some_c':relationship(C, back_populates='many_a')
     })
     mapper(C, c_table, properties={
-        'many_a':relationship(A, collection_class=set, 
+        'many_a':relationship(A, collection_class=set,
                                     back_populates='some_c'),
     })
 

doc/build/orm/relationships.rst

 Basic Relational Patterns
 --------------------------
 
-A quick walkthrough of the basic relational patterns. 
+A quick walkthrough of the basic relational patterns.
 
 The imports used for each of the following sections is as follows::
 
     class Parent(Base):
         __tablename__ = 'left'
         id = Column(Integer, primary_key=True)
-        children = relationship("Child", 
+        children = relationship("Child",
                         secondary=association_table)
 
     class Child(Base):
     class Parent(Base):
         __tablename__ = 'left'
         id = Column(Integer, primary_key=True)
-        children = relationship("Child", 
-                        secondary=association_table, 
+        children = relationship("Child",
+                        secondary=association_table,
                         backref="parents")
 
     class Child(Base):
         id = Column(Integer, primary_key=True)
 
 The ``secondary`` argument of :func:`.relationship` also accepts a callable
-that returns the ultimate argument, which is evaluated only when mappers are 
+that returns the ultimate argument, which is evaluated only when mappers are
 first used.   Using this, we can define the ``association_table`` at a later
 point, as long as it's available to the callable after all module initialization
 is complete::
     class Parent(Base):
         __tablename__ = 'left'
         id = Column(Integer, primary_key=True)
-        children = relationship("Child", 
-                        secondary=lambda: association_table, 
+        children = relationship("Child",
+                        secondary=lambda: association_table,
                         backref="parents")
 
 With the declarative extension in use, the traditional "string name of the table"
     class Parent(Base):
         __tablename__ = 'left'
         id = Column(Integer, primary_key=True)
-        children = relationship("Child", 
-                        secondary="association", 
+        children = relationship("Child",
+                        secondary="association",
                         backref="parents")
 
 Deleting Rows from the Many to Many Table
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 A behavior which is unique to the ``secondary`` argument to :func:`.relationship`
-is that the :class:`.Table` which is specified here is automatically subject 
+is that the :class:`.Table` which is specified here is automatically subject
 to INSERT and DELETE statements, as objects are added or removed from the collection.
-There is **no need to delete from this table manually**.   The act of removing a 
+There is **no need to delete from this table manually**.   The act of removing a
 record from the collection will have the effect of the row being deleted on flush::
 
     # row will be deleted from the "secondary" table
 
 There are several possibilities here:
 
-* If there is a :func:`.relationship` from ``Parent`` to ``Child``, but there is 
+* If there is a :func:`.relationship` from ``Parent`` to ``Child``, but there is
   **not** a reverse-relationship that links a particular ``Child`` to each ``Parent``,
   SQLAlchemy will not have any awareness that when deleting this particular
   ``Child`` object, it needs to maintain the "secondary" table that links it to
   the ``Parent``.  No delete of the "secondary" table will occur.
 * If there is a relationship that links a particular ``Child`` to each ``Parent``,
-  suppose it's called ``Child.parents``, SQLAlchemy by default will load in 
+  suppose it's called ``Child.parents``, SQLAlchemy by default will load in
   the ``Child.parents`` collection to locate all ``Parent`` objects, and remove
   each row from the "secondary" table which establishes this link.  Note that
   this relationship does not need to be bidirectional; SQLAlchemy is strictly
   looking at every :func:`.relationship` associated with the ``Child`` object
   being deleted.
-* A higher performing option here is to use ON DELETE CASCADE directives 
+* A higher performing option here is to use ON DELETE CASCADE directives
   with the foreign keys used by the database.   Assuming the database supports
-  this feature, the database itself can be made to automatically delete rows in the 
+  this feature, the database itself can be made to automatically delete rows in the
   "secondary" table as referencing rows in "child" are deleted.   SQLAlchemy
-  can be instructed to forego actively loading in the ``Child.parents`` 
+  can be instructed to forego actively loading in the ``Child.parents``
   collection in this case using the ``passive_deletes=True`` directive
   on :func:`.relationship`; see :ref:`passive_deletes` for more details,
   as well as the brief sketch below.
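
 A brief sketch of the third approach above, reusing the table and class
 names from the earlier many-to-many examples (``Parent`` is mapped to
 ``left`` as before)::

     association_table = Table('association', Base.metadata,
         Column('left_id', Integer,
                 ForeignKey('left.id', ondelete='CASCADE'),
                 primary_key=True),
         Column('right_id', Integer,
                 ForeignKey('right.id', ondelete='CASCADE'),
                 primary_key=True)
     )

     class Child(Base):
         __tablename__ = 'right'
         id = Column(Integer, primary_key=True)
         parents = relationship("Parent",
                         secondary=association_table,
                         passive_deletes=True,
                         backref="children")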
 Association Object
 ~~~~~~~~~~~~~~~~~~
 
-The association object pattern is a variant on many-to-many: it's 
+The association object pattern is a variant on many-to-many: it's
 used when your association table contains additional columns beyond those
 which are foreign keys to the left and right tables. Instead of using the
 ``secondary`` argument, you map a new class directly to the association table.
 The left side of the relationship references the association object via
 one-to-many, and the association class references the right side via
-many-to-one.  Below we illustrate an association table mapped to the 
+many-to-one.  Below we illustrate an association table mapped to the
 ``Association`` class which includes a column called ``extra_data``,
 which is a string value that is stored along with each association
 between ``Parent`` and ``Child``::
   advisable that the association-mapped table not be used
   as the ``secondary`` argument on a :func:`.relationship`
   elsewhere, unless that :func:`.relationship` contains
-  the option ``viewonly=True``.   SQLAlchemy otherwise 
-  may attempt to emit redundant INSERT and DELETE 
+  the option ``viewonly=True``.   SQLAlchemy otherwise
+  may attempt to emit redundant INSERT and DELETE
   statements on the same table, if similar state is detected
   on the related attribute as well as the associated
   object.
 -----------------------------
 
 The **adjacency list** pattern is a common relational pattern whereby a table
-contains a foreign key reference to itself. This is the most common 
+contains a foreign key reference to itself. This is the most common
 way to represent hierarchical data in flat tables.  Other methods
 include **nested sets**, sometimes called "modified preorder",
 as well as **materialized path**.  Despite the appeal that modified preorder
     6        1             child3
 
 The :func:`.relationship` configuration here works in the
-same way as a "normal" one-to-many relationship, with the 
+same way as a "normal" one-to-many relationship, with the
 exception that the "direction", i.e. whether the relationship
 is one-to-many or many-to-one, is assumed by default to
 be one-to-many.   To establish the relationship as many-to-one,
         id = Column(Integer, primary_key=True)
         parent_id = Column(Integer, ForeignKey('node.id'))
         data = Column(String(50))
-        children = relationship("Node", 
+        children = relationship("Node",
                     backref=backref('parent', remote_side=[id])
                 )
 
     # get all nodes named 'child2'
     session.query(Node).filter(Node.data=='child2')
 
-However extra care is needed when attempting to join along 
+However, extra care is needed when attempting to join along
 the foreign key from one level of the tree to the next.  In SQL,
 a join from a table to itself requires that at least one side of the
 expression be "aliased" so that it can be unambiguously referred to.
 
 Recall from :ref:`ormtutorial_aliases` in the ORM tutorial that the
-:class:`.orm.aliased` construct is normally used to provide an "alias" of 
+:class:`.orm.aliased` construct is normally used to provide an "alias" of
 an ORM entity.  Joining from ``Node`` to itself using this technique
 looks like:
 
                     join(nodealias, Node.parent).\
                     filter(nodealias.data=="child2").\
                     all()
-    SELECT node.id AS node_id, 
-            node.parent_id AS node_parent_id, 
+    SELECT node.id AS node_id,
+            node.parent_id AS node_parent_id,
             node.data AS node_data
     FROM node JOIN node AS node_1
-        ON node.parent_id = node_1.id 
-    WHERE node.data = ? 
+        ON node.parent_id = node_1.id
+    WHERE node.data = ?
         AND node_1.data = ?
     ['subchild1', 'child2']
 
-:meth:`.Query.join` also includes a feature known as ``aliased=True`` that 
+:meth:`.Query.join` also includes a feature known as ``aliased=True`` that
 can shorten the verbosity of self-referential joins, at the expense
 of query flexibility.  This feature
-performs a similar "aliasing" step to that above, without the need for an 
-explicit entity.   Calls to :meth:`.Query.filter` and similar subsequent to 
+performs a similar "aliasing" step to that above, without the need for an
+explicit entity.   Calls to :meth:`.Query.filter` and similar subsequent to
 the aliased join will **adapt** the ``Node`` entity to be that of the alias:
 
 .. sourcecode:: python+sql
             join(Node.parent, aliased=True).\
             filter(Node.data=='child2').\
             all()
-    SELECT node.id AS node_id, 
-            node.parent_id AS node_parent_id, 
+    SELECT node.id AS node_id,
+            node.parent_id AS node_parent_id,
             node.data AS node_data
-    FROM node 
+    FROM node
         JOIN node AS node_1 ON node_1.id = node.parent_id
     WHERE node.data = ? AND node_1.data = ?
     ['subchild1', 'child2']
 
 .. sourcecode:: python+sql
 
-    # get all nodes named 'subchild1' with a 
+    # get all nodes named 'subchild1' with a
     # parent named 'child2' and a grandparent 'root'
     {sql}session.query(Node).\
             filter(Node.data=='subchild1').\
             join(Node.parent, aliased=True, from_joinpoint=True).\
             filter(Node.data=='root').\
             all()
-    SELECT node.id AS node_id, 
-            node.parent_id AS node_parent_id, 
+    SELECT node.id AS node_id,
+            node.parent_id AS node_parent_id,
             node.data AS node_data
-    FROM node 
-        JOIN node AS node_1 ON node_1.id = node.parent_id 
+    FROM node
+        JOIN node AS node_1 ON node_1.id = node.parent_id
         JOIN node AS node_2 ON node_2.id = node_1.parent_id
-    WHERE node.data = ? 
-        AND node_1.data = ? 
+    WHERE node.data = ?
+        AND node_1.data = ?
         AND node_2.data = ?
     ['subchild1', 'child2', 'root']
 
-:meth:`.Query.reset_joinpoint` will also remove the "aliasing" from filtering 
+:meth:`.Query.reset_joinpoint` will also remove the "aliasing" from filtering
 calls::
 
     session.query(Node).\
                         join_depth=2)
 
     {sql}session.query(Node).all()
-    SELECT node_1.id AS node_1_id, 
-            node_1.parent_id AS node_1_parent_id, 
-            node_1.data AS node_1_data, 
-            node_2.id AS node_2_id, 
-            node_2.parent_id AS node_2_parent_id, 
-            node_2.data AS node_2_data, 
-            node.id AS node_id, 
-            node.parent_id AS node_parent_id, 
+    SELECT node_1.id AS node_1_id,
+            node_1.parent_id AS node_1_parent_id,
+            node_1.data AS node_1_data,
+            node_2.id AS node_2_id,
+            node_2.parent_id AS node_2_parent_id,
+            node_2.data AS node_2_data,
+            node.id AS node_id,
+            node.parent_id AS node_parent_id,
             node.data AS node_data
-    FROM node 
-        LEFT OUTER JOIN node AS node_2 
-            ON node.id = node_2.parent_id 
-        LEFT OUTER JOIN node AS node_1 
+    FROM node
+        LEFT OUTER JOIN node AS node_2
+            ON node.id = node_2.parent_id
+        LEFT OUTER JOIN node AS node_1
             ON node_2.id = node_1.parent_id
     []
 
 
         user = relationship("User", back_populates="addresses")
 
-Above, we add a ``.user`` relationship to ``Address`` explicitly.  On 
-both relationships, the ``back_populates`` directive tells each relationship 
+Above, we add a ``.user`` relationship to ``Address`` explicitly.  On
+both relationships, the ``back_populates`` directive tells each relationship
 about the other one, indicating that they should establish "bidirectional"
 behavior between each other.   The primary effect of this configuration
-is that the relationship adds event handlers to both attributes 
+is that the relationship adds event handlers to both attributes
 which have the behavior of "when an append or set event occurs here, set ourselves
 onto the incoming attribute using this particular attribute name".
 The behavior is illustrated as follows.   Start with a ``User`` and an ``Address``
 
 This behavior of course works in reverse for removal operations, as well
 as for equivalent operations on both sides.   For example,
-when ``.user`` is set again to ``None``, the ``Address`` object is removed 
+when ``.user`` is set again to ``None``, the ``Address`` object is removed
 from the reverse collection::
 
     >>> a1.user = None
     >>> u1.addresses
     []
 
-The manipulation of the ``.addresses`` collection and the ``.user`` attribute 
-occurs entirely in Python without any interaction with the SQL database.  
+The manipulation of the ``.addresses`` collection and the ``.user`` attribute
+occurs entirely in Python without any interaction with the SQL database.
 Without this behavior, the proper state would be apparent on both sides once the
 data has been flushed to the database, and later reloaded after a commit or
 expiration operation occurs.  The ``backref``/``back_populates`` behavior has the advantage
 ~~~~~~~~~~~~~~~~~~
 
 We've established that the ``backref`` keyword is merely a shortcut for building
-two individual :func:`.relationship` constructs that refer to each other.  Part of 
-the behavior of this shortcut is that certain configurational arguments applied to 
+two individual :func:`.relationship` constructs that refer to each other.  Part of
+the behavior of this shortcut is that certain configurational arguments applied to
 the :func:`.relationship`
 will also be applied to the other direction - namely those arguments that describe
 the relationship at a schema level, and are unlikely to be different in the reverse
 direction.  The usual case
 here is a many-to-many :func:`.relationship` that has a ``secondary`` argument,
-or a one-to-many or many-to-one which has a ``primaryjoin`` argument (the 
+or a one-to-many or many-to-one which has a ``primaryjoin`` argument (the
 ``primaryjoin`` argument is discussed in :ref:`relationship_primaryjoin`).  For example,
 if we limited the list of ``Address`` objects to those which start with "tony"::
 
         id = Column(Integer, primary_key=True)
         name = Column(String)
 
-        addresses = relationship("Address", 
+        addresses = relationship("Address",
                         primaryjoin="and_(User.id==Address.user_id, "
                             "Address.email.startswith('tony'))",
                         backref="user")
 
     >>> print User.addresses.property.primaryjoin
     "user".id = address.user_id AND address.email LIKE :email_1 || '%%'
-    >>> 
+    >>>
     >>> print Address.user.property.primaryjoin
     "user".id = address.user_id AND address.email LIKE :email_1 || '%%'
-    >>> 
+    >>>
 
 This reuse of arguments should pretty much do the "right thing" - it uses
 only arguments that are applicable, and in the case of a many-to-many
 relationship, will reverse the usage of ``primaryjoin`` and ``secondaryjoin``
-to correspond to the other direction (see the example in :ref:`self_referential_many_to_many` 
+to correspond to the other direction (see the example in :ref:`self_referential_many_to_many`
 for this).
 
 It's very often the case however that we'd like to specify arguments that
-are specific to just the side where we happened to place the "backref". 
+are specific to just the side where we happened to place the "backref".
 This includes :func:`.relationship` arguments like ``lazy``, ``remote_side``,
 ``cascade`` and ``cascade_backrefs``.   For this case we use the :func:`.backref`
 function in place of a string::
         id = Column(Integer, primary_key=True)
         name = Column(String)
 
-        addresses = relationship("Address", 
+        addresses = relationship("Address",
                         backref=backref("user", lazy="joined"))
 
 Where above, we placed a ``lazy="joined"`` directive only on the ``Address.user``
 An unusual case is that of the "one way backref".   This is where the "back-populating"
 behavior of the backref is only desirable in one direction. An example of this
 is a collection which contains a filtering ``primaryjoin`` condition.   We'd like to append
-items to this collection as needed, and have them populate the "parent" object on the 
+items to this collection as needed, and have them populate the "parent" object on the
 incoming object. However, we'd also like to have items that are not part of the collection,
-but still have the same "parent" association - these items should never be in the 
-collection.  
+but still have the same "parent" association - these items should never be in the
+collection.
 
 Taking our previous example, where we established a ``primaryjoin`` that limited the
 collection only to ``Address`` objects whose email address started with the word ``tony``,
 the transaction committed and their attributes expired for a re-load, the ``addresses``
 collection will hit the database on next access and no longer have this ``Address`` object
 present, due to the filtering condition.   But we can do away with this unwanted side
-of the "backref" behavior on the Python side by using two separate :func:`.relationship` constructs, 
+of the "backref" behavior on the Python side by using two separate :func:`.relationship` constructs,
 placing ``back_populates`` only on one side::
 
     from sqlalchemy import Integer, ForeignKey, String, Column
         __tablename__ = 'user'
         id = Column(Integer, primary_key=True)
         name = Column(String)
-        addresses = relationship("Address", 
+        addresses = relationship("Address",
                         primaryjoin="and_(User.id==Address.user_id, "
                             "Address.email.startswith('tony'))",
                         back_populates="user")
 Setting the primaryjoin and secondaryjoin
 -----------------------------------------
 
-A common scenario arises when we attempt to relate two 
+A common scenario arises when we attempt to relate two
 classes together, where there exist multiple ways to join the
 two tables.
 
 to load in an associated ``Address``, there is the choice of retrieving
 the ``Address`` referred to by the ``billing_address_id`` column or the one
 referred to by the ``shipping_address_id`` column.  The :func:`.relationship`,
-as it is, cannot determine its full configuration.   The examples at 
+as it is, cannot determine its full configuration.   The examples at
 :ref:`relationship_patterns` didn't have this issue, because in each of those examples
 there was only **one** way to refer to the related table.
 
-To resolve this issue, :func:`.relationship` accepts an argument named 
+To resolve this issue, :func:`.relationship` accepts an argument named
 ``primaryjoin`` which accepts a Python-based SQL expression, using the system described
 at :ref:`sqlexpression_toplevel`, that describes how the two tables should be joined
 together.  When using the declarative system, we often will specify this Python
         billing_address_id = Column(Integer, ForeignKey("address.id"))
         shipping_address_id = Column(Integer, ForeignKey("address.id"))
 
-        billing_address = relationship("Address", 
+        billing_address = relationship("Address",
                         primaryjoin="Address.id==Customer.billing_address_id")
-        shipping_address = relationship("Address", 
+        shipping_address = relationship("Address",
                         primaryjoin="Address.id==Customer.shipping_address_id")
 
 Above, loading the ``Customer.billing_address`` relationship from a ``Customer``
-object will use the value present in ``billing_address_id`` in order to 
+object will use the value present in ``billing_address_id`` in order to
 identify the row in ``Address`` to be loaded; similarly, ``shipping_address_id``
-is used for the ``shipping_address`` relationship.   The linkage of the two 
+is used for the ``shipping_address`` relationship.   The linkage of the two
 columns also plays a role during persistence; the newly generated primary key
-of a just-inserted ``Address`` object will be copied into the appropriate 
+of a just-inserted ``Address`` object will be copied into the appropriate
 foreign key column of an associated ``Customer`` object during a flush.
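
 For example, a sketch of this persistence flow, with ``Customer`` and
 ``Address`` mapped as above::

     customer = Customer(
                 billing_address=Address(),
                 shipping_address=Address())
     session.add(customer)
     session.commit()   # flush copies each new Address primary key into
                        # the corresponding Customer foreign key column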
 
 Specifying Alternate Join Conditions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The open-ended nature of ``primaryjoin`` also allows us to customize how 
-related items are loaded.   In the example below, using the ``User`` class 
-as well as an ``Address`` class which stores a street address,  we 
+The open-ended nature of ``primaryjoin`` also allows us to customize how
+related items are loaded.   In the example below, using the ``User`` class
+as well as an ``Address`` class which stores a street address,  we
 create a relationship ``boston_addresses`` which will only
 load those ``Address`` objects which specify a city of "Boston"::
 
         __tablename__ = 'user'
         id = Column(Integer, primary_key=True)
         name = Column(String)
-        addresses = relationship("Address", 
+        addresses = relationship("Address",
                         primaryjoin="and_(User.id==Address.user_id, "
                             "Address.city=='Boston')")
 
 ``Address.user_id`` columns to each other, as well as limiting rows in ``Address``
 to just ``city='Boston'``.   When using Declarative, rudimentary SQL functions like
 :func:`.and_` are automatically available in the evaluated namespace of a string
-:func:`.relationship` argument.    
+:func:`.relationship` argument.
 
 When using classical mappings, we have the advantage of the :class:`.Table` objects
 already being present when the mapping is defined, so that the SQL expression
 Note that the custom criteria we use in a ``primaryjoin`` is generally only significant
 when SQLAlchemy is rendering SQL in order to load or represent this relationship.
 That is, it's  used
-in the SQL statement that's emitted in order to perform a per-attribute lazy load, or when a join is 
+in the SQL statement that's emitted in order to perform a per-attribute lazy load, or when a join is
 constructed at query time, such as via :meth:`.Query.join`, or via the eager "joined" or "subquery"
 styles of loading.   When in-memory objects are being manipulated, we can place any ``Address`` object
 we'd like into the ``boston_addresses`` collection, regardless of what the value of the ``.city``
 attribute is.   The objects will remain present in the collection until the attribute is expired
-and re-loaded from the database where the criterion is applied.   When 
+and re-loaded from the database where the criterion is applied.   When
 a flush occurs, the objects inside of ``boston_addresses`` will be flushed unconditionally, assigning
 value of the primary key ``user.id`` column onto the foreign-key-holding ``address.user_id`` column
 for each row.  The ``city`` criteria has no effect here, as the flush process only cares about synchronizing primary
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Many to many relationships can be customized by one or both of ``primaryjoin``
-and ``secondaryjoin`` - the latter is significant for a relationship that 
-specifies a many-to-many reference using the ``secondary`` argument.    
+and ``secondaryjoin`` - the latter is significant for a relationship that
+specifies a many-to-many reference using the ``secondary`` argument.
 A common situation which involves the usage of ``primaryjoin`` and ``secondaryjoin``
 is when establishing a many-to-many relationship from a class to itself, as shown below::
 
                         )})
 
 
-Note that in both examples, the ``backref`` keyword specifies a ``left_nodes`` 
-backref - when :func:`.relationship` creates the second relationship in the reverse 
+Note that in both examples, the ``backref`` keyword specifies a ``left_nodes``
+backref - when :func:`.relationship` creates the second relationship in the reverse
 direction, it's smart enough to reverse the ``primaryjoin`` and ``secondaryjoin`` arguments.
 
 Specifying Foreign Keys
 
     class User(Base):
         __table__ = users_table
-        addresses = relationship(Address, 
+        addresses = relationship(Address,
                         primaryjoin=
                         users_table.c.user_id==addresses_table.c.user_id,
                         foreign_keys=[addresses_table.c.user_id])
 and DELETE in order to delete without violating foreign key constraints). The
 two use cases are:
 
-* A table contains a foreign key to itself, and a single row will 
+* A table contains a foreign key to itself, and a single row will
   have a foreign key value pointing to its own primary key.
-* Two tables each contain a foreign key referencing the other 
+* Two tables each contain a foreign key referencing the other
   table, with a row in each table referencing the other.
 
 For example::
 identifiers were populated manually (again essentially bypassing
 :func:`~sqlalchemy.orm.relationship`).
 
-To enable the usage of a supplementary UPDATE statement, 
+To enable the usage of a supplementary UPDATE statement,
 we use the ``post_update`` option
 of :func:`.relationship`.  This specifies that the linkage between the
 two rows should be created using an UPDATE statement after both rows
-have been INSERTED; it also causes the rows to be de-associated with 
+have been INSERTED; it also causes the rows to be de-associated with
 each other via UPDATE before a DELETE is emitted.  The flag should
-be placed on just *one* of the relationships, preferably the 
+be placed on just *one* of the relationships, preferably the
 many-to-one side.  Below we illustrate
 a complete example, including two :class:`.ForeignKey` constructs, one which
 specifies ``use_alter=True`` to help with emitting CREATE TABLE statements::
         __tablename__ = 'widget'
 
         widget_id = Column(Integer, primary_key=True)
-        favorite_entry_id = Column(Integer, 
-                                ForeignKey('entry.entry_id', 
-                                use_alter=True, 
+        favorite_entry_id = Column(Integer,
+                                ForeignKey('entry.entry_id',
+                                use_alter=True,
                                 name="fk_favorite_entry"))
         name = Column(String(50))
 
 
         __table_args__ = (
             ForeignKeyConstraint(
-                ["widget_id", "favorite_entry_id"], 
+                ["widget_id", "favorite_entry_id"],
                 ["entry.widget_id", "entry.entry_id"],
                 name="fk_favorite_entry", use_alter=True
             ),
 well. For databases which enforce referential integrity,
 it's required to use the database's ON UPDATE CASCADE
 functionality in order to propagate primary key changes
-to referenced foreign keys - the values cannot be out 
+to referenced foreign keys - the values cannot be out
 of sync at any moment.
 
 For databases that don't support this, such as SQLite and
-MySQL without their referential integrity options turned 
+MySQL without their referential integrity options turned
 on, the ``passive_updates`` flag can
 be set to ``False``, most preferably on a one-to-many or
 many-to-many :func:`.relationship`, which instructs
         __tablename__ = 'address'
 
         email = Column(String(50), primary_key=True)
-        username = Column(String(50), 
+        username = Column(String(50),
                     ForeignKey('user.username', onupdate="cascade")
                 )
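 
 The corresponding one-to-many side, with ``passive_updates`` switched
 off, might then be sketched as follows (the ``User`` class and its
 ``username`` primary key are assumed from the surrounding example)::
 
     class User(Base):
         __tablename__ = 'user'
 
         username = Column(String(50), primary_key=True)
         addresses = relationship("Address", passive_updates=False)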
 

examples/adjacency_list/adjacency_list.py

     parent_id = Column(Integer, ForeignKey(id))
     name = Column(String(50), nullable=False)
 
-    children = relationship("TreeNode", 
+    children = relationship("TreeNode",
 
                         # cascade deletions
                         cascade="all",
 
                         # many to one + adjacency list - remote_side
-                        # is required to reference the 'remote' 
+                        # is required to reference the 'remote'
                         # column in the join condition.
                         backref=backref("parent", remote_side=id),
 
         return "   " * _indent + repr(self) + \
                     "\n" + \
                     "".join([
-                        c.dump(_indent +1) 
+                        c.dump(_indent + 1)
                         for c in self.children.values()]
                     )
 
         "selecting tree on root, using eager loading to join four levels deep.")
     session.expunge_all()
     node = session.query(TreeNode).\
-                        options(joinedload_all("children", "children", 
+                        options(joinedload_all("children", "children",
                                                 "children", "children")).\
                         filter(TreeNode.name=="rootnode").\
                         first()

examples/large_collection/large_collection.py

 
 meta = MetaData()
 
-org_table = Table('organizations', meta, 
+org_table = Table('organizations', meta,
     Column('org_id', Integer, primary_key=True),
     Column('org_name', String(50), nullable=False, key='name'),
     mysql_engine='InnoDB')
         self.name = name
 
 mapper(Organization, org_table, properties = {
-    'members' : relationship(Member, 
+    'members' : relationship(Member,
         # Organization.members will be a Query object - no loading
         # of the entire collection occurs unless requested
-        lazy="dynamic", 
+        lazy="dynamic",
 
-        # Member objects "belong" to their parent, are deleted when 
+        # Member objects "belong" to their parent and are deleted when
         # removed from the collection
         cascade="all, delete-orphan",
 
         # "delete, delete-orphan" cascade does not load in objects on delete,
         # allows ON DELETE CASCADE to handle it.
-        # this only works with a database that supports ON DELETE CASCADE - 
+        # this only works with a database that supports ON DELETE CASCADE -
         # *not* sqlite or MySQL with MyISAM
-        passive_deletes=True, 
+        passive_deletes=True,
     )
 })
 
     print "-------------------------\nflush one - save org + 3 members\n"
     sess.commit()
 
-    # the 'members' collection is a Query.  it issues 
+    # the 'members' collection is a Query.  it issues
     # SQL as needed to load subsets of the collection.
     print "-------------------------\nload subset of members\n"
     members = org.members.filter(member_table.c.name.like('%member t%')).all()
     print "-------------------------\nflush two - save 3 more members\n"
     sess.commit()
 
-    # delete the object.   Using ON DELETE CASCADE 
-    # SQL is only emitted for the head row - the Member rows 
+    # delete the object.   Using ON DELETE CASCADE
+    # SQL is only emitted for the head row - the Member rows
     # disappear automatically without the need for additional SQL.
     sess.delete(org)
     print "-------------------------\nflush three - delete org, delete members in one statement\n"

examples/postgis/__init__.py

-"""A naive example illustrating techniques to help 
+"""A naive example illustrating techniques to help
 embed PostGIS functionality.
 
 This example was originally developed in the hopes that it would be extrapolated into a comprehensive PostGIS integration layer.  We are pleased to announce that this has come to fruition as `GeoAlchemy <http://www.geoalchemy.org/>`_.
 
 The example illustrates:
 
-* a DDL extension which allows CREATE/DROP to work in 
+* a DDL extension which allows CREATE/DROP to work in
   conjunction with AddGeometryColumn/DropGeometryColumn
 
 * a Geometry type, as well as a few subtypes, which
 * a standalone operator example.
 
 The implementation is limited to only public, well known
-and simple to use extension points. 
+and simple to use extension points.
 
 E.g.::
 

examples/sharding/attribute_shard.py

 import datetime
 
 # step 2. databases.
-# db1 is used for id generation. The "pool_threadlocal" 
+# db1 is used for id generation. The "pool_threadlocal" flag
 # causes the id_generator() to use the same connection as that
 # of an ongoing transaction within db1.
 echo = True
 
 # we need a way to create identifiers which are unique across all
 # databases.  one easy way would be to just use a composite primary key, where one
-# value is the shard id.  but here, we'll show something more "generic", an 
+# value is the shard id.  but here, we'll show something more "generic", an
 # id generation function.  we'll use a simplistic "id table" stored in database
 # #1.  Any other method will do just as well; UUID, hilo, application-specific, etc.
 
 # table setup.  we'll store a lead table of continents/cities,
 # and a secondary table storing locations.
 # a particular row will be placed in the database whose shard id corresponds to the
-# 'continent'.  in this setup, secondary rows in 'weather_reports' will 
+# 'continent'.  in this setup, secondary rows in 'weather_reports' will
 # be placed in the same DB as that of the parent, but this can be changed
 # if you're willing to write more complex sharding functions.
 
 
 # step 5. define sharding functions.
 
-# we'll use a straight mapping of a particular set of "country" 
+# we'll use a straight mapping of a particular set of "country"
 # attributes to shard id.
 shard_lookup = {
     'North America':'north_america',
     """shard chooser.
 
     looks at the given instance and returns a shard id.
-    note that we need to define conditions for 
+    Note that we need to define conditions for
     the WeatherLocation class, as well as our secondary Report class which will
     point back to its WeatherLocation via its 'location' attribute.
 
 
     given a primary key, returns a list of shards
     to search.  here, we don't have any particular information from a
-    pk so we just return all shard ids. often, youd want to do some 
-    kind of round-robin strategy here so that requests are evenly 
+    pk so we just return all shard ids. often, you'd want to do some
+    kind of round-robin strategy here so that requests are evenly
     distributed among DBs.
 
     """
         # "shares_lineage()" returns True if both columns refer to the same
         # statement column, adjusting for any annotations present.
         # (an annotation is an internal clone of a Column object
-        # and occur when using ORM-mapped attributes like 
-        # "WeatherLocation.continent"). A simpler comparison, though less accurate, 
+        # and occurs when using ORM-mapped attributes like
+        # "WeatherLocation.continent"). A simpler comparison, though less accurate,
         # would be "column.key == 'continent'".
         if column.shares_lineage(weather_locations.c.continent):
             if operator == operators.eq:
     """Search an orm.Query object for binary expressions.
 
     Returns expressions which match a Column against one or more
-    literal values as a list of tuples of the form 
+    literal values as a list of tuples of the form
     (column, operator, values).   "values" is a single value
     or tuple of values depending on the operator.
 
     comparisons = []
 
     def visit_bindparam(bind):
-        # visit a bind parameter.   
+        # visit a bind parameter.
 
         # check in _params for it first
         if bind.key in query._params:
             value = query._params[bind.key]
         elif bind.callable:
-            # some ORM functions (lazy loading) 
-            # place the bind's value as a 
-            # callable for deferred evaulation. 
+            # some ORM functions (lazy loading)
+            # place the bind's value as a
+            # callable for deferred evaluation.
             value = bind.callable()
         else:
             # just use .value
                 binary.operator == operators.in_op and \
                 hasattr(binary.right, 'clauses'):
             comparisons.append(
-                (binary.left, binary.operator, 
+                (binary.left, binary.operator,
                     tuple(binds[bind] for bind in binary.right.clauses)
                 )
             )
 
 # further configure create_session to use these functions
 create_session.configure(
-                    shard_chooser=shard_chooser, 
-                    id_chooser=id_chooser, 
+                    shard_chooser=shard_chooser,
+                    id_chooser=id_chooser,
                     query_chooser=query_chooser
                     )
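 
 # a usage sketch, assuming the WeatherLocation class defined earlier in
 # this example: a flush consults shard_chooser, while queries go
 # through id_chooser/query_chooser:
 #
 #     sess = create_session()
 #     sess.add(WeatherLocation('Asia', 'Tokyo'))
 #     sess.commit()
 #     tokyo = sess.query(WeatherLocation).filter_by(city="Tokyo").one()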
 

examples/versioning/__init__.py

 be run via nose::
 
     cd examples/versioning
-    nosetests -v 
+    nosetests -v
 
 A fragment of example usage, using declarative::
 

lib/sqlalchemy/dialects/firebird/base.py

     __visit_name__ = 'VARCHAR'
 
     def __init__(self, length = None, **kwargs):
-        super(VARCHAR, self).__init__(length=length, **kwargs) 
+        super(VARCHAR, self).__init__(length=length, **kwargs)
 
 class CHAR(_StringType, sqltypes.CHAR):
     """Firebird CHAR type"""
     }
 
 
-# TODO: date conversion types (should be implemented as _FBDateTime, 
+# TODO: date conversion types (should be implemented as _FBDateTime,
 # _FBDate, etc. as bind/result functionality is required)
 
 class FBTypeCompiler(compiler.GenericTypeCompiler):
         """Get the next value from the sequence using ``gen_id()``."""
 
         return self._execute_scalar(
-                "SELECT gen_id(%s, 1) FROM rdb$database" % 
+                "SELECT gen_id(%s, 1) FROM rdb$database" %
                 self.dialect.identifier_preparer.format_sequence(seq),
                 type_
                 )
             return name
 
     def has_table(self, connection, table_name, schema=None):
-        """Return ``True`` if the given table exists, ignoring 
+        """Return ``True`` if the given table exists, ignoring
         the `schema`."""
 
         tblqry = """
         return {'constrained_columns':pkfields, 'name':None}
 
     @reflection.cache
-    def get_column_sequence(self, connection, 
-                                table_name, column_name, 
+    def get_column_sequence(self, connection,
+                                table_name, column_name,
                                 schema=None, **kw):
         tablename = self.denormalize_name(table_name)
         colname = self.denormalize_name(column_name)
                             COALESCE(cs.rdb$bytes_per_character,1) AS flen,
                         f.rdb$field_precision AS fprec,
                         f.rdb$field_scale AS fscale,
-                        COALESCE(r.rdb$default_source, 
+                        COALESCE(r.rdb$default_source,
                                 f.rdb$default_source) AS fdefault
         FROM rdb$relation_fields r
              JOIN rdb$fields f ON r.rdb$field_source=f.rdb$field_name
                 coltype = sqltypes.NULLTYPE
             elif colspec == 'INT64':
                 coltype = coltype(
-                                precision=row['fprec'], 
+                                precision=row['fprec'],
                                 scale=row['fscale'] * -1)
             elif colspec in ('VARYING', 'CSTRING'):
                 coltype = coltype(row['flen'])
             if row['fdefault'] is not None:
                 # the value comes down as "DEFAULT 'value'": there may be
                 # more than one whitespace around the "DEFAULT" keyword
-                # and it may also be lower case 
+                # and it may also be lower case
                 # (see also http://tracker.firebirdsql.org/browse/CORE-356)
                 defexpr = row['fdefault'].lstrip()
                 assert defexpr[:8].rstrip().upper() == \

lib/sqlalchemy/dialects/firebird/kinterbasdb.py

   SQLAlchemy uses 200 with Unicode, datetime and decimal support (see
   details__).
 
-* concurrency_level - set the backend policy with regards to threading 
+* concurrency_level - set the backend policy with regard to threading
   issues: by default SQLAlchemy uses policy 1 (see details__).
 
-* enable_rowcount - True by default, setting this to False disables 
-  the usage of "cursor.rowcount" with the 
+* enable_rowcount - True by default, setting this to False disables
+  the usage of "cursor.rowcount" with the
   Kinterbasdb dialect, which SQLAlchemy ordinarily calls upon automatically
-  after any UPDATE or DELETE statement.   When disabled, SQLAlchemy's 
-  ResultProxy will return -1 for result.rowcount.   The rationale here is 
-  that Kinterbasdb requires a second round trip to the database when 
-  .rowcount is called -  since SQLA's resultproxy automatically closes 
-  the cursor after a non-result-returning statement, rowcount must be 
+  after any UPDATE or DELETE statement.   When disabled, SQLAlchemy's
+  ResultProxy will return -1 for result.rowcount.   The rationale here is
+  that Kinterbasdb requires a second round trip to the database when
+  .rowcount is called -  since SQLA's resultproxy automatically closes
+  the cursor after a non-result-returning statement, rowcount must be
   called, if at all, before the result object is returned.   Additionally,
   cursor.rowcount may not return correct results with older versions
-  of Firebird, and setting this flag to False will also cause the 
+  of Firebird, and setting this flag to False will also cause the
   SQLAlchemy ORM to ignore its usage. The behavior can also be controlled on a
   per-execution basis using the `enable_rowcount` option with
   :meth:`execution_options()`::
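 
       # a sketch of the per-execution form; engine and stmt are
       # assumed here
       conn = engine.connect().execution_options(enable_rowcount=False)
       r = conn.execute(stmt)
       print r.rowcount   # -1, since rowcount is disabled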
 class FBExecutionContext_kinterbasdb(FBExecutionContext):
     @property
     def rowcount(self):
-        if self.execution_options.get('enable_rowcount', 
+        if self.execution_options.get('enable_rowcount',
                                         self.dialect.enable_rowcount):
             return self.cursor.rowcount
         else:
         # that for backward compatibility reasons returns a string like
         #   LI-V6.3.3.12981 Firebird 2.0
         # where the first version is a fake one resembling the old
-        # Interbase signature. 
+        # Interbase signature.
 
         fbconn = connection.connection
         version = fbconn.server_version
             msg = str(e)
             return ('Unable to complete network request to host' in msg or
                     'Invalid connection state' in msg or
-                    'Invalid cursor state' in msg or 
+                    'Invalid cursor state' in msg or
                     'connection shutdown' in msg)
         else:
             return False

lib/sqlalchemy/dialects/mssql/base.py

     SELECT TOP n
 
 If using SQL Server 2005 or above, LIMIT with OFFSET
-support is available through the ``ROW_NUMBER OVER`` construct. 
+support is available through the ``ROW_NUMBER OVER`` construct.
 For versions below 2005, LIMIT with OFFSET usage will fail.
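 
 For example, a minimal sketch (``mytable`` is assumed) which compiles
 to the ``ROW_NUMBER OVER`` form on 2005 and above; note that the
 dialect requires an ORDER BY whenever an OFFSET is present::
 
     stmt = select([mytable]).\
                 order_by(mytable.c.id).\
                 limit(10).offset(20)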
 
 Nullability
 
 SQLAlchemy by default uses OUTPUT INSERTED to get at newly
 generated primary key values via IDENTITY columns or other
-server side defaults.   MS-SQL does not 
+server side defaults.   MS-SQL does not
 allow the usage of OUTPUT INSERTED on tables that have triggers.
 To disable the usage of OUTPUT INSERTED on a per-table basis,
 specify ``implicit_returning=False`` for each :class:`.Table`
 which has triggers::
 
-    Table('mytable', metadata, 
-        Column('id', Integer, primary_key=True), 
+    Table('mytable', metadata,
+        Column('id', Integer, primary_key=True),
         # ...,
         implicit_returning=False
     )
 Enabling Snapshot Isolation
 ---------------------------
 
-Not necessarily specific to SQLAlchemy, SQL Server has a default transaction 
+Although not specific to SQLAlchemy, SQL Server has a default transaction
 isolation mode that locks entire tables, and causes even mildly concurrent
 applications to have long held locks and frequent deadlocks.
-Enabling snapshot isolation for the database as a whole is recommended 
-for modern levels of concurrency support.  This is accomplished via the 
+Enabling snapshot isolation for the database as a whole is recommended
+for modern levels of concurrency support.  This is accomplished via the
 following ALTER DATABASE commands executed at the SQL prompt::
 
     ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
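     -- a companion command typically issued along with the above
     ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON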
                 return value.date()
             elif isinstance(value, basestring):
                 return datetime.date(*[
-                        int(x or 0) 
+                        int(x or 0)
                         for x in self._reg.match(value).groups()
                     ])
             else:
                 return value.time()
             elif isinstance(value, basestring):
                 return datetime.time(*[
-                        int(x or 0) 
+                        int(x or 0)
                         for x in self._reg.match(value).groups()])
             else:
                 return value
         return self._extend("TEXT", type_)
 
     def visit_VARCHAR(self, type_):
-        return self._extend("VARCHAR", type_, 
+        return self._extend("VARCHAR", type_,
                     length = type_.length or 'max')
 
     def visit_CHAR(self, type_):
         return self._extend("NCHAR", type_)
 
     def visit_NVARCHAR(self, type_):
-        return self._extend("NVARCHAR", type_, 
+        return self._extend("NVARCHAR", type_,
                     length = type_.length or 'max')
 
     def visit_date(self, type_):
 
     def visit_VARBINARY(self, type_):
         return self._extend(
-                        "VARBINARY", 
-                        type_, 
+                        "VARBINARY",
+                        type_,
                         length=type_.length or 'max')
 
     def visit_boolean(self, type_):
                                         not self.executemany
 
             if self._enable_identity_insert:
-                self.root_connection._cursor_execute(self.cursor, 
-                    "SET IDENTITY_INSERT %s ON" % 
+                self.root_connection._cursor_execute(self.cursor,
+                    "SET IDENTITY_INSERT %s ON" %
                     self.dialect.identifier_preparer.format_table(tbl),
                     ())
 
         conn = self.root_connection
         if self._select_lastrowid:
             if self.dialect.use_scope_identity:
-                conn._cursor_execute(self.cursor, 
+                conn._cursor_execute(self.cursor,
                     "SELECT scope_identity() AS lastrowid", ())
             else:
-                conn._cursor_execute(self.cursor, 
+                conn._cursor_execute(self.cursor,
                     "SELECT @@identity AS lastrowid", ())
             # fetchall() ensures the cursor is consumed without closing it
             row = self.cursor.fetchall()[0]
             self._result_proxy = base.FullyBufferedResultProxy(self)
 
         if self._enable_identity_insert:
-            conn._cursor_execute(self.cursor, 
+            conn._cursor_execute(self.cursor,
                         "SET IDENTITY_INSERT %s OFF" %
                             self.dialect.identifier_preparer.
                                 format_table(self.compiled.statement.table),
         if self._enable_identity_insert:
             try:
                 self.cursor.execute(
-                        "SET IDENTITY_INSERT %s OFF" % 
+                        "SET IDENTITY_INSERT %s OFF" %
                             self.dialect.identifier_preparer.\
                             format_table(self.compiled.statement.table)
                         )
 
     def visit_concat_op(self, binary, **kw):
         return "%s + %s" % \
-                (self.process(binary.left, **kw), 
+                (self.process(binary.left, **kw),
                 self.process(binary.right, **kw))
 
     def visit_match_op(self, binary, **kw):
         return "CONTAINS (%s, %s)" % (
-                                        self.process(binary.left, **kw), 
+                                        self.process(binary.left, **kw),
                                         self.process(binary.right, **kw))
 
     def get_select_precolumns(self, select):
         return "SAVE TRANSACTION %s" % self.preparer.format_savepoint(savepoint_stmt)
 
     def visit_rollback_to_savepoint(self, savepoint_stmt):
-        return ("ROLLBACK TRANSACTION %s" 
+        return ("ROLLBACK TRANSACTION %s"
                 % self.preparer.format_savepoint(savepoint_stmt))
 
     def visit_column(self, column, result_map=None, **kwargs):
                                         t, column)
 
                 if result_map is not None:
-                    result_map[column.name 
-                                if self.dialect.case_sensitive 
+                    result_map[column.name
+                                if self.dialect.case_sensitive
                                 else column.name.lower()] = \
-                                    (column.name, (column, ), 
+                                    (column.name, (column, ),
                                                     column.type)
 
                 return super(MSSQLCompiler, self).\
-                                visit_column(converted, 
+                                visit_column(converted,
                                             result_map=None, **kwargs)
 
-        return super(MSSQLCompiler, self).visit_column(column, 
-                                                       result_map=result_map, 
+        return super(MSSQLCompiler, self).visit_column(column,
+                                                       result_map=result_map,
                                                        **kwargs)
 
     def visit_binary(self, binary, **kwargs):
 
         """
         if (
-            isinstance(binary.left, expression.BindParameter) 
+            isinstance(binary.left, expression.BindParameter)
             and binary.operator == operator.eq
             and not isinstance(binary.right, expression.BindParameter)
             ):
             return self.process(
-                                expression.BinaryExpression(binary.right, 
-                                                             binary.left, 
-                                                             binary.operator), 
+                                expression.BinaryExpression(binary.right,
+                                                             binary.left,
+                                                             binary.operator),
                                 **kwargs)
         return super(MSSQLCompiler, self).visit_binary(binary, **kwargs)
 
 
         columns = [
             self.process(
-                col_label(c), 
-                within_columns_clause=True, 
+                col_label(c),
+                within_columns_clause=True,
                 result_map=self.result_map
-            ) 
+            )
             for c in expression._select_iterables(returning_cols)
         ]
         return 'OUTPUT ' + ', '.join(columns)
                             label_select_column(select, column, asfrom)
 
     def for_update_clause(self, select):
-        # "FOR UPDATE" is only allowed on "DECLARE CURSOR" which 
+        # "FOR UPDATE" is only allowed on "DECLARE CURSOR" which
         # SQLAlchemy doesn't use
         return ''
 
                                 from_hints,
                                 **kw):
         """Render the UPDATE..FROM clause specific to MSSQL.
-        
+
         In MSSQL, if the UPDATE statement involves an alias of the table to
         be updated, then the table itself must be added to the FROM list as
         well. Otherwise, it is optional. Here, we add it regardless.
-        
+
         """
         return "FROM " + ', '.join(
                     t._compiler_dispatch(self, asfrom=True,
     def visit_in_op(self, binary, **kw):
         kw['literal_binds'] = True
         return "%s IN %s" % (
-                                self.process(binary.left, **kw), 
+                                self.process(binary.left, **kw),
                                 self.process(binary.right, **kw)
             )
 
     def visit_notin_op(self, binary, **kw):
         kw['literal_binds'] = True
         return "%s NOT IN %s" % (
-                                self.process(binary.left, **kw), 
+                                self.process(binary.left, **kw),
                                 self.process(binary.right, **kw)
             )
 
 
 class MSDDLCompiler(compiler.DDLCompiler):
     def get_column_specification(self, column, **kwargs):
-        colspec = (self.preparer.format_column(column) + " " 
+        colspec = (self.preparer.format_column(column) + " "
                    + self.dialect.type_compiler.process(column.type))
 
         if column.nullable is not None:
 
         if column.table is None:
             raise exc.CompileError(
-                            "mssql requires Table-bound columns " 
+                            "mssql requires Table-bound columns "
                             "in order to generate DDL")
 
         seq_col = column.table._autoincrement_column
     reserved_words = RESERVED_WORDS
 
     def __init__(self, dialect):
-        super(MSIdentifierPreparer, self).__init__(dialect, initial_quote='[', 
+        super(MSIdentifierPreparer, self).__init__(dialect, initial_quote='[',
                                                    final_quote=']')
 
     def _escape_identifier(self, value):
         super(MSDialect, self).initialize(connection)
         if self.server_version_info[0] not in range(8, 17):
             # FreeTDS with version 4.2 seems to report here
-            # a number like "95.10.255".  Don't know what 
+            # a number like "95.10.255".  Don't know what
             # that is.  So emit warning.
             util.warn(
                 "Unrecognized server version info '%s'.   Version specific "
                 "join sys.schemas as sch on sch.schema_id=tab.schema_id "
                 "where tab.name = :tabname "
                 "and sch.name=:schname "
-                "and ind.is_primary_key=0", 
+                "and ind.is_primary_key=0",
                 bindparams=[
-                    sql.bindparam('tabname', tablename, 
+                    sql.bindparam('tabname', tablename,
                                     sqltypes.String(convert_unicode=True)),
-                    sql.bindparam('schname', current_schema, 
+                    sql.bindparam('schname', current_schema,
                                     sqltypes.String(convert_unicode=True))
                 ],
                 typemap = {
                 "where tab.name=:tabname "
                 "and sch.name=:schname",
                         bindparams=[
-                            sql.bindparam('tabname', tablename, 
+                            sql.bindparam('tabname', tablename,
                                     sqltypes.String(convert_unicode=True)),
-                            sql.bindparam('schname', current_schema, 
+                            sql.bindparam('schname', current_schema,
                                     sqltypes.String(convert_unicode=True))
                         ],
                         typemap = {