Commits

Mike Bayer committed 790b6a4

roughly the finished product.

  • Parent commits bb1fb0f

Files changed (25)

doc/build/core/connections.rst

 .. index::
    single: thread safety; connections
 
-Connection facts:
-
-* the Connection object is **not thread-safe**. While a Connection can be
-  shared among threads using properly synchronized access, this is also not
-  recommended as many DBAPIs have issues with, if not outright disallow,
-  sharing of connection state between threads.
-* The Connection object represents a single dbapi connection checked out from
-  the connection pool. In this state, the connection pool has no affect upon
-  the connection, including its expiration or timeout state. For the
-  connection pool to properly manage connections, **connections should be
-  returned to the connection pool (i.e. ``connection.close()``) whenever the
-  connection is not in use**. If your application has a need for management of
-  multiple connections or is otherwise long running (this includes all web
-  applications, threaded or not), don't hold a single connection open at the
-  module level.
-
 Connection API
 ===============
 
 .. autoclass:: Connection
+   :show-inheritance:
    :members:
-   :undoc-members:
 
 .. autoclass:: Connectable
+   :show-inheritance:
    :members:
 
 Engine API
 ===========
 
 .. autoclass:: Engine
+   :show-inheritance:
    :members:
 
 Result Object API
 .. autoclass:: sqlalchemy.engine.base.RowProxy
     :members:
 
-Using Connection-level Transactions
-===================================
+Using Transactions
+==================
+
+.. note:: This section describes how to use transactions when working directly 
+  with :class:`.Engine` and :class:`.Connection` objects. When using the
+  SQLAlchemy ORM, the public API for transaction control is via the
+  :class:`.Session` object, which makes usage of the :class:`.Transaction`
+  object internally. See :ref:`unitofwork_transaction` for further
+  information.
 
 The :class:`~sqlalchemy.engine.base.Connection` object provides a ``begin()``
 method which returns a :class:`~sqlalchemy.engine.base.Transaction` object.
 available, but will automatically participate in an enclosing transaction if
 one exists.
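
The pattern described above can be sketched as follows; this is a hedged, illustrative example only (the in-memory SQLite URL and the ``users`` table are assumptions, not part of the documented text):

```python
# Sketch of connection-level transaction usage: begin() returns a
# Transaction, which is then committed or rolled back explicitly.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")   # illustrative in-memory database
conn = engine.connect()

trans = conn.begin()                  # returns a Transaction object
try:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO users (name) VALUES ('ed')"))
    trans.commit()                    # make the work permanent
except:
    trans.rollback()                  # undo everything since begin()
    raise

count = conn.execute(text("SELECT count(*) FROM users")).scalar()
conn.close()                          # return the connection to the pool
```

A nested ``conn.begin()`` issued inside the ``try`` block would, as noted above, simply participate in the enclosing transaction rather than start a new one.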
 
-Note that SQLAlchemy's Object Relational Mapper also provides a way to control
-transaction scope at a higher level; this is described in
-:ref:`unitofwork_transaction`.
-
 .. index::
    single: thread safety; transactions
 
-Transaction Facts:
-
-* the Transaction object, just like its parent Connection, is **not thread-safe**.
-
 .. autoclass:: Transaction
     :members:
 

doc/build/core/expression_api.rst

 
 .. module:: sqlalchemy.sql.expression
 
+This section presents the API reference for the SQL Expression Language.  For a full introduction to its usage,
+see :ref:`sqlexpression_toplevel`.
+
 Functions
 ---------
 

doc/build/core/index.rst

 ===============
 
 .. toctree::
-    :maxdepth: 1
+    :maxdepth: 2
     
     tutorial
     expression_api
     types
     interfaces
     compiler
+    serializer
     
     

doc/build/core/interfaces.rst

-.. currentmodule:: sqlalchemy.interfaces
-
 .. _interfaces_core_toplevel:
 
+Core Event Interfaces
+======================
 
-Core Event Interfaces
-====================
+.. module:: sqlalchemy.interfaces
 
 This section describes the various categories of events which can be intercepted
 in SQLAlchemy core, including execution and connection pool events.

doc/build/core/schema.rst

 * Your application has multiple schemas that correspond to different engines.
   Using one :class:`~sqlalchemy.schema.MetaData` for each schema, bound to
   each engine, provides a decent place to delineate between the schemas. The
-  ORM will also integrate with this approach, where the :class:`Session` will
+  ORM will also integrate with this approach, where the :class:`.Session` will
   naturally use the engine that is bound to each table via its metadata
-  (provided the :class:`Session` itself has no ``bind`` configured.).
+  (provided the :class:`.Session` itself has no ``bind`` configured).
 
 Alternatively, the ``bind`` attribute of :class:`~sqlalchemy.schema.MetaData`
 is *confusing* if:
   :class:`~sqlalchemy.schema.MetaData` object is *not* appropriate for
   per-request switching like this, although a
   :class:`~sqlalchemy.schema.ThreadLocalMetaData` object is.
-* You are using the ORM :class:`Session` to handle which class/table is bound
-  to which engine, or you are using the :class:`Session` to manage switching
+* You are using the ORM :class:`.Session` to handle which class/table is bound
+  to which engine, or you are using the :class:`.Session` to manage switching
   between engines. It's a good idea to keep the "binding of tables to engines"
   in one place - either using :class:`~sqlalchemy.schema.MetaData` only (the
-  :class:`Session` can of course be present, it just has no ``bind``
-  configured), or using :class:`Session` only (the ``bind`` attribute of
+  :class:`.Session` can of course be present, it just has no ``bind``
+  configured), or using :class:`.Session` only (the ``bind`` attribute of
   :class:`~sqlalchemy.schema.MetaData` is left empty).
 
 Specifying the Schema Name
 SQL Expressions
 ---------------
 
-The "default" and "onupdate" keywords may also be passed SQL expressions, including select statements or direct function calls::
+The "default" and "onupdate" keywords may also be passed SQL expressions,
+including select statements or direct function calls::
 
     t = Table("mytable", meta,
         Column('id', Integer, primary_key=True),
 is true:
 
 * the column is a primary key column
-
 * the database dialect does not support a usable ``cursor.lastrowid`` accessor
-(or equivalent); this currently includes PostgreSQL, Oracle, and Firebird, as
-well as some MySQL dialects.
-
+  (or equivalent); this currently includes PostgreSQL, Oracle, and Firebird, as
+  well as some MySQL dialects.
 * the dialect does not support the "RETURNING" clause or similar, or the
-``implicit_returning`` flag is set to ``False`` for the dialect. Dialects
-which support RETURNING currently include Postgresql, Oracle, Firebird, and
-MS-SQL.
-
+  ``implicit_returning`` flag is set to ``False`` for the dialect. Dialects
+  which support RETURNING currently include Postgresql, Oracle, Firebird, and
+  MS-SQL.
 * the statement is a single execution, i.e. only supplies one set of
-parameters and doesn't use "executemany" behavior
-
+  parameters and doesn't use "executemany" behavior
 * the ``inline=True`` flag is not set on the
-:class:`~sqlalchemy.sql.expression.Insert()` or
-:class:`~sqlalchemy.sql.expression.Update()` construct, and the statement has
-not defined an explicit `returning()` clause.
+  :class:`~sqlalchemy.sql.expression.Insert()` or
+  :class:`~sqlalchemy.sql.expression.Update()` construct, and the statement has
+  not defined an explicit `returning()` clause.
 
 Whether or not the default generation clause "pre-executes" is not something
 that normally needs to be considered, unless it is being addressed for
 .. autoclass:: Constraint
     :show-inheritance:
 
+.. autoclass:: ColumnCollectionConstraint
+    :show-inheritance:
+    
 .. autoclass:: PrimaryKeyConstraint
     :show-inheritance:
 
-
 Indexes
 -------
 
 Controlling DDL Sequences
 -------------------------
 
-The ``sqlalchemy.schema`` package contains SQL expression constructs that provide DDL expressions.   For example, to produce a ``CREATE TABLE`` statement:
+The ``sqlalchemy.schema`` package contains SQL expression constructs that
+provide DDL expressions. For example, to produce a ``CREATE TABLE`` statement:
 
 .. sourcecode:: python+sql
 
         col6 INTEGER
     ){stop}
 
-Above, the :class:`~sqlalchemy.schema.CreateTable` construct works like any other expression construct (such as ``select()``, ``table.insert()``, etc.).  A full reference of available constructs is in :ref:`schema_api_ddl`.
+Above, the :class:`~sqlalchemy.schema.CreateTable` construct works like any
+other expression construct (such as ``select()``, ``table.insert()``, etc.). A
+full reference of available constructs is in :ref:`schema_api_ddl`.
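
To illustrate the point that :class:`~sqlalchemy.schema.CreateTable` behaves like any other expression construct, a minimal sketch (the ``users`` table is an assumption for illustration): the construct can be compiled to a string without any engine at all:

```python
# CreateTable is an expression construct; str() compiles it to DDL
# using the default dialect, with no database connection required.
from sqlalchemy import MetaData, Table, Column, Integer, String
from sqlalchemy.schema import CreateTable

meta = MetaData()
users = Table("users", meta,
    Column("id", Integer, primary_key=True),
    Column("name", String(40)),
)
ddl = str(CreateTable(users))   # renders the CREATE TABLE statement
```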
 
-The DDL constructs all extend a common base class which provides the capability to be associated with an individual :class:`~sqlalchemy.schema.Table` or :class:`~sqlalchemy.schema.MetaData` object, to be invoked upon create/drop events.   Consider the example of a table which contains a CHECK constraint:
+The DDL constructs all extend a common base class which provides the
+capability to be associated with an individual
+:class:`~sqlalchemy.schema.Table` or :class:`~sqlalchemy.schema.MetaData`
+object, to be invoked upon create/drop events. Consider the example of a table
+which contains a CHECK constraint:
 
 .. sourcecode:: python+sql
 
         CONSTRAINT cst_user_name_length  CHECK (length(user_name) >= 8)
     ){stop}
 
-The above table contains a column "user_name" which is subject to a CHECK constraint that validates that the length of the string is at least eight characters.   When a ``create()`` is issued for this table, DDL for the :class:`~sqlalchemy.schema.CheckConstraint` will also be issued inline within the table definition.
+The above table contains a column "user_name" which is subject to a CHECK
+constraint that validates that the length of the string is at least eight
+characters. When a ``create()`` is issued for this table, DDL for the
+:class:`~sqlalchemy.schema.CheckConstraint` will also be issued inline within
+the table definition.
 
-The :class:`~sqlalchemy.schema.CheckConstraint` construct can also be constructed externally and associated with the :class:`~sqlalchemy.schema.Table` afterwards::
+The :class:`~sqlalchemy.schema.CheckConstraint` construct can also be
+constructed externally and associated with the
+:class:`~sqlalchemy.schema.Table` afterwards::
 
     constraint = CheckConstraint('length(user_name) >= 8', name="cst_user_name_length")
     users.append_constraint(constraint)
 
-So far, the effect is the same.  However, if we create DDL elements corresponding to the creation and removal of this constraint, and associate them with the :class:`~sqlalchemy.schema.Table` as events, these new events will take over the job of issuing DDL for the constraint.  Additionally, the constraint will be added via ALTER:
+So far, the effect is the same. However, if we create DDL elements
+corresponding to the creation and removal of this constraint, and associate
+them with the :class:`~sqlalchemy.schema.Table` as events, these new events
+will take over the job of issuing DDL for the constraint. Additionally, the
+constraint will be added via ALTER:
 
 .. sourcecode:: python+sql
 
     ALTER TABLE users DROP CONSTRAINT cst_user_name_length
     DROP TABLE users{stop}
 
-The real usefulness of the above becomes clearer once we illustrate the ``on`` attribute of a DDL event.  The ``on`` parameter is part of the constructor, and may be a string name of a database dialect name, a tuple containing dialect names, or a Python callable.   This will limit the execution of the item to just those dialects, or when the return value of the callable is ``True``.  So if our :class:`~sqlalchemy.schema.CheckConstraint` was only supported by Postgresql and not other databases, we could limit it to just that dialect::
+The real usefulness of the above becomes clearer once we illustrate the ``on``
+attribute of a DDL event. The ``on`` parameter is part of the constructor, and
+may be the string name of a database dialect, a tuple containing dialect
+names, or a Python callable. This will limit the execution of the item to just
+those dialects, or when the return value of the callable is ``True``. So if
+our :class:`~sqlalchemy.schema.CheckConstraint` was only supported by
+Postgresql and not other databases, we could limit it to just that dialect::
 
     AddConstraint(constraint, on='postgresql').execute_at("after-create", users)
     DropConstraint(constraint, on='postgresql').execute_at("before-drop", users)
     AddConstraint(constraint, on=('postgresql', 'mysql')).execute_at("after-create", users)
     DropConstraint(constraint, on=('postgresql', 'mysql')).execute_at("before-drop", users)
 
-When using a callable, the callable is passed the ddl element, event name, the :class:`~sqlalchemy.schema.Table` or :class:`~sqlalchemy.schema.MetaData` object whose "create" or "drop" event is in progress, and the :class:`~sqlalchemy.engine.base.Connection` object being used for the operation, as well as additional information as keyword arguments.  The callable can perform checks, such as whether or not a given item already exists.  Below we define ``should_create()`` and ``should_drop()`` callables that check for the presence of our named constraint:
+When using a callable, the callable is passed the ddl element, event name, the
+:class:`~sqlalchemy.schema.Table` or :class:`~sqlalchemy.schema.MetaData`
+object whose "create" or "drop" event is in progress, and the
+:class:`~sqlalchemy.engine.base.Connection` object being used for the
+operation, as well as additional information as keyword arguments. The
+callable can perform checks, such as whether or not a given item already
+exists. Below we define ``should_create()`` and ``should_drop()`` callables
+that check for the presence of our named constraint:
 
 .. sourcecode:: python+sql
 
 custom compilation - see :ref:`sqlalchemy.ext.compiler_toplevel` for
  details.
 
+.. _schema_api_ddl:
+
 DDL API
 -------
 

doc/build/examples.rst

-.. _examples_toplevel:
-
-Examples
-========
-
-The SQLAlchemy distribution includes a variety of code examples illustrating a select set of patterns, some typical and some not so typical.   All are runnable and can be found in the ``/examples`` directory of the distribution.   Each example contains a README in its ``__init__.py`` file, each of which are listed below.
-
-Additional SQLAlchemy examples, some user contributed, are available on the wiki at `<http://www.sqlalchemy.org/trac/wiki/UsageRecipes>`_.
-
-.. _examples_adjacencylist:
-
-Adjacency List
---------------
-
-Location: /examples/adjacency_list/
-
-.. automodule:: adjacency_list
-
-Associations
-------------
-
-Location: /examples/association/
-
-.. automodule:: association
-
-
-.. _examples_instrumentation:
-
-Attribute Instrumentation
--------------------------
-
-Location: /examples/custom_attributes/
-
-.. automodule:: custom_attributes
-
-.. _examples_caching:
-
-Beaker Caching
---------------
-
-Location: /examples/beaker_caching/
-
-.. automodule:: beaker_caching
-
-Derived Attributes
-------------------
-
-Location: /examples/derived_attributes/
-
-.. automodule:: derived_attributes
-
-
-Directed Graphs
----------------
-
-Location: /examples/graphs/
-
-.. automodule:: graphs
-
-Dynamic Relations as Dictionaries
-----------------------------------
-
-Location: /examples/dynamic_dict/
-
-.. automodule:: dynamic_dict
-
-.. _examples_sharding:
-
-Horizontal Sharding
--------------------
-
-Location: /examples/sharding
-
-.. automodule:: sharding
-
-Inheritance Mappings
---------------------
-
-Location: /examples/inheritance/
-
-.. automodule:: inheritance
-
-Large Collections
------------------
-
-Location: /examples/large_collection/
-
-.. automodule:: large_collection
-
-Nested Sets
------------
-
-Location: /examples/nested_sets/
-
-.. automodule:: nested_sets
-
-Polymorphic Associations
-------------------------
-
-Location: /examples/poly_assoc/
-
-.. automodule:: poly_assoc
-
-PostGIS Integration
--------------------
-
-Location: /examples/postgis
-
-.. automodule:: postgis
-
-Versioned Objects
------------------
-
-Location: /examples/versioning
-
-.. automodule:: versioning
-
-Vertical Attribute Mapping
---------------------------
-
-Location: /examples/vertical
-
-.. automodule:: vertical
-
-.. _examples_xmlpersistence:
-
-XML Persistence
----------------
-
-Location: /examples/elementtree/
-
-.. automodule:: elementtree

doc/build/intro.rst

 =============
 
 Working code examples, mostly regarding the ORM, are included in the
-SQLAlchemy distribution, and there are also usage recipes on the SQLAlchemy
-wiki. A description of all the included example applications is at
-:ref:`examples_toplevel`.
+SQLAlchemy distribution. A description of all the included example
+applications is at :ref:`examples_toplevel`.
+
+There is also a wide variety of examples involving both core SQLAlchemy
+constructs and the ORM on the wiki.  See
+`<http://www.sqlalchemy.org/trac/wiki/UsageRecipes>`_.
 
 Installing SQLAlchemy
 ======================
 
 Installing SQLAlchemy from scratch is most easily achieved with `setuptools
-<http://pypi.python.org/pypi/setuptools/>`_. Assuming it's installed, just run
+<http://pypi.python.org/pypi/setuptools/>`_, or alternatively
+`pip <http://pypi.python.org/pypi/pip/>`_. Assuming setuptools is installed, just run
 this from the command-line:
 
 .. sourcecode:: none
 
     # easy_install SQLAlchemy
+    
+Or with pip:
+
+.. sourcecode:: none
+
+    # pip install SQLAlchemy
 
 This command will download the latest version of SQLAlchemy from the `Python
 Cheese Shop <http://pypi.python.org/pypi/SQLAlchemy>`_ and install it to your
 system.
 
-* setuptools_ 
-* `install setuptools <http://peak.telecommunity.com/DevCenter/EasyInstall#installation-instructions>`_
-* `pypi <http://pypi.python.org/pypi/SQLAlchemy>`_
-
 Otherwise, you can install from the distribution using the ``setup.py`` script:
 
 .. sourcecode:: none

doc/build/mappers.rst

-.. _datamapping_toplevel:
-
-====================
-Mapper Configuration
-====================
-This section references most major configurational patterns involving the
-:func:`~.orm.mapper` and :func:`.relationship` functions. It assumes you've
-worked through :ref:`ormtutorial_toplevel` and know how to construct and use
-rudimentary mappers and relationships.
-
-Mapper Configuration
-====================
-
-This section describes a variety of configurational patterns that are usable
-with mappers.   Most of these examples apply equally well
-to the usage of distinct :func:`~.orm.mapper` and :class:`.Table` objects 
-as well as when using the :mod:`sqlalchemy.ext.declarative` extension.
-
-Any example in this section which takes a form such as::
-
-    mapper(User, users_table, primary_key=[users_table.c.id])
-    
-Would translate into declarative as::
-
-    class User(Base):
-        __table__ = users_table
-        __mapper_args__ = {
-            'primary_key':users_table.c.id
-        }
-
-Or if using ``__tablename__``, :class:`.Column` objects are declared inline
-with the class definition. These are usable as is within ``__mapper_args__``::
-
-    class User(Base):
-        __tablename__ = 'users'
-        
-        id = Column(Integer)
-        
-        __mapper_args__ = {
-            'primary_key':id
-        }
-
-For a full reference of all options available on mappers, please see the API
-description of :func:`~.orm.mapper`.
-
-Customizing Column Properties
-------------------------------
-
-The default behavior of :func:`~.orm.mapper` is to assemble all the columns in
-the mapped :class:`.Table` into mapped object attributes. This behavior can be
-modified in several ways, as well as enhanced by SQL expressions.
-
-Mapping a Subset of Table Columns
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To reference a subset of columns referenced by a table as mapped attributes,
-use the ``include_properties`` or ``exclude_properties`` arguments. For
-example::
-
-    mapper(User, users_table, include_properties=['user_id', 'user_name'])
-    
-...will map the ``User`` class to the ``users_table`` table, only including
-the "user_id" and "user_name" columns - the rest are not refererenced.
-Similarly::
-
-    mapper(Address, addresses_table, 
-                exclude_properties=['street', 'city', 'state', 'zip'])
-
-...will map the ``Address`` class to the ``addresses_table`` table, including
-all columns present except "street", "city", "state", and "zip".
-
-When this mapping is used, the columns that are not included will not be
-referenced in any SELECT statements emitted by :class:`.Query`, nor will there
-be any mapped attribute on the mapped class which represents the column;
-assigning an attribute of that name will have no effect beyond that of
-a normal Python attribute assignment.
-
-In some cases, multiple columns may have the same name, such as when
-mapping to a join of two or more tables that share some column name.  To 
-exclude or include individual columns, :class:`.Column` objects
-may also be placed within the "include_properties" and "exclude_properties"
-collections (new feature as of 0.6.4)::
-
-    mapper(UserAddress, users_table.join(addresses_table),
-                exclude_properties=[addresses_table.c.id],
-                primary_key=users_table.c.id
-            )
-
-It should be noted that insert and update defaults configured on individal
-:class:`.Column` objects, such as those configured by the "default",
-"on_update", "server_default" and "server_onupdate" arguments, will continue
-to function normally even if those :class:`.Column` objects are not mapped.
-This functionality is part of the SQL expression and execution system and
-occurs below the level of the ORM.
-
-
-Attribute Names for Mapped Columns
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To change the name of the attribute mapped to a particular column, place the
-:class:`~sqlalchemy.schema.Column` object in the ``properties`` dictionary
-with the desired key::
-
-    mapper(User, users_table, properties={
-       'id': users_table.c.user_id,
-       'name': users_table.c.user_name,
-    })
-
-When using :mod:`~sqlalchemy.ext.declarative`, the above configuration is more
-succinct - place the full column name in the :class:`.Column` definition,
-using the desired attribute name in the class definition::
-
-    from sqlalchemy.ext.declarative import declarative_base
-    Base = declarative_base()
-    
-    class User(Base):
-        __tablename__ = 'user'
-        id = Column('user_id', Integer, primary_key=True)
-        name = Column('user_name', String(50))
-
-To change the names of all attributes using a prefix, use the
-``column_prefix`` option.  This is useful for some schemes that would like
-to declare alternate attributes::
-
-    mapper(User, users_table, column_prefix='_')
-
-The above will place attribute names such as ``_user_id``, ``_user_name``,
-``_password`` etc. on the mapped ``User`` class.
-
-
-Mapping Multiple Columns to a Single Attribute
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To place multiple columns which are known to be "synonymous" based on foreign
-key relationship or join condition into the same mapped attribute, put them
-together using a list, as below where we map to a :func:`~.expression.join`::
-
-    from sqlalchemy.sql import join
-    
-    # join users and addresses
-    usersaddresses = join(users_table, addresses_table, \
-        users_table.c.user_id == addresses_table.c.user_id)
-
-    # user_id columns are equated under the 'user_id' attribute
-    mapper(User, usersaddresses, properties={
-        'id':[users_table.c.user_id, addresses_table.c.user_id],
-    })
-
-For further examples on this particular use case, see :ref:`maptojoin`.
-
-Deferred Column Loading
-------------------------
-
-This feature allows particular columns of a table to not be loaded by default,
-instead being loaded later on when first referenced. It is essentially
-"column-level lazy loading". This feature is useful when one wants to avoid
-loading a large text or binary field into memory when it's not needed.
-Individual columns can be lazy loaded by themselves or placed into groups that
-lazy-load together::
-
-    book_excerpts = Table('books', db,
-        Column('book_id', Integer, primary_key=True),
-        Column('title', String(200), nullable=False),
-        Column('summary', String(2000)),
-        Column('excerpt', String),
-        Column('photo', Binary)
-    )
-
-    class Book(object):
-        pass
-
-    # define a mapper that will load each of 'excerpt' and 'photo' in
-    # separate, individual-row SELECT statements when each attribute
-    # is first referenced on the individual object instance
-    mapper(Book, book_excerpts, properties={
-       'excerpt': deferred(book_excerpts.c.excerpt),
-       'photo': deferred(book_excerpts.c.photo)
-    })
-
-Deferred columns can be placed into groups so that they load together::
-
-    book_excerpts = Table('books', db,
-      Column('book_id', Integer, primary_key=True),
-      Column('title', String(200), nullable=False),
-      Column('summary', String(2000)),
-      Column('excerpt', String),
-      Column('photo1', Binary),
-      Column('photo2', Binary),
-      Column('photo3', Binary)
-    )
-
-    class Book(object):
-        pass
-
-    # define a mapper with a 'photos' deferred group.  when one photo is referenced,
-    # all three photos will be loaded in one SELECT statement.  The 'excerpt' will
-    # be loaded separately when it is first referenced.
-    mapper(Book, book_excerpts, properties = {
-      'excerpt': deferred(book_excerpts.c.excerpt),
-      'photo1': deferred(book_excerpts.c.photo1, group='photos'),
-      'photo2': deferred(book_excerpts.c.photo2, group='photos'),
-      'photo3': deferred(book_excerpts.c.photo3, group='photos')
-    })
-
-You can defer or undefer columns at the :class:`~sqlalchemy.orm.query.Query` level using the :func:`.defer` and :func:`.undefer` query options::
-
-    query = session.query(Book)
-    query.options(defer('summary')).all()
-    query.options(undefer('excerpt')).all()
-
-And an entire "deferred group", i.e. which uses the ``group`` keyword argument to :func:`~sqlalchemy.orm.deferred()`, can be undeferred using :func:`.undefer_group()`, sending in the group name::
-
-    query = session.query(Book)
-    query.options(undefer_group('photos')).all()
-
-.. _mapper_sql_expressions:
-
-SQL Expressions as Mapped Attributes
--------------------------------------
-
-Any SQL expression that relates to the primary mapped selectable can be mapped as a 
-read-only attribute which will be bundled into the SELECT emitted
-for the target mapper when rows are loaded.   This effect is achieved
-using the :func:`.column_property` function.  Any
-scalar-returning
-:class:`.ClauseElement` may be
-used.  Unlike older versions of SQLAlchemy, there is no :func:`~.sql.expression.label` requirement::
-
-    mapper(User, users_table, properties={
-        'fullname': column_property(
-            users_table.c.firstname + " " + users_table.c.lastname
-        )
-    })
-
-Correlated subqueries may be used as well:
-
-.. sourcecode:: python+sql
-
-    from sqlalchemy import select, func
-    
-    mapper(User, users_table, properties={
-        'address_count': column_property(
-                select([func.count(addresses_table.c.address_id)]).\
-                where(addresses_table.c.user_id==users_table.c.user_id)
-            )
-    })
-
-The declarative form of the above is described in :ref:`declarative_sql_expressions`.
-
-Note that :func:`.column_property` is used to provide the effect of a SQL
-expression that is actively rendered into the SELECT generated for a
-particular mapped class.  Alternatively, for the typical attribute that
-represents a composed value, its usually simpler to define it as a Python
-property which is evaluated as it is invoked on instances after they've been
-loaded::
-
-    class User(object):
-        @property
-        def fullname(self):
-            return self.firstname + " " + self.lastname
-            
-To invoke a SQL statement from an instance that's already been loaded, the
-session associated with the instance can be acquired using
-:func:`~.session.object_session` which will provide the appropriate
-transactional context from which to emit a statement::
-
-    from sqlalchemy.orm import object_session
-    from sqlalchemy import select, func
-    
-    class User(object):
-        @property
-        def address_count(self):
-            return object_session(self).\
-                scalar(
-                    select([func.count(addresses_table.c.address_id)]).\
-                        where(addresses_table.c.user_id==self.user_id)
-                )
-
-On the subject of object-level methods, be sure to see the :mod:`.derived_attributes` example,
-which provides a simple method of reusing instance-level expressions simultaneously
-as SQL expressions.   The :mod:`.derived_attributes` example is slated to become a
-built-in feature of SQLAlchemy in a future release.
-
-Changing Attribute Behavior
-----------------------------
-
-Simple Validators
-~~~~~~~~~~~~~~~~~~
-
-A quick way to add a "validation" routine to an attribute is to use the
-:func:`~sqlalchemy.orm.validates` decorator. An attribute validator can raise
-an exception, halting the process of mutating the attribute's value, or can
-change the given value into something different. Validators, like all
-attribute extensions, are only called by normal userland code; they are not
-issued when the ORM is populating the object.
-
-.. sourcecode:: python+sql
-    
-    from sqlalchemy.orm import validates
-    
-    addresses_table = Table('addresses', metadata,
-        Column('id', Integer, primary_key=True),
-        Column('email', String)
-    )
-
-    class EmailAddress(object):
-        @validates('email')
-        def validate_email(self, key, address):
-            assert '@' in address
-            return address
-
-    mapper(EmailAddress, addresses_table)
-
-Validators also receive collection events, when items are added to a collection:
-
-.. sourcecode:: python+sql
-
-    class User(object):
-        @validates('addresses')
-        def validate_address(self, key, address):
-            assert '@' in address.email
-            return address
-
-.. _synonyms:
-
-Using Descriptors
-~~~~~~~~~~~~~~~~~~
-
-A more comprehensive way to produce modified behavior for an attribute is to
-use descriptors. These are commonly used in Python using the ``property()``
-function. The standard SQLAlchemy technique for descriptors is to create a
-plain descriptor, and to have it read/write from a mapped attribute with a
-different name. Below we illustrate this using Python 2.6-style properties::
-
-    class EmailAddress(object):
-        
-        @property
-        def email(self):
-            return self._email
-            
-        @email.setter
-        def email(self, email):
-            self._email = email
-
-    mapper(EmailAddress, addresses_table, properties={
-        '_email': addresses_table.c.email
-    })
-
-The approach above will work, but there's more we can add. While our
-``EmailAddress`` object will shuttle the value through the ``email``
-descriptor and into the ``_email`` mapped attribute, the class level
-``EmailAddress.email`` attribute does not have the usual expression semantics
-usable with :class:`.Query`. To provide these, we instead use the
-:func:`.synonym` function as follows::
-
-    mapper(EmailAddress, addresses_table, properties={
-        'email': synonym('_email', map_column=True)
-    })
-
-The ``email`` attribute is now usable in the same way as any
-other mapped attribute, including filter expressions,
-get/set operations, etc.::
-
-    address = session.query(EmailAddress).filter(EmailAddress.email == 'some address').one()
-
-    address.email = 'some other address'
-    session.flush()
-
-    q = session.query(EmailAddress).filter_by(email='some other address')
-
-If the mapped class does not provide a property, the :func:`.synonym` construct will create a default getter/setter object automatically.
-
-To use synonyms with :mod:`~sqlalchemy.ext.declarative`, see the section 
-:ref:`declarative_synonyms`.
-
-Note that the "synonym" feature is eventually to be replaced by the superior
-"hybrid attributes" approach, slated to become a built in feature of SQLAlchemy
-in a future release.  "hybrid" attributes are simply Python properties that evaulate
-at both the class level and at the instance level.  For an example of their usage,
-see the :mod:`derived_attributes` example.
-
-.. _custom_comparators:
-
-Custom Comparators
-~~~~~~~~~~~~~~~~~~~
-
-The expressions returned by comparison operations, such as
-``User.name=='ed'``, can be customized by implementing an object that
-explicitly defines each comparison method needed. This is a relatively rare
-use case. For most needs, the approach in :ref:`mapper_sql_expressions` will
-often suffice, or alternatively a scheme like that of the 
-:mod:`.derived_attributes` example.  Those approaches should be tried first
-before resorting to custom comparison objects.
-
-Each of :func:`.column_property`, :func:`~.composite`, :func:`.relationship`,
-and :func:`.comparable_property` accept an argument called
-``comparator_factory``. A subclass of :class:`.PropComparator` can be provided
-for this argument, which can then reimplement basic Python comparison methods
-such as ``__eq__()``, ``__ne__()``, ``__lt__()``, and so on. See each of those
-functions for subclassing guidelines, as it's usually best to subclass the
-:class:`.PropComparator` subclass used by that type of property, so that all
-methods remain implemented. For example, to allow a column-mapped attribute to
-do case-insensitive comparison::
-
-    from sqlalchemy.orm.properties import ColumnProperty
-    from sqlalchemy.sql import func
-    
-    class MyComparator(ColumnProperty.Comparator):
-        def __eq__(self, other):
-            return func.lower(self.__clause_element__()) == func.lower(other)
-
-    mapper(EmailAddress, addresses_table, properties={
-        'email':column_property(addresses_table.c.email,
-                                comparator_factory=MyComparator)
-    })
-
-Above, comparisons on the ``email`` column are wrapped in the SQL ``lower()`` function to produce case-insensitive matching::
-
-    >>> str(EmailAddress.email == 'SomeAddress@foo.com')
-    lower(addresses.email) = lower(:lower_1)
-
-In contrast, a similar effect is more easily accomplished, although
-with less control over its behavior, using a column-mapped expression::
-
-    from sqlalchemy.orm import column_property
-    from sqlalchemy.sql import func
-    
-    mapper(EmailAddress, addresses_table, properties={
-        'email':column_property(func.lower(addresses_table.c.email))
-    })
-
-In the above case, the "email" attribute will be rendered as ``lower(email)`` 
-in all queries, including in the columns clause of the SELECT statement.  
-This means the value of "email" will be loaded as lower case, not just in
-comparisons.  It's up to the user to decide if the finer-grained control
-but more upfront work of a custom :class:`.PropComparator` is necessary.
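
The operator-overloading mechanism that ``comparator_factory`` hooks into can be sketched in plain Python. The ``FakeColumn`` class and the string expression it renders below are purely illustrative stand-ins for SQLAlchemy's expression machinery, not actual API:

```python
# Illustrative stand-ins: FakeColumn and the rendered string are hypothetical,
# standing in for real column clauses and bound-parameter SQL expressions.
class FakeColumn(object):
    def __init__(self, name):
        self.name = name

class CaseInsensitiveComparator(object):
    """Overload __eq__ the way a PropComparator subclass would."""
    def __init__(self, column):
        self.column = column

    def __eq__(self, other):
        # wrap both sides in lower(), mirroring the MyComparator example above
        return "lower(%s) = lower(:param)" % self.column.name

email = CaseInsensitiveComparator(FakeColumn("addresses.email"))

# `email == value` now returns a rendered expression rather than a boolean
assert (email == "SomeAddress@foo.com") == "lower(addresses.email) = lower(:param)"
```

In the real system, ``__eq__()`` returns a SQL expression object rather than a string, which :class:`.Query` then embeds in the WHERE clause.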
-
-.. _mapper_composite:
-
-Composite Column Types
------------------------
-
-Sets of columns can be associated with a single user-defined datatype.  The ORM provides a single attribute which represents the group of columns 
-using the class you provide.
-
-A simple example represents pairs of columns as a "Point" object.  
-Starting with a table that represents two points as x1/y1 and x2/y2::
-
-    from sqlalchemy import Table, Column
-    
-    vertices = Table('vertices', metadata,
-        Column('id', Integer, primary_key=True),
-        Column('x1', Integer),
-        Column('y1', Integer),
-        Column('x2', Integer),
-        Column('y2', Integer),
-        )
-
-We create a new class, ``Point``, that will represent each x/y as a 
-pair::
-
-    class Point(object):
-        def __init__(self, x, y):
-            self.x = x
-            self.y = y
-        def __composite_values__(self):
-            return self.x, self.y
-        def __set_composite_values__(self, x, y):
-            self.x = x
-            self.y = y
-        def __eq__(self, other):
-            return other is not None and \
-                    other.x == self.x and \
-                    other.y == self.y
-        def __ne__(self, other):
-            return not self.__eq__(other)
-
-The requirements for the custom datatype class are that it have a
-constructor which accepts positional arguments corresponding to its column
-format, and also provides a method ``__composite_values__()`` which
-returns the state of the object as a list or tuple, in order of its
-column-based attributes. It also should supply adequate ``__eq__()`` and
-``__ne__()`` methods which test the equality of two instances.
-
-The ``__set_composite_values__()`` method is optional. If it's not
-provided, the names of the mapped columns are taken as the names of
-attributes on the object, and ``setattr()`` is used to set data.
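
The protocol methods are plain Python and can be exercised directly; ``Point`` below repeats the class defined above:

```python
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __composite_values__(self):
        # state in column order: x, y
        return self.x, self.y
    def __set_composite_values__(self, x, y):
        self.x = x
        self.y = y
    def __eq__(self, other):
        return other is not None and \
                other.x == self.x and \
                other.y == self.y
    def __ne__(self, other):
        return not self.__eq__(other)

p = Point(3, 4)
assert p.__composite_values__() == (3, 4)
assert p == Point(3, 4)
assert p != Point(5, 6)

# the optional setter mutates the object in place, as the ORM
# would when refreshing the composite from newly loaded column values
p.__set_composite_values__(5, 6)
assert (p.x, p.y) == (5, 6)
```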
-
-The :func:`.composite` function is then used in the mapping::
-
-    from sqlalchemy.orm import mapper, composite
-
-    class Vertex(object):
-        def __init__(self, start, end):
-            self.start = start
-            self.end = end
-
-    mapper(Vertex, vertices, properties={
-        'start': composite(Point, vertices.c.x1, vertices.c.y1),
-        'end': composite(Point, vertices.c.x2, vertices.c.y2)
-    })
-
-We can now persist and query ``Vertex`` instances as though the
-``start`` and ``end`` attributes were regular scalar attributes::
-
-    session = Session()
-    v = Vertex(Point(3, 4), Point(5, 6))
-    session.add(v)
-
-    v2 = session.query(Vertex).filter(Vertex.start == Point(3, 4))
-
-The "equals" comparison operation by default produces an AND of all
-corresponding columns equated to one another. This can be changed using
-the ``comparator_factory``, described in :ref:`custom_comparators`.
-Below we illustrate the "greater than" operator, implementing 
-the same expression that the base "greater than" does::
-
-    from sqlalchemy.orm.properties import CompositeProperty
-    from sqlalchemy import sql
-
-    class PointComparator(CompositeProperty.Comparator):
-        def __gt__(self, other):
-            """redefine the 'greater than' operation"""
-
-            return sql.and_(*[a>b for a, b in
-                              zip(self.__clause_element__().clauses,
-                                  other.__composite_values__())])
-
-    mapper(Vertex, vertices, properties={
-        'start': composite(Point, vertices.c.x1, vertices.c.y1,
-                                    comparator_factory=PointComparator),
-        'end': composite(Point, vertices.c.x2, vertices.c.y2,
-                                    comparator_factory=PointComparator)
-    })
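
The pairwise expansion performed by ``PointComparator.__gt__()``, zipping the composite's column clauses against the other object's values and AND-ing each comparison, can be sketched with plain values. Here ``all()`` stands in for ``sql.and_()`` and plain integers stand in for column clauses, an illustrative simplification:

```python
# Plain-Python sketch of the zip/and_ expansion in PointComparator.__gt__;
# all() stands in for sql.and_(), plain ints stand in for column clauses.
def composite_gt(self_values, other_values):
    return all(a > b for a, b in zip(self_values, other_values))

# Point(5, 6) > Point(3, 4): both coordinate comparisons hold
assert composite_gt((5, 6), (3, 4)) is True

# Point(5, 2) > Point(3, 4): the y comparison fails, so the AND fails
assert composite_gt((5, 2), (3, 4)) is False
```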
-
-Controlling Ordering
----------------------
-
-The ORM does not generate ordering for any query unless explicitly configured.
-
-The "default" ordering for a collection, which applies to list-based
-collections, can be configured using the ``order_by`` keyword argument on
-:func:`~sqlalchemy.orm.relationship`::
-
-    mapper(Address, addresses_table)
-
-    # order address objects by address id
-    mapper(User, users_table, properties={
-        'addresses': relationship(Address, order_by=addresses_table.c.address_id)
-    })
-
-Note that when using joined eager loaders with relationships, the tables used
-by the eager load's join are anonymously aliased. You can only order by these
-columns if you specify it at the :func:`~sqlalchemy.orm.relationship` level.
-To control ordering at the query level based on a related table, you
-``join()`` to that relationship, then order by it::
-
-    session.query(User).join('addresses').order_by(Address.street)
-
-Ordering for rows loaded through :class:`~sqlalchemy.orm.query.Query` is
-usually specified using the ``order_by()`` generative method. There is also an
-option to set a default ordering for Queries which are against a single mapped
-entity and where there was no explicit ``order_by()`` stated, which is the
-``order_by`` keyword argument to ``mapper()``::
-
-    # order by a column
-    mapper(User, users_table, order_by=users_table.c.user_id)
-
-    # order by multiple items
-    mapper(User, users_table, order_by=[users_table.c.user_id, users_table.c.user_name.desc()])
-
-Above, a :class:`~sqlalchemy.orm.query.Query` issued for the ``User`` class
-will use the value of the mapper's ``order_by`` setting if the
-:class:`~sqlalchemy.orm.query.Query` itself has no ordering specified.
-
-.. _datamapping_inheritance:
-
-Mapping Class Inheritance Hierarchies
---------------------------------------
-
-SQLAlchemy supports three forms of inheritance:
-
-* *single table inheritance*, where several types of classes are stored in one table;
-* *concrete table inheritance*, where each type of class is stored in its own table;
-* *joined table inheritance*, where the parent/child classes are stored in their own tables that are joined together in a select.
-
-Support for single and joined table inheritance is strong; concrete table inheritance is a less common scenario with some particular problems, so it is not quite as flexible.
-
-When mappers are configured in an inheritance relationship, SQLAlchemy has the ability to load elements "polymorphically", meaning that a single query can return objects of multiple types.
-
-For the following sections, assume this class relationship:
-
-.. sourcecode:: python+sql
-
-    class Employee(object):
-        def __init__(self, name):
-            self.name = name
-        def __repr__(self):
-            return self.__class__.__name__ + " " + self.name
-
-    class Manager(Employee):
-        def __init__(self, name, manager_data):
-            self.name = name
-            self.manager_data = manager_data
-        def __repr__(self):
-            return self.__class__.__name__ + " " + self.name + " " +  self.manager_data
-
-    class Engineer(Employee):
-        def __init__(self, name, engineer_info):
-            self.name = name
-            self.engineer_info = engineer_info
-        def __repr__(self):
-            return self.__class__.__name__ + " " + self.name + " " +  self.engineer_info
-
-Joined Table Inheritance
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-In joined table inheritance, each class along a particular class's list of parents is represented by a unique table.  The total set of attributes for a particular instance is represented as a join along all tables in its inheritance path.  Here, we first define a table to represent the ``Employee`` class.  This table will contain a primary key column (or columns), and a column for each attribute that's represented by ``Employee``.  In this case it's just ``name``::
-
-    employees = Table('employees', metadata,
-       Column('employee_id', Integer, primary_key=True),
-       Column('name', String(50)),
-       Column('type', String(30), nullable=False)
-    )
-
-The table also has a column called ``type``.  It is strongly advised in both single- and joined-table inheritance scenarios that the root table contain a column whose sole purpose is that of the **discriminator**; it stores a value which indicates the type of object represented within the row.  The column may be of any desired datatype.  While there are some "tricks" to work around the requirement that there be a discriminator column, they are more complicated to configure when one wishes to load polymorphically.
-
-Next we define individual tables for each of ``Engineer`` and ``Manager``, which contain columns that represent the attributes unique to the subclass they represent.  Each table also must contain a primary key column (or columns), and in most cases a foreign key reference to the parent table.  It is  standard practice that the same column is used for both of these roles, and that the column is also named the same as that of the parent table.  However this is optional in SQLAlchemy; separate columns may be used for primary key and parent-relationship, the column may be named differently than that of the parent, and even a custom join condition can be specified between parent and child tables instead of using a foreign key::
-
-    engineers = Table('engineers', metadata,
-       Column('employee_id', Integer, ForeignKey('employees.employee_id'), primary_key=True),
-       Column('engineer_info', String(50)),
-    )
-
-    managers = Table('managers', metadata,
-       Column('employee_id', Integer, ForeignKey('employees.employee_id'), primary_key=True),
-       Column('manager_data', String(50)),
-    )
-
-One natural effect of the joined table inheritance configuration is that the identity of any mapped object can be determined entirely from the base table.  This has obvious advantages, so SQLAlchemy always considers the primary key columns of a joined inheritance class to be those of the base table only, unless otherwise manually configured.  In other words, the ``employee_id`` column of both the ``engineers`` and ``managers`` table is not used to locate the ``Engineer`` or ``Manager`` object itself - only the value in ``employees.employee_id`` is considered, and the primary key in this case is non-composite.  ``engineers.employee_id`` and ``managers.employee_id`` are still of course critical to the proper operation of the pattern overall as they are used to locate the joined row, once the parent row has been determined, either through a distinct SELECT statement or all at once within a JOIN.
-
-We then configure mappers as usual, except we use some additional arguments to indicate the inheritance relationship, the polymorphic discriminator column, and the **polymorphic identity** of each class; this is the value that will be stored in the polymorphic discriminator column.
-
-.. sourcecode:: python+sql
-
-    mapper(Employee, employees, polymorphic_on=employees.c.type, polymorphic_identity='employee')
-    mapper(Engineer, engineers, inherits=Employee, polymorphic_identity='engineer')
-    mapper(Manager, managers, inherits=Employee, polymorphic_identity='manager')
-
-And that's it.  Querying against ``Employee`` will return a combination of ``Employee``, ``Engineer`` and ``Manager`` objects.   Newly saved ``Engineer``, ``Manager``, and ``Employee`` objects will automatically populate the ``employees.type`` column with ``engineer``, ``manager``, or ``employee``, as appropriate.
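
The role the discriminator plays at load time can be sketched in plain Python: a registry maps each ``polymorphic_identity`` value to a class, and each incoming row is instantiated according to its ``type`` value. The registry and row dicts below are illustrative only, not ORM internals:

```python
class Employee(object):
    def __init__(self, name):
        self.name = name

class Manager(Employee):
    pass

class Engineer(Employee):
    pass

# polymorphic_identity -> class, as configured in the mappers above
polymorphic_map = {
    'employee': Employee,
    'manager': Manager,
    'engineer': Engineer,
}

# hypothetical result rows, each carrying its discriminator value
rows = [
    {'type': 'engineer', 'name': 'wally'},
    {'type': 'manager', 'name': 'dilbert'},
]

# dispatch each row to the class named by its discriminator
objects = [polymorphic_map[row['type']](row['name']) for row in rows]
assert isinstance(objects[0], Engineer)
assert isinstance(objects[1], Manager)
assert objects[1].name == 'dilbert'
```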
-
-Basic Control of Which Tables are Queried
-++++++++++++++++++++++++++++++++++++++++++
-
-The :func:`~sqlalchemy.orm.query.Query.with_polymorphic` method of
-:class:`~sqlalchemy.orm.query.Query` affects the specific subclass tables
-which the Query selects from. Normally, a query such as this:
-
-.. sourcecode:: python+sql
-
-    session.query(Employee).all()
-
-...selects only from the ``employees`` table. When loading fresh from the
-database, our joined-table setup will query from the parent table only, using
-SQL such as this:
-
-.. sourcecode:: python+sql
-
-    {opensql}
-    SELECT employees.employee_id AS employees_employee_id, employees.name AS employees_name, employees.type AS employees_type
-    FROM employees
-    []
-
-As attributes are requested from those ``Employee`` objects which are
-represented in either the ``engineers`` or ``managers`` child tables, a second
-load is issued for the columns in that related row, if the data was not
-already loaded. So above, after accessing the objects you'd see further SQL
-issued along the lines of:
-
-.. sourcecode:: python+sql
-
-    {opensql}
-    SELECT managers.employee_id AS managers_employee_id, managers.manager_data AS managers_manager_data
-    FROM managers
-    WHERE ? = managers.employee_id
-    [5]
-    SELECT engineers.employee_id AS engineers_employee_id, engineers.engineer_info AS engineers_engineer_info
-    FROM engineers
-    WHERE ? = engineers.employee_id
-    [2]
-
-This behavior works well when issuing searches for small numbers of items,
-such as when using :meth:`.Query.get`, since the full range of joined tables are not
-pulled in to the SQL statement unnecessarily. But when querying a larger span
-of rows which are known to be of many types, you may want to actively join to
-some or all of the joined tables. The ``with_polymorphic`` feature of
-:class:`~sqlalchemy.orm.query.Query` and ``mapper`` provides this.
-
-Telling our query to polymorphically load ``Engineer`` and ``Manager``
-objects:
-
-.. sourcecode:: python+sql
-
-    query = session.query(Employee).with_polymorphic([Engineer, Manager])
-
-produces a query which joins the ``employees`` table to both the ``engineers`` and ``managers`` tables like the following:
-
-.. sourcecode:: python+sql
-
-    query.all()
-    {opensql}
-    SELECT employees.employee_id AS employees_employee_id, engineers.employee_id AS engineers_employee_id, managers.employee_id AS managers_employee_id, employees.name AS employees_name, employees.type AS employees_type, engineers.engineer_info AS engineers_engineer_info, managers.manager_data AS managers_manager_data
-    FROM employees LEFT OUTER JOIN engineers ON employees.employee_id = engineers.employee_id LEFT OUTER JOIN managers ON employees.employee_id = managers.employee_id
-    []
-
-:func:`~sqlalchemy.orm.query.Query.with_polymorphic` accepts a single class or
-mapper, a list of classes/mappers, or the string ``'*'`` to indicate all
-subclasses:
-
-.. sourcecode:: python+sql
-
-    # join to the engineers table
-    query.with_polymorphic(Engineer)
-
-    # join to the engineers and managers tables
-    query.with_polymorphic([Engineer, Manager])
-
-    # join to all subclass tables
-    query.with_polymorphic('*')
-
-It also accepts a second argument ``selectable`` which replaces the automatic
-join creation and instead selects directly from the selectable given. This
-feature is normally used with "concrete" inheritance, described later, but can
-be used with any kind of inheritance setup in the case that specialized SQL
-should be used to load polymorphically:
-
-.. sourcecode:: python+sql
-
-    # custom selectable
-    query.with_polymorphic([Engineer, Manager], employees.outerjoin(managers).outerjoin(engineers))
-
-:func:`~sqlalchemy.orm.query.Query.with_polymorphic` is also needed
-when you wish to add filter criteria that are specific to one or more
-subclasses; it makes the subclasses' columns available to the WHERE clause:
-
-.. sourcecode:: python+sql
-
-    session.query(Employee).with_polymorphic([Engineer, Manager]).\
-        filter(or_(Engineer.engineer_info=='w', Manager.manager_data=='q'))
-
-Note that if you only need to load a single subtype, such as just the
-``Engineer`` objects, :func:`~sqlalchemy.orm.query.Query.with_polymorphic` is
-not needed since you would query against the ``Engineer`` class directly.
-
-The mapper also accepts ``with_polymorphic`` as a configurational argument so
-that the joined-style load will be issued automatically. This argument may be
-the string ``'*'``, a list of classes, or a tuple consisting of either,
-followed by a selectable.
-
-.. sourcecode:: python+sql
-
-    mapper(Employee, employees, polymorphic_on=employees.c.type, \
-        polymorphic_identity='employee', with_polymorphic='*')
-    mapper(Engineer, engineers, inherits=Employee, polymorphic_identity='engineer')
-    mapper(Manager, managers, inherits=Employee, polymorphic_identity='manager')
-
-The above mapping will produce a query similar to that of
-``with_polymorphic('*')`` for every query of ``Employee`` objects.
-
-Using :func:`~sqlalchemy.orm.query.Query.with_polymorphic` with
-:class:`~sqlalchemy.orm.query.Query` will override the mapper-level
-``with_polymorphic`` setting.
-
-Advanced Control of Which Tables are Queried
-++++++++++++++++++++++++++++++++++++++++++++
-
-The :meth:`.Query.with_polymorphic` method and configuration work fine for
-simple scenarios. However, they currently do not work with any
-:class:`.Query` that selects against individual columns or against multiple
-classes; they also must be applied at the outset of a query.
-
-For total control of how :class:`.Query` joins along inheritance relationships,
-use the :class:`.Table` objects directly and construct joins manually.  For example, to 
-query the name of employees with particular criterion::
-
-    session.query(Employee.name).\
-        outerjoin((engineers, engineers.c.employee_id==employees.c.employee_id)).\
-        outerjoin((managers, managers.c.employee_id==employees.c.employee_id)).\
-        filter(or_(Engineer.engineer_info=='w', Manager.manager_data=='q'))
-
-The base table, in this case the "employees" table, isn't always necessary. A
-SQL query is always more efficient with fewer joins. Here, if we wanted to
-just load information specific to managers or engineers, we can instruct
-:class:`.Query` to use only those tables. The ``FROM`` clause is determined by
-what's specified in the :meth:`.Session.query`, :meth:`.Query.filter`, or
-:meth:`.Query.select_from` methods::
-
-    session.query(Manager.manager_data).select_from(managers)
-
-    session.query(engineers.c.employee_id).filter(engineers.c.engineer_info==managers.c.manager_data)
-
-Creating Joins to Specific Subtypes
-++++++++++++++++++++++++++++++++++++
-
-The :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type` method is a
-helper which allows the construction of joins along
-:func:`~sqlalchemy.orm.relationship` paths while narrowing the criterion to
-specific subclasses. Suppose the ``employees`` table represents a collection
-of employees which are associated with a ``Company`` object. We'll add a
-``company_id`` column to the ``employees`` table and a new table
-``companies``:
-
-.. sourcecode:: python+sql
-
-    companies = Table('companies', metadata,
-       Column('company_id', Integer, primary_key=True),
-       Column('name', String(50))
-       )
-
-    employees = Table('employees', metadata,
-      Column('employee_id', Integer, primary_key=True),
-      Column('name', String(50)),
-      Column('type', String(30), nullable=False),
-      Column('company_id', Integer, ForeignKey('companies.company_id'))
-    )
-
-    class Company(object):
-        pass
-
-    mapper(Company, companies, properties={
-        'employees': relationship(Employee)
-    })
-
-When querying from ``Company`` onto the ``Employee`` relationship, the ``join()`` method as well as the ``any()`` and ``has()`` operators will create a join from ``companies`` to ``employees``, without including ``engineers`` or ``managers`` in the mix.  If we wish to have criterion which is specifically against the ``Engineer`` class, we can tell those methods to join or subquery against the joined table representing the subclass using the :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type` operator:
-
-.. sourcecode:: python+sql
-
-    session.query(Company).join(Company.employees.of_type(Engineer)).filter(Engineer.engineer_info=='someinfo')
-
-A longhand version of this would involve spelling out the full target selectable within a 2-tuple:
-
-.. sourcecode:: python+sql
-
-    session.query(Company).join((employees.join(engineers), Company.employees)).filter(Engineer.engineer_info=='someinfo')
-
-Currently, :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type` accepts a single class argument.  It may be expanded later on to accept multiple classes.  For now, to join to any group of subclasses, the longhand notation allows this flexibility:
-
-.. sourcecode:: python+sql
-
-    session.query(Company).join((employees.outerjoin(engineers).outerjoin(managers), Company.employees)).\
-        filter(or_(Engineer.engineer_info=='someinfo', Manager.manager_data=='somedata'))
-
-The ``any()`` and ``has()`` operators also can be used with :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type` when the embedded criterion is in terms of a subclass:
-
-.. sourcecode:: python+sql
-
-    session.query(Company).filter(Company.employees.of_type(Engineer).any(Engineer.engineer_info=='someinfo')).all()
-
-Note that ``any()`` and ``has()`` are both shorthand for a correlated EXISTS query.  Building one by hand looks like:
-
-.. sourcecode:: python+sql
-
-    session.query(Company).filter(
-        exists([1],
-            and_(Engineer.engineer_info=='someinfo', employees.c.company_id==companies.c.company_id),
-            from_obj=employees.join(engineers)
-        )
-    ).all()
-
-The EXISTS subquery above selects from the join of ``employees`` to ``engineers``, and also specifies criterion which correlates the EXISTS subselect back to the parent ``companies`` table.
-
-Single Table Inheritance
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Single table inheritance is where the attributes of the base class as well as all subclasses are represented within a single table.  A column is present in the table for every attribute mapped to the base class and all subclasses; the columns which correspond to a single subclass are nullable.  This configuration looks much like joined-table inheritance except there's only one table.  In this case, a ``type`` column is required, as there would be no other way to discriminate between classes.  The table is specified in the base mapper only; for the inheriting classes, leave their ``table`` parameter blank:
-
-.. sourcecode:: python+sql
-
-    employees_table = Table('employees', metadata,
-        Column('employee_id', Integer, primary_key=True),
-        Column('name', String(50)),
-        Column('manager_data', String(50)),
-        Column('engineer_info', String(50)),
-        Column('type', String(20), nullable=False)
-    )
-
-    employee_mapper = mapper(Employee, employees_table, \
-        polymorphic_on=employees_table.c.type, polymorphic_identity='employee')
-    manager_mapper = mapper(Manager, inherits=employee_mapper, polymorphic_identity='manager')
-    engineer_mapper = mapper(Engineer, inherits=employee_mapper, polymorphic_identity='engineer')
-
-Note that the mappers for the derived classes ``Manager`` and ``Engineer`` omit the specification of their associated table, as it is inherited from the ``employee_mapper``. Omitting the table specification for derived mappers in single-table inheritance is required.
-
-.. _concrete_inheritance:
-
-Concrete Table Inheritance
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This form of inheritance maps each class to a distinct table, as below:
-
-.. sourcecode:: python+sql
-
-    employees_table = Table('employees', metadata,
-        Column('employee_id', Integer, primary_key=True),
-        Column('name', String(50)),
-    )
-
-    managers_table = Table('managers', metadata,
-        Column('employee_id', Integer, primary_key=True),
-        Column('name', String(50)),
-        Column('manager_data', String(50)),
-    )
-
-    engineers_table = Table('engineers', metadata,
-        Column('employee_id', Integer, primary_key=True),
-        Column('name', String(50)),
-        Column('engineer_info', String(50)),
-    )
-
-Notice in this case there is no ``type`` column. If polymorphic loading is not
-required, there's no advantage to using ``inherits`` here; you just define a
-separate mapper for each class:
-
-.. sourcecode:: python+sql
-
-    mapper(Employee, employees_table)
-    mapper(Manager, managers_table)
-    mapper(Engineer, engineers_table)
-
-To load polymorphically, the ``with_polymorphic`` argument is required, along
-with a selectable indicating how rows should be loaded. In this case we must
-construct a UNION of all three tables. SQLAlchemy includes a helper function
-to create these called :func:`~sqlalchemy.orm.util.polymorphic_union`, which
-will map all the different columns into a structure of selects with the same
-numbers and names of columns, and also generate a virtual ``type`` column for
-each subselect:
-
-.. sourcecode:: python+sql
-
-    pjoin = polymorphic_union({
-        'employee': employees_table,
-        'manager': managers_table,
-        'engineer': engineers_table
-    }, 'type', 'pjoin')
-
-    employee_mapper = mapper(Employee, employees_table, with_polymorphic=('*', pjoin), \
-        polymorphic_on=pjoin.c.type, polymorphic_identity='employee')
-    manager_mapper = mapper(Manager, managers_table, inherits=employee_mapper, \
-        concrete=True, polymorphic_identity='manager')
-    engineer_mapper = mapper(Engineer, engineers_table, inherits=employee_mapper, \
-        concrete=True, polymorphic_identity='engineer')
-
-Upon select, the polymorphic union produces a query like this:
-
-.. sourcecode:: python+sql
-
-    session.query(Employee).all()
-    {opensql}
-    SELECT pjoin.type AS pjoin_type, pjoin.manager_data AS pjoin_manager_data, pjoin.employee_id AS pjoin_employee_id,
-    pjoin.name AS pjoin_name, pjoin.engineer_info AS pjoin_engineer_info
-    FROM (
-        SELECT employees.employee_id AS employee_id, CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS name,
-        CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type
-        FROM employees
-    UNION ALL
-        SELECT managers.employee_id AS employee_id, managers.manager_data AS manager_data, managers.name AS name,
-        CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type
-        FROM managers
-    UNION ALL
-        SELECT engineers.employee_id AS employee_id, CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS name,
-        engineers.engineer_info AS engineer_info, 'engineer' AS type
-        FROM engineers
-    ) AS pjoin
-    []
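
The column-padding behavior of :func:`~sqlalchemy.orm.util.polymorphic_union` can be sketched by treating each table as a list of row dicts: missing columns are filled with ``None``, mirroring the ``CAST(NULL ...)`` expressions above, and a virtual ``type`` column is added. The function below is an illustrative simplification, not the real implementation:

```python
def sketch_polymorphic_union(table_map, typecolname):
    """Pad each table's rows out to the union of all column names and tag
    each row with its discriminator value, as the UNION ALL above does."""
    all_columns = set()
    for rows in table_map.values():
        for row in rows:
            all_columns.update(row)
    result = []
    for identity, rows in sorted(table_map.items()):
        for row in rows:
            padded = dict((col, row.get(col)) for col in all_columns)
            padded[typecolname] = identity
            result.append(padded)
    return result

pjoin_rows = sketch_polymorphic_union({
    'employee': [{'employee_id': 1, 'name': 'ed'}],
    'manager': [{'employee_id': 2, 'name': 'dilbert', 'manager_data': 'x'}],
}, 'type')

# the employee row is padded with manager_data=None and tagged 'employee'
assert pjoin_rows[0] == {'employee_id': 1, 'name': 'ed',
                         'manager_data': None, 'type': 'employee'}
assert pjoin_rows[1]['type'] == 'manager'
```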
-
-Using Relationships with Inheritance
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Both joined-table and single table inheritance scenarios produce mappings which are usable in :func:`~sqlalchemy.orm.relationship` functions; that is, it's possible to map a parent object to a child object which is polymorphic.  Similarly, inheriting mappers can have :func:`~sqlalchemy.orm.relationship` objects of their own at any level, which are inherited by each child class.  The only requirement for relationships is that there is a table relationship between parent and child.  An example is the following modification to the joined table inheritance example, which sets up a bi-directional relationship between ``Employee`` and ``Company``:
-
-.. sourcecode:: python+sql
-
-    employees_table = Table('employees', metadata,
-        Column('employee_id', Integer, primary_key=True),
-        Column('name', String(50)),
-        Column('company_id', Integer, ForeignKey('companies.company_id'))
-    )
-
-    companies = Table('companies', metadata,
-       Column('company_id', Integer, primary_key=True),
-       Column('name', String(50)))
-
-    class Company(object):
-        pass
-
-    mapper(Company, companies, properties={
-       'employees': relationship(Employee, backref='company')
-    })
-
-SQLAlchemy has a lot of experience in this area; the optimized "outer join" approach can be used freely for parent and child relationships, eager loads are fully usable, and :func:`~sqlalchemy.orm.aliased` objects and other techniques are fully supported as well.
-
-In a concrete inheritance scenario, mapping relationships is more difficult since the distinct classes do not share a table.  In this case, you *can* establish a relationship from parent to child as long as a join condition can be constructed from parent to child, i.e. each child table contains a foreign key to the parent:
-
-.. sourcecode:: python+sql
-
-    companies = Table('companies', metadata,
-       Column('id', Integer, primary_key=True),
-       Column('name', String(50)))
-
-    employees_table = Table('employees', metadata,
-        Column('employee_id', Integer, primary_key=True),
-        Column('name', String(50)),
-        Column('company_id', Integer, ForeignKey('companies.id'))
-    )
-
-    managers_table = Table('managers', metadata,
-        Column('employee_id', Integer, primary_key=True),
-        Column('name', String(50)),
-        Column('manager_data', String(50)),
-        Column('company_id', Integer, ForeignKey('companies.id'))
-    )
-
-    engineers_table = Table('engineers', metadata,
-        Column('employee_id', Integer, primary_key=True),
-        Column('name', String(50)),
-        Column('engineer_info', String(50)),
-        Column('company_id', Integer, ForeignKey('companies.id'))
-    )
-
-    # 'pjoin' is the selectable produced by polymorphic_union() for the
-    # concrete inheritance setup
-    employee_mapper = mapper(Employee, employees_table, with_polymorphic=('*', pjoin), polymorphic_on=pjoin.c.type, polymorphic_identity='employee')
-    mapper(Manager, managers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='manager')
-    mapper(Engineer, engineers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='engineer')
-    mapper(Company, companies, properties={
-        'employees': relationship(Employee)
-    })
-
-The big limitation with concrete table inheritance is that :func:`~sqlalchemy.orm.relationship` objects placed on each concrete mapper do **not** propagate to child mappers.  If you want to have the same :func:`~sqlalchemy.orm.relationship` objects set up on all concrete mappers, they must be configured manually on each.  To configure back references in such a configuration the ``back_populates`` keyword may be used instead of ``backref``, such as below where both ``A(object)`` and ``B(A)`` bidirectionally reference ``C``::
-
-    ajoin = polymorphic_union({
-            'a':a_table,
-            'b':b_table
-        }, 'type', 'ajoin')
-
-    mapper(A, a_table, with_polymorphic=('*', ajoin),
-        polymorphic_on=ajoin.c.type, polymorphic_identity='a',
-        properties={
-            'some_c':relationship(C, back_populates='many_a')
-    })
-    mapper(B, b_table,inherits=A, concrete=True,
-        polymorphic_identity='b',
-        properties={
-            'some_c':relationship(C, back_populates='many_a')
-    })
-    mapper(C, c_table, properties={
-        'many_a':relationship(A, collection_class=set, back_populates='some_c'),
-    })
-
-
-.. _maptojoin:
-
-Mapping a Class against Multiple Tables
-----------------------------------------
-
-Mappers can be constructed against arbitrary relational units (called ``Selectables``) as well as plain ``Tables``.  For example, the ``join`` function from the SQL package creates a neat selectable unit composed of multiple tables, complete with its own composite primary key, which can be passed in to a mapper as the table.
-
-.. sourcecode:: python+sql
-
-    from sqlalchemy.sql import join
-    
-    class AddressUser(object):
-        pass
-
-    # define a Join
-    j = join(users_table, addresses_table)
-
-    # map to it - the identity of an AddressUser object will be
-    # based on (user_id, address_id) since those are the primary keys involved
-    mapper(AddressUser, j, properties={
-        'user_id': [users_table.c.user_id, addresses_table.c.user_id]
-    })
-
-A second example:
-
-.. sourcecode:: python+sql
-
-    from sqlalchemy.sql import join
-
-    # many-to-many join on an association table
-    j = join(users_table, userkeywords,
-            users_table.c.user_id==userkeywords.c.user_id).join(keywords,
-               userkeywords.c.keyword_id==keywords.c.keyword_id)
-
-    # a class
-    class KeywordUser(object):
-        pass
-
-    # map to it - the identity of a KeywordUser object will be
-    # (user_id, keyword_id) since those are the primary keys involved
-    mapper(KeywordUser, j, properties={
-        'user_id': [users_table.c.user_id, userkeywords.c.user_id],
-        'keyword_id': [userkeywords.c.keyword_id, keywords.c.keyword_id]
-    })
-
-In both examples above, "composite" columns were added as properties to the mappers; these are aggregations of multiple columns into one mapper property, which instructs the mapper to keep both of those columns set at the same value.
-
-Mapping a Class against Arbitrary Selects
-------------------------------------------
-
-Similar to mapping against a join, a plain select() object can be used with a mapper as well.  Below, an example select which contains two aggregate functions and a group_by is mapped to a class:
-
-.. sourcecode:: python+sql
-
-    from sqlalchemy.sql import select, func
-
-    s = select([customers,
-                func.count(orders.c.customer_id).label('order_count'),
-                func.max(orders.c.price).label('highest_order')],
-                customers.c.customer_id==orders.c.customer_id,
-                group_by=[c for c in customers.c]
-                ).alias('somealias')
-    class Customer(object):
-        pass
-
-    mapper(Customer, s)
-
-Above, the "customers" table is joined against the "orders" table to produce a full row for each customer row, the total count of related rows in the "orders" table, and the highest price in the "orders" table, grouped against the full set of columns in the "customers" table.  That query is then mapped against the Customer class.  New instances of Customer will contain attributes for each column in the "customers" table as well as an "order_count" and "highest_order" attribute.  Updates to the Customer object will only be reflected in the "customers" table and not the "orders" table.  This is because the primary key columns of the "orders" table are not represented in this mapper and therefore the table is not affected by save or delete operations.
-
-Multiple Mappers for One Class
--------------------------------
-
-The first mapper created for a certain class is known as that class's "primary mapper."  Other mappers can be created as well on the "load side" - these are called **secondary mappers**.   This is a mapper that must be constructed with the keyword argument ``non_primary=True``, and represents a load-only mapper.  Objects that are loaded with a secondary mapper will have their save operation processed by the primary mapper.  It is also invalid to add new :func:`~sqlalchemy.orm.relationship` objects to a non-primary mapper. To use this mapper with the Session, specify it to the :meth:`~sqlalchemy.orm.session.Session.query` method.  For example:
-
-.. sourcecode:: python+sql
-
-    # primary mapper
-    mapper(User, users_table)
-
-    # make a secondary mapper to load User against a join
-    othermapper = mapper(User, users_table.join(someothertable), non_primary=True)
-
-    # select
-    result = session.query(othermapper).all()
-
-The "non primary mapper" is a rarely needed feature of SQLAlchemy; in most cases, the :class:`~sqlalchemy.orm.query.Query` object can produce any kind of query that's desired.  It's recommended that a straight :class:`~sqlalchemy.orm.query.Query` be used in place of a non-primary mapper unless the mapper approach is absolutely needed.  Current use cases for the "non primary mapper" are when you want to map the class to a particular select statement or view to which additional query criterion can be added, and for when the particular mapped select statement or view is to be placed in a :func:`~sqlalchemy.orm.relationship` of a parent mapper.
-
-Multiple "Persistence" Mappers for One Class
----------------------------------------------
-
-The non_primary mapper defines alternate mappers for the purposes of loading objects.  What if we want the same class to be *persisted* differently, such as to different tables?   SQLAlchemy
-refers to this as the "entity name" pattern, and in Python one can use a recipe which creates
-anonymous subclasses which are distinctly mapped.  See the recipe at `Entity Name <http://www.sqlalchemy.org/trac/wiki/UsageRecipes/EntityName>`_.
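The core idea of that recipe can be sketched in plain Python.  The helper and class names below are hypothetical, not part of the SQLAlchemy API; a real recipe would go on to call ``mapper(newcls, some_table)`` on each generated class:

```python
# A minimal sketch of the "entity name" idea: create a distinct,
# anonymous subclass per table, so that each subclass can receive
# its own primary mapper against a different table.
class Shape(object):
    def __init__(self, data):
        self.data = data

def entity_for(cls, entity_name):
    # each call produces a brand new class object, distinctly mappable
    return type(entity_name, (cls,), {})

PolygonShape = entity_for(Shape, 'PolygonShape')
CircleShape = entity_for(Shape, 'CircleShape')
```

Because ``PolygonShape`` and ``CircleShape`` are distinct classes, each can carry its own primary mapper while sharing all behavior defined on ``Shape``.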
-
-Constructors and Object Initialization
----------------------------------------
-
-Mapping imposes no restrictions or requirements on the constructor (``__init__``) method for the class. You are free to require any arguments for the function
-that you wish, assign attributes to the instance that are unknown to the ORM, and generally do anything else you would normally do when writing a constructor
-for a Python class.
-
-The SQLAlchemy ORM does not call ``__init__`` when recreating objects from database rows. The ORM's process is somewhat akin to the Python standard library's
-``pickle`` module, invoking the low level ``__new__`` method and then quietly restoring attributes directly on the instance rather than calling ``__init__``.
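The ``pickle``-style reconstruction can be illustrated in plain Python.  This is a simplified sketch of the mechanism, not the ORM's actual loading code:

```python
class Point(object):
    def __init__(self, x, y):
        # marker attribute so we can tell whether __init__ ran
        self.initialized = True
        self.x, self.y = x, y

# construct the instance without invoking __init__, then restore
# attribute state directly, much as pickle and the ORM do internally
p = Point.__new__(Point)
p.__dict__.update({'x': 3, 'y': 4})
```

Here ``p`` has ``x`` and ``y`` set, but no ``initialized`` attribute, since ``__init__`` was never called.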
-
-If you need to do some setup on database-loaded instances before they're ready to use, you can use the ``@reconstructor`` decorator to tag a method as the ORM
-counterpart to ``__init__``. SQLAlchemy will call this method with no arguments every time it loads or reconstructs one of your instances. This is useful for
-recreating transient properties that are normally assigned in your ``__init__``::
-
-    from sqlalchemy import orm
-
-    class MyMappedClass(object):
-        def __init__(self, data):
-            self.data = data
-            # we need stuff on all instances, but not in the database.
-            self.stuff = []
-
-        @orm.reconstructor
-        def init_on_load(self):
-            self.stuff = []
-
-When ``obj = MyMappedClass()`` is executed, Python calls the ``__init__`` method as normal and the ``data`` argument is required. When instances are loaded
-during a :class:`~sqlalchemy.orm.query.Query` operation as in ``query(MyMappedClass).one()``, ``init_on_load`` is called instead.
-
-Any method may be tagged as the :func:`~sqlalchemy.orm.reconstructor`, even the ``__init__`` method. SQLAlchemy will call the reconstructor method with no arguments. Scalar
-(non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet
-available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush()
-operation, so the activity within a reconstructor should be conservative.
-
-While the ORM does not call your ``__init__`` method, it will modify the class's ``__init__`` slightly. The method is lightly wrapped to act as a trigger for
-the ORM, allowing mappers to be compiled automatically, and fires a :func:`~sqlalchemy.orm.interfaces.MapperExtension.init_instance` event that :class:`~sqlalchemy.orm.interfaces.MapperExtension` objects may listen for.
-:class:`~sqlalchemy.orm.interfaces.MapperExtension` objects can also listen for a ``reconstruct_instance`` event, analogous to the :func:`~sqlalchemy.orm.reconstructor` decorator above.
-
-.. _extending_mapper:
-
-Extending Mapper
------------------
-
-Mappers can have functionality augmented or replaced at many points in their execution via the usage of the MapperExtension class.  This class is just a series of "hooks" where various functionality takes place.  An application can make its own MapperExtension objects, overriding only the methods it needs.  Methods that are not overridden return the special value ``sqlalchemy.orm.EXT_CONTINUE`` to allow processing to continue to the next MapperExtension, or simply proceed normally if there are no more extensions.
-
-API documentation for MapperExtension: :class:`sqlalchemy.orm.interfaces.MapperExtension`
-
-To use MapperExtension, make your own subclass of it and just send it off to a mapper::
-
-    m = mapper(User, users_table, extension=MyExtension())
-
-Multiple extensions will be chained together and processed in order; they are specified as a list::
-
-    m = mapper(User, users_table, extension=[ext1, ext2, ext3])
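The chaining behavior can be sketched in plain Python.  The ``dispatch`` function below is an illustrative stand-in for the mapper's internal hook processing, not SQLAlchemy code:

```python
EXT_CONTINUE = object()   # stand-in for sqlalchemy.orm.EXT_CONTINUE

def dispatch(extensions, method_name, *args):
    # try each extension in order; the first one returning something
    # other than EXT_CONTINUE short-circuits the chain
    for ext in extensions:
        hook = getattr(ext, method_name, None)
        if hook is None:
            continue
        ret = hook(*args)
        if ret is not EXT_CONTINUE:
            return ret
    return EXT_CONTINUE   # no extension handled it; proceed normally

class LoggingExt(object):
    def before_insert(self, instance):
        return EXT_CONTINUE   # observe only; let processing continue

class VetoExt(object):
    def before_insert(self, instance):
        return 'handled'      # stops the chain here
```

With ``[LoggingExt(), VetoExt()]``, the logging hook runs first and passes control along, while the second hook handles the event and ends the chain.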
-
-.. _advdatamapping_relationship:
-
-Relationship Configuration
-==========================
-
-Basic Relational Patterns
---------------------------
-
-A quick walkthrough of the basic relational patterns. In this section we
-illustrate the classical mapping using :func:`mapper` in conjunction with
-:func:`relationship`. Then (by popular demand), we illustrate the declarative
-form using the :mod:`~sqlalchemy.ext.declarative` module.
-
-Note that :func:`.relationship` is historically known as
-:func:`.relation` in older versions of SQLAlchemy.
-
-One To Many
-~~~~~~~~~~~~
-
-A one to many relationship places a foreign key in the child table referencing
-the parent. SQLAlchemy creates the relationship as a collection on the parent
-object containing instances of the child object.
-
-.. sourcecode:: python+sql
-
-    parent_table = Table('parent', metadata,
-        Column('id', Integer, primary_key=True))
-
-    child_table = Table('child', metadata,
-        Column('id', Integer, primary_key=True),
-        Column('parent_id', Integer, ForeignKey('parent.id'))
-    )
-
-    class Parent(object):
-        pass
-
-    class Child(object):
-        pass
-
-    mapper(Parent, parent_table, properties={
-        'children': relationship(Child)
-    })
-
-    mapper(Child, child_table)
-
-To establish a bi-directional relationship in one-to-many, where the "reverse" side is a many to one, specify the ``backref`` option:
-
-.. sourcecode:: python+sql
-
-    mapper(Parent, parent_table, properties={
-        'children': relationship(Child, backref='parent')
-    })
-
-    mapper(Child, child_table)
-
-``Child`` will get a ``parent`` attribute with many-to-one semantics.
-
-Declarative::
-    
-    from sqlalchemy.ext.declarative import declarative_base
-    Base = declarative_base()
-    
-    class Parent(Base):
-        __tablename__ = 'parent'
-        id = Column(Integer, primary_key=True)
-        children = relationship("Child", backref="parent")
-        
-    class Child(Base):
-        __tablename__ = 'child'
-        id = Column(Integer, primary_key=True)
-        parent_id = Column(Integer, ForeignKey('parent.id'))
-        
-
-Many To One
-~~~~~~~~~~~~
-
-Many to one places a foreign key in the parent table referencing the child.
-The mapping setup is identical to one-to-many, however SQLAlchemy creates the
-relationship as a scalar attribute on the parent object referencing a single
-instance of the child object.
-
-.. sourcecode:: python+sql
-
-    parent_table = Table('parent', metadata,
-        Column('id', Integer, primary_key=True),
-        Column('child_id', Integer, ForeignKey('child.id')))
-
-    child_table = Table('child', metadata,
-        Column('id', Integer, primary_key=True),
-        )
-
-    class Parent(object):
-        pass
-
-    class Child(object):
-        pass
-
-    mapper(Parent, parent_table, properties={
-        'child': relationship(Child)
-    })
-
-    mapper(Child, child_table)
-
-Backref behavior is available here as well, where ``backref="parents"`` will
-place a one-to-many collection on the ``Child`` class::
-
-    mapper(Parent, parent_table, properties={
-        'child': relationship(Child, backref="parents")
-    })
-
-Declarative::
-
-    from sqlalchemy.ext.declarative import declarative_base
-    Base = declarative_base()
-
-    class Parent(Base):
-        __tablename__ = 'parent'
-        id = Column(Integer, primary_key=True)
-        child_id = Column(Integer, ForeignKey('child.id'))
-        child = relationship("Child", backref="parents")
-        
-    class Child(Base):
-        __tablename__ = 'child'
-        id = Column(Integer, primary_key=True)
-
-One To One
-~~~~~~~~~~~
-
-One To One is essentially a bi-directional relationship with a scalar
-attribute on both sides. To achieve this, the ``uselist=False`` flag indicates
-the placement of a scalar attribute instead of a collection on the "many" side
-of the relationship. To convert one-to-many into one-to-one::
-
-    parent_table = Table('parent', metadata,
-        Column('id', Integer, primary_key=True)
-    )
-
-    child_table = Table('child', metadata,
-        Column('id', Integer, primary_key=True),
-        Column('parent_id', Integer, ForeignKey('parent.id'))
-    )
-
-    mapper(Parent, parent_table, properties={
-        'child': relationship(Child, uselist=False, backref='parent')
-    })
-    
-    mapper(Child, child_table)
-
-Or to turn a one-to-many backref into one-to-one, use the :func:`.backref` function
-to provide arguments for the reverse side::
-    
-    from sqlalchemy.orm import backref
-    
-    parent_table = Table('parent', metadata,
-        Column('id', Integer, primary_key=True),
-        Column('child_id', Integer, ForeignKey('child.id'))
-    )
-
-    child_table = Table('child', metadata,
-        Column('id', Integer, primary_key=True)
-    )
-
-    mapper(Parent, parent_table, properties={
-        'child': relationship(Child, backref=backref('parent', uselist=False))
-    })
-
-    mapper(Child, child_table)
-
-The second example above as declarative::
-
-    from sqlalchemy.ext.declarative import declarative_base
-    Base = declarative_base()
-
-    class Parent(Base):
-        __tablename__ = 'parent'
-        id = Column(Integer, primary_key=True)
-        child_id = Column(Integer, ForeignKey('child.id'))
-        child = relationship("Child", backref=backref("parent", uselist=False))
-        
-    class Child(Base):
-        __tablename__ = 'child'
-        id = Column(Integer, primary_key=True)
-    
-Many To Many
-~~~~~~~~~~~~~
-
-Many to Many adds an association table between two classes. The association
-table is indicated by the ``secondary`` argument to
-:func:`.relationship`.
-
-.. sourcecode:: python+sql
-
-    left_table = Table('left', metadata,
-        Column('id', Integer, primary_key=True)
-    )
-
-    right_table = Table('right', metadata,
-        Column('id', Integer, primary_key=True)
-    )
-
-    association_table = Table('association', metadata,
-        Column('left_id', Integer, ForeignKey('left.id')),
-        Column('right_id', Integer, ForeignKey('right.id'))
-    )
-
-    mapper(Parent, left_table, properties={
-        'children': relationship(Child, secondary=association_table)
-    })
-
-    mapper(Child, right_table)
-
-For a bi-directional relationship, both sides of the relationship contain a
-collection.  The ``backref`` keyword will automatically use
-the same ``secondary`` argument for the reverse relationship:
-
-.. sourcecode:: python+sql
-
-    mapper(Parent, left_table, properties={
-        'children': relationship(Child, secondary=association_table, 
-                                        backref='parents')
-    })
-
-With declarative, we still use the :class:`.Table` for the ``secondary`` 
-argument.  A class is not mapped to this table, so it remains in its 
-plain schematic form::
-
-    from sqlalchemy.ext.declarative import declarative_base
-    Base = declarative_base()
-
-    association_table = Table('association', Base.metadata,
-        Column('left_id', Integer, ForeignKey('left.id')),
-        Column('right_id', Integer, ForeignKey('right.id'))
-    )
-    
-    class Parent(Base):
-        __tablename__ = 'left'
-        id = Column(Integer, primary_key=True)
-        children = relationship("Child", 
-                        secondary=association_table, 
-                        backref="parents")
-        
-    class Child(Base):
-        __tablename__ = 'right'
-        id = Column(Integer, primary_key=True)
-    
-.. _association_pattern:
-
-Association Object
-~~~~~~~~~~~~~~~~~~
-
-The association object pattern is a variant on many-to-many: it specifically
-is used when your association table contains additional columns beyond those
-which are foreign keys to the left and right tables. Instead of using the
-``secondary`` argument, you map a new class directly to the association table.
-The left side of the relationship references the association object via
-one-to-many, and the association class references the right side via
-many-to-one.
-
-.. sourcecode:: python+sql
-
-    left_table = Table('left', metadata,
-        Column('id', Integer, primary_key=True)
-    )
-
-    right_table = Table('right', metadata,
-        Column('id', Integer, primary_key=True)
-    )
-
-    association_table = Table('association', metadata,
-        Column('left_id', Integer, ForeignKey('left.id'), primary_key=True),
-        Column('right_id', Integer, ForeignKey('right.id'), primary_key=True),
-        Column('data', String(50))
-    )
-
-    mapper(Parent, left_table, properties={
-        'children':relationship(Association)
-    })
-
-    mapper(Association, association_table, properties={
-        'child':relationship(Child)
-    })
-
-    mapper(Child, right_table)
-
-The bi-directional version adds backrefs to both relationships:
-
-.. sourcecode:: python+sql
-
-    mapper(Parent, left_table, properties={
-        'children':relationship(Association, backref="parent")
-    })
-
-    mapper(Association, association_table, properties={
-        'child':relationship(Child, backref="parent_assocs")
-    })
-
-    mapper(Child, right_table)
-
-Declarative::
-
-    from sqlalchemy.ext.declarative import declarative_base
-    Base = declarative_base()
-
-    class Association(Base):
-        __tablename__ = 'association'
-        left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
-        right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
-        child = relationship("Child", backref="parent_assocs")
-        
-    class Parent(Base):
-        __tablename__ = 'left'
-        id = Column(Integer, primary_key=True)
-        children = relationship(Association, backref="parent")
-        
-    class Child(Base):
-        __tablename__ = 'right'
-        id = Column(Integer, primary_key=True)
-        
-Working with the association pattern in its direct form requires that child
-objects are associated with an association instance before being appended to
-the parent; similarly, access from parent to child goes through the
-association object:
-
-.. sourcecode:: python+sql
-
-    # create parent, append a child via association
-    p = Parent()
-    a = Association()
-    a.child = Child()
-    p.children.append(a)
-
-    # iterate through child objects via association, including association
-    # attributes
-    for assoc in p.children:
-        print assoc.data
-        print assoc.child
-
-To enhance the association object pattern such that direct
-access to the ``Association`` object is optional, SQLAlchemy
-provides the :ref:`associationproxy` extension. This
-extension allows the configuration of attributes which will
-access two "hops" with a single access, one "hop" to the
-associated object, and a second to a target attribute.
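The "two hop" access can be approximated with a plain property, shown here as an illustrative sketch only; the actual :ref:`associationproxy` extension is considerably more capable (it supports writes, creators, and collection semantics):

```python
class Association(object):
    def __init__(self, child, data):
        self.child = child
        self.data = data

class Parent(object):
    def __init__(self):
        self.children = []   # collection of Association objects

    @property
    def child_objects(self):
        # hop one: the association object; hop two: its .child attribute
        return [assoc.child for assoc in self.children]
```

Reading ``some_parent.child_objects`` then reaches through the association layer in a single attribute access.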
-
-.. note:: When using the association object pattern, it is
-  advisable that the association-mapped table not be used
-  as the ``secondary`` argument on a :func:`.relationship`
-  elsewhere, unless that :func:`.relationship` contains
-  the option ``viewonly=True``.   SQLAlchemy otherwise 
-  may attempt to emit redundant INSERT and DELETE 
-  statements on the same table, if similar state is detected
-  on the related attribute as well as the associated
-  object.
-
-Adjacency List Relationships
------------------------------
-
-The **adjacency list** pattern is a common relational pattern whereby a table
-contains a foreign key reference to itself. This is the most common and simple
-way to represent hierarchical data in flat tables. The other way is the
-"nested sets" model, sometimes called "modified preorder". Despite what many
-online articles say about modified preorder, the adjacency list model is
-probably the most appropriate pattern for the large majority of hierarchical
-storage needs, for reasons of concurrency and reduced complexity, and because
-modified preorder has little advantage over an application which can fully
-load subtrees into the application space.
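The "fully load subtrees" point can be illustrated in plain Python: given all rows of an adjacency-list table at once, the parent/child structure is assembled in memory in a single pass.  This sketch uses bare ``(id, parent_id)`` tuples rather than any SQLAlchemy construct:

```python
# Illustrative only: assemble parent -> children links in one pass over
# adjacency-list rows of (id, parent_id).  A parent_id of None marks a root.
rows = [
    (1, None),
    (2, 1),
    (3, 1),
    (4, 3),
]

children = {}
for node_id, parent_id in rows:
    children.setdefault(parent_id, []).append(node_id)

# children[None] holds the roots; children[n] the direct children of node n
```

No recursive SQL is needed; a single SELECT of the table suffices to rebuild the whole tree in application space.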
-
-SQLAlchemy commonly refers to an adjacency list relationship as a
-**self-referential mapper**. In this example, we'll work with a single table
-called ``nodes`` to represent a tree structure::
-
-    nodes = Table('nodes', metadata,
-        Column('id', Integer, primary_key=True),
-        Column('parent_id', Integer, ForeignKey('nodes.id')),
-        Column('data', String(50)),
-        )
-
-A graph such as the following::
-