Commits

Mike Bayer committed bb1fb0f

almost through.

  • Parent commits 5473297


Files changed (58)

File doc/build/core/compiler.rst

+.. _sqlalchemy.ext.compiler_toplevel:
+
+Custom SQL Constructs and Compilation Extension
+===============================================
+
+.. automodule:: sqlalchemy.ext.compiler
+    :members:

File doc/build/core/connections.rst

+=====================================
+Working with Engines and Connections
+=====================================
+
+.. module:: sqlalchemy.engine.base
+
+Recall from the beginning of :ref:`engines_toplevel` that the :class:`.Engine` provides a
+``connect()`` method which returns a
+:class:`~sqlalchemy.engine.base.Connection` object.
+:class:`~sqlalchemy.engine.base.Connection` is a *proxy* object which
+maintains a reference to a DBAPI connection instance. The ``close()`` method
+on :class:`~sqlalchemy.engine.base.Connection` does not actually close the
+DBAPI connection, but instead returns it to the connection pool referenced by
+the :class:`~sqlalchemy.engine.base.Engine`.
+:class:`~sqlalchemy.engine.base.Connection` will also automatically return its
+resources to the connection pool when the object is garbage collected, i.e.
+its ``__del__()`` method is called. When using the standard C implementation
+of Python, this method is usually called immediately as soon as the object is
+dereferenced. With other Python implementations such as Jython, this is not so
+guaranteed.
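The proxying behavior described above can be pictured with a small standalone sketch (plain Python with hypothetical names, not SQLAlchemy's actual implementation): "closing" the proxy checks the raw DBAPI connection back into a pool rather than closing it, and garbage collection triggers the same path.

```python
# Toy illustration of the Connection-as-proxy idea; the names here are
# invented for the sketch and are NOT SQLAlchemy's implementation.
class ToyPool(object):
    def __init__(self):
        self._free = []            # raw DBAPI connections ready for reuse

    def checkout(self):
        # hand out a pooled connection, or "open" a new one
        return self._free.pop() if self._free else object()

    def checkin(self, raw):
        self._free.append(raw)     # returned to the pool, not closed

class ToyConnection(object):
    """Proxy holding a raw connection checked out from the pool."""
    def __init__(self, pool):
        self._pool = pool
        self._raw = pool.checkout()

    def close(self):
        # "close" means: give the raw connection back to the pool
        if self._raw is not None:
            self._pool.checkin(self._raw)
            self._raw = None

    def __del__(self):
        # garbage collection also returns the resource
        self.close()

pool = ToyPool()
conn = ToyConnection(pool)
raw = conn._raw
conn.close()
assert pool._free == [raw]         # same raw connection, back in the pool
```

A second `ToyConnection(pool)` would receive the very same raw connection object, mirroring how a real pool recycles DBAPI connections across `connect()`/`close()` cycles.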
+
+The ``execute()`` methods on both :class:`~sqlalchemy.engine.base.Engine` and
+:class:`~sqlalchemy.engine.base.Connection` can also receive SQL clause
+constructs::
+
+    connection = engine.connect()
+    result = connection.execute(select([table1], table1.c.col1==5))
+    for row in result:
+        print row['col1'], row['col2']
+    connection.close()
+
+The above SQL construct is known as a ``select()``. The full range of SQL
+constructs available is described in :ref:`sqlexpression_toplevel`.
+
+Both :class:`~sqlalchemy.engine.base.Connection` and
+:class:`~sqlalchemy.engine.base.Engine` fulfill an interface known as
+:class:`~sqlalchemy.engine.base.Connectable` which specifies common
+functionality between the two objects, namely being able to call ``connect()``
+to return a :class:`~sqlalchemy.engine.base.Connection` object
+(:class:`~sqlalchemy.engine.base.Connection` just returns itself), and being
+able to call ``execute()`` to get a result set. Following this, most
+SQLAlchemy functions and objects which accept an
+:class:`~sqlalchemy.engine.base.Engine` as a parameter or attribute with which
+to execute SQL will also accept a :class:`~sqlalchemy.engine.base.Connection`.
+This argument is named ``bind``::
+
+    engine = create_engine('sqlite:///:memory:')
+
+    # specify some Table metadata
+    metadata = MetaData()
+    table = Table('sometable', metadata, Column('col1', Integer))
+
+    # create the table with the Engine
+    table.create(bind=engine)
+
+    # drop the table with a Connection off the Engine
+    connection = engine.connect()
+    table.drop(bind=connection)
+
+.. index::
+   single: thread safety; connections
+
+Connection facts:
+
+* the Connection object is **not thread-safe**. While a Connection can be
+  shared among threads using properly synchronized access, this is still not
+  recommended, as many DBAPIs have issues with, if not outright disallow,
+  sharing of connection state between threads.
+* The Connection object represents a single DBAPI connection checked out from
+  the connection pool. In this state, the connection pool has no effect upon
+  the connection, including its expiration or timeout state. For the
+  connection pool to properly manage connections, **connections should be
+  returned to the connection pool (i.e. ``connection.close()``) whenever the
+  connection is not in use**. If your application has a need for management of
+  multiple connections or is otherwise long running (this includes all web
+  applications, threaded or not), don't hold a single connection open at the
+  module level.
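The "always return the connection" discipline from the facts above can be kept with ``try``/``finally``; here is a generic sketch using the standard library's ``sqlite3`` DBAPI directly (not SQLAlchemy-specific, and ``fetch_all`` is an invented helper name):

```python
import sqlite3

def fetch_all(sql):
    # check a connection "out" (here: simply open one), and guarantee it
    # is given back in all cases, success or error
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute("CREATE TABLE t (x INTEGER)")
        conn.execute("INSERT INTO t VALUES (1)")
        return conn.execute(sql).fetchall()
    finally:
        conn.close()   # with a pool, this would be a check-in, not a close

rows = fetch_all("SELECT x FROM t")
```

With a pooled Engine the shape is identical: acquire in one place, release in the ``finally``, and never park the connection at module scope.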
+
+Connection API
+===============
+
+.. autoclass:: Connection
+   :members:
+   :undoc-members:
+
+.. autoclass:: Connectable
+   :members:
+
+Engine API
+===========
+
+.. autoclass:: Engine
+   :members:
+
+Result Object API
+=================
+
+.. autoclass:: sqlalchemy.engine.base.ResultProxy
+    :members:
+    
+.. autoclass:: sqlalchemy.engine.base.RowProxy
+    :members:
+
+Using Connection-level Transactions
+===================================
+
+The :class:`~sqlalchemy.engine.base.Connection` object provides a ``begin()``
+method which returns a :class:`~sqlalchemy.engine.base.Transaction` object.
+This object is usually used within a try/except clause so that it is
+guaranteed to ``rollback()`` or ``commit()``::
+
+    trans = connection.begin()
+    try:
+        r1 = connection.execute(table1.select())
+        connection.execute(table1.insert(), col1=7, col2='this is some data')
+        trans.commit()
+    except:
+        trans.rollback()
+        raise
+
+The :class:`~sqlalchemy.engine.base.Transaction` object also handles "nested"
+behavior by keeping track of the outermost begin/commit pair. In this example,
+two functions both issue a transaction on a Connection, but only the outermost
+Transaction object actually takes effect when it is committed.
+
+.. sourcecode:: python+sql
+
+    # method_a starts a transaction and calls method_b
+    def method_a(connection):
+        trans = connection.begin() # open a transaction
+        try:
+            method_b(connection)
+            trans.commit()  # transaction is committed here
+        except:
+            trans.rollback() # this rolls back the transaction unconditionally
+            raise
+
+    # method_b also starts a transaction
+    def method_b(connection):
+        trans = connection.begin() # open a transaction - this runs in the context of method_a's transaction
+        try:
+            connection.execute("insert into mytable values ('bat', 'lala')")
+            connection.execute(mytable.insert(), col1='bat', col2='lala')
+            trans.commit()  # transaction is not committed yet
+        except:
+            trans.rollback() # this rolls back the transaction unconditionally
+            raise
+
+    # open a Connection and call method_a
+    conn = engine.connect()
+    method_a(conn)
+    conn.close()
+
+Above, ``method_a`` is called first, which calls ``connection.begin()``. Then
+it calls ``method_b``. When ``method_b`` calls ``connection.begin()``, it just
+increments a counter that is decremented when it calls ``commit()``. If either
+``method_a`` or ``method_b`` calls ``rollback()``, the whole transaction is
+rolled back. The transaction is not committed until ``method_a`` calls the
+``commit()`` method. This "nesting" behavior allows the creation of functions
+which "guarantee" that a transaction will be used if one was not already
+available, but will automatically participate in an enclosing transaction if
+one exists.
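The counter-based nesting described above can be sketched roughly as follows (hypothetical class, much simpler than SQLAlchemy's real :class:`~sqlalchemy.engine.base.Transaction`):

```python
class ToyNestedTransaction(object):
    """Sketch of begin/commit nesting via a depth counter."""
    def __init__(self):
        self.depth = 0
        self.committed = False

    def begin(self):
        self.depth += 1            # inner begin() just increments
        return self

    def commit(self):
        self.depth -= 1
        if self.depth == 0:        # only the outermost commit takes effect
            self.committed = True

txn = ToyNestedTransaction()
outer = txn.begin()     # as in method_a
inner = txn.begin()     # as in method_b
inner.commit()          # decrements the counter; no real COMMIT yet
assert not txn.committed
outer.commit()          # outermost pair completes: COMMIT happens here
assert txn.committed
```

A ``rollback()`` at any depth would mark the whole unit as failed, which is why the pattern guarantees "one transaction, shared by all participants".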
+
+Note that SQLAlchemy's Object Relational Mapper also provides a way to control
+transaction scope at a higher level; this is described in
+:ref:`unitofwork_transaction`.
+
+.. index::
+   single: thread safety; transactions
+
+Transaction Facts:
+
+* the Transaction object, just like its parent Connection, is **not thread-safe**.
+
+.. autoclass:: Transaction
+    :members:
+
+
+Understanding Autocommit
+========================
+
+The previous transaction example illustrates how to use
+:class:`~sqlalchemy.engine.base.Transaction` so that several executions can
+take part in the same transaction. What happens when we issue an INSERT,
+UPDATE or DELETE call without using
+:class:`~sqlalchemy.engine.base.Transaction`? The answer is **autocommit**.
+While many DBAPIs implement a flag called ``autocommit``, SQLAlchemy
+currently implements its own autocommit behavior. This is achieved by
+detecting statements which represent data-changing operations, i.e. INSERT,
+UPDATE, DELETE, etc., and then issuing a COMMIT automatically if no
+transaction is in progress. The detection is based on compiled statement
+attributes, or in the case of a text-only statement, via regular expressions.
+
+.. sourcecode:: python+sql
+
+    conn = engine.connect()
+    conn.execute("INSERT INTO users VALUES (1, 'john')")  # autocommits
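For text-only statements, the detection amounts to matching data-changing keywords at the front of the string. A simplified stand-in for that check might look like this (the actual pattern used inside SQLAlchemy differs in detail):

```python
import re

# Simplified sketch of text-statement autocommit detection; NOT the
# exact regular expression SQLAlchemy uses internally.
AUTOCOMMIT_RE = re.compile(
    r"\s*(INSERT|UPDATE|DELETE|CREATE|DROP|ALTER)\b",
    re.IGNORECASE,
)

def should_autocommit(statement):
    """Return True if this text statement would trigger an autocommit."""
    return AUTOCOMMIT_RE.match(statement) is not None

assert should_autocommit("INSERT INTO users VALUES (1, 'john')")
assert not should_autocommit("SELECT * FROM users")
```

A plain SELECT never matches, so result-returning statements leave no pending COMMIT behind.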
+
+.. _dbengine_implicit:
+
+Connectionless Execution, Implicit Execution
+=============================================
+
+Recall from the first section that we mentioned executing with and without a
+:class:`~sqlalchemy.engine.base.Connection`. "Connectionless" execution
+refers to calling the ``execute()`` method on an object which is not a
+:class:`~sqlalchemy.engine.base.Connection`, which could be on the
+:class:`~sqlalchemy.engine.base.Engine` itself, or could be a constructed SQL
+object. When we say "implicit", we mean that we are calling the ``execute()``
+method on an object which is neither a
+:class:`~sqlalchemy.engine.base.Connection` nor an
+:class:`~sqlalchemy.engine.base.Engine` object; this can only be used with
+constructed SQL objects which have their own ``execute()`` method, and can be
+"bound" to an :class:`~sqlalchemy.engine.base.Engine`. A description of
+"constructed SQL objects" may be found in :ref:`sqlexpression_toplevel`.
+
+A summary of all three methods follows below. First, assume the usage of the
+following :class:`~sqlalchemy.schema.MetaData` and
+:class:`~sqlalchemy.schema.Table` objects; while we haven't yet introduced
+these concepts, for now you only need to know that we are representing a
+database table, and are creating an "executable" SQL construct which issues a
+statement to the database. These objects are described in
+:ref:`metadata_toplevel`.
+
+.. sourcecode:: python+sql
+
+    meta = MetaData()
+    users_table = Table('users', meta,
+        Column('id', Integer, primary_key=True),
+        Column('name', String(50))
+    )
+
+Explicit execution delivers the SQL text or constructed SQL expression to the
+``execute()`` method of :class:`~sqlalchemy.engine.base.Connection`:
+
+.. sourcecode:: python+sql
+
+    engine = create_engine('sqlite:///file.db')
+    connection = engine.connect()
+    result = connection.execute(users_table.select())
+    for row in result:
+        # ....
+    connection.close()
+
+Explicit, connectionless execution delivers the expression to the
+``execute()`` method of :class:`~sqlalchemy.engine.base.Engine`:
+
+.. sourcecode:: python+sql
+
+    engine = create_engine('sqlite:///file.db')
+    result = engine.execute(users_table.select())
+    for row in result:
+        # ....
+    result.close()
+
+Implicit execution is also connectionless, and calls the ``execute()`` method
+on the expression itself, utilizing the fact that either an
+:class:`~sqlalchemy.engine.base.Engine` or
+:class:`~sqlalchemy.engine.base.Connection` has been *bound* to the expression
+object (binding is discussed further in the next section,
+:ref:`metadata_toplevel`):
+
+.. sourcecode:: python+sql
+
+    engine = create_engine('sqlite:///file.db')
+    meta.bind = engine
+    result = users_table.select().execute()
+    for row in result:
+        # ....
+    result.close()
+
+In both "connectionless" examples, the
+:class:`~sqlalchemy.engine.base.Connection` is created behind the scenes; the
+:class:`~sqlalchemy.engine.base.ResultProxy` returned by the ``execute()``
+call references the :class:`~sqlalchemy.engine.base.Connection` used to issue
+the SQL statement. When we issue ``close()`` on the
+:class:`~sqlalchemy.engine.base.ResultProxy`, or if the result set object
+falls out of scope and is garbage collected, the underlying
+:class:`~sqlalchemy.engine.base.Connection` is closed for us, resulting in the
+DBAPI connection being returned to the pool.
+
+.. _threadlocal_strategy:
+
+Using the Threadlocal Execution Strategy
+-----------------------------------------
+
+The "threadlocal" engine strategy is used by non-ORM applications which wish
+to bind a transaction to the current thread, such that all parts of the
+application can participate in that transaction implicitly without the need to
+explicitly reference a :class:`~sqlalchemy.engine.base.Connection`.
+"threadlocal" is designed for a very specific pattern of use, and is not
+appropriate unless this very specific pattern, described below, is what's
+desired. It has **no impact** on the "thread safety" of SQLAlchemy components
+or one's application. It also should not be used when using an ORM
+:class:`~sqlalchemy.orm.session.Session` object, as the
+:class:`~sqlalchemy.orm.session.Session` itself represents an ongoing
+transaction and itself handles the job of maintaining connection and
+transactional resources.
+
+Enabling ``threadlocal`` is achieved as follows:
+
+.. sourcecode:: python+sql
+
+    db = create_engine('mysql://localhost/test', strategy='threadlocal')
+
+When the engine above is used in a "connectionless" style, meaning
+``engine.execute()`` is called, a DBAPI connection is retrieved from the
+connection pool and then associated with the current thread. Subsequent
+operations on the :class:`~sqlalchemy.engine.base.Engine` while the DBAPI
+connection remains checked out will make use of the *same* DBAPI connection
+object. The connection stays allocated until all returned
+:class:`~sqlalchemy.engine.base.ResultProxy` objects are closed, which occurs
+for a particular :class:`~sqlalchemy.engine.base.ResultProxy` after all
+pending results are fetched, or immediately for an operation which returns no
+rows (such as an INSERT).
+
+.. sourcecode:: python+sql
+
+    # execute one statement and receive results.  r1 now references a DBAPI connection resource.
+    r1 = db.execute("select * from table1")
+
+    # execute a second statement and receive results.  r2 now references the *same* resource as r1
+    r2 = db.execute("select * from table2")
+
+    # fetch a row on r1 (assume more results are pending)
+    row1 = r1.fetchone()
+
+    # fetch a row on r2 (same)
+    row2 = r2.fetchone()
+
+    # close r1.  the connection is still held by r2.
+    r1.close()
+
+    # close r2.  with no more references to the underlying connection resources, they
+    # are returned to the pool.
+    r2.close()
+
+The above example does not illustrate any pattern that is particularly useful,
+as it is not a frequent occurrence that two execute/result fetching operations
+"leapfrog" one another. There is a slight savings of connection pool checkout
+overhead between the two operations, and an implicit sharing of the same
+transactional context, but since there is no explicitly declared transaction,
+this association is short lived.
+
+The real usage of "threadlocal" comes when we want several operations to occur
+within the scope of a shared transaction. The
+:class:`~sqlalchemy.engine.base.Engine` now has ``begin()``, ``commit()`` and
+``rollback()`` methods which will retrieve a connection resource from the pool
+and establish a new transaction, maintaining the connection against the
+current thread until the transaction is committed or rolled back:
+
+.. sourcecode:: python+sql
+
+    db.begin()
+    try:
+        call_operation1()
+        call_operation2()
+        db.commit()
+    except:
+        db.rollback()
+
+``call_operation1()`` and ``call_operation2()`` can make use of the
+:class:`~sqlalchemy.engine.base.Engine` as a global variable, using the
+"connectionless" execution style, and their operations will participate in the
+same transaction:
+
+.. sourcecode:: python+sql
+
+    def call_operation1():
+        engine.execute("insert into users values (?, ?)", 1, "john")
+
+    def call_operation2():
+        users.update(users.c.user_id==5).execute(name='ed')
+
+When using threadlocal, operations that do call upon the ``engine.connect()``
+method will receive a :class:`~sqlalchemy.engine.base.Connection` that is
+**outside** the scope of the transaction. This can be used for operations such
+as logging the status of an operation regardless of transaction success:
+
+.. sourcecode:: python+sql
+
+    db.begin()
+    conn = db.connect()
+    try:
+        conn.execute(log_table.insert(), message="Operation started")
+        call_operation1()
+        call_operation2()
+        db.commit()
+        conn.execute(log_table.insert(), message="Operation succeeded")
+    except:
+        db.rollback()
+        conn.execute(log_table.insert(), message="Operation failed")
+    finally:
+        conn.close()
+
+Functions which are written to use an explicit
+:class:`~sqlalchemy.engine.base.Connection` object, but wish to participate in
+the threadlocal transaction, can receive their
+:class:`~sqlalchemy.engine.base.Connection` object from the
+``contextual_connect()`` method, which returns a
+:class:`~sqlalchemy.engine.base.Connection` that is **inside** the scope of
+the transaction:
+
+.. sourcecode:: python+sql
+
+    conn = db.contextual_connect()
+    call_operation3(conn)
+    conn.close()
+
+Calling ``close()`` on the "contextual" connection does not release the
+connection resources to the pool if other resources are making use of it. A
+resource-counting mechanism is employed so that the connection is released
+back to the pool only when all users of that connection, including the
+transaction established by ``engine.begin()``, have been completed.
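The per-thread association and resource counting just described can be sketched with ``threading.local`` (hypothetical names; SQLAlchemy's real "threadlocal" strategy tracks more state than this):

```python
import threading

class ToyContextualRegistry(object):
    """Sketch of the per-thread, reference-counted connection sharing
    behind the 'threadlocal' strategy. Invented for illustration."""
    def __init__(self):
        self._local = threading.local()

    def contextual_connect(self):
        if getattr(self._local, "conn", None) is None:
            self._local.conn = object()   # stand-in for a DBAPI connection
            self._local.count = 0
        self._local.count += 1            # one more user of the connection
        return self._local.conn

    def close(self):
        # release only when every user of the connection has finished
        self._local.count -= 1
        if self._local.count == 0:
            self._local.conn = None       # "returned to the pool" here

reg = ToyContextualRegistry()
a = reg.contextual_connect()   # e.g. the transaction via engine.begin()
b = reg.contextual_connect()   # e.g. a contextual_connect() inside it
assert a is b                  # same DBAPI connection for this thread
reg.close()                    # inner close(): still held by the transaction
assert reg._local.conn is a
reg.close()                    # last user done: released
assert reg._local.conn is None
```

Another thread calling ``contextual_connect()`` on the same registry would get its own connection, which is the whole point of the thread-local association.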
+
+So remember - if you're not sure if you need to use ``strategy="threadlocal"``
+or not, the answer is **no**! It's driven by a specific programming pattern
+that is generally not the norm.
+

File doc/build/core/engines.rst

+.. _engines_toplevel:
+
+================
+Database Engines
+================
+
+The **Engine** is the starting point for any SQLAlchemy application. It's
+"home base" for the actual database and its DBAPI, delivered to the SQLAlchemy
+application through a connection pool and a **Dialect**, which describes how
+to talk to a specific kind of database/DBAPI combination.
+
+The general structure is this::
+
+                                         +-----------+                        __________
+                                     /---|   Pool    |---\                   (__________)
+                 +-------------+    /    +-----------+    \     +--------+   |          |
+    connect() <--|   Engine    |---x                       x----| DBAPI  |---| database |
+                 +-------------+    \    +-----------+    /     +--------+   |          |
+                                     \---|  Dialect  |---/                   |__________|
+                                         +-----------+                       (__________)
+
+Where above, a :class:`~sqlalchemy.engine.base.Engine` references both a
+:class:`~sqlalchemy.engine.base.Dialect` and :class:`~sqlalchemy.pool.Pool`,
+which together interpret the DBAPI's module functions as well as the behavior
+of the database.
+
+Creating an engine is just a matter of issuing a single call,
+:func:`create_engine()`::
+
+    engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
+
+The above engine invokes the ``postgresql`` dialect and a connection pool
+which references ``localhost:5432``.
+
+Note that the appropriate usage of :func:`create_engine()` is once per
+particular configuration, held globally for the lifetime of a single
+application process (not including child processes via ``fork()`` - these
+would require a new engine). A single :class:`~sqlalchemy.engine.base.Engine`
+manages connections on behalf of the process and is intended to be called upon
+in a concurrent fashion. Creating engines for each particular operation is not
+the intended usage.
+
+The engine can be used directly to issue SQL to the database. The most generic
+way is to use connections, which you get via the ``connect()`` method::
+
+    connection = engine.connect()
+    result = connection.execute("select username from users")
+    for row in result:
+        print "username:", row['username']
+    connection.close()
+
+The connection is an instance of :class:`~sqlalchemy.engine.base.Connection`,
+which is a **proxy** object for an actual DBAPI connection. The returned
+result is an instance of :class:`~sqlalchemy.engine.base.ResultProxy`, which
+acts very much like a DBAPI cursor.
+
+When you say ``engine.connect()``, a new
+:class:`~sqlalchemy.engine.base.Connection` object is created, and a DBAPI
+connection is retrieved from the connection pool. Later, when you call
+``connection.close()``, the DBAPI connection is returned to the pool; nothing
+is actually "closed" from the perspective of the database.
+
+To execute some SQL more quickly, you can skip the
+:class:`~sqlalchemy.engine.base.Connection` part and just say::
+
+    result = engine.execute("select username from users")
+    for row in result:
+        print "username:", row['username']
+    result.close()
+
+Where above, the ``execute()`` method on the
+:class:`~sqlalchemy.engine.base.Engine` does the ``connect()`` part for you,
+and returns the :class:`~sqlalchemy.engine.base.ResultProxy` directly. The
+actual :class:`~sqlalchemy.engine.base.Connection` is *inside* the
+:class:`~sqlalchemy.engine.base.ResultProxy`, waiting for you to finish
+reading the result. In this case, when you ``close()`` the
+:class:`~sqlalchemy.engine.base.ResultProxy`, the underlying
+:class:`~sqlalchemy.engine.base.Connection` is closed, which returns the DBAPI
+connection to the pool.
+
+To summarize the above two examples, when you use a
+:class:`~sqlalchemy.engine.base.Connection` object, it's known as **explicit
+execution**. When you don't see the
+:class:`~sqlalchemy.engine.base.Connection` object, but you still use the
+``execute()`` method on the :class:`~sqlalchemy.engine.base.Engine`, it's
+called **explicit, connectionless execution**. A third variant of execution
+also exists called **implicit execution**; this will be described later.
+
+The :class:`~sqlalchemy.engine.base.Engine` and
+:class:`~sqlalchemy.engine.base.Connection` can do a lot more than what we
+illustrated above; SQL strings are only its most rudimentary function. Later
+chapters will describe how "constructed SQL" expressions can be used with
+engines; in many cases, you don't have to deal with the
+:class:`~sqlalchemy.engine.base.Engine` at all after it's created. The Object
+Relational Mapper (ORM), an optional feature of SQLAlchemy, also uses the
+:class:`~sqlalchemy.engine.base.Engine` in order to get at connections; that's
+also a case where you can often create the engine once, and then forget about
+it.
+
+.. _supported_dbapis:
+
+Supported Databases
+====================
+
+SQLAlchemy includes many :class:`~sqlalchemy.engine.base.Dialect` implementations for various 
+backends; each is described as its own package in the :ref:`sqlalchemy.dialects_toplevel` package.  A 
+SQLAlchemy dialect always requires that an appropriate DBAPI driver is installed.
+
+The table below summarizes the state of DBAPI support in SQLAlchemy 0.6.  The values 
+translate as:
+
+* yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.   
+* yes / OS platform - The DBAPI supports that platform.
+* no / Python platform - The DBAPI does not support that platform, or there is no SQLAlchemy dialect support.  
+* no / OS platform - The DBAPI does not support that platform.
+* partial - the DBAPI is partially usable on the target platform but has major unresolved issues.
+* development - a development version of the dialect exists, but is not yet usable.
+* thirdparty - the dialect itself is maintained by a third party, who should be consulted for
+  information on current support.
+* \* - indicates the given DBAPI is the "default" for SQLAlchemy, i.e. when just the database name is specified
+
+=========================  ===========================  ===========  ===========   ===========  =================  ============
+Driver                     Connect string               Py2K         Py3K          Jython       Unix               Windows
+=========================  ===========================  ===========  ===========   ===========  =================  ============
+**DB2/Informix IDS**
+ibm-db_                    thirdparty                   thirdparty   thirdparty    thirdparty   thirdparty         thirdparty
+**Firebird**
+kinterbasdb_               ``firebird+kinterbasdb``\*   yes          development   no           yes                yes
+**Informix**
+informixdb_                ``informix+informixdb``\*    development  development   no           unknown            unknown
+**MaxDB**
+sapdb_                     ``maxdb+sapdb``\*            development  development   no           yes                unknown
+**Microsoft Access**
+pyodbc_                    ``access+pyodbc``\*          development  development   no           unknown            yes
+**Microsoft SQL Server**
+adodbapi_                  ``mssql+adodbapi``           development  development   no           no                 yes
+`jTDS JDBC Driver`_        ``mssql+zxjdbc``             no           no            development  yes                yes
+mxodbc_                    ``mssql+mxodbc``             yes          development   no           yes with FreeTDS_  yes
+pyodbc_                    ``mssql+pyodbc``\*           yes          development   no           yes with FreeTDS_  yes
+pymssql_                   ``mssql+pymssql``            yes          development   no           yes                yes
+**MySQL**
+`MySQL Connector/J`_       ``mysql+zxjdbc``             no           no            yes          yes                yes
+`MySQL Connector/Python`_  ``mysql+mysqlconnector``     yes          partial       no           yes                yes
+mysql-python_              ``mysql+mysqldb``\*          yes          development   no           yes                yes
+OurSQL_                    ``mysql+oursql``             yes          partial       no           yes                yes
+**Oracle**
+cx_oracle_                 ``oracle+cx_oracle``\*       yes          development   no           yes                yes
+`Oracle JDBC Driver`_      ``oracle+zxjdbc``            no           no            yes          yes                yes
+**Postgresql**
+pg8000_                    ``postgresql+pg8000``        yes          yes           no           yes                yes
+`PostgreSQL JDBC Driver`_  ``postgresql+zxjdbc``        no           no            yes          yes                yes
+psycopg2_                  ``postgresql+psycopg2``\*    yes          development   no           yes                yes
+pypostgresql_              ``postgresql+pypostgresql``  no           yes           no           yes                yes
+**SQLite**
+pysqlite_                  ``sqlite+pysqlite``\*        yes          yes           no           yes                yes
+sqlite3_                   ``sqlite+pysqlite``\*        yes          yes           no           yes                yes
+**Sybase ASE**
+mxodbc_                    ``sybase+mxodbc``            development  development   no           yes                yes
+pyodbc_                    ``sybase+pyodbc``\*          partial      development   no           unknown            unknown
+python-sybase_             ``sybase+pysybase``          partial      development   no           yes                yes
+=========================  ===========================  ===========  ===========   ===========  =================  ============
+
+.. _psycopg2: http://www.initd.org/
+.. _pg8000: http://pybrary.net/pg8000/
+.. _pypostgresql: http://python.projects.postgresql.org/
+.. _mysql-python: http://sourceforge.net/projects/mysql-python
+.. _MySQL Connector/Python: https://launchpad.net/myconnpy
+.. _OurSQL: http://packages.python.org/oursql/
+.. _PostgreSQL JDBC Driver: http://jdbc.postgresql.org/
+.. _sqlite3: http://docs.python.org/library/sqlite3.html
+.. _pysqlite: http://pypi.python.org/pypi/pysqlite/
+.. _MySQL Connector/J: http://dev.mysql.com/downloads/connector/j/
+.. _cx_Oracle: http://cx-oracle.sourceforge.net/
+.. _Oracle JDBC Driver: http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
+.. _kinterbasdb:  http://firebirdsql.org/index.php?op=devel&sub=python
+.. _pyodbc: http://code.google.com/p/pyodbc/
+.. _mxodbc: http://www.egenix.com/products/python/mxODBC/
+.. _FreeTDS: http://www.freetds.org/
+.. _adodbapi: http://adodbapi.sourceforge.net/
+.. _pymssql: http://code.google.com/p/pymssql/
+.. _jTDS JDBC Driver: http://jtds.sourceforge.net/
+.. _ibm-db: http://code.google.com/p/ibm-db/
+.. _informixdb: http://informixdb.sourceforge.net/
+.. _sapdb: http://www.sapdb.org/sapdbapi.html
+.. _python-sybase: http://python-sybase.sourceforge.net/
+
+Further detail on dialects is available at :ref:`sqlalchemy.dialects_toplevel`,
+as well as additional notes on the wiki at `Database Notes
+<http://www.sqlalchemy.org/trac/wiki/DatabaseNotes>`_.
+
+
+.. _create_engine_args:
+
+Database Engine Options
+========================
+
+Keyword options can also be specified to :func:`~sqlalchemy.create_engine`,
+following the string URL as follows:
+
+.. sourcecode:: python+sql
+
+    db = create_engine('postgresql://...', encoding='latin1', echo=True)
+
+.. autofunction:: sqlalchemy.create_engine
+
+.. autofunction:: sqlalchemy.engine_from_config
+
+Database URLs
+=============
+
+SQLAlchemy indicates the source of an Engine strictly via `RFC-1738
+<http://rfc.net/rfc1738.html>`_ style URLs, combined with optional keyword
+arguments to specify options for the Engine. The form of the URL is::
+
+    dialect+driver://username:password@host:port/database
+
+Dialect names include the identifying name of the SQLAlchemy dialect, such as
+``sqlite``, ``mysql``, ``postgresql``, ``oracle``, ``mssql``, and
+``firebird``. The driver name is the name of the DBAPI to be used to connect
+to the database, in all lowercase letters. If not specified, a "default" DBAPI
+will be imported if available - this default is typically the most widely
+known driver available for that backend (i.e. cx_oracle, pysqlite/sqlite3,
+psycopg2, mysqldb). For Jython connections, specify the ``zxjdbc`` driver,
+which is the JDBC-DBAPI bridge included with Jython.
+
+.. sourcecode:: python+sql
+
+    # postgresql - psycopg2 is the default driver.
+    pg_db = create_engine('postgresql://scott:tiger@localhost/mydatabase')
+    pg_db = create_engine('postgresql+psycopg2://scott:tiger@localhost/mydatabase')
+    pg_db = create_engine('postgresql+pg8000://scott:tiger@localhost/mydatabase')
+    pg_db = create_engine('postgresql+pypostgresql://scott:tiger@localhost/mydatabase')
+
+    # postgresql on Jython
+    pg_db = create_engine('postgresql+zxjdbc://scott:tiger@localhost/mydatabase')
+
+    # mysql - MySQLdb (mysql-python) is the default driver
+    mysql_db = create_engine('mysql://scott:tiger@localhost/foo')
+    mysql_db = create_engine('mysql+mysqldb://scott:tiger@localhost/foo')
+
+    # mysql on Jython
+    mysql_db = create_engine('mysql+zxjdbc://localhost/foo')
+
+    # mysql with pyodbc (buggy)
+    mysql_db = create_engine('mysql+pyodbc://scott:tiger@some_dsn')
+
+    # oracle - cx_oracle is the default driver
+    oracle_db = create_engine('oracle://scott:tiger@127.0.0.1:1521/sidname')
+
+    # oracle via TNS name
+    oracle_db = create_engine('oracle+cx_oracle://scott:tiger@tnsname')
+
+    # mssql using ODBC datasource names.  PyODBC is the default driver.
+    mssql_db = create_engine('mssql://mydsn')
+    mssql_db = create_engine('mssql+pyodbc://mydsn')
+    mssql_db = create_engine('mssql+adodbapi://mydsn')
+    mssql_db = create_engine('mssql+pyodbc://username:password@mydsn')
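Because these URLs follow the RFC-1738 shape, the standard library can illustrate how one decomposes into its components (a demonstration only; SQLAlchemy itself uses its own :class:`sqlalchemy.engine.url.URL` class for this):

```python
from urllib.parse import urlsplit

# Decompose a dialect+driver URL with the stdlib; illustration only.
url = urlsplit("postgresql+psycopg2://scott:tiger@localhost:5432/mydatabase")

# the scheme carries both the dialect and the DBAPI driver name
dialect, _, driver = url.scheme.partition("+")

assert dialect == "postgresql"
assert driver == "psycopg2"
assert url.username == "scott"
assert url.password == "tiger"
assert url.hostname == "localhost"
assert url.port == 5432
assert url.path.lstrip("/") == "mydatabase"
```

When the ``+driver`` part is absent, ``partition("+")`` leaves ``driver`` empty, corresponding to the "default DBAPI" case described above.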
+
+SQLite connects to file based databases. The same URL format is used, omitting
+the hostname, and using the "file" portion as the filename of the database.
+This has the effect of four slashes being present for an absolute file path::
+
+    # sqlite://<nohostname>/<path>
+    # where <path> is relative:
+    sqlite_db = create_engine('sqlite:///foo.db')
+
+    # or absolute, starting with a slash:
+    sqlite_db = create_engine('sqlite:////absolute/path/to/foo.db')
+
+To use a SQLite ``:memory:`` database, specify an empty URL::
+
+    sqlite_memory_db = create_engine('sqlite://')
+
+The :class:`~sqlalchemy.engine.base.Engine` will ask the connection pool for a
+connection when the ``connect()`` or ``execute()`` methods are called. The
+default connection pool, :class:`~sqlalchemy.pool.QueuePool`, as well as the
+default connection pool used with SQLite,
+:class:`~sqlalchemy.pool.SingletonThreadPool`, will open connections to the
+database on an as-needed basis. As concurrent statements are executed,
+:class:`~sqlalchemy.pool.QueuePool` will grow its pool of connections to a
+default size of five, and will allow a default "overflow" of ten. Since the
+:class:`~sqlalchemy.engine.base.Engine` is essentially "home base" for the
+connection pool, it follows that you should keep a single
+:class:`~sqlalchemy.engine.base.Engine` per database established within an
+application, rather than creating a new one for each connection.
+
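These URLs follow standard RFC-1738 form, so their pieces can be illustrated with ordinary tools. Below is a rough sketch using only the Python standard library (SQLAlchemy's own parser is the :class:`~sqlalchemy.engine.url.URL` class documented next); it is an illustration of the format, not the actual implementation:

```python
from urllib.parse import urlsplit

# the scheme carries the dialect name plus an optional driver name
u = urlsplit('postgresql+psycopg2://scott:tiger@localhost:5432/mydatabase')
dialect, _, driver = u.scheme.partition('+')

# dialect == 'postgresql', driver == 'psycopg2'
# u.username == 'scott', u.password == 'tiger'
# u.hostname == 'localhost', u.port == 5432
# the database name is the path with the leading slash removed
database = u.path.lstrip('/')
```
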
+.. autoclass:: sqlalchemy.engine.url.URL
+    :members:
+
+Custom DBAPI connect() arguments
+=================================
+
+Custom arguments used when issuing the ``connect()`` call to the underlying
+DBAPI may be issued in three distinct ways. String-based arguments can be
+passed directly from the URL string as query arguments:
+
+.. sourcecode:: python+sql
+
+    db = create_engine('postgresql://scott:tiger@localhost/test?argument1=foo&argument2=bar')
+
+If SQLAlchemy's database connector is aware of a particular query argument, it
+may convert its type from string to its proper type.
+
+:func:`~sqlalchemy.create_engine` also takes an argument ``connect_args`` which
+is an additional dictionary that will be passed to ``connect()``.  This can be
+used when arguments of a type other than string are required, and SQLAlchemy's
+database connector has no type conversion logic present for that parameter:
+
+.. sourcecode:: python+sql
+
+    db = create_engine('postgresql://scott:tiger@localhost/test', connect_args={'argument1': 17, 'argument2': 'bar'})
+
+The most customizable connection method of all is to pass a ``creator``
+argument, which specifies a callable that returns a DBAPI connection:
+
+.. sourcecode:: python+sql
+
+    def connect():
+        return psycopg.connect(user='scott', host='localhost')
+
+    db = create_engine('postgresql://', creator=connect)
+
+
+
+.. _dbengine_logging:
+
+Configuring Logging
+====================
+
+Python's standard `logging
+<http://www.python.org/doc/lib/module-logging.html>`_ module is used to
+implement informational and debug log output with SQLAlchemy. This allows
+SQLAlchemy's logging to integrate in a standard way with other applications
+and libraries. The ``echo`` and ``echo_pool`` flags that are present on
+:func:`~sqlalchemy.create_engine`, as well as the ``echo_uow`` flag used on
+:class:`~sqlalchemy.orm.session.Session`, all interact with regular loggers.
+
+This section assumes familiarity with the above linked logging module. All
+logging performed by SQLAlchemy exists underneath the ``sqlalchemy``
+namespace, as used by ``logging.getLogger('sqlalchemy')``. When logging has
+been configured (i.e. such as via ``logging.basicConfig()``), the general
+namespace of SA loggers that can be turned on is as follows:
+
+* ``sqlalchemy.engine`` - controls SQL echoing.  Set to ``logging.INFO`` for SQL query output, ``logging.DEBUG`` for query + result set output.
+* ``sqlalchemy.dialects`` - controls custom logging for SQL dialects.  See the documentation of individual dialects for details.
+* ``sqlalchemy.pool`` - controls connection pool logging.  Set to ``logging.INFO`` or lower to log connection pool checkouts/checkins.
+* ``sqlalchemy.orm`` - controls logging of various ORM functions.  Set to ``logging.INFO`` for configurational logging as well as unit of work dumps, ``logging.DEBUG`` for extensive logging during query and flush() operations.  Subcategories of ``sqlalchemy.orm`` include:
+
+  * ``sqlalchemy.orm.attributes`` - logs certain instrumented attribute operations, such as triggered callables
+  * ``sqlalchemy.orm.mapper`` - logs Mapper configuration and operations
+  * ``sqlalchemy.orm.unitofwork`` - logs flush() operations, including dependency sort graphs and other operations
+  * ``sqlalchemy.orm.strategies`` - logs relationship loader operations (i.e. lazy and eager loads)
+  * ``sqlalchemy.orm.sync`` - logs synchronization of attributes from parent to child instances during a flush()
+
+For example, to log SQL queries as well as unit of work debugging:
+
+.. sourcecode:: python+sql
+
+    import logging
+
+    logging.basicConfig()
+    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
+    logging.getLogger('sqlalchemy.orm.unitofwork').setLevel(logging.DEBUG)
+
+By default, the log level is set to ``logging.ERROR`` within the entire
+``sqlalchemy`` namespace so that no log operations occur, even within an
+application that has logging enabled otherwise.
+
+The ``echo`` flags present as keyword arguments to
+:func:`~sqlalchemy.create_engine` and others as well as the ``echo`` property
+on :class:`~sqlalchemy.engine.base.Engine`, when set to ``True``, will first
+attempt to ensure that logging is enabled. Unfortunately, the ``logging``
+module provides no way of determining if output has already been configured
+(note we are referring to whether a logging configuration has been set up,
+not just whether the logging level is set). For this reason, any ``echo=True``
+flags will result in a call to ``logging.basicConfig()`` using ``sys.stdout``
+as the destination. It also sets up a default format using the level name,
+timestamp, and logger name. Note that this configuration has the effect of
+being applied **in addition** to any existing logger configurations.
+Therefore, **when using Python logging, ensure all echo flags are set to
+False at all times**, to avoid getting duplicate log lines.
+
+The logger name of an instance such as an :class:`~sqlalchemy.engine.base.Engine`
+or :class:`~sqlalchemy.pool.Pool` defaults to a truncated hex identifier
+string. To set this to a specific name, use the ``logging_name`` and
+``pool_logging_name`` keyword arguments to :func:`sqlalchemy.create_engine`.
+
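Because these are ordinary ``logging`` logger names, a level set on ``sqlalchemy.engine`` propagates to per-instance child loggers. A quick standard-library illustration (the name ``myengine`` is hypothetical, standing in for a ``logging_name`` value):

```python
import logging

logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

# an Engine created with logging_name='myengine' would log under a child
# name like this one; with no level of its own, it inherits INFO from
# the 'sqlalchemy.engine' parent logger
child = logging.getLogger('sqlalchemy.engine.Engine.myengine')
assert child.getEffectiveLevel() == logging.INFO
```
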

File doc/build/core/index.rst

+.. _core_toplevel:
+
 SQLAlchemy Core
 ===============
 
     tutorial
     expression_api
     engines
+    connections
     pooling
-    connections
     schema
     types
+    interfaces
     compiler
     
     

File doc/build/core/interfaces.rst

+.. currentmodule:: sqlalchemy.interfaces
+
+.. _interfaces_core_toplevel:
+
+
+Core Event Interfaces
+=====================
+
+This section describes the various categories of events which can be intercepted
+in SQLAlchemy core, including execution and connection pool events.
+
+For ORM event documentation, see :ref:`interfaces_orm_toplevel`.
+
+A new version of this API with a significantly more flexible and consistent
+interface will be available in version 0.7.
+
+Execution, Connection and Cursor Events
+---------------------------------------
+
+.. autoclass:: ConnectionProxy
+   :members:
+   :undoc-members:
+
+Connection Pool Events
+----------------------
+
+.. autoclass:: PoolListener
+   :members:
+   :undoc-members:
+
+

File doc/build/core/pooling.rst

+.. _pooling_toplevel:
+
+Connection Pooling
+==================
+
+.. module:: sqlalchemy.pool
+
+SQLAlchemy ships with a connection pooling framework that integrates
+with the Engine system and can also be used on its own to manage plain
+DB-API connections.
+
+At the base of any database helper library is a system for efficiently
+acquiring connections to the database.  Since the establishment of a
+database connection is typically a somewhat expensive operation, an
+application needs a way to get at database connections repeatedly
+without incurring the full overhead each time.  Particularly for
+server-side web applications, a connection pool is the standard way to
+maintain a group or "pool" of active database connections which are
+reused from request to request in a single server process.
+
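The core checkout/checkin cycle is easy to sketch. The toy class below is not SQLAlchemy's implementation, just an illustration of the idea using the standard library and an in-memory SQLite connection:

```python
import queue
import sqlite3

class TinyPool(object):
    """Toy fixed-size pool illustrating checkout/checkin only."""

    def __init__(self, creator, size=5):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(creator())

    def checkout(self):
        # blocks when all connections are checked out
        return self._q.get()

    def checkin(self, conn):
        conn.rollback()  # release any pending transactional state
        self._q.put(conn)

pool = TinyPool(lambda: sqlite3.connect(':memory:'), size=2)
conn = pool.checkout()
conn.execute("select 1")
pool.checkin(conn)
```
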
+Connection Pool Configuration
+-----------------------------
+
+The :class:`~sqlalchemy.engine.Engine` returned by the
+:func:`~sqlalchemy.create_engine` function in most cases has a :class:`QueuePool`
+integrated, pre-configured with reasonable pooling defaults.  If
+you're reading this section simply to enable pooling: congratulations!
+You're already done.
+
+The most common :class:`QueuePool` tuning parameters can be passed
+directly to :func:`~sqlalchemy.create_engine` as keyword arguments:
+``pool_size``, ``max_overflow``, ``pool_recycle`` and
+``pool_timeout``.  For example::
+
+  engine = create_engine('postgresql://me@localhost/mydb',
+                         pool_size=20, max_overflow=0)
+
+In the case of SQLite, a :class:`SingletonThreadPool` is provided instead,
+to provide compatibility with SQLite's restricted threading model.
+
+
+Custom Pool Construction
+------------------------
+
+:class:`Pool` instances may be created directly for your own use or to
+supply to :func:`sqlalchemy.create_engine` via the ``pool=``
+keyword argument.
+
+Constructing your own pool requires supplying a callable function the
+Pool can use to create new connections.  The function will be called
+with no arguments.
+
+Through this method, custom connection schemes can be made, such as
+using connections from another library's pool, or making a new
+connection that automatically executes some initialization commands::
+
+    import sqlalchemy.pool as pool
+    import psycopg2
+
+    def getconn():
+        c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
+        # execute an initialization function on the connection before returning
+        c.cursor().execute("setup_encodings()")
+        return c
+
+    p = pool.QueuePool(getconn, max_overflow=10, pool_size=5)
+
+Or with SingletonThreadPool::
+
+    import sqlalchemy.pool as pool
+    import sqlite3
+
+    p = pool.SingletonThreadPool(lambda: sqlite3.connect('myfile.db'))
+
+
+Builtin Pool Implementations
+----------------------------
+
+.. autoclass:: AssertionPool
+   :members:
+   :show-inheritance:
+
+.. autoclass:: NullPool
+   :members:
+   :show-inheritance:
+
+.. autoclass:: sqlalchemy.pool.Pool
+   :members:
+   :show-inheritance:
+   :undoc-members:
+   :inherited-members:
+
+.. autoclass:: sqlalchemy.pool.QueuePool
+   :members:
+   :show-inheritance:
+
+.. autoclass:: SingletonThreadPool
+   :members:
+   :show-inheritance:
+
+.. autoclass:: StaticPool
+   :members:
+   :show-inheritance:
+
+
+Pooling Plain DB-API Connections
+--------------------------------
+
+Any :pep:`249` DB-API module can be "proxied" through the connection
+pool transparently.  Usage of the DB-API is exactly as before, except
+the ``connect()`` method will consult the pool.  Below we illustrate
+this with ``psycopg2``::
+
+    import sqlalchemy.pool as pool
+    import psycopg2 as psycopg
+
+    psycopg = pool.manage(psycopg)
+
+    # then connect normally
+    connection = psycopg.connect(database='test', username='scott',
+                                 password='tiger')
+
+This produces a :class:`_DBProxy` object which supports the same
+``connect()`` function as the original DB-API module.  Upon
+connection, a connection proxy object is returned, which delegates its
+calls to a real DB-API connection object.  This connection object is
+stored persistently within a connection pool (an instance of
+:class:`Pool`) that corresponds to the exact connection arguments sent
+to the ``connect()`` function.
+
+The connection proxy supports all of the methods on the original
+connection object, most of which are proxied via ``__getattr__()``.
+The ``close()`` method will return the connection to the pool, and the
+``cursor()`` method will return a proxied cursor object.  Both the
+connection proxy and the cursor proxy will also return the underlying
+connection to the pool after they have both been garbage collected,
+which is detected via weakref callbacks  (``__del__`` is not used).
+
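The weakref-callback mechanism can be shown in miniature with the standard library. This is a sketch of the idea only; on CPython the callback fires as soon as the last reference is dropped, while other interpreters may defer collection:

```python
import weakref

class ConnectionProxy(object):
    """Stand-in for the pool's proxy object, not the real class."""

checked_in = []

proxy = ConnectionProxy()
# keep the weakref alive; its callback runs when the proxy is collected
_ref = weakref.ref(proxy, lambda ref: checked_in.append('returned to pool'))

del proxy  # on CPython this triggers the callback immediately
```
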
+Additionally, when connections are returned to the pool, a
+``rollback()`` is issued on the connection unconditionally.  This is
+to release any locks still held by the connection that may have
+resulted from normal activity.
+
+By default, the ``connect()`` method will return the same connection
+that is already checked out in the current thread.  This allows a
+particular connection to be used in a given thread without needing to
+pass it around between functions.  To disable this behavior, specify
+``use_threadlocal=False`` to the ``manage()`` function.
+
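The thread-local behavior follows the same pattern as ``threading.local`` in the standard library. A rough sketch of the idea (not the actual ``manage()`` implementation):

```python
import sqlite3
import threading

_local = threading.local()

def connect():
    # hand back the connection already checked out by this thread, if any
    if getattr(_local, 'conn', None) is None:
        _local.conn = sqlite3.connect(':memory:')
    return _local.conn

c1 = connect()
c2 = connect()
assert c1 is c2  # repeated calls in one thread share a connection
```
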
+.. autofunction:: sqlalchemy.pool.manage
+
+.. autofunction:: sqlalchemy.pool.clear_managers
+

File doc/build/core/schema.rst

+.. _metadata_toplevel:
+
+==========================
+Schema Definition Language
+==========================
+
+.. module:: sqlalchemy.schema
+
+Describing Databases with MetaData
+==================================
+
+The core of SQLAlchemy's query and object mapping operations are supported by
+*database metadata*, which is comprised of Python objects that describe tables
+and other schema-level objects. These objects are at the core of three major
+types of operations - issuing CREATE and DROP statements (known as *DDL*),
+constructing SQL queries, and expressing information about structures that
+already exist within the database.
+
+Database metadata can be expressed by explicitly naming the various components
+and their properties, using constructs such as
+:class:`~sqlalchemy.schema.Table`, :class:`~sqlalchemy.schema.Column`,
+:class:`~sqlalchemy.schema.ForeignKey` and
+:class:`~sqlalchemy.schema.Sequence`, all of which are imported from the
+``sqlalchemy.schema`` package. It can also be generated by SQLAlchemy using a
+process called *reflection*, which means you start with a single object such
+as :class:`~sqlalchemy.schema.Table`, assign it a name, and then instruct
+SQLAlchemy to load all the additional information related to that name from a
+particular engine source.
+
+A key feature of SQLAlchemy's database metadata constructs is that they are
+designed to be used in a *declarative* style which closely resembles that of
+real DDL. They are therefore most intuitive to those who have some background
+in creating real schema generation scripts.
+
+A collection of metadata entities is stored in an object aptly named
+:class:`~sqlalchemy.schema.MetaData`::
+
+    from sqlalchemy import *
+
+    metadata = MetaData()
+
+:class:`~sqlalchemy.schema.MetaData` is a container object that keeps together
+many different features of a database (or multiple databases) being described.
+
+To represent a table, use the :class:`~sqlalchemy.schema.Table` class. Its two
+primary arguments are the table name, then the
+:class:`~sqlalchemy.schema.MetaData` object which it will be associated with.
+The remaining positional arguments are mostly
+:class:`~sqlalchemy.schema.Column` objects describing each column::
+
+    user = Table('user', metadata,
+        Column('user_id', Integer, primary_key = True),
+        Column('user_name', String(16), nullable = False),
+        Column('email_address', String(60)),
+        Column('password', String(20), nullable = False)
+    )
+
+Above, a table called ``user`` is described, which contains four columns. The
+primary key of the table consists of the ``user_id`` column. Multiple columns
+may be assigned the ``primary_key=True`` flag which denotes a multi-column
+primary key, known as a *composite* primary key.
+
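For example, a hypothetical association table using a composite primary key might look like the following (assuming the usual imports from ``sqlalchemy``):

```python
from sqlalchemy import Table, Column, Integer, MetaData

metadata = MetaData()

# hypothetical association table: both columns together form the primary key
order_items = Table('order_items', metadata,
    Column('order_id', Integer, primary_key=True),
    Column('item_id', Integer, primary_key=True),
    Column('quantity', Integer, nullable=False)
)
```
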
+Note also that each column describes its datatype using objects corresponding
+to genericized types, such as :class:`~sqlalchemy.types.Integer` and
+:class:`~sqlalchemy.types.String`. SQLAlchemy features dozens of types of
+varying levels of specificity as well as the ability to create custom types.
+Documentation on the type system can be found at :ref:`types`.
+
+Accessing Tables and Columns
+----------------------------
+
+The :class:`~sqlalchemy.schema.MetaData` object contains all of the schema
+constructs we've associated with it. It supports a few methods of accessing
+these table objects, such as the ``sorted_tables`` accessor which returns a
+list of each :class:`~sqlalchemy.schema.Table` object in order of foreign key
+dependency (that is, each table is preceded by all tables which it
+references)::
+
+    >>> for t in metadata.sorted_tables:
+    ...    print t.name
+    user
+    user_preference
+    invoice
+    invoice_item
+
+In most cases, individual :class:`~sqlalchemy.schema.Table` objects have been
+explicitly declared, and these objects are typically accessed directly as
+module-level variables in an application. Once a
+:class:`~sqlalchemy.schema.Table` has been defined, it has a full set of
+accessors which allow inspection of its properties. Given the following
+:class:`~sqlalchemy.schema.Table` definition::
+
+    employees = Table('employees', metadata,
+        Column('employee_id', Integer, primary_key=True),
+        Column('employee_name', String(60), nullable=False),
+        Column('employee_dept', Integer, ForeignKey("departments.department_id"))
+    )
+
+Note the :class:`~sqlalchemy.schema.ForeignKey` object used in this table -
+this construct defines a reference to a remote table, and is fully described
+in :ref:`metadata_foreignkeys`. Methods of accessing information about this
+table include::
+
+    # access the column "EMPLOYEE_ID":
+    employees.columns.employee_id
+
+    # or just
+    employees.c.employee_id
+
+    # via string
+    employees.c['employee_id']
+
+    # iterate through all columns
+    for c in employees.c:
+        print c
+
+    # get the table's primary key columns
+    for primary_key in employees.primary_key:
+        print primary_key
+
+    # get the table's foreign key objects:
+    for fkey in employees.foreign_keys:
+        print fkey
+
+    # access the table's MetaData:
+    employees.metadata
+
+    # access the table's bound Engine or Connection, if its MetaData is bound:
+    employees.bind
+
+    # access a column's name, type, nullable, primary key, foreign key
+    employees.c.employee_id.name
+    employees.c.employee_id.type
+    employees.c.employee_id.nullable
+    employees.c.employee_id.primary_key
+    employees.c.employee_dept.foreign_keys
+
+    # get the "key" of a column, which defaults to its name, but can
+    # be any user-defined string:
+    employees.c.employee_name.key
+
+    # access a column's table:
+    employees.c.employee_id.table is employees
+
+    # get the table related by a foreign key
+    list(employees.c.employee_dept.foreign_keys)[0].column.table
+
+.. _metadata_binding:
+
+
+Creating and Dropping Database Tables
+-------------------------------------
+
+Once you've defined some :class:`~sqlalchemy.schema.Table` objects, assuming
+you're working with a brand new database, one thing you might want to do is
+issue CREATE statements for those tables and their related constructs (as an
+aside, it's also quite possible that you *don't* want to do this, if you
+already have some preferred methodology such as tools included with your
+database or an existing scripting system - if that's the case, feel free to
+skip this section - SQLAlchemy has no requirement that it be used to create
+your tables).
+
+The usual way to issue CREATE is to use
+:func:`~sqlalchemy.schema.MetaData.create_all` on the
+:class:`~sqlalchemy.schema.MetaData` object. This method will issue queries
+that first check for the existence of each individual table, and if not found
+will issue the CREATE statements:
+
+    .. sourcecode:: python+sql
+
+        engine = create_engine('sqlite:///:memory:')
+
+        metadata = MetaData()
+
+        user = Table('user', metadata,
+            Column('user_id', Integer, primary_key = True),
+            Column('user_name', String(16), nullable = False),
+            Column('email_address', String(60), key='email'),
+            Column('password', String(20), nullable = False)
+        )
+
+        user_prefs = Table('user_prefs', metadata,
+            Column('pref_id', Integer, primary_key=True),
+            Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
+            Column('pref_name', String(40), nullable=False),
+            Column('pref_value', String(100))
+        )
+
+        {sql}metadata.create_all(engine)
+        PRAGMA table_info(user){}
+        CREATE TABLE user(
+                user_id INTEGER NOT NULL PRIMARY KEY,
+                user_name VARCHAR(16) NOT NULL,
+                email_address VARCHAR(60),
+                password VARCHAR(20) NOT NULL
+        )
+        PRAGMA table_info(user_prefs){}
+        CREATE TABLE user_prefs(
+                pref_id INTEGER NOT NULL PRIMARY KEY,
+                user_id INTEGER NOT NULL REFERENCES user(user_id),
+                pref_name VARCHAR(40) NOT NULL,
+                pref_value VARCHAR(100)
+        )
+
+:func:`~sqlalchemy.schema.MetaData.create_all` creates foreign key constraints
+between tables usually inline with the table definition itself, and for this
+reason it also generates the tables in order of their dependency. There are
+options to change this behavior such that ``ALTER TABLE`` is used instead.
+
+Dropping all tables is similarly achieved using the
+:func:`~sqlalchemy.schema.MetaData.drop_all` method. This method does the
+exact opposite of :func:`~sqlalchemy.schema.MetaData.create_all` - the
+presence of each table is checked first, and tables are dropped in reverse
+order of dependency.
+
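The dependency ordering used by these methods is visible via the ``sorted_tables`` accessor; a small sketch using hypothetical ``parent``/``child`` tables:

```python
from sqlalchemy import Table, Column, Integer, ForeignKey, MetaData

meta = MetaData()

child = Table('child', meta,
    Column('id', Integer, primary_key=True),
    Column('parent_id', Integer, ForeignKey('parent.id'))
)
parent = Table('parent', meta,
    Column('id', Integer, primary_key=True)
)

names = [t.name for t in meta.sorted_tables]
# 'parent' sorts before 'child', so CREATE runs in that order,
# and drop_all() proceeds in the reverse order
```
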
+Creating and dropping individual tables can be done via the ``create()`` and
+``drop()`` methods of :class:`~sqlalchemy.schema.Table`. These methods by
+default issue the CREATE or DROP regardless of the table being present:
+
+.. sourcecode:: python+sql
+
+    engine = create_engine('sqlite:///:memory:')
+
+    meta = MetaData()
+
+    employees = Table('employees', meta,
+        Column('employee_id', Integer, primary_key=True),
+        Column('employee_name', String(60), nullable=False, key='name'),
+        Column('employee_dept', Integer, ForeignKey("departments.department_id"))
+    )
+    {sql}employees.create(engine)
+    CREATE TABLE employees(
+    employee_id SERIAL NOT NULL PRIMARY KEY,
+    employee_name VARCHAR(60) NOT NULL,
+    employee_dept INTEGER REFERENCES departments(department_id)
+    )
+    {}
+
+``drop()`` method:
+
+.. sourcecode:: python+sql
+
+    {sql}employees.drop(engine)
+    DROP TABLE employees
+    {}
+
+To enable the "check first for the table existing" logic, add the
+``checkfirst=True`` argument to ``create()`` or ``drop()``::
+
+    employees.create(engine, checkfirst=True)
+    employees.drop(engine, checkfirst=False)
+
+
+Binding MetaData to an Engine or Connection
+--------------------------------------------
+
+Notice in the previous section that the create/drop methods accept an argument
+for the database engine in use. When a schema construct is combined with an
+:class:`~sqlalchemy.engine.base.Engine` object, or an individual
+:class:`~sqlalchemy.engine.base.Connection` object, we call this the *bind*.
+In the above examples the bind is associated with the schema construct only
+for the duration of the operation. However, the option exists to persistently
+associate a bind with a set of schema constructs via the
+:class:`~sqlalchemy.schema.MetaData` object's ``bind`` attribute::
+
+    engine = create_engine('sqlite://')
+
+    # create MetaData
+    meta = MetaData()
+
+    # bind to an engine
+    meta.bind = engine
+
+We can now call methods like :func:`~sqlalchemy.schema.MetaData.create_all`
+without needing to pass the :class:`~sqlalchemy.engine.base.Engine`::
+
+    meta.create_all()
+
+The MetaData's bind is used for anything that requires an active connection,
+such as loading the definition of a table from the database automatically
+(called *reflection*)::
+
+    # describe a table called 'users', query the database for its columns
+    users_table = Table('users', meta, autoload=True)
+
+As well as for executing SQL constructs that are derived from that MetaData's table objects::
+
+    # generate a SELECT statement and execute
+    result = users_table.select().execute()
+
+Binding the MetaData to the Engine is a **completely optional** feature. The
+above operations can be achieved without the persistent bind using
+parameters::
+
+    # describe a table called 'users', query the database for its columns
+    users_table = Table('users', meta, autoload=True, autoload_with=engine)
+
+    # generate a SELECT statement and execute
+    result = engine.execute(users_table.select())
+
+Should you use bind? It's probably best to start without it, and wait for a
+specific need to arise. Bind is useful if:
+
+* You aren't using the ORM, are usually using "connectionless" execution, and
+  find yourself constantly needing to specify the same
+  :class:`~sqlalchemy.engine.base.Engine` object throughout the entire
+  application. Bind can be used here to provide "implicit" execution.
+* Your application has multiple schemas that correspond to different engines.
+  Using one :class:`~sqlalchemy.schema.MetaData` for each schema, bound to
+  each engine, provides a decent place to delineate between the schemas. The
+  ORM will also integrate with this approach, where the :class:`Session` will
+  naturally use the engine that is bound to each table via its metadata
+  (provided the :class:`Session` itself has no ``bind`` configured).
+
+Alternatively, the ``bind`` attribute of :class:`~sqlalchemy.schema.MetaData`
+is *confusing* if:
+
+* Your application talks to multiple database engines at different times,
+  which use the *same* set of :class:`Table` objects. It's usually confusing
+  and unnecessary to begin to create "copies" of :class:`Table` objects just
+  so that different engines can be used for different operations. An example
+  is an application that writes data to a "master" database while performing
+  read-only operations from a "read slave". A global
+  :class:`~sqlalchemy.schema.MetaData` object is *not* appropriate for
+  per-request switching like this, although a
+  :class:`~sqlalchemy.schema.ThreadLocalMetaData` object is.
+* You are using the ORM :class:`Session` to handle which class/table is bound
+  to which engine, or you are using the :class:`Session` to manage switching
+  between engines. It's a good idea to keep the "binding of tables to engines"
+  in one place - either using :class:`~sqlalchemy.schema.MetaData` only (the
+  :class:`Session` can of course be present, it just has no ``bind``
+  configured), or using :class:`Session` only (the ``bind`` attribute of
+  :class:`~sqlalchemy.schema.MetaData` is left empty).
+
+Specifying the Schema Name
+---------------------------
+
+Some databases support the concept of multiple schemas.  A :class:`~sqlalchemy.schema.Table` can reference this by specifying the ``schema`` keyword argument::
+
+    financial_info = Table('financial_info', meta,
+        Column('id', Integer, primary_key=True),
+        Column('value', String(100), nullable=False),
+        schema='remote_banks'
+    )
+
+Within the :class:`~sqlalchemy.schema.MetaData` collection, this table will be identified by the combination of ``financial_info`` and ``remote_banks``.  If another table called ``financial_info`` is referenced without the ``remote_banks`` schema, it will refer to a different :class:`~sqlalchemy.schema.Table`.  :class:`~sqlalchemy.schema.ForeignKey` objects can specify references to columns in this table using the form ``remote_banks.financial_info.id``.
+
+The ``schema`` argument should be used for any name qualifiers required, including Oracle's "owner" attribute and similar.  It can also accommodate a dotted name for longer qualification schemes::
+
+    schema="dbo.scott"
+
+Backend-Specific Options
+------------------------
+
+:class:`~sqlalchemy.schema.Table` supports database-specific options.   For example, MySQL has different table backend types, including "MyISAM" and "InnoDB".   This can be expressed with :class:`~sqlalchemy.schema.Table` using ``mysql_engine``::
+
+    addresses = Table('engine_email_addresses', meta,
+        Column('address_id', Integer, primary_key = True),
+        Column('remote_user_id', Integer, ForeignKey(users.c.user_id)),
+        Column('email_address', String(20)),
+        mysql_engine='InnoDB'
+    )
+
+Other backends may support table-level options as well. 
+
+API Constructs
+--------------
+
+.. autoclass:: Column
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. autoclass:: MetaData
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. autoclass:: Table
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. autoclass:: ThreadLocalMetaData
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. _metadata_reflection:
+
+Reflecting Database Objects
+===========================
+
+A :class:`~sqlalchemy.schema.Table` object can be instructed to load
+information about itself from the corresponding database schema object already
+existing within the database. This process is called *reflection*. Most simply
+you need only specify the table name, a :class:`~sqlalchemy.schema.MetaData`
+object, and the ``autoload=True`` flag. If the
+:class:`~sqlalchemy.schema.MetaData` is not persistently bound, also add the
+``autoload_with`` argument::
+
+    >>> messages = Table('messages', meta, autoload=True, autoload_with=engine)
+    >>> [c.name for c in messages.columns]
+    ['message_id', 'message_name', 'date']
+
+The above operation will use the given engine to query the database for
+information about the ``messages`` table, and will then generate
+:class:`~sqlalchemy.schema.Column`, :class:`~sqlalchemy.schema.ForeignKey`,
+and other objects corresponding to this information as though the
+:class:`~sqlalchemy.schema.Table` object were hand-constructed in Python.
+
+When tables are reflected, if a given table references another one via foreign
+key, a second :class:`~sqlalchemy.schema.Table` object is created within the
+:class:`~sqlalchemy.schema.MetaData` object representing the connection.
+Below, assume the table ``shopping_cart_items`` references a table named
+``shopping_carts``. Reflecting the ``shopping_cart_items`` table will cause
+the ``shopping_carts`` table to be loaded as well::
+
+    >>> shopping_cart_items = Table('shopping_cart_items', meta, autoload=True, autoload_with=engine)
+    >>> 'shopping_carts' in meta.tables
+    True
+
+The :class:`~sqlalchemy.schema.MetaData` has an interesting "singleton-like"
+behavior such that if you requested both tables individually,
+:class:`~sqlalchemy.schema.MetaData` will ensure that exactly one
+:class:`~sqlalchemy.schema.Table` object is created for each distinct table
+name. The :class:`~sqlalchemy.schema.Table` constructor actually returns the
+already-existing :class:`~sqlalchemy.schema.Table` object if one already
+exists with the given name. Below, we can access the already-generated
+``shopping_carts`` table just by naming it::
+
+    shopping_carts = Table('shopping_carts', meta)
+
+Of course, it's a good idea to use ``autoload=True`` with the above table
+regardless. This is so that the table's attributes will be loaded if they have
+not been already. The autoload operation only occurs for the table if it
+hasn't already been loaded; once loaded, new calls to
+:class:`~sqlalchemy.schema.Table` with the same name will not re-issue any
+reflection queries.
+
+Overriding Reflected Columns
+-----------------------------
+
+Individual columns can be overridden with explicit values when reflecting
+tables; this is handy for specifying custom datatypes, constraints such as
+primary keys that may not be configured within the database, etc.::
+
+    >>> mytable = Table('mytable', meta,
+    ... Column('id', Integer, primary_key=True),   # override reflected 'id' to have primary key
+    ... Column('mydata', Unicode(50)),    # override reflected 'mydata' to be Unicode
+    ... autoload=True)
+
+Reflecting Views
+-----------------
+
+The reflection system can also reflect views. Basic usage is the same as that
+of a table::
+
+    my_view = Table("some_view", metadata, autoload=True)
+
+Above, ``my_view`` is a :class:`~sqlalchemy.schema.Table` object with
+:class:`~sqlalchemy.schema.Column` objects representing the names and types of
+each column within the view "some_view".
+
+It is often desirable to have at least a primary key constraint when
+reflecting a view, if not foreign keys as well. View reflection does not
+extrapolate these constraints.
+
+Use the "override" technique for this, specifying explicitly those columns
+which are part of the primary key or have foreign key constraints::
+
+    my_view = Table("some_view", metadata,
+                    Column("view_id", Integer, primary_key=True),
+                    Column("related_thing", Integer, ForeignKey("othertable.thing_id")),
+                    autoload=True
+    )
+
+Reflecting All Tables at Once
+-----------------------------
+
+The :class:`~sqlalchemy.schema.MetaData` object can also get a listing of
+tables and reflect the full set. This is achieved by using the
+:func:`~sqlalchemy.schema.MetaData.reflect` method. After calling it, all
+located tables are present within the :class:`~sqlalchemy.schema.MetaData`
+object's dictionary of tables::
+
+    meta = MetaData()
+    meta.reflect(bind=someengine)
+    users_table = meta.tables['users']
+    addresses_table = meta.tables['addresses']
+
+``metadata.reflect()`` also provides a handy way to delete all of the rows in a database::
+
+    meta = MetaData()
+    meta.reflect(bind=someengine)
+    for table in reversed(meta.sorted_tables):
+        someengine.execute(table.delete())
+
+Fine Grained Reflection with Inspector
+--------------------------------------
+
+A low level interface which provides a backend-agnostic system of loading
+lists of schema, table, column, and constraint descriptions from a given
+database is also available. This is known as the "Inspector"::
+
+    from sqlalchemy import create_engine
+    from sqlalchemy.engine import reflection
+    engine = create_engine('...')
+    insp = reflection.Inspector.from_engine(engine)
+    print insp.get_table_names()
+
+.. autoclass:: sqlalchemy.engine.reflection.Inspector
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+Column Insert/Update Defaults
+==============================
+
+SQLAlchemy provides a very rich featureset regarding column level events which
+take place during INSERT and UPDATE statements. Options include:
+
+* Scalar values used as defaults during INSERT and UPDATE operations
+* Python functions which execute upon INSERT and UPDATE operations
+* SQL expressions which are embedded in INSERT statements (or in some cases execute beforehand)
+* SQL expressions which are embedded in UPDATE statements
+* Server side default values used during INSERT
+* Markers for server-side triggers used during UPDATE
+
+The general rule for all insert/update defaults is that they only take effect
+if no value for a particular column is passed as an ``execute()`` parameter;
+otherwise, the given value is used.
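+The rule can be sketched as a simple resolution function; ``resolve_value``
+below is a hypothetical illustration of the logic, not an actual SQLAlchemy
+API:

```python
# Hypothetical sketch of the rule above: a value passed as an execute()
# parameter always wins; the column-level default applies only when no
# value was supplied for that column.

def resolve_value(params, key, default):
    if key in params:
        return params[key]
    # a callable default (a Python function) is invoked; a scalar is used as-is
    return default() if callable(default) else default

assert resolve_value({}, 'somecolumn', 12) == 12
assert resolve_value({'somecolumn': 99}, 'somecolumn', 12) == 99
```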
+
+Scalar Defaults
+---------------
+
+The simplest kind of default is a scalar value used as the default value of a column::
+
+    Table("mytable", meta,
+        Column("somecolumn", Integer, default=12)
+    )
+
+Above, the value "12" will be bound as the column value during an INSERT if no
+other value is supplied.
+
+A scalar value may also be associated with an UPDATE statement, though this is
+not very common (as UPDATE statements are usually looking for dynamic
+defaults)::
+
+    Table("mytable", meta,
+        Column("somecolumn", Integer, onupdate=25)
+    )
+
+
+Python-Executed Functions
+-------------------------
+
+The ``default`` and ``onupdate`` keyword arguments also accept Python
+functions. These functions are invoked at the time of insert or update if no
+other value for that column is supplied, and the value returned is used for
+the column's value. Below illustrates a crude "sequence" that assigns an
+incrementing counter to a primary key column::
+
+    # a function which counts upwards
+    i = 0
+    def mydefault():
+        global i
+        i += 1
+        return i
+
+    t = Table("mytable", meta,
+        Column('id', Integer, primary_key=True, default=mydefault),
+    )
+
+It should be noted that for real "incrementing sequence" behavior, the
+built-in capabilities of the database should normally be used, which may
+include sequence objects or other autoincrementing capabilities. For primary
+key columns, SQLAlchemy will in most cases use these capabilities
+automatically. See the API documentation for
+:class:`~sqlalchemy.schema.Column` including the ``autoincrement`` flag, as
+well as the section on :class:`~sqlalchemy.schema.Sequence` later in this
+chapter for background on standard primary key generation techniques.
+
+To illustrate onupdate, we assign the Python ``datetime`` function ``now`` to
+the ``onupdate`` attribute::
+
+    import datetime
+
+    t = Table("mytable", meta,
+        Column('id', Integer, primary_key=True),
+
+        # define 'last_updated' to be populated with datetime.now()
+        Column('last_updated', DateTime, onupdate=datetime.datetime.now),
+    )
+
+When an update statement executes and no value is passed for ``last_updated``,
+the ``datetime.datetime.now()`` Python function is executed and its return
+value used as the value for ``last_updated``. Notice that we provide ``now``
+as the function itself without calling it (i.e. there are no parentheses
+following) - SQLAlchemy will execute the function at the time the statement
+executes.
+
+Context-Sensitive Default Functions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Python functions used by ``default`` and ``onupdate`` may also make use of
+the current statement's context in order to determine a value. The ``context``
+of a statement is an internal SQLAlchemy object which contains all information
+about the statement being executed, including its source expression, the
+parameters associated with it and the cursor. The typical use case for this
+context with regards to default generation is to have access to the other
+values being inserted or updated on the row. To access the context, provide a
+function that accepts a single ``context`` argument::
+
+    def mydefault(context):
+        return context.current_parameters['counter'] + 12
+
+    t = Table('mytable', meta,
+        Column('counter', Integer),
+        Column('counter_plus_twelve', Integer, default=mydefault, onupdate=mydefault)
+    )
+
+Above we illustrate a default function which will execute for all INSERT and
+UPDATE statements where a value for ``counter_plus_twelve`` was otherwise not
+provided, and the value will be that of whatever value is present in the
+execution for the ``counter`` column, plus the number 12.
+
+While the context object passed to the default function has many attributes,
+the ``current_parameters`` member is a special member provided only during the
+execution of a default function for the purposes of deriving defaults from its
+existing values. For a single statement that is executing many sets of bind
+parameters, the user-defined function is called for each set of parameters,
+and ``current_parameters`` will be provided with each individual parameter set
+for each execution.
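+To illustrate the per-parameter-set behavior, here is a plain Python
+simulation; ``apply_context_defaults`` is a hypothetical stand-in for the
+internal mechanics, not a SQLAlchemy function:

```python
# Simulates how a context-sensitive default is invoked once per parameter
# set during an executemany-style execution, and only for those sets which
# do not already supply a value for the column.

def mydefault(current_parameters):
    return current_parameters['counter'] + 12

def apply_context_defaults(parameter_sets):
    for params in parameter_sets:
        if 'counter_plus_twelve' not in params:
            params['counter_plus_twelve'] = mydefault(params)
    return parameter_sets

rows = apply_context_defaults([
    {'counter': 1},
    {'counter': 5},
    {'counter': 10, 'counter_plus_twelve': 0},   # explicit value wins
])
```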
+
+SQL Expressions
+---------------
+
+The "default" and "onupdate" keywords may also be passed SQL expressions, including select statements or direct function calls::
+
+    t = Table("mytable", meta,
+        Column('id', Integer, primary_key=True),
+
+        # define 'create_date' to default to now()
+        Column('create_date', DateTime, default=func.now()),
+
+        # define 'key' to pull its default from the 'keyvalues' table
+        Column('key', String(20), default=keyvalues.select(keyvalues.c.type=='type1', limit=1)),
+
+        # define 'last_modified' to use the current_timestamp SQL function on update
+        Column('last_modified', DateTime, onupdate=func.utc_timestamp())
+        )
+
+Above, the ``create_date`` column will be populated with the result of the
+``now()`` SQL function (which, depending on backend, compiles into ``NOW()``
+or ``CURRENT_TIMESTAMP`` in most cases) during an INSERT statement, and the
+``key`` column with the result of a SELECT subquery from another table. The
+``last_modified`` column will be populated with the value of
+``UTC_TIMESTAMP()``, a function specific to MySQL, when an UPDATE statement is
+emitted for this table.
+
+Note that when using ``func`` functions, unlike when using Python ``datetime``
+functions we *do* call the function, i.e. with parentheses "()" - this is
+because what we want in this case is the return value of the function, which
+is the SQL expression construct that will be rendered into the INSERT or
+UPDATE statement.
+
+The above SQL functions are usually executed "inline" with the INSERT or
+UPDATE statement being executed, meaning, a single statement is executed which
+embeds the given expressions or subqueries within the VALUES or SET clause of
+the statement. In some cases, however, the function is "pre-executed" in a
+SELECT statement of its own beforehand. This happens when all of the following
+are true:
+
+* the column is a primary key column
+
+* the database dialect does not support a usable ``cursor.lastrowid`` accessor
+  (or equivalent); this currently includes PostgreSQL, Oracle, and Firebird, as
+  well as some MySQL dialects.
+
+* the dialect does not support the "RETURNING" clause or similar, or the
+  ``implicit_returning`` flag is set to ``False`` for the dialect. Dialects
+  which support RETURNING currently include PostgreSQL, Oracle, Firebird, and
+  MS-SQL.
+
+* the statement is a single execution, i.e. only supplies one set of
+  parameters and doesn't use "executemany" behavior
+
+* the ``inline=True`` flag is not set on the
+  :class:`~sqlalchemy.sql.expression.Insert()` or
+  :class:`~sqlalchemy.sql.expression.Update()` construct, and the statement has
+  not defined an explicit ``returning()`` clause.
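+The conditions above can be summarized as a single predicate; ``preexecutes``
+below is a hypothetical illustration of the combined logic, not SQLAlchemy's
+actual internal check (``returning_usable`` stands for "the dialect supports
+RETURNING and ``implicit_returning`` has not been disabled"):

```python
# A default "pre-executes" in its own SELECT only when every one of the
# conditions listed above holds.  This is an illustrative summary, not an
# actual SQLAlchemy function.

def preexecutes(is_primary_key, has_lastrowid, returning_usable,
                is_executemany, inline, has_returning_clause):
    return (is_primary_key
            and not has_lastrowid
            and not returning_usable
            and not is_executemany
            and not inline
            and not has_returning_clause)

# e.g. a primary key default, on a dialect with neither lastrowid nor a
# usable RETURNING clause, in a single non-inline execution:
assert preexecutes(True, False, False, False, False, False)
```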
+
+Whether or not the default generation clause "pre-executes" is not something
+that normally needs to be considered, unless it is being addressed for
+performance reasons.
+
+When the statement is executed with a single set of parameters (that is, it is
+not an "executemany" style execution), the returned
+:class:`~sqlalchemy.engine.base.ResultProxy` will contain a collection
+accessible via ``result.postfetch_cols()`` which contains a list of all
+:class:`~sqlalchemy.schema.Column` objects which had an inline-executed
+default. Similarly, all parameters which were bound to the statement,
+including all Python and SQL expressions which were pre-executed, are present
+in the ``last_inserted_params()`` or ``last_updated_params()`` collections on
+:class:`~sqlalchemy.engine.base.ResultProxy`. The ``inserted_primary_key``
+collection contains a list of primary key values for the row inserted (a list
+so that single-column and composite-column primary keys are represented in the
+same format).
+
+Server Side Defaults
+--------------------
+
+A variant on the SQL expression default is the ``server_default``, which gets
+placed in the CREATE TABLE statement during a ``create()`` operation:
+
+.. sourcecode:: python+sql
+
+    t = Table('test', meta,
+        Column('abc', String(20), server_default='abc'),
+        Column('created_at', DateTime, server_default=text("sysdate"))
+    )
+
+A create call for the above table will produce::
+
+    CREATE TABLE test (
+        abc varchar(20) default 'abc',
+        created_at datetime default sysdate
+    )
+
+The behavior of ``server_default`` is similar to that of a regular SQL
+default; if it's placed on a primary key column for a database which doesn't
+have a way to "postfetch" the ID, and the statement is not "inlined", the SQL
+expression is pre-executed; otherwise, SQLAlchemy lets the default fire off on
+the database side normally.
+
+Triggered Columns
+------------------
+
+Columns with values set by a database trigger or other external process may be
+called out with a marker::
+
+    t = Table('test', meta,
+        Column('abc', String(20), server_default=FetchedValue()),
+        Column('def', String(20), server_onupdate=FetchedValue())
+    )
+
+These markers do not emit a "default" clause when the table is created,
+however they do set the same internal flags as a static ``server_default``
+clause, providing hints to higher-level tools that a "post-fetch" of these
+rows should be performed after an insert or update.
+
+Defining Sequences
+-------------------
+
+SQLAlchemy represents database sequences using the
+:class:`~sqlalchemy.schema.Sequence` object, which is considered to be a
+special case of "column default". It only has an effect on databases which
+have explicit support for sequences, which currently includes Postgresql,
+Oracle, and Firebird. The :class:`~sqlalchemy.schema.Sequence` object is
+otherwise ignored.
+
+The :class:`~sqlalchemy.schema.Sequence` may be placed on any column as a
+"default" generator to be used during INSERT operations, and can also be
+configured to fire off during UPDATE operations if desired. It is most
+commonly used in conjunction with a single integer primary key column::
+
+    table = Table("cartitems", meta,
+        Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
+        Column("description", String(40)),
+        Column("createdate", DateTime())
+    )
+
+Above, the table "cartitems" is associated with a sequence named
+"cart_id_seq". When INSERT statements take place for "cartitems" and no value
+is passed for the "cart_id" column, the "cart_id_seq" sequence will be used to
+generate a value.
+
+When the :class:`~sqlalchemy.schema.Sequence` is associated with a table,
+CREATE and DROP statements issued for that table will also issue CREATE/DROP
+for the sequence object as well, thus "bundling" the sequence object with its
+parent table.
+
+The :class:`~sqlalchemy.schema.Sequence` object also implements special
+functionality to accommodate Postgresql's SERIAL datatype. The SERIAL type in
+PG automatically generates a sequence that is used implicitly during inserts.
+This means that if a :class:`~sqlalchemy.schema.Table` object defines a
+:class:`~sqlalchemy.schema.Sequence` on its primary key column so that it
+works with Oracle and Firebird, the :class:`~sqlalchemy.schema.Sequence` would
+get in the way of the "implicit" sequence that PG would normally use. For this
+use case, add the flag ``optional=True`` to the
+:class:`~sqlalchemy.schema.Sequence` object - this indicates that the
+:class:`~sqlalchemy.schema.Sequence` should only be used if the database
+provides no other option for generating primary key identifiers.
+
+The :class:`~sqlalchemy.schema.Sequence` object also has the ability to be
+executed standalone like a SQL expression, which has the effect of calling its
+"next value" function::
+
+    seq = Sequence('some_sequence')
+    nextid = connection.execute(seq)
+
+API Constructs
+--------------
+
+.. autoclass:: ColumnDefault
+    :show-inheritance:
+
+.. autoclass:: DefaultClause
+    :show-inheritance:
+
+.. autoclass:: DefaultGenerator
+    :show-inheritance:
+
+.. autoclass:: FetchedValue
+    :show-inheritance:
+
+.. autoclass:: PassiveDefault
+    :show-inheritance:
+
+.. autoclass:: Sequence
+    :show-inheritance:
+
+Defining Constraints and Indexes
+=================================
+
+.. _metadata_foreignkeys:
+
+Defining Foreign Keys
+---------------------
+
+A *foreign key* in SQL is a table-level construct that constrains one or more
+columns in that table to only allow values that are present in a different set
+of columns, typically but not always located on a different table. We call the
+columns which are constrained the *foreign key* columns and the columns which
+they are constrained towards the *referenced* columns. The referenced columns
+almost always define the primary key for their owning table, though there are
+exceptions to this. The foreign key is the "joint" that connects together
+pairs of rows which have a relationship with each other, and SQLAlchemy
+assigns very deep importance to this concept in virtually every area of its
+operation.
+
+In SQLAlchemy as well as in DDL, foreign key constraints can be defined as
+additional attributes within the table clause, or for single-column foreign
+keys they may optionally be specified within the definition of a single
+column. The single column foreign key is more common, and at the column level
+is specified by constructing a :class:`~sqlalchemy.schema.ForeignKey` object
+as an argument to a :class:`~sqlalchemy.schema.Column` object::
+
+    user_preference = Table('user_preference', metadata,
+        Column('pref_id', Integer, primary_key=True),
+        Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
+        Column('pref_name', String(40), nullable=False),
+        Column('pref_value', String(100))
+    )
+
+Above, we define a new table ``user_preference`` for which each row must
+contain a value in the ``user_id`` column that also exists in the ``user``
+table's ``user_id`` column.
+
+The argument to :class:`~sqlalchemy.schema.ForeignKey` is most commonly a
+string of the form *<tablename>.<columnname>*, or for a table in a remote
+schema or "owner" of the form *<schemaname>.<tablename>.<columnname>*. It may
+also be an actual :class:`~sqlalchemy.schema.Column` object, which as we'll
+see later is accessed from an existing :class:`~sqlalchemy.schema.Table`
+object via its ``c`` collection::
+
+    ForeignKey(user.c.user_id)
+
+The advantage to using a string is that the in-python linkage between ``user``
+and ``user_preference`` is resolved only when first needed, so that table
+objects can be easily spread across multiple modules and defined in any order.
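+The deferred-resolution idea can be sketched in plain Python;
+``LazyForeignKey`` below is a hypothetical illustration, not a SQLAlchemy
+class:

```python
# Hypothetical sketch of deferred resolution: a string target such as
# "user.user_id" is stored as-is and looked up in a table registry only
# when first needed, so the referenced table may be defined later and in
# a different module.

class LazyForeignKey(object):
    def __init__(self, target):
        self.target = target          # e.g. "user.user_id"

    def resolve(self, tables):
        # split "table.column" and look both parts up at resolution time
        table_name, col_name = self.target.split('.')
        return tables[table_name][col_name]

fk = LazyForeignKey('user.user_id')
# the 'user' table may be registered after the foreign key is declared
tables = {'user': {'user_id': 'the user.user_id column'}}
assert fk.resolve(tables) == 'the user.user_id column'
```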
+
+Foreign keys may also be defined at the table level, using the
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` object. This object can
+describe a single- or multi-column foreign key. A multi-column foreign key is
+known as a *composite* foreign key, and almost always references a table that
+has a composite primary key. Below we define a table ``invoice`` which has a
+composite primary key::
+
+    invoice = Table('invoice', metadata,
+        Column('invoice_id', Integer, primary_key=True),
+        Column('ref_num', Integer, primary_key=True),
+        Column('description', String(60), nullable=False)
+    )
+
+And then a table ``invoice_item`` with a composite foreign key referencing
+``invoice``::
+
+    invoice_item = Table('invoice_item', metadata,
+        Column('item_id', Integer, primary_key=True),
+        Column('item_name', String(60), nullable=False),
+        Column('invoice_id', Integer, nullable=False),
+        Column('ref_num', Integer, nullable=False),
+        ForeignKeyConstraint(['invoice_id', 'ref_num'], ['invoice.invoice_id', 'invoice.ref_num'])
+    )
+
+It's important to note that the
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` is the only way to define a
+composite foreign key. While we could also have placed individual
+:class:`~sqlalchemy.schema.ForeignKey` objects on both the
+``invoice_item.invoice_id`` and ``invoice_item.ref_num`` columns, SQLAlchemy
+would not be aware that these two values should be paired together - it would
+be two individual foreign key constraints instead of a single composite
+foreign key referencing two columns.
+
+Creating/Dropping Foreign Key Constraints via ALTER
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In all the above examples, the :class:`~sqlalchemy.schema.ForeignKey` object
+causes the "REFERENCES" keyword to be added inline to a column definition
+within a "CREATE TABLE" statement when
+:func:`~sqlalchemy.schema.MetaData.create_all` is issued, and
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` invokes the "CONSTRAINT"
+keyword inline with "CREATE TABLE". There are some cases where this is
+undesirable, particularly when two tables reference each other mutually, each
+with a foreign key referencing the other. In such a situation at least one of
+the foreign key constraints must be generated after both tables have been
+built. To support such a scheme, :class:`~sqlalchemy.schema.ForeignKey` and
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` offer the flag
+``use_alter=True``. When using this flag, the constraint will be generated
+using a definition similar to "ALTER TABLE <tablename> ADD CONSTRAINT <name>
+...". Since a name is required, the ``name`` attribute must also be specified.
+For example::
+
+    node = Table('node', meta,
+        Column('node_id', Integer, primary_key=True),
+        Column('primary_element', Integer,
+            ForeignKey('element.element_id', use_alter=True, name='fk_node_element_id')
+        )
+    )
+
+    element = Table('element', meta,
+        Column('element_id', Integer, primary_key=True),
+        Column('parent_node_id', Integer),
+        ForeignKeyConstraint(
+            ['parent_node_id'],
+            ['node.node_id'],
+            use_alter=True,
+            name='fk_element_parent_node_id'
+        )
+    )
+
+ON UPDATE and ON DELETE
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Most databases support *cascading* of foreign key values, that is, when a
+parent row is updated the new value is placed in child rows, or when the
+parent row is deleted all corresponding child rows are set to null or deleted.
+In data definition language these are specified using phrases like "ON UPDATE
+CASCADE", "ON DELETE CASCADE", and "ON DELETE SET NULL", corresponding to
+foreign key constraints. The phrase after "ON UPDATE" or "ON DELETE" may also
+allow other phrases that are specific to the database in use. The
+:class:`~sqlalchemy.schema.ForeignKey` and
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` objects support the
+generation of this clause via the ``onupdate`` and ``ondelete`` keyword
+arguments. The value is any string which will be output after the appropriate
+"ON UPDATE" or "ON DELETE" phrase::
+
+    child = Table('child', meta,
+        Column('id', Integer,
+                ForeignKey('parent.id', onupdate="CASCADE", ondelete="CASCADE"),
+                primary_key=True
+        )
+    )
+