Commits

Mike Bayer  committed 08c63ce

add changelogs

  • Parent commits d576b5f


Files changed (8)

File doc/build/builder/changelog.py

 import re
 from sphinx.util.compat import Directive
 from docutils.statemachine import StringList
-from docutils import nodes
+from docutils import nodes, utils
 import textwrap
 import itertools
 import collections
         [line.strip() for line in textwrap.dedent(text).split("\n")]
     )
 
+
+def make_ticket_link(name, rawtext, text, lineno, inliner,
+                      options={}, content=[]):
+    env = inliner.document.settings.env
+    render_ticket = env.config.changelog_render_ticket or "%s"
+    prefix = "#%s"
+    if render_ticket:
+        ref = render_ticket % text
+        node = nodes.reference(rawtext, prefix % text, refuri=ref, **options)
+    else:
+        node = nodes.Text(prefix % text, prefix % text)
+    return [node], []
+
 def setup(app):
     app.add_directive('changelog', ChangeLogDirective)
     app.add_directive('change', ChangeDirective)
             None,
             'env'
         )
+    app.add_role('ticket', make_ticket_link)
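With the role registered above, link text and target are produced by plain ``%s`` substitution. A minimal sketch of that formatting, assuming a hypothetical ``changelog_render_ticket`` value (the real one comes from the project's Sphinx config):

```python
# Sketch of what the new role emits, using the same "%s" substitution
# as make_ticket_link.  The changelog_render_ticket value below is a
# hypothetical example; the real value comes from the Sphinx config.
render_ticket = "http://www.sqlalchemy.org/trac/ticket/%s"

ticket = "2345"
label = "#%s" % ticket           # link text
refuri = render_ticket % ticket  # link target

assert label == "#2345"
assert refuri == "http://www.sqlalchemy.org/trac/ticket/2345"
```

In a changelog ``.rst`` file, this would then be written as ``:ticket:`2345```.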

File doc/build/changelog/index.rst

 Migration Guides
 ----------------
 
-SQLAlchemy migration guides are currently available on the wiki.
+SQLAlchemy migration guides are now available within the main documentation.
 
-* `Version 0.8 <http://www.sqlalchemy.org/trac/wiki/08Migration>`_
+.. toctree::
+	:maxdepth: 1
 
-* `Version 0.7 <http://www.sqlalchemy.org/trac/wiki/07Migration>`_
-
-* `Version 0.6 <http://www.sqlalchemy.org/trac/wiki/06Migration>`_
-
-* `Version 0.5 <http://www.sqlalchemy.org/trac/wiki/05Migration>`_
+	migration_08
+	migration_07
+	migration_06
+	migration_05
+	migration_04
 
 Change logs
 -----------

File doc/build/changelog/migration_04.rst

+=============================
+What's new in SQLAlchemy 0.4?
+=============================
+
+.. admonition:: About this Document
+
+    This document describes changes between SQLAlchemy version 0.3,
+    last released October 14, 2007, and SQLAlchemy version 0.4,
+    last released October 12, 2008.
+
+    Document date:  March 21, 2008
+
+First Things First
+==================
+
+If you're using any ORM features, make sure you import from
+``sqlalchemy.orm``:
+
+::
+
+    from sqlalchemy import *
+    from sqlalchemy.orm import *
+
+Secondly, anywhere you used to say ``engine=``,
+``connectable=``, ``bind_to=``, ``something.engine``,
+``metadata.connect()``, use ``bind``:
+
+::
+
+    myengine = create_engine('sqlite://')
+
+    meta = MetaData(myengine)
+
+    meta2 = MetaData()
+    meta2.bind = myengine
+
+    session = create_session(bind=myengine)
+
+    statement = select([table], bind=myengine)
+
+Got those?  Good!  You're now (95%) 0.4 compatible.  If
+you're using 0.3.10, you can make these changes immediately;
+they'll work there too.
+
+Module Imports
+==============
+
+In 0.3, "``from sqlalchemy import *``" would import all of
+sqlalchemy's sub-modules into your namespace. Version 0.4 no
+longer imports sub-modules into the namespace. This may mean
+you need to add extra imports into your code.
+
+In 0.3, this code worked:
+
+::
+
+    from sqlalchemy import *
+
+    class UTCDateTime(types.TypeDecorator):
+        pass
+
+In 0.4, one must do:
+
+::
+
+    from sqlalchemy import *
+    from sqlalchemy import types
+
+    class UTCDateTime(types.TypeDecorator):
+        pass
+
+Object Relational Mapping
+=========================
+
+Querying
+--------
+
+New Query API
+^^^^^^^^^^^^^
+
+Query is standardized on the generative interface (old
+interface is still there, just deprecated).   While most of
+the generative interface is available in 0.3, the 0.4 Query
+has the inner guts to match the generative outside, and has
+a lot more tricks.  All result narrowing is via ``filter()``
+and ``filter_by()``, limiting/offset is either through array
+slices or ``limit()``/``offset()``, joining is via
+``join()`` and ``outerjoin()`` (or more manually, through
+``select_from()`` as well as manually-formed criteria).
+
+To avoid deprecation warnings, you must make some changes to
+your 0.3 code.
+
+``User.query.get_by(**kwargs)`` becomes:
+
+::
+
+    User.query.filter_by(**kwargs).first()
+
+``User.query.select_by(**kwargs)`` becomes:
+
+::
+
+    User.query.filter_by(**kwargs).all()
+
+``User.query.select()`` becomes:
+
+::
+
+    User.query.filter(xxx).all()
+
+New Property-Based Expression Constructs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By far the most palpable difference within the ORM is that
+you can now construct your query criterion using class-based
+attributes directly.  The ".c." prefix is no longer needed
+when working with mapped classes:
+
+::
+
+    session.query(User).filter(and_(User.name == 'fred', User.id > 17))
+
+While simple column-based comparisons are no big deal, the
+class attributes have some new "higher level" constructs
+available, including what was previously only available in
+``filter_by()``:
+
+::
+
+    # comparison of scalar relations to an instance
+    filter(Address.user == user)
+
+    # return all users who contain a particular address
+    filter(User.addresses.contains(address))
+
+    # return all users who *don't* contain the address
+    filter(~User.addresses.contains(address))
+
+    # return all users who contain a particular address with
+    # the email_address like '%foo%'
+    filter(User.addresses.any(Address.email_address.like('%foo%')))
+
+    # same, email address equals 'foo@bar.com'.  can fall back to keyword
+    # args for simple comparisons
+    filter(User.addresses.any(email_address='foo@bar.com'))
+
+    # return all Addresses whose user attribute has the username 'ed'
+    filter(Address.user.has(name='ed'))
+
+    # return all Addresses whose user attribute has the username 'ed'
+    # and an id > 5 (mixing clauses with kwargs)
+    filter(Address.user.has(User.id > 5, name='ed'))
+
+The ``Column`` collection remains available on mapped
+classes in the ``.c`` attribute.  Note that property-based
+expressions are only available with mapped properties of
+mapped classes.  ``.c`` is still used to access columns in
+regular tables and selectable objects produced from SQL
+Expressions.
+
+Automatic Join Aliasing
+^^^^^^^^^^^^^^^^^^^^^^^
+
+We've had join() and outerjoin() for a while now:
+
+::
+
+    session.query(Order).join('items')...
+
+Now you can alias them:
+
+::
+
+    session.query(Order).join('items', aliased=True).\
+       filter(Item.name == 'item 1').join('items', aliased=True).\
+       filter(Item.name == 'item 3')
+
+The above will create two joins from orders->items using
+aliases.  the ``filter()`` call subsequent to each will
+adjust its table criterion to that of the alias.  To get at
+the ``Item`` objects, use ``add_entity()`` and target each
+join with an ``id``:
+
+::
+
+    session.query(Order).join('items', id='j1', aliased=True).\
+        filter(Item.name == 'item 1').join('items', aliased=True, id='j2').\
+        filter(Item.name == 'item 3').add_entity(Item, id='j1').add_entity(Item, id='j2')
+
+Returns tuples in the form: ``(Order, Item, Item)``.
+
+Self-referential Queries
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+So query.join() can make aliases now.  What does that give
+us ?  Self-referential queries !   Joins can be done without
+any ``Alias`` objects:
+
+::
+
+    # standard self-referential TreeNode mapper with backref
+    mapper(TreeNode, tree_nodes, properties={
+        'children':relation(TreeNode, backref=backref('parent', remote_side=tree_nodes.c.id))
+    })
+
+    # query for node with child containing "bar" two levels deep
+    session.query(TreeNode).join(["children", "children"], aliased=True).filter_by(name='bar')
+
+To add criterion for each table along the way in an aliased
+join, you can use ``from_joinpoint`` to keep joining against
+the same line of aliases:
+
+::
+
+    # search for the treenode along the path "n1/n12/n122"
+
+    # first find a Node with name="n122"
+    q = sess.query(Node).filter_by(name='n122')
+
+    # then join to parent with "n12"
+    q = q.join('parent', aliased=True).filter_by(name='n12')
+
+    # join again to the next parent with 'n1'.  use 'from_joinpoint'
+    # so we join from the previous point, instead of joining off the
+    # root table
+    q = q.join('parent', aliased=True, from_joinpoint=True).filter_by(name='n1')
+
+    node = q.first()
+
+``query.populate_existing()``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The eager version of ``query.load()`` (or
+``session.refresh()``).  Every instance loaded from the
+query, including all eagerly loaded items, is refreshed
+immediately if already present in the session:
+
+::
+
+    session.query(Blah).populate_existing().all()
+
+Relations
+---------
+
+SQL Clauses Embedded in Updates/Inserts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For inline execution of SQL clauses, embedded right in the
+UPDATE or INSERT, during a ``flush()``:
+
+::
+
+
+    myobject.foo = mytable.c.value + 1
+
+    user.pwhash = func.md5(password)
+
+    order.hash = text("select hash from hashing_table")
+
+The column-attribute is set up with a deferred loader after
+the operation, so that it issues the SQL to load the new
+value when you next access it.
+
+Self-referential and Cyclical Eager Loading
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Since our alias-fu has improved, ``relation()`` can join
+along the same table \*any number of times*; you tell it how
+deep you want to go.  Let's show the self-referential
+``TreeNode`` more clearly:
+
+::
+
+    nodes = Table('nodes', metadata,
+         Column('id', Integer, primary_key=True),
+         Column('parent_id', Integer, ForeignKey('nodes.id')),
+         Column('name', String(30)))
+
+    class TreeNode(object):
+        pass
+
+    mapper(TreeNode, nodes, properties={
+        'children':relation(TreeNode, lazy=False, join_depth=3)
+    })
+
+So what happens when we say:
+
+::
+
+    create_session().query(TreeNode).all()
+
+?  A join along aliases, three levels deep off the parent:
+
+::
+
+    SELECT
+    nodes_3.id AS nodes_3_id, nodes_3.parent_id AS nodes_3_parent_id, nodes_3.name AS nodes_3_name,
+    nodes_2.id AS nodes_2_id, nodes_2.parent_id AS nodes_2_parent_id, nodes_2.name AS nodes_2_name,
+    nodes_1.id AS nodes_1_id, nodes_1.parent_id AS nodes_1_parent_id, nodes_1.name AS nodes_1_name,
+    nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id, nodes.name AS nodes_name
+    FROM nodes LEFT OUTER JOIN nodes AS nodes_1 ON nodes.id = nodes_1.parent_id
+    LEFT OUTER JOIN nodes AS nodes_2 ON nodes_1.id = nodes_2.parent_id
+    LEFT OUTER JOIN nodes AS nodes_3 ON nodes_2.id = nodes_3.parent_id
+    ORDER BY nodes.oid, nodes_1.oid, nodes_2.oid, nodes_3.oid
+
+Notice the nice clean alias names too.  The joining doesn't
+care if it's against the same immediate table or some other
+object which then cycles back to the beginning.  Any kind
+of chain of eager loads can cycle back onto itself when
+``join_depth`` is specified.  When not present, eager
+loading automatically stops when it hits a cycle.
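The alias chaining above can be sketched in a few lines of plain Python (an illustration of the pattern only, not SQLAlchemy's internals):

```python
# Illustration of the pattern only (not SQLAlchemy's internals): a
# join_depth of N produces N chained self-joins, each against a
# numbered alias of the same table, as in the SELECT shown above.
def eager_join_chain(table, join_depth):
    parent = table
    clauses = ["FROM %s" % table]
    for i in range(1, join_depth + 1):
        alias = "%s_%d" % (table, i)
        clauses.append(
            "LEFT OUTER JOIN %s AS %s ON %s.id = %s.parent_id"
            % (table, alias, parent, alias))
        parent = alias
    return " ".join(clauses)

sql = eager_join_chain("nodes", 3)
assert sql.count("LEFT OUTER JOIN") == 3
assert "ON nodes_2.id = nodes_3.parent_id" in sql
```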
+
+Composite Types
+^^^^^^^^^^^^^^^
+
+This is one from the Hibernate camp.  Composite Types let
+you define a custom datatype that is composed of more than
+one column (or one column, if you wanted).   Let's define a
+new type, ``Point``, that stores an x/y coordinate:
+
+::
+
+    class Point(object):
+        def __init__(self, x, y):
+            self.x = x
+            self.y = y
+        def __composite_values__(self):
+            return self.x, self.y
+        def __eq__(self, other):
+            return other.x == self.x and other.y == self.y
+        def __ne__(self, other):
+            return not self.__eq__(other)
+
+The way the ``Point`` object is defined is specific to a
+custom type; constructor takes a list of arguments, and the
+``__composite_values__()`` method produces a sequence of
+those arguments.  The order will match up to our mapper, as
+we'll see in a moment.
+
+Let's create a table of vertices storing two points per row:
+
+::
+
+    vertices = Table('vertices', metadata,
+        Column('id', Integer, primary_key=True),
+        Column('x1', Integer),
+        Column('y1', Integer),
+        Column('x2', Integer),
+        Column('y2', Integer),
+        )
+
+Then, map it!  We'll create a ``Vertex`` object which
+stores two ``Point`` objects:
+
+::
+
+    class Vertex(object):
+        def __init__(self, start, end):
+            self.start = start
+            self.end = end
+
+    mapper(Vertex, vertices, properties={
+        'start':composite(Point, vertices.c.x1, vertices.c.y1),
+        'end':composite(Point, vertices.c.x2, vertices.c.y2)
+    })
+
+Once you've set up your composite type, it's usable just
+like any other type:
+
+::
+
+
+    v = Vertex(Point(3, 4), Point(26,15))
+    session.save(v)
+    session.flush()
+
+    # works in queries too
+    q = session.query(Vertex).filter(Vertex.start == Point(3, 4))
+
+If you'd like to define the way the mapped attributes
+generate SQL clauses when used in expressions, create your
+own ``sqlalchemy.orm.PropComparator`` subclass, defining any
+of the common operators (like ``__eq__()``, ``__le__()``,
+etc.), and send it in to ``composite()``.  Composite types
+work as primary keys too, and are usable in ``query.get()``:
+
+::
+
+    # a Document class which uses a composite Version
+    # object as primary key
+    document = query.get(Version(1, 'a'))
+
+``dynamic_loader()`` relations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A ``relation()`` that returns a live ``Query`` object for
+all read operations.  Write operations are limited to just
+``append()`` and ``remove()``; changes to the collection are
+not visible until the session is flushed.  This feature is
+particularly handy with an "autoflushing" session which will
+flush before each query.
+
+::
+
+    mapper(Foo, foo_table, properties={
+        'bars':dynamic_loader(Bar, backref='foo', <other relation() opts>)
+    })
+
+    session = create_session(autoflush=True)
+    foo = session.query(Foo).first()
+
+    foo.bars.append(Bar(name='lala'))
+
+    for bar in foo.bars.filter(Bar.name=='lala'):
+        print bar
+
+    session.commit()
+
+New Options: ``undefer_group()``, ``eagerload_all()``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A couple of handy query options.
+``undefer_group()`` marks a whole group of "deferred"
+columns as undeferred:
+
+::
+
+    mapper(Class, table, properties={
+        'foo' : deferred(table.c.foo, group='group1'),
+        'bar' : deferred(table.c.bar, group='group1'),
+        'bat' : deferred(table.c.bat, group='group1'),
+    })
+
+    session.query(Class).options(undefer_group('group1')).filter(...).all()
+
+and ``eagerload_all()`` sets a chain of attributes to be
+eager in one pass:
+
+::
+
+    mapper(Foo, foo_table, properties={
+       'bar':relation(Bar)
+    })
+    mapper(Bar, bar_table, properties={
+       'bat':relation(Bat)
+    })
+    mapper(Bat, bat_table)
+
+    # eager load bar and bat
+    session.query(Foo).options(eagerload_all('bar.bat')).filter(...).all()
+
+New Collection API
+^^^^^^^^^^^^^^^^^^
+
+Collections are no longer proxied by an
+``InstrumentedList`` proxy, and access to members, methods
+and attributes is direct.   Decorators now intercept objects
+entering and leaving the collection, and it is now possible
+to easily write a custom collection class that manages its
+own membership.  Flexible decorators also replace the named
+method interface of custom collections in 0.3, allowing any
+class to be easily adapted to use as a collection container.
+
+Dictionary-based collections are now much easier to use and
+fully ``dict``-like.  Changing ``__iter__`` is no longer
+needed for ``dict``s, and new built-in ``dict`` types cover
+many needs:
+
+::
+
+    # use a dictionary relation keyed by a column
+    relation(Item, collection_class=column_mapped_collection(items.c.keyword))
+    # or named attribute
+    relation(Item, collection_class=attribute_mapped_collection('keyword'))
+    # or any function you like
+    relation(Item, collection_class=mapped_collection(lambda entity: entity.a + entity.b))
+
+Existing 0.3 ``dict``-like and freeform object derived
+collection classes will need to be updated for the new API.
+In most cases this is simply a matter of adding a couple
+decorators to the class definition.
+
+Mapped Relations from External Tables/Subqueries
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This feature quietly appeared in 0.3 but has been improved
+in 0.4 thanks to better ability to convert subqueries
+against a table into subqueries against an alias of that
+table; this is key for eager loading, aliased joins in
+queries, etc.  It reduces the need to create mappers against
+select statements when you just need to add some extra
+columns or subqueries:
+
+::
+
+    mapper(User, users, properties={
+           'fullname': column_property((users.c.firstname + users.c.lastname).label('fullname')),
+           'numposts': column_property(
+                select([func.count(1)], users.c.id==posts.c.user_id).correlate(users).label('posts')
+           )
+        })
+
+a typical query looks like:
+
+::
+
+    SELECT (SELECT count(1) FROM posts WHERE users.id = posts.user_id) AS count,
+    users.firstname || users.lastname AS fullname,
+    users.id AS users_id, users.firstname AS users_firstname, users.lastname AS users_lastname
+    FROM users ORDER BY users.oid
+
+Horizontal Scaling (Sharding) API
+---------------------------------
+
+See ``examples/sharding/attribute_shard.py`` in the source
+distribution.
+
+Sessions
+--------
+
+New Session Create Paradigm; SessionContext, assignmapper Deprecated
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+That's right, the whole shebang is being replaced with two
+configurational functions.  Using both will produce the most
+0.1-ish feel we've had since 0.1 (i.e., the least amount of
+typing).
+
+Configure your own ``Session`` class right where you define
+your ``engine`` (or anywhere):
+
+::
+
+    from sqlalchemy import create_engine
+    from sqlalchemy.orm import sessionmaker
+
+    engine = create_engine('myengine://')
+    Session = sessionmaker(bind=engine, autoflush=True, transactional=True)
+
+    # use the new Session() freely
+    sess = Session()
+    sess.save(someobject)
+    sess.flush()
+
+
+If you need to post-configure your Session, say with an
+engine, add it later with ``configure()``:
+
+::
+
+    Session.configure(bind=create_engine(...))
+
+All the behaviors of ``SessionContext`` and the ``query``
+and ``__init__`` methods of ``assignmapper`` are moved into
+the new ``scoped_session()`` function, which is compatible
+with both ``sessionmaker`` as well as ``create_session()``:
+
+::
+
+    from sqlalchemy.orm import scoped_session, sessionmaker
+
+    Session = scoped_session(sessionmaker(autoflush=True, transactional=True))
+    Session.configure(bind=engine)
+
+    u = User(name='wendy')
+
+    sess = Session()
+    sess.save(u)
+    sess.commit()
+
+    # Session constructor is thread-locally scoped.  Everyone gets the same
+    # Session in the thread when scope="thread".
+    sess2 = Session()
+    assert sess is sess2
+
+
+When using a thread-local ``Session``, the returned class
+has all of ``Session's`` interface implemented as
+classmethods, and "assignmapper"'s functionality is
+available using the ``mapper`` classmethod.  Just like the
+old ``objectstore`` days....
+
+::
+
+
+    # "assignmapper"-like functionality available via ScopedSession.mapper
+    Session.mapper(User, users_table)
+
+    u = User(name='wendy')
+
+    Session.commit()
+
+
+Sessions are again Weak Referencing By Default
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``weak_identity_map`` flag is now set to ``True`` by default
+on Session.  Instances which are externally dereferenced and
+fall out of scope are removed from the session
+automatically.   However, items which have "dirty" changes
+present will remain strongly referenced until those changes
+are flushed, at which point the object reverts to being weakly
+referenced (this works for 'mutable' types, like picklable
+attributes, as well).  Setting ``weak_identity_map`` to
+``False`` restores the old strong-referencing behavior for
+those of you using the session like a cache.
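The behavior can be illustrated with the standard library's ``weakref`` module (a sketch of the idea only, not SQLAlchemy code):

```python
import gc
import weakref

# Pure-stdlib illustration of the idea (not SQLAlchemy code): a
# weak-valued identity map drops an entry once nothing else references
# the instance, just as the weak-referencing Session does.
class User(object):
    pass

identity_map = weakref.WeakValueDictionary()

u = User()
identity_map[(User, (1,))] = u
assert (User, (1,)) in identity_map

del u          # the external reference falls out of scope
gc.collect()   # make collection deterministic for the example
assert (User, (1,)) not in identity_map
```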
+
+Auto-Transactional Sessions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+As you might have noticed above, we are calling ``commit()``
+on ``Session``.  The flag ``transactional=True`` means the
+``Session`` is always in a transaction; ``commit()``
+makes changes permanent.
+
+Auto-Flushing Sessions
+^^^^^^^^^^^^^^^^^^^^^^
+
+Also, ``autoflush=True`` means the ``Session`` will
+``flush()`` before each ``query`` as well as when you call
+``flush()`` or ``commit()``.  So now this will work:
+
+::
+
+    Session = sessionmaker(bind=engine, autoflush=True, transactional=True)
+
+    u = User(name='wendy')
+
+    sess = Session()
+    sess.save(u)
+
+    # wendy is flushed, comes right back from a query
+    wendy = sess.query(User).filter_by(name='wendy').one()
+
+Transactional methods moved onto sessions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``commit()`` and ``rollback()``, as well as ``begin()`` are
+now directly on ``Session``.  No more need to use
+``SessionTransaction`` for anything (it remains in the
+background).
+
+::
+
+    Session = sessionmaker(autoflush=True, transactional=False)
+
+    sess = Session()
+    sess.begin()
+
+    # use the session
+
+    sess.commit() # commit transaction
+
+Sharing a ``Session`` with an enclosing engine-level (i.e.
+non-ORM) transaction is easy:
+
+::
+
+    Session = sessionmaker(autoflush=True, transactional=False)
+
+    conn = engine.connect()
+    trans = conn.begin()
+    sess = Session(bind=conn)
+
+    # ... session is transactional
+
+    # commit the outermost transaction
+    trans.commit()
+
+Nested Session Transactions with SAVEPOINT
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Available at the Engine and ORM level.  ORM docs so far:
+
+http://www.sqlalchemy.org/docs/04/session.html#unitofwork_managing
+
+Two-Phase Commit Sessions
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Available at the Engine and ORM level.  ORM docs so far:
+
+http://www.sqlalchemy.org/docs/04/session.html#unitofwork_managing
+
+Inheritance
+-----------
+
+Polymorphic Inheritance with No Joins or Unions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+New docs for inheritance:
+http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_inheritance_joined
+
+Better Polymorphic Behavior with ``get()``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+All classes within a joined-table inheritance hierarchy get
+an ``_instance_key`` using the base class, i.e.
+``(BaseClass, (1, ), None)``.  That way when you call
+``get()`` on a ``Query`` against the base class, it can locate
+subclass instances in the current identity map without
+querying the database.
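As an illustration of the idea (with a hypothetical identity-map shape, not SQLAlchemy's actual internals):

```python
# Sketch of the idea only (hypothetical identity-map shape, not actual
# SQLAlchemy internals): keying subclass instances by the base class
# lets get() against the base find them without a round trip.
class Employee(object):
    pass

class Manager(Employee):   # joined-table subclass of Employee
    pass

identity_map = {}
m = Manager()
identity_map[(Employee, (1,), None)] = m   # base class in the key

# Query(Employee).get(1) can now consult the map first:
assert identity_map.get((Employee, (1,), None)) is m
```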
+
+Types
+-----
+
+Custom Subclasses of ``sqlalchemy.types.TypeDecorator``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There is a `New API
+<http://www.sqlalchemy.org/docs/04/types.html#types_custom>`_
+for subclassing a ``TypeDecorator``.
+Using the 0.3 API causes compilation errors in some cases.
+
+SQL Expressions
+===============
+
+All New, Deterministic Label/Alias Generation
+---------------------------------------------
+
+All the "anonymous" labels and aliases use a simple
+<name>_<number> format now.  SQL is much easier to read and
+is compatible with plan optimizer caches.  Just check out
+some of the examples in the tutorials:
+http://www.sqlalchemy.org/docs/04/ormtutorial.html
+http://www.sqlalchemy.org/docs/04/sqlexpression.html
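The naming scheme itself is easy to sketch (illustration only, not SQLAlchemy's implementation):

```python
# Sketch of the naming scheme only (not SQLAlchemy's implementation):
# each anonymous label or alias gets a "<name>_<number>" form, using a
# per-name counter.
counters = {}

def anon_label(name):
    counters[name] = counters.get(name, 0) + 1
    return "%s_%d" % (name, counters[name])

assert anon_label("nodes") == "nodes_1"
assert anon_label("nodes") == "nodes_2"
assert anon_label("users") == "users_1"
```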
+
+Generative select() Constructs
+------------------------------
+
+This is definitely the way to go with ``select()``.  See
+http://www.sqlalchemy.org/docs/04/sqlexpression.html#sql_transform.
+
+New Operator System
+-------------------
+
+SQL operators and nearly every SQL keyword are now
+abstracted into the compiler layer.  They now act
+intelligently and are type/backend aware; see:
+http://www.sqlalchemy.org/docs/04/sqlexpression.html#sql_operators
+
+All ``type`` Keyword Arguments Renamed to ``type_``
+---------------------------------------------------
+
+Just like it says:
+
+::
+
+       b = bindparam('foo', type_=String)
+
+``in_`` Function Changed to Accept Sequence or Selectable
+---------------------------------------------------------
+
+The ``in_`` function now takes a sequence of values or a
+selectable as its sole argument. The previous API of passing
+in values as positional arguments still works, but is now
+deprecated. This means that
+
+::
+
+    my_table.select(my_table.c.id.in_(1, 2, 3))
+    my_table.select(my_table.c.id.in_(*listOfIds))
+
+should be changed to
+
+::
+
+    my_table.select(my_table.c.id.in_([1, 2, 3]))
+    my_table.select(my_table.c.id.in_(listOfIds))
+
+Schema and Reflection
+=====================
+
+``MetaData``, ``BoundMetaData``, ``DynamicMetaData``...
+-------------------------------------------------------
+
+In the 0.3.x series, ``BoundMetaData`` and
+``DynamicMetaData`` were deprecated in favor of ``MetaData``
+and ``ThreadLocalMetaData``.  The older names have been
+removed in 0.4.  Updating is simple:
+
+::
+
+    +-------------------------------------+-------------------------+
+    |If You Had                           | Now Use                 |
+    +=====================================+=========================+
+    | ``MetaData``                        | ``MetaData``            |
+    +-------------------------------------+-------------------------+
+    | ``BoundMetaData``                   | ``MetaData``            |
+    +-------------------------------------+-------------------------+
+    | ``DynamicMetaData`` (with one       | ``MetaData``            |
+    | engine or threadlocal=False)        |                         |
+    +-------------------------------------+-------------------------+
+    | ``DynamicMetaData``                 | ``ThreadLocalMetaData`` |
+    | (with different engines per thread) |                         |
+    +-------------------------------------+-------------------------+
+
+The seldom-used ``name`` parameter to ``MetaData`` types has
+been removed.  The ``ThreadLocalMetaData`` constructor now
+takes no arguments.  Both types can now be bound to an
+``Engine`` or a single ``Connection``.
+
+One Step Multi-Table Reflection
+-------------------------------
+
+You can now load table definitions and automatically create
+``Table`` objects from an entire database or schema in one
+pass:
+
+::
+
+    >>> metadata = MetaData(myengine, reflect=True)
+    >>> metadata.tables.keys()
+    ['table_a', 'table_b', 'table_c', '...']
+
+``MetaData`` also gains a ``.reflect()`` method enabling
+finer control over the loading process, including
+specification of a subset of available tables to load.
+
+SQL Execution
+=============
+
+``engine``, ``connectable``, and ``bind_to`` are all now ``bind``
+-----------------------------------------------------------------
+
+``Transactions``, ``NestedTransactions`` and ``TwoPhaseTransactions``
+---------------------------------------------------------------------
+
+Connection Pool Events
+----------------------
+
+The connection pool now fires events when new DB-API
+connections are created, checked out and checked back into
+the pool.   You can use these to execute session-scoped SQL
+setup statements on fresh connections, for example.
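As an illustration of the pattern only (a toy pool, not the SQLAlchemy API):

```python
# Illustration of the pattern only (a toy pool, not the SQLAlchemy
# API): a pool that fires a callback whenever it creates a fresh
# connection, so per-connection setup SQL runs exactly once.
class TinyPool(object):
    def __init__(self, creator, on_connect):
        self._creator = creator
        self._on_connect = on_connect
        self._free = []

    def checkout(self):
        if self._free:
            return self._free.pop()     # reused: no setup fires
        conn = self._creator()
        self._on_connect(conn)          # setup runs once per new conn
        return conn

    def checkin(self, conn):
        self._free.append(conn)

setup_log = []
pool = TinyPool(creator=lambda: object(),
                on_connect=lambda conn: setup_log.append(conn))

c1 = pool.checkout()
pool.checkin(c1)
c2 = pool.checkout()   # same connection comes back; setup not repeated
assert c1 is c2
assert len(setup_log) == 1
```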
+
+Oracle Engine Fixed
+-------------------
+
+In 0.3.11, there were bugs in the Oracle Engine on how
+Primary Keys are handled.  These bugs could cause programs
+that worked fine with other engines, such as sqlite, to fail
+when using the Oracle Engine.  In 0.4, the Oracle Engine has
+been reworked, fixing these Primary Key problems.
+
+Out Parameters for Oracle
+-------------------------
+
+::
+
+    result = engine.execute(
+        text("begin foo(:x, :y, :z); end;",
+             bindparams=[bindparam('x', Numeric),
+                         outparam('y', Numeric),
+                         outparam('z', Numeric)]),
+        x=5)
+    assert result.out_parameters == {'y':10, 'z':75}
+
+Connection-bound ``MetaData``, ``Sessions``
+-------------------------------------------
+
+``MetaData`` and ``Session`` can be explicitly bound to a
+connection:
+
+::
+
+    conn = engine.connect()
+    sess = create_session(bind=conn)
+
+Faster, More Foolproof ``ResultProxy`` Objects
+----------------------------------------------
+

File doc/build/changelog/migration_05.rst

+=============================
+What's new in SQLAlchemy 0.5?
+=============================
+
+.. admonition:: About this Document
+
+    This document describes changes between SQLAlchemy version 0.4,
+    last released October 12, 2008, and SQLAlchemy version 0.5,
+    last released January 16, 2010.
+
+    Document date: August 4, 2009
+
+
+This guide documents API changes which affect users
+migrating their applications from the 0.4 series of
+SQLAlchemy to 0.5.   It's also recommended for those working
+from  `Essential SQLAlchemy
+<http://oreilly.com/catalog/9780596516147/>`_, which only
+covers 0.4 and seems to even have some old 0.3isms in it.
+Note that SQLAlchemy 0.5 removes many behaviors which were
+deprecated throughout the span of the 0.4 series, and also
+deprecates more behaviors specific to 0.4.
+
+Major Documentation Changes
+===========================
+
+Some sections of the documentation have been completely
+rewritten and can serve as an introduction to new ORM
+features.  The ``Query`` and ``Session`` objects in
+particular have some distinct differences in API and
+behavior which fundamentally change many of the basic ways
+things are done, particularly with regards to constructing
+highly customized ORM queries and dealing with stale session
+state, commits and rollbacks.
+
+* `ORM Tutorial
+  <http://www.sqlalchemy.org/docs/05/ormtutorial.html>`_
+
+* `Session Documentation
+  <http://www.sqlalchemy.org/docs/05/session.html>`_
+
+Deprecations Source
+===================
+
+Another source of information is documented within a series
+of unit tests illustrating up to date usages of some common
+``Query`` patterns; this file can be viewed in the source
+distribution at ``test/orm/test_deprecations.py``.
+
+Requirements Changes
+====================
+
+* Python 2.4 or higher is required.  The SQLAlchemy 0.4 line
+  is the last version with Python 2.3 support.
+
+Object Relational Mapping
+=========================
+
+* **Column level expressions within Query.** - as detailed
+  in the `tutorial
+  <http://www.sqlalchemy.org/docs/05/ormtutorial.html>`_,
+  ``Query`` has the capability to create specific SELECT
+  statements, not just those against full rows:
+
+  ::
+
+      session.query(User.name, func.count(Address.id).label("numaddresses")).join(Address).group_by(User.name)
+
+  The tuples returned by any multi-column/entity query are
+  *named* tuples:
+
+  ::
+
+      for row in session.query(User.name, func.count(Address.id).label('numaddresses')).join(Address).group_by(User.name):
+         print "name", row.name, "number", row.numaddresses
+
+  ``Query`` has a ``statement`` accessor, as well as a
+  ``subquery()`` method which allow ``Query`` to be used to
+  create more complex combinations:
+
+  ::
+
+      subq = session.query(Keyword.id.label('keyword_id')).filter(Keyword.name.in_(['beans', 'carrots'])).subquery()
+      recipes = session.query(Recipe).filter(exists().
+         where(Recipe.id==recipe_keywords.c.recipe_id).
+         where(recipe_keywords.c.keyword_id==subq.c.keyword_id)
+      )
+
+* **Explicit ORM aliases are recommended for aliased joins**
+  - The ``aliased()`` function produces an "alias" of a
+  class, which allows fine-grained control of aliases in
+  conjunction with ORM queries.  While a table-level alias
+  (i.e. ``table.alias()``) is still usable, an ORM level
+  alias retains the semantics of the ORM mapped object which
+  is significant for inheritance mappings, options, and
+  other scenarios.  E.g.:
+
+  ::
+
+      Friend = aliased(Person)
+      session.query(Person, Friend).join((Friend, Person.friends)).all()
+
+* **query.join() greatly enhanced.** - You can now specify
+  the target and ON clause for a join in multiple ways.   A
+  target class alone can be provided where SQLA will attempt
+  to form a join to it via foreign key in the same way as
+  ``table.join(someothertable)``.  A target and an explicit
+  ON condition can be provided, where the ON condition can
+  be a ``relation()`` name, an actual class descriptor, or a
+  SQL expression.  Or the old way of just a ``relation()``
+  name or class descriptor works too.   See the ORM tutorial
+  which has several examples.
+
+* **Declarative is recommended for applications which don't
+  require (and don't prefer) abstraction between tables and
+  mappers** - The `Declarative
+  <http://www.sqlalchemy.org/docs/05/reference/ext/declarative.html>`_
+  module, which is used to combine the
+  expression of ``Table``, ``mapper()``, and user defined
+  class objects together, is highly recommended as it
+  simplifies application configuration, ensures the "one
+  mapper per class" pattern, and allows the full range of
+  configuration available to distinct ``mapper()`` calls.
+  Separate ``mapper()`` and ``Table`` usage is now referred
+  to as "classical SQLAlchemy usage" and of course is freely
+  mixable with declarative.
+
+* **The .c. attribute has been removed** from classes (i.e.
+  ``MyClass.c.somecolumn``).  As is the case in 0.4, class-
+  level properties are usable as query elements, i.e.
+  ``Class.c.propname`` is now superseded by
+  ``Class.propname``.  The ``c`` attribute remains on
+  ``Table`` objects, where it indicates the namespace of
+  ``Column`` objects present on the table.
+
+  To get at the Table for a mapped class (if you didn't keep
+  it around already):
+
+  ::
+
+      table = class_mapper(someclass).mapped_table
+
+  Iterate through columns:
+
+  ::
+
+      for col in table.c:
+          print col
+
+  Work with a specific column:
+
+  ::
+
+      table.c.somecolumn
+
+  The class-bound descriptors support the full set of Column
+  operators as well as the documented relation-oriented
+  operators like ``has()``, ``any()``, ``contains()``, etc.
+
+  The reason for the hard removal of ``.c.`` is that in 0.5,
+  class-bound descriptors carry potentially different
+  meaning, as well as information regarding class mappings,
+  versus plain ``Column`` objects - and there are use cases
+  where you'd specifically want to use one or the other.
+  Generally, using class-bound descriptors invokes a set of
+  mapping/polymorphic aware translations, and using table-
+  bound columns does not.  In 0.4, these translations were
+  applied across the board to all expressions, but 0.5
+  differentiates completely between columns and mapped
+  descriptors, only applying translations to the latter.  So
+  in many cases, particularly when dealing with joined table
+  inheritance configurations as well as when using
+  ``query(<columns>)``, ``Class.propname`` and
+  ``table.c.colname`` are not interchangeable.
+
+  For example, ``session.query(users.c.id, users.c.name)``
+  is different versus ``session.query(User.id, User.name)``;
+  in the latter case, the ``Query`` is aware of the mapper
+  in use and further mapper-specific operations like
+  ``query.join(<propname>)``, ``query.with_parent()`` etc.
+  may be used, but in the former case they cannot.  Additionally,
+  in polymorphic inheritance scenarios, the class-bound
+  descriptors refer to the columns present in the
+  polymorphic selectable in use, not necessarily the table
+  column which directly corresponds to the descriptor.  For
+  example, a set of classes related by joined-table
+  inheritance to the ``person`` table along the
+  ``person_id`` column of each table will all have their
+  ``Class.person_id`` attribute mapped to the ``person_id``
+  column in ``person``, and not their subclass table.
+  Version 0.4 would map this behavior onto table-bound
+  ``Column`` objects automatically.  In 0.5, this automatic
+  conversion has been removed, so that you in fact *can* use
+  table-bound columns as a means to override the
+  translations which occur with polymorphic querying; this
+  allows ``Query`` to create optimized selects
+  among joined-table or concrete-table inheritance setups,
+  as well as portable subqueries, etc.
+
+* **Session Now Synchronizes Automatically with
+  Transactions.** Session now synchronizes against the
+  transaction automatically by default, including autoflush
+  and autoexpire.  A transaction is present at all times
+  unless disabled using the ``autocommit`` option.  When all
+  three flags are set to their default, the Session recovers
+  gracefully after rollbacks and it's very difficult to get
+  stale data into the session.  See the new Session
+  documentation for details.
+
+* **Implicit Order By Is Removed**.  This will impact ORM
+  users who rely upon SA's "implicit ordering" behavior,
+  which states that all Query objects which don't have an
+  ``order_by()`` will ORDER BY the "id" or "oid" column of
+  the primary mapped table, and all lazy/eagerly loaded
+  collections apply a similar ordering.   In 0.5, automatic
+  ordering must be explicitly configured on ``mapper()`` and
+  ``relation()`` objects (if desired), or otherwise when
+  using ``Query``.
+
+  To convert an 0.4 mapping to 0.5, such that its ordering
+  behavior will be extremely similar to 0.4 or previous, use
+  the ``order_by`` setting on ``mapper()`` and
+  ``relation()``:
+
+  ::
+
+          mapper(User, users, properties={
+              'addresses':relation(Address, order_by=addresses.c.id)
+          }, order_by=users.c.id)
+
+  To set ordering on a backref, use the ``backref()``
+  function:
+
+  ::
+
+          'keywords':relation(Keyword, secondary=item_keywords,
+                order_by=keywords.c.name, backref=backref('items', order_by=items.c.id))
+
+  Using declarative?  To help with the new ``order_by``
+  requirement, ``order_by`` and friends can now be set using
+  strings which are evaluated in Python later on (this works
+  **only** with declarative, not plain mappers):
+
+  ::
+
+          class MyClass(MyDeclarativeBase):
+              ...
+              addresses = relation("Address", order_by="Address.id")
+
+  It's generally a good idea to set ``order_by`` on
+  ``relation()`` constructs which load list-based collections of
+  items, since that ordering cannot otherwise be affected.
+  Other than that, the best practice is to use
+  ``Query.order_by()`` to control ordering of the primary
+  entities being loaded.
+
+* **Session is now
+  autoflush=True/autoexpire=True/autocommit=False.** - To
+  set it up, just call ``sessionmaker()`` with no arguments.
+  The name ``transactional=True`` is now
+  ``autocommit=False``.  Flushes occur upon each query
+  issued (disable with ``autoflush=False``), within each
+  ``commit()`` (as always), and before each
+  ``begin_nested()`` (so rolling back to the SAVEPOINT is
+  meaningful).   All objects are expired after each
+  ``commit()`` and after each ``rollback()``.  After
+  rollback, pending objects are expunged, deleted objects
+  move back to persistent.  These defaults work together
+  very nicely and there's really no more need for old
+  techniques like ``clear()`` (which is renamed to
+  ``expunge_all()`` as well).
+
+  P.S.:  sessions are now reusable after a ``rollback()``.
+  Scalar and collection attribute changes, adds and deletes
+  are all rolled back.
+
+* **session.add() replaces session.save(), session.update(),
+  session.save_or_update().** - the
+  ``session.add(someitem)`` and ``session.add_all([list of
+  items])`` methods replace ``save()``, ``update()``, and
+  ``save_or_update()``.  Those methods will remain
+  deprecated throughout 0.5.
+
+* **backref configuration made less verbose.** - The
+  ``backref()`` function now uses the ``primaryjoin`` and
+  ``secondaryjoin`` arguments of the forwards-facing
+  ``relation()`` when they are not explicitly stated.  It's
+  no longer necessary to specify
+  ``primaryjoin``/``secondaryjoin`` in both directions
+  separately.
+
+* **Simplified polymorphic options.** - The ORM's
+  "polymorphic load" behavior has been simplified.  In 0.4,
+  mapper() had an argument called ``polymorphic_fetch``
+  which could be configured as ``select`` or ``deferred``.
+  This option is removed; the mapper will now just defer any
+  columns which were not present in the SELECT statement.
+  The actual SELECT statement used is controlled by the
+  ``with_polymorphic`` mapper argument (which is also in 0.4
+  and replaces ``select_table``), as well as the
+  ``with_polymorphic()`` method on ``Query`` (also in 0.4).
+
+  An improvement to the deferred loading of inheriting
+  classes is that the mapper now produces the "optimized"
+  version of the SELECT statement in all cases; that is, if
+  class B inherits from A, and several attributes only
+  present on class B have been expired, the refresh
+  operation will only include B's table in the SELECT
+  statement and will not JOIN to A.
+
+* The ``execute()`` method on ``Session`` converts plain
+  strings into ``text()`` constructs, so that bind
+  parameters may all be specified as ":bindname" without
+  needing to call ``text()`` explicitly.  If "raw" SQL is
+  desired here, use ``session.connection().execute("raw
+  text")``.
+
+* ``session.Query().iterate_instances()`` has been renamed
+  to just ``instances()``. The old ``instances()`` method
+  returning a list instead of an iterator no longer exists.
+  If you were relying on that behavior, you should use
+  ``list(your_query.instances())``.
+
+Extending the ORM
+=================
+
+In 0.5 we're moving forward with more ways to modify and
+extend the ORM.  Here's a summary:
+
+* **MapperExtension.** - This is the classic extension
+  class, which remains.   Methods which should rarely be
+  needed are ``create_instance()`` and
+  ``populate_instance()``.  To control the initialization of
+  an object when it's loaded from the database, use the
+  ``reconstruct_instance()`` method, or more easily the
+  ``@reconstructor`` decorator described in the
+  documentation.
+
+* **SessionExtension.** - This is an easy to use extension
+  class for session events.  In particular, it provides
+  ``before_flush()``, ``after_flush()`` and
+  ``after_flush_postexec()`` methods.  Its usage is
+  recommended over ``MapperExtension.before_XXX`` in many
+  cases since within ``before_flush()`` you can modify the
+  flush plan of the session freely, something which cannot
+  be done from within ``MapperExtension``.
+
+* **AttributeExtension.** - This class is now part of the
+  public API, and allows the interception of userland events
+  on attributes, including attribute set and delete
+  operations, and collection appends and removes.  It also
+  allows the value to be set or appended to be modified.
+  The ``@validates`` decorator, described in the
+  documentation, provides a quick way to mark any mapped
+  attributes as being "validated" by a particular class
+  method.
+
+* **Attribute Instrumentation Customization.** - An API is
+  provided for ambitious efforts to entirely replace
+  SQLAlchemy's attribute instrumentation, or just to augment
+  it in some cases.  This API was produced for the purposes
+  of the Trellis toolkit, but is available as a public API.
+  Some examples are provided in the distribution in the
+  ``/examples/custom_attributes`` directory.
+
+Schema/Types
+============
+
+* **String with no length no longer generates TEXT, it
+  generates VARCHAR** - The ``String`` type no longer
+  magically converts into a ``Text`` type when specified
+  with no length.  This only has an effect when CREATE TABLE
+  is issued, as it will issue ``VARCHAR`` with no length
+  parameter, which is not valid on many (but not all)
+  databases.  To create a TEXT (or CLOB, i.e. unbounded
+  string) column, use the ``Text`` type.
+
+* **PickleType() with mutable=True requires an __eq__()
+  method** - The ``PickleType`` type needs to compare values
+  when mutable=True.  The method of comparing
+  ``pickle.dumps()`` is inefficient and unreliable.  If an
+  incoming object does not implement ``__eq__()`` and is
+  also not ``None``, the ``dumps()`` comparison is used but
+  a warning is raised.  For types which implement
+  ``__eq__()`` which includes all dictionaries, lists, etc.,
+  comparison will use ``==`` and is now reliable by default.
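+
+  A quick way to see why ``pickle.dumps()`` comparison is
+  unreliable (illustrative only; this assumes a Python where
+  dictionary iteration follows insertion order, so that two
+  equal dictionaries built in different key orders serialize
+  differently):
+
+  ::
+
+      import pickle
+
+      d1 = {'a': 1, 'b': 2}
+      d2 = {'b': 2, 'a': 1}
+
+      # value equality holds, yet the pickled byte strings differ
+      d1 == d2
+      pickle.dumps(d1) == pickle.dumps(d2)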
+
+* **convert_bind_param() and convert_result_value() methods
+  of TypeEngine/TypeDecorator are removed.** - The O'Reilly
+  book unfortunately documented these methods even though
+  they were deprecated post 0.3.   For a user-defined type
+  which subclasses ``TypeEngine``, the ``bind_processor()``
+  and ``result_processor()`` methods should be used for
+  bind/result processing.  Any user defined type, whether
+  extending ``TypeEngine`` or ``TypeDecorator``, which uses
+  the old 0.3 style can be easily adapted to the new style
+  using the following adapter:
+
+  ::
+
+      class AdaptOldConvertMethods(object):
+          """A mixin which adapts 0.3-style convert_bind_param and
+          convert_result_value methods
+
+          """
+          def bind_processor(self, dialect):
+              def convert(value):
+                  return self.convert_bind_param(value, dialect)
+              return convert
+
+          def result_processor(self, dialect):
+              def convert(value):
+                  return self.convert_result_value(value, dialect)
+              return convert
+
+          def convert_result_value(self, value, dialect):
+              return value
+
+          def convert_bind_param(self, value, dialect):
+              return value
+
+  To use the above mixin:
+
+  ::
+
+      class MyType(AdaptOldConvertMethods, TypeEngine):
+         # ...
+
+* The ``quote`` flag on ``Column`` and ``Table`` as well as
+  the ``quote_schema`` flag on ``Table`` now control quoting
+  both positively and negatively.  The default is ``None``,
+  meaning let regular quoting rules take effect. When
+  ``True``, quoting is forced on.  When ``False``, quoting
+  is forced off.
+
+* Column ``DEFAULT`` value DDL can now be more conveniently
+  specified with ``Column(..., server_default='val')``,
+  deprecating ``Column(..., PassiveDefault('val'))``.
+  ``default=`` is now exclusively for Python-initiated
+  default values, and can coexist with server_default.  A
+  new ``server_default=FetchedValue()`` replaces the
+  ``PassiveDefault('')`` idiom for marking columns as
+  subject to influence from external triggers and has no DDL
+  side effects.
+
+* SQLite's ``DateTime``, ``Time`` and ``Date`` types now
+  **only accept datetime objects, not strings** as bind
+  parameter input.  If you'd like to create your own
+  "hybrid" type which accepts strings and returns results as
+  date objects (from whatever format you'd like), create a
+  ``TypeDecorator`` that builds on ``String``.  If you only
+  want string-based dates, just use ``String``.
+
+* Additionally, the ``DateTime`` and ``Time`` types, when
+  used with SQLite, now represent the "microseconds" field
+  of the Python ``datetime.datetime`` object in the same
+  manner as ``str(datetime)`` - as fractional seconds, not a
+  count of microseconds.  That is:
+
+  ::
+
+       dt = datetime.datetime(2008, 6, 27, 12, 0, 0, 125)  # 125 usec
+
+       # old way
+       '2008-06-27 12:00:00.125'
+
+       # new way
+       '2008-06-27 12:00:00.000125'
+
+  So if an existing SQLite file-based database intends to be
+  used across 0.4 and 0.5, you either have to upgrade the
+  datetime columns to store the new format (NOTE: please
+  test this, I'm pretty sure it's correct):
+
+  ::
+
+       UPDATE mytable SET somedatecol =
+         substr(somedatecol, 0, 19) || '.' || substr((substr(somedatecol, 21, -1) / 1000000), 3, -1);
+
+  or, enable "legacy" mode as follows:
+
+  ::
+
+       from sqlalchemy.databases.sqlite import DateTimeMixin
+       DateTimeMixin.__legacy_microseconds__ = True
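+
+  Since the new format matches Python's own ``str(datetime)``
+  rendering, it can be checked directly (illustrative only):
+
+  ::
+
+       from datetime import datetime
+
+       dt = datetime(2008, 6, 27, 12, 0, 0, 125)
+       # renders microseconds as zero-padded fractional seconds
+       str(dt)   # '2008-06-27 12:00:00.000125'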
+
+Connection Pool no longer threadlocal by default
+================================================
+
+0.4 has an unfortunate default setting of
+"pool_threadlocal=True", leading to surprise behavior when,
+for example, using multiple Sessions within a single thread.
+This flag is now off in 0.5.   To re-enable 0.4's behavior,
+specify ``pool_threadlocal=True`` to ``create_engine()``, or
+alternatively use the "threadlocal" strategy via
+``strategy="threadlocal"``.
+
+\*args Accepted, \*args No Longer Accepted
+==========================================
+
+The policy with ``method(*args)`` vs. ``method([args])``
+is, if the method accepts a variable-length set of items
+which represent a fixed structure, it takes ``*args``.  If
+the method accepts a variable-length set of items that are
+data-driven, it takes ``[args]``.
+
+* The various Query.options() functions ``eagerload()``,
+  ``eagerload_all()``, ``lazyload()``, ``contains_eager()``,
+  ``defer()``, ``undefer()`` all accept variable-length
+  ``*keys`` as their argument now, which allows a path to
+  be formulated using descriptors, i.e.:
+
+  ::
+
+         query.options(eagerload_all(User.orders, Order.items, Item.keywords))
+
+  A single array argument is still accepted for backwards
+  compatibility.
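+
+  One common way such backwards-compatible dispatch is
+  written (a plain-Python sketch, not SQLAlchemy's actual
+  internals; ``normalize_keys`` is a hypothetical name):
+
+  ::
+
+      def normalize_keys(*keys):
+          # accept fn(a, b, c) as well as the legacy fn([a, b, c])
+          if len(keys) == 1 and isinstance(keys[0], (list, tuple)):
+              keys = tuple(keys[0])
+          return keys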
+
+* Similarly, the ``Query.join()`` and ``Query.outerjoin()``
+  methods accept a variable-length ``*args``, with a single
+  array accepted for backwards compatibility:
+
+  ::
+
+         query.join('orders', 'items')
+         query.join(User.orders, Order.items)
+
+* The ``in_()`` method on columns and similar only accepts a
+  list argument now.  It no longer accepts ``*args``.
+
+Removed
+=======
+
+* **entity_name** - This feature was always problematic and
+  rarely used.  0.5's more deeply fleshed out use cases
+  revealed further issues with ``entity_name`` which led to
+  its removal.  If different mappings are required for a
+  single class, break the class into separate subclasses and
+  map them separately.  An example of this is at the
+  ``EntityName`` wiki recipe
+  (http://www.sqlalchemy.org/trac/wiki/UsageRecipes/EntityName).
+  More information regarding the rationale is described at
+  http://groups.google.com/group/sqlalchemy/browse_thread/thread/9e23a0641a88b96d?hl=en .
+
+* **get()/load() cleanup**
+
+
+  The ``load()`` method has been removed.  Its
+  functionality was kind of arbitrary and basically copied
+  from Hibernate, where it's also not a particularly
+  meaningful method.
+
+  To get equivalent functionality:
+
+  ::
+
+       x = session.query(SomeClass).populate_existing().get(7)
+
+  ``Session.get(cls, id)`` and ``Session.load(cls, id)``
+  have been removed.  ``Session.get()`` is redundant vs.
+  ``session.query(cls).get(id)``.
+
+  ``MapperExtension.get()`` is also removed (as is
+  ``MapperExtension.load()``).  To override the
+  functionality of ``Query.get()``, use a subclass:
+
+  ::
+
+       class MyQuery(Query):
+           def get(self, ident):
+               # ...
+
+       session = sessionmaker(query_cls=MyQuery)()
+
+       ad1 = session.query(Address).get(1)
+
+* ``sqlalchemy.orm.relation()``
+
+
+  The following deprecated keyword arguments have been
+  removed:
+
+  ``foreignkey``, ``association``, ``private``,
+  ``attributeext``, ``is_backref``
+
+  In particular, ``attributeext`` is replaced with
+  ``extension`` - the ``AttributeExtension`` class is now in
+  the public API.
+
+* ``session.Query()``
+
+
+  The following deprecated functions have been removed:
+
+  list, scalar, count_by, select_whereclause, get_by,
+  select_by, join_by, selectfirst, selectone, select,
+  execute, select_statement, select_text, join_to, join_via,
+  selectfirst_by, selectone_by, apply_max, apply_min,
+  apply_avg, apply_sum
+
+  Additionally, the ``id`` keyword argument to ``join()``,
+  ``outerjoin()``, ``add_entity()`` and ``add_column()`` has
+  been removed.  To target table aliases in ``Query`` to
+  result columns, use the ``aliased`` construct:
+
+  ::
+
+      from sqlalchemy.orm import aliased
+      address_alias = aliased(Address)
+      print session.query(User, address_alias).join((address_alias, User.addresses)).all()
+
+* ``sqlalchemy.orm.Mapper``
+
+
+  * instances()
+
+
+  * get_session() - this method was not very noticeable, but
+    had the effect of associating lazy loads with a
+    particular session even if the parent object was
+    entirely detached, when an extension such as
+    ``scoped_session()`` or the old ``SessionContextExt``
+    was used.  It's possible that some applications which
+    relied upon this behavior will no longer work as
+    expected;  but the better programming practice here is
+    to always ensure objects are present within sessions if
+    database access from their attributes are required.
+
+* ``mapper(MyClass, mytable)``
+
+
+  Mapped classes are no longer instrumented with a "c" class
+  attribute; e.g. ``MyClass.c``
+
+* ``sqlalchemy.orm.collections``
+
+
+  The ``_prepare_instrumentation`` alias for
+  ``prepare_instrumentation`` has been removed.
+
+* ``sqlalchemy.orm``
+
+
+  Removed the ``EXT_PASS`` alias of ``EXT_CONTINUE``.
+
+* ``sqlalchemy.engine``
+
+
+  The alias from ``DefaultDialect.preexecute_sequences`` to
+  ``.preexecute_pk_sequences`` has been removed.
+
+  The deprecated ``engine_descriptors()`` function has been
+  removed.
+
+* ``sqlalchemy.ext.activemapper``
+
+
+  Module removed.
+
+* ``sqlalchemy.ext.assignmapper``
+
+
+  Module removed.
+
+* ``sqlalchemy.ext.associationproxy``
+
+
+  Pass-through of keyword args on the proxy's
+  ``.append(item, **kw)`` has been removed and is now
+  simply ``.append(item)``.
+
+* ``sqlalchemy.ext.selectresults``,
+  ``sqlalchemy.mods.selectresults``
+
+  Modules removed.
+
+* ``sqlalchemy.ext.declarative``
+
+
+  ``declared_synonym()`` removed.
+
+* ``sqlalchemy.ext.sessioncontext``
+
+
+  Module removed.
+
+* ``sqlalchemy.log``
+
+
+  The ``SADeprecationWarning`` alias to
+  ``sqlalchemy.exc.SADeprecationWarning`` has been removed.
+
+* ``sqlalchemy.exc``
+
+
+  ``exc.AssertionError`` has been removed and usage replaced
+  by the Python built-in of the same name.
+
+* ``sqlalchemy.databases.mysql``
+
+
+  The deprecated ``get_version_info`` dialect method has
+  been removed.
+
+Renamed or Moved
+================
+
+* ``sqlalchemy.exceptions`` is now ``sqlalchemy.exc``
+
+
+  The module may still be imported under the old name until
+  0.6.
+
+* ``FlushError``, ``ConcurrentModificationError``,
+  ``UnmappedColumnError`` -> ``sqlalchemy.orm.exc``
+
+  These exceptions moved to the orm package.  Importing
+  ``sqlalchemy.orm`` will install aliases in ``sqlalchemy.exc``
+  for compatibility until 0.6.
+
+* ``sqlalchemy.logging`` -> ``sqlalchemy.log``
+
+
+  This internal module was renamed.  It no longer needs to be
+  special-cased when packaging SA with py2app and similar
+  tools that scan imports.
+
+* ``session.Query().iterate_instances()`` ->
+  ``session.Query().instances()``.
+
+Deprecated
+==========
+
+* ``Session.save()``, ``Session.update()``,
+  ``Session.save_or_update()``
+
+  All three replaced by ``Session.add()``
+
+* ``sqlalchemy.PassiveDefault``
+
+
+  Use ``Column(server_default=...)``; it translates to
+  ``sqlalchemy.DefaultClause()`` under the hood.
+
+* ``session.Query().iterate_instances()``. It has been
+  renamed to ``instances()``.
+

File doc/build/changelog/migration_06.rst

+==============================
+What's New in SQLAlchemy 0.6?
+==============================
+
+.. admonition:: About this Document
+
+    This document describes changes between SQLAlchemy version 0.5,
+    last released January 16, 2010, and SQLAlchemy version 0.6,
+    last released May 5, 2012.
+
+    Document date:  June 6, 2010
+
+This guide documents API changes which affect users
+migrating their applications from the 0.5 series of
+SQLAlchemy to 0.6.  Note that SQLAlchemy 0.6 removes some
+behaviors which were deprecated throughout the span of the
+0.5 series, and also deprecates more behaviors specific to
+0.5.
+
+Platform Support
+================
+
+* cPython versions 2.4 and upwards throughout the 2.xx
+  series
+
+* Jython 2.5.1 - using the zxJDBC DBAPI included with
+  Jython.
+
+* cPython 3.x - see ``README.py3k`` within the source
+  distribution for information on how to build for Python 3.
+
+New Dialect System
+==================
+
+Dialect modules are now broken up into distinct
+subcomponents, within the scope of a single database
+backend.   Dialect implementations are now in the
+``sqlalchemy.dialects`` package.  The
+``sqlalchemy.databases`` package still exists as a
+placeholder to provide some level of backwards compatibility
+for simple imports.
+
+For each supported database, a sub-package exists within
+``sqlalchemy.dialects`` where several files are contained.
+Each package contains a module called ``base.py`` which
+defines the specific SQL dialect used by that database.   It
+also contains one or more "driver" modules, each one
+corresponding to a specific DBAPI - these files are named
+corresponding to the DBAPI itself, such as ``pysqlite``,
+``cx_oracle``, or ``pyodbc``.  The classes used by
+SQLAlchemy dialects are first declared in the ``base.py``
+module, defining all behavioral characteristics defined by
+the database.  These include capability mappings, such as
+"supports sequences", "supports returning", etc., type
+definitions, and SQL compilation rules.  Each "driver"
+module in turn provides subclasses of those classes as
+needed which override the default behavior to accommodate
+the additional features, behaviors, and quirks of that
+DBAPI.    For DBAPIs that support multiple backends (pyodbc,
+zxJDBC, mxODBC), the dialect module will use mixins from the
+``sqlalchemy.connectors`` package, which provide
+functionality common to that DBAPI across all backends, most
+typically dealing with connect arguments.   This means that
+connecting using pyodbc, zxJDBC or mxODBC (when implemented)
+is extremely consistent across supported backends.
+
+The URL format used by ``create_engine()`` has been enhanced
+to handle any number of DBAPIs for a particular backend,
+using a scheme that is inspired by that of JDBC.   The
+previous format still works, and will select a "default"
+DBAPI implementation, such as the Postgresql URL below that
+will use psycopg2:
+
+::
+
+    create_engine('postgresql://scott:tiger@localhost/test')
+
+However to specify a specific DBAPI backend such as pg8000,
+add it to the "protocol" section of the URL using a plus
+sign "+":
+
+::
+
+    create_engine('postgresql+pg8000://scott:tiger@localhost/test')
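+
+The backend/driver split in the new scheme can be illustrated
+with plain string handling (an illustration only, not the
+actual URL parser):
+
+::
+
+    url = 'postgresql+pg8000://scott:tiger@localhost/test'
+    scheme = url.split('://', 1)[0]
+    backend, _, driver = scheme.partition('+')
+    # backend == 'postgresql'; driver == 'pg8000'
+    # an empty driver portion means "use the default DBAPI"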
+
+Important Dialect Links:
+
+* Documentation on connect arguments:
+  http://www.sqlalchemy.org/docs/06/dbengine.html#create-engine-url-arguments
+
+* Reference documentation for individual dialects:
+  http://www.sqlalchemy.org/docs/06/reference/dialects/index.html
+
+* The tips and tricks at DatabaseNotes.
+
+
+Other notes regarding dialects:
+
+* the type system has been changed dramatically in
+  SQLAlchemy 0.6.  This has an impact on all dialects
+  regarding naming conventions, behaviors, and
+  implementations.  See the section on "Types" below.
+
+* the ``ResultProxy`` object now offers a 2x speed
+  improvement in some cases thanks to some refactorings.
+
+* the ``RowProxy``, i.e. individual result row object, is
+  now directly pickleable.
+
+* the setuptools entrypoint used to locate external dialects
+  is now called ``sqlalchemy.dialects``.  An external
+  dialect written against 0.4 or 0.5 will need to be
+  modified to work with 0.6 in any case so this change does
+  not add any additional difficulties.
+
+* dialects now receive an initialize() event on initial
+  connection to determine connection properties.
+
+* Functions and operators generated by the compiler now use
+  (almost) regular dispatch functions of the form
+  "visit_<opname>" and "visit_<funcname>_fn" to provide
+  customized processing. This replaces the need to copy the
+  "functions" and "operators" dictionaries in compiler
+  subclasses with straightforward visitor methods, and also
+  allows compiler subclasses complete control over
+  rendering, as the full _Function or _BinaryExpression
+  object is passed in.
+
+Dialect Imports
+---------------
+
+The import structure of dialects has changed.  Each dialect
+now exports its base "dialect" class as well as the full set
+of SQL types supported on that dialect via
+``sqlalchemy.dialects.<name>``.  For example, to import a
+set of PG types:
+
+::
+
+    from sqlalchemy.dialects.postgresql import INTEGER, BIGINT, SMALLINT,\
+                                                VARCHAR, MACADDR, DATE, BYTEA
+
+Above, ``INTEGER`` is actually the plain ``INTEGER`` type
+from ``sqlalchemy.types``, but the PG dialect makes it
+available in the same way as those types which are specific
+to PG, such as ``BYTEA`` and ``MACADDR``.
+
+Expression Language Changes
+===========================
+
+An Important Expression Language Gotcha
+---------------------------------------
+
+There's one quite significant behavioral change to the
+expression language which may affect some applications.
+The boolean value of Python boolean expressions, i.e.
+``==``, ``!=``, and similar, now evaluates accurately with
+regards to the two clause objects being compared.
+
+As we know, comparing a ``ClauseElement`` to any other
+object returns another ``ClauseElement``:
+
+::
+
+    >>> from sqlalchemy.sql import column
+    >>> column('foo') == 5
+    <sqlalchemy.sql.expression._BinaryExpression object at 0x1252490>
+
+This is so that Python expressions produce SQL expressions when
+converted to strings:
+
+::
+
+    >>> str(column('foo') == 5)
+    'foo = :foo_1'
+
+But what happens if we say this?
+
+::
+
+    >>> if column('foo') == 5:
+    ...     print "yes"
+    ...
+
+In previous versions of SQLAlchemy, the returned
+``_BinaryExpression`` was a plain Python object which
+evaluated to ``True``.  Now it evaluates to whether or not
+the actual ``ClauseElement`` should have the same hash value
+as that being compared.  Meaning:
+
+::
+
+    >>> bool(column('foo') == 5)
+    False
+    >>> bool(column('foo') == column('foo'))
+    False
+    >>> c = column('foo')
+    >>> bool(c == c)
+    True
+    >>>
+
+That means code such as the following:
+
+::
+
+    if expression:
+        print "the expression is:", expression
+
+Would not print anything if ``expression`` was a binary clause
+comparing two non-identical elements.
+Since the above pattern should never be used, the base
+``ClauseElement`` now raises an exception if called in a
+boolean context:
+
+::
+
+    >>> bool(c)
+    Traceback (most recent call last):
+      File "<stdin>", line 1, in <module>
+      ...
+        raise TypeError("Boolean value of this clause is not defined")
+    TypeError: Boolean value of this clause is not defined
+
+Code that wants to check for the presence of a
+``ClauseElement`` expression should instead say:
+
+::
+
+    if expression is not None:
+        print "the expression is:", expression
+
+Keep in mind, **this applies to Table and Column objects
+too**.
+
+The rationale for the change is twofold:
+
+* Comparisons of the form ``if c1 == c2:  <do something>``
+  can actually be written now
+
+* Support for correct hashing of ``ClauseElement`` objects
+  now works on alternate platforms, namely Jython.  Up until
+  this point SQLAlchemy relied heavily on the specific
+  behavior of cPython in this regard (and still had
+  occasional problems with it).
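+
+The new behavior can be sketched in plain Python.  The
+``Clause`` and ``BinaryExpression`` classes below are
+hypothetical stand-ins, and object identity is used here as a
+simplification of SQLAlchemy's hash-based comparison:
+
+::
+
+    class Clause(object):
+        def __eq__(self, other):
+            # comparison yields an expression object, not True/False
+            return BinaryExpression(self, other)
+
+        def __hash__(self):
+            return id(self)
+
+        def __nonzero__(self):
+            raise TypeError("Boolean value of this clause is not defined")
+        __bool__ = __nonzero__
+
+    class BinaryExpression(object):
+        def __init__(self, left, right):
+            self.left, self.right = left, right
+
+        def __nonzero__(self):
+            # truthy only when both sides are the very same clause
+            return self.left is self.right
+        __bool__ = __nonzero__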
+
+Stricter "executemany" Behavior
+-------------------------------
+
+An "executemany" in SQLAlchemy corresponds to a call to
+``execute()``, passing along a collection of bind parameter
+sets:
+
+::
+
+    connection.execute(table.insert(), {'data':'row1'}, {'data':'row2'}, {'data':'row3'})
+
+When the ``Connection`` object sends off the given
+``insert()`` construct for compilation, it passes to the
+compiler the keynames present in the first set of binds
+passed along to determine the construction of the
+statement's VALUES clause.   Users familiar with this
+construct will know that additional keys present in the
+remaining dictionaries don't have any impact.   What's
+different now is that all subsequent dictionaries need to
+include at least *every* key that is present in the first
+dictionary.  This means that a call like this no longer
+works:
+
+::
+
+    connection.execute(table.insert(),
+                            {'timestamp':today, 'data':'row1'},
+                            {'timestamp':today, 'data':'row2'},
+                            {'data':'row3'})
+
+The call fails because the third row does not specify the
+``timestamp`` column.  Previous versions of SQLAlchemy would
+simply insert NULL for these missing columns.  However, if the
+``timestamp`` column in the above example contained a
+Python-side default value or function, it would *not* be
+used.  This is because the "executemany" operation is optimized
+for maximum performance across huge numbers of parameter
+sets, and does not attempt to evaluate Python-side defaults
+for those missing keys.   Defaults are often
+implemented either as SQL expressions embedded inline
+in the INSERT statement, or as server-side
+expressions triggered based on the structure
+of the INSERT string, which by definition cannot fire
+conditionally based on each parameter set; it would be
+inconsistent for Python-side defaults to behave differently
+from SQL/server-side defaults.   (SQL expression based
+defaults are embedded inline as of the 0.5 series, again to
+minimize the impact of huge numbers of parameter sets.)
+
+SQLAlchemy 0.6 therefore establishes predictable consistency
+by forbidding any subsequent parameter sets from leaving any
+fields blank.  That way, there's no more silent failure of
+Python-side default values and functions, which additionally
+remain consistent in their behavior versus SQL and
+server-side defaults.
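The new rule amounts to a simple consistency check, sketched below in plain Python (the function name is hypothetical; SQLAlchemy performs an equivalent check internally): every parameter set after the first must contain at least the keys of the first set.

```python
def check_executemany_params(param_sets):
    """Raise if any later parameter set omits a key from the first set.

    Illustrative only -- mimics the 0.6 "executemany" rule described above.
    """
    first_keys = set(param_sets[0])
    for i, params in enumerate(param_sets[1:], start=2):
        missing = first_keys - set(params)
        if missing:
            raise ValueError(
                "parameter set %d is missing keys: %s"
                % (i, ", ".join(sorted(missing))))


# the failing call from above: the third set omits 'timestamp'
rows = [
    {'timestamp': 'today', 'data': 'row1'},
    {'timestamp': 'today', 'data': 'row2'},
    {'data': 'row3'},
]
try:
    check_executemany_params(rows)
except ValueError as err:
    print(err)   # parameter set 3 is missing keys: timestamp
```

Note that *extra* keys in later sets are still harmless; only keys present in the first set and absent later are an error.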
+
+UNION and other "compound" constructs parenthesize consistently
+---------------------------------------------------------------
+
+A rule designed to help SQLite has been removed: the first
+compound element within another compound
+(such as a ``union()`` inside of an ``except_()``) would not
+be parenthesized.   This was inconsistent, produced the
+wrong results on PostgreSQL, which has precedence rules
+regarding INTERSECT, and was generally a surprise.   When
+using complex composites with SQLite, you now need to turn
+the first element into a subquery (which is also compatible
+with PostgreSQL).   A new example is at the end of the SQL
+expression tutorial, at
+http://www.sqlalchemy.org/docs/06/sqlexpression.html#unions-and-other-set-operations.
+See :ticket:`1665` and r6690 for more background.
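The parenthesization rule itself can be illustrated with plain strings (this is not SQLAlchemy's compiler, just a toy model of the rule): any element of a compound that is itself a compound is wrapped in parentheses, including the first one.

```python
def compound(keyword, *selects):
    """Join SELECT strings with a set-operation keyword,
    parenthesizing any element that is itself a compound.

    Toy model of the 0.6 rule -- not the real SQL compiler.
    """
    def wrap(s):
        is_compound = any(
            (" %s " % kw) in s for kw in ("UNION", "EXCEPT", "INTERSECT"))
        return "(%s)" % s if is_compound else s
    return (" %s " % keyword).join(wrap(s) for s in selects)


u = compound("UNION", "SELECT a", "SELECT b")
print(compound("EXCEPT", u, "SELECT c"))
# (SELECT a UNION SELECT b) EXCEPT SELECT c
```

Under the old SQLite-oriented rule, the first element would have been emitted bare (``SELECT a UNION SELECT b EXCEPT SELECT c``), which PostgreSQL's precedence rules interpret differently.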
+
+C Extensions for Result Fetching
+================================
+
+The ``ResultProxy`` and related elements, including most
+common "row processing" functions such as unicode
+conversion, numerical/boolean conversions and date parsing,
+have been re-implemented as optional C extensions for the
+purposes of performance.   This represents the beginning of
+SQLAlchemy's path to the "dark side" where we hope to
+continue improving performance by reimplementing critical
+sections in C.   The extensions can be built by specifying
+``--with-cextensions``, i.e. ``python setup.py
+--with-cextensions install``.
+
+The extensions have the most dramatic impact on result
+fetching using direct ``ResultProxy`` access, i.e. that
+which is returned by ``engine.execute()``,
+``connection.execute()``, or ``session.execute()``.   Within
+results returned by an ORM ``Query`` object, result fetching
+is not as high a percentage of overhead, so ORM performance
</