Commits

Mike Bayer committed a9d7cc0

woop

  • Parent commits a11f965

Files changed (2)

File source/migration_08.rst

 ============
 
 This guide introduces what's new in SQLAlchemy version 0.8,
-and also documents
-changes which affect users migrating their applications from
-the 0.7
-series of SQLAlchemy to 0.8.
+and also documents changes which affect users migrating
+their applications from the 0.7 series of SQLAlchemy to 0.8.
 
 SQLAlchemy releases are closing in on 1.0, and each new
-version since
-0.5 features fewer major usage changes.   Most applications
-that
-are settled into modern 0.7 patterns should be movable to
-0.8 with no changes.
-Applications that use 0.6 and even 0.5 patterns should be
-directly migratable to
-0.8 as well, though larger applications may want to test
+version since 0.5 features fewer major usage changes.   Most
+applications that are settled into modern 0.7 patterns
+should be movable to 0.8 with no changes. Applications that
+use 0.6 and even 0.5 patterns should be directly migratable
+to 0.8 as well, though larger applications may want to test
 with each interim version.
 
 Platform Support
 Status: ongoing
 
 SQLAlchemy 0.8 will target Python 2.5 and forward;
-compatibility for Python 2.4 is
-being dropped.
+compatibility for Python 2.4 is being dropped.
 
 The internals will be able to make use of Python ternaries
-(that is, ``x if y else z``)
-which will improve things versus the usage of ``y and x or
-z``, which naturally has been
-the source of some bugs, as well as context managers (that
-is, ``with:``) and perhaps in
-some cases ``try:/except:/else:`` blocks which will help
-with code readability.
+(that is, ``x if y else z``) which will improve things
+versus the usage of ``y and x or z``, which naturally has
+been the source of some bugs, as well as context managers
+(that is, ``with:``) and perhaps in some cases
+``try:/except:/else:`` blocks which will help with code
+readability.
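+
+For example, the ``y and x or z`` idiom silently picks the
+wrong value whenever ``x`` is falsy (a quick illustration):
+
+::
+
+    y, x, z = True, 0, 'default'
+
+    # old idiom: x == 0 is falsy, so z is returned even though y is True
+    assert (y and x or z) == 'default'
+
+    # ternary: returns x as intended
+    assert (x if y else z) == 0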
 
 SQLAlchemy will eventually drop 2.5 support as well - when
-2.6 is reached as the baseline,
-SQLAlchemy will move to use 2.6/3.3 in-place compatibility,
-removing the usage of the ``2to3``
-tool and maintaining a source base that works with Python 2
-and 3 at the same time.
+2.6 is reached as the baseline, SQLAlchemy will move to use
+2.6/3.3 in-place compatibility, removing the usage of the
+``2to3`` tool and maintaining a source base that works with
+Python 2 and 3 at the same time.
 
 New Features
 ============
 Status: completed, needs docs
 
 0.8 features a much improved and capable system regarding
-how ``relationship()`` determines
-how to join between two entities.  The new system includes
-these features:
+how ``relationship()`` determines how to join between two
+entities.  The new system includes these features:
 
 * The ``primaryjoin`` argument is **no longer needed** when
-  constructing a ``relationship()``
-  against a class that has multiple foreign key paths to the
-  target.  Only the ``foreign_keys``
-  argument is needed to specify those columns which should be
-  included:
+  constructing a ``relationship()`` against a class that
+  has multiple foreign key paths to the target.  Only the
+  ``foreign_keys`` argument is needed to specify those
+  columns which should be included:
 
   ::
 
-      class Parent(Base):
-          __tablename__ = 'parent'
-          id = Column(Integer, primary_key=True)
-          child_id_one = Column(Integer, ForeignKey('child.id'))
-          child_id_two = Column(Integer, ForeignKey('child.id'))
-
-          child_one = relationship("Child", foreign_keys=child_id_one)
-          child_two = relationship("Child", foreign_keys=child_id_two)
-
-      class Child(Base):
-          __tablename__ = 'child'
-          id = Column(Integer, primary_key=True)
+      
+        class Parent(Base):
+            __tablename__ = 'parent'
+            id = Column(Integer, primary_key=True)
+            child_id_one = Column(Integer, ForeignKey('child.id'))
+            child_id_two = Column(Integer, ForeignKey('child.id'))
+      
+            child_one = relationship("Child", foreign_keys=child_id_one)
+            child_two = relationship("Child", foreign_keys=child_id_two)
+      
+        class Child(Base):
+            __tablename__ = 'child'
+            id = Column(Integer, primary_key=True)
 
 * Relationships against self-referential, composite foreign
-  keys where **a column points to itself**
-  are now supported.   The canonical case is as follows:
+  keys where **a column points to itself** are now
+  supported.   The canonical case is as follows:
 
   ::
 
-      class Folder(Base):
-          __tablename__ = 'folder'
-          __table_args__ = (
-            ForeignKeyConstraint(
-                ['account_id', 'parent_id'],
-                ['folder.account_id', 'folder.folder_id']),
-          )
-
-          account_id = Column(Integer, primary_key=True)
-          folder_id = Column(Integer, primary_key=True)
-          parent_id = Column(Integer)
-          name = Column(String)
-
-          parent_folder = relationship("Folder",
-                              backref="child_folders",
-                              remote_side=[account_id, folder_id]
-                        )
+        class Folder(Base):
+            __tablename__ = 'folder'
+            __table_args__ = (
+              ForeignKeyConstraint(
+                  ['account_id', 'parent_id'],
+                  ['folder.account_id', 'folder.folder_id']),
+            )
+      
+            account_id = Column(Integer, primary_key=True)
+            folder_id = Column(Integer, primary_key=True)
+            parent_id = Column(Integer)
+            name = Column(String)
+      
+            parent_folder = relationship("Folder",
+                                backref="child_folders",
+                                remote_side=[account_id, folder_id]
+                          )
 
   Above, the ``Folder`` refers to its parent ``Folder``
-  joining from ``account_id``
-  to itself, and ``parent_id`` to ``folder_id``.  When
-  SQLAlchemy constructs an auto-join,
-  no longer can it assume all columns on the "remote" side are
-  aliased, and all columns
-  on the "local" side are not - the ``account_id`` column is
-  **on both sides**.   So the
-  internal relationship mechanics were totally rewritten to
-  support an entirely different
-  system whereby two copies of ``account_id`` are generated,
-  each containing different *annotations*'
-  to determine their role within the statement.  Note the join
-  condition within a basic eager load:
+  joining from ``account_id`` to itself, and ``parent_id``
+  to ``folder_id``.  When SQLAlchemy constructs an
+  auto-join, it can no longer assume all columns on the
+  "remote" side are aliased and all columns on the "local"
+  side are not - the ``account_id`` column is **on both
+  sides**.   So the internal relationship mechanics were
+  totally rewritten to support an entirely different system
+  whereby two copies of ``account_id`` are generated, each
+  containing different *annotations* to determine their role
+  within the statement.  Note the join condition within a
+  basic eager load:
 
   ::
 
-      SELECT
-          folder.account_id AS folder_account_id,
-          folder.folder_id AS folder_folder_id,
-          folder.parent_id AS folder_parent_id,
-          folder.name AS folder_name,
-          folder_1.account_id AS folder_1_account_id,
-          folder_1.folder_id AS folder_1_folder_id,
-          folder_1.parent_id AS folder_1_parent_id,
-          folder_1.name AS folder_1_name
-      FROM folder
-          LEFT OUTER JOIN folder AS folder_1
-          ON
-              folder_1.account_id = folder.account_id
-              AND folder.folder_id = folder_1.parent_id
+        SELECT
+            folder.account_id AS folder_account_id,
+            folder.folder_id AS folder_folder_id,
+            folder.parent_id AS folder_parent_id,
+            folder.name AS folder_name,
+            folder_1.account_id AS folder_1_account_id,
+            folder_1.folder_id AS folder_1_folder_id,
+            folder_1.parent_id AS folder_1_parent_id,
+            folder_1.name AS folder_1_name
+        FROM folder
+            LEFT OUTER JOIN folder AS folder_1
+            ON
+                folder_1.account_id = folder.account_id
+                AND folder.folder_id = folder_1.parent_id
+      
+        WHERE folder.folder_id = ? AND folder.account_id = ?
 
-      WHERE folder.folder_id = ? AND folder.account_id = ?
-
-* Thanks to the new relationship mechanics, new **annotation**
-  functions are provided
-  which can be used to create ``primaryjoin`` conditions
-  involving any kind of SQL function, CAST,
-  or other construct that wraps the target column.
-  Previously, a semi-public argument
+* Thanks to the new relationship mechanics, new
+  **annotation** functions are provided which can be used
+  to create ``primaryjoin`` conditions involving any kind of
+  SQL function, CAST, or other construct that wraps the
+  target column.  Previously, a semi-public argument
   ``_local_remote_pairs`` would be used to tell
-  ``relationship()`` unambiguously what columns
-  should be considered as corresponding to the mapping - the
-  annotations make the point
-  more directly, such as below where ``Parent`` joins to
-  ``Child`` by matching the
+  ``relationship()`` unambiguously which columns should be
+  considered as corresponding to the mapping - the
+  annotations make the point more directly, such as below,
+  where ``Parent`` joins to ``Child`` by matching the
   ``Parent.name`` column converted to lower case to that of
   the ``Child.name_upper`` column:
 
   ::
 
-      class Parent(Base):
-          __tablename__ = 'parent'
-          id = Column(Integer, primary_key=True)
-          name = Column(String)
-          children = relationship("Child",
-                  primaryjoin="Parent.name==foreign(func.lower(Child.name_upper))"
-              )
+      
+        class Parent(Base):
+            __tablename__ = 'parent'
+            id = Column(Integer, primary_key=True)
+            name = Column(String)
+            children = relationship("Child",
+                    primaryjoin="Parent.name==foreign(func.lower(Child.name_upper))"
+                )
+      
+        class Child(Base):
+            __tablename__ = 'child'
+            id = Column(Integer, primary_key=True)
+            name_upper = Column(String)
 
-      class Child(Base):
-          __tablename__ = 'child'
-          id = Column(Integer, primary_key=True)
-          name_upper = Column(String)
+#1401 #610
 
-  #1401 #610
+New Class Inspection System
+---------------------------
 
-  === New Class Inspection System ===
+Status: completed, needs docs
 
-  Status: completed, needs docs
+Lots of SQLAlchemy users are writing systems that require
+the ability to inspect the attributes of a mapped class,
+including being able to get at the primary key columns,
+object relationships, plain attributes, and so forth,
+typically for the purpose of building data-marshalling
+systems, like JSON/XML conversion schemes and of course form
+libraries galore.
 
-  Lots of SQLAlchemy users are writing systems that require
-  the ability to inspect the
-  attributes of a mapped class, including being able to get at
-  the primary key columns,
-  object relationships, plain attributes, and so forth,
-  typically for the purpose of
-  building data-marshalling systems, like JSON/XML conversion
-  schemes and of course
-  form libraries galore.
+The ``Table`` and ``Column`` model were the original
+inspection points, which have a well-documented system.
+While SQLAlchemy ORM models are also fully introspectable,
+this has never been a fully stable and supported feature,
+and users tended not to have a clear idea of how to get at
+this information.
 
-  Originally, the ``Table`` and ``Column`` model were the
-  original
-  inspection points, which have a well-documented system.
-  While SQLAlchemy
-  ORM models are also fully introspectable, this has never
-  been a fully stable and supported feature, and users tended
-  to not have a clear idea
-  how to get at this information.
+0.8 has a plan to produce a consistent, stable and fully
+documented API for this purpose, which would provide an
+inspection system that works on classes, instances, and
+possibly other things as well.   While many elements of this
+system are already available, the plan is to lock down the
+API including various accessors available from such objects
+as ``Mapper``, ``InstanceState``, and ``MapperProperty``:
 
-  0.8 has a plan to produce a consistent, stable and fully
-  documented
-  API for this purpose, which would provide an inspection
-  system that works on classes, instances,
-  and possibly other things as well.   While many elements of
-  this system are already
-  available, the plan is to lock down the API including
-  various accessors
-  available from such objects as ``Mapper``,
-  ``InstanceState``, and ``MapperProperty``:
+::
 
-  ::
+    class User(Base):
+        __tablename__ = 'user'
+    
+        id = Column(Integer, primary_key=True)
+        name = Column(String)
+        name_syn = synonym(name)
+        addresses = relationship(Address)
+    
+    # universal entry point is inspect()
+    >>> b = inspect(User)
+    
+    # column collection
+    >>> b.columns
+    [<id column>, <name column>]
+    
+    # it's a ColumnCollection
+    >>> b.columns.id
+    <id column>
+    
+    # i.e. from mapper
+    >>> b.primary_key
+    (<id column>, )
+    
+    # ColumnProperty
+    >>> b.attr.id.columns
+    [<id column>]
+    
+    # get only column attributes
+    >>> b.column_attrs
+    [<id prop>, <name prop>]
+    
+    # it's a namespace
+    >>> b.column_attrs.id
+    <id prop>
+    
+    # get only relationships
+    >>> b.relationships
+    [<addresses prop>]
+    
+    # it's a namespace
+    >>> b.relationships.addresses
+    <addresses prop>
+    
+    # point inspect() at a class level attribute,
+    # basically returns ".property"
+    >>> b = inspect(User.addresses)
+    >>> b
+    <addresses prop>
+    
+    # mapper
+    >>> b.mapper
+    <Address mapper>
+    
+    # the columns collection is None, just as a ColumnProperty has an empty mapper
+    >>> b.columns
+    None
+    
+    # the parent
+    >>> b.parent
+    <User mapper>
+    
+    # __clause_element__()
+    >>> b.expression
+    User.id==Address.user_id
+    
+    >>> inspect(User.id).expression
+    <id column with ORM annotations>
+    
+    # inspect() works on instances!
+    >>> u1 = User(id=3, name='x')
+    >>> b = inspect(u1)
+    
+    # what's b here?  probably InstanceState
+    >>> b
+    <InstanceState>
+    
+    >>> b.attr.keys()
+    ['id', 'name', 'name_syn', 'addresses']
+    
+    # attribute interface
+    >>> b.attr.id
+    <magic attribute inspect thing>
+    
+    # value
+    >>> b.attr.id.value
+    3
+    
+    # history
+    >>> b.attr.id.history
+    <history object>
+    
+    >>> b.attr.id.history.unchanged
+    3
+    
+    >>> b.attr.id.history.deleted
+    None
+    
+    # let's assume the object is persistent
+    >>> s = Session()
+    >>> s.add(u1)
+    >>> s.commit()
+    
+    # big one - the primary key identity!  always
+    # works in query.get()
+    >>> b.identity
+    [3]
+    
+    # the mapper level key
+    >>> b.identity_key
+    (User, [3])
+    
+    >>> b.persistent
+    True
+    
+    >>> b.transient
+    False
+    
+    >>> b.deleted
+    False
+    
+    >>> b.detached
+    False
+    
+    >>> b.session
+    <session>
+    
 
-      class User(Base):
-          __tablename__ = 'user'
+#2208
 
-          id = Column(Integer, primary_key=True)
-          name = Column(String)
-          name_syn = synonym(name)
-          addresses = relationship(Address)
+Fully extensible, type-level operator support in Core
+-----------------------------------------------------
 
-      # universal entry point is inspect()
-      >>> b = inspect(User)
+Status: completed, needs more docs
 
-      # column collection
-      >>> b.columns
-      [<id column>, <name column>]
+The Core has to date never had any system for adding support
+for new SQL operators to Column and other expression
+constructs, other than the ``op(<somestring>)`` function
+which is "just enough" to make things work. There has also
+never been any system in place for Core which allows the
+behavior of existing operators to be overridden.   Up until
+now, the only way operators could be flexibly redefined was
+in the ORM layer, using ``column_property()`` given a
+``comparator_factory`` argument.   Third party libraries
+like GeoAlchemy therefore were forced to be ORM-centric and
+rely upon an array of hacks to apply new operations as well
+as to get them to propagate correctly.
 
-      # its a ColumnCollection
-      >>> b.columns.id
-      <id column>
+The new operator system in Core adds the one hook that's
+been missing all along, which is to associate new and
+overridden operators with *types*.   After all, it's not
+really a column, CAST operator, or SQL function that drives
+what kinds of operations are present - it's the *type* of
+the expression.   The implementation details are minimal -
+only a few extra methods are added to the core
+``ColumnElement`` type so that it consults its
+``TypeEngine`` object for an optional set of operators.
+New or revised operations can be associated with any type,
+either via subclassing of an existing type, by using
+``TypeDecorator``, or "globally across-the-board" by
+attaching a new ``Comparator`` object to an existing type
+class.
 
-      # i.e. from mapper
-      >>> b.primary_key
-      (<id column>, )
+For example, to add logarithm support to ``Numeric`` types:
 
-      # ColumnProperty
-      >>> b.attr.id.columns
-      [<id column>]
+::
 
-      # get only column attributes
-      >>> b.column_attrs
-      [<id prop>, <name prop>]
+    
+    from sqlalchemy.types import Numeric
+    from sqlalchemy.sql import func
+    
+    class CustomNumeric(Numeric):
+        class comparator_factory(Numeric.Comparator):
+            def log(self, other):
+                return func.log(self.expr, other)
 
-      # its a namespace
-      >>> b.column_attrs.id
-      <id prop>
+The new type is usable like any other type:
 
-      # get only relationships
-      >>> b.relationships
-      [<addresses prop>]
+::
 
-      # its a namespace
-      >>> b.relationships.addresses
-      <addresses prop>
+    
+    data = Table('data', metadata,
+              Column('id', Integer, primary_key=True),
+              Column('x', CustomNumeric(10, 5)),
+              Column('y', CustomNumeric(10, 5))
+         )
+    
+    stmt = select([data.c.x.log(data.c.y)]).where(data.c.x.log(2) < value)
+    print conn.execute(stmt).fetchall()
+    
 
-      # point inspect() at a class level attribute,
-      # basically returns ".property"
-      >>> b = inspect(User.addresses)
-      >>> b
-      <addresses prop>
+New features which should come from this immediately are
+support for Postgresql's HSTORE type, which is ready to go
+in a separate library which may be merged, as well as all
+the special operations associated with Postgresql's ARRAY
+type.    It also paves the way for existing types to acquire
+lots more operators that are specific to those types, such
+as more string, integer and date operators.
 
-      # mapper
-      >>> b.mapper
-      <Address mapper>
+#2547
 
-      # None columns collection, just like columnprop has empty mapper
-      >>> b.columns
-      None
+New with_polymorphic() feature, can be used anywhere
+----------------------------------------------------
 
-      # the parent
-      >>> b.parent
-      <User mapper>
+Status: completed
 
-      # __clause_element__()
-      >>> b.expression
-      User.id==Address.user_id
+The ``Query.with_polymorphic()`` method allows the user to
+specify which tables should be present when querying against
+a joined-table entity.   Unfortunately the method is awkward
+and only applies to the first entity in the list, and
+otherwise has awkward behaviors both in usage as well as
+within the internals.  A new enhancement to the
+``aliased()`` construct has been added called
+``with_polymorphic()`` which allows any entity to be
+"aliased" into a "polymorphic" version of itself, freely
+usable anywhere:
 
-      >>> inspect(User.id).expression
-      <id column with ORM annotations>
+::
 
-      # inspect works on instances !
-      >>> u1 = User(id=3, name='x')
-      >>> b = inspect(u1)
+    from sqlalchemy.orm import with_polymorphic
+    palias = with_polymorphic(Person, [Engineer, Manager])
+    session.query(Company).\
+                join(palias, Company.employees).\
+                filter(or_(Engineer.language=='java', Manager.hair=='pointy'))
 
-      # what's b here ?  probably InstanceState
-      >>> b
-      <InstanceState>
+#2333
 
-      >>> b.attr.keys()
-      ['id', 'name', 'name_syn', 'addresses']
+of_type() works with alias(), with_polymorphic(), any(), has(), joinedload(), subqueryload(), contains_eager()
+--------------------------------------------------------------------------------------------------------------
 
-      # attribute interface
-      >>> b.attr.id
-      <magic attribute inspect thing>
+Status: completed
 
-      # value
-      >>> b.attr.id.value
-      3
+You can use ``of_type()`` with aliases and polymorphic
+constructs; it also works with most relationship functions
+``joinedload()``, ``subqueryload()``, ``contains_eager()``,
+``any()``, and ``has()``:
 
-      # history
-      >>> b.attr.id.history
-      <history object>
+::
 
-      >>> b.attr.id.history.unchanged
-      3
+    
+    # use eager loading in conjunction with with_polymorphic targets
+    Job_P = with_polymorphic(Job, SubJob, aliased=True)
+    q = s.query(DataContainer).\
+                join(DataContainer.jobs.of_type(Job_P)).\
+                    options(contains_eager(DataContainer.jobs.of_type(Job_P)))
+    
+    # pass subclasses to eager loads (implicitly applies with_polymorphic)
+    q = s.query(ParentThing).\
+                    options(
+                        joinedload_all(
+                            ParentThing.container,
+                            DataContainer.jobs.of_type(SubJob)
+                    ))
+    
+    # control self-referential aliasing with any()/has()
+    Job_A = aliased(Job)
+    q = s.query(Job).join(DataContainer.jobs).\
+                    filter(
+                        DataContainer.jobs.of_type(Job_A).\
+                            any(and_(Job_A.id < Job.id, Job_A.type=='fred')))
+    
 
-      >>> b.attr.id.history.deleted
-      None
+#2438 #1106
 
-      # lets assume the object is persistent
-      >>> s = Session()
-      >>> s.add(u1)
-      >>> s.commit()
+New DeferredReflection Feature in Declarative
+---------------------------------------------
 
-      # big one - the primary key identity !  always
-      # works in query.get()
-      >>> b.identity
-      [3]
+The "deferred reflection" example has been moved to a
+supported feature within Declarative.  This feature allows
+the construction of declarative mapped classes with only
+placeholder ``Table`` metadata, until a ``prepare()`` step
+is called, given an ``Engine`` with which to reflect fully
+all tables and establish actual mappings.   The system
+supports overriding of columns, single and joined
+inheritance, as well as distinct bases-per-engine. A full
+declarative configuration can now be created against an
+existing table that is assembled upon engine creation time
+in one step:
 
-      # the mapper level key
-      >>> b.identity_key
-      (User, [3])
+::
 
-      >>> b.persistent
-      True
+    class ReflectedOne(DeferredReflection, Base):
+        __abstract__ = True
+    
+    class ReflectedTwo(DeferredReflection, Base):
+        __abstract__ = True
+    
+    class MyClass(ReflectedOne):
+        __tablename__ = 'mytable'
+    
+    class MyOtherClass(ReflectedOne):
+        __tablename__ = 'myothertable'
+    
+    class YetAnotherClass(ReflectedTwo):
+        __tablename__ = 'yetanothertable'
+    
+    ReflectedOne.prepare(engine_one)
+    ReflectedTwo.prepare(engine_two)
 
-      >>> b.transient
-      False
+#2485
 
-      >>> b.deleted
-      False
+New, configurable DATE, TIME types for SQLite
+---------------------------------------------
 
-      >>> b.detached
-      False
+Status: completed
 
-      >>> b.session
-      <session>
+SQLite has no built-in DATE, TIME, or DATETIME types, and
+instead provides some support for storage of date and time
+values either as strings or integers.   The date and time
+types for SQLite are enhanced in 0.8 to be much more
+configurable as to the specific format, including that the
+"microseconds" portion is optional, as well as pretty much
+everything else.
 
-  #2208
+::
 
-  === Fully extensible, type-level operator support in Core
-  ===
+    Column('sometimestamp', sqlite.DATETIME(truncate_microseconds=True))
+    Column('sometimestamp', sqlite.DATETIME(
+                        storage_format=(
+                                    "%(year)04d%(month)02d%(day)02d"
+                                    "%(hour)02d%(minute)02d%(second)02d%(microsecond)06d"
+                        ),
+                        regexp="(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{6})"
+                        )
+                )
+    Column('somedate', sqlite.DATE(
+                        storage_format="%(month)02d/%(day)02d/%(year)04d",
+                        regexp="(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)",
+                    )
+                )
+    
 
-  Status: completed, needs more docs
+Huge thanks to Nate Dub for sprinting on this at Pycon '12.
 
-  The Core has to date never had any system of adding support
-  for new SQL
-  operators to Column and other expression constructs, other
-  than the
-  ``op(<somestring>)`` function which is "just enough" to make
-  things work.
-  There has also never been any system in place for Core which
-  allows the behavior
-  of existing operators to be overridden.   Up until now, the
-  only way operators
-  could be flexibly redefined was in the ORM layer, using
-  ``column_property()``
-  given a ``comparator_factory`` argument.   Third party
-  libraries like !GeoAlchemy
-  therefore were forced to be ORM-centric and rely upon an
-  array of hacks to
-  apply new opertions as well as to get them to propagate
-  correctly.
+#2363
 
-  The new operator system in Core adds the one hook that's
-  been missing all along,
-  which is to associate new and overridden operators with
-  *types*.   Since after all,
-  it's not really a column, CAST operator, or SQL function
-  that really drives what
-  kinds of operations are present, it's the *type* of the
-  expression.   The implementation
-  details are minimal - only a few extra methods are added to
-  the core ``ColumnElement`` type
-  so that it consults it's ``TypeEngine`` object for an
-  optional set of operators.    New or
-  revised operations can be associated with any type, either
-  via subclassing of an existing
-  type, by using ``TypeDecorator``, or "globally across-the-
-  board" by
-  attaching a new ``Comparator`` object to an existing type
-  class.
+Query.update() will support UPDATE..FROM
+----------------------------------------
 
-  For example, to add logarithm support to ``Numeric`` types:
+Status: not implemented
 
-  ::
+Not 100% sure if this will make it in, but the new
+UPDATE..FROM mechanics should work in ``query.update()``:
 
-      from sqlalchemy.types import Numeric
-      from sqlalchemy.sql import func
+::
 
-      class CustomNumeric(Numeric):
-          class comparator_factory(Numeric.Comparator):
-              def log(self, other):
-                  return func.log(self.expr, other)
+    query(SomeEntity).\
+        filter(SomeEntity.id==SomeOtherEntity.id).\
+        filter(SomeOtherEntity.foo=='bar').\
+        update({"data":"x"})
 
-  The new type is usable like any other type:
+Should also work when used against a joined-inheritance
+entity, provided the target of the UPDATE is local to the
+table being filtered on, or if the parent and child tables
+are mixed, they are joined explicitly in the query.  Below,
+given ``Engineer`` as a joined subclass of ``Person``:
 
-  ::
+::
 
-      data = Table('data', metadata,
-                Column('id', Integer, primary_key=True),
-                Column('x', CustomNumeric(10, 5)),
-                Column('y', CustomNumeric(10, 5))
-           )
+    query(Engineer).\
+            filter(Person.id==Engineer.id).\
+            filter(Person.name=='dilbert').\
+            update({"engineer_data":"java"})
 
-      stmt = select([data.c.x.log(data.c.y)]).where(data.c.x.log(2) < value)
-      print conn.execute(stmt).fetchall()
+would produce:
 
-  New features which should come from this immediately are
-  support for Postgresql's HSTORE
-  type, which is ready to go in a separate library which may
-  be merged, as well as all the
-  special operations associated with Postgresql's ARRAY type.
-  It also paves the way for
-  existing types to acquire lots more operators that are
-  specific to those types, such
-  as more string, integer and date operators.
+::
 
-  #2547
+    UPDATE engineer SET engineer_data='java' FROM person
+    WHERE person.id=engineer.id AND person.name='dilbert'
 
-  === New with_polymorphic() feature, can be used anywhere ===
+#2365
 
-  Status: completed
+Enhanced Postgresql ARRAY type
+------------------------------
 
-  The ``Query.with_polymorphic()`` method allows the user to
-  specify which tables
-  should be present when querying against a joined-table
-  entity.   Unfortunately the method
-  is awkward and only applies to the first entity in the list,
-  and otherwise has awkward
-  behaviors both in usage as well as within the internals.  A
-  new enhancement to the ``aliased()``
-  construct has been added called ``with_polymorphic()`` which
-  allows any entity to be "aliased"
-  into a "polymorphic" version of itself, freely usable
-  anywhere:
+Status: completed
 
-  ::
+The ``postgresql.ARRAY`` type will accept an optional
+"dimension" argument, pinning it to a fixed number of
+dimensions and greatly improving efficiency when retrieving
+results:
 
-      from sqlalchemy.orm import with_polymorphic
-      palias = with_polymorphic(Person, [Engineer, Manager])
-      session.query(Company).\
-                  join(palias, Company.employees).\
-                  filter(or_(Engineer.language=='java', Manager.hair=='pointy'))
+::
 
-  #2333
+    # old way, still works since PG supports N-dimensions per row:
+    Column("my_array", postgresql.ARRAY(Integer))
+    
+    # new way, will render ARRAY with correct number of [] in DDL,
+    # will process binds and results more efficiently as we don't need
+    # to guess how many levels deep to go
+    Column("my_array", postgresql.ARRAY(Integer, dimensions=2))
 
-  === of_type() works with alias(), with_polymorphic(), any(),
-  has(), joinedload(), subqueryload(), contains_eager() ===
+#2441
 
-  Status: completed
+rollback() will only roll back "dirty" objects from a begin_nested()
+--------------------------------------------------------------------
 
-  You can use ``of_type()`` with aliases
-  and polymorphic constructs; also works with most
-  relationship
-  functions like ``joinedload()``, ``subqueryload()``,
-  ``contains_eager()``, ``any()``, and ``has()``:
+Status: completed
 
-  ::
+A behavioral change that should improve efficiency for those
+users using SAVEPOINT via ``Session.begin_nested()`` - upon
+``rollback()``, only those objects that were made dirty
+since the last flush will be expired; the rest of the
+``Session`` remains intact.  This is because a ROLLBACK to a
+SAVEPOINT does not terminate the containing transaction's
+isolation, so no expiry is needed except for those changes
+that were not flushed in the current transaction.
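+
+A minimal sketch of the new behavior, assuming a mapped
+``User`` class:
+
+::
+
+    s = Session()
+    u1, u2 = s.query(User).all()   # both loaded, unmodified
+
+    s.begin_nested()               # emits SAVEPOINT
+    u1.name = 'changed'
+
+    s.rollback()                   # emits ROLLBACK TO SAVEPOINT
+
+    # in 0.8 only u1, made dirty within the SAVEPOINT, is expired;
+    # u2's already-loaded state remains intact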
 
-      # use eager loading in conjunction with with_polymorphic targets
-      Job_P = with_polymorphic(Job, SubJob, aliased=True)
-      q = s.query(DataContainer).\
-                  join(DataContainer.jobs.of_type(Job_P)).\
-                      options(contains_eager(DataContainer.jobs.of_type(Job_P)))
+#2452
 
-      # pass subclasses to eager loads (implicitly applies with_polymorphic)
-      q = s.query(ParentThing).\
-                      options(
-                          joinedload_all(
-                              ParentThing.container,
-                              DataContainer.jobs.of_type(SubJob)
-                      ))
+Behavioral Changes
+==================
 
-      # control self-referential aliasing with any()/has()
-      Job_A = aliased(Job)
-      q = s.query(Job).join(DataContainer.jobs).\
-                      filter(
-                          DataContainer.jobs.of_type(Job_A).\
-                              any(and_(Job_A.id < Job.id, Job_A.type=='fred'))
+The after_attach event fires after the item is associated with the Session instead of before; before_attach added
+-----------------------------------------------------------------------------------------------------------------
 
-  #2438 #1106
+Event handlers which use after_attach can now assume the
+given instance is associated with the given session:
 
-  === New !DeferredReflection Feature in Declarative ===
+::
 
-  The "deferred reflection" example has been moved to a
-  supported feature
-  within Declarative.  This feature allows the construction of
-  declarative mapped classes with only placeholder ``Table``
-  metadata, until a ``prepare()``
-  step is called, given an ``Engine`` with which to reflect
-  fully all tables
-  and establish actual mappings.   The system supports
-  overriding of columns,
-  single and joined inheritance, as well as distinct bases-
-  per-engine.
-  A full declarative configuration can now be created against
-  an existing table
-  that is assembled upon engine creation time in one step:
+    @event.listens_for(Session, "after_attach")
+    def after_attach(session, instance):
+        assert instance in session
 
-  ::
+Some use cases require that it work this way.  However,
+other use cases require that the item is *not* yet part of
+the session, such as when a query, intended to load some
+state required for an instance, emits autoflush first and
+would otherwise prematurely flush the target object.  Those
+use cases should use the new "before_attach" event:
 
-      class ReflectedOne(DeferredReflection, Base):
-          __abstract__ = True
+::
 
-      class ReflectedTwo(DeferredReflection, Base):
-          __abstract__ = True
+    @event.listens_for(Session, "before_attach")
+    def before_attach(session, instance):
+        instance.some_necessary_attribute = session.query(Widget).\
+                                                filter_by(name=instance.widget_name).\
+                                                first()
 
-      class MyClass(ReflectedOne):
-          __tablename__ = 'mytable'
+#2464
 
-      class MyOtherClass(ReflectedOne):
-          __tablename__ = 'myothertable'
+Query now auto-correlates like a select() does
+----------------------------------------------
 
-      class YetAnotherClass(ReflectedTwo):
-          __tablename__ = 'yetanothertable'
+Status: Completed
 
-      ReflectedOne.prepare(engine_one)
-      ReflectedTwo.prepare(engine_two)
+Previously it was necessary to call ``Query.correlate`` in
+order to have a column- or WHERE-subquery correlate to the
+parent:
 
-  #2485
+::
 
-  === New, configurable DATE, TIME types for SQLite ===
+    subq = session.query(Entity.value).\
+                    filter(Entity.id==Parent.entity_id).\
+                    correlate(Parent).\
+                    as_scalar()
+    session.query(Parent).filter(subq=="some value")
 
-  Status: completed
+This was the opposite behavior of a plain ``select()``
+construct which would assume auto-correlation by default.
+The above statement in 0.8 will correlate automatically:
 
-  SQLite has no built-in DATE, TIME, or DATETIME types, and
-  instead provides some support
-  for storage of date and time values either as strings or
-  integers.   The date and time
-  types for SQLite are enhanced in 0.8 to be much more
-  configurable as to the specific format,
-  including that the "microseconds" portion is optional, as
-  well as pretty much everything else.
+::
 
-  ::
+    subq = session.query(Entity.value).\
+                    filter(Entity.id==Parent.entity_id).\
+                    as_scalar()
+    session.query(Parent).filter(subq=="some value")
 
-      Column('sometimestamp', sqlite.DATETIME(truncate_microseconds=True))
-      Column('sometimestamp', sqlite.DATETIME(
-                          storage_format=(
-                                      "%(year)04d%(month)02d%(day)02d"
-                                      "%(hour)02d%(minute)02d%(second)02d%(microsecond)06d"
-                          ),
-                          regexp="(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{6})"
-                          )
-                  )
-      Column('somedate', sqlite.DATE(
-                          storage_format="%(month)02d/%(day)02d/%(year)04d",
-                          regexp="(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)",
-                      )
-                  )
+As in ``select()``, correlation can be disabled by calling
+``query.correlate(None)`` or set manually by passing an
+entity, ``query.correlate(someentity)``.
 
-  Huge thanks to Nate Dub for the sprinting on this at Pycon
-  '12.
+#2179
 
-  #2363
+No more magic coercion of "=" to IN when comparing to subquery in MS-SQL
+------------------------------------------------------------------------
 
-  === Query.update() will support UPDATE..FROM ===
+Status: Completed
 
-  Status: not implemented
+We found a very old behavior in the MSSQL dialect which
+would attempt to rescue users from themselves when
+doing something like this:
 
-  Not 100% sure if this will make it in, the new UPDATE..FROM
-  mechanics should work in query.update():
+::
 
-  ::
+    scalar_subq = select([someothertable.c.id]).where(someothertable.c.data=='foo')
+    select([sometable]).where(sometable.c.id==scalar_subq)
 
-      query(SomeEntity).\
-          filter(SomeEntity.id==SomeOtherEntity.id).\
-          filter(SomeOtherEntity.foo=='bar').\
-          update({"data":"x"})
+SQL Server doesn't allow an equality comparison to a scalar
+SELECT, that is, "x = (SELECT something)". The MSSQL dialect
+would convert this to an IN.   The same thing would happen
+however upon a comparison like "(SELECT something) = x", and
+overall this level of guessing is outside of SQLAlchemy's
+usual scope so the behavior is removed.
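+
+Where an IN comparison is actually intended, it can be
+spelled out explicitly (a sketch reusing the tables above):
+
+::
+
+    stmt = select([sometable]).where(
+        sometable.c.id.in_(
+            select([someothertable.c.id]).
+                where(someothertable.c.data=='foo')
+        )
+    )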
 
-  Should also work when used against a joined-inheritance
-  entity, provided the target of the UPDATE is
-  local to the table being filtered on, or if the parent and
-  child tables are mixed, they are joined
-  explicitly in the query.  Below, given ``Engineer`` as a
-  joined subclass of ``Person``:
+#2277
 
-  ::
+Fixed the behavior of Session.is_modified()
+-------------------------------------------
 
-      query(Engineer).\
-              filter(Person.id==Engineer.id).\
-              filter(Person.name=='dilbert').\
-              update({"engineer_data":"java"})
+Status: completed
 
-  would produce:
+The ``Session.is_modified()`` method accepts an argument
+``passive`` which basically should not be necessary; the
+argument in all cases should be the value ``True`` - when
+left at its default of ``False`` it would have the effect of
+hitting the database, and often triggering autoflush which
+would itself change the results.   In 0.8 the ``passive``
+argument will have no effect, and unloaded attributes will
+never be checked for history since by definition there can
+be no pending state change on an unloaded attribute.
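+
+A minimal sketch of the 0.8 behavior (assuming a mapped
+``User`` class and an open session):
+
+::
+
+    u1 = session.query(User).first()
+    u1.name = 'newname'
+
+    # checked purely in memory in 0.8 - no database access,
+    # no autoflush; unloaded attributes are never examined
+    assert session.is_modified(u1)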
 
-  ::
+#2320
 
-      UPDATE engineer SET engineer_data='java' FROM person
-      WHERE person.id=engineer.id AND person.name='dilbert'
+``column.key`` is honored in the ``.c.`` attribute of ``select()`` with ``apply_labels()``
+------------------------------------------------------------------------------------------
 
-  #2365
+Status: completed
 
-  === Enhanced Postgresql ARRAY type ===
+Users of the expression system know that ``apply_labels()``
+prepends the table name to each column name, affecting the
+names that are available from ``.c.``:
 
-  status: completed
+::
 
-  The ``postgresql.ARRAY`` type will accept an optional
-  "dimension" argument, pinning
-  it to a fixed number of dimensions and greatly improving
-  efficiency when retrieving
-  results:
+    s = select([table1]).apply_labels()
+    s.c.table1_col1
+    s.c.table1_col2
 
-  ::
+Before 0.8, if the ``Column`` had a different ``key``, this
+key would be ignored, inconsistently versus when
+``apply_labels()`` was not used:
 
-      # old way, still works since PG supports N-dimensions per row:
-      Column("my_array", postgresql.ARRAY(Integer))
+::
 
-      # new way, will render ARRAY with correct number of [] in DDL,
-      # will process binds and results more efficiently as we don't need
-      # to guess how many levels deep to go
-      Column("my_array", postgresql.ARRAY(Integer, dimensions=2))
+    # before 0.8
+    table1 = Table('t1', metadata,
+        Column('col1', Integer, key='column_one')
+    )
+    s = select([table1])
+    s.c.column_one # would be accessible like this
+    s.c.col1 # would raise AttributeError
+    
+    s = select([table1]).apply_labels()
+    s.c.table1_column_one # would raise AttributeError
+    s.c.table1_col1 # would be accessible like this
 
-  #2441
+In 0.8, ``key`` is honored in both cases:
 
-  === rollback() will only roll back "dirty" objects from a
-  begin_nested() ===
+::
 
-  Status: completed
+    # with 0.8
+    table1 = Table('t1', metadata,
+        Column('col1', Integer, key='column_one')
+    )
+    s = select([table1])
+    s.c.column_one # works
+    s.c.col1 # AttributeError
+    
+    s = select([table1]).apply_labels()
+    s.c.table1_column_one # works
+    s.c.table1_col1 # AttributeError
 
-  A behavioral change that should improve efficiency for those
-  users using
-  SAVEPOINT via ``Session.begin_nested()`` - upon
-  ``rollback()``, only those objects that
-  were made dirty since the last flush will be expired, the
-  rest of the ``Session`` remains
-  intact.  This because a ROLLBACK to a SAVEPOINT does not
-  terminate the containing
-  transaction's isolation, so no expiry is needed except for
-  those changes that were
-  not flushed in the current transaction.
+All other behavior regarding "name" and "key" is the same,
+including that the rendered SQL will still use the form
+``<tablename>_<colname>`` - the emphasis here was on
+preventing the ``key`` contents from being rendered into the
+``SELECT`` statement so that there are no issues with
+special/non-ascii characters used in the ``key``.
 
-  #2452
+#2397
 
-  == Behavioral Changes ==
+single_parent warning is now an error
+-------------------------------------
 
-  === The after_attach event fires after the item is
-  associated with the Session instead of before; before_attach
-  added ===
+Status: completed
 
-  Event handlers which use after_attach can now assume the
-  given instance is associated
-  with the given session:
+A ``relationship()`` that is many-to-one or many-to-many and
+specifies "cascade='all, delete-orphan'", which is an
+awkward but nonetheless supported use case (with
+restrictions) will now raise an error if the relationship
+does not specify the ``single_parent=True`` option.
+Previously it would only emit a warning, but a failure would
+follow almost immediately within the attribute system in any
+case.
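+
+A sketch of the now-required form, using hypothetical
+``Parent``/``Child`` classes:
+
+::
+
+    class Child(Base):
+        __tablename__ = 'child'
+        id = Column(Integer, primary_key=True)
+        parent_id = Column(Integer, ForeignKey('parent.id'))
+
+        # many-to-one with delete-orphan cascade; without
+        # single_parent=True, 0.8 raises an error at configuration time
+        parent = relationship("Parent",
+                        cascade="all, delete-orphan",
+                        single_parent=True)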
 
-  ::
+#2405
 
-      @event.listens_for(Session, "after_attach")
-      def after_attach(session, instance):
-          assert instance in session
+Adding the ``inspector`` argument to the ``column_reflect`` event
+-----------------------------------------------------------------
 
-  Some use cases require that it work this way.  However,
-  other use cases require that
-  the item is *not* yet part of the session, such as when a
-  query, intended to load
-  some state required for an instance, emits autoflush first
-  and would otherwise
-  prematurely flush the target object.  Those use cases should
-  use the new "before_attach"
-  event:
+Status: completed
 
-  ::
+0.7 added a new event called ``column_reflect``, provided so
+that the reflection of columns could be augmented as each
+one is reflected.   We got this event slightly wrong in
+that the event gave no way to get at the current
+``Inspector`` and ``Connection`` being used for the
+reflection, in the case that additional information from the
+database is needed.   As this is a new event not widely used
+yet, we'll be adding the ``inspector`` argument into it
+directly:
 
-      @event.listens_for(Session, "before_attach")
-      def before_attach(session, instance):
-          instance.some_necessary_attribute = session.query(Widget).\
-                                                  filter_by(instance.widget_name).\
-                                                  first()
+::
 
-  #2464
+    @event.listens_for(Table, "column_reflect")
+    def listen_for_col(inspector, table, column_info):
+        # ...
 
-  === Query now auto-correlates like a select() does ===
+#2418
 
-  Status: Completed
+Disabling auto-detect of collations, casing for MySQL
+-----------------------------------------------------
 
-  Previously it was necessary to call ``Query.correlate`` in
-  order to have a column-
-  or WHERE-subquery correlate to the parent:
+Status: completed
 
-  ::
+The MySQL dialect does two calls, one very expensive, to
+load all possible collations from the database as well as
+information on casing, the first time an ``Engine``
+connects.   Neither of these collections is used for any
+SQLAlchemy functions, so these calls will be changed to no
+longer be emitted automatically. Applications that might
+have relied on these collections being present on
+``engine.dialect`` will need to call upon
+``_detect_collations()`` and ``_detect_casing()`` directly.
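+
+A sketch of that explicit call - note these are private
+dialect methods, and passing a connection here is an
+assumption based on the 0.7 internals:
+
+::
+
+    conn = engine.connect()
+    engine.dialect._detect_collations(conn)
+    engine.dialect._detect_casing(conn)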
 
-      subq = session.query(Entity.value).\
-                      filter(Entity.id==Parent.entity_id).\
-                      correlate(Parent).\
-                      as_scalar()
-      session.query(Parent).filter(subq=="some value")
+#2404
 
-  This was the opposite behavior of a plain ``select()``
-  construct which would assume auto-correlation
-  by default.   The above statement in 0.8 will correlate
-  automatically:
+"Unconsumed column names" warning becomes an exception
+------------------------------------------------------
 
-  ::
+Status: completed
 
-      subq = session.query(Entity.value).\
-                      filter(Entity.id==Parent.entity_id).\
-                      as_scalar()
-      session.query(Parent).filter(subq=="some value")
+Referring to a non-existent column in an ``insert()`` or
+``update()`` construct will raise an error instead of a
+warning:
 
-  like in ``select()``, correlation can be disabled by calling
-  ``query.correlate(None)`` or manually
-  set by passing an entity, ``query.correlate(someentity)``.
+::
 
-  #2179
+    t1 = table('t1', column('x'))
+    t1.insert().values(x=5, z=5) # raises "Unconsumed column names: z"
 
-  === No more magic coercion of "=" to IN when comparing to
-  subquery in MS-SQL ===
+#2415
 
-  Status: Completed
+Inspector.get_primary_keys() is deprecated, use Inspector.get_pk_constraint
+---------------------------------------------------------------------------
 
-  We found a very old behavior in the MSSQL dialect which
-  would attempt to rescue the
-  user from his or herself when doing something like this:
+Status: completed
 
-  ::
+These two methods on ``Inspector`` were redundant, where
+``get_primary_keys()`` would return the same information as
+``get_pk_constraint()`` minus the name of the constraint:
 
-      scalar_subq = select([someothertable.c.id]).where(someothertable.c.data=='foo')
-      select([sometable]).where(sometable.c.id==scalar_subq)
+::
 
-  SQL Server doesn't allow an equality comparison to a scalar
-  SELECT, that is, "x = (SELECT something)".
-  The MSSQL dialect would convert this to an IN.   The same
-  thing would happen however upon a comparison
-  like "(SELECT something) = x", and overall this level of
-  guessing is outside of SQLAlchemy's usual
-  scope so the behavior is removed.
+    >>> insp.get_primary_keys()
+    ["a", "b"]
+    
+    >>> insp.get_pk_constraint()
+    {"name":"pk_constraint", "constrained_columns":["a", "b"]}
 
-  #2277
+#2422
 
-  === Fixed the behavior of Session.is_modified() ===
+Case-insensitive result row names will be disabled in most cases
+----------------------------------------------------------------
 
-  Status: completed
+Status: completed
 
-  The ``Session.is_modified()`` method accepts an argument
-  ``passive`` which basically should not
-  be necessary, the argument in all cases should be the value
-  ``True`` - when left at its default of
-  ``False`` it would have the effect of hitting the database,
-  and often triggering autoflush which
-  would itself change the results.   In 0.8 the ``passive``
-  argument will have no effect, and
-  unloaded attributes will never be checked for history since
-  by definition there can be no pending
-  state change on an unloaded attribute.
+In a very old behavior, the column names in ``RowProxy``
+were always compared case-insensitively:
 
-  #2320
+::
 
-  === ``column.key`` is honored in the ``.c.`` attribute of
-  ``select()`` with ``apply_labels()`` ===
+    >>> row = result.fetchone()
+    >>> row['foo'] == row['FOO'] == row['Foo']
+    True
 
-  Status: completed
+This was for the benefit of a few dialects which in the
+early days needed this, like Oracle and Firebird, but in
+modern usage we have more accurate ways of dealing with the
+case-insensitive behavior of these two platforms.
 
-  Users of the expression system know that ``apply_labels()``
-  prepends the table name to each
-  column name, affecting the names that are available from
-  ``.c.``:
+Going forward, this behavior will be available only
+optionally, by passing the flag ``case_sensitive=False``
+to ``create_engine()``; otherwise, column names
+requested from the row must match with respect to casing.
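+
+A sketch of opting back in to the old behavior engine-wide:
+
+::
+
+    # column names from rows on this engine match case-insensitively
+    e = create_engine("mysql://scott:tiger@localhost/test",
+                            case_sensitive=False)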
 
-  ::
+#2423
 
-      s = select([table1]).apply_labels()
-      s.c.table1_col1
-      s.c.table1_col2
+``InstrumentationManager`` and alternate class instrumentation is now an extension
+----------------------------------------------------------------------------------
 
-  Before 0.8, if the ``Column`` had a different ``key``, this
-  key would be ignored, inconsistently
-  versus when ``apply_labels()`` were not used:
+The ``sqlalchemy.orm.interfaces.InstrumentationManager``
+class is moved to
+``sqlalchemy.ext.instrumentation.InstrumentationManager``.
+The "alternate instrumentation" system was built for the
+benefit of a very small number of installations that needed
+to work with existing or unusual class instrumentation
+systems, and generally is very seldom used.   The complexity
+of this system has been exported to an ``ext.`` module.  It
+remains unused until imported, typically when a third
+party library imports ``InstrumentationManager``, at which
+point it is injected back into ``sqlalchemy.orm`` by
+replacing the default ``InstrumentationFactory`` with
+``ExtendedInstrumentationRegistry``.
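+
+For the small number of affected libraries, only the import
+location changes:
+
+::
+
+    # formerly sqlalchemy.orm.interfaces.InstrumentationManager;
+    # importing from the new location injects the extended
+    # instrumentation system back into sqlalchemy.orm
+    from sqlalchemy.ext.instrumentation import InstrumentationManager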
 
-  ::
+Removed
+=======
 
-      # before 0.8
-      table1 = Table('t1', metadata,
-          Column('col1', Integer, key='column_one')
-      )
-      s = select([table1])
-      s.c.column_one # would be accessible like this
-      s.c.col1 # would raise AttributeError
+SQLSoup
+-------
 
-      s = select([table1]).apply_labels()
-      s.c.table1_column_one # would raise AttributeError
-      s.c.table1_col1 # would be accessible like this
+Status: completed
 
-  In 0.8, ``key`` is honored in both cases:
+SQLSoup is a handy package that presents an alternative
+interface on top of the SQLAlchemy ORM.   SQLSoup is now
+moved into its own project and documented/released
+separately; see https://bitbucket.org/zzzeek/sqlsoup.
 
-  ::
+SQLSoup is a very simple tool that could also benefit from
+contributors who are interested in its style of usage.
 
-      # with 0.8
-      table1 = Table('t1', metadata,
-          Column('col1', Integer, key='column_one')
-      )
-      s = select([table1])
-      s.c.column_one # works
-      s.c.col1 # AttributeError
+#2262
 
-      s = select([table1]).apply_labels()
-      s.c.table1_column_one # works
-      s.c.table1_col1 # AttributeError
+MutableType
+-----------
 
-  All other behavior regarding "name" and "key" are the same,
-  including that the rendered SQL
-  will still use the form ``<tablename>_<colname>`` - the
-  emphasis here was on preventing the ``key``
-  contents from being rendered into the ``SELECT`` statement
-  so that there are no issues with special/
-  non-ascii characters used in the ``key``.
+Status: completed
 
-  #2397
+The older "mutable" system within the SQLAlchemy ORM has
+been removed.   This refers to the ``MutableType`` interface
+which was applied to types such as ``PickleType`` and
+conditionally to ``TypeDecorator``, and since very early
+SQLAlchemy versions has provided a way for the ORM to detect
+changes in so-called "mutable" data structures such as JSON
+structures and pickled objects.   However, the
+implementation was never reasonable and forced a very
+inefficient mode of usage on the unit-of-work which caused
+an expensive scan of all objects to take place during flush.
+In 0.7, the `sqlalchemy.ext.mutable
+<http://docs.sqlalchemy.org/en/latest/orm/extensions/mutable.html>`_
+extension was introduced so that user-defined datatypes can
+appropriately send events to the unit of work as changes
+occur.
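+
+A condensed sketch of the replacement pattern, adapted from
+the ``sqlalchemy.ext.mutable`` documentation - a dictionary
+type that emits change events as it is mutated:
+
+::
+
+    from sqlalchemy import PickleType
+    from sqlalchemy.ext.mutable import Mutable
+
+    class MutableDict(Mutable, dict):
+        @classmethod
+        def coerce(cls, key, value):
+            "Convert plain dictionaries to MutableDict."
+            if not isinstance(value, MutableDict):
+                if isinstance(value, dict):
+                    return MutableDict(value)
+                return Mutable.coerce(key, value)
+            return value
+
+        def __setitem__(self, key, value):
+            "Detect dictionary set events and emit change events."
+            dict.__setitem__(self, key, value)
+            self.changed()
+
+        def __delitem__(self, key):
+            "Detect dictionary del events and emit change events."
+            dict.__delitem__(self, key)
+            self.changed()
+
+    # associate the wrapper with all uses of PickleType
+    MutableDict.associate_with(PickleType)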
 
-  === single_parent warning is now an error ===
+Today, usage of ``MutableType`` is expected to be low, as
+warnings have been in place for some years now regarding its
+inefficiency.
 
-  Status: completed
+#2442
 
-  A ``relationship()`` that is many-to-one or many-to-many and
-  specifies "cascade='all, delete-orphan'",
-  which is an awkward but nonetheless supported use case (with
-  restrictions) will now raise an error
-  if the relationship does not specify the
-  ``single_parent=True`` option.  Previously it would only
-  emit a warning, but a failure would follow almost
-  immediately within the attribute system in any case.
+sqlalchemy.exceptions (has been sqlalchemy.exc for years)
+---------------------------------------------------------
 
-  #2405
+Status: completed
 
-  === Adding the ``inspector`` argument to the
-  ``column_reflect`` event ===
+We had left in an alias ``sqlalchemy.exceptions`` to attempt
+to make it slightly easier for some very old libraries that
+hadn't yet been upgraded to use ``sqlalchemy.exc``.  Some
+users are still being confused by it, however, so in 0.8
+we're taking it out entirely to eliminate any of that
+confusion.
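+
+The fix for affected code is a one-line import change:
+
+::
+
+    # no longer available in 0.8:
+    # from sqlalchemy import exceptions
+
+    # the canonical location:
+    from sqlalchemy import exc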
 
-  Status: completed
+#2433
 
-  0.7 added a new event called ``column_reflect``, provided so
-  that the reflection of columns could
-  be augmented as each one were reflected.   We got this event
-  slightly wrong in that the event gave
-  no way to get at the current ``Inspector`` and
-  ``Connection`` being used for the reflection, in the case
-  that
-  additional information from the database is needed.   As
-  this is a new event not widely used yet, we'll
-  be adding the ``inspector`` argument into it directly:
-
-  ::
-
-      @event.listens_for(Table, "column_reflect")
-      def listen_for_col(inspector, table, column_info):
-          # ...
-
-  #2418
-
-  === Disabling auto-detect of collations, casing for MySQL
-  ===
-
-  Status: completed
-
-  The MySQL dialect does two calls, one very expensive, to
-  load all possible collations from the database
-  as well as information on casing, the first time an
-  ``Engine`` connects.   Neither of these collections
-  are used for any SQLAlchemy functions, so these calls will
-  be changed to no longer be emitted automatically.
-  Applications that might have relied on these collections
-  being present on ``engine.dialect`` will need to call
-  upon ``_detect_collations()`` and ``_detect_casing()``
-  directly.
-
-  #2404
-
-  === "Unconsumed column names" warning becomes an exception
-  ===
-
-  Status: completed
-
-  Referring to a non-existent column in an ``insert()`` or
-  ``update()`` construct will raise an error
-  instead of a warning:
-
-  ::
-
-      t1 = table('t1', column('x'))
-      t1.insert().values(x=5, z=5) # raises "Unconsumed column names: z"
-
-  #2415
-
-  === Inspector.get_primary_keys() is deprecated, use
-  Inspector.get_pk_constraint ===
-
-  Status: completed
-
-  These two methods on ``Inspector`` were redundant, where
-  ``get_primary_keys()`` would return the same
-  information as ``get_pk_constraint()`` minus the name of the
-  constraint:
-
-  ::
-
-      >>> insp.get_primary_keys()
-      ["a", "b"]
-
-      >>> insp.get_pk_constraint()
-      {"name":"pk_constraint", "constrained_columns":["a", "b"]}
-
-  #2422
-
-  === Case-insensitive result row names will be disabled in
-  most cases ===
-
-  Status: completed
-
-  A very old behavior, the column names in ``RowProxy`` were
-  always compared case-insensitively:
-
-  ::
-
-      >>> row = result.fetchone()
-      >>> row['foo'] == row['FOO'] == row['Foo']
-      True
-
-  This was for the benefit of a few dialects which in the
-  early days needed this, like Oracle and
-  Firebird, but in modern usage we have more accurate ways of
-  dealing with the case-insensitive behavior
-  of these two platforms.
-
-  Going forward, this behavior will be available only
-  optionally, by passing the flag ```case_sensitive=False```
-  to ```create_engine()```, but otherwise column names
-  requested from the row must match as far as casing.
-
-  #2423
-
-  === ``InstrumentationManager`` and alternate class
-  instrumentation is now an extension ===
-
-  The ``sqlalchemy.orm.interfaces.InstrumentationManager``
-  class is moved to
-  ``sqlalchemy.ext.instrumentation.InstrumentationManager``.
-  The "alternate instrumentation"
-  system was built for the benefit of a very small number of
-  installations that needed to
-  work with existing or unusual class instrumentation systems,
-  and generally
-  is very seldom used.   The complexity of this system has
-  been exported
-  to an ``ext.`` module.  It remains unused until once
-  imported, typically when a third
-  party library imports ``InstrumentationManager``, at which
-  point it is
-  injected back into ``sqlalchemy.orm`` by replacing the
-  default ``InstrumentationFactory``
-  with ``ExtendedInstrumentationRegistry``.
-
-  == Removed ==
-
-  === SQLSoup ===
-
-  Status: completed
-
-  SQLSoup is a handy package that presents an alternative
-  interface on top of the SQLAlchemy ORM.   SQLSoup is now
-  moved into its own project and documented/released
-  separately; see https://bitbucket.org/zzzeek/sqlsoup.
-
-  SQLSoup is a very simple tool that could also benefit from
-  contributors who are interested in its
-  style of usage.
-
-  #2262
-
-  === !MutableType ===
-
-  Status: completed
-
-  The older "mutable" system within the SQLAlchemy ORM has
-  been removed.   This
-  refers to the ``MutableType`` interface which was applied to
-  types such as ``PickleType`` and
-  conditionally to ``TypeDecorator``, and since very early
-  SQLAlchemy versions has provided a way
-  for the ORM to detect changes in so-called "mutable" data
-  structures such as JSON structures
-  and pickled objects.   However, the implementation was never
-  reasonable and forced a very inefficient
-  mode of usage on the unit-of-work which caused an expensive
-  scan of all objects to take
-  place during flush.  In 0.7, the `sqlalchemy.ext.mutable <ht
-  tp://docs.sqlalchemy.org/en/latest/orm/extensions/mutable.ht
-  ml>`_
-  extension was introduced so that user-defined datatypes can
-  appropriately send events to the unit of work
-  as changes occur.
-
-  Today, usage of ``MutableType`` is expected to be low, as
-  warnings have been in place for some years now
-  regarding its inefficiency.
-
-  #2442
-
-  === sqlalchemy.exceptions (has been sqlalchemy.exc for
-  years) ===
-
-  Status: completed
-
-  We had left in an alias ``sqlalchemy.exceptions`` to attempt
-  to make it slightly easier for some
-  very old libraries that hadn't yet been upgraded to use
-  ``sqlalchemy.exc``.  Some users are still
-  being confused by it however so in 0.8 we're taking it out
-  entirely to eliminate any of that
-  confusion.
-
-  #2433

File trac_to_rst.py

                     current_chunk = []
 
             elif code:
-                current_chunk.append(line)
+                if not re.match(r'^\s*#!', line):
+                    current_chunk.append(line)
             elif not line:
                 if current_chunk:
                     yield {
                 if line == "----" or \
                     line.startswith("[[PageOutline"):
                     continue
+                line = re.sub(r'(\*\*?\w+)(?!\*)\b', lambda m: "\\%s" % m.group(1), line)
+                line = re.sub(r'\!(\w+)\b', lambda m: m.group(1), line)
                 line = re.sub(r"`(.+?)`", lambda m: "``%s``" % m.group(1), line)
-                line = re.sub(r"'''(.+?)'''", lambda m: "**%s**" % m.group(1), line)
-                line = re.sub(r"''(.+?)'", lambda m: "*%s*" % m.group(1), line)
+                line = re.sub(r"'''(.+?)'''", lambda m: "**%s**" % m.group(1).replace("``", ""), line)
+                line = re.sub(r"''(.+?)'", lambda m: "*%s*" % m.group(1).replace("``", ""), line)
                 line = re.sub(r'\[(http://\S+) (.*)\]',
                         lambda m: "`%s <%s>`_" % (m.group(2), m.group(1)),
                         line