Commits

Hajime Nakagami committed 84570fe Merge

merge from default

  • Parent commits 6d98014, 775b404
  • Branches cymysql


Files changed (24)

File doc/build/changelog/changelog_07.rst

 
     .. change::
       :tags: bug, orm
-      :tickets: 2689
-
-    Fixed bug in unit of work whereby a joined-inheritance
-    subclass could insert the row for the "sub" table
-    before the parent table, if the two tables had no
-    ForeignKey constraints set up between them.
+      :tickets: 2699
+
+      Fixed bug when a query of the form:
+      ``query(SubClass).options(subqueryload(BaseClass.attrname))``,
+      where ``SubClass`` is a joined-inheritance subclass of ``BaseClass``,
+      would fail to apply the ``JOIN`` inside the subquery
+      on the attribute load, producing a cartesian product.
+      The populated results still tended to be correct as additional
+      rows are just ignored, so this issue may be present as a
+      performance degradation in applications that are
+      otherwise working correctly.
+
+    .. change::
+        :tags: bug, orm
+        :tickets: 2689
+
+      Fixed bug in unit of work whereby a joined-inheritance
+      subclass could insert the row for the "sub" table
+      before the parent table, if the two tables had no
+      ForeignKey constraints set up between them.
 
     .. change::
         :tags: feature, postgresql

File doc/build/changelog/changelog_08.rst

 
     .. change::
       :tags: bug, orm
+      :tickets: 2697
+
+      Fixed bug whereby ORM would run the wrong kind of
+      query when refreshing an inheritance-mapped class
+      where the superclass was mapped to a non-Table
+      object, like a custom join() or a select(),
+      running a query that assumed a hierarchy that's
+      mapped to individual Table-per-class.
+
+    .. change::
+      :tags: bug, orm
+
+      Fixed `__repr__()` on mapper property constructs
+      to work before the object is initialized, so
+      that Sphinx builds with recent Sphinx versions
+      can read them.
+
+    .. change::
+      :tags: bug, sql, postgresql
+
+      The _Binary base type now converts values through
+      the bytes() callable when run on Python 3; in particular
+      psycopg2 2.5 with Python 3.3 seems to now be returning
+      the "memoryview" type, so this is converted to bytes
+      before return.
+
+    .. change::
+      :tags: bug, sql
+      :tickets: 2695
+
+      Improvements to Connection auto-invalidation
+      handling.  If a non-disconnect error occurs,
+      but leads to a delayed disconnect error within error
+      handling (happens with MySQL), the disconnect condition
+      is detected.  The Connection can now also be closed
+      when in an invalid state, meaning it will raise "closed"
+      on next usage, and additionally the "close with result"
+      feature will work even if the autorollback in an error
+      handling routine fails and regardless of whether the
+      condition is a disconnect or not.
+
+    .. change::
+      :tags: bug, sql
+      :tickets: 2702
+
+      A major fix to the way in which a select() object produces
+      labeled columns when apply_labels() is used; this mode
+      produces a SELECT where each column is labeled as in
+      <tablename>_<columnname>, to remove column name collisions
+      for a multiple table select.   The fix is that if two labels
+      collide when combined with the table name, i.e.
+      "foo.bar_id" and "foo_bar.id", anonymous aliasing will be
+      applied to one of the dupes.  This allows the ORM to handle
+      both columns independently; previously, 0.7
+      would in some cases silently emit a second SELECT for the
+      column that was "duped", and in 0.8 an ambiguous column error
+      would be emitted.   The "keys" applied to the .c. collection
+      of the select() will also be deduped, so that the "column
+      being replaced" warning will no longer emit for any select()
+      that specifies use_labels, though the dupe key will be given
+      an anonymous label which isn't generally user-friendly.
+
+    .. change::
+      :tags: bug, orm, declarative
+      :tickets: 2656
+
+      Fixed indirect regression regarding :func:`.has_inherited_table`,
+      where since it considers the current class' ``__table__``, was
+      sensitive to when it was called.  This is 0.7's behavior also,
+      but in 0.7 things tended to "work out" within events like
+      ``__mapper_args__()``.  :func:`.has_inherited_table` now only
+      considers superclasses, so should return the same answer
+      regarding the current class no matter when it's called
+      (obviously assuming the state of the superclass).
+
+    .. change::
+      :tags: bug, orm
+      :tickets: 2699
+
+      Fixed bug when a query of the form:
+      ``query(SubClass).options(subqueryload(BaseClass.attrname))``,
+      where ``SubClass`` is a joined-inheritance subclass of ``BaseClass``,
+      would fail to apply the ``JOIN`` inside the subquery
+      on the attribute load, producing a cartesian product.
+      The populated results still tended to be correct as additional
+      rows are just ignored, so this issue may be present as a
+      performance degradation in applications that are
+      otherwise working correctly.  Also in 0.7.11.
+
+    .. change::
+      :tags: bug, orm
       :tickets: 2689
 
-    Fixed bug in unit of work whereby a joined-inheritance
-    subclass could insert the row for the "sub" table
-    before the parent table, if the two tables had no
-    ForeignKey constraints set up between them.
-    Also in 0.7.11.
+      Fixed bug in unit of work whereby a joined-inheritance
+      subclass could insert the row for the "sub" table
+      before the parent table, if the two tables had no
+      ForeignKey constraints set up between them.
+      Also in 0.7.11.
 
     .. change::
       :tags: bug, mssql
       :pullreq: 47
 
-    Added support for additional "disconnect" messages
-    to the pymssql dialect.  Courtesy John Anderson.
+      Added support for additional "disconnect" messages
+      to the pymssql dialect.  Courtesy John Anderson.
 
     .. change::
       :tags: feature, sql
 
-    Loosened the check on dialect-specific argument names
-    passed to Table(); since we want to support external dialects
-    and also want to support args without a certain dialect
-    being installed, it only checks the format of the arg now,
-    rather than looking for that dialect in sqlalchemy.dialects.
+      Loosened the check on dialect-specific argument names
+      passed to Table(); since we want to support external dialects
+      and also want to support args without a certain dialect
+      being installed, it only checks the format of the arg now,
+      rather than looking for that dialect in sqlalchemy.dialects.
 
     .. change::
       :tags: bug, sql
 
-    Fixed bug whereby a DBAPI that can return "0"
-    for cursor.lastrowid would not function correctly
-    in conjunction with :attr:`.ResultProxy.inserted_primary_key`.
+      Fixed bug whereby a DBAPI that can return "0"
+      for cursor.lastrowid would not function correctly
+      in conjunction with :attr:`.ResultProxy.inserted_primary_key`.
 
     .. change::
       :tags: bug, mssql
       :tickets: 2683
       :pullreq: 46
 
-    Fixed Py3K bug regarding "binary" types and
-    pymssql.  Courtesy Marc Abramowitz.
+      Fixed Py3K bug regarding "binary" types and
+      pymssql.  Courtesy Marc Abramowitz.
 
     .. change::
       :tags: bug, postgresql
       :tickets: 2680
 
-    Added missing HSTORE type to postgresql type names
-    so that the type can be reflected.
+      Added missing HSTORE type to postgresql type names
+      so that the type can be reflected.
 
 .. changelog::
     :version: 0.8.0

File doc/build/dialects/index.rst

 * `sqlalchemy-akiban <https://github.com/zzzeek/sqlalchemy_akiban>`_ - driver and ORM extensions for the `Akiban <http://www.akiban.com>`_ database.
 * `sqlalchemy-cubrid <https://bitbucket.org/zzzeek/sqlalchemy-cubrid>`_ - driver for the CUBRID database.
 * `sqlalchemy-maxdb <https://bitbucket.org/zzzeek/sqlalchemy-maxdb>`_ - driver for the MaxDB database.
+* `CALCHIPAN <https://bitbucket.org/zzzeek/calchipan/>`_ - Adapts `Pandas <http://pandas.pydata.org/>`_ dataframes to SQLAlchemy.
 
 

File lib/sqlalchemy/engine/base.py

         self.__savepoint_seq = 0
         self.__branch = _branch
         self.__invalid = False
+        self.__can_reconnect = True
         if _dispatch:
             self.dispatch = _dispatch
         elif engine._has_events:
     def closed(self):
         """Return True if this connection is closed."""
 
-        return not self.__invalid and '_Connection__connection' \
-                        not in self.__dict__
+        return '_Connection__connection' not in self.__dict__ \
+            and not self.__can_reconnect
 
     @property
     def invalidated(self):
             return self._revalidate_connection()
 
     def _revalidate_connection(self):
-        if self.__invalid:
+        if self.__can_reconnect and self.__invalid:
             if self.__transaction is not None:
                 raise exc.InvalidRequestError(
                                 "Can't reconnect until invalid "
         and will allow no further operations.
 
         """
-
         try:
             conn = self.__connection
         except AttributeError:
-            return
-        if not self.__branch:
-            conn.close()
-        self.__invalid = False
-        del self.__connection
+            pass
+        else:
+            if not self.__branch:
+                conn.close()
+            del self.__connection
+        self.__can_reconnect = False
         self.__transaction = None
 
     def scalar(self, object, *multiparams, **params):
             if isinstance(e, (SystemExit, KeyboardInterrupt)):
                 raise
 
+    _reentrant_error = False
+    _is_disconnect = False
+
     def _handle_dbapi_exception(self,
                                     e,
                                     statement,
                                     parameters,
                                     cursor,
                                     context):
-        if getattr(self, '_reentrant_error', False):
+
+        if not self._is_disconnect:
+            self._is_disconnect = isinstance(e, self.dialect.dbapi.Error) and \
+                not self.closed and \
+                self.dialect.is_disconnect(e, self.__connection, cursor)
+
+        if self._reentrant_error:
             # Py3K
             #raise exc.DBAPIError.instance(statement, parameters, e,
             #                               self.dialect.dbapi.Error) from e
                                                     e)
                 context.handle_dbapi_exception(e)
 
-            is_disconnect = isinstance(e, self.dialect.dbapi.Error) and \
-                self.dialect.is_disconnect(e, self.__connection, cursor)
-
-            if is_disconnect:
-                dbapi_conn_wrapper = self.connection
-                self.invalidate(e)
-                if not hasattr(dbapi_conn_wrapper, '_pool') or \
-                    dbapi_conn_wrapper._pool is self.engine.pool:
-                    self.engine.dispose()
-            else:
+            if not self._is_disconnect:
                 if cursor:
                     self._safe_close_cursor(cursor)
                 self._autorollback()
-                if self.should_close_with_result:
-                    self.close()
 
             if not should_wrap:
                 return
             #                        parameters,
             #                        e,
             #                        self.dialect.dbapi.Error,
-            #                        connection_invalidated=is_disconnect) \
+            #                        connection_invalidated=self._is_disconnect) \
             #                        from e
             # Py2K
             raise exc.DBAPIError.instance(
                                     parameters,
                                     e,
                                     self.dialect.dbapi.Error,
-                                    connection_invalidated=is_disconnect), \
+                                    connection_invalidated=self._is_disconnect), \
                                     None, sys.exc_info()[2]
             # end Py2K
 
         finally:
             del self._reentrant_error
+            if self._is_disconnect:
+                del self._is_disconnect
+                dbapi_conn_wrapper = self.connection
+                self.invalidate(e)
+                if not hasattr(dbapi_conn_wrapper, '_pool') or \
+                        dbapi_conn_wrapper._pool is self.engine.pool:
+                    self.engine.dispose()
+            if self.should_close_with_result:
+                self.close()
 
     # poor man's multimethod/generic function thingy
     executors = {
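The restructured error handling above latches the disconnect condition in class-level flags and defers invalidation and close-with-result to the `finally` block, so cleanup happens even when the autorollback itself fails. A minimal stdlib-only sketch of that guard pattern (class names and the `is_disconnect` parameter are hypothetical simplifications, not the SQLAlchemy API):

```python
class Connection:
    """Sketch of the reentrant-error guard used in _handle_dbapi_exception.

    Class-level defaults cost nothing until an error occurs; `del` in the
    `finally` block restores the default afterwards.
    """
    _reentrant_error = False
    _is_disconnect = False

    def __init__(self):
        self.invalidated = False

    def _handle_error(self, exc, is_disconnect):
        # Latch the disconnect condition the first time through, so a
        # second error raised *during* error handling cannot clear it.
        if not self._is_disconnect:
            self._is_disconnect = is_disconnect
        if self._reentrant_error:
            raise exc  # already inside error handling; don't recurse
        self._reentrant_error = True
        try:
            raise exc
        finally:
            del self._reentrant_error
            if self._is_disconnect:
                del self._is_disconnect
                # stands in for invalidate() / engine.dispose()
                self.invalidated = True


conn = Connection()
try:
    conn._handle_error(RuntimeError("lost connection"), is_disconnect=True)
except RuntimeError:
    pass
print(conn.invalidated)  # True
```

Because the flags live on the class, `del self._flag` simply re-exposes the class default, avoiding per-instance bookkeeping on the happy path.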

File lib/sqlalchemy/engine/result.py

             # this check isn't currently available if the row
             # was unpickled.
             if result is not None and \
-                result[1] is not None:
+                 result[1] is not None:
                 for obj in result[1]:
                     if key._compare_name_for_result(obj):
                         break

File lib/sqlalchemy/ext/declarative/api.py

     """Given a class, return True if any of the classes it inherits from has a
     mapped table, otherwise return False.
     """
-    for class_ in cls.__mro__:
+    for class_ in cls.__mro__[1:]:
         if getattr(class_, '__table__', None) is not None:
             return True
     return False
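The one-character fix above (`__mro__[1:]`) is what makes `has_inherited_table` ignore the current class's own `__table__` and consider only superclasses. A self-contained sketch of the patched logic (the `object()` sentinel stands in for a mapped `Table`):

```python
def has_inherited_table(cls):
    # __mro__[0] is cls itself; skip it so only superclasses count.
    for class_ in cls.__mro__[1:]:
        if getattr(class_, '__table__', None) is not None:
            return True
    return False

class A:
    __table__ = object()   # stands in for a mapped Table

class B(A):
    __table__ = object()

print(has_inherited_table(A))  # False: A's own table is ignored
print(has_inherited_table(B))  # True: B inherits from A, which has a table
```

This is why the function now returns the same answer no matter when it is called relative to the current class's `__table__` assignment.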

File lib/sqlalchemy/orm/interfaces.py

     def __repr__(self):
         return '<%s at 0x%x; %s>' % (
             self.__class__.__name__,
-            id(self), self.key)
+            id(self), getattr(self, 'key', 'no key'))
 
 class PropComparator(operators.ColumnOperators):
     """Defines boolean, comparison, and other operators for

File lib/sqlalchemy/orm/mapper.py

             for mapper in reversed(list(self.iterate_to_root())):
                 if mapper.local_table in tables:
                     start = True
+                elif not isinstance(mapper.local_table, expression.TableClause):
+                    return None
                 if start and not mapper.single:
                     allconds.append(visitors.cloned_traverse(
                                                 mapper.inherit_condition,

File lib/sqlalchemy/orm/strategies.py

         # produce a subquery from it.
         left_alias = self._generate_from_original_query(
                             orig_query, leftmost_mapper,
-                            leftmost_attr
+                            leftmost_attr, entity.mapper
         )
 
         # generate another Query that will join the
 
     def _generate_from_original_query(self,
             orig_query, leftmost_mapper,
-            leftmost_attr
+            leftmost_attr, entity_mapper
     ):
         # reformat the original query
         # to look only for significant columns
         # TODO: why does polymporphic etc. require hardcoding
         # into _adapt_col_list ?  Does query.add_columns(...) work
         # with polymorphic loading ?
+        if entity_mapper.isa(leftmost_mapper):
+            q._set_select_from(entity_mapper)
         q._set_entities(q._adapt_col_list(leftmost_attr))
 
         if q._order_by is False:
 
         # the original query now becomes a subquery
         # which we'll join onto.
+
         embed_q = q.with_labels().subquery()
         left_alias = orm_util.AliasedClass(leftmost_mapper, embed_q,
                             use_mapper_path=True)

File lib/sqlalchemy/sql/compiler.py

         else:
             self.result_map[keyname] = name, objects, type_
 
-    def _label_select_column(self, select, column, populate_result_map,
+    def _label_select_column(self, select, column,
+                                    populate_result_map,
                                     asfrom, column_clause_args,
+                                    name=None,
                                     within_columns_clause=True):
         """produce labeled columns present in a select()."""
 
         if column.type._has_column_expression and \
-            populate_result_map:
+                populate_result_map:
             col_expr = column.type.column_expression(column)
             add_to_result_map = lambda keyname, name, objects, type_: \
                                 self._add_to_result_map(
             else:
                 result_expr = col_expr
 
-        elif select is not None and \
-                select.use_labels and \
-                column._label:
+        elif select is not None and name:
             result_expr = _CompileLabel(
                     col_expr,
-                    column._label,
-                    alt_names=(column._key_label, )
+                    name,
+                    alt_names=(column._key_label,)
                 )
 
         elif \
             isinstance(column, sql.ColumnClause) and \
             not column.is_literal and \
             column.table is not None and \
-            not isinstance(column.table, sql.Select):
+                not isinstance(column.table, sql.Select):
             result_expr = _CompileLabel(col_expr,
                                     sql._as_truncated(column.name),
                                     alt_names=(column.key,))
         # correlate_froms.union(existingfroms)
 
         populate_result_map = force_result_map or (
-                                compound_index == 0 and (
-                                    not entry or \
-                                    entry.get('iswrapper', False)
-                                )
-                            )
+                                        compound_index == 0 and (
+                                            not entry or \
+                                            entry.get('iswrapper', False)
+                                        )
+                                    )
 
         self.stack.append({'from': correlate_froms,
                             'iswrapper': iswrapper})
         # the actual list of columns to print in the SELECT column list.
         inner_columns = [
             c for c in [
-                self._label_select_column(select, column,
+                self._label_select_column(select,
+                                    column,
                                     populate_result_map, asfrom,
-                                    column_clause_args)
-                for column in util.unique_list(select.inner_columns)
+                                    column_clause_args,
+                                    name=name)
+                for name, column in select._columns_plus_names
                 ]
             if c is not None
         ]

File lib/sqlalchemy/sql/expression.py

         fromclause = _interpret_as_from(fromclause)
         self._from_obj = self._from_obj.union([fromclause])
 
+
+    @_memoized_property
+    def _columns_plus_names(self):
+        if self.use_labels:
+            names = set()
+            def name_for_col(c):
+                if c._label is None:
+                    return (None, c)
+                name = c._label
+                if name in names:
+                    name = c.anon_label
+                else:
+                    names.add(name)
+                return name, c
+
+            return [
+                name_for_col(c)
+                for c in util.unique_list(_select_iterables(self._raw_columns))
+            ]
+        else:
+            return [
+                (None, c)
+                for c in util.unique_list(_select_iterables(self._raw_columns))
+            ]
+
     def _populate_column_collection(self):
-        for c in self.inner_columns:
-            if hasattr(c, '_make_proxy'):
-                c._make_proxy(self,
-                        name=c._label if self.use_labels else None,
-                        key=c._key_label if self.use_labels else None,
-                        name_is_truncatable=True)
+        for name, c in self._columns_plus_names:
+            if not hasattr(c, '_make_proxy'):
+                continue
+            if name is None:
+                key = None
+            elif self.use_labels:
+                key = c._key_label
+                if key is not None and key in self.c:
+                    key = c.anon_label
+            else:
+                key = None
+
+            c._make_proxy(self, key=key,
+                    name=name,
+                    name_is_truncatable=True)
 
     def _refresh_for_new_column(self, column):
         for fromclause in self._froms:
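The naming logic in `_columns_plus_names` can be illustrated without SQLAlchemy: labels are built as `<tablename>_<columnname>`, and when two columns collapse to the same label (the ticket's `foo.bar_id` vs `foo_bar.id` case, both yielding `foo_bar_id`), the duplicate gets an anonymous label instead of silently colliding. A stdlib-only sketch over plain `(table, column)` tuples; the anon-label format here merely stands in for `c.anon_label`:

```python
def labeled_columns(columns):
    """columns: iterable of (table_name, column_name) pairs.
    Returns (label, column) pairs; a duplicate label is replaced
    with an anonymous one so each column stays addressable."""
    names = set()
    out = []
    anon = 0
    for table, col in columns:
        label = '%s_%s' % (table, col)
        if label in names:
            anon += 1
            label = '%%(%d anon)s' % anon   # stand-in for c.anon_label
        else:
            names.add(label)
        out.append((label, (table, col)))
    return out

cols = [('foo', 'bar_id'), ('foo_bar', 'id')]
print(labeled_columns(cols))
```

With the anonymous label, the ORM can target each column independently instead of emitting a second SELECT (0.7) or an ambiguous-column error (0.8).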

File lib/sqlalchemy/types.py

         return process
 
     # Python 3 has native bytes() type
-    # both sqlite3 and pg8000 seem to return it
-    # (i.e. and not 'memoryview')
+    # both sqlite3 and pg8000 seem to return it,
+    # psycopg2 as of 2.5 returns 'memoryview'
+    # Py3K
+    #def result_processor(self, dialect, coltype):
+    #    def process(value):
+    #        if value is not None:
+    #            value = bytes(value)
+    #        return value
+    #    return process
     # Py2K
     def result_processor(self, dialect, coltype):
         if util.jython:
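The new Py3K result processor exists because different DBAPIs return binary data as different types: sqlite3 and pg8000 return `bytes`, while psycopg2 2.5 on Python 3.3 returns `memoryview`. Coercing through the `bytes()` callable normalizes both. A standalone sketch of the conversion (the function name is illustrative):

```python
def binary_result_processor(value):
    # Normalize whatever the DBAPI hands back (memoryview from
    # psycopg2 2.5, bytes from sqlite3/pg8000) to plain bytes.
    if value is not None:
        value = bytes(value)
    return value

raw = memoryview(b"\x00\x01\x02")      # what psycopg2 2.5 may return
print(binary_result_processor(raw))    # b'\x00\x01\x02'
print(binary_result_processor(None))   # None (NULL passes through)
```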

File lib/sqlalchemy/util/_collections.py

 ordered_column_set = OrderedSet
 populate_column_dict = PopulateDict
 
-
 def unique_list(seq, hashfunc=None):
     seen = {}
     if not hashfunc:
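`unique_list` is the order-preserving de-duplication helper now driving the select() column iteration above. A self-contained version matching the signature shown (the `seen.__setitem__` trick records membership inside the comprehension, since `__setitem__` returns `None`):

```python
def unique_list(seq, hashfunc=None):
    """Order-preserving de-duplication, optionally keyed by hashfunc."""
    seen = {}
    if not hashfunc:
        return [x for x in seq
                if x not in seen
                and not seen.__setitem__(x, True)]
    return [x for x in seq
            if hashfunc(x) not in seen
            and not seen.__setitem__(hashfunc(x), True)]

print(unique_list([3, 1, 3, 2, 1]))                  # [3, 1, 2]
print(unique_list(['a', 'A'], hashfunc=str.lower))   # ['a']
```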

File test/aaa_profiling/test_compiler.py

         def go():
             s = select([t1], t1.c.c2 == t2.c.c1)
             s.compile(dialect=self.dialect)
+        go()
+
+    def test_select_labels(self):
+        # give some of the cached type values
+        # a chance to warm up
+        s = select([t1], t1.c.c2 == t2.c.c1).apply_labels()
+        s.compile(dialect=self.dialect)
+
+        @profiling.function_call_count()
+        def go():
+            s = select([t1], t1.c.c2 == t2.c.c1).apply_labels()
+            s.compile(dialect=self.dialect)
         go()

File test/engine/test_reconnect.py

 from sqlalchemy.testing import fixtures
 from sqlalchemy.testing.engines import testing_engine
 
-class MockDisconnect(Exception):
+class MockError(Exception):
+    pass
+
+class MockDisconnect(MockError):
     pass
 
 class MockDBAPI(object):
         self.connections = weakref.WeakKeyDictionary()
     def connect(self, *args, **kwargs):
         return MockConnection(self)
-    def shutdown(self):
+    def shutdown(self, explode='execute'):
         for c in self.connections:
-            c.explode[0] = True
-    Error = MockDisconnect
+            c.explode = explode
+    Error = MockError
 
 class MockConnection(object):
     def __init__(self, dbapi):
         dbapi.connections[self] = True
-        self.explode = [False]
+        self.explode = ""
     def rollback(self):
-        pass
+        if self.explode == 'rollback':
+            raise MockDisconnect("Lost the DB connection on rollback")
+        if self.explode == 'rollback_no_disconnect':
+            raise MockError(
+                "something broke on rollback but we didn't lose the connection")
+        else:
+            return
     def commit(self):
         pass
     def cursor(self):
         self.explode = parent.explode
         self.description = ()
     def execute(self, *args, **kwargs):
-        if self.explode[0]:
-            raise MockDisconnect("Lost the DB connection")
+        if self.explode == 'execute':
+            raise MockDisconnect("Lost the DB connection on execute")
+        elif self.explode in ('execute_no_disconnect', ):
+            raise MockError(
+                "something broke on execute but we didn't lose the connection")
+        elif self.explode in ('rollback', 'rollback_no_disconnect'):
+            raise MockError(
+                "something broke on execute but we didn't lose the connection")
         else:
             return
     def close(self):
 
         dbapi.shutdown()
 
-        # raises error
-        try:
-            conn.execute(select([1]))
-            assert False
-        except tsa.exc.DBAPIError:
-            pass
+        assert_raises(
+            tsa.exc.DBAPIError,
+            conn.execute, select([1])
+        )
 
         assert not conn.closed
         assert conn.invalidated
         assert not conn.invalidated
         assert len(dbapi.connections) == 1
 
+    def test_invalidated_close(self):
+        conn = db.connect()
+
+        dbapi.shutdown()
+
+        assert_raises(
+            tsa.exc.DBAPIError,
+            conn.execute, select([1])
+        )
+
+        conn.close()
+        assert conn.closed
+        assert conn.invalidated
+        assert_raises_message(
+            tsa.exc.StatementError,
+            "This Connection is closed",
+            conn.execute, select([1])
+        )
+
+    def test_noreconnect_execute_plus_closewresult(self):
+        conn = db.connect(close_with_result=True)
+
+        dbapi.shutdown("execute_no_disconnect")
+
+        # raises error
+        assert_raises_message(
+            tsa.exc.DBAPIError,
+            "something broke on execute but we didn't lose the connection",
+            conn.execute, select([1])
+        )
+
+        assert conn.closed
+        assert not conn.invalidated
+
+    def test_noreconnect_rollback_plus_closewresult(self):
+        conn = db.connect(close_with_result=True)
+
+        dbapi.shutdown("rollback_no_disconnect")
+
+        # raises error
+        assert_raises_message(
+            tsa.exc.DBAPIError,
+            "something broke on rollback but we didn't lose the connection",
+            conn.execute, select([1])
+        )
+
+        assert conn.closed
+        assert not conn.invalidated
+
+        assert_raises_message(
+            tsa.exc.StatementError,
+            "This Connection is closed",
+            conn.execute, select([1])
+        )
+
+    def test_reconnect_on_reentrant(self):
+        conn = db.connect()
+
+        conn.execute(select([1]))
+
+        assert len(dbapi.connections) == 1
+
+        dbapi.shutdown("rollback")
+
+        # raises error
+        assert_raises_message(
+            tsa.exc.DBAPIError,
+            "Lost the DB connection on rollback",
+            conn.execute, select([1])
+        )
+
+        assert not conn.closed
+        assert conn.invalidated
+
+    def test_reconnect_on_reentrant_plus_closewresult(self):
+        conn = db.connect(close_with_result=True)
+
+        dbapi.shutdown("rollback")
+
+        # raises error
+        assert_raises_message(
+            tsa.exc.DBAPIError,
+            "Lost the DB connection on rollback",
+            conn.execute, select([1])
+        )
+
+        assert conn.closed
+        assert conn.invalidated
+
+        assert_raises_message(
+            tsa.exc.StatementError,
+            "This Connection is closed",
+            conn.execute, select([1])
+        )
+
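The reworked mock above replaces a boolean "exploded" flag with a string selecting *which* DBAPI method fails and *how*, so the new tests can inject a disconnect (or a non-disconnect error) during execute or during rollback. A stripped-down, stdlib-only sketch of that fault-injection idea (class names mirror the test but this is not the test harness itself):

```python
class MockError(Exception):
    pass

class MockConnection:
    """Fault-injection stub: `explode` names the failure mode,
    mirroring the shutdown(explode=...) switch in the test dbapi."""
    def __init__(self):
        self.explode = ""          # empty string: no failure

    def execute(self):
        if self.explode == 'execute':
            raise MockError("Lost the DB connection on execute")
        return "ok"

    def rollback(self):
        if self.explode == 'rollback':
            raise MockError("Lost the DB connection on rollback")

conn = MockConnection()
print(conn.execute())        # "ok"
conn.explode = 'execute'     # simulate a dropped connection
try:
    conn.execute()
except MockError as e:
    print(e)                 # Lost the DB connection on execute
```

Parameterizing the failure mode is what lets a single mock exercise both the plain disconnect path and the trickier reentrant case where the disconnect only surfaces during rollback inside error handling.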
 class CursorErrTest(fixtures.TestBase):
 
     def setup(self):

File test/ext/declarative/test_inheritance.py

     Session
 from sqlalchemy.testing import eq_
 from sqlalchemy.util import classproperty
-from sqlalchemy.ext.declarative import declared_attr, AbstractConcreteBase, ConcreteBase
+from sqlalchemy.ext.declarative import declared_attr, AbstractConcreteBase, \
+            ConcreteBase, has_inherited_table
 from sqlalchemy.testing import fixtures
 
 Base = None
                             'concrete':True}
         self._roundtrip(Employee, Manager, Engineer, Boss)
 
+
+    def test_has_inherited_table_doesnt_consider_base(self):
+        class A(Base):
+            __tablename__ = 'a'
+            id = Column(Integer, primary_key=True)
+
+        assert not has_inherited_table(A)
+
+        class B(A):
+            __tablename__ = 'b'
+            id = Column(Integer, ForeignKey('a.id'), primary_key=True)
+
+        assert has_inherited_table(B)
+
+    def test_has_inherited_table_in_mapper_args(self):
+        class Test(Base):
+            __tablename__ = 'test'
+            id = Column(Integer, primary_key=True)
+            type = Column(String(20))
+
+            @declared_attr
+            def __mapper_args__(cls):
+                if not has_inherited_table(cls):
+                    ret = {
+                        'polymorphic_identity': 'default',
+                        'polymorphic_on': cls.type,
+                        }
+                else:
+                    ret = {'polymorphic_identity': cls.__name__}
+                return ret
+
+        class PolyTest(Test):
+            __tablename__ = 'poly_test'
+            id = Column(Integer, ForeignKey(Test.id), primary_key=True)
+
+        configure_mappers()
+
+        assert Test.__mapper__.polymorphic_on is Test.__table__.c.type
+        assert PolyTest.__mapper__.polymorphic_on is Test.__table__.c.type
+
     def test_ok_to_override_type_from_abstract(self):
         class Employee(AbstractConcreteBase, Base, fixtures.ComparableEntity):
             pass

File test/orm/inheritance/test_basic.py

             Column('b', String(10))
         )
 
+    def test_no_optimize_on_map_to_join(self):
+        base, sub = self.tables.base, self.tables.sub
+
+        class Base(fixtures.ComparableEntity):
+            pass
+
+        class JoinBase(fixtures.ComparableEntity):
+            pass
+        class SubJoinBase(JoinBase):
+            pass
+
+        mapper(Base, base)
+        mapper(JoinBase, base.outerjoin(sub), properties={
+                'id': [base.c.id, sub.c.id],
+                'counter': [base.c.counter, sub.c.counter]
+            })
+        mapper(SubJoinBase, inherits=JoinBase)
+
+        sess = Session()
+        sess.add(Base(data='data'))
+        sess.commit()
+
+        sjb = sess.query(SubJoinBase).one()
+        sjb_id = sjb.id
+        sess.expire(sjb)
+
+        # this should not use the optimized load,
+        # which assumes discrete tables
+        def go():
+            eq_(sjb.data, 'data')
+
+        self.assert_sql_execution(
+            testing.db,
+            go,
+            CompiledSQL(
+                "SELECT base.counter AS base_counter, "
+                "sub.counter AS sub_counter, base.id AS base_id, "
+                "sub.id AS sub_id, base.data AS base_data, "
+                "base.type AS base_type, sub.sub AS sub_sub, "
+                "sub.counter2 AS sub_counter2 FROM base "
+                "LEFT OUTER JOIN sub ON base.id = sub.id "
+                "WHERE base.id = :param_1",
+                {'param_1': sjb_id}
+            ),
+        )
+
+
     def test_optimized_passes(self):
         """"test that the 'optimized load' routine doesn't crash when
         a column in the join condition is not available."""
             pass
         mapper(Base, base, polymorphic_on=base.c.type, polymorphic_identity='base')
         mapper(Sub, sub, inherits=Base, polymorphic_identity='sub', properties={
-            'concat':column_property(sub.c.sub + "|" + sub.c.sub)
+            'concat': column_property(sub.c.sub + "|" + sub.c.sub)
         })
         sess = sessionmaker()()
         s1 = Sub(data='s1data', sub='s1sub')
             pass
         mapper(Base, base, polymorphic_on=base.c.type, polymorphic_identity='base')
         mapper(Sub, sub, inherits=Base, polymorphic_identity='sub', properties={
-            'concat':column_property(base.c.data + "|" + sub.c.sub)
+            'concat': column_property(base.c.data + "|" + sub.c.sub)
         })
         sess = sessionmaker()()
         s1 = Sub(data='s1data', sub='s1sub')

File test/orm/inheritance/test_polymorphic_rel.py

         count = 5
         self.assert_sql_count(testing.db, go, count)
 
+
     def test_joinedload_on_subclass(self):
         sess = create_session()
         expected = [

File test/orm/test_froms.py

                 filter(Sub1.id==1).one(),
                 b1
         )
+
+class LabelCollideTest(fixtures.MappedTest):
+    """Test handling for a label collision.  This collision
+    is handled by core, see ticket:2702 as well as
+    test/sql/test_selectable->WithLabelsTest.  Here we want
+    to make sure the end result is as we expect.
+
+    """
+
+    @classmethod
+    def define_tables(cls, metadata):
+        Table('foo', metadata,
+                Column('id', Integer, primary_key=True),
+                Column('bar_id', Integer)
+        )
+        Table('foo_bar', metadata,
+                Column('id', Integer, primary_key=True),
+                )
+
+    @classmethod
+    def setup_classes(cls):
+        class Foo(cls.Basic):
+            pass
+        class Bar(cls.Basic):
+            pass
+
+    @classmethod
+    def setup_mappers(cls):
+        mapper(cls.classes.Foo, cls.tables.foo)
+        mapper(cls.classes.Bar, cls.tables.foo_bar)
+
+    @classmethod
+    def insert_data(cls):
+        s = Session()
+        s.add_all([
+            cls.classes.Foo(id=1, bar_id=2),
+            cls.classes.Bar(id=3)
+        ])
+        s.commit()
+
+    def test_overlap_plain(self):
+        s = Session()
+        row = s.query(self.classes.Foo, self.classes.Bar).all()[0]
+        def go():
+            eq_(row.Foo.id, 1)
+            eq_(row.Foo.bar_id, 2)
+            eq_(row.Bar.id, 3)
+        # all three columns are loaded independently without
+        # overlap, no additional SQL to load all attributes
+        self.assert_sql_count(testing.db, go, 0)
+
+    def test_overlap_subquery(self):
+        s = Session()
+        row = s.query(self.classes.Foo, self.classes.Bar).from_self().all()[0]
+        def go():
+            eq_(row.Foo.id, 1)
+            eq_(row.Foo.bar_id, 2)
+            eq_(row.Bar.id, 3)
+        # all three columns are loaded independently without
+        # overlap, no additional SQL to load all attributes
+        self.assert_sql_count(testing.db, go, 0)
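The collision this class tests (`foo.bar_id` and `foo_bar.id` both label as `foo_bar_id` under `apply_labels()`) follows a rule that can be sketched in plain Python. This is an illustrative model of the behavior the assertions here and in `WithLabelsTest` expect, not SQLAlchemy's actual implementation:

```python
def disambiguate_labels(columns):
    """Sketch of the label-collision rule from ticket 2702.

    columns: list of (table, name) pairs.  Each column is labeled
    '<table>_<name>'; when two labels collide, later columns fall
    back to an anonymized '<name>_<n>' label.  Illustrative only.
    """
    seen = set()
    counter = {}
    out = []
    for table, name in columns:
        label = "%s_%s" % (table, name)
        if label in seen:
            # collision: fall back to a numbered, name-based label
            n = counter.get(name, 0) + 1
            counter[name] = n
            label = "%s_%d" % (name, n)
        seen.add(label)
        out.append(label)
    return out
```

Against this fixture's columns, `disambiguate_labels([("foo", "id"), ("foo", "bar_id"), ("foo_bar", "id")])` yields `["foo_id", "foo_bar_id", "id_1"]`: the third column's natural label collides with the second's, so it gets the anonymized `id_1` key, matching the `'id_1'` result keys asserted in `WithLabelsTest`.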

File test/orm/test_joins.py

 
         sess = create_session()
         eq_(sess.query(Node).filter(Node.children.any(Node.data == 'n3'
-            )).all(), [Node(data='n1'), Node(data='n2')])
+            )).order_by(Node.data).all(), [Node(data='n1'), Node(data='n2')])
 
     def test_contains(self):
         Node = self.classes.Node

File test/orm/test_subquery_relations.py

             ])
         self.assert_sql_count(testing.db, go, 2)
 
+
+from .inheritance._poly_fixtures import _Polymorphic, Person, Engineer, Paperwork
+
+class BaseRelationFromJoinedSubclassTest(_Polymorphic):
+    @classmethod
+    def define_tables(cls, metadata):
+        people = Table('people', metadata,
+            Column('person_id', Integer,
+                primary_key=True,
+                test_needs_autoincrement=True),
+            Column('name', String(50)),
+            Column('type', String(30)))
+
+        # to test fully, PK of engineers table must be
+        # named differently from that of people
+        engineers = Table('engineers', metadata,
+            Column('engineer_id', Integer,
+                ForeignKey('people.person_id'),
+                primary_key=True),
+            Column('primary_language', String(50)))
+
+        paperwork = Table('paperwork', metadata,
+            Column('paperwork_id', Integer,
+                primary_key=True,
+                test_needs_autoincrement=True),
+            Column('description', String(50)),
+            Column('person_id', Integer,
+                ForeignKey('people.person_id')))
+
+    @classmethod
+    def setup_mappers(cls):
+        people = cls.tables.people
+        engineers = cls.tables.engineers
+        paperwork = cls.tables.paperwork
+
+        mapper(Person, people,
+            polymorphic_on=people.c.type,
+            polymorphic_identity='person',
+            properties={
+                'paperwork': relationship(
+                    Paperwork, order_by=paperwork.c.paperwork_id)})
+
+        mapper(Engineer, engineers,
+            inherits=Person,
+            polymorphic_identity='engineer')
+
+        mapper(Paperwork, paperwork)
+
+    @classmethod
+    def insert_data(cls):
+
+        e1 = Engineer(primary_language="java")
+        e2 = Engineer(primary_language="c++")
+        e1.paperwork = [Paperwork(description="tps report #1"),
+                        Paperwork(description="tps report #2")]
+        e2.paperwork = [Paperwork(description="tps report #3")]
+        sess = create_session()
+        sess.add_all([e1, e2])
+        sess.flush()
+
+    def test_correct_subquery(self):
+        sess = create_session()
+        # use Person.paperwork here just to give the least
+        # amount of context
+        q = sess.query(Engineer).\
+                filter(Engineer.primary_language == 'java').\
+                options(subqueryload(Person.paperwork))
+        def go():
+            eq_(q.all()[0].paperwork,
+                    [Paperwork(description="tps report #1"),
+                    Paperwork(description="tps report #2")],
+
+                )
+        self.assert_sql_execution(
+                testing.db,
+                go,
+                CompiledSQL(
+                    "SELECT people.person_id AS people_person_id, "
+                    "people.name AS people_name, people.type AS people_type, "
+                    "engineers.engineer_id AS engineers_engineer_id, "
+                    "engineers.primary_language AS engineers_primary_language "
+                    "FROM people JOIN engineers ON "
+                    "people.person_id = engineers.engineer_id "
+                    "WHERE engineers.primary_language = :primary_language_1",
+                    {"primary_language_1": "java"}
+                ),
+                # ensure we get "people JOIN engineer" here, even though
+                # primary key "people.person_id" is against "Person"
+                # *and* the path comes out as "Person.paperwork", still
+                # want to select from "Engineer" entity
+                CompiledSQL(
+                    "SELECT paperwork.paperwork_id AS paperwork_paperwork_id, "
+                    "paperwork.description AS paperwork_description, "
+                    "paperwork.person_id AS paperwork_person_id, "
+                    "anon_1.people_person_id AS anon_1_people_person_id "
+                    "FROM (SELECT people.person_id AS people_person_id "
+                        "FROM people JOIN engineers "
+                        "ON people.person_id = engineers.engineer_id "
+                        "WHERE engineers.primary_language = "
+                            ":primary_language_1) AS anon_1 "
+                    "JOIN paperwork "
+                        "ON anon_1.people_person_id = paperwork.person_id "
+                    "ORDER BY anon_1.people_person_id, paperwork.paperwork_id",
+                    {"primary_language_1": "java"}
+                )
+        )
+
+
+
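The SQL shape asserted above can be illustrated with a plain-`sqlite3` sketch of why the inner subquery must retain the JOIN to `engineers`. The schema and data below are a hypothetical mirror of the test fixtures, not the ORM's actual output:

```python
import sqlite3

# Minimal schema mirroring the people/engineers/paperwork fixtures.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (person_id INTEGER PRIMARY KEY, type TEXT);
    CREATE TABLE engineers (
        engineer_id INTEGER PRIMARY KEY REFERENCES people(person_id),
        primary_language TEXT);
    CREATE TABLE paperwork (
        paperwork_id INTEGER PRIMARY KEY,
        description TEXT,
        person_id INTEGER REFERENCES people(person_id));
    INSERT INTO people VALUES (1, 'engineer'), (2, 'engineer');
    INSERT INTO engineers VALUES (1, 'java'), (2, 'c++');
    INSERT INTO paperwork VALUES (1, 'tps report #1', 1),
                                 (2, 'tps report #2', 1),
                                 (3, 'tps report #3', 2);
""")

# Correct shape: the wrapped inner query keeps the JOIN to engineers,
# so only the java engineer's paperwork is loaded.
correct = conn.execute("""
    SELECT paperwork.description FROM
      (SELECT people.person_id AS person_id FROM people
       JOIN engineers ON people.person_id = engineers.engineer_id
       WHERE engineers.primary_language = 'java') AS anon_1
    JOIN paperwork ON anon_1.person_id = paperwork.person_id
    ORDER BY paperwork.paperwork_id
""").fetchall()

# Without the JOIN inside the subquery (the pre-fix behavior described
# by ticket 2699), every person's paperwork comes back as extra rows.
broken = conn.execute("""
    SELECT paperwork.description FROM
      (SELECT people.person_id AS person_id FROM people) AS anon_1
    JOIN paperwork ON anon_1.person_id = paperwork.person_id
    ORDER BY paperwork.paperwork_id
""").fetchall()
```

The correct query returns only the two "java" reports; the broken one also pulls in `tps report #3`, which is the surplus-row behavior the changelog describes as a performance degradation rather than a correctness failure.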
 class SelfReferentialTest(fixtures.MappedTest):
     @classmethod
     def define_tables(cls, metadata):

File test/profiles.txt

 test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_cextensions 135
 test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_nocextensions 135
 
+# TEST: test.aaa_profiling.test_compiler.CompileTest.test_select_labels
+
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_sqlite_pysqlite_nocextensions 177
+
 # TEST: test.aaa_profiling.test_compiler.CompileTest.test_update
 
 test.aaa_profiling.test_compiler.CompileTest.test_update 2.5_sqlite_pysqlite_nocextensions 65

File test/sql/test_query.py

             lambda: row[u2.c.user_id]
         )
 
+    def test_ambiguous_column_contains(self):
+        # ticket 2702.  in 0.7 we'd get True, False.
+        # in 0.8, both columns are present so it's True;
+        # but when they're fetched you'll get the ambiguous error.
+        users.insert().execute(user_id=1, user_name='john')
+        result = select([
+                    users.c.user_id,
+                    addresses.c.user_id]).\
+                    select_from(users.outerjoin(addresses)).execute()
+        row = result.first()
+
+        eq_(
+            set([users.c.user_id in row, addresses.c.user_id in row]),
+            set([True])
+        )
+
     def test_ambiguous_column_by_col_plus_label(self):
         users.insert().execute(user_id=1, user_name='john')
         result = select([users.c.user_id,

File test/sql/test_selectable.py

         comp2 = c2.comparator
 
         assert (c2 == 5).left._annotations == {"foo": "bar", "bat": "hoho"}
+
+class WithLabelsTest(fixtures.TestBase):
+    def _assert_labels_warning(self, s):
+        assert_raises_message(
+            exc.SAWarning,
+            "replaced by another column with the same key",
+            lambda: s.c
+        )
+
+    def _assert_result_keys(self, s, keys):
+        compiled = s.compile()
+        eq_(set(compiled.result_map), set(keys))
+
+    def _assert_subq_result_keys(self, s, keys):
+        compiled = s.select().compile()
+        eq_(set(compiled.result_map), set(keys))
+
+    def _names_overlap(self):
+        m = MetaData()
+        t1 = Table('t1', m, Column('x', Integer))
+        t2 = Table('t2', m, Column('x', Integer))
+        return select([t1, t2])
+
+    def test_names_overlap_nolabel(self):
+        sel = self._names_overlap()
+        self._assert_labels_warning(sel)
+        self._assert_result_keys(sel, ['x'])
+
+    def test_names_overlap_label(self):
+        sel = self._names_overlap().apply_labels()
+        eq_(
+            sel.c.keys(),
+            ['t1_x', 't2_x']
+        )
+        self._assert_result_keys(sel, ['t1_x', 't2_x'])
+
+    def _names_overlap_keys_dont(self):
+        m = MetaData()
+        t1 = Table('t1', m, Column('x', Integer, key='a'))
+        t2 = Table('t2', m, Column('x', Integer, key='b'))
+        return select([t1, t2])
+
+    def test_names_overlap_keys_dont_nolabel(self):
+        sel = self._names_overlap_keys_dont()
+        eq_(
+            sel.c.keys(),
+            ['a', 'b']
+        )
+        self._assert_result_keys(sel, ['x'])
+
+    def test_names_overlap_keys_dont_label(self):
+        sel = self._names_overlap_keys_dont().apply_labels()
+        eq_(
+            sel.c.keys(),
+            ['t1_a', 't2_b']
+        )
+        self._assert_result_keys(sel, ['t1_x', 't2_x'])
+
+    def _labels_overlap(self):
+        m = MetaData()
+        t1 = Table('t', m, Column('x_id', Integer))
+        t2 = Table('t_x', m, Column('id', Integer))
+        return select([t1, t2])
+
+    def test_labels_overlap_nolabel(self):
+        sel = self._labels_overlap()
+        eq_(
+            sel.c.keys(),
+            ['x_id', 'id']
+        )
+        self._assert_result_keys(sel, ['x_id', 'id'])
+
+    def test_labels_overlap_label(self):
+        sel = self._labels_overlap().apply_labels()
+        t2 = sel.froms[1]
+        eq_(
+            sel.c.keys(),
+            ['t_x_id', t2.c.id.anon_label]
+        )
+        self._assert_result_keys(sel, ['t_x_id', 'id_1'])
+        self._assert_subq_result_keys(sel, ['t_x_id', 'id_1'])
+
+    def _labels_overlap_keylabels_dont(self):
+        m = MetaData()
+        t1 = Table('t', m, Column('x_id', Integer, key='a'))
+        t2 = Table('t_x', m, Column('id', Integer, key='b'))
+        return select([t1, t2])
+
+    def test_labels_overlap_keylabels_dont_nolabel(self):
+        sel = self._labels_overlap_keylabels_dont()
+        eq_(sel.c.keys(), ['a', 'b'])
+        self._assert_result_keys(sel, ['x_id', 'id'])
+
+    def test_labels_overlap_keylabels_dont_label(self):
+        sel = self._labels_overlap_keylabels_dont().apply_labels()
+        eq_(sel.c.keys(), ['t_a', 't_x_b'])
+        self._assert_result_keys(sel, ['t_x_id', 'id_1'])
+
+    def _keylabels_overlap_labels_dont(self):
+        m = MetaData()
+        t1 = Table('t', m, Column('a', Integer, key='x_id'))
+        t2 = Table('t_x', m, Column('b', Integer, key='id'))
+        return select([t1, t2])
+
+    def test_keylabels_overlap_labels_dont_nolabel(self):
+        sel = self._keylabels_overlap_labels_dont()
+        eq_(sel.c.keys(), ['x_id', 'id'])
+        self._assert_result_keys(sel, ['a', 'b'])
+
+    def test_keylabels_overlap_labels_dont_label(self):
+        sel = self._keylabels_overlap_labels_dont().apply_labels()
+        t2 = sel.froms[1]
+        eq_(sel.c.keys(), ['t_x_id', t2.c.id.anon_label])
+        self._assert_result_keys(sel, ['t_a', 't_x_b'])
+        self._assert_subq_result_keys(sel, ['t_a', 't_x_b'])
+
+    def _keylabels_overlap_labels_overlap(self):
+        m = MetaData()
+        t1 = Table('t', m, Column('x_id', Integer, key='x_a'))
+        t2 = Table('t_x', m, Column('id', Integer, key='a'))
+        return select([t1, t2])
+
+    def test_keylabels_overlap_labels_overlap_nolabel(self):
+        sel = self._keylabels_overlap_labels_overlap()
+        eq_(sel.c.keys(), ['x_a', 'a'])
+        self._assert_result_keys(sel, ['x_id', 'id'])
+        self._assert_subq_result_keys(sel, ['x_id', 'id'])
+
+    def test_keylabels_overlap_labels_overlap_label(self):
+        sel = self._keylabels_overlap_labels_overlap().apply_labels()
+        t2 = sel.froms[1]
+        eq_(sel.c.keys(), ['t_x_a', t2.c.a.anon_label])
+        self._assert_result_keys(sel, ['t_x_id', 'id_1'])
+        self._assert_subq_result_keys(sel, ['t_x_id', 'id_1'])
+
+    def _keys_overlap_names_dont(self):
+        m = MetaData()
+        t1 = Table('t1', m, Column('a', Integer, key='x'))
+        t2 = Table('t2', m, Column('b', Integer, key='x'))
+        return select([t1, t2])
+
+    def test_keys_overlap_names_dont_nolabel(self):
+        sel = self._keys_overlap_names_dont()
+        self._assert_labels_warning(sel)
+        self._assert_result_keys(sel, ['a', 'b'])
+
+    def test_keys_overlap_names_dont_label(self):
+        sel = self._keys_overlap_names_dont().apply_labels()
+        eq_(
+            sel.c.keys(),
+            ['t1_x', 't2_x']
+        )
+        self._assert_result_keys(sel, ['t1_a', 't2_b'])
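The name-vs-key split these tests exercise can be summarized with a small sketch for the unlabeled case. This is illustrative only; the warning/replacement behavior for colliding `.c` keys is not modeled:

```python
def split_keys(columns):
    """columns: list of (table, name, key) tuples for an unlabeled
    select().  Models the distinction the tests above assert:
    the .c collection is keyed by each Column's .key, while the
    result map is keyed by the rendered column name.  Illustrative
    sketch only, not SQLAlchemy internals.
    """
    # .c collection keys come straight from Column.key
    c_keys = [key for table, name, key in columns]
    result_keys = []
    for table, name, key in columns:
        # duplicate rendered names collapse to one result-map entry
        if name not in result_keys:
            result_keys.append(name)
    return c_keys, result_keys
```

Mirroring `test_names_overlap_keys_dont_nolabel`: `split_keys([("t1", "x", "a"), ("t2", "x", "b")])` gives `(['a', 'b'], ['x'])`, i.e. distinct collection keys but a single shared result-map entry for the overlapping name.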