Mike Bayer committed ca32d1a Merge

merge default tip

Files changed (15)

doc/build/changelog/changelog_08.rst

     :version: 0.8.0b2
 
     .. change::
+        :tags: orm, bug
+        :tickets: 2635
+
+      The :meth:`.Query.select_from` method can now be used with a
+      :func:`.aliased` construct without it interfering with the entities
+      being selected.   Basically, a statement like this::
+
+        ua = aliased(User)
+        session.query(User.name).select_from(ua).join(User, ua.name > User.name)
+
+      Will maintain the columns clause of the SELECT as coming from the
+      unaliased "user", as specified; the select_from only takes place in the
+      FROM clause::
+
+        SELECT users.name AS users_name FROM users AS users_1
+        JOIN users ON users.name < users_1.name
+
+      Note that this behavior is in contrast
+      to the original, older use case for :meth:`.Query.select_from`, which is that
+      of restating the mapped entity in terms of a different selectable::
+
+        session.query(User.name).\
+          select_from(user_table.select().where(user_table.c.id > 5))
+
+      Which produces::
+
+        SELECT anon_1.name AS anon_1_name FROM (SELECT users.id AS id,
+        users.name AS name FROM users WHERE users.id > :id_1) AS anon_1
+
+      It was the "aliasing" behavior of the latter use case that was
+      getting in the way of the former use case.   The method now
+      specifically considers a SQL expression like
+      :func:`.expression.select` or :func:`.expression.alias`
+      separately from a mapped entity like a :func:`.aliased`
+      construct.
+
+    .. change::
+        :tags: sql, bug
+        :tickets: 2633
+
+      Fixed a regression caused by :ticket:`2410` whereby a
+      :class:`.CheckConstraint` would apply itself back to the
+      original table during a :meth:`.Table.tometadata` operation, as
+      it would parse the SQL expression for a parent table. The
+      operation now copies the given expression to correspond to the
+      new table.
+
+    .. change::
+        :tags: oracle, bug
+        :tickets: 2619
+
+      Fixed table reflection for Oracle when accessing a synonym that refers
+      to a DBLINK remote database; while the syntax has been present in the
+      Oracle dialect for some time, up until now it has never been tested.
+      The syntax has been tested against a sample database linking to itself;
+      however, there is still some uncertainty as to what should be used for
+      the "owner" when querying the remote database for table information.
+      Currently, the value of "username" from user_db_links is used to
+      match the "owner".
+
+    .. change::
+        :tags: orm, feature
+        :tickets: 2601
+
+      Added :meth:`.KeyedTuple._asdict` and :attr:`.KeyedTuple._fields`
+      to the :class:`.KeyedTuple` class to provide some degree of compatibility
+      with the Python standard library ``collections.namedtuple()``.
+
+    .. change::
         :tags: sql, bug
         :tickets: 2631
 

doc/build/core/connections.rst

 SQLAlchemy also allows a dialect to be registered within the current process, bypassing
 the need for separate installation.   Use the ``register()`` function as follows::
 
-    from sqlalchemy.dialects import register
+    from sqlalchemy.dialects import registry
     registry.register("mysql.foodialect", "myapp.dialect", "MyMySQLDialect")
 
 The above will respond to ``create_engine("mysql+foodialect://")`` and load the

doc/build/orm/query.rst

 
 .. autoclass:: sqlalchemy.orm.util.AliasedInsp
 
+.. autoclass:: sqlalchemy.util.KeyedTuple
+    :members: keys, _fields, _asdict
+
 .. autofunction:: join
 
 .. autofunction:: outerjoin

doc/build/orm/tutorial.rst

     fred Fred Flinstone
 
 The tuples returned by :class:`~sqlalchemy.orm.query.Query` are *named*
-tuples, and can be treated much like an ordinary Python object. The names are
+tuples, supplied by the :class:`.KeyedTuple` class, and can be treated much like an
+ordinary Python object. The names are
 the same as the attribute's name for an attribute, and the class name for a
 class:
 

lib/sqlalchemy/dialects/oracle/base.py

 
         if resolve_synonyms:
             actual_name, owner, dblink, synonym = self._resolve_synonym(
-                                                         connection,
-                                                         desired_owner=self.denormalize_name(schema),
-                                                         desired_synonym=self.denormalize_name(table_name)
-                                                   )
+                connection,
+                desired_owner=self.denormalize_name(schema),
+                desired_synonym=self.denormalize_name(table_name)
+            )
         else:
             actual_name, owner, dblink, synonym = None, None, None, None
         if not actual_name:
             actual_name = self.denormalize_name(table_name)
-        if not dblink:
-            dblink = ''
-        if not owner:
+
+        if dblink:
+            # using user_db_links here since all_db_links appears
+            # to have more restricted permissions.
+            # http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_admin005.htm
+            # will need to hear from more users if we are doing
+            # the right thing here.  See [ticket:2619]
+            owner = connection.scalar(
+                            sql.text("SELECT username FROM user_db_links "
+                                    "WHERE db_link=:link"), link=dblink)
+            dblink = "@" + dblink
+        elif not owner:
             owner = self.denormalize_name(schema or self.default_schema_name)
-        return (actual_name, owner, dblink, synonym)
+
+        return (actual_name, owner, dblink or '', synonym)
 
     @reflection.cache
     def get_schema_names(self, connection, **kw):
         else:
             char_length_col = 'data_length'
 
-        c = connection.execute(sql.text(
-                "SELECT column_name, data_type, %(char_length_col)s, data_precision, data_scale, "
-                "nullable, data_default FROM ALL_TAB_COLUMNS%(dblink)s "
-                "WHERE table_name = :table_name AND owner = :owner "
-                "ORDER BY column_id" % {'dblink': dblink, 'char_length_col': char_length_col}),
-                               table_name=table_name, owner=schema)
+        params = {"table_name": table_name}
+        text = "SELECT column_name, data_type, %(char_length_col)s, "\
+                "data_precision, data_scale, "\
+                "nullable, data_default FROM ALL_TAB_COLUMNS%(dblink)s "\
+                "WHERE table_name = :table_name"
+        if schema is not None:
+            params['owner'] = schema
+            text += " AND owner = :owner "
+        text += " ORDER BY column_id"
+        text = text % {'dblink': dblink, 'char_length_col': char_length_col}
+
+        c = connection.execute(sql.text(text), **params)
 
         for row in c:
             (colname, orig_colname, coltype, length, precision, scale, nullable, default) = \
                                           resolve_synonyms, dblink,
                                           info_cache=info_cache)
         indexes = []
-        q = sql.text("""
-        SELECT a.index_name, a.column_name, b.uniqueness
-        FROM ALL_IND_COLUMNS%(dblink)s a,
-        ALL_INDEXES%(dblink)s b
-        WHERE
-            a.index_name = b.index_name
-            AND a.table_owner = b.table_owner
-            AND a.table_name = b.table_name
 
-        AND a.table_name = :table_name
-        AND a.table_owner = :schema
-        ORDER BY a.index_name, a.column_position""" % {'dblink': dblink})
-        rp = connection.execute(q, table_name=self.denormalize_name(table_name),
-                                schema=self.denormalize_name(schema))
+        params = {'table_name': table_name}
+        text = \
+            "SELECT a.index_name, a.column_name, b.uniqueness "\
+            "\nFROM ALL_IND_COLUMNS%(dblink)s a, "\
+            "\nALL_INDEXES%(dblink)s b "\
+            "\nWHERE "\
+            "\na.index_name = b.index_name "\
+            "\nAND a.table_owner = b.table_owner "\
+            "\nAND a.table_name = b.table_name "\
+            "\nAND a.table_name = :table_name "
+
+        if schema is not None:
+            params['schema'] = schema
+            text += "AND a.table_owner = :schema "
+
+        text += "ORDER BY a.index_name, a.column_position"
+
+        text = text % {'dblink': dblink}
+
+        q = sql.text(text)
+        rp = connection.execute(q, **params)
         indexes = []
         last_index_name = None
         pk_constraint = self.get_pk_constraint(
     def _get_constraint_data(self, connection, table_name, schema=None,
                             dblink='', **kw):
 
-        rp = connection.execute(
-            sql.text("""SELECT
-             ac.constraint_name,
-             ac.constraint_type,
-             loc.column_name AS local_column,
-             rem.table_name AS remote_table,
-             rem.column_name AS remote_column,
-             rem.owner AS remote_owner,
-             loc.position as loc_pos,
-             rem.position as rem_pos
-           FROM all_constraints%(dblink)s ac,
-             all_cons_columns%(dblink)s loc,
-             all_cons_columns%(dblink)s rem
-           WHERE ac.table_name = :table_name
-           AND ac.constraint_type IN ('R','P')
-           AND ac.owner = :owner
-           AND ac.owner = loc.owner
-           AND ac.constraint_name = loc.constraint_name
-           AND ac.r_owner = rem.owner(+)
-           AND ac.r_constraint_name = rem.constraint_name(+)
-           AND (rem.position IS NULL or loc.position=rem.position)
-           ORDER BY ac.constraint_name, loc.position""" % {'dblink': dblink}),
-            table_name=table_name, owner=schema)
+        params = {'table_name': table_name}
+
+        text = \
+            "SELECT"\
+            "\nac.constraint_name,"\
+            "\nac.constraint_type,"\
+            "\nloc.column_name AS local_column,"\
+            "\nrem.table_name AS remote_table,"\
+            "\nrem.column_name AS remote_column,"\
+            "\nrem.owner AS remote_owner,"\
+            "\nloc.position as loc_pos,"\
+            "\nrem.position as rem_pos"\
+            "\nFROM all_constraints%(dblink)s ac,"\
+            "\nall_cons_columns%(dblink)s loc,"\
+            "\nall_cons_columns%(dblink)s rem"\
+            "\nWHERE ac.table_name = :table_name"\
+            "\nAND ac.constraint_type IN ('R','P')"
+
+        if schema is not None:
+            params['owner'] = schema
+            text += "\nAND ac.owner = :owner"
+
+        text += \
+            "\nAND ac.owner = loc.owner"\
+            "\nAND ac.constraint_name = loc.constraint_name"\
+            "\nAND ac.r_owner = rem.owner(+)"\
+            "\nAND ac.r_constraint_name = rem.constraint_name(+)"\
+            "\nAND (rem.position IS NULL or loc.position=rem.position)"\
+            "\nORDER BY ac.constraint_name, loc.position"
+
+        text = text % {'dblink': dblink}
+        rp = connection.execute(sql.text(text), **params)
         constraint_data = rp.fetchall()
         return constraint_data
 
             self._prepare_reflection_args(connection, view_name, schema,
                                           resolve_synonyms, dblink,
                                           info_cache=info_cache)
-        s = sql.text("""
-        SELECT text FROM all_views
-        WHERE owner = :schema
-        AND view_name = :view_name
-        """)
-        rp = connection.execute(s,
-                                view_name=view_name, schema=schema).scalar()
+
+        params = {'view_name': view_name}
+        text = "SELECT text FROM all_views WHERE view_name=:view_name"
+
+        if schema is not None:
+            text += " AND owner = :schema"
+            params['schema'] = schema
+
+        rp = connection.execute(sql.text(text), **params).scalar()
         if rp:
             return rp.decode(self.encoding)
         else:
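The reflection queries rewritten above all follow one pattern: build the SQL text and its bind-parameter dictionary together, and append the owner filter only when a schema is actually known (necessary for DBLINK reflection, where the remote owner may not be determinable). A minimal pure-Python sketch of that pattern; the helper name and trimmed column list are illustrative, not the dialect's API:

```python
# Sketch of the conditional query-building pattern used in the Oracle
# reflection methods above; names are illustrative, not the dialect's API.
def build_columns_query(table_name, schema=None, dblink=''):
    params = {"table_name": table_name}
    text = ("SELECT column_name, data_type "
            "FROM ALL_TAB_COLUMNS%(dblink)s "
            "WHERE table_name = :table_name" % {"dblink": dblink})
    if schema is not None:
        # only constrain by owner when the caller knows the schema
        params["owner"] = schema
        text += " AND owner = :owner"
    text += " ORDER BY column_id"
    return text, params

text, params = build_columns_query("MY_TABLE", dblink="@test_link")
```

The text/params pair would then be handed to `connection.execute(sql.text(text), **params)` as in the diff above.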

lib/sqlalchemy/orm/query.py

                 self._polymorphic_adapters[m.local_table] = adapter
 
     def _set_select_from(self, *obj):
-
         fa = []
+        select_from_alias = None
         for from_obj in obj:
-            if isinstance(from_obj, expression.SelectBase):
-                from_obj = from_obj.alias()
-            fa.append(from_obj)
+            info = inspect(from_obj)
+
+            if hasattr(info, 'mapper') and \
+                (info.is_mapper or info.is_aliased_class):
+                self._select_from_entity = from_obj
+                fa.append(info.selectable)
+            elif not info.is_selectable:
+                raise sa_exc.ArgumentError(
+                        "argument is not a mapped class, mapper, "
+                        "aliased(), or FromClause instance.")
+            else:
+                if isinstance(from_obj, expression.SelectBase):
+                    from_obj = from_obj.alias()
+                select_from_alias = from_obj
+                fa.append(from_obj)
 
         self._from_obj = tuple(fa)
 
         if len(self._from_obj) == 1 and \
-            isinstance(self._from_obj[0], expression.Alias):
+            isinstance(select_from_alias, expression.Alias):
             equivs = self.__all_equivs()
             self._from_obj_alias = sql_util.ColumnAdapter(
                                                 self._from_obj[0], equivs)
         usage of :meth:`~.Query.select_from`.
 
         """
-        obj = []
-        for fo in from_obj:
-            info = inspect(fo)
-            if hasattr(info, 'mapper') and \
-                (info.is_mapper or info.is_aliased_class):
-                self._select_from_entity = fo
-                obj.append(info.selectable)
-            elif not info.is_selectable:
-                raise sa_exc.ArgumentError(
-                            "select_from() accepts FromClause objects only.")
-            else:
-                obj.append(fo)
-
-        self._set_select_from(*obj)
+
+        self._set_select_from(*from_obj)
 
     def __getitem__(self, item):
         if isinstance(item, slice):
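The dispatch that `_set_select_from()` now performs can be summarized in plain Python: a mapped entity (mapper or aliased class) contributes its selectable and is remembered separately, while anything else must already be a FROM-clause-like object. A pure-Python sketch with stand-in objects (not SQLAlchemy's real `inspect()` results):

```python
# Pure-Python sketch of the dispatch _set_select_from() now performs;
# the Info classes are illustrative stand-ins for SQLAlchemy inspect()
# results, not real API.
class EntityInfo(object):
    """Looks like an inspected mapper / aliased class."""
    def __init__(self, selectable):
        self.mapper = object()
        self.is_mapper = True
        self.is_aliased_class = False
        self.is_selectable = False
        self.selectable = selectable

class FromClauseInfo(object):
    """Looks like an inspected plain selectable."""
    is_selectable = True

def classify(info):
    # mirrors the branch structure in _set_select_from()
    if hasattr(info, 'mapper') and (info.is_mapper or info.is_aliased_class):
        return ('entity', info.selectable)
    elif not getattr(info, 'is_selectable', False):
        raise ValueError("argument is not a mapped class, mapper, "
                         "aliased(), or FromClause instance.")
    else:
        return ('from_clause', info)
```

The key behavioral change is that only the plain-selectable branch triggers the aliasing adaption; the entity branch does not.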

lib/sqlalchemy/schema.py

         args = []
         for c in self.columns:
             args.append(c.copy(schema=schema))
-        for c in self.constraints:
-            args.append(c.copy(schema=schema))
         table = Table(
             self.name, metadata, schema=schema,
             *args, **self.kwargs
             )
+        for c in self.constraints:
+            table.append_constraint(c.copy(schema=schema, target_table=table))
+
         for index in self.indexes:
             # skip indexes that would be generated
             # by the 'index' flag on Column
     """
 
     def __init__(self, sqltext, name=None, deferrable=None,
-                    initially=None, table=None, _create_rule=None):
+                    initially=None, table=None, _create_rule=None,
+                    _autoattach=True):
         """Construct a CHECK constraint.
 
         :param sqltext:
         self.sqltext = expression._literal_as_text(sqltext)
         if table is not None:
             self._set_parent_with_dispatch(table)
-        else:
+        elif _autoattach:
             cols = sqlutil.find_columns(self.sqltext)
             tables = set([c.table for c in cols
                         if c.table is not None])
             return "column_check_constraint"
     __visit_name__ = property(__visit_name__)
 
-    def copy(self, **kw):
-        c = CheckConstraint(self.sqltext,
+    def copy(self, target_table=None, **kw):
+        if target_table is not None:
+            def replace(col):
+                if self.table.c.contains_column(col):
+                    return target_table.c[col.key]
+                else:
+                    return None
+            sqltext = visitors.replacement_traverse(self.sqltext, {}, replace)
+        else:
+            sqltext = self.sqltext
+        c = CheckConstraint(sqltext,
                                 name=self.name,
                                 initially=self.initially,
                                 deferrable=self.deferrable,
-                                _create_rule=self._create_rule)
+                                _create_rule=self._create_rule,
+                                table=target_table,
+                                _autoattach=False)
         c.dispatch._update(self.dispatch)
         return c
 
             event.listen(table.metadata, "before_drop",
                          DropConstraint(self, on=supports_alter))
 
-    def copy(self, **kw):
+    def copy(self, schema=None, **kw):
         fkc = ForeignKeyConstraint(
                     [x.parent.name for x in self._elements.values()],
-                    [x._get_colspec(**kw) for x in self._elements.values()],
+                    [x._get_colspec(schema=schema) for x in self._elements.values()],
                     name=self.name,
                     onupdate=self.onupdate,
                     ondelete=self.ondelete,
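The `CheckConstraint.copy()` fix above re-targets the constraint's expression so it no longer points back at the original table. A pure-Python sketch of that re-targeting idea, using tiny illustrative node classes rather than SQLAlchemy's `visitors.replacement_traverse` machinery:

```python
# Pure-Python sketch of the column re-targeting now done in
# CheckConstraint.copy(): walk a tiny expression tree and swap any
# column for the same-keyed column of the target table.  Node classes
# here are illustrative stand-ins, not SQLAlchemy's.
class Col(object):
    def __init__(self, table, key):
        self.table, self.key = table, key

class Tbl(object):
    def __init__(self, name, keys):
        self.name = name
        self.c = dict((k, Col(self, k)) for k in keys)

class Gt(object):
    """A '>' expression node, e.g. tbl.a > 5."""
    def __init__(self, left, right):
        self.left, self.right = left, right

def retarget(expr, target_table):
    """Return a copy of expr with columns re-pointed at target_table."""
    def replace(node):
        if isinstance(node, Col) and node.key in target_table.c:
            return target_table.c[node.key]
        return node
    return Gt(replace(expr.left), replace(expr.right))

t1 = Tbl('tbl', ['a'])
t2 = Tbl('tbl_copy', ['a'])
ck = Gt(t1.c['a'], 5)       # CHECK (tbl.a > 5)
ck2 = retarget(ck, t2)      # CHECK (tbl_copy.a > 5)
```

As in the real fix, the original expression is left untouched; only the copy points at the new table.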

lib/sqlalchemy/testing/plugin/noseplugin.py

 def _post_setup_options(opt, file_config):
     from sqlalchemy.testing import config
     config.options = options
+    config.file_config = file_config
 
 
 @post
 
                 if not check.enabled:
                     raise SkipTest(
-                        "'%s' unsupported on DB implementation '%s'" % (
-                         cls.__name__, config.db.name)
+                        check.reason if check.reason
+                        else
+                        (
+                            "'%s' unsupported on DB implementation '%s'" % (
+                                cls.__name__, config.db.name
+                            )
                         )
+                    )
 
         if cls.__unsupported_on__:
             spec = exclusions.db_spec(*cls.__unsupported_on__)

lib/sqlalchemy/util/_collections.py

 
 
 class KeyedTuple(tuple):
-    """tuple() subclass that adds labeled names.
+    """``tuple`` subclass that adds labeled names.
 
-    Unlike collections.namedtuple, this is
-    an ad-hoc data structure, not a factory
-    for new types.
+    E.g.::
 
-    It's pickleable in-place without the need for stack
-    frame manipulation, new KeyedTuple types can be created
-    very quickly and simply (look at the source
-    to collections.namedtuple for contrast).
+        >>> k = KeyedTuple([1, 2, 3], labels=["one", "two", "three"])
+        >>> k.one
+        1
+        >>> k.two
+        2
 
-    Is used by :class:`.Query` to return result rows.
+    Result rows returned by :class:`.Query` that contain multiple
+    ORM entities and/or column expressions are represented using
+    this class.
+
+    The :class:`.KeyedTuple` exhibits similar behavior to the
+    ``collections.namedtuple()`` construct provided in the Python
+    standard library; however, it is architected very differently.
+    Unlike ``collections.namedtuple()``, :class:`.KeyedTuple`
+    does not rely on the creation of custom subtypes in order to
+    represent a new series of keys; instead, each :class:`.KeyedTuple`
+    instance receives its list of keys in place.   The subtype approach
+    of ``collections.namedtuple()`` introduces significant complexity
+    and performance overhead, which is not necessary for the
+    :class:`.Query` object's use case.
+
+    .. versionchanged:: 0.8
+        Compatibility methods with ``collections.namedtuple()`` have been
+        added including :attr:`.KeyedTuple._fields` and
+        :meth:`.KeyedTuple._asdict`.
+
+    .. seealso::
+
+        :ref:`ormtutorial_querying`
 
     """
 
         return t
 
     def keys(self):
+        """Return a list of string key names for this :class:`.KeyedTuple`.
+
+        .. seealso::
+
+            :attr:`.KeyedTuple._fields`
+
+        """
+
         return [l for l in self._labels if l is not None]
 
     @property
     def _fields(self):
+        """Return a tuple of string key names for this :class:`.KeyedTuple`.
+
+        This method provides compatibility with ``collections.namedtuple()``.
+
+        .. versionadded:: 0.8
+
+        .. seealso::
+
+            :meth:`.KeyedTuple.keys`
+
+        """
         return tuple(self.keys())
 
     def _asdict(self):
+        """Return the contents of this :class:`.KeyedTuple` as a dictionary.
+
+        This method provides compatibility with ``collections.namedtuple()``,
+        with the exception that the dictionary returned is **not** ordered.
+
+        .. versionadded:: 0.8
+
+        """
         return dict((key, self.__dict__[key]) for key in self.keys())
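The docstring's contrast with `collections.namedtuple()` can be made concrete with a minimal pure-Python sketch: labels are attached per instance rather than per type, and the compatibility helpers added in this commit are thin wrappers over `keys()`. The class name here is illustrative:

```python
# Minimal sketch of the KeyedTuple idea: per-instance labels on a
# tuple subclass, plus the namedtuple-compatibility helpers added in
# this commit.  "MiniKeyedTuple" is an illustrative name.
class MiniKeyedTuple(tuple):
    def __new__(cls, vals, labels=None):
        t = tuple.__new__(cls, vals)
        t._labels = labels if labels is not None else []
        # tuple subclasses have a __dict__, so labeled values become
        # ordinary attributes
        t.__dict__.update(
            (l, v) for l, v in zip(t._labels, vals) if l is not None)
        return t

    def keys(self):
        return [l for l in self._labels if l is not None]

    @property
    def _fields(self):
        return tuple(self.keys())

    def _asdict(self):
        return dict((key, self.__dict__[key]) for key in self.keys())

k = MiniKeyedTuple([1, 2], labels=["one", "two"])
```

No new type is created per key combination, which is the performance point the docstring makes against `namedtuple()`.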
 
 

lib/sqlalchemy/util/langhelpers.py

     if to_inspect is None:
         to_inspect = obj
 
+    missing = object()
+
     def genargs():
         try:
             (args, vargs, vkw, defaults) = \
                 yield repr(getattr(obj, arg, None))
             for (arg, defval) in zip(args[-default_len:], defaults):
                 try:
-                    val = getattr(obj, arg, None)
-                    if val != defval:
+                    val = getattr(obj, arg, missing)
+                    if val is not missing and val != defval:
                         yield '%s=%r' % (arg, val)
                 except:
                     pass
         if additional_kw:
             for arg, defval in additional_kw:
                 try:
-                    val = getattr(obj, arg, None)
-                    if val != defval:
+                    val = getattr(obj, arg, missing)
+                    if val is not missing and val != defval:
                         yield '%s=%r' % (arg, val)
                 except:
                     pass
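The `missing` sentinel introduced above fixes a subtle bug: with a default of `None`, an attribute that does not exist at all was indistinguishable from one explicitly set to `None`, so a spurious `arg=None` could be emitted whenever the declared default was non-None. A minimal sketch of the sentinel idiom (the helper name is illustrative):

```python
missing = object()  # private sentinel: never equal to any real value

def repr_arg_if_changed(obj, arg, defval):
    """Return 'arg=value' only when the attribute exists and differs
    from its default; an absent attribute is skipped entirely."""
    val = getattr(obj, arg, missing)
    if val is not missing and val != defval:
        return '%s=%r' % (arg, val)
    return None

class Widget(object):
    pass

w = Widget()
w.size = 10
```

With the old `getattr(obj, arg, None)` approach, asking about an absent `color` attribute against a default of `'red'` would have produced `color=None`; the sentinel version skips it.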
 requirement_cls=test.requirements:DefaultRequirements
 profile_file=test/profiles.txt
 
+# name of a "loopback" link set up on the oracle database.
+# to create this, suppose your DB is scott/tiger@xe.  You'd create it
+# like:
+# create database link test_link connect to scott identified by tiger using 'xe';
+oracle_db_link = test_link
+
+
 [db]
 default=sqlite:///:memory:
 sqlite=sqlite:///:memory:
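The new `oracle_db_link` option is read back out with the stock `ConfigParser` API, as the `oracle_test_dblink` requirement and `DBLinkReflectionTest.setup_class` in this commit do. A self-contained sketch (the sample config string mirrors the section above):

```python
# Sketch: reading the new [sqla_testing] option with ConfigParser, the
# same API the test requirement and fixture use in this commit.
import io
try:
    from configparser import ConfigParser           # Python 3
except ImportError:                                 # Python 2 fallback
    from ConfigParser import SafeConfigParser as ConfigParser

SAMPLE = u"[sqla_testing]\noracle_db_link = test_link\n"

cfg = ConfigParser()
if hasattr(cfg, "read_string"):
    cfg.read_string(SAMPLE)                         # Python 3
else:
    cfg.readfp(io.StringIO(SAMPLE))                 # Python 2

link = (cfg.get("sqla_testing", "oracle_db_link")
        if cfg.has_option("sqla_testing", "oracle_db_link") else None)
```

`has_option()` is what lets the requirement skip the DBLINK tests cleanly when the option is absent.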

test/dialect/test_oracle.py

 # coding: utf-8
+from __future__ import with_statement
 
 from sqlalchemy.testing import eq_
 from sqlalchemy import *
         eq_(result, u'’é')
 
 
+class DBLinkReflectionTest(fixtures.TestBase):
+    __requires__ = 'oracle_test_dblink',
+    __only_on__ = 'oracle'
+
+    @classmethod
+    def setup_class(cls):
+        from sqlalchemy.testing import config
+        cls.dblink = config.file_config.get('sqla_testing', 'oracle_db_link')
+
+        with testing.db.connect() as conn:
+            conn.execute(
+                "create table test_table "
+                "(id integer primary key, data varchar2(50))")
+            conn.execute("create synonym test_table_syn "
+                "for test_table@%s" % cls.dblink)
+
+    @classmethod
+    def teardown_class(cls):
+        with testing.db.connect() as conn:
+            conn.execute("drop synonym test_table_syn")
+            conn.execute("drop table test_table")
+
+    def test_hello_world(self):
+        """test that the synonym/dblink is functional."""
+        testing.db.execute("insert into test_table_syn (id, data) "
+                            "values (1, 'some data')")
+        eq_(
+            testing.db.execute("select * from test_table_syn").first(),
+            (1, 'some data')
+        )
+
+    def test_reflection(self):
+        """test the resolution of the synonym/dblink. """
+        m = MetaData()
+
+        t = Table('test_table_syn', m, autoload=True,
+                autoload_with=testing.db, oracle_resolve_synonyms=True)
+        eq_(t.c.keys(), ['id', 'data'])
+        eq_(list(t.primary_key), [t.c.id])

test/orm/test_froms.py

         )
 
 
+    def test_aliased_class_vs_nonaliased(self):
+        User, users = self.classes.User, self.tables.users
+        mapper(User, users)
+
+        ua = aliased(User)
+
+        sess = create_session()
+        self.assert_compile(
+            sess.query(User).select_from(ua).join(User, ua.name > User.name),
+            "SELECT users.id AS users_id, users.name AS users_name "
+            "FROM users AS users_1 JOIN users ON users.name < users_1.name"
+        )
+
+        self.assert_compile(
+            sess.query(User.name).select_from(ua).join(User, ua.name > User.name),
+            "SELECT users.name AS users_name FROM users AS users_1 "
+            "JOIN users ON users.name < users_1.name"
+        )
+
+        self.assert_compile(
+            sess.query(ua.name).select_from(ua).join(User, ua.name > User.name),
+            "SELECT users_1.name AS users_1_name FROM users AS users_1 "
+            "JOIN users ON users.name < users_1.name"
+        )
+
+        self.assert_compile(
+            sess.query(ua).select_from(User).join(ua, ua.name > User.name),
+            "SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
+            "FROM users JOIN users AS users_1 ON users.name < users_1.name"
+        )
+
+        # this is tested in many other places here, just adding it
+        # here for comparison
+        self.assert_compile(
+            sess.query(User.name).\
+                    select_from(users.select().where(users.c.id > 5)),
+            "SELECT anon_1.name AS anon_1_name FROM (SELECT users.id AS id, "
+            "users.name AS name FROM users WHERE users.id > :id_1) AS anon_1"
+        )
 
     def test_join_no_order_by(self):
         User, users = self.classes.User, self.tables.users

test/requirements.py

         return skip_if(lambda: not self._has_sqlite())
 
     @property
+    def oracle_test_dblink(self):
+        return skip_if(
+                    lambda: not self.config.file_config.has_option(
+                        'sqla_testing', 'oracle_db_link'),
+                    "oracle_db_link option not specified in config"
+                )
+
+    @property
     def ad_hoc_engines(self):
         """Test environment must allow ad-hoc engine/connection creation.
 

test/sql/test_constraints.py

         c = Index('foo', t.c.a)
         assert c in t.indexes
 
+    def test_tometadata_ok(self):
+        m = MetaData()
+
+        t = Table('tbl', m,
+                  Column('a', Integer),
+                  Column('b', Integer)
+        )
+
+        t2 = Table('t2', m,
+                Column('a', Integer),
+                Column('b', Integer)
+        )
+
+        uq = UniqueConstraint(t.c.a)
+        ck = CheckConstraint(t.c.a > 5)
+        fk = ForeignKeyConstraint([t.c.a], [t2.c.a])
+        pk = PrimaryKeyConstraint(t.c.a)
+
+        m2 = MetaData()
+
+        t3 = t.tometadata(m2)
+
+        eq_(len(t3.constraints), 4)
+
+        for c in t3.constraints:
+            assert c.table is t3
+
+    def test_check_constraint_copy(self):
+        m = MetaData()
+        t = Table('tbl', m,
+                  Column('a', Integer),
+                  Column('b', Integer)
+        )
+        ck = CheckConstraint(t.c.a > 5)
+        ck2 = ck.copy()
+        assert ck in t.constraints
+        assert ck2 not in t.constraints
+
+
     def test_ambig_check_constraint_auto_append(self):
         m = MetaData()
 