Commits

Mike Bayer committed 79ea271

- [removed] Deprecated identifiers removed:

* allow_null_pks mapper() argument
(use allow_partial_pks)

* _get_col_to_prop() mapper method
(use get_property_by_column())

* dont_load argument to Session.merge()
(use load=True)

* sqlalchemy.orm.shard module
(use sqlalchemy.ext.horizontal_shard)
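
As a quick orientation, a minimal before/after sketch of the four removals listed above (the Order class, orders table, and column names are illustrative and not part of this commit):

    from sqlalchemy import Column, Integer, MetaData, String, Table
    from sqlalchemy.orm import mapper

    metadata = MetaData()
    orders = Table('orders', metadata,
                   Column('id', Integer, primary_key=True),
                   Column('description', String(50)))

    class Order(object):
        pass

    # allow_null_pks is gone; the same setting is allow_partial_pks (defaults to True)
    order_mapper = mapper(Order, orders, allow_partial_pks=True)

    # _get_col_to_prop(col) is gone; use the public accessor instead
    id_prop = order_mapper.get_property_by_column(orders.c.id)

    # dont_load=True on Session.merge() is gone; spell it merge(obj, load=False)

    # sqlalchemy.orm.shard is gone; the sharding extension now lives here
    from sqlalchemy.ext.horizontal_shard import ShardedSession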


Files changed (8)

   - [removed] The legacy "mutable" system of the
     ORM, including the MutableType class as well
     as the mutable=True flag on PickleType
-    and postgresql.ARRAY has been removed.   
+    and postgresql.ARRAY has been removed.
     In-place mutations are detected by the ORM
     using the sqlalchemy.ext.mutable extension,
     introduced in 0.7.   The removal of MutableType
     no longer has any effect. is_modified() in 
     all cases looks only at local in-memory
     modified flags and will not emit any
-    SQL or invoke loader callables/initializers.  
+    SQL or invoke loader callables/initializers.
     [ticket:2320]
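
A minimal sketch of the replacement path for mutable=True described above, using the sqlalchemy.ext.mutable extension; the Document model is illustrative, and the bundled MutableDict helper is assumed to be available in this 0.8 line:

    from sqlalchemy import Column, Integer, PickleType
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.ext.mutable import MutableDict

    Base = declarative_base()

    class Document(Base):
        __tablename__ = 'document'
        id = Column(Integer, primary_key=True)
        # previously: Column(PickleType(mutable=True))
        data = Column(MutableDict.as_mutable(PickleType))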
 
   - [bug] The warning emitted when using 
     in these collections when the event 
     is called.  Added before_attach
     event to accommodate use cases that 
-    need autoflush w pre-attached object.  
+    need autoflush w pre-attached object.
     [ticket:2464]
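
A sketch of hooking the new before_attach event mentioned in this entry; the handler body is illustrative:

    from sqlalchemy import event
    from sqlalchemy.orm import Session

    @event.listens_for(Session, "before_attach")
    def receive_before_attach(session, instance):
        # fires before the object is attached, so an autoflush triggered
        # here does not yet include 'instance'
        print("attaching %r" % (instance,))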
 
+  - [removed] Deprecated identifiers removed:
+
+    * allow_null_pks mapper() argument
+      (use allow_partial_pks)
+
+    * _get_col_to_prop() mapper method
+      (use get_property_by_column())
+
+    * dont_load argument to Session.merge()
+      (use load=True)
+
+    * sqlalchemy.orm.shard module
+      (use sqlalchemy.ext.horizontal_shard)
+
 - engine
   - [feature] Added a new system
     for registration of new dialects in-process
     argument, preceding "table".   Code which
     uses the 0.7 version of this very new 
     event will need modification to add the 
-    "inspector" object as the first argument.  
+    "inspector" object as the first argument.
     [ticket:2418]
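
For the in-process dialect registration mentioned at the start of this entry, a sketch of the registry hook; the dialect name, module path, and class name are made up:

    from sqlalchemy.dialects import registry

    # makes create_engine("mydb+mydriver://...") resolvable without an
    # installed entry point; names below are purely illustrative
    registry.register("mydb.mydriver", "myapp.dialects.mydriver", "MyDBDialect")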
 
   - [feature] The behavior of column targeting
 
 0.7.7 - 0.7.xx
 ==============
-0.8 development begins during 0.7.7 development.   
+0.8 development begins during 0.7.7 development.
 All relevant bug fixes 
 and features listed below from version 0.7.7 on 
 are also present in 0.8.
 
   - [feature] The 'objects' argument to 
     flush() is no longer deprecated, as some
-    valid use cases have been identified.  
+    valid use cases have been identified.
 
   - [bug] Fixed identity_key() function which 
     was not accepting a scalar argument 
   - [bug] Repaired common table expression
     rendering to function correctly when the 
     SELECT statement contains UNION or other
-    compound expressions, courtesy btbuilder.  
+    compound expressions, courtesy btbuilder.
     [ticket:2490]
 
   - [bug] Fixed bug whereby append_column()
     string positional parameters passed to 
     engine/connection execute() would fail to be
     interpreted correctly, due to __iter__
-    being present on Py3K string.  
+    being present on Py3K string.
     [ticket:2503].
 
 - postgresql
     would not get the appropriate collection
     instrumentation if it were only used
     in a custom subclass that used
-    @collection.internally_instrumented.  
+    @collection.internally_instrumented.
     [ticket:2406]
 
   - [bug] Fixed bug whereby SQL adaption mechanics
     would fail in a very nested scenario involving
     joined-inheritance, joinedload(), limit(), and a
-    derived function in the columns clause.  
+    derived function in the columns clause.
     [ticket:2419]
 
   - [bug] Fixed the repr() for CascadeOptions to
 
   - [feature] Added the ability to query for
     Table-bound column names when using 
-    query(sometable).filter_by(colname=value).  
+    query(sometable).filter_by(colname=value).
     [ticket:2400]
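
A sketch of the filter_by() behavior described above against a plain Table entity; the table, column, and SQLite engine are illustrative:

    from sqlalchemy import (Column, Integer, MetaData, String, Table,
                            create_engine)
    from sqlalchemy.orm import Session

    metadata = MetaData()
    sometable = Table('sometable', metadata,
                      Column('id', Integer, primary_key=True),
                      Column('colname', String(50)))

    engine = create_engine('sqlite://')
    metadata.create_all(engine)
    session = Session(engine)

    # filter_by() can now resolve plain column names against the Table entity
    rows = session.query(sometable).filter_by(colname='value').all()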
 
   - [bug] Improved the "declarative reflection" 
   - [bug] Fixed bug whereby event.listen(SomeClass)
     forced an entirely unnecessary compile of the 
     mapper, making events very hard to set up
-    at module import time (nobody noticed this ??)  
+    at module import time (nobody noticed this ??)
     [ticket:2367]
 
   - [bug] Fixed bug whereby hybrid_property didn't 
 - engine
   - [bug] Added __reduce__ to StatementError, 
     DBAPIError, column errors so that exceptions 
-    are pickleable, as when using multiprocessing.  
+    are pickleable, as when using multiprocessing.
     However, not 
     all DBAPIs support this yet, such as 
     psycopg2. [ticket:2371]
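
A small sketch of the round-trip that the __reduce__ addition enables, e.g. when a worker process sends a failure back to the parent; the argument values are illustrative, and the wrapped 'orig' exception must itself be pickleable:

    import pickle
    from sqlalchemy import exc

    err = exc.StatementError("could not execute", "SELECT 1", (),
                             ValueError("boom"))
    restored = pickle.loads(pickle.dumps(err))
    assert restored.statement == "SELECT 1"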

lib/sqlalchemy/orm/__init__.py

         descriptor placed on the class. 
         **Deprecated.** Please see :class:`.AttributeEvents`.
 
-
     """
 
     return ColumnProperty(*cols, **kw)
            flag is highly discouraged; as an alternative, see the method
            :meth:`.Query.populate_existing`.
 
-        :param allow_null_pks: This flag is deprecated - this is stated as
-           allow_partial_pks which defaults to True.
-
         :param allow_partial_pks: Defaults to True.  Indicates that a
            composite primary key with some NULL values should be considered as
            possibly existing within the database. This affects whether a

lib/sqlalchemy/orm/mapper.py

                  polymorphic_identity=None,
                  concrete=False,
                  with_polymorphic=None,
-                 allow_null_pks=None,
                  allow_partial_pks=True,
                  batch=True,
                  column_prefix=None,
         self._reconstructor = None
         self._deprecated_extensions = util.to_list(extension or [])
 
-        if allow_null_pks:
-            util.warn_deprecated(
-                    "the allow_null_pks option to Mapper() is "
-                    "deprecated.  It is now allow_partial_pks=False|True, "
-                    "defaults to True.")
-            allow_partial_pks = allow_null_pks
-
         self.allow_partial_pks = allow_partial_pks
 
         self._set_with_polymorphic(with_polymorphic)
 
         if self.inherits:
             self.dispatch._update(self.inherits.dispatch)
-            super_extensions = set(chain(*[m._deprecated_extensions 
-                                    for m in self.inherits.iterate_to_root()]))
+            super_extensions = set(
+                        chain(*[m._deprecated_extensions 
+                        for m in self.inherits.iterate_to_root()]))
         else:
             super_extensions = set()
 
 
     def _configure_listeners(self):
         if self.inherits:
-            super_extensions = set(chain(*[m._deprecated_extensions 
-                                    for m in self.inherits.iterate_to_root()]))
+            super_extensions = set(
+                        chain(*[m._deprecated_extensions 
+                        for m in self.inherits.iterate_to_root()]))
         else:
             super_extensions = set()
 
             raise sa_exc.InvalidRequestError(
                     "Mapper '%s' has no property '%s'" % (self, key))
 
-    @util.deprecated('0.6.4',
-                     'Call to deprecated function mapper._get_col_to_pr'
-                     'op(). Use mapper.get_property_by_column()')
-    def _get_col_to_prop(self, col):
-        return self._columntoproperty[col]
-
     def get_property_by_column(self, column):
         """Given a :class:`.Column` object, return the
         :class:`.MapperProperty` which maps this column."""

lib/sqlalchemy/orm/session.py

         for o, m, st_, dct_ in cascade_states:
             self._delete_impl(st_)
 
-    def merge(self, instance, load=True, **kw):
+    def merge(self, instance, load=True):
         """Copy the state an instance onto the persistent instance with the
         same identifier.
 
         See :ref:`unitofwork_merging` for a detailed discussion of merging.
 
         """
-        if 'dont_load' in kw:
-            load = not kw['dont_load']
-            util.warn_deprecated('dont_load=True has been renamed to '
-                                 'load=False.')
 
         _recursive = {}
 

lib/sqlalchemy/orm/shard.py

-# orm/shard.py
-# Copyright (C) 2005-2012 the SQLAlchemy authors and contributors <see AUTHORS file>
-#
-# This module is part of SQLAlchemy and is released under
-# the MIT License: http://www.opensource.org/licenses/mit-license.php
-
-from sqlalchemy import util
-
-util.warn_deprecated(
-    "Horizontal sharding is now importable via "
-    "'import sqlalchemy.ext.horizontal_shard"
-)
-
-from sqlalchemy.ext.horizontal_shard import *
-

lib/sqlalchemy/orm/strategies.py

                 load_scalar_from_joined_existing_row, \
                 None, load_scalar_from_joined_exec
 
-EagerLoader = JoinedLoader
-"""Deprecated, use JoinedLoader"""
-
 log.class_logger(JoinedLoader)
 
 class EagerLazyOption(StrategizedOption):

lib/sqlalchemy/orm/sync.py

     else:
         return False
 
-def _raise_col_to_prop(isdest, source_mapper, source_column, dest_mapper, dest_column):
+def _raise_col_to_prop(isdest, source_mapper, source_column,
+                       dest_mapper, dest_column):
     if isdest:
-        raise exc.UnmappedColumnError(
-                                "Can't execute sync rule for destination column '%s'; "
-                                "mapper '%s' does not map this column.  Try using an explicit"
-                                " `foreign_keys` collection which does not include this column "
-                                "(or use a viewonly=True relation)." % (dest_column, dest_mapper)
-                                )
+        raise exc.UnmappedColumnError("Can't execute sync rule for "
+                "destination column '%s'; mapper '%s' does not map "
+                "this column.  Try using an explicit `foreign_keys` "
+                "collection which does not include this column (or use "
+                "a viewonly=True relation)." % (dest_column,
+                dest_mapper))
     else:
-        raise exc.UnmappedColumnError(
-                                "Can't execute sync rule for source column '%s'; mapper '%s' "
-                                "does not map this column.  Try using an explicit `foreign_keys`"
-                                " collection which does not include destination column '%s' (or "
-                                "use a viewonly=True relation)." % 
-                                (source_column, source_mapper, dest_column)
-                                )
+        raise exc.UnmappedColumnError("Can't execute sync rule for "
+                "source column '%s'; mapper '%s' does not map this "
+                "column.  Try using an explicit `foreign_keys` "
+                "collection which does not include destination column "
+                "'%s' (or use a viewonly=True relation)."
+                % (source_column, source_mapper, dest_column))

test/orm/test_merge.py

 import operator
 from test.lib import testing
 from sqlalchemy.util import OrderedSet
-from sqlalchemy.orm import mapper, relationship, create_session, PropComparator, \
-                            synonym, comparable_property, sessionmaker, attributes,\
-                            Session, backref, configure_mappers
+from sqlalchemy.orm import mapper, relationship, create_session, \
+    PropComparator, synonym, comparable_property, sessionmaker, \
+    attributes, Session, backref, configure_mappers
 from sqlalchemy.orm.collections import attribute_mapped_collection
 from sqlalchemy.orm.interfaces import MapperOption
 from test.lib.testing import eq_, ne_
         eq_(sess.query(User).first(), User(id=7, name='fred'))
 
     def test_transient_to_pending_no_pk(self):
-        """test that a transient object with no PK attribute doesn't trigger a needless load."""
+        """test that a transient object with no PK attribute 
+        doesn't trigger a needless load."""
 
         User, users = self.classes.User, self.tables.users
 
                 Address(id=3, email_address='fred3')])))
 
     def test_unsaved_cascade(self):
-        """Merge of a transient entity with two child transient entities, with a bidirectional relationship."""
+        """Merge of a transient entity with two child transient 
+        entities, with a bidirectional relationship."""
 
         users, Address, addresses, User = (self.tables.users,
                                 self.classes.Address,
             'addresses': relationship(
                 mapper(Address, addresses),
                 backref='user',
-                collection_class=attribute_mapped_collection('email_address')),
+                collection_class=
+                    attribute_mapped_collection('email_address')),
             })
         u1 = User(id=7, name='fred')
         u1.addresses['foo@bar.com'] = Address(email_address='foo@bar.com')
         assert u1.addresses.keys() == ['foo@bar.com']
 
     def test_attribute_cascade(self):
-        """Merge of a persistent entity with two child persistent entities."""
+        """Merge of a persistent entity with two child 
+        persistent entities."""
 
         users, Address, addresses, User = (self.tables.users,
                                 self.classes.Address,
 
 
         mapper(User, users, properties={
-            'addresses':relationship(mapper(Address, addresses), backref='user')
+            'addresses':relationship(mapper(Address, addresses), 
+                        backref='user')
         })
         load = self.load_tracker(User)
         self.load_tracker(Address, load)
 
 
         mapper(Order, orders, properties={
-            'items':relationship(mapper(Item, items), secondary=order_items)})
+            'items':relationship(mapper(Item, items), 
+                        secondary=order_items)})
 
         load = self.load_tracker(Order)
         self.load_tracker(Item, load)
 
 
         mapper(User, users, properties={
-            'address':relationship(mapper(Address, addresses),uselist = False)
+            'address':relationship(mapper(Address, addresses),
+                                    uselist = False)
         })
         load = self.load_tracker(User)
         self.load_tracker(Address, load)
                                 self.classes.User)
 
         mapper(User, users, properties={
-            'address':relationship(mapper(Address, addresses),uselist = False, backref='user')
+            'address':relationship(mapper(Address, addresses),
+                                uselist = False, backref='user')
         })
         sess = sessionmaker()()
-        u = User(id=7, name="fred", address=Address(id=1, email_address='foo@bar.com'))
+        u = User(id=7, name="fred", 
+                    address=Address(id=1, email_address='foo@bar.com'))
         sess.add(u)
         sess.commit()
         sess.close()
 
         sess = create_session()
         u = User()
-        assert_raises_message(sa.exc.InvalidRequestError, "load=False option does not support", sess.merge, u, load=False)
-
-    def test_dont_load_deprecated(self):
-        User, users = self.classes.User, self.tables.users
-
-        mapper(User, users)
-
-        sess = create_session()
-        u = User(name='ed')
-        sess.add(u)
-        sess.flush()
-        u = sess.query(User).first()
-        sess.expunge(u)
-        sess.execute(users.update().values(name='jack'))
-        @testing.uses_deprecated("dont_load=True has been renamed")
-        def go():
-            u1 = sess.merge(u, dont_load=True)
-            assert u1 in sess
-            assert u1.name=='ed'
-            assert u1 not in sess.dirty
-        go()
+        assert_raises_message(sa.exc.InvalidRequestError, 
+                "load=False option does not support", 
+                sess.merge, u, load=False)
 
     def test_no_load_with_backrefs(self):
-        """load=False populates relationships in both directions without requiring a load"""
+        """load=False populates relationships in both 
+        directions without requiring a load"""
 
         users, Address, addresses, User = (self.tables.users,
                                 self.classes.Address,
                                 self.classes.User)
 
         mapper(User, users, properties={
-            'addresses':relationship(mapper(Address, addresses), backref='user')
+            'addresses':relationship(mapper(Address, addresses), 
+                                backref='user')
         })
 
         u = User(id=7, name='fred', addresses=[
 
     def test_dontload_with_eager(self):
         """
-
-        This test illustrates that with load=False, we can't just copy the
-        committed_state of the merged instance over; since it references
-        collection objects which themselves are to be merged.  This
-        committed_state would instead need to be piecemeal 'converted' to
-        represent the correct objects.  However, at the moment I'd rather not
-        support this use case; if you are merging with load=False, you're
-        typically dealing with caching and the merged objects shouldnt be
-        'dirty'.
-
+        
+        This test illustrates that with load=False, we can't just copy
+        the committed_state of the merged instance over; since it
+        references collection objects which themselves are to be merged.
+        This committed_state would instead need to be piecemeal
+        'converted' to represent the correct objects.  However, at the
+        moment I'd rather not support this use case; if you are merging
+        with load=False, you're typically dealing with caching and the
+        merged objects shouldnt be 'dirty'.
+        
         """
 
         users, Address, addresses, User = (self.tables.users,
         sess.flush()
 
         sess2 = create_session()
-        u2 = sess2.query(User).options(sa.orm.joinedload('addresses')).get(7)
+        u2 = sess2.query(User).\
+                options(sa.orm.joinedload('addresses')).get(7)
 
         sess3 = create_session()
         u3 = sess3.merge(u2, load=False)
             sess2.merge(u, load=False)
             assert False
         except sa.exc.InvalidRequestError, e:
-            assert ("merge() with load=False option does not support "
-                    "objects marked as 'dirty'.  flush() all changes on mapped "
-                    "instances before merging with load=False.") in str(e)
+            assert "merge() with load=False option does not support "\
+                "objects marked as 'dirty'.  flush() all changes on "\
+                "mapped instances before merging with load=False." \
+                in str(e)
 
         u2 = sess2.query(User).get(7)
 
                                 self.classes.User)
 
         mapper(User, users, properties={
-            'addresses':relationship(mapper(Address, addresses),backref='user')})
+            'addresses':relationship(mapper(Address, addresses),
+                            backref='user')})
 
         sess = create_session()
         u = User()
         self.assert_sql_count(testing.db, go, 0)
 
     def test_no_load_preserves_parents(self):
-        """Merge with load=False does not trigger a 'delete-orphan' operation.
-
-        merge with load=False sets attributes without using events.  this means
-        the 'hasparent' flag is not propagated to the newly merged instance.
-        in fact this works out OK, because the '_state.parents' collection on
-        the newly merged instance is empty; since the mapper doesn't see an
-        active 'False' setting in this collection when _is_orphan() is called,
-        it does not count as an orphan (i.e. this is the 'optimistic' logic in
+        """Merge with load=False does not trigger a 'delete-orphan'
+        operation.
+        
+        merge with load=False sets attributes without using events.
+        this means the 'hasparent' flag is not propagated to the newly
+        merged instance. in fact this works out OK, because the
+        '_state.parents' collection on the newly merged instance is
+        empty; since the mapper doesn't see an active 'False' setting in
+        this collection when _is_orphan() is called, it does not count
+        as an orphan (i.e. this is the 'optimistic' logic in
         mapper._is_orphan().)
-
+        
         """
 
         users, Address, addresses, User = (self.tables.users,
 
         mapper(User, users, properties={
             'addresses':relationship(mapper(Address, addresses),
-                                 backref='user', cascade="all, delete-orphan")})
+                                 backref='user', 
+                                 cascade="all, delete-orphan")})
         sess = create_session()
         u = User()
         u.id = 7
         eq_(sess2.query(User).get(u2.id).addresses[0].email_address,
             'somenewaddress')
 
-        # this use case is not supported; this is with a pending Address on
-        # the pre-merged object, and we currently dont support 'dirty' objects
-        # being merged with load=False.  in this case, the empty
-        # '_state.parents' collection would be an issue, since the optimistic
-        # flag is False in _is_orphan() for pending instances.  so if we start
-        # supporting 'dirty' with load=False, this test will need to pass
+        # this use case is not supported; this is with a pending Address
+        # on the pre-merged object, and we currently dont support
+        # 'dirty' objects being merged with load=False.  in this case,
+        # the empty '_state.parents' collection would be an issue, since
+        # the optimistic flag is False in _is_orphan() for pending
+        # instances.  so if we start supporting 'dirty' with load=False,
+        # this test will need to pass
+
         sess = create_session()
         u = sess.query(User).get(7)
         u.addresses.append(Address())
 
         s = create_session(autoflush=True, autocommit=False)
         mapper(User, users, properties={
-            'addresses':relationship(mapper(Address, addresses),backref='user')})
+            'addresses':relationship(mapper(Address, addresses),
+                            backref='user')})
 
         a1 = Address(user=s.merge(User(id=1, name='ed')), email_address='x')
         before_id = id(a1.user)
-        a2 = Address(user=s.merge(User(id=1, name='jack')), email_address='x')
+        a2 = Address(user=s.merge(User(id=1, name='jack')), 
+                            email_address='x')
         after_id = id(a1.user)
         other_id = id(a2.user)
         eq_(before_id, other_id)
 
         sess = create_session(autoflush=True, autocommit=False)
         m = mapper(User, users, properties={
-            'addresses':relationship(mapper(Address, addresses),backref='user')})
-        user = User(id=8, name='fred', addresses=[Address(email_address='user')])
+            'addresses':relationship(mapper(Address, addresses),
+                                backref='user')})
+        user = User(id=8, name='fred', 
+                        addresses=[Address(email_address='user')])
         merged_user = sess.merge(user)
         assert merged_user in sess.new
         sess.flush()
     @classmethod
     def define_tables(cls, metadata):
         Table('user', metadata,
-            Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
+            Column('id', Integer, primary_key=True, 
+                            test_needs_autoincrement=True),
             Column('name', String(50)),
         )
         Table('address', metadata,
-            Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
+            Column('id', Integer, primary_key=True, 
+                            test_needs_autoincrement=True),
             Column('user_id', Integer, ForeignKey('user.id')),
             Column('email', String(50)),
         )
     @classmethod
     def define_tables(cls, metadata):
         Table("data", metadata, 
-            Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
+            Column('id', Integer, primary_key=True, 
+                            test_needs_autoincrement=True),
             Column('data', PickleType(comparator=operator.eq))
         )
 
         eq_(m,r)
 
     def test_merge_delete_orphan_o2o_none(self):
-        """one to one delete_orphan relationships marked load_on_pending should
-        be able to merge() with attribute None"""
+        """one to one delete_orphan relationships marked load_on_pending
+        should be able to merge() with attribute None"""
+
         self._setup_delete_orphan_o2o()
         self._merge_delete_orphan_o2o_with(None)
 
     def test_merge_delete_orphan_o2o(self):
-        """one to one delete_orphan relationships marked load_on_pending should
-        be able to merge()"""
+        """one to one delete_orphan relationships marked load_on_pending
+        should be able to merge()"""
+
         self._setup_delete_orphan_o2o()
         self._merge_delete_orphan_o2o_with(self.classes.Bug(id=1))