"FlushError: Over 100 subsequent flushes" when deleting same object twice in 1.1

Issue #3839 resolved
Adrian created an issue
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)


e = create_engine('sqlite:///', echo=False)
Base.metadata.create_all(e)
s = Session(e)

s.add(Foo())
s.commit()

foo = s.query(Foo).first()
s.delete(foo)
s.flush()
s.delete(foo)
s.flush()
s.commit()

With SQLAlchemy 1.0:

[adrian@blackhole:/tmp/test]> pip install -q 'sqlalchemy<1.1'
[adrian@blackhole:/tmp/test]> python satest.py
/tmp/test/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py:925: SAWarning: DELETE statement on table 'foo' expected to delete 1 row(s); 0 were matched.  Please set confirm_deleted_rows=False within the mapper configuration to prevent this warning.
  (table.description, expected, rows_matched)

With SQLAlchemy 1.1:

[adrian@blackhole:/tmp/test]> pip install -Uq sqlalchemy
[adrian@blackhole:/tmp/test]> python satest.py
Traceback (most recent call last):
  File "satest.py", line 26, in <module>
    s.commit()
  File "/tmp/test/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 874, in commit
    self.transaction.commit()
  File "/tmp/test/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 461, in commit
    self._prepare_impl()
  File "/tmp/test/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 444, in _prepare_impl
    "Over 100 subsequent flushes have occurred within "
sqlalchemy.orm.exc.FlushError: Over 100 subsequent flushes have occurred within session.commit() - is an after_flush() hook creating new objects?

It looks like the second delete adds the object to s.deleted, where it stays forever. It also fails only during s.commit(), not during a normal s.flush().
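The lifecycle described here can be observed through SQLAlchemy's public inspection API. A minimal sketch of the normal (non-buggy) state transitions, assuming the same Foo model and an in-memory SQLite engine as in the reproduction above (the try/except import covers both pre- and post-1.4 locations of declarative_base):

```python
from sqlalchemy import Column, Integer, create_engine, inspect
from sqlalchemy.orm import Session
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)


e = create_engine('sqlite://')
Base.metadata.create_all(e)
s = Session(e)
s.add(Foo())
s.commit()

foo = s.query(Foo).first()
st = inspect(foo)

persistent_before = st.persistent  # True: loaded, present in the identity map
s.delete(foo)
marked = foo in s.deleted          # True: marked for deletion, not yet flushed
s.flush()
flushed = st.deleted               # True: DELETE emitted, transaction still open
s.commit()
detached = st.detached             # True: fully removed after commit
```

The bug report amounts to the observation that in 1.1 the second delete puts the object back into `s.deleted` without any state under which the unit of work will ever process it.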

While it could be considered a bug in my application that I end up deleting the same object twice, I don't think the 1.1 behavior is correct: if this should indeed be an error case rather than just a warning as in 1.0, the error message should at least make the actual problem clear.
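Applications hitting this can also guard against the double delete on their side. A sketch of such a guard (safe_delete is a hypothetical helper, not a SQLAlchemy API; it assumes the same Foo model as in the reproduction):

```python
from sqlalchemy import Column, Integer, create_engine, inspect
from sqlalchemy.orm import Session
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)


def safe_delete(session, obj):
    """Delete obj unless it is already pending or flushed as deleted."""
    state = inspect(obj)
    if state.deleted or obj in session.deleted:
        return False
    session.delete(obj)
    return True


e = create_engine('sqlite://')
Base.metadata.create_all(e)
s = Session(e)
s.add(Foo())
s.commit()

foo = s.query(Foo).first()
first = safe_delete(s, foo)   # True: delete proceeds
s.flush()
second = safe_delete(s, foo)  # False: already flushed as deleted, skipped
s.commit()
```

The check covers both windows: `obj in session.deleted` catches a delete that is pending but unflushed, while `inspect(obj).deleted` catches one that has already been flushed within the open transaction.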

Comments (5)

  1. Mike Bayer repo owner

The 100-flushes error is only supposed to occur when flush events are in use. If you are illustrating this condition occurring with a simple double delete, that would be an enormous regression along the lines of "release today".

  2. Mike Bayer repo owner

This is caused by issue #2677 (108c60f460c7). First, the delete() method now rejects placing the object back in the identity map if it is detected as "already attached":

    1.1:
    
    -> s.delete(foo)
    (Pdb) !s.identity_map.contains_state(inspect(foo))
    False
    (Pdb) next
    > /home/classic/dev/sqlalchemy/test.py(33)<module>()
    -> print "THREE!!!"
    (Pdb) !s.identity_map.contains_state(inspect(foo))
    False
    
    
    1.0:
    
    -> s.delete(foo)
    (Pdb) !s.identity_map.contains_state(inspect(foo))
    False
    (Pdb) next
    > /home/classic/dev/sqlalchemy/test.py(33)<module>()
    -> print "THREE!!!"
    (Pdb) !s.identity_map.contains_state(inspect(foo))
    True
    

Next, unitofwork.register_object rejects the object because the session does not "contain" it; this logic is unchanged, but the result is different:

        def register_object(self, state, isdelete=False,
                            listonly=False, cancel_delete=False,
                            operation=None, prop=None):
            if not self.session._contains_state(state):
                if not state.deleted and operation is not None:
                    util.warn("Object of type %s not in session, %s operation "
                              "along '%s' will not proceed" %
                              (orm_util.state_class_str(state), operation, prop))
                return False
    

Therefore the object stays in session._deleted but is never handled.

  3. Mike Bayer repo owner

    Restore object to the identity_map upon delete() unconditionally

Fixed regression caused by #2677 whereby calling Session.delete() on an object that was already flushed as deleted in that session would fail to set up the object in the identity map (or reject the object), causing flush errors as the object was in a state not accommodated by the unit of work. The pre-1.1 behavior in this case has been restored, which is that the object is put back into the identity map so that the DELETE statement will be attempted again, which emits a warning that the number of expected rows was not matched (unless the row was restored outside of the session).

    Change-Id: I9a8871f82cb1ebe67a7ad54d888d5ee835a9a40a Fixes: #3839

    → <<cset e56a9d85acd1>>
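With this fix applied, the original reproduction should run to completion again; at worst the second flush emits the 1.0-era rowcount SAWarning rather than wedging the session. A sketch of that check, assuming a post-fix SQLAlchemy and an in-memory SQLite database:

```python
import warnings

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)


e = create_engine('sqlite://')
Base.metadata.create_all(e)
s = Session(e)
s.add(Foo())
s.commit()

foo = s.query(Foo).first()
s.delete(foo)
s.flush()
s.delete(foo)  # post-fix: the object goes back into the identity map
with warnings.catch_warnings():
    # the "expected to delete 1 row(s); 0 were matched" SAWarning may fire here
    warnings.simplefilter('ignore')
    s.flush()
s.commit()     # completes instead of raising FlushError
completed = True
```

As the warning text itself notes, setting confirm_deleted_rows=False in the mapper configuration silences the rowcount warning permanently for mappers where re-deleting is expected.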
