Please update doc_nav/dialects/mssql.html with 'timestamp'/RowVersion info.

Issue #3675 resolved
Jason Robertson
created an issue

Hi - under the mssql.html documentation there is a section 'Triggers' ...SQLAlchemy by default uses OUTPUT INSERTED to get at newly generated primary...

Please add verbiage indicating that this also seems to apply to rowversion/timestamp columns. I banged my head on this for a few hours after getting a rowcount back of -1, and found out that if you update a table with a timestamp column, you can't use OUTPUT inserted.timestamp and rowcount at the same time: you always get -1. To be honest I'm not sure whether it's a pyodbc/FreeTDS issue or an MS SQL Server issue, but it doesn't work. I had to set implicit_returning to False.

So maybe just a little text in that section like:

Note that this limitation also applies to tables containing timestamp columns used as a version_id_col.

Comments (10)

  1. Michael Bayer repo owner

    I don't understand the issue of "getting a rowcount back of -1". The documentation you refer to describes a straight-up error raised by the driver when OUTPUT INSERTED is used on a table with triggers; it has nothing to do with rowcounts. INSERT statements don't have a "rowcount"; that only applies to UPDATE and DELETE, to indicate the rows affected by the WHERE clause.

    Can you please be more specific as to what you're trying to do?

  2. Jason Robertson reporter

    Hi Mike - I have a table with a 'timestamp' (RowVersion) column in MS SQL Server. I am doing optimistic locking on that table, something like this (a simplified version):

    class Profile(Base, MyBase):
        __tablename__ = "profiles"
        id = Column(Integer(), primary_key=True)
        project = Column(String(256), nullable=False)
        profile = Column(String(256), nullable=False)
        note = Column(String(4000))
        timestamp = Column(TIMESTAMP(), default=text('DEFAULT'))
        __mapper_args__ = {
            'version_id_col': timestamp,
            'version_id_generator': False,
        }
    So when I do an update with a 'timestamp' column, I expect it to fail unless timestamp matches what's in the DB. The problem is that this doesn't work. No matter what, it thinks there is a state issue because the insert is returning -1 for the number of rows modified.

    Setting implicit_returning to False seems to fix it, so for all intents and purposes using a 'timestamp' column in MS SQL is like using a trigger: it seems to preclude getting a row count back.
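    A minimal sketch of that workaround, assuming a plain declarative setup (class and column names here are illustrative, not my real schema):

```python
# Illustrative sketch: disable OUTPUT INSERTED for a single table by
# passing implicit_returning=False through __table_args__.
from sqlalchemy import Column, Integer, String, TIMESTAMP, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Profile(Base):
    __tablename__ = "profiles"
    # Table-level switch: SQLAlchemy will not render OUTPUT INSERTED
    # for this table, so UPDATE rowcounts come back normally on mssql.
    __table_args__ = {"implicit_returning": False}

    id = Column(Integer, primary_key=True)
    project = Column(String(256), nullable=False)
    timestamp = Column(TIMESTAMP)

    __mapper_args__ = {
        # server-generated rowversion used as the version counter
        "version_id_col": timestamp,
        "version_id_generator": False,
    }

print(Profile.__table__.implicit_returning)  # False
```

    With this in place the ORM falls back to a separate SELECT to fetch the new rowversion instead of OUTPUT INSERTED.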

  3. Jason Robertson reporter

    Oops, yeah, sorry, I mistyped in that last update. It's an update, not an insert: "because the insert is returning" should be "because the update is returning". I've been talking about updates; 'inserted' is still the correct output parameter in the SQL syntax for getting output columns on an update.

    Here is the test case. Keep in mind I'm not sure this is a bug, maybe just the way MS SQL works, I just think the doc should be updated as described above because it seems to apply to timestamp columns and not just triggers.

    #...usual stuff
    class MyTable(Base):
        __tablename__ = 'mytable'
        id = Column(Integer, primary_key=True)
        stuff = Column(String(10))
        timestamp = Column(TIMESTAMP(), default=text('DEFAULT'))
        __mapper_args__ = {
            'version_id_col': timestamp,
            'version_id_generator': False,
        }
    #...usual stuff
    Session = sessionmaker(bind=engine)
    s = Session()
    cur = s.query(MyTable).one_or_none()
    assert cur is not None and cur.stuff == 'OK'
    cur.stuff = "NOTOK"
    s.commit()  # the error is raised when the UPDATE is flushed
    # Errors out with: sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table 'mytable' expected to update 1 row(s); -1 were matched.
    # But it's not really an error - the update worked fine.

    Note that the above works if you add __table_args__ = {'implicit_returning': False} to the class.
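    As a side note, you can see the OUTPUT clause SQLAlchemy renders for MS SQL without touching a database, by compiling a Core statement that uses RETURNING against the mssql dialect (table and column names below are made up for illustration):

```python
# Illustrative: RETURNING compiles to an OUTPUT inserted.<col> clause
# when rendered with the mssql dialect.
from sqlalchemy import Column, Integer, MetaData, String, Table, insert
from sqlalchemy.dialects import mssql

t = Table(
    "mytable",
    MetaData(),
    Column("id", Integer, primary_key=True),
    Column("stuff", String(10)),
)

stmt = insert(t).returning(t.c.id)
# the compiled SQL includes an "OUTPUT inserted.id" clause
print(stmt.compile(dialect=mssql.dialect()))
```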

  4. Michael Bayer repo owner

    OK, now I understand. This is pyodbc being lame, and pymssql doesn't support rowcount at all, so it's even lamer (but that case works with just a warning). This needs a new section.
