connection.invalidate() is broken

Issue #224 resolved
Mike Bayer repo owner created an issue

We have an unfinished feature on ConnectionFairy, and people are starting to notice it. Let's put all notes related to invalidate() on this ticket, including patches to get it working (all with unit tests, please!). Also note #121, which is blocked by this ticket.
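
For reference, the intended usage under discussion looks roughly like this. This is a minimal sketch only; the DSN and the query are made up for illustration and are not from the ticket:

    import sqlalchemy.pool as pool
    import psycopg2

    # pool.manage() wraps the DBAPI module so that connect() hands back
    # pooled ConnectionFairy objects instead of raw psycopg2 connections
    psycopg = pool.manage(psycopg2, pool_size=2, max_overflow=1, timeout=5)

    conn = psycopg.connect('dbname=test user=scott password=tiger host=127.0.0.1')
    try:
        cursor = conn.cursor()
        cursor.execute("select 1")
        conn.close()
    except psycopg2.OperationalError:
        # the feature this ticket is about: mark the underlying DBAPI
        # connection as dead so the pool discards it instead of handing
        # it out again on the next checkout
        conn.invalidate()
        raise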

Comments (6)

  1. Former user Account Deleted

    Here is a patch that fixes all of the issues I ran into with invalidate(). It now works with a (psycopg2-specific) reconnection manager I use.
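
    The patch itself is not reproduced here. As a rough sketch of the kind of reconnection manager meant (the helper name, DSN, and retry policy are made up, not taken from the patch), the pattern is to invalidate the pooled connection on a dropped-connection error and then check out a fresh one:

    import time
    import sqlalchemy.pool as pool
    import psycopg2

    psycopg = pool.manage(psycopg2, pool_size=2, max_overflow=1, timeout=5)
    DSN = 'dbname=test user=scott password=tiger host=127.0.0.1'

    def execute_with_retry(sql, retries=3, delay=1):
        for attempt in range(retries):
            conn = psycopg.connect(DSN)
            try:
                cursor = conn.cursor()
                cursor.execute(sql)
                rows = cursor.fetchall()
                conn.close()
                return rows
            except psycopg2.OperationalError:
                # the server dropped this connection; invalidate() tells the
                # pool to throw it away rather than recycle it, so the next
                # checkout gets a brand new connection
                conn.invalidate()
                time.sleep(delay)
        raise RuntimeError("statement failed after %d attempts" % retries)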

  2. Mike Bayer reporter
    • changed status to open
    • removed status

    The changes to invalidate() break the connection pool:

    from sqlalchemy import *
    import sqlalchemy.pool as pool
    import psycopg2 as psycopg
    import thread,time
    psycopg = pool.manage(psycopg,pool_size=2,max_overflow=1, timeout=5, echo=True)
    print psycopg
    db = create_engine('postgres://scott:tiger@127.0.0.1/test',pool=psycopg,strategy='threadlocal')
    print db.connection_provider._pool
    metadata = BoundMetaData(db)
    
    users_table = Table('users', metadata,
      Column('user_id', Integer, primary_key=True),
      Column('user_name', String(40)),
      Column('password', String(10)))
    metadata.create_all()
    
    class User(object):
        pass
    usermapper = mapper(User, users_table)
    
    # Then I create loads of threads, and in each thread's run():
    def run():
        session = create_session()
        transaction = session.create_transaction()
        query = session.query(User)
        u1=query.select(User.c.user_id==3)
    
    for x in range(0,50):
        thread.start_new_thread(run, ())
    
    while True:
        time.sleep(5)
    

    This test opens too many connections: as the invalidate() method gets called for timed-out connections, the overflow counter is decremented each time, and the pool then opens as many connections as there are threads. Fixes are being considered; however, please submit a test case for your original issue, as I will likely have to revert the "overflow" logic that was modified under this ticket.
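
    The accounting problem can be illustrated with a toy model of the overflow counter. This is only an illustration of the description above, not SQLAlchemy's actual pool code:

    class ToyPool(object):
        """toy model of pool_size/max_overflow bookkeeping, for illustration only"""
        def __init__(self, pool_size=2, max_overflow=1):
            self.limit = pool_size + max_overflow
            self.overflow = 0    # connections currently counted against the limit
            self.opened = 0      # total DBAPI connections ever opened

        def connect(self):
            if self.overflow >= self.limit:
                raise RuntimeError("pool limit reached")
            self.overflow += 1
            self.opened += 1
            return self.opened   # stand-in for a DBAPI connection

        def invalidate(self, conn):
            # the problematic step: every timed-out/invalidated connection
            # decrements the counter, making room for yet another connection
            self.overflow -= 1

    p = ToyPool(pool_size=2, max_overflow=1)
    for i in range(50):          # fifty "threads", as in the test above
        c = p.connect()
        p.invalidate(c)          # the connection times out and is invalidated

    print p.opened               # 50, far beyond pool_size + max_overflow == 3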

  3. Mike Bayer reporter

    Here's that program again:

    from sqlalchemy import *
    import sqlalchemy.pool as pool
    import psycopg2 as psycopg
    import thread,time
    psycopg = pool.manage(psycopg,pool_size=2,max_overflow=1, timeout=5, echo=True)
    print psycopg
    db = create_engine('postgres://scott:tiger@127.0.0.1/test',pool=psycopg,strategy='threadlocal')
    print db.connection_provider._pool
    metadata = BoundMetaData(db)
    
    users_table = Table('users', metadata,
      Column('user_id', Integer, primary_key=True),
      Column('user_name', String(40)),
      Column('password', String(10)))
    metadata.create_all()
    
    class User(object):
        pass
    usermapper = mapper(User, users_table)
    
    # Then I create loads of threads, and in each thread's run():
    def run():
        session = create_session()
        transaction = session.create_transaction()
        query = session.query(User)
        u1=query.select(User.c.user_id==3)
    
    for x in range(0,50):
        thread.start_new_thread(run, ())
    
    while True:
        time.sleep(5)
    
  4. Mike Bayer reporter

    changeset:1761 introduces a further adjustment so that the overflow counter is only affected if the connection attempt actually succeeded. Please verify that your original case is still fixed (and please submit a unit test).
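
    In terms of the toy counter sketched in an earlier comment, one reading of that adjustment is that the counter only moves once the DBAPI connect has actually succeeded. This is a sketch of the idea only, not the code in changeset:1761; create_connection() is a hypothetical stand-in for the real DBAPI connect call:

    class AdjustedToyPool(ToyPool):
        def connect(self):
            if self.overflow >= self.limit:
                raise RuntimeError("pool limit reached")
            conn = self.create_connection()   # may raise; the counter is untouched
            # only count the connection once it has actually been created
            self.overflow += 1
            self.opened += 1
            return conn

        def create_connection(self):
            # hypothetical stand-in for the real DBAPI connect() call
            return object()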
