Mike Bayer  committed c359033 Merge

merge doc fixes from tip

  • Parent commits 20b6ce0, e88231f


Files changed (5)

File doc/build/core/pooling.rst

 .. module:: sqlalchemy.pool
-SQLAlchemy ships with a connection pooling framework that integrates
-with the Engine system and can also be used on its own to manage plain
-DB-API connections.
+The establishment of a
+database connection is typically a somewhat expensive operation, and
+applications need a way to get at database connections repeatedly
+with minimal overhead.  Particularly for
+server-side web applications, a connection pool is the standard way to
+maintain a "pool" of active database connections in memory which are
+reused across requests.
-At the base of any database helper library is a system for efficiently
-acquiring connections to the database.  Since the establishment of a
-database connection is typically a somewhat expensive operation, an
-application needs a way to get at database connections repeatedly
-without incurring the full overhead each time.  Particularly for
-server-side web applications, a connection pool is the standard way to
-maintain a group or "pool" of active database connections which are
-reused from request to request in a single server process.
+SQLAlchemy includes several connection pool implementations 
+which integrate with the :class:`.Engine`.  They can also be used
+directly for applications that want to add pooling to an otherwise
+plain DBAPI approach.
 Connection Pool Configuration
                          pool_size=20, max_overflow=0)
 In the case of SQLite, a :class:`SingletonThreadPool` is provided instead,
-to provide compatibility with SQLite's restricted threading model.
+to provide compatibility with SQLite's restricted threading model, as well
+as to provide a reasonable default behavior for SQLite "memory" databases,
+which maintain their entire dataset within the scope of a single connection.
+All SQLAlchemy pool implementations have in common
+that none of them "pre-create" connections - all implementations wait
+until first use before creating a connection.   At that point, if
+no additional concurrent checkout requests for more connections 
+are made, no additional connections are created.   This is why it's perfectly
+fine for :func:`.create_engine` to default to using a :class:`.QueuePool`
+of size five without regard to whether or not the application really needs five connections
+queued up - the pool would only grow to that size if the application
+actually used five connections concurrently, in which case the usage of a 
+small pool is an entirely appropriate default behavior.
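As a minimal illustration of this lazy behavior (the connection URL below is
only a placeholder), no connection is actually established until the
:class:`.Engine` is first used::

    from sqlalchemy import create_engine

    # no DBAPI connections have been opened at this point
    engine = create_engine('postgresql+psycopg2://scott:tiger@localhost/test')

    # the pool creates its first connection here, on first use
    conn = engine.connect()

    # returns the connection to the pool; the pool still holds just one
    conn.close()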
-Custom Pool Construction
+Switching Pool Implementations
-:class:`Pool` instances may be created directly for your own use or to
-supply to :func:`sqlalchemy.create_engine` via the ``pool=``
-keyword argument.
+The usual way to use a different kind of pool with :func:`.create_engine`
+is to use the ``poolclass`` argument.   This argument accepts a class
+imported from the ``sqlalchemy.pool`` module, and handles the details
+of building the pool for you.   Common options include specifying
+:class:`.QueuePool` with SQLite::
-Constructing your own pool requires supplying a callable function the
-Pool can use to create new connections.  The function will be called
-with no arguments.
+    from sqlalchemy.pool import QueuePool
+    engine = create_engine('sqlite:///file.db', poolclass=QueuePool)
-Through this method, custom connection schemes can be made, such as a
-using connections from another library's pool, or making a new
-connection that automatically executes some initialization commands::
+Disabling pooling using :class:`.NullPool`::
+    from sqlalchemy.pool import NullPool
+    engine = create_engine(
+              'postgresql+psycopg2://scott:tiger@localhost/test', 
+              poolclass=NullPool)
+Using a Custom Connection Function
+All :class:`.Pool` classes accept an argument ``creator`` which is 
+a callable that creates a new connection.  :func:`.create_engine`
+accepts this function to pass on to the pool via an argument of
+the same name::
     import sqlalchemy.pool as pool
     import psycopg2
     def getconn():
         c = psycopg2.connect(username='ed', host='', dbname='test')
-        # execute an initialization function on the connection before returning
-        c.cursor.execute("setup_encodings()")
+        # do things with 'c' to set up
         return c
-    p = pool.QueuePool(getconn, max_overflow=10, pool_size=5)
+    engine = create_engine('postgresql+psycopg2://', creator=getconn)
-Or with SingletonThreadPool::
+For most "initialize on connection" routines, it's more convenient
+to use a :class:`.PoolListener`, so that the usual URL argument to
+:func:`.create_engine` is still usable.  ``creator`` is there as
+a total last resort for when a DBAPI has some form of ``connect``
+that is not at all supported by SQLAlchemy.
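For example, a rough sketch of such an initialization routine using
:class:`.PoolListener` (the listener class name and the ``SET`` statement
here are placeholders), registered through the ``listeners`` argument
accepted by :func:`.create_engine`::

    from sqlalchemy import create_engine
    from sqlalchemy.interfaces import PoolListener

    class EncodingSetter(PoolListener):
        """Hypothetical listener running a setup statement on each
        newly created DBAPI connection."""

        def connect(self, dbapi_con, con_record):
            cursor = dbapi_con.cursor()
            cursor.execute("SET client_encoding TO 'UTF8'")
            cursor.close()

    engine = create_engine(
                'postgresql+psycopg2://scott:tiger@localhost/test',
                listeners=[EncodingSetter()])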
+Constructing a Pool
+To use a :class:`.Pool` by itself, the ``creator`` function is
+the only argument that's required and is passed first, followed
+by any additional options::
     import sqlalchemy.pool as pool
-    import sqlite
+    import psycopg2
-    p = pool.SingletonThreadPool(lambda: sqlite.connect(filename='myfile.db'))
+    def getconn():
+        c = psycopg2.connect(username='ed', host='', dbname='test')
+        return c
+    mypool = pool.QueuePool(getconn, max_overflow=10, pool_size=5)
+DBAPI connections can then be procured from the pool using the :meth:`.Pool.connect`
+method.  The return value of this method is a DBAPI connection that's contained
+within a transparent proxy::
+    # get a connection
+    conn = mypool.connect()
+    # use it
+    cursor = conn.cursor()
+    cursor.execute("select foo")
+The purpose of the transparent proxy is to intercept the ``close()`` call,
+such that instead of the DBAPI connection being closed, it's returned to the pool::
+    # "close" the connection.  Returns
+    # it to the pool.
+    conn.close()
+The proxy also returns its contained DBAPI connection to the pool 
+when it is garbage collected, though Python does not guarantee that this
+occurs immediately (it is typical with cPython, however).
+A particular pre-created :class:`.Pool` can be shared with one or more
+engines by passing it to the ``pool`` argument of :func:`.create_engine`::
+    e = create_engine('postgresql://', pool=mypool)
+Pool Event Listeners
+Connection pools support an event interface that allows hooks to execute
+upon first connect, upon each new connection, and upon checkout and 
+checkin of connections.   See :class:`.PoolListener` for details.
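As a hedged sketch of the checkout hook (reusing the ``getconn`` function and
``pool`` module import from the example above), a listener can ping each
connection as it's checked out and ask the pool to replace ones that appear
dead::

    from sqlalchemy import exc
    from sqlalchemy.interfaces import PoolListener

    class ConnectionChecker(PoolListener):
        """Hypothetical listener discarding dead connections on checkout."""

        def checkout(self, dbapi_con, con_record, con_proxy):
            try:
                dbapi_con.cursor().execute("SELECT 1")
            except Exception:
                # signals the pool to throw away this connection
                # and attempt to check out a replacement
                raise exc.DisconnectionError()

    mypool = pool.QueuePool(getconn, listeners=[ConnectionChecker()])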
 Builtin Pool Implementations
-.. autoclass:: AssertionPool
-   :show-inheritance:
-   .. automethod:: __init__
-.. autoclass:: NullPool
-   :show-inheritance:
-   .. automethod:: __init__
 .. autoclass:: sqlalchemy.pool.Pool
    .. automethod:: __init__
    .. automethod:: __init__
+.. autoclass:: AssertionPool
+   :show-inheritance:
+.. autoclass:: NullPool
+   :show-inheritance:
 .. autoclass:: StaticPool
-   .. automethod:: __init__
 Pooling Plain DB-API Connections

File doc/build/orm/collections.rst

     posts = jack.posts[5:20]
 The dynamic relationship supports limited write operations, via the
-``append()`` and ``remove()`` methods. Since the read side of the dynamic
-relationship always queries the database, changes to the underlying collection
-will not be visible until the data has been flushed:
-.. sourcecode:: python+sql
+``append()`` and ``remove()`` methods::
     oldpost = jack.posts.filter(Post.headline=='old post').one()
     jack.posts.append(Post('new post'))
+Since the read side of the dynamic relationship always queries the 
+database, changes to the underlying collection will not be visible 
+until the data has been flushed.  However, as long as "autoflush" is 
+enabled on the :class:`.Session` in use, this will occur 
+automatically each time the collection is about to emit a query.
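For instance, a rough sketch building on the ``jack.posts`` examples above
(assuming autoflush is left enabled on the session)::

    jack.posts.append(Post('new post'))

    # iterating the dynamic collection emits a SELECT; autoflush sends
    # the pending Post to the database first, so it appears in the results
    assert 'new post' in [post.headline for post in jack.posts]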
 To place a dynamic relationship on a backref, use ``lazy='dynamic'``:
 .. sourcecode:: python+sql
 this collection is a ``list``::
     mapper(Parent, properties={
-        children = relationship(Child)
+        'children' : relationship(Child)
     parent = Parent()
     # use a set
     mapper(Parent, properties={
-        children = relationship(Child, collection_class=set)
+        'children' : relationship(Child, collection_class=set)
     parent = Parent()

File doc/build/orm/inheritance.rst

 ``with_polymorphic`` setting.
 Advanced Control of Which Tables are Queried
 The :meth:`.Query.with_polymorphic` method and configuration works fine for
 simplistic scenarios. However, it currently does not work with any
 query the name of employees with particular criterion::
-        outerjoin((engineer, engineer.c.employee_id==Employee.c.employee_id)).\
-        outerjoin((manager, manager.c.employee_id==Employee.c.employee_id)).\
+        outerjoin((engineer, engineer.c.employee_id==Employee.employee_id)).\
+        outerjoin((manager, manager.c.employee_id==Employee.employee_id)).\
         filter(or_(Engineer.engineer_info=='w', Manager.manager_data=='q'))
 The base table, in this case the "employees" table, isn't always necessary. A
 Creating Joins to Specific Subtypes
 The :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type` method is a
 helper which allows the construction of joins along

File doc/build/outline.txt

-introduction   intro.rst
-SQLAlchemy ORM   orm.rst
-    tutorial   tutorial.rst
-    mapper configuration   mapper_config.rst
-        customizing column properties
-            subset of table columns
-            attr names for mapped columns
-            multiple cols for single attribute
-        deferred col loading
-            ref: deferred
-            ref: defer
-            ref: undefer
-            ref: undefer-group
-        sql expressions as mapped attributes
-            ref: column_property
-        changing attribute behavior
-            simple validators
-                ref: validates
-            using descriptors
-                ref: synonym
-                ref: hybrid
-            custom comparators
-                ref: PropComparator (needs the inherits)
-                ref: comparable_property
-        composite column types
-            ref: composite
-        Mapper API
-            ref: mapper
-            ref: Mapper
-    relationship configuration  relationships.rst
-        basic patterns
-        adjacency list
-        join conditions
-        mutually dependent rows
-        mutable primary keys update cascades
-        loading options (blah blah eager loading is better ad-hoc link)
-        ref: relationship()
-    inheritance configuration and usage   inheritance.rst
-    session usage  session.rst
-        reference
-            ref: session
-            ref: object_session
-            ref: attribute stuff
-    query object  query.rst
-        ref: query
-    eagerloading  loading.rst
-        what kind of loading to use ?
-        routing explicit joins
-        reference
-            ref: joinedload
-            ref: subqueryload
-            ref: lazyload
-            ref: noload
-            ref: contains_eager
-        other options
-            ref:
-        constructs
-    events events.rst
-    collection configuration collections.rst
-        dictionaries
-        custom collections
-        collection decorators
-        instr and custom types
-        large collection techniques
-    ORM Extensions extensions.rst
-        association proxy
-        declarative
-        orderinglist
-        horizontal shard
-        sqlsoup
-    examples examples.rst
-    deprecated interfaces deprecated.rst
-SQLAlchemy Core core.rst
-    Expression Tutorial  sqlexpression.rst
-    Expression API Reference  expression_api.rst
-    engines  engines.rst
-        intro
-        db support
-        create_engine URLs
-        DBAPI arguments
-        logging
-        api reference
-         ref: create_engine
-         ref: engine_from_config
-         ref: engine
-         ref: resultproxy
-         ref: URL
-    connection pools  pooling.rst
-        events
-    connections / transactions connections.rst
-        more on connections
-        using transactions with connection
-        understanding autocommit
-        connectionless/implicit
-        threadlocal
-        events
-        Connection API ref
-            ref: connection
-            ref: connectable
-            ref: transaction
-    schema schema.rst
-        metadata
-            accessing tables and columns
-            creating dropping
-            binding
-            reflecting
-                ..
-                ..
-                ..
-                ref: inspector
-            specifying schema name
-            backend options
-            table metadata API
-                ref: metadata
-                ref: table  (put the tometadata section in the method)
-                ref: column
-            column insert/update defaults
-                scalar defaults
-                python functions
-                    context sensitive
-                sql expressions
-                server side defaults
-                triggered
-                defining sequences
-                column default API
-                    ref: fetchedvalue
-                    ref: seq
-                    etc
-            defining constraints and indexes
-                defining foreign keys
-                    creating/dropping via alter
-                    onupdate ondelete
-                    Foreign Key API
-                unique constraint
-                    ref: unique
-                check constraint
-                    ref: check
-                indexes
-                    ref: index
-            customizing ddl
-                control ddl sequences
-                custom ddl
-                events
-                DDL API
-                    :ddlelement
-                    ddl
-                    ddlevents
-                    createtable
-                    droptable
-                    et etc
-    datatypes types.rst
-        generic
-        sql standard
-        vendor
-        custom
-            - changing type compilation
-            - wrapping types with typedec
-            - user defined types
-        base API    
-            - abstract
-            - typeengine
-            - mutable
-            - etc
-    custom compilation compiler.rst

File lib/sqlalchemy/orm/

     The recipe decorators all require parens, even those that take no
-        @collection.adds('entity'):
+        @collection.adds('entity')
         def insert(self, position, entity): ...
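As a rough, self-contained sketch of how such a decorated method might look
on a ``list``-based collection (the class and method names here are
hypothetical)::

    from sqlalchemy.orm.collections import collection

    class MyList(list):
        # an extra mutator beyond the standard list interface;
        # adds('entity') tells the instrumentation which argument
        # carries the value being added to the collection
        @collection.adds('entity')
        def insert_entity(self, position, entity):
            self.insert(position, entity)

Such a class could then be supplied to a :func:`.relationship` via its
``collection_class`` argument.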