Jan Brohl committed a4912a5

Imported from svn by Bitbucket


Files changed (19)

+ABOUT
+=====
+
+PyDAL is a database abstraction layer for Python.  It provides a DBAPI 2.0 wrapper for DBAPI 2.0 drivers.  That sounds strange, but even drivers that fully conform to the DBAPI can differ enough to make building database-independent applications difficult.  The two major abstractions PyDAL handles are paramstyles and datetime objects: it makes it possible to use the same paramstyle and datetime types with any module that conforms to DBAPI 2.0.  In addition, the paramstyle and datetime types are configurable.
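To make the paramstyle problem concrete, here is a minimal, hypothetical sketch (not PyDAL's actual implementation) of translating a DBAPI `qmark` query into the `pyformat` style that drivers such as psycopg expect. Note that a real translator must also skip placeholders inside quoted string literals, which is exactly why the quote/escape character configuration described later exists.

```python
# Illustrative only: convert a 'qmark' (?) query into 'pyformat'
# (%(name)s) form.  This naive version does not skip quoted literals.
def qmark_to_pyformat(sql, params):
    """Replace each '?' with '%(pN)s' and build the matching param dict."""
    out = []
    values = {}
    n = 0
    for ch in sql:
        if ch == '?':
            name = 'p%d' % n
            out.append('%%(%s)s' % name)
            values[name] = params[n]
            n += 1
        else:
            out.append(ch)
    return ''.join(out), values

sql, values = qmark_to_pyformat(
    'select * from t where a = ? and b = ?', (1, 2))
# sql    -> 'select * from t where a = %(p0)s and b = %(p1)s'
# values -> {'p0': 1, 'p1': 2}
```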
+
+It should work with any driver that is DBAPI 2.0 compliant.  For those that are not, adaptations are handled in configuration files.  Check out config_Example.py for examples.
+
+INSTALL
+=======
+
+You can simply copy the dal package somewhere on your Python path, or use
+easy_install to install it as an egg.
+
+
+USAGE
+=====
+
+Look at the documentation for the dbapi.py module for a usage example.  A very simple example looks like this:
+
+import dal
+drv = dal.wrapdriver('psycopg')
+cn = drv.connect(database='mydb')
+cs = cn.cursor()
+cs.execute('select * from mytable')
+result = cs.fetchall()
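The call sequence above is ordinary DBAPI 2.0; the wrapper only changes how the driver module is obtained. As a runnable point of reference (using the stdlib sqlite3 driver directly, not through PyDAL), the same connect/cursor/execute/fetchall sequence looks like this:

```python
# The standard DBAPI 2.0 call sequence, shown against the stdlib
# sqlite3 driver.  With PyDAL, the module would instead be obtained
# via dal.wrapdriver(...) and used the same way.
import sqlite3

cn = sqlite3.connect(':memory:')
cs = cn.cursor()
cs.execute('create table mytable (a integer, b text)')
cs.execute('insert into mytable values (?, ?)', (1, 'one'))
cs.execute('select * from mytable')
result = cs.fetchall()
# result -> [(1, 'one')]
cn.close()
```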
+
+CUSTOM CONFIGURATION
+====================
+
+The wrapper should work natively with any DBAPI V2 compliant module, although
+some modules may require a configuration file for full functionality.
+These configuration files must be on the Python path (some are included with
+this software).
+
+Here are the configuration options.
+
+function convertdt converts the native driver's datetime type to an mxDateTime
+or Python datetime object that the wrapper knows how to work with.
+
+quote_chars is a list of characters that should be treated as quote
+characters in sql statements, not including the standard sql ' character.
+
+escape_chars is a list of characters that should be treated as escape
+characters beyond those in standard sql.
+
+function init1 is executed at the beginning of the wrapper initialization and
+allows custom modifications to the wrapper or native driver at runtime.
+
+function init2 is executed at the end of the wrapper initialization and
+allows custom modifications to the wrapper or native driver at runtime.
+
+There is an example config file named config_Example.py
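Putting the options above together, a hypothetical configuration file for an imaginary driver might look like the following sketch (the driver name and its datetime attributes are assumptions, modeled on the config files included with this software):

```python
# Hypothetical config_MyDriver.py -- every option below is optional.
import datetime

# Non-standard quoting, e.g. backtick identifiers and backslash escapes.
quote_chars = ['`']
escape_chars = ['\\']

def convertdt(moddt, field_desc, pref=None):
    """Convert the driver's native datetime type (assumed here to expose
    year/month/day/hour/minute/second attributes) to a Python datetime."""
    return datetime.datetime(moddt.year, moddt.month, moddt.day,
                             moddt.hour, moddt.minute, moddt.second)

def init1(wrapper):
    """Runs at the beginning of wrapper initialization."""
    pass

def init2(wrapper):
    """Runs at the end of wrapper initialization."""
    pass
```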
+
+LICENSE
+=======
+
+Copyright (c) 2004, Peter Buschman and Randall Smith
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+    * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+    * Neither the name of the <ORGANIZATION> nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AUTHORS
+=======
+
+PyDAL is the creation of Randall Smith <randall@tnr.cc> and Peter Buschman <plb@iotk.com>
+
+DOWNLOAD
+========
+
+The project is hosted on Sourceforge.
+
+Currently available from SVN:
+
+svn co http://pydal.svn.sourceforge.net/svnroot/pydal pydal
+from dbapi.dbapi import wrapdriver
+from dbapi.dbexceptions import *
+# When I use a relative import, dbapi ends up in the namespace.
+#del dbapi

Empty file added.

dal/dbapi/config_MySQLdb.py

+import MySQLdb
+import datetime
+import math
+
+# special quote/escape characters (if any)
+quote_chars = ["`"]
+escape_chars = ['\\']
+
+def init1(wrapper):
+    # MySQLdb does not define DATETIME
+    if (not hasattr(wrapper._driver, 'DATETIME')):
+        wrapper._driver.DATETIME = wrapper._driver.DBAPISet(10, 12)
+
+# convertdt not needed because MySQLdb uses mx.DateTime natively

dal/dbapi/config_cx_Oracle.py

+import cx_Oracle
+import datetime
+
+def convertdt(moddt, field_desc, pref=None):
+    """convert Oracle DateTime to Python datetime"""
+    year = moddt.year
+    month = moddt.month
+    day = moddt.day
+    hour = moddt.hour
+    minute = moddt.minute
+    sec = moddt.second
+    msec = moddt.fsecond
+    pydt = datetime.datetime(year, month, day, hour, minute, sec, msec)
+    return pydt

dal/dbapi/config_psycopg.py

+import psycopg
+
+# special quote/escape characters (if any)
+escape_chars = ['\\']
+

dal/dbapi/db_row.py

+# -*- coding: utf-8 -*-
+'''
+File:          db_row.py
+
+Authors:       Kevin Jacobs (jacobs@theopalgroup.com)
+
+Created:       May 14, 2002
+
+Abstract:      This module defines light-weight objects which allow very
+               flexible access to a fixed number of positional and named
+               attributes via several interfaces.
+
+Compatibility: Python 2.2 and above
+
+Requires:      new-style classes, Python 2.2 super builtin
+
+Version:       0.8
+
+Revision:      $Id: db_row.py 48 2009-11-24 14:58:58Z pai1ripe $
+
+Copyright (c) 2002,2003 The OPAL Group.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to
+deal in the Software without restriction, including without limitation the
+rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+sell copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+IN THE SOFTWARE.
+
+----------------------------------------------------------------------------------
+
+This module defines light-weight objects suitable for many applications,
+though the implementer's primary goal is the storage of database query
+results.  The primary design criteria for the data-structure were:
+
+  1) store a sequence of arbitrary Python objects.
+  2) the number of items stored in each instance should be constant.
+  3) each instance must be as light-weight as possible, since many thousands
+     of them are likely to be created.
+  4) values must be retrievable by index:
+     e.g.: d[3]
+  5) values must be retrievable by field name by both Python attribute syntax and
+     item syntax:
+     e.g.: d.fields.foo == d['foo']
+  6) optionally, operations using field names should be case-insensitive
+     e.g.: d['FiElD'], d.fields.FiElD
+  7) Otherwise drop-in compatible with tuple objects.
+  8) Maintains a disjoint namespace for field names so they do not conflict
+     with method names.
+
+These criteria were chosen to simplify access to rows that are returned from
+database queries.  Let's say that you run this query:
+
+  cursor.execute('SELECT a,b,c FROM blah;')
+  results = cursor.fetchall()
+
+The resulting data-structure is typically a list of row tuples. e.g.:
+
+  results = [ (1,2,3), (3,4,5), (6,7,8) ]
+
+While entirely functional, these data types only allow integer indexed
+access.  e.g., to query the b attribute of the second row:
+
+  b = results[1][1]
+
+This requires that all query results are accessed by index, which can be
+very tedious and the code using this technique tends to be hard to maintain.
+The alternative has always been to return a list of native Python
+dictionaries, one for each row.  e.g.:
+
+  results = [ {'a':1,'b':2,'c':3}, {'a':3,'b':4,'c':5},
+              {'a':6,'b':7,'c':8} ]
+
+This has the advantage of easier access to attributes by name, e.g.:
+
+  foo = results[1]['b']
+
+however, there are several serious disadvantages.
+
+  1) each row requires a heavy-weight dictionary _per instance_.  This can
+     damage performance by forcing a loaded system to start swapping virtual
+     memory to disk when returning, say, 100,000 rows from a query.
+
+  2) access by index is lost since Python dictionaries are unordered.
+
+  3) attribute-access syntax is somewhat sub-optimal (or at least
+     inflexible) since it must use the item-access syntax.
+
+     i.e., x['a'] vs. x.a.
+
+  4) Compatibility with code that expects tuples is lost.
+
+Of course, the second and third problems can be partially addressed by
+creating a UserDict (a Python class that looks and acts like a dictionary),
+though that only magnifies the performance problems.
+
+HOWEVER, there are some new features in Python 2.2 and newer that can
+provide the best of all possible worlds.  Here is an example:
+
+  # Create a new class type to store the results from our query (we'll make
+  # field names case-insensitive just to show off)
+
+  R=IMetaRow(['a','b','c'])
+
+  # Create an instance of our new tuple class with values 1,2,3
+  r=R( (1,2,3) )
+
+  # Demonstrate all three accessor types
+  print r['a'], r[1], r.fields.c
+  > 1 2 3
+
+  # Demonstrate case-insensitive operation
+  print r['a'], r['A']
+  > 1 1
+
+  # Return the keys (column names)
+  print r.keys()
+  > ('a', 'b', 'c')
+
+  # Return the values
+  print r.values()
+  > (1, 2, 3)
+
+  # Return a list of keys and values
+  print r.items()
+  > (('a', 1), ('b', 2), ('c', 3))
+
+  # Return a dictionary of the keys and values
+  print r.dict()
+  > {'a': 1, 'c': 3, 'b': 2}
+
+  # Demonstrate slicing behavior
+  print r[1:3]
+  > (2, 3)
+
+This solution uses some new Python 2.2 features and ends up allocating only
+one dictionary _per row class_, not per row instance.  i.e., the row
+instances do not allocate a dictionary at all!  This is accomplished using
+the new-style object 'slots' mechanism.
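The slots mechanism behind this saving can be seen directly in a small standalone sketch (independent of db_row itself): a new-style class that declares __slots__ allocates no per-instance dictionary at all.

```python
# A slotted class stores attributes in fixed slots on the instance,
# so no per-instance __dict__ is allocated -- the basis of db_row's
# one-dictionary-per-row-class design.
class PlainRow(object):
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

class SlottedRow(object):
    __slots__ = ('a', 'b', 'c')
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

p = PlainRow(1, 2, 3)
s = SlottedRow(1, 2, 3)
assert hasattr(p, '__dict__')      # one dictionary per instance
assert not hasattr(s, '__dict__')  # attributes live in fixed slots
```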
+
+Here is how you could use these objects:
+
+  cursor.execute('SELECT a,b,c FROM blah;')
+
+  # Make a class to store the resulting rows
+  R = IMetaRow(cursor.description)
+
+  # Build the rows from the row class and each tuple returned from the cursor
+  results = [ R(row) for row in cursor.fetchall() ]
+
+  print results[1].fields.b, results[2].fields.B, results[3]['b'], results[2][1]
+
+Open implementation issues:
+
+  o Values are currently mutable, so hashing of rows is explicitly
+    disallowed.  This does not bother me much, though some may desire both
+    mutable and immutable instance types.
+
+  o The current row code returns most slicing, copying, and combining operations
+    (objects resulting from the '+' and '*' operators), keys(), values(),
+    items() as tuples.  This is done to better conform to legacy code which
+    assumes that rows are always tuples.  This seems sensible enough, though
+    I welcome other opinions on the subject.
+
+  o More documentation and doc-strings are needed.
+
+  o Improve the integrated unit-tests (a la doctest or unittest, most likely).
+
+  o Add some better example code.
+
+Changes from version 0.71 -> 0.8:
+
+  o Ported to win32+Mingw32 and includes pre-compiled Win32 versions for
+    Python 2.2 and Python 2.3.
+
+  o Fixed C implementation to allow fields with zero elements.  The behavior
+    now matches the pure-Python version.  (Reported by Anthony Baxter)
+
+  o Fixed C implementation so that accessing uninitialized fields raises an
+    exception.  The behavior now matches the pure-Python version.
+
+  o Other minor cleanups and tweaks
+
+  o Added more unit tests.
+
+Changes from version 0.7 -> 0.71:
+
+  o Removed an unnecessary call to 'enumerate', which is only available in
+    Python 2.3.  Thanks to Ben Golding of Object Craft for noticing this.
+
+Changes from version 0.6 -> 0.7:
+
+  o Removed some cruft from the Python implementation of FieldsBase.
+
+  o Made the behavior of the Python base classes better match the C version
+    in a few spots.
+
+  o Cleaned up driver and description storage using properties.
+
+  o Added a dictionary-like .get(key,default=None) method to the Row class.
+
+  o Added new test cases.
+
+Changes from version 0.5 -> 0.6:
+
+  o Added missing slots declaration from Python FieldsBase object.  This
+    corrected a major flaw in the pure-Python implementation which caused
+    the allocation of per-instance dictionaries, and allowing access to
+    undeclared fields.  The C version of FieldsBase was not affected.
+
+  o Fixed exception types so that various accessors raise the appropriate
+    exceptions.  e.g., previously __getitem__ would incorrectly raise
+    AttributeError exceptions.  These changes were made in both the Python
+    and C versions.
+
+  o Added many new test cases to the regression suite, including much more
+    rigorous read/write testing.
+
+  o Removed some unnecessary (and slow!) code from the C fields_subscript
+    function.  I suspect it was a left-over from past debugging that was
+    not completely removed.
+'''
+
+__all__ = ['MetaFields', 'IMetaFields',
+           'MetaRow',    'IMetaRow',
+           'Fields',     'IFields',
+           'Row',        'IRow',
+           'FieldDescriptor']
+
+FORCE_PURE_PYTHON = 0
+
+try:
+  Nothing
+except NameError:
+  Nothing = object()
+
+class MetaFields(type):
+  '''MetaFields:
+
+     A meta-class that adds properties to a class that allow access to
+     indexed elements in the class by names specified in the __fields__
+     attribute of the class.  Indices start with __fieldoffset__ if it
+     exists, or 0 if it does not.  Field name access is case-sensitive,
+     though case-insensitive classes may be created using the
+     IMetaFields meta-class.
+  '''
+
+  __slots__ = ()
+
+  def __new__(cls, name, bases, field_dict):
+    fields = field_dict.get('__fields__',())
+    cls.build_properties(cls, fields, field_dict)
+
+    return super(MetaFields,cls).__new__(cls, name, bases, field_dict)
+
+  def build_properties(self, fields, field_dict):
+    '''Helper function that creates field properties'''
+
+    slots = list(field_dict.get('__slots__',[]))
+
+    field_names = {}
+    for s in slots:
+      field_names[s] = 1
+
+    for f in fields:
+      if type(f) is not str:
+        raise TypeError, 'Field names must be ASCII strings'
+      if not f:
+        raise ValueError, 'Field names cannot be empty'
+      if f in field_names:
+        raise ValueError, 'Field names must be unique: %s' % f
+
+      slots.append(f)
+      field_names[f] = 1
+
+    fields = tuple(fields)
+    slots  = tuple(slots)
+
+    field_dict['__fieldnames__'] = fields
+    field_dict['__fields__']     = fields
+    field_dict['__slots__']      = slots
+
+  build_properties = staticmethod(build_properties)
+
+
+class IMetaFields(MetaFields):
+  '''IMetaFields:
+
+     A meta-class that adds properties to a class that allow access to
+     indexed elements in the class by names specified in the __fields__
+     attribute of the class.  Indices start with __fieldoffset__ if it
+     exists, or 0 if it does not.  Field name access is case-insensitive,
+     though case-sensitive classes may be created using the
+     MetaFields meta-class.
+  '''
+
+  __slots__  = ()
+
+  def build_properties(cls, fields, field_dict):
+    '''Helper function that creates field properties'''
+
+    try:
+      ifields = tuple( [ f.lower() for f in fields ] )
+    except AttributeError:
+      raise TypeError, 'Field names must be ASCII strings'
+
+    super(IMetaFields,cls).build_properties(cls, ifields, field_dict)
+    field_dict['__fields__'] = tuple(fields)
+
+  build_properties = staticmethod(build_properties)
+
+
+try:
+  if FORCE_PURE_PYTHON:
+    raise ImportError
+
+  import db_rowc
+  FieldsBase  = db_rowc.abstract_fields
+  IFieldsBase = db_rowc.abstract_ifields
+
+except ImportError:
+  class FieldsBase(object):
+
+    __slots__ = ()
+
+    def __init__(self, values):
+      fields = type(self).__fieldnames__
+      for field,value in zip(fields,values):
+        setattr(self, field, value)
+
+    def __len__(self):
+      fields = type(self).__fieldnames__
+      return len(fields)
+
+    def __contains__(self, item):
+      return item in tuple(self)
+
+    def __str__(self):
+      return str(tuple(self))
+
+    def __repr__(self):
+      return repr(tuple(self))
+
+    def __getitem__(self, i):
+      try:
+        if isinstance(i, int):
+          fields = self.__fieldnames__
+          return getattr(self, fields[i])
+        else:
+          return getattr(self, i)
+      except AttributeError:
+        return None
+
+    def __setitem__(self, i, value):
+      if isinstance(i, int):
+        fields = type(self).__fieldnames__
+        setattr(self, fields[i], value)
+      else:
+        setattr(self, i, value)
+
+    def __delitem__(self, i):
+      if isinstance(i, int):
+        fields = type(self).__fieldnames__
+        delattr(self,fields[i])
+      else:
+        delattr(self,i)
+
+    def __getslice__(self, i, j):
+      return tuple(self)[i:j]
+
+    def __setslice__(self, i, j, values):
+      fields = type(self).__fieldnames__[i:j]
+      for field,value in zip(fields,values):
+        setattr(self, field, value)
+
+    def __delslice__(self, i, j):
+      fields = type(self).__fieldnames__[i:j]
+      for field in fields:
+        delattr(self,field)
+
+    def __eq__(self, other):
+      if isinstance(other, FieldsBase):
+        return tuple(self) == tuple(other)
+      return tuple(self) == other
+
+    def __ne__(self, other):
+      if isinstance(other, FieldsBase):
+        return tuple(self) != tuple(other)
+      return tuple(self) != other
+
+    def __lt__(self, other):
+      if isinstance(other, FieldsBase):
+        return tuple(self) < tuple(other)
+      return tuple(self) < other
+
+    def __gt__(self, other):
+      if isinstance(other, FieldsBase):
+        return tuple(self) > tuple(other)
+      return tuple(self) > other
+
+    def __le__(self, other):
+      if isinstance(other, FieldsBase):
+        return tuple(self) <= tuple(other)
+      return tuple(self) <= other
+
+    def __ge__(self, other):
+      if isinstance(other, FieldsBase):
+        return tuple(self) >= tuple(other)
+      return tuple(self) >= other
+
+    def __add__(self, other):
+      if isinstance(other, FieldsBase):
+        return tuple(self) + tuple(other)
+      return tuple(self) + other
+
+    def __radd__(self, other):
+      if isinstance(other, FieldsBase):
+        return tuple(other) + tuple(self)
+      return other + tuple(self)
+
+    def __mul__(self, other):
+      return tuple(self) * other
+
+    def __delattr__(self, key):
+      super(FieldsBase, self).__setattr__(key,None)
+
+
+  class IFieldsBase(FieldsBase):
+    '''IFields:
+
+       A tuple-like base-class that gains properties to allow access to
+       indexed elements in the class by names specified in the __fields__
+       attribute when the class is declared.  Indices start with
+       __fieldoffset__ if it exists, or 0 if it does not.  Field name access
+       is case-insensitive, though case-sensitive objects may be created by
+       inheriting from the Fields base-class.
+    '''
+
+    __slots__ = ()
+
+    def __getattribute__(self, key):
+      return super(IFieldsBase, self).__getattribute__(key.lower())
+
+    def __setattr__(self, key, value):
+      super(IFieldsBase, self).__setattr__(key.lower(),value)
+
+    def __delattr__(self, key):
+      super(IFieldsBase, self).__setattr__(key.lower(),None)
+
+
+class Fields(FieldsBase):
+  '''Fields:
+
+     A tuple-like base-class that gains properties to allow access to
+     indexed elements in the class by names specified in the __fields__
+     attribute when the class is declared.  Indices start with
+     __fieldoffset__ if it exists, or 0 if it does not.  Field name access
+     is case-sensitive, though case-insensitive objects may be created by
+     inheriting from the IFields base-class.
+  '''
+
+  __metaclass__ = MetaFields
+  __slots__ = ()
+
+
+class IFields(IFieldsBase):
+  '''IFields:
+
+     A tuple-like base-class that gains properties to allow access to
+     indexed elements in the class by names specified in the __fields__
+     attribute when the class is declared.  Indices start with
+     __fieldoffset__ if it exists, or 0 if it does not.  Field name access
+     is case-insensitive, though case-sensitive objects may be created by
+     inheriting from the Fields base-class.
+  '''
+
+  __metaclass__ = IMetaFields
+  __slots__ = ()
+
+
+try:
+  if FORCE_PURE_PYTHON:
+    raise ImportError
+
+  import db_rowc
+  RowBase = db_rowc.abstract_row
+
+except ImportError:
+  class RowBase(object):
+    '''Row:
+
+       A light-weight object which allows very flexible access to a fixed
+       number of positional and named fields via several interfaces.  Field
+       access by name is case-sensitive, though case-insensitive access is
+       available via the IRow object.
+    '''
+
+    __slots__ = ('fields',)
+
+    def __getitem__(self, key):
+      if type(key) is str:
+        try:
+          return getattr(self.fields,key)
+        except AttributeError:
+          raise KeyError,key
+      return self.fields.__getitem__(key)
+
+    def __setitem__(self, key, value):
+      if type(key) is str:
+        try:
+          setattr(self.fields,key,value)
+        except AttributeError:
+          raise KeyError,key
+      else:
+        self.fields.__setitem__(key,value)
+
+    def __delitem__(self, key):
+      if type(key) is str:
+        try:
+          delattr(self.fields,key)
+        except AttributeError:
+          raise KeyError,key
+      else:
+        self.fields.__delitem__(key)
+
+    def __getslice__(self, i, j):
+      return self.fields.__getslice__(i, j)
+
+    def __setslice__(self, i, j, values):
+      self.fields.__setslice__(i, j, values)
+
+    def __delslice__(self, i, j):
+      self.fields.__delslice__(i, j)
+
+    def __hash__(self):
+      raise NotImplementedError,'Row objects are not hashable'
+
+    def __len__(self):
+      return len(self.fields)
+
+    def __contains__(self, item):
+      return item in self.fields
+
+    def __str__(self):
+      return str(self.fields)
+
+    def __repr__(self):
+      return repr(self.fields)
+
+    def __eq__(self, other):
+      if isinstance(other, Row):
+        return self.fields == other.fields
+      return self.fields == other
+
+    def __ne__(self, other):
+      if isinstance(other, Row):
+        return self.fields != other.fields
+      return self.fields != other
+
+    def __lt__(self, other):
+      if isinstance(other, Row):
+        return self.fields < other.fields
+      return self.fields < other
+
+    def __gt__(self, other):
+      if isinstance(other, Row):
+        return self.fields > other.fields
+      return self.fields > other
+
+    def __le__(self, other):
+      if isinstance(other, Row):
+        return self.fields <= other.fields
+      return self.fields <= other
+
+    def __ge__(self, other):
+      if isinstance(other, Row):
+        return self.fields >= other.fields
+      return self.fields >= other
+
+    def __add__(self, other):
+      if isinstance(other, Row):
+        return self.fields + other.fields
+      return self.fields + other
+
+    def __radd__(self, other):
+      if isinstance(other, Row):
+        return other.fields + self.fields
+      return other + self.fields
+
+    def __mul__(self, other):
+      return self.fields * other
+
+
+class Row(RowBase):
+  '''Row:
+
+     A light-weight object which allows very flexible access to a fixed
+     number of positional and named fields via several interfaces.  Field
+     access by name is case-sensitive, though case-insensitive access is
+     available via the IRow object.
+  '''
+
+  __slots__ = ()
+
+  driver = property(lambda self: type(self).driver)
+  descr  = property(lambda self: type(self).field_descriptors)
+
+  def keys(self):
+    '''r.keys() -> list of r's field names'''
+    return type(self.fields).__fields__
+
+  def items(self):
+    '''r.items() -> tuple of r's (field, value) pairs, as 2-tuples'''
+    return zip(self.keys(),self.fields)
+
+  def get(self, key, default=None):
+    if not isinstance(key, str):
+      return default
+    try:
+      return self[key]
+    except KeyError:
+      return default
+
+  def has_key(self, key):
+    '''r.has_key(k) -> 1 if r has field k, else 0'''
+    return key in type(self.fields).__fieldnames__
+
+  def dict(self):
+    '''r.dict() -> dictionary mapping r's fields to its values'''
+    return dict(self.items())
+
+  def copy(self):
+    '''r.copy() -> a shallow copy of r'''
+    return type(self)(self)
+
+  def __hash__(self):
+    raise NotImplementedError,'Row objects are not hashable'
+
+
+class IRow(Row):
+  '''IRow:
+
+     A light-weight object which allows very flexible access to a fixed
+     number of positional and named fields via several interfaces.  Field
+     access by name is case-insensitive, though case-sensitive access is
+     available via the Row object.
+  '''
+
+  __slots__ = ()
+
+  def has_key(self, key):
+    if isinstance(key, str):
+      key = key.lower()
+    return super(IRow, self).has_key(key)
+
+
+class MetaRowBase(type):
+  '''MetaRowBase:
+
+     A meta-class that builds row objects given a list of fields or field
+     schema.  Field access is case-sensitive, though a case-insensitive
+     version, IMetaRow, is also available.
+  '''
+
+  __slots__  = ()
+
+  def __new__(cls, name, bases, cls_dict):
+    fields = tuple(cls_dict.get('__fields__',()))
+
+    field_dict = {}
+    field_dict['__slots__'] = getattr(cls.field_base,'__slots__',())
+
+    field_names = []
+    field_descriptors = []
+
+    for field in fields:
+      descriptor = FieldDescriptor(field)
+      field_names.append(descriptor.name)
+      field_descriptors.append(descriptor)
+
+    field_dict['__fields__'] = tuple(field_names)
+    field_class = type('%s_fields' % name, (cls.field_base,), field_dict)
+
+    cls_dict['field_descriptors'] = field_class(field_descriptors)
+
+    row_class = super(MetaRowBase,cls).__new__(cls, name, bases, cls_dict)
+
+    assert '__init__' not in cls_dict
+
+    def __init__(self, fields):
+      self.fields = field_class(fields)
+
+    row_class.__init__ = __init__
+
+    return row_class
+
+
+class MetaRow(MetaRowBase):
+  '''MetaRow:
+
+     A meta-class that builds row objects given a list of fields or field
+     schema.  Field access is case-sensitive, though a case-insensitive
+     version, IMetaRow, is also available.
+  '''
+
+  __slots__  = ()
+  row_base   = Row
+  field_base = Fields
+
+  def __new__(cls, fields, driver = None):
+    cls_dict = {'__slots__' : (), '__fields__' : fields, 'driver' : driver}
+    return super(MetaRow,cls).__new__(cls, 'row', (cls.row_base,), cls_dict)
+
+
+class IMetaRow(MetaRowBase):
+  '''IMetaRow:
+
+     A meta-class that builds row objects given a list of fields or field
+     schema.  Field access is case-insensitive, though a case-sensitive
+     version, MetaRow, is also available.
+  '''
+
+  __slots__  = ()
+  row_base   = IRow
+  field_base = IFields
+
+  def __new__(cls, fields, driver = None):
+    cls_dict = {'__slots__' : (), '__fields__' : fields, 'driver' : driver}
+    return super(IMetaRow,cls).__new__(cls, 'irow', (cls.row_base,), cls_dict)
+
+
+class FieldDescriptor(Fields):
+  '''FieldDescriptor:
+
+     A class that includes the Python DB-API 2.0 schema description fields
+     that allows read-only access to data elements by name and by index.
+  '''
+
+  __slots__  = ()
+  __fields__ = ('name', 'type_code', 'display_size', 'internal_size',
+               'precision', 'scale', 'null_ok')
+
+  def __init__(self, desc):
+    if isinstance(desc, (tuple,list)):
+      desc = tuple(desc) + (None,)*(6-len(desc))
+    elif not isinstance(desc, FieldDescriptor):
+      desc = (desc,)+(None,)*6
+
+    super(FieldDescriptor,self).__init__(desc)
+
+  def __str__(self):
+    fields = type(self).__fields__
+    values = ['%s: %s' % (key,repr(value)) for key,value in zip(fields, self) ]
+    return '{%s}' % ', '.join(values)
+
+  __repr__ = __str__
+
+
+class RowList(list):
+  __slots__ = ('row_class',)
+  def __init__(self, values, row_class = None):
+    super(RowList,self).__init__(values)
+    self.row_class = row_class
+  driver = property(lambda self: self.row_class.driver)
+  descr  = property(lambda self: self.row_class.field_descriptors)
+
+
+class NullRow(type(Nothing)):
+  __slots__ = ('driver','row_class','descr')
+  driver = property(lambda self: self.row_class.driver)
+  descr  = property(lambda self: self.row_class.field_descriptors)
+  def __new__(self):
+    return object.__new__(self)
+  def __init__(self, row_class = None):
+    self.row_class = row_class
+  def __eq__(self, other):
+    return 0
+  def __ne__(self, other):
+    return 1
+  def __nonzero__(self):
+    return 0
+
+
+def test(cls):
+  D=cls(['a','B','c'])
+  d=D( (1,2,3) )
+
+  assert d['a']==d[0]==d.fields.a==d.fields[0]==1
+  assert d['B']==d[1]==d.fields.B==d.fields[1]==2
+  assert d['c']==d[2]==d.fields.c==d.fields[2]==3
+
+  assert len(d) == 3
+  assert d.has_key('a')
+  assert d.has_key('B')
+  assert d.has_key('c')
+  assert 'd' not in d
+  assert 1 in d
+  assert 2 in d
+  assert 3 in d
+  assert 4 not in d
+  assert not d.has_key(4)
+  assert not d.has_key('d')
+  assert d[-1] == 3
+  assert d[1:3] == (2,3)
+
+  assert d.keys() == ('a','B','c')
+  assert d.items() == [('a', 1), ('B', 2), ('c', 3)]
+  assert d.dict()  == {'a': 1, 'c': 3, 'B': 2}
+  assert d.copy() == d
+  assert d == d.copy()
+  assert d is not d.copy()
+  assert type(d) is type(d.copy())
+  assert d.fields is not d.copy().fields
+  assert [ x for x in d ] == [1,2,3]
+
+  assert d.get('a') == 1
+  assert d.get('B') == 2
+  assert d.get('c') == 3
+
+  assert d.get('d') == None
+  assert d.get(0)   == None
+  assert d.get(3)   == None
+
+  assert d.get('d', -1) == -1
+  assert d.get(0,   -1) == -1
+  assert d.get(3,   -1) == -1
+
+  assert d.fields==d.fields
+  assert d == d
+  assert d == (1,2,3)
+  assert (1,2,3) == d
+  assert d!=()
+  assert ()<d
+  assert ()<=d
+  assert d>()
+  assert d>=()
+
+  try:
+    d[4]
+    raise AssertionError, 'Illegal index not caught'
+  except IndexError:
+    pass
+
+  try:
+    d['f']
+    raise AssertionError, 'Illegal key not caught'
+  except KeyError:
+    pass
+
+  try:
+    d.fields.f
+    raise AssertionError, 'Illegal attribute not caught'
+  except AttributeError:
+    pass
+
+
+def test_insensitive(cls):
+  D=cls(['a','B','c'])
+  d=D( (1,2,3) )
+
+  assert d['a']==d['A']==d[0]==d.fields.A==d.fields.a==d.fields[0]==1
+  assert d['b']==d['B']==d[1]==d.fields.B==d.fields.b==d.fields[1]==2
+  assert d['c']==d['C']==d[2]==d.fields.C==d.fields.c==d.fields[2]==3
+
+  assert d.has_key('a')
+  assert d.has_key('A')
+  assert d.has_key('b')
+  assert d.has_key('B')
+  assert d.has_key('c')
+  assert d.has_key('C')
+  assert not d.has_key('d')
+  assert not d.has_key('D')
+
+  assert 1 in d
+  assert 2 in d
+  assert 3 in d
+  assert 4 not in d
+  assert 'a' not in d
+  assert 'A' not in d
+  assert 'd' not in d
+  assert 'D' not in d
+
+  assert d.get('A') == 1
+  assert d.get('b') == 2
+  assert d.get('C') == 3
+
+
+def test_concat(cls):
+  D=cls(['a','B','c'])
+  d=D( (1,2,3) )
+
+  assert d+(4,5,6) == (1, 2, 3, 4, 5, 6)
+  assert (4,5,6)+d == (4, 5, 6, 1, 2, 3)
+  assert d+d       == (1, 2, 3, 1, 2, 3)
+  assert d*2       == (1, 2, 3, 1, 2, 3)
+
+
+def test_descr(cls):
+  D=cls( (('field1', 1, 2, 3, 4, 5, 6),
+          ('field2', 0, 0, 0, 0, 0, 0),
+          'field3') )
+  d = D( (1,2,3) )
+
+  assert d==(1,2,3)
+  assert len(d.descr) == 3
+  assert d.descr[0] == ('field1', 1, 2, 3, 4, 5, 6)
+  assert d.descr[0].name          == 'field1'
+  assert d.descr[0].type_code     == 1
+  assert d.descr[0].display_size  == 2
+  assert d.descr[0].internal_size == 3
+  assert d.descr[0].precision     == 4
+  assert d.descr[0].scale         == 5
+  assert d.descr[0].null_ok       == 6
+  assert d.descr[1] == ('field2', 0, 0, 0, 0, 0, 0)
+  assert d.descr[2] == ('field3', None, None, None, None, None, None)
+
+
+def test_rw(cls):
+  D=cls(['a','B','c'])
+  d=D( (1,2,3) )
+
+  assert d['a']==d[0]==d.fields.a==1
+  assert d['B']==d[1]==d.fields.B==2
+  assert d['c']==d[2]==d.fields.c==3
+
+  d['a']     = 4
+  d[1]       = 5
+  d.fields.c = 6
+
+  assert d['a']==d[0]==d.fields.a==4
+  assert d['B']==d[1]==d.fields.B==5
+  assert d['c']==d[2]==d.fields.c==6
+
+  d[:] = (7,8,9)
+
+  assert d['a']==d[0]==d.fields.a==7
+  assert d['B']==d[1]==d.fields.B==8
+  assert d['c']==d[2]==d.fields.c==9
+
+  d.fields[:] = (1,2,3)
+
+  assert d['a']==d[0]==d.fields.a==1
+  assert d['B']==d[1]==d.fields.B==2
+  assert d['c']==d[2]==d.fields.c==3
+
+  del d[0]
+
+  assert d[0] == None
+  assert d == (None, 2, 3)
+
+  del d[:]
+
+  assert d == (None, None, None)
+  assert d[0]==d[1]==d[2]==None
+
+  d.fields.a = 1
+  d['B'] = 2
+  assert d[0:2] == (1,2)
+
+  del d.fields.a
+  del d['B']
+  assert d[0:2] == (None,None)
+
+  try:
+    d['g'] = 'illegal'
+    raise AssertionError,'Illegal setitem'
+  except KeyError:
+    pass
+
+  try:
+    del d['g']
+    raise AssertionError,'Illegal delitem'
+  except KeyError:
+    pass
+
+  try:
+    d[5] = 'illegal'
+    raise AssertionError,'Illegal setitem'
+  except IndexError:
+    pass
+
+  try:
+    del d[5]
+    raise AssertionError,'Illegal delitem'
+  except IndexError:
+    pass
+
+  try:
+    d.fields.g = 'illegal'
+    raise AssertionError,'Illegal setattr'
+  except AttributeError:
+    pass
+
+  try:
+    del d.fields.g
+    raise AssertionError,'Illegal delattr'
+  except AttributeError:
+    pass
+
+
+def test_Irw(cls):
+  D=cls(['a','B','c'])
+  d=D( (1,2,3) )
+
+  assert d['a']==d[0]==d.fields.a==1
+  assert d['B']==d[1]==d.fields.B==2
+  assert d['c']==d[2]==d.fields.c==3
+
+  d['A']     = 4
+  d[1]       = 5
+  d.fields.C = 6
+
+  assert d['A']==d[0]==d.fields.A==4
+  assert d['b']==d[1]==d.fields.B==5
+  assert d['C']==d[2]==d.fields.C==6
+
+  d[:] = (7,8,9)
+
+  assert d['a']==d[0]==d.fields.a==7
+  assert d['B']==d[1]==d.fields.B==8
+  assert d['c']==d[2]==d.fields.c==9
+
+  d.fields[:] = (1,2,3)
+
+  assert d['a']==d[0]==d.fields.a==1
+  assert d['b']==d[1]==d.fields.b==2
+  assert d['c']==d[2]==d.fields.c==3
+
+  del d[0]
+
+  assert d[0] == None
+  assert d == (None, 2, 3)
+
+  del d[:]
+
+  assert d == (None, None, None)
+  assert d[0]==d[1]==d[2]==None
+
+  d.fields.A = 1
+  d['b'] = 2
+  assert d[0:2] == (1,2)
+
+  del d.fields.A
+  del d['b']
+  assert d[0:2] == (None,None)
+
+
+def test_incomplete(cls):
+  D=cls(['a','B','c'])
+  d=D( ' ' )
+
+  assert d['a'] == ' '
+  assert d.fields.a == ' '
+
+  try:
+    d['B']
+    raise AssertionError,'Illegal getitem: "%s"' % d['B']
+  except KeyError:
+    pass
+
+  try:
+    d['c']
+    raise AssertionError,'Illegal getitem'
+  except KeyError:
+    pass
+
+  try:
+    d.fields.b
+    raise AssertionError,'Illegal getattr'
+  except AttributeError:
+    pass
+
+  try:
+    d.fields.c
+    raise AssertionError,'Illegal getattr'
+  except AttributeError:
+    pass
+
+  d['c'] = 1
+  d.fields.c = 2
+
+
+def test_empty(cls):
+  D=cls([])
+  d=D([])
+
+
+if __name__ == '__main__':
+  import gc,sys
+
+  N = 100
+
+  orig_objects = len(gc.get_objects())
+
+  for i in range(N):
+    for cls in [MetaRow,IMetaRow]:
+      test(cls)
+      test_concat(cls)
+      test_descr(cls)
+      test_rw(cls)
+      test_incomplete(cls)
+      test_empty(cls)
+
+    test_insensitive(IMetaRow)
+    test_Irw(IMetaRow)
+    gc.collect()
+
+  # Detect memory leak fixed in 2.2.2 (& 2.3pre CVS)
+  gc.collect()
+  new_objects = len(gc.get_objects()) - orig_objects
+  if new_objects >= N:
+    print "WARNING: Detected memory leak of %d objects." % new_objects
+    if sys.version_info >= (2,2,2):
+      print "         Please notify jacobs@theopalgroup.com immediately."
+    else:
+      print "         You are running a Python older than 2.2.1 or older.  Several"
+      print "         memory leaks in the core interepreter were fixed in version"
+      print "         2.2.2, so we strongly recommend upgrading."
+
+  print 'Tests passed'

dal/dbapi/dbapi.py

+""" Wraps DB-API V2 compliant database drivers.
+
+LICENSE
+=======
+
+Copyright (c) 2004, Randall Smith
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+    * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+    * Neither the name of the <ORGANIZATION> nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+Usage Example:
+import dal
+dbmod = dal.wrapdriver('psycopg')
+# (Optional) Set the datetime type you want to use.
+# Defaults to Python's datetime module.  Can be 'mx', 'py', or 'native'.
+dbmod.dtmod = 'py' # Python datetime module.
+dbmod.paramstyle = 'qmark'
+try:
+    cn = dbmod.connect(host='myhost', database='mydb', user='me', password='mypw')
+    cs = cn.cursor()
+    query = "Select * from mytable where dtfield = ?"
+    params = [dbmod.Date(2004, 7, 1)]
+    cs.execute(query, params)
+    result = cs.fetchall()
+# all DBAPI2 exceptions from the driver are mapped to the corresponding dal 
+# exceptions so it is possible to catch them in a generic way.
+except dal.Error:
+    pass
+
+
+
+The following example shows how to solve timezone issues. These issues
+are in fact SQL-server- and driver-specific and unfortunately have not
+been addressed in the Python DBAPI spec. Below is an example for MySQLdb.
+
+import datetime
+import dal
+import pytz
+
+dbmod = dal.wrapdriver('MySQLdb')
+
+# Set the local timezone. This has the side effect that all
+# datetime.datetime objects returned as query results will carry this tzinfo.
+
+dal.dbapi.dbtime.local_tzinfo = pytz.reference.Local
+
+# The following line tells the dal that all datetime.datetime objects
+# returned by the driver without tzinfo should be treated as being in
+# the given timezone. It also ensures that all datetime.datetime objects
+# passed as query parameters are converted internally to this timezone
+# before being sent to the driver.
+
+dal.dbapi.dbtime.server_tzinfo = pytz.timezone('UTC') 
+
+dbmod.dtmod = 'py' # Python datetime module.
+dbmod.paramstyle = 'qmark'
+
+cn = dbmod.connect(host='myhost', db='mydb', user='me', passwd='mypw')
+cs = cn.cursor()
+
+# Make sure that MySQL server will send and receive dates in UTC.
+
+cs.execute("SET time_zone='+00:00'")
+
+# Prepare query parameter.
+# Please note that param object has tzinfo set. This is absolutely 
+# necessary for internal timezone conversions to be correct.
+
+param = datetime.datetime.now(pytz.timezone('America/Vancouver')) 
+
+cs.execute("SELECT TIMESTAMP(?)", [ p ])
+
+# The following line will print the correct current time in your current 
+# local timezone.
+
+print cs.fetchone()[0]
+
+"""
+
+__revision__ = 0.1
+
+import dbtime
+import dbexceptions
+import paramstyles
+
+class MWrapper(object):
+    """Wraps DBAPI2 driver."""
+    def __init__(self, driver, drivername):
+        object.__init__(self)
+        self._driver = driver
+        self._drivername = drivername
+        # Remove the backslash, if present, from the paramstyle escape chars.
+        if '\\' in paramstyles.ESCAPE_CHARS:
+            paramstyles.ESCAPE_CHARS.remove('\\')
+        # Although DB API 2.0 says nothing about a BOOLEAN type, it is a
+        # good idea to convert booleans in the DB result to the native
+        # Python bool type. Python's bool cannot be subclassed, so there
+        # is no way to tell whether, for example, a PgBoolean or similar
+        # class is in fact a boolean value. This feature is required by,
+        # for example, XML-RPC and JSON marshallers.
+        self._convert_bool = False
+        if hasattr(driver, 'BOOLEAN'):
+            if driver.BOOLEAN != bool:
+                self._convert_bool = True
+        # Check for driver specific configuration.
+        try:
+            self._config = __import__('config_' + drivername, globals())
+            # Run init1 in config.
+            if hasattr(self._config, 'init1'):
+                self._config.init1(self)
+            # Set up escape and quote characters.
+            if hasattr(self._config, 'escape_chars'):
+                paramstyles.ESCAPE_CHARS.extend(self._config.escape_chars)
+            if hasattr(self._config, 'quote_chars'):
+                paramstyles.QUOTE_CHARS.extend(self._config.quote_chars)
+        except ImportError:
+            self._config = False
+        self.__use_db_row = False # default
+        # Set up module attributes.
+        self.apilevel = '2.0'
+        # This will change later.  It will pass through the driver's
+        # threadsafety level.
+        self.threadsafety = 0
+        # May be changed dynamically.  Default is qmark.
+        self.paramstyle = 'qmark'
+        # This is the datetime module used.
+        # Possible values are py, mx, native.
+        # Zope types are to be added.
+        self.__dtmod = 'py'
+        # These use the native driver's types.
+        self.DATETIME = self._driver.DATETIME
+        self.STRING = self._driver.STRING
+        self.BINARY = self._driver.BINARY
+        self.NUMBER = self._driver.NUMBER
+        self.ROWID = self._driver.ROWID
+        dbexceptions._setExceptions(self)
+        # Run init2 in config.
+        if hasattr(self._config, 'init2'):
+            self._config.init2(self)
+
+    def __getDtMod(self):
+        return self.__dtmod
+
+    def __setDtMod(self, dtmodname):
+        assert dtmodname in ('py', 'mx', 'native')
+        if dtmodname == 'py':
+            if not dbtime.have_datetime:
+                raise Exception, 'datetime module not available.'
+        elif dtmodname == 'mx':
+            if not dbtime.have_mxDateTime:
+                raise Exception, 'mx.DateTime module not available.'
+        self.__dtmod = dtmodname
+
+    dtmod = property(__getDtMod, __setDtMod)
+
+    def __getUseDbRow(self):
+        return self.__use_db_row
+
+    def __setUseDbRow(self, use_db_row):
+        if use_db_row:
+            import db_row
+            globals()['db_row'] = db_row
+        self.__use_db_row = use_db_row
+
+    use_db_row = property(__getUseDbRow, __setUseDbRow)
+
+    # All date constructors must be consistent with the date type we have 
+    # chosen.
+    def Date(self, year, month, day):
+        if self.dtmod == 'native':
+            result = self._driver.Date(year, month, day)
+        else:
+            result = dbtime.construct_date(self.dtmod, year, month, day)
+        return result
+
+    def Time(self, hour, minute, second):
+        if self.dtmod == 'native':
+            result = self._driver.Time(hour, minute, second)
+        else:
+            result = dbtime.construct_time(self.dtmod, hour, minute, second)
+        return result
+
+    def Timestamp(self, year, month, day, hour, minute, second):
+        if self.dtmod == 'native':
+            result = self._driver.Timestamp(year, month, day, hour, minute,
+                                            second)
+        else:
+            result = dbtime.construct_timestamp(self.dtmod, year, month, day,
+                                                hour, minute, second)
+        return result
+
+    def DateFromTicks(self, ticks):
+        if self.dtmod == 'native':
+            result = self._driver.DateFromTicks(ticks)
+        else:
+            result = dbtime.construct_datefromticks(self.dtmod, ticks)
+        return result
+
+    def TimeFromTicks(self, ticks):
+        if self.dtmod == 'native':
+            result = self._driver.TimeFromTicks(ticks)
+        else:
+            result = dbtime.construct_timefromticks(self.dtmod, ticks)
+        return result
+
+    def TimestampFromTicks(self, ticks):
+        if self.dtmod == 'native':
+            result = self._driver.TimestampFromTicks(ticks)
+        else:
+            result = dbtime.construct_timestampfromticks(self.dtmod, ticks)
+        return result
+
+    def Binary(self, string):
+        return self._driver.Binary(string)
+
+    def connect(self, *args, **kwargs):
+        """Return connection object."""
+        return Connection(self, *args, **kwargs)
+
+class Connection(object):
+    """Wrapper for connection object."""
+    def __init__(self, mwrapper, *args, **kwargs):
+        object.__init__(self)
+        self._mwrapper = mwrapper
+        self._native_cn = mwrapper._driver.connect(*args, **kwargs)
+
+    def close(self):
+        return self._native_cn.close()
+
+    def commit(self):
+        return self._native_cn.commit()
+
+    def rollback(self):
+        return self._native_cn.rollback()
+
+    def cursor(self):
+        """Return a wrapped cursor."""
+        return Cursor(self, self._native_cn)
+
+class Cursor(object):
+    """Wrapper for cursor object."""
+    def __init__(self, wrapper_cn, native_cn):
+        object.__init__(self)
+        self._wrapper_cn = wrapper_cn
+        self._mwrapper = wrapper_cn._mwrapper
+        self._driver = self._mwrapper._driver
+        self._drivername = self._mwrapper._drivername
+        self._native_cs = native_cn.cursor()
+        # arraysize should initialize at 1
+        self._native_cs.arraysize = 1
+        self._siface = False # This will probably go away.
+        self._datetimeo = True # This will also go away.
+        self.dtmod = self._mwrapper.dtmod # Takes default from wrapper.
+        self.__use_db_row = self._mwrapper.use_db_row
+        self.__paramstyle = self._mwrapper.paramstyle
+
+    def __getDbRow(self):
+        """Return value of use_db_row for cursor."""
+        return self.__use_db_row
+
+    def __setDbRow(self, use_db_row):
+        """Set value of use_db_row for cursor."""
+        if use_db_row:
+            import db_row
+            globals()['db_row'] = db_row
+        self.__use_db_row = use_db_row
+
+    use_db_row = property(__getDbRow, __setDbRow)
+
+    def __getParamstyle(self):
+        """Return value of paramstyle for cursor."""
+        return self.__paramstyle
+
+    def __setParamstyle(self, paramstyle):
+        """Set value of paramstyle for cursor."""
+        self.__paramstyle = paramstyle
+
+    paramstyle = property(__getParamstyle, __setParamstyle)
+
+
+    def __getDescription(self):
+        return self._native_cs.description
+
+    description = property(__getDescription)
+    
+    def __getRowCount(self):
+        return self._native_cs.rowcount
+
+    rowcount = property(__getRowCount)
+
+    def __getArraySize(self):
+        return self._native_cs.arraysize
+
+    def __setArraySize(self, new_array_size):
+        self._native_cs.arraysize = new_array_size 
+
+    arraysize = property(__getArraySize, __setArraySize)
+
+    def setinputsizes(self, sizes):
+        """Do Nothing"""
+        pass
+
+    def setoutputsize(self, size, column=None):
+        """Do Nothing"""
+        pass
+
+    def execute(self, query, params=None):
+        if params == None:
+            return self._native_cs.execute(query)
+        else:
+            newquery, newparams = self.__formatQueryParams(query, params)
+            return self._native_cs.execute(newquery, newparams)
+
+    def executemany(self, query, params=None):
+        # very inefficient
+        if params == None:
+            return self._native_cs.executemany(query)
+        else:
+            newparams = []
+            for pset in params:
+                newquery, newpset = self.__formatQueryParams(query, pset)
+                newparams.append(newpset)
+            return self._native_cs.executemany(newquery, newparams)
+
+    def fetchone(self):
+        """Like DBAPI2."""
+        native_cs = self._native_cs
+        result = native_cs.fetchone()
+        # Do not format None.
+        # Do not format if formatting not required.
+        if result != None and self.__doFormatResults():
+            new_result = self.__formatResults([result])[0]
+        elif result != None:
+            new_result = result
+        else:
+            new_result = None
+        return new_result
+
+    def fetchmany(self, size=None):
+        """Like DBAPI2."""
+        native_cs = self._native_cs
+        if size == None:
+            size = self._native_cs.arraysize
+        results = native_cs.fetchmany(size)
+        # Do not format None.
+        # Do not format if formatting not required.
+        if results != [] and self.__doFormatResults():
+            new_results = self.__formatResults(results)
+        elif results != []:
+            new_results = results
+        else:
+            new_results = [] 
+        return new_results
+
+    def close(self):
+        return self._native_cs.close()
+
+    def fetchall(self):
+        """Like DBAPI2."""
+        native_cs = self._native_cs
+        results = native_cs.fetchall()
+        # Do not format None.
+        # Do not format if formatting not required.
+        if results != None and self.__doFormatResults():
+            new_results = self.__formatResults(results)
+        elif results != None:
+            new_results = results
+        else:
+            new_results = None
+        return new_results
+
+    def __doFormatResults(self):
+        """Check to see if there is reason to format results."""
+        native_dt = self._mwrapper.dtmod == 'native'
+        use_db_row = self.use_db_row
+        if (native_dt) and (not use_db_row):
+            return False
+        else:
+            return True
+
+    def __formatResults(self, results):
+        """Format result set before returning."""
+        if type(results) == tuple:
+            results = list(results)
+        desc = self._native_cs.description
+        typelist = [descitem[1] for descitem in desc]
+        # initialize metarow
+        if self.use_db_row:
+            metarow = db_row.IMetaRow(desc)
+        # Do we have a custom datetime conversion function?
+        if hasattr(self._mwrapper._config, 'convertdt'):
+            cdtfunc = self._mwrapper._config.convertdt
+        else:
+            cdtfunc = None
+        # check for date types in description
+        datepos = []
+        boolpos = []
+        if self._datetimeo:
+            for i in range(len(typelist)):
+                if typelist[i] == self._driver.DATETIME:
+                    datepos.append(i)
+                elif self._mwrapper._convert_bool and typelist[i] == self._driver.BOOLEAN:
+                    boolpos.append(i)
+        # loop through data to make changes
+        for i in xrange(len(results)):
+            set = results[i]
+            # make datetime objects
+            if len(datepos) > 0 or len(boolpos) > 0:
+                newrow = list(set) # b/c tuple is immutable
+                for rownum in datepos:
+                    ##dto = newrow[rownum] # datetime object
+                    dt_type = typelist[rownum]
+                    ##driver = self._driver # driver
+                    inputdt = newrow[rownum]
+                    dtpref = self._mwrapper.dtmod
+                    # don't change date if set to native
+                    if self._mwrapper.dtmod != 'native':
+                        newrow[rownum] = dbtime.native2pref(inputdt, dtpref,
+                                                            dt_type, cdtfunc)
+                    ##newrow[rownum] = self.__mkdatetime(newrow[rownum])
+                for rownum in boolpos:
+                    if newrow[rownum] == None:
+                        continue
+                    if newrow[rownum]:
+                        newrow[rownum] = True
+                    else:
+                        newrow[rownum] = False
+                set = tuple(newrow) # back to a tuple
+            # db_row magic
+            if self.use_db_row:
+                results[i] = metarow(set)
+            else:
+                results[i] = set
+        return results
+
+    def __formatQueryParams(self, query, params):
+        # transform datetime args to native module objects
+        params = dbtime.dtsubnative(self._mwrapper.dtmod, self._driver, params)
+        pstyle1 = self.paramstyle
+        pstyle2 = self._driver.paramstyle
+        ##print pstyle1, pstyle2
+        return paramstyles.convert(pstyle1, pstyle2, query, params)
+
+# public module functions ****************************************
+
+def connect(*args, **kwargs):
+    """Return Connection wrapper object."""
+    return Connection(*args, **kwargs)
+
+def wrapdriver(driver_name, driver_alias=None):
+    """Wrap native driver."""
+    if driver_alias == None:
+        driver_alias = driver_name
+    try:
+        driver = __import__(driver_name, fromlist = [ '' ])
+    except ImportError:
+        raise
+    # create the MWrapper instance
+    mwrapper = MWrapper(driver, driver_alias)
+    return mwrapper

dal/dbapi/dbapi20.txt

+
+               Python Database API Specification v2.0
+
+---------------------------------------------------------------------------
+             http://www.python.org/peps/pep-0249.html
+---------------------------------------------------------------------------
+
+Introduction
+
+    This API has been defined to encourage similarity between the
+    Python modules that are used to access databases.  By doing this,
+    we hope to achieve a consistency leading to more easily understood
+    modules, code that is generally more portable across databases,
+    and a broader reach of database connectivity from Python.
+    
+    The interface specification consists of several sections:
+    
+        * Module Interface
+        * Connection Objects
+        * Cursor Objects
+        * DBI Helper Objects
+        * Type Objects and Constructors
+        * Implementation Hints
+        * Major Changes from 1.0 to 2.0
+    
+    Comments and questions about this specification may be directed
+    to the SIG for Database Interfacing with Python
+    (db-sig@python.org).
+
+    For more information on database interfacing with Python and
+    available packages see the Database Topic
+    Guide at http://www.python.org/topics/database/.
+
+    This document describes the Python Database API Specification 2.0
+    and a set of common optional extensions.  The previous version,
+    1.0, is still available as reference, in PEP 248. Package
+    writers are encouraged to use this version of the specification as
+    basis for new interfaces.
+
+
+Module Interface
+
+    Access to the database is made available through connection
+    objects. The module must provide the following constructor for
+    these:
+
+        connect(parameters...)
+
+            Constructor for creating a connection to the database.
+            Returns a Connection Object. It takes a number of
+            parameters which are database dependent. [1]
+        
+    These module globals must be defined:
+
+        apilevel
+
+            String constant stating the supported DB API level.
+            Currently only the strings '1.0' and '2.0' are allowed.
+            
+            If not given, a DB-API 1.0 level interface should be
+            assumed.
+            
+        threadsafety
+
+            Integer constant stating the level of thread safety the
+            interface supports. Possible values are:
+
+                0     Threads may not share the module.
+                1     Threads may share the module, but not connections.
+                2     Threads may share the module and connections.
+                3     Threads may share the module, connections and
+                      cursors.
+
+            Sharing in the above context means that two threads may
+            use a resource without wrapping it using a mutex semaphore
+            to implement resource locking. Note that you cannot always
+            make external resources thread safe by managing access
+            using a mutex: the resource may rely on global variables
+            or other external sources that are beyond your control.
+
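    For illustration (this sketch is not part of the specification
    text), an application might gate connection sharing on this
    constant; the driver classes below are invented stand-ins:

```python
def connections_shareable(driver_module):
    # Per the table above, level 2 or higher means threads may share
    # connection objects without wrapping them in a mutex.
    level = getattr(driver_module, 'threadsafety', 0)
    return level >= 2

class FakeDriver(object):
    # Invented stand-in: threads may share the module, not connections.
    threadsafety = 1

class FakeSafeDriver(object):
    # Invented stand-in: threads may also share connections.
    threadsafety = 2
```

    Here `connections_shareable(FakeDriver)` is False while
    `connections_shareable(FakeSafeDriver)` is True.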
+        paramstyle
+          
+            String constant stating the type of parameter marker
+            formatting expected by the interface. Possible values are
+            [2]:
+
+                'qmark'         Question mark style, 
+                                e.g. '...WHERE name=?'
+                'numeric'       Numeric, positional style, 
+                                e.g. '...WHERE name=:1'
+                'named'         Named style, 
+                                e.g. '...WHERE name=:name'
+                'format'        ANSI C printf format codes, 
+                                e.g. '...WHERE name=%s'
+                'pyformat'      Python extended format codes, 
+                                e.g. '...WHERE name=%(name)s'
+
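    For illustration (not part of the specification text), here is the
    same logical query rendered in each style; parameter binding is
    done by the driver, so these strings are never formatted directly:

```python
# One logical query, written once per DB-API paramstyle.  The second
# element of each pair is the matching parameter collection.
QUERY_BY_STYLE = {
    'qmark':    ("SELECT name FROM people WHERE name = ?",        ('Alice',)),
    'numeric':  ("SELECT name FROM people WHERE name = :1",       ('Alice',)),
    'named':    ("SELECT name FROM people WHERE name = :name",    {'name': 'Alice'}),
    'format':   ("SELECT name FROM people WHERE name = %s",       ('Alice',)),
    'pyformat': ("SELECT name FROM people WHERE name = %(name)s", {'name': 'Alice'}),
}
```

    A wrapper such as PyDal translates between these styles, which is
    what lets application code standardize on one of them.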
+    The module should make all error information available through
+    these exceptions or subclasses thereof:
+
+        Warning 
+            
+            Exception raised for important warnings like data
+            truncations while inserting, etc. It must be a subclass of
+            the Python StandardError (defined in the module
+            exceptions).
+            
+        Error 
+
+            Exception that is the base class of all other error
+            exceptions. You can use this to catch all errors with one
+            single 'except' statement. Warnings are not considered
+            errors and thus should not use this class as base. It must
+            be a subclass of the Python StandardError (defined in the
+            module exceptions).
+            
+        InterfaceError
+
+            Exception raised for errors that are related to the
+            database interface rather than the database itself.  It
+            must be a subclass of Error.
+
+        DatabaseError
+
+            Exception raised for errors that are related to the
+            database.  It must be a subclass of Error.
+            
+        DataError
+          
+            Exception raised for errors that are due to problems with
+            the processed data like division by zero, numeric value
+            out of range, etc. It must be a subclass of DatabaseError.
+            
+        OperationalError
+          
+            Exception raised for errors that are related to the
+            database's operation and not necessarily under the control
+            of the programmer, e.g. an unexpected disconnect occurs,
+            the data source name is not found, a transaction could not
+            be processed, a memory allocation error occurred during
+            processing, etc.  It must be a subclass of DatabaseError.
+            
+        IntegrityError             
+          
+            Exception raised when the relational integrity of the
+            database is affected, e.g. a foreign key check fails.  It
+            must be a subclass of DatabaseError.
+            
+        InternalError 
+                      
+            Exception raised when the database encounters an internal
+            error, e.g. the cursor is not valid anymore, the
+            transaction is out of sync, etc.  It must be a subclass of
+            DatabaseError.
+            
+        ProgrammingError
+          
+            Exception raised for programming errors, e.g. table not
+            found or already exists, syntax error in the SQL
+            statement, wrong number of parameters specified, etc.  It
+            must be a subclass of DatabaseError.
+            
+        NotSupportedError
+          
+            Exception raised in case a method or database API was used
+            which is not supported by the database, e.g. requesting a
+            .rollback() on a connection that does not support
+            transaction or has transactions turned off.  It must be a
+            subclass of DatabaseError.
+        
+    This is the exception inheritance layout:
+
+        StandardError
+        |__Warning
+        |__Error
+           |__InterfaceError
+           |__DatabaseError
+              |__DataError
+              |__OperationalError
+              |__IntegrityError
+              |__InternalError
+              |__ProgrammingError
+              |__NotSupportedError
+        
+    Note: The values of these exceptions are not defined. They should
+    give the user a fairly good idea of what went wrong, though.
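    A minimal sketch of this layout (class bodies are placeholders,
    and Exception stands in for StandardError on modern Pythons),
    showing why one `except Error` handler catches every driver error:

```python
# Skeleton of the inheritance layout above.
class Error(Exception): pass
class InterfaceError(Error): pass
class DatabaseError(Error): pass
class OperationalError(DatabaseError): pass

def lossy_call():
    # Invented stand-in for a driver call that loses its connection.
    raise OperationalError('server closed the connection')

def guarded_call():
    try:
        lossy_call()
    except Error as exc:   # a single handler catches all subclasses
        return 'failed: %s' % exc
    return 'ok'
```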
+        
+
+
+Connection Objects
+
+    Connection Objects should respond to the following methods:
+
+        .close() 
+          
+            Close the connection now (rather than whenever __del__ is
+            called).  The connection will be unusable from this point
+            forward; an Error (or subclass) exception will be raised
+            if any operation is attempted with the connection. The
+            same applies to all cursor objects trying to use the
+            connection.  Note that closing a connection without
+            committing the changes first will cause an implicit
+            rollback to be performed.
+
+            
+        .commit()
+          
+            Commit any pending transaction to the database. Note that
+            if the database supports an auto-commit feature, this must
+            be initially off. An interface method may be provided to
+            turn it back on.
+            
+            Database modules that do not support transactions should
+            implement this method with void functionality.
+            
+        .rollback() 
+          
+            This method is optional since not all databases provide
+            transaction support. [3]
+            
+            In case a database does provide transactions this method
+            causes the database to roll back to the start of any
+            pending transaction.  Closing a connection without
+            committing the changes first will cause an implicit
+            rollback to be performed.
+            
+        .cursor()
+          
+            Return a new Cursor Object using the connection.  If the
+            database does not provide a direct cursor concept, the
+            module will have to emulate cursors using other means to
+            the extent needed by this specification.  [4]
+            
+
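+    A minimal round trip through these four methods, using Python's
+    bundled sqlite3 driver (itself DBAPI 2.0 compliant; any conforming
+    module would look the same).  The in-memory database just keeps the
+    example self-contained:

```python
import sqlite3

cn = sqlite3.connect(':memory:')     # a DBAPI 2.0 connection object
cs = cn.cursor()                     # .cursor(): new cursor on this connection
cs.execute('create table t (x integer)')
cs.execute('insert into t values (1)')
cn.commit()                          # .commit(): make the first insert permanent
cs.execute('insert into t values (2)')
cn.rollback()                        # .rollback(): discard the second insert
cs.execute('select count(*) from t')
print(cs.fetchone()[0])              # only the committed row survives: 1
cn.close()                           # .close(): connection unusable from here on
```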
+
+Cursor Objects
+
+    These objects represent a database cursor, which is used to
+    manage the context of a fetch operation. Cursors created from 
+    the same connection are not isolated, i.e., any changes
+    done to the database by a cursor are immediately visible by the
+        other cursors. Cursors created from different connections may
+        or may not be isolated, depending on how the transaction support
+    is implemented (see also the connection's rollback() and commit() 
+    methods.)
+        
+    Cursor Objects should respond to the following methods and
+    attributes:
+
+        .description 
+          
+            This read-only attribute is a sequence of 7-item
+            sequences.  Each of these sequences contains information
+            describing one result column: (name, type_code,
+            display_size, internal_size, precision, scale,
+            null_ok). The first two items (name and type_code) are
+            mandatory, the other five are optional and must be set to
+            None if meaningful values are not provided.
+
+            This attribute will be None for operations that
+            do not return rows or if the cursor has not had an
+            operation invoked via the executeXXX() method yet.
+            
+            The type_code can be interpreted by comparing it to the
+            Type Objects specified in the section below.
+            
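+            For example, the sqlite3 driver fills in only the mandatory
+            name item of each 7-item sequence and sets the other six to
+            None:

```python
import sqlite3

cn = sqlite3.connect(':memory:')
cs = cn.cursor()
cs.execute('create table t (name text, age integer)')
cs.execute('select name, age from t')
# One 7-item sequence per result column; sqlite3 supplies the name only.
print([col[0] for col in cs.description])   # ['name', 'age']
print(len(cs.description[0]))               # 7
```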
+        .rowcount 
+          
+            This read-only attribute specifies the number of rows that
+            the last executeXXX() produced (for DQL statements like
+            'select') or affected (for DML statements like 'update' or
+            'insert').
+            
+            The attribute is -1 in case no executeXXX() has been
+            performed on the cursor or the rowcount of the last
+            operation is not determinable by the interface. [7]
+
+            Note: Future versions of the DB API specification could
+            redefine the latter case to have the object return None
+            instead of -1.
+            
+        .callproc(procname[,parameters])
+          
+            (This method is optional since not all databases provide
+            stored procedures. [3])
+            
+            Call a stored database procedure with the given name. The
+            sequence of parameters must contain one entry for each
+            argument that the procedure expects. The result of the
+            call is returned as a modified copy of the input
+            sequence. Input parameters are left untouched; output and
+            input/output parameters are replaced with possibly new values.
+            
+            The procedure may also provide a result set as
+            output. This must then be made available through the
+            standard fetchXXX() methods.
+            
+        .close()
+          
+            Close the cursor now (rather than whenever __del__ is
+            called).  The cursor will be unusable from this point
+            forward; an Error (or subclass) exception will be raised
+            if any operation is attempted with the cursor.
+            
+        .execute(operation[,parameters]) 
+          
+            Prepare and execute a database operation (query or
+            command).  Parameters may be provided as sequence or
+            mapping and will be bound to variables in the operation.
+            Variables are specified in a database-specific notation
+            (see the module's paramstyle attribute for details). [5]
+            
+            A reference to the operation will be retained by the
+            cursor.  If the same operation object is passed in again,
+            then the cursor can optimize its behavior.  This is most
+            effective for algorithms where the same operation is used,
+            but different parameters are bound to it (many times).
+            
+            For maximum efficiency when reusing an operation, it is
+            best to use the setinputsizes() method to specify the
+            parameter types and sizes ahead of time.  It is legal for
+            a parameter to not match the predefined information; the
+            implementation should compensate, possibly with a loss of
+            efficiency.
+            
+            The parameters may also be specified as list of tuples to
+            e.g. insert multiple rows in a single operation, but this
+            kind of usage is deprecated: executemany() should be used
+            instead.
+            
+            Return values are not defined.
+            
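+            A parametrized execute() with sqlite3, whose paramstyle is
+            'qmark'; other drivers may use 'format', 'named', etc.:

```python
import sqlite3

cn = sqlite3.connect(':memory:')
cs = cn.cursor()
cs.execute('create table users (name text, age integer)')
# Values are bound to the ? placeholders ('qmark' paramstyle);
# never splice them into the SQL string yourself.
cs.execute('insert into users values (?, ?)', ('alice', 30))
cs.execute('select age from users where name = ?', ('alice',))
print(cs.fetchone()[0])   # 30
```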
+        .executemany(operation,seq_of_parameters) 
+          
+            Prepare a database operation (query or command) and then
+            execute it against all parameter sequences or mappings
+            found in the sequence seq_of_parameters.
+            
+            Modules are free to implement this method using multiple
+            calls to the execute() method or by using array operations
+            to have the database process the sequence as a whole in
+            one call.
+            
+            Use of this method for an operation which produces one or
+            more result sets constitutes undefined behavior, and the
+            implementation is permitted (but not required) to raise 
+            an exception when it detects that a result set has been
+            created by an invocation of the operation.
+            
+            The same comments as for execute() also apply accordingly
+            to this method.
+            
+            Return values are not defined.
+            
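+            The same kind of insert done once per parameter tuple via
+            executemany(), again with sqlite3:

```python
import sqlite3

cn = sqlite3.connect(':memory:')
cs = cn.cursor()
cs.execute('create table t (x integer)')
# One operation, three parameter tuples -- the driver may loop over
# execute() internally or hand the batch to the database in one call.
cs.executemany('insert into t values (?)', [(1,), (2,), (3,)])
cs.execute('select sum(x) from t')
print(cs.fetchone()[0])   # 6
```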
+        .fetchone() 
+          
+            Fetch the next row of a query result set, returning a
+            single sequence, or None when no more data is
+            available. [6]
+            
+            An Error (or subclass) exception is raised if the previous
+            call to executeXXX() did not produce any result set or no
+            call was issued yet.
+
+        .fetchmany([size=cursor.arraysize])
+          
+            Fetch the next set of rows of a query result, returning a
+            sequence of sequences (e.g. a list of tuples). An empty
+            sequence is returned when no more rows are available.
+            
+            The number of rows to fetch per call is specified by the
+            parameter.  If it is not given, the cursor's arraysize
+            determines the number of rows to be fetched. The method
+            should try to fetch as many rows as indicated by the size
+            parameter. If this is not possible due to the specified
+            number of rows not being available, fewer rows may be
+            returned.
+            
+            An Error (or subclass) exception is raised if the previous
+            call to executeXXX() did not produce any result set or no
+            call was issued yet.
+            
+            Note there are performance considerations involved with
+            the size parameter.  For optimal performance, it is
+            usually best to use the arraysize attribute.  If the size
+            parameter is used, then it is best for it to retain the
+            same value from one fetchmany() call to the next.
+            
+        .fetchall() 
+
+            Fetch all (remaining) rows of a query result, returning
+            them as a sequence of sequences (e.g. a list of tuples).
+            Note that the cursor's arraysize attribute can affect the
+            performance of this operation.
+            
+            An Error (or subclass) exception is raised if the previous
+            call to executeXXX() did not produce any result set or no
+            call was issued yet.
+            
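+            The three fetch methods side by side, on a sqlite3 cursor
+            holding five rows:

```python
import sqlite3

cn = sqlite3.connect(':memory:')
cs = cn.cursor()
cs.execute('create table t (x integer)')
cs.executemany('insert into t values (?)', [(i,) for i in range(5)])

cs.execute('select x from t order by x')
first = cs.fetchone()     # single row:       (0,)
batch = cs.fetchmany(2)   # next two rows:    [(1,), (2,)]
rest  = cs.fetchall()     # everything left:  [(3,), (4,)]
print(first, batch, rest)
```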
+        .nextset() 
+          
+            (This method is optional since not all databases support
+            multiple result sets. [3])
+            
+            This method will make the cursor skip to the next
+            available set, discarding any remaining rows from the
+            current set.
+            
+            If there are no more sets, the method returns
+            None. Otherwise, it returns a true value and subsequent
+            calls to the fetch methods will return rows from the next
+            result set.
+            
+            An Error (or subclass) exception is raised if the previous
+            call to executeXXX() did not produce any result set or no
+            call was issued yet.
+
+        .arraysize
+          
+            This read/write attribute specifies the number of rows to
+            fetch at a time with fetchmany(). It defaults to 1, meaning
+            that a single row is fetched at a time.
+            
+            Implementations must observe this value with respect to
+            the fetchmany() method, but are free to interact with the
+            database a single row at a time. It may also be used in
+            the implementation of executemany().
+            
+        .setinputsizes(sizes)
+          
+            This can be used before a call to executeXXX() to
+            predefine memory areas for the operation's parameters.
+            
+            sizes is specified as a sequence -- one item for each
+            input parameter.  The item should be a Type Object that
+            corresponds to the input that will be used, or it should
+            be an integer specifying the maximum length of a string
+            parameter.  If the item is None, then no predefined memory
+            area will be reserved for that column (this is useful to
+            avoid predefined areas for large inputs).
+            
+            This method would be used before the executeXXX() method
+            is invoked.
+            
+            Implementations are free to have this method do nothing
+            and users are free to not use it.
+            
+        .setoutputsize(size[,column])
+          
+            Set a column buffer size for fetches of large columns
+            (e.g. LONGs, BLOBs, etc.).  The column is specified as an
+            index into the result sequence.  Not specifying the column
+            will set the default size for all large columns in the
+            cursor.
+            
+            This method would be used before the executeXXX() method
+            is invoked.
+            
+            Implementations are free to have this method do nothing
+            and users are free to not use it.
+            
+
+
+Type Objects and Constructors
+
+    Many databases need to have the input in a particular format for
+    binding to an operation's input parameters.  For example, if an
+    input is destined for a DATE column, then it must be bound to the
+    database in a particular string format.  Similar problems exist
+    for "Row ID" columns or large binary items (e.g. blobs or RAW
+    columns).  This presents problems for Python since the parameters
+    to the executeXXX() method are untyped.  When the database module
+    sees a Python string object, it doesn't know if it should be bound
+    as a simple CHAR column, as a raw BINARY item, or as a DATE.
+
+    To overcome this problem, a module must provide the constructors
+    defined below to create objects that can hold special values.
+    When passed to the cursor methods, the module can then detect the
+    proper type of the input parameter and bind it accordingly.
+
+    A Cursor Object's description attribute returns information about
+    each of the result columns of a query.  The type_code must compare
+    equal to one of the Type Objects defined below. Type Objects may be
+    equal to more than one type code (e.g. DATETIME could be equal to
+    the type codes for date, time and timestamp columns; see the
+    Implementation Hints below for details).
+
+    The module exports the following constructors and singletons:
+        
+        Date(year,month,day)
+
+            This function constructs an object holding a date value.
+            
+        Time(hour,minute,second)
+
+            This function constructs an object holding a time value.
+            
+        Timestamp(year,month,day,hour,minute,second)
+
+            This function constructs an object holding a time stamp
+            value.
+
+        DateFromTicks(ticks)
+
+            This function constructs an object holding a date value
+            from the given ticks value (number of seconds since the
+            epoch; see the documentation of the standard Python time
+            module for details).
+
+        TimeFromTicks(ticks)
+          
+            This function constructs an object holding a time value
+            from the given ticks value (number of seconds since the
+            epoch; see the documentation of the standard Python time
+            module for details).
+            
+        TimestampFromTicks(ticks)
+
+            This function constructs an object holding a time stamp
+            value from the given ticks value (number of seconds since
+            the epoch; see the documentation of the standard Python
+            time module for details).
+
+        Binary(string)
+          
+            This function constructs an object capable of holding a
+            binary (long) string value.
+            
+
+        STRING
+
+            This type object is used to describe columns in a database
+            that are string-based (e.g. CHAR).
+
+        BINARY
+
+            This type object is used to describe (long) binary columns
+            in a database (e.g. LONG, RAW, BLOBs).
+            
+        NUMBER
+
+            This type object is used to describe numeric columns in a
+            database.
+
+        DATETIME
+          
+            This type object is used to describe date/time columns in
+            a database.
+            
+        ROWID
+          
+            This type object is used to describe the "Row ID" column
+            in a database.
+            
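+    As exposed by the sqlite3 driver, for instance, the date/time
+    constructors simply build standard library datetime objects, and
+    Binary() wraps a byte string; other drivers are free to return
+    their own types:

```python
import datetime
import sqlite3

d  = sqlite3.Date(2024, 1, 31)        # module-level Date() constructor
ts = sqlite3.TimestampFromTicks(0)    # the epoch, as a timestamp
b  = sqlite3.Binary(b'\x00\x01')      # marks a value as (long) binary
print(type(d), type(ts))
```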
+