Sheila Allen committed 7e0bf2f Merge

merged mainline default branch


Files changed (53)

     on existing items.  Will still mark the attr as expired
     if the destination doesn't have the attr, though, which
     fulfills some contracts of deferred cols.  [ticket:1681]
-
+  
+  - Fixed bug in 0.6-reworked "many-to-one" optimizations
+    such that a many-to-one that is against a non-primary key
+    column on the remote table (i.e. foreign key against a 
+    UNIQUE column) will pull the "old" value in from the
+    database during a change, since if it's in the session
+    we will need it for proper history/backref accounting,
+    and we can't pull from the local identity map on a 
+    non-primary key column. [ticket:1737]
+    
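The situation this entry describes can be sketched in plain Python, independent of SQLAlchemy's internals (the names below are illustrative, not actual library code): the session's identity map is keyed by primary key only, so a reference made through a non-primary-key UNIQUE column cannot be resolved locally and the "old" value must come from a database round trip.

```python
# Sketch only (not SQLAlchemy code): why the "old" value of a many-to-one
# against a non-primary-key UNIQUE column requires a database fetch.

class Widget:
    def __init__(self, pk, code):
        self.pk = pk      # primary key
        self.code = code  # UNIQUE, but not the primary key

# the identity map is keyed by primary key only
identity_map = {1: Widget(1, "A")}

def get_by_pk(pk):
    # primary-key lookups can be served locally from the identity map
    return identity_map.get(pk)

db_fetches = []

def get_by_unique_code(code):
    # there is no lookup by "code" in the identity map, so the old value
    # has to be pulled from the database during a change
    db_fetches.append(code)
    return Widget(1, code)

local = get_by_pk(1)           # no database round trip needed
old = get_by_unique_code("A")  # a database round trip is required
```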
   - session.merge() works with relations that specifically
     don't include "merge" in their cascade options - the target
     is ignored completely.
     from_statement() to start with since it no longer modifies
     the query.  [ticket:1688]
 
+  - query.get() now returns None if queried for an identifier
+    that is present in the identity map with a different class 
+    than the one requested, i.e. when using polymorphic loading.  
+    [ticket:1727]
+    
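The behavior in that entry can be sketched without the ORM (illustrative classes only, not SQLAlchemy internals): a hit in the identity map whose class does not satisfy the requested class yields None rather than a wrongly-typed object.

```python
# Sketch only (not SQLAlchemy internals) of the described get() behavior
# under polymorphic loading.

class Employee: pass
class Engineer(Employee): pass
class Manager(Employee): pass

identity_map = {5: Engineer()}  # keyed by primary key

def get(cls, pk):
    obj = identity_map.get(pk)
    # polymorphic check: an Engineer satisfies a request for Employee,
    # but a request for Manager must return None
    if obj is not None and not isinstance(obj, cls):
        return None
    return obj

as_employee = get(Employee, 5)  # the Engineer instance
as_manager = get(Manager, 5)    # None: wrong class for this identity
```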
   - A major fix in query.join(), when the "on" clause is an
     attribute of an aliased() construct, but there is already
     an existing join made out to a compatible target, query properly
   - Python unicode objects as binds result in the Unicode type, 
     not string, thus eliminating a certain class of unicode errors
     on drivers that don't support unicode binds.
+
+  - Added "logging_name" argument to create_engine(), Pool() constructor
+    as well as "pool_logging_name" argument to create_engine() which
+    filters down to that of Pool.   Issues the given string name
+    within the "name" field of logging messages instead of the default
+    hex identifier string.  [ticket:1555]
     
+  - The visit_pool() method of Dialect is removed, and replaced with
+    on_connect().  This method returns a callable which receives
+    the raw DBAPI connection after each one is created.   The callable
+    is assembled into a first_connect/connect pool listener by the 
+    connection strategy if non-None.   Provides a simpler interface 
+    for dialects.
+        
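The on_connect() contract described above can be sketched as follows (hypothetical class and function names, not SQLAlchemy's actual connection strategy): the dialect returns a callable or None, and the strategy applies the callable to each raw DBAPI connection it creates.

```python
# Sketch of the on_connect() contract (hypothetical names): the dialect
# hands back a callable which receives each raw DBAPI connection.

class DemoDialect:
    def on_connect(self):
        def connect(conn):
            # per-connection setup, e.g. string/datetime format flags
            conn["configured"] = True
        return connect

def create_pool_connection(dialect, raw_connect):
    conn = raw_connect()        # the raw DBAPI connection (a dict here)
    fn = dialect.on_connect()
    if fn is not None:          # only wired up when non-None
        fn(conn)
    return conn

conn = create_pool_connection(DemoDialect(), dict)
```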
 - metadata
   - Added the ability to strip schema information when using
     "tometadata" by passing "schema=None" as an argument. If schema
    - "out" parameters require a type that is supported by
      cx_oracle.  An error will be raised if no cx_oracle
      type can be found.
+
    - Oracle 'DATE' now does not perform any result processing,
      as the DATE type in Oracle stores full date+time objects,
     so that's what you'll get.  Note that the generic types.Date
      type *will* still call value.date() on incoming values, 
      however.  When reflecting a table, the reflected type
      will be 'DATE'.
+
+   - Added preliminary support for Oracle's WITH_UNICODE
+     mode.  At the very least this establishes initial
+     support for cx_Oracle with Python 3.  When WITH_UNICODE
+     mode is used in Python 2.xx, a large and scary warning
+     is emitted asking that the user seriously consider
+     the usage of this difficult mode of operation.
+     [ticket:1670]
      
 - sqlite
    - Added "native_datetime=True" flag to create_engine().
      compatible with the "func.current_date()", which 
      will be returned as a string. [ticket:1685]
 
+- sybase
+   - Implemented a preliminary working dialect for Sybase
+     based on the Python-Sybase driver.  Handles table
+     creates/drops and basic round trip functionality.
+     Does not yet include reflection or comprehensive
+     support of unicode/special expressions/etc.
+     
 - examples
    - Changed the beaker cache example a bit to have a separate
      RelationCache option for lazyload caching.  This object
 The test suite will be creating and dropping many tables and other DDL, and
 preexisting tables will interfere with the tests.
 
-Several tests require alternate schemas to be present.   This requirement
-applies to all backends except SQLite and Firebird.   These schemas are:
+Several tests require alternate usernames or schemas to be present, which
+are used to test dotted-name access scenarios.  On some databases such
+as Oracle or Sybase, these are usernames; on others, such as Postgresql
+and MySQL, they are schemas.   The requirement applies to all backends
+except SQLite and Firebird.  The names are:
 
     test_schema
-    test_schema_2
+    test_schema_2 (only used on Postgresql)
 
 Please refer to your vendor documentation for the proper syntax to create 
-these schemas - the database user must have permission to create and drop
+these namespaces - the database user must have permission to create and drop
 tables within these schemas.  It's perfectly fine to run the test suite
-without these schemas present, it only means that a handful of tests which
+without these namespaces present; it only means that a handful of tests which
 expect them to be present will fail.
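As one concrete illustration, on Postgresql the names are plain schemas. This is a sketch only ("scott" is the example test login used elsewhere in this document); the exact syntax and privileges vary by vendor:

```sql
-- Postgresql sketch only; syntax and privileges differ on other backends.
CREATE SCHEMA test_schema;
CREATE SCHEMA test_schema_2;
-- the test login must be able to create and drop tables in both schemas
GRANT ALL ON SCHEMA test_schema TO scott;
GRANT ALL ON SCHEMA test_schema_2 TO scott;
```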
 
 Additional steps specific to individual databases are as follows:
 
-    ORACLE: the test_schema and test_schema_2 schemas are created as
-    users, as the "owner" in Oracle is considered like a "schema" in
-    SQLAlchemy.
+    ORACLE: a user named "test_schema" is created.
     
-     The primary database user needs to be able to create and drop tables,
-    synonyms, and constraints in these schemas. Unfortunately, many hours of
-    googling and experimentation cannot find a GRANT option that allows the
-    primary user the "REFERENCES" role in a remote schema for tables not yet
-    defined (REFERENCES is per-table) - the only thing that works is to put
-    the user in the "DBA" role:
+    The primary database user needs to be able to create and drop tables,
+    synonyms, and constraints within the "test_schema" user.   For this
+    to work fully, including that the user has the "REFERENCES" role
+    in a remote schema for tables not yet defined (REFERENCES is per-table),
+    it is required that the test user be present in the "DBA" role:
     
-     grant dba to scott;
+        grant dba to scott;
     
-     Any ideas on what specific privileges within "DBA" allow an open-ended
-    REFERENCES grant would be appreciated, or if in fact "DBA" has some kind
-    of "magic" flag not accessible otherwise. So, running SQLA tests on oracle
-    requires access to a completely open Oracle database - Oracle XE is
-    obviously a terrific choice since its just a local engine. As always,
-    leaving the schemas out means those few dozen tests will fail and is
-    otherwise harmless. 
-    
+    SYBASE: Similar to Oracle, "test_schema" is created as a user, and the
+    primary test user needs to have the "sa_role". 
+ 
+    It's also recommended to turn on "trunc log on chkpt" and to use a
+    separate transaction log device - Sybase basically seizes up when 
+    the transaction log is full otherwise.
+
+    A full series of setup steps, assuming sa/master: 
+   
+        disk init name="translog", physname="/opt/sybase/data/translog.dat", size="10M"
+        create database sqlalchemy on default log on translog="10M"
+        sp_dboption sqlalchemy, "trunc log on chkpt", true
+        sp_addlogin scott, "tiger7"
+        sp_addlogin test_schema, "tiger7"
+        use sqlalchemy
+        sp_adduser scott
+        sp_adduser test_schema
+        grant all to scott
+        sp_role "grant", sa_role, scott
+
+    Sybase will still freeze for up to a minute when the log becomes
+    full.  To manually dump the log:
+
+        dump tran sqlalchemy with truncate_only
 
     MSSQL: Tests that involve multiple connections require Snapshot Isolation
    ability implemented on the test database in order to prevent deadlocks that

doc/build/dbengine.rst

 
 Supported Databases
 ====================
-Recall that the :class:`~sqlalchemy.engine.base.Dialect` is used to describe how to talk to a specific kind of database.  Dialects are included with SQLAlchemy for many different backends; these can be seen as a Python package within the :mod:`~sqlalchemy.dialect` package.  Each dialect requires the appropriate DBAPI drivers to be installed separately.
 
-Dialects included with SQLAlchemy fall under one of three categories: supported, experimental, and third party.  Supported drivers are those which work against the most common databases available in the open source world, including SQLite, PostgreSQL, MySQL, and Firebird.   Very popular commercial databases which provide easy access to test platforms are also supported, these currently include MSSQL and Oracle.   These dialects are tested frequently and the level of support should be close to 100% for each.
+SQLAlchemy includes many :class:`~sqlalchemy.engine.base.Dialect` implementations for various 
+backends; each is described as its own package in the :ref:`sqlalchemy.dialects_toplevel` package.  A 
+SQLAlchemy dialect always requires that an appropriate DBAPI driver is installed.
 
-The experimental category is for drivers against less common database platforms, or commercial platforms for which no freely available and easily usable test platform is provided.   These include Access, MaxDB, Informix, and Sybase at the time of this writing.  These are not-yet-functioning
-or partially-functioning dialects for which the SQLAlchemy project is not able to provide regular test support.  If you're interested in supporting one of these backends, contact the mailing list.
+The table below summarizes the state of DBAPI support in SQLAlchemy 0.6.  The values 
+translate as:
 
-There are also third-party dialects available - currently IBM offers a DB2/Informix IDS dialect for SQLAlchemy.
+* yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.   
+* yes / OS platform - The DBAPI supports that platform.
+* no / Python platform - The DBAPI does not support that platform, or there is no SQLAlchemy dialect support.  
+* no / OS platform - The DBAPI does not support that platform.
+* partial - the DBAPI is partially usable on the target platform but has major unresolved issues.
+* development - a development version of the dialect exists, but is not yet usable.
+* thirdparty - the dialect itself is maintained by a third party, who should be consulted for
+  information on current support.
+* \* - indicates the given DBAPI is the "default" for SQLAlchemy, i.e. when just the database name is specified
 
-Downloads for each DBAPI at the time of this writing are as follows:
+=========================  ===========================  ===========  ===========   ===========  =================  ============
+Driver                     Connect string               Py2K         Py3K          Jython       Unix               Windows
+=========================  ===========================  ===========  ===========   ===========  =================  ============
+**DB2/Informix IDS**
+-------------------------------------------------------------------------------------------------------------------------------
+ibm-db_                    thirdparty                   thirdparty   thirdparty    thirdparty   thirdparty         thirdparty
+**Firebird**
+-------------------------------------------------------------------------------------------------------------------------------
+kinterbasdb_               ``firebird+kinterbasdb``\*   yes          development   no           yes                yes
+**Informix**
+-------------------------------------------------------------------------------------------------------------------------------
+informixdb_                ``informix+informixdb``\*    development  development   no           unknown            unknown
+**MaxDB**
+-------------------------------------------------------------------------------------------------------------------------------
+sapdb_                     ``maxdb+sapdb``\*            development  development   no           yes                unknown
+**Microsoft Access**
+-------------------------------------------------------------------------------------------------------------------------------
+pyodbc_                    ``access+pyodbc``\*          development  development   no           unknown            yes
+**Microsoft SQL Server**
+-------------------------------------------------------------------------------------------------------------------------------
+adodbapi_                  ``mssql+adodbapi``           development  development   no           no                 yes
+`jTDS JDBC Driver`_        ``mssql+zxjdbc``             no           no            development  yes                yes
+mxodbc_                    ``mssql+mxodbc``             yes          development   no           yes with FreeTDS_  yes
+pyodbc_                    ``mssql+pyodbc``\*           yes          development   no           yes with FreeTDS_  yes
+pymssql_                   ``mssql+pymssql``            development  development   no           yes                yes
+**MySQL**
+-------------------------------------------------------------------------------------------------------------------------------
+`MySQL Connector/J`_       ``mysql+zxjdbc``             no           no            yes          yes                yes
+`MySQL Connector/Python`_  ``mysql+mysqlconnector``     yes          partial       no           yes                yes
+mysql-python_              ``mysql+mysqldb``\*          yes          development   no           yes                yes
+OurSQL_                    ``mysql+oursql``             yes          partial       no           yes                yes
+**Oracle**
+-------------------------------------------------------------------------------------------------------------------------------
+cx_oracle_                 ``oracle+cx_oracle``\*       yes          development   no           yes                yes
+`Oracle JDBC Driver`_      ``oracle+zxjdbc``            no           no            yes          yes                yes
+**Postgresql**
+-------------------------------------------------------------------------------------------------------------------------------
+pg8000_                    ``postgresql+pg8000``        yes          yes           no           yes                yes
+`PostgreSQL JDBC Driver`_  ``postgresql+zxjdbc``        no           no            yes          yes                yes
+psycopg2_                  ``postgresql+psycopg2``\*    yes          development   no           yes                yes
+pypostgresql_              ``postgresql+pypostgresql``  no           partial       no           yes                yes
+**SQLite**
+-------------------------------------------------------------------------------------------------------------------------------
+pysqlite_                  ``sqlite+pysqlite``\*        yes          yes           no           yes                yes
+sqlite3_                   ``sqlite+pysqlite``\*        yes          yes           no           yes                yes
+**Sybase ASE**
+-------------------------------------------------------------------------------------------------------------------------------
+mxodbc_                    ``sybase+mxodbc``            development  development   no           yes                yes
+pyodbc_                    ``sybase+pyodbc``            development  development   no           unknown            unknown
+python-sybase_             ``sybase+pysybase``\*        partial      development   no           yes                yes
+=========================  ===========================  ===========  ===========   ===========  =================  ============
 
-* Supported Dialects
+.. _psycopg2: http://www.initd.org/
+.. _pg8000: http://pybrary.net/pg8000/
+.. _pypostgresql: http://python.projects.postgresql.org/
+.. _mysql-python: http://sourceforge.net/projects/mysql-python
+.. _MySQL Connector/Python: https://launchpad.net/myconnpy
+.. _OurSQL: http://packages.python.org/oursql/
+.. _PostgreSQL JDBC Driver: http://jdbc.postgresql.org/
+.. _sqlite3: http://docs.python.org/library/sqlite3.html
+.. _pysqlite: http://pypi.python.org/pypi/pysqlite/
+.. _MySQL Connector/J: http://dev.mysql.com/downloads/connector/j/
+.. _cx_Oracle: http://cx-oracle.sourceforge.net/
+.. _Oracle JDBC Driver: http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
+.. _kinterbasdb:  http://firebirdsql.org/index.php?op=devel&sub=python
+.. _pyodbc: http://code.google.com/p/pyodbc/
+.. _mxodbc: http://www.egenix.com/products/python/mxODBC/
+.. _FreeTDS: http://www.freetds.org/
+.. _adodbapi: http://adodbapi.sourceforge.net/
+.. _pymssql: http://pymssql.sourceforge.net/
+.. _jTDS JDBC Driver: http://jtds.sourceforge.net/
+.. _ibm-db: http://code.google.com/p/ibm-db/
+.. _informixdb: http://informixdb.sourceforge.net/
+.. _sapdb: http://www.sapdb.org/sapdbapi.html
+.. _python-sybase: http://python-sybase.sourceforge.net/
 
- - PostgreSQL:  `psycopg2 <http://www.initd.org/tracker/psycopg/wiki/PsycopgTwo>`_ * `pg8000 <http://pybrary.net/pg8000/>`_
- - PostgreSQL on Jython: `PostgreSQL JDBC Driver <http://jdbc.postgresql.org/>`_
- - SQLite:  `sqlite3 <http://www.python.org/doc/2.5.2/lib/module-sqlite3.html>`_ (included in Python 2.5 or greater) * `pysqlite <http://initd.org/tracker/pysqlite>`_
- - MySQL:   `MySQLdb (a.k.a. mysql-python) <http://sourceforge.net/projects/mysql-python>`_ * `MySQL Connector/Python <https://launchpad.net/myconnpy>`_ * `OurSQL <http://packages.python.org/oursql/>`_
- - MySQL on Jython: `MySQL Connector/J JDBC driver <http://dev.mysql.com/downloads/connector/j/>`_
- - Oracle:  `cx_Oracle <http://cx-oracle.sourceforge.net/>`_
- - Oracle on Jython:  `Oracle JDBC Driver <http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html>`_
- - Firebird:  `kinterbasdb <http://firebirdsql.org/index.php?op=devel&sub=python>`_
- - MS-SQL, MSAccess:  `pyodbc <http://pyodbc.sourceforge.net/>`_ (recommended) * `adodbapi <http://adodbapi.sourceforge.net/>`_ * `pymssql <http://pymssql.sourceforge.net/>`_
- - MS-SQL on Jython:  `jTDS JDBC Driver <http://jtds.sourceforge.net/>`_
-
-* Experimental Dialects
-
- - MSAccess:  `pyodbc <http://pyodbc.sourceforge.net/>`_
- - Informix:  `informixdb <http://informixdb.sourceforge.net/>`_
- - Sybase:   TODO
- - MAXDB:    `sapdb <http://www.sapdb.org/sapdbapi.html>`_
-
-* Third Party Dialects
-
- - DB2/Informix IDS: `ibm-db <http://code.google.com/p/ibm-db/>`_
-
-The SQLAlchemy Wiki contains a page of database notes, describing whatever quirks and behaviors have been observed.  Its a good place to check for issues with specific databases.  `Database Notes <http://www.sqlalchemy.org/trac/wiki/DatabaseNotes>`_
+Further detail on dialects is available at :ref:`sqlalchemy.dialects_toplevel` as well as additional notes on the wiki at `Database Notes <http://www.sqlalchemy.org/trac/wiki/DatabaseNotes>`_
 
 create_engine() URL Arguments
 ==============================
 By default, the log level is set to ``logging.ERROR`` within the entire ``sqlalchemy`` namespace so that no log operations occur, even within an application that has logging enabled otherwise.
 
 The ``echo`` flags present as keyword arguments to :func:`~sqlalchemy.create_engine` and others as well as the ``echo`` property on :class:`~sqlalchemy.engine.base.Engine`, when set to ``True``, will first attempt to ensure that logging is enabled.  Unfortunately, the ``logging`` module provides no way of determining if output has already been configured (note we are referring to if a logging configuration has been set up, not just that the logging level is set).  For this reason, any ``echo=True`` flags will result in a call to ``logging.basicConfig()`` using sys.stdout as the destination.  It also sets up a default format using the level name, timestamp, and logger name.  Note that this configuration has the effect of being configured **in addition** to any existing logger configurations.  Therefore, **when using Python logging, ensure all echo flags are set to False at all times**, to avoid getting duplicate log lines.
+
+The logger name of an instance such as an :class:`~sqlalchemy.engine.base.Engine` or :class:`~sqlalchemy.pool.Pool` defaults to using a truncated hex identifier string.  To set this to a specific name, use the "logging_name" and "pool_logging_name" keyword arguments with :func:`sqlalchemy.create_engine`.
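The effect of a custom name can be approximated with the standard ``logging`` module alone; this sketch only shows where such a name lands in the "name" field of a record (the exact logger path and the "myengine" suffix here are illustrative, not literal SQLAlchemy output):

```python
import logging

# Stdlib-only approximation: a custom suffix on the logger name appears in
# the "name" field of emitted records, which is what logging_name and
# pool_logging_name provide in place of the default hex identifier.
logger = logging.getLogger("sqlalchemy.engine.base.Engine.myengine")
record = logger.makeRecord(
    logger.name, logging.INFO, "example.py", 0, "SELECT 1", (), None)
formatted = logging.Formatter("%(levelname)s %(name)s %(message)s").format(record)
```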

doc/build/reference/dialects/index.rst

-.. _sqlalchemy.dialects:
+.. _sqlalchemy.dialects_toplevel:
 
 sqlalchemy.dialects
 ====================

doc/build/reference/dialects/sybase.rst

 ======
 
 .. automodule:: sqlalchemy.dialects.sybase.base
+
+python-sybase notes
+-------------------
+
+.. automodule:: sqlalchemy.dialects.sybase.pysybase
+
+pyodbc notes
+------------
+
+.. automodule:: sqlalchemy.dialects.sybase.pyodbc
+
+mxodbc notes
+------------
+
+.. automodule:: sqlalchemy.dialects.sybase.mxodbc
+

doc/build/reference/sqlalchemy/types.rst

 ---------------------
 
 Database-specific types are also available for import from each
-database's dialect module. See the :ref:`sqlalchemy.dialects`
+database's dialect module. See the :ref:`sqlalchemy.dialects_toplevel`
 reference for the database you're interested in.
 
 For example, MySQL has a ``BIGINTEGER`` type and PostgreSQL has an

lib/sqlalchemy/connectors/mxodbc.py

             raise ImportError, "Unrecognized platform for mxODBC import"
         return module
 
-    def visit_pool(self, pool):
-        def connect(conn, rec):
+    def on_connect(self):
+        def connect(conn):
             conn.stringformat = self.dbapi.MIXED_STRINGFORMAT
             conn.datetimeformat = self.dbapi.PYDATETIME_DATETIMEFORMAT
             conn.errorhandler = error_handler
             # Alternatives to experiment with:
             #conn.bindmethod = self.dbapi.BIND_USING_PYTHONTYPE
             #conn.bindmethod = self.dbapi.BIND_USING_SQLTYPE
-
-        pool.add_listener({'connect':connect})
+        return connect
 
     def create_connect_args(self, url):
         """ Return a tuple of *args,**kwargs for creating a connection.

lib/sqlalchemy/dialects/firebird/kinterbasdb.py

 from sqlalchemy.dialects.firebird.base import FBDialect, FBCompiler
 
 
-class Firebird_kinterbasdb(FBDialect):
+class FBDialect_kinterbasdb(FBDialect):
     driver = 'kinterbasdb'
     supports_sane_rowcount = False
     supports_sane_multi_rowcount = False
 
     def __init__(self, type_conv=200, concurrency_level=1, **kwargs):
-        super(Firebird_kinterbasdb, self).__init__(**kwargs)
+        super(FBDialect_kinterbasdb, self).__init__(**kwargs)
 
         self.type_conv = type_conv
         self.concurrency_level = concurrency_level
         else:
             return False
 
-dialect = Firebird_kinterbasdb
+dialect = FBDialect_kinterbasdb

lib/sqlalchemy/dialects/informix/informixdb.py

 from sqlalchemy.dialects.informix.base import InformixDialect
 from sqlalchemy.engine import default
 
-class InfoExecutionContext(default.DefaultExecutionContext):
+class InformixExecutionContext_informixdb(default.DefaultExecutionContext):
     def post_exec(self):
         if self.isinsert:
             self._lastrowid = [self.cursor.sqlerrd[1]]
 
 
-class Informix_informixdb(InformixDialect):
+class InformixDialect_informixdb(InformixDialect):
     driver = 'informixdb'
     default_paramstyle = 'qmark'
-    execution_context_cls = InfoExecutionContext
+    execution_context_cls = InformixExecutionContext_informixdb
 
     @classmethod
     def dbapi(cls):
             return False
 
 
-dialect = Informix_informixdb
+dialect = InformixDialect_informixdb

lib/sqlalchemy/dialects/maxdb/sapdb.py

 from sqlalchemy.dialects.maxdb.base import MaxDBDialect
 
-class MaxDB_sapdb(MaxDBDialect):
+class MaxDBDialect_sapdb(MaxDBDialect):
     driver = 'sapdb'
     
     @classmethod
         return [], opts
 
 
-dialect = MaxDB_sapdb
+dialect = MaxDBDialect_sapdb

lib/sqlalchemy/dialects/mssql/zxjdbc.py

 from sqlalchemy.dialects.mssql.base import MSDialect, MSExecutionContext
 from sqlalchemy.engine import base
 
-class MS_zxjdbcExecutionContext(MSExecutionContext):
+class MSExecutionContext_zxjdbc(MSExecutionContext):
 
     _embedded_scope_identity = False
 
     def pre_exec(self):
-        super(MS_zxjdbcExecutionContext, self).pre_exec()
+        super(MSExecutionContext_zxjdbc, self).pre_exec()
         # scope_identity after the fact returns null in jTDS so we must
         # embed it
         if self._select_lastrowid and self.dialect.use_scope_identity:
             self.cursor.execute("SET IDENTITY_INSERT %s OFF" % table)
 
 
-class MS_zxjdbc(ZxJDBCConnector, MSDialect):
+class MSDialect_zxjdbc(ZxJDBCConnector, MSDialect):
     jdbc_db_name = 'jtds:sqlserver'
     jdbc_driver_name = 'net.sourceforge.jtds.jdbc.Driver'
 
-    execution_ctx_cls = MS_zxjdbcExecutionContext
+    execution_ctx_cls = MSExecutionContext_zxjdbc
 
     def _get_server_version_info(self, connection):
         return tuple(int(x) for x in connection.connection.dbversion.split('.'))
 
-dialect = MS_zxjdbc
+dialect = MSDialect_zxjdbc

lib/sqlalchemy/dialects/mysql/mysqlconnector.py

 from sqlalchemy import exc, log, schema, sql, types as sqltypes, util
 from sqlalchemy import processors
 
-class MySQL_mysqlconnectorExecutionContext(MySQLExecutionContext):
+class MySQLExecutionContext_mysqlconnector(MySQLExecutionContext):
 
     def get_lastrowid(self):
         return self.cursor.lastrowid
 
 
-class MySQL_mysqlconnectorCompiler(MySQLCompiler):
+class MySQLCompiler_mysqlconnector(MySQLCompiler):
     def visit_mod(self, binary, **kw):
         return self.process(binary.left) + " %% " + self.process(binary.right)
 
 class _myconnpyNumeric(_DecimalType, NUMERIC):
     pass
 
-class MySQL_mysqlconnectorIdentifierPreparer(MySQLIdentifierPreparer):
+class MySQLIdentifierPreparer_mysqlconnector(MySQLIdentifierPreparer):
 
     def _escape_identifier(self, value):
         value = value.replace(self.escape_quote, self.escape_to_quote)
 
         return None
 
-class MySQL_mysqlconnector(MySQLDialect):
+class MySQLDialect_mysqlconnector(MySQLDialect):
     driver = 'mysqlconnector'
     supports_unicode_statements = True
     supports_unicode_binds = True
     supports_sane_multi_rowcount = True
 
     default_paramstyle = 'format'
-    execution_ctx_cls = MySQL_mysqlconnectorExecutionContext
-    statement_compiler = MySQL_mysqlconnectorCompiler
+    execution_ctx_cls = MySQLExecutionContext_mysqlconnector
+    statement_compiler = MySQLCompiler_mysqlconnector
 
-    preparer = MySQL_mysqlconnectorIdentifierPreparer
+    preparer = MySQLIdentifierPreparer_mysqlconnector
 
     colspecs = util.update_copy(
         MySQLDialect.colspecs,
     def _compat_fetchone(self, rp, charset=None):
         return rp.fetchone()
 
-dialect = MySQL_mysqlconnector
+dialect = MySQLDialect_mysqlconnector

lib/sqlalchemy/dialects/mysql/mysqldb.py

 from sqlalchemy import exc, log, schema, sql, types as sqltypes, util
 from sqlalchemy import processors
 
-class MySQL_mysqldbExecutionContext(MySQLExecutionContext):
+class MySQLExecutionContext_mysqldb(MySQLExecutionContext):
     
     @property
     def rowcount(self):
             return self.cursor.rowcount
         
         
-class MySQL_mysqldbCompiler(MySQLCompiler):
+class MySQLCompiler_mysqldb(MySQLCompiler):
     def visit_mod(self, binary, **kw):
         return self.process(binary.left) + " %% " + self.process(binary.right)
     
 class _MySQLdbDecimal(_DecimalType, DECIMAL):
     pass
 
-class MySQL_mysqldbIdentifierPreparer(MySQLIdentifierPreparer):
+class MySQLIdentifierPreparer_mysqldb(MySQLIdentifierPreparer):
     
     def _escape_identifier(self, value):
         value = value.replace(self.escape_quote, self.escape_to_quote)
         return value.replace("%", "%%")
 
-class MySQL_mysqldb(MySQLDialect):
+class MySQLDialect_mysqldb(MySQLDialect):
     driver = 'mysqldb'
     supports_unicode_statements = False
     supports_sane_rowcount = True
     supports_sane_multi_rowcount = True
 
     default_paramstyle = 'format'
-    execution_ctx_cls = MySQL_mysqldbExecutionContext
-    statement_compiler = MySQL_mysqldbCompiler
-    preparer = MySQL_mysqldbIdentifierPreparer
+    execution_ctx_cls = MySQLExecutionContext_mysqldb
+    statement_compiler = MySQLCompiler_mysqldb
+    preparer = MySQLIdentifierPreparer_mysqldb
     
     colspecs = util.update_copy(
         MySQLDialect.colspecs,
                 return 'latin1'
 
 
-dialect = MySQL_mysqldb
+dialect = MySQLDialect_mysqldb

lib/sqlalchemy/dialects/mysql/oursql.py

         return None
 
 
-class MySQL_oursqlExecutionContext(MySQLExecutionContext):
+class MySQLExecutionContext_oursql(MySQLExecutionContext):
 
     @property
     def plain_query(self):
         return self.execution_options.get('_oursql_plain_query', False)
     
-class MySQL_oursql(MySQLDialect):
+class MySQLDialect_oursql(MySQLDialect):
     driver = 'oursql'
 # Py3K
 #    description_encoding = None
 
     supports_sane_rowcount = True
     supports_sane_multi_rowcount = True
-    execution_ctx_cls = MySQL_oursqlExecutionContext
+    execution_ctx_cls = MySQLExecutionContext_oursql
 
     colspecs = util.update_copy(
         MySQLDialect.colspecs,
         return rp.first()
 
 
-dialect = MySQL_oursql
+dialect = MySQLDialect_oursql

lib/sqlalchemy/dialects/mysql/pyodbc.py

 from sqlalchemy import util
 import re
 
-class MySQL_pyodbcExecutionContext(MySQLExecutionContext):
+class MySQLExecutionContext_pyodbc(MySQLExecutionContext):
 
     def get_lastrowid(self):
         cursor = self.create_cursor()
         cursor.close()
         return lastrowid
 
-class MySQL_pyodbc(PyODBCConnector, MySQLDialect):
+class MySQLDialect_pyodbc(PyODBCConnector, MySQLDialect):
     supports_unicode_statements = False
-    execution_ctx_cls = MySQL_pyodbcExecutionContext
+    execution_ctx_cls = MySQLExecutionContext_pyodbc
 
     pyodbc_driver_name = "MySQL"
     
     def __init__(self, **kw):
         # deal with http://code.google.com/p/pyodbc/issues/detail?id=25
         kw.setdefault('convert_unicode', True)
-        super(MySQL_pyodbc, self).__init__(**kw)
+        super(MySQLDialect_pyodbc, self).__init__(**kw)
 
     def _detect_charset(self, connection):
         """Sniff out the character set in use for connection results."""
         else:
             return None
 
-dialect = MySQL_pyodbc
+dialect = MySQLDialect_pyodbc

lib/sqlalchemy/dialects/mysql/zxjdbc.py

         return process
 
 
-class MySQL_zxjdbcExecutionContext(MySQLExecutionContext):
+class MySQLExecutionContext_zxjdbc(MySQLExecutionContext):
     def get_lastrowid(self):
         cursor = self.create_cursor()
         cursor.execute("SELECT LAST_INSERT_ID()")
         return lastrowid
 
 
-class MySQL_zxjdbc(ZxJDBCConnector, MySQLDialect):
+class MySQLDialect_zxjdbc(ZxJDBCConnector, MySQLDialect):
     jdbc_db_name = 'mysql'
     jdbc_driver_name = 'com.mysql.jdbc.Driver'
 
-    execution_ctx_cls = MySQL_zxjdbcExecutionContext
+    execution_ctx_cls = MySQLExecutionContext_zxjdbc
 
     colspecs = util.update_copy(
         MySQLDialect.colspecs,
                 version.append(n)
         return tuple(version)
 
-dialect = MySQL_zxjdbc
+dialect = MySQLDialect_zxjdbc

lib/sqlalchemy/dialects/oracle/__init__.py

 from sqlalchemy.dialects.oracle.base import \
     VARCHAR, NVARCHAR, CHAR, DATE, DATETIME, NUMBER,\
     BLOB, BFILE, CLOB, NCLOB, TIMESTAMP, RAW,\
-    FLOAT, DOUBLE_PRECISION, LONG, dialect, INTERVAL
+    FLOAT, DOUBLE_PRECISION, LONG, dialect, INTERVAL,\
+    VARCHAR2, NVARCHAR2
 
 
 __all__ = (
 'VARCHAR', 'NVARCHAR', 'CHAR', 'DATE', 'DATETIME', 'NUMBER',
 'BLOB', 'BFILE', 'CLOB', 'NCLOB', 'TIMESTAMP', 'RAW',
-'FLOAT', 'DOUBLE_PRECISION', 'LONG', 'dialect', 'INTERVAL'
+'FLOAT', 'DOUBLE_PRECISION', 'LONG', 'dialect', 'INTERVAL',
+'VARCHAR2', 'NVARCHAR2'
 )

lib/sqlalchemy/dialects/oracle/base.py

 from sqlalchemy.types import VARCHAR, NVARCHAR, CHAR, DATE, DATETIME, \
                 BLOB, CLOB, TIMESTAMP, FLOAT
                 
-RESERVED_WORDS = set('''SHARE RAW DROP BETWEEN FROM DESC OPTION PRIOR LONG THEN DEFAULT ALTER IS INTO MINUS INTEGER NUMBER GRANT IDENTIFIED ALL TO ORDER ON FLOAT DATE HAVING CLUSTER NOWAIT RESOURCE ANY TABLE INDEX FOR UPDATE WHERE CHECK SMALLINT WITH DELETE BY ASC REVOKE LIKE SIZE RENAME NOCOMPRESS NULL GROUP VALUES AS IN VIEW EXCLUSIVE COMPRESS SYNONYM SELECT INSERT EXISTS NOT TRIGGER ELSE CREATE INTERSECT PCTFREE DISTINCT USER CONNECT SET MODE OF UNIQUE VARCHAR2 VARCHAR LOCK OR CHAR DECIMAL UNION PUBLIC AND START UID COMMENT'''.split()) 
+RESERVED_WORDS = set('SHARE RAW DROP BETWEEN FROM DESC OPTION PRIOR LONG THEN '
+                     'DEFAULT ALTER IS INTO MINUS INTEGER NUMBER GRANT IDENTIFIED '
+                     'ALL TO ORDER ON FLOAT DATE HAVING CLUSTER NOWAIT RESOURCE ANY '
+                     'TABLE INDEX FOR UPDATE WHERE CHECK SMALLINT WITH DELETE BY ASC '
+                     'REVOKE LIKE SIZE RENAME NOCOMPRESS NULL GROUP VALUES AS IN VIEW '
+                     'EXCLUSIVE COMPRESS SYNONYM SELECT INSERT EXISTS NOT TRIGGER '
+                     'ELSE CREATE INTERSECT PCTFREE DISTINCT USER CONNECT SET MODE '
+                     'OF UNIQUE VARCHAR2 VARCHAR LOCK OR CHAR DECIMAL UNION PUBLIC '
+                     'AND START UID COMMENT'.split()) 
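The multi-line rewrite of RESERVED_WORDS above relies on Python joining adjacent string literals at compile time, so `.split()` applies to the whole joined string rather than only the last literal. A minimal standalone sketch of the same pattern, using placeholder words rather than Oracle's actual reserved list:

```python
# Adjacent string literals concatenate before the method call is applied,
# so .split() sees the full joined string.  ALPHA/BETA/etc. are placeholder
# words for illustration only.
words = set('ALPHA BETA '
            'GAMMA DELTA'.split())
```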
 
 class RAW(sqltypes.LargeBinary):
     pass
     def normalize_name(self, name):
         if name is None:
             return None
-        elif (name.upper() == name and
-              not self.identifier_preparer._requires_quotes(name.lower().decode(self.encoding))):
-            return name.lower().decode(self.encoding)
+        # Py2K
+        if isinstance(name, str):
+            name = name.decode(self.encoding)
+        # end Py2K
+        if name.upper() == name and \
+              not self.identifier_preparer._requires_quotes(name.lower()):
+            return name.lower()
         else:
-            return name.decode(self.encoding)
+            return name
 
     def denormalize_name(self, name):
         if name is None:
             return None
         elif name.lower() == name and not self.identifier_preparer._requires_quotes(name.lower()):
-            return name.upper().encode(self.encoding)
+            name = name.upper()
+        # Py2K
+        if not self.supports_unicode_binds:
+            name = name.encode(self.encoding)
         else:
-            return name.encode(self.encoding)
+            name = unicode(name)
+        # end Py2K
+        return name
 
     def _get_default_schema_name(self, connection):
-        return self.normalize_name(connection.execute('SELECT USER FROM DUAL').scalar())
+        return self.normalize_name(connection.execute(u'SELECT USER FROM DUAL').scalar())
 
     def table_names(self, connection, schema):
         # note that table_names() isnt loading DBLINKed or synonym'ed tables
                                  resolve_synonyms=False, dblink='', **kw):
 
         if resolve_synonyms:
-            actual_name, owner, dblink, synonym = self._resolve_synonym(connection, desired_owner=self.denormalize_name(schema), desired_synonym=self.denormalize_name(table_name))
+            actual_name, owner, dblink, synonym = self._resolve_synonym(
+                                                         connection, 
+                                                         desired_owner=self.denormalize_name(schema), 
+                                                         desired_synonym=self.denormalize_name(table_name)
+                                                   )
         else:
             actual_name, owner, dblink, synonym = None, None, None, None
         if not actual_name:

lib/sqlalchemy/dialects/oracle/cx_oracle.py

             # return the cx_oracle.LOB directly.
             return None
             
-        super_process = super(_LOBMixin, self).result_processor(dialect, coltype)
-        if super_process:
-            def process(value):
-                if value is not None:
-                    return super_process(value.read())
-                else:
-                    return super_process(value)
-        else:
-            def process(value):
-                if value is not None:
-                    return value.read()
-                else:
-                    return value
+        def process(value):
+            if value is not None:
+                return value.read()
+            else:
+                return value
         return process
 
 class _NativeUnicodeMixin(object):
+    # Py2K
+    def bind_processor(self, dialect):
+        if dialect._cx_oracle_with_unicode:
+            def process(value):
+                if value is None:
+                    return value
+                else:
+                    return unicode(value)
+            return process
+        else:
+            return super(_NativeUnicodeMixin, self).bind_processor(dialect)
+    # end Py2K
+    
     def result_processor(self, dialect, coltype):
         # if we know cx_Oracle will return unicode,
         # don't process results
-        if self.convert_unicode != 'force' and \
+        if dialect._cx_oracle_with_unicode:
+            return None
+        elif self.convert_unicode != 'force' and \
                     dialect._cx_oracle_native_nvarchar and \
-                    coltype == dialect.dbapi.UNICODE:
+                    coltype in dialect._cx_oracle_unicode_types:
             return None
         else:
             return super(_NativeUnicodeMixin, self).result_processor(dialect, coltype)
 
 class _OracleNVarChar(_NativeUnicodeMixin, sqltypes.NVARCHAR):
     def get_dbapi_type(self, dbapi):
-        return dbapi.UNICODE
+        return getattr(dbapi, 'UNICODE', dbapi.STRING)
         
 class _OracleText(_LOBMixin, sqltypes.Text):
     def get_dbapi_type(self, dbapi):
 class _OracleString(_NativeUnicodeMixin, sqltypes.String):
     pass
 
-class _OracleUnicodeText(_NativeUnicodeMixin, sqltypes.UnicodeText):
+class _OracleUnicodeText(_LOBMixin, _NativeUnicodeMixin, sqltypes.UnicodeText):
     def get_dbapi_type(self, dbapi):
         return dbapi.NCLOB
 
     def result_processor(self, dialect, coltype):
-        if not dialect.auto_convert_lobs:
-            # return the cx_oracle.LOB directly.
+        lob_processor = _LOBMixin.result_processor(self, dialect, coltype)
+        if lob_processor is None:
             return None
 
-        if dialect._cx_oracle_native_nvarchar:
+        string_processor = _NativeUnicodeMixin.result_processor(self, dialect, coltype)
+
+        if string_processor is None:
+            return lob_processor
+        else:
             def process(value):
-                if value is not None:
-                    return value.read()
-                else:
-                    return value
+                return string_processor(lob_processor(value))
             return process
-        else:
-            # TODO: this is wrong - we are getting a LOB here
-            # no matter what version of oracle, so process() 
-            # is still needed
-            return super(_OracleUnicodeText, self).result_processor(dialect, coltype)
 
 class _OracleInteger(sqltypes.Integer):
     def result_processor(self, dialect, coltype):
     sqltypes.NVARCHAR : _OracleNVarChar,
 }
 
-class Oracle_cx_oracleCompiler(OracleCompiler):
+class OracleCompiler_cx_oracle(OracleCompiler):
     def bindparam_string(self, name):
         if self.preparer._bindparam_requires_quotes(name):
             quoted_name = '"%s"' % name
         else:
             return OracleCompiler.bindparam_string(self, name)
 
-class Oracle_cx_oracleExecutionContext(OracleExecutionContext):
+    
+class OracleExecutionContext_cx_oracle(OracleExecutionContext):
+    
     def pre_exec(self):
         quoted_bind_names = getattr(self.compiled, '_quoted_bind_names', {})
         if quoted_bind_names:
             # on String, including that outparams/RETURNING
             # breaks for varchars
             self.set_input_sizes(quoted_bind_names, 
-                                     exclude_types=[
-                                              self.dialect.dbapi.STRING, 
-                                              self.dialect.dbapi.UNICODE])
+                                     exclude_types=self.dialect._cx_oracle_string_types
+                                )
             
         if len(self.compiled_parameters) == 1:
             for key in self.compiled.binds:
         if self.cursor.description is not None:
             for column in self.cursor.description:
                 type_code = column[1]
-                if type_code in self.dialect.ORACLE_BINARY_TYPES:
+                if type_code in self.dialect._cx_oracle_binary_types:
                     result = base.BufferedColumnResultProxy(self)
         
         if result is None:
 
         return result
 
+class OracleExecutionContext_cx_oracle_with_unicode(OracleExecutionContext_cx_oracle):
+    """Support cx_Oracle's WITH_UNICODE build mode under Python 2.x.
+    
+    WITH_UNICODE gives cx_Oracle its Python 3 unicode handling behavior
+    under Python 2.x.  In this mode, plain (non-unicode) Python strings are
+    in some cases rejected and in other cases silently corrupted when passed
+    as arguments to connect(), as the statement sent to execute(), or as any
+    of the bind parameter keys or values sent to execute().  This optional
+    execution context therefore ensures that all statements are passed as
+    Python unicode objects.
+    
+    """
+    def __init__(self, *arg, **kw):
+        OracleExecutionContext_cx_oracle.__init__(self, *arg, **kw)
+        self.statement = unicode(self.statement)
+
+    def _execute_scalar(self, stmt):
+        return super(OracleExecutionContext_cx_oracle_with_unicode, self).\
+                            _execute_scalar(unicode(stmt))
+                            
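The coercion this execution context enforces can be sketched standalone: the statement and all bind parameter keys and values are forced to the text type before reaching the driver. Under Python 2 the original uses `unicode()`; `str` stands in as the Python 3 analog here, and both helper names are illustrative:

```python
# Force everything handed to the driver into the text type first.
def coerce_statement(stmt, text_type=str):
    return text_type(stmt)

def coerce_params(params, text_type=str):
    # bind parameter keys and values alike must be text in WITH_UNICODE mode
    return dict((text_type(k), text_type(v)) for k, v in params.items())

stmt = coerce_statement("SELECT :x FROM DUAL")
params = coerce_params({"x": 5})
```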
 class ReturningResultProxy(base.FullyBufferedResultProxy):
     """Result proxy which stuffs the _returning clause + outparams into the fetch."""
     
         return ret
     
     def _buffer_rows(self):
-        return [tuple(self._returning_params["ret_%d" % i] for i, c in enumerate(self._returning_params))]
+        return [tuple(self._returning_params["ret_%d" % i] 
+                    for i, c in enumerate(self._returning_params))]
 
-class Oracle_cx_oracle(OracleDialect):
-    execution_ctx_cls = Oracle_cx_oracleExecutionContext
-    statement_compiler = Oracle_cx_oracleCompiler
+class OracleDialect_cx_oracle(OracleDialect):
+    execution_ctx_cls = OracleExecutionContext_cx_oracle
+    statement_compiler = OracleCompiler_cx_oracle
     driver = "cx_oracle"
     colspecs = colspecs
     
         self.auto_setinputsizes = auto_setinputsizes
         self.auto_convert_lobs = auto_convert_lobs
         
-        def vers(num):
-            return tuple([int(x) for x in num.split('.')])
-
         if hasattr(self.dbapi, 'version'):
-            cx_oracle_ver = vers(self.dbapi.version)
+            cx_oracle_ver = tuple([int(x) for x in self.dbapi.version.split('.')])
             self.supports_unicode_binds = cx_oracle_ver >= (5, 0)
             self._cx_oracle_native_nvarchar = cx_oracle_ver >= (5, 0)
+        else:
+            cx_oracle_ver = None
             
-        if self.dbapi is None or not self.auto_convert_lobs or not 'CLOB' in self.dbapi.__dict__:
+        def types(*names):
+            return set([getattr(self.dbapi, name, None) for name in names]).difference([None])
+
+        self._cx_oracle_string_types = types("STRING", "UNICODE", "NCLOB", "CLOB")
+        self._cx_oracle_unicode_types = types("UNICODE", "NCLOB")
+        self._cx_oracle_binary_types = types("BFILE", "CLOB", "NCLOB", "BLOB") 
+
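The `types()` helper above gathers whichever DBAPI type symbols actually exist, silently skipping absent ones. A standalone sketch of the same idiom, where `FakeDBAPI` stands in for the cx_Oracle module (a WITH_UNICODE build, for example, lacks the `UNICODE` symbol):

```python
# FakeDBAPI stands in for the cx_Oracle module; only some type symbols exist.
class FakeDBAPI(object):
    STRING = "string-type"
    CLOB = "clob-type"

dbapi = FakeDBAPI()

def types(*names):
    # getattr with a None default, then discard None: missing symbols
    # simply don't appear in the resulting set
    return set([getattr(dbapi, name, None) for name in names]).difference([None])

string_types = types("STRING", "UNICODE", "CLOB")
```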
+        if cx_oracle_ver is None:
+            # this occurs in tests with mock DBAPIs
+            self._cx_oracle_string_types = set()
+            self._cx_oracle_with_unicode = False
+        elif cx_oracle_ver >= (5,) and not hasattr(self.dbapi, 'UNICODE'):
+            # cx_Oracle WITH_UNICODE mode.  *only* python
+            # unicode objects accepted for anything
+            self.supports_unicode_statements = True
+            self.supports_unicode_binds = True
+            self._cx_oracle_with_unicode = True
+            # Py2K
+            # There's really no reason to run with WITH_UNICODE under Python 2.x.
+            # Give the user a hint.
+            util.warn("cx_Oracle is compiled under Python 2.x using the "
+                        "WITH_UNICODE flag.  Consider recompiling cx_Oracle "
+                        "without this flag, which is in no way necessary for "
+                        "full support of Unicode.  Otherwise, all string-holding "
+                        "bind parameters must be explicitly typed using "
+                        "SQLAlchemy's String type or one of its subtypes, or "
+                        "otherwise be passed as Python unicode.  Plain Python "
+                        "strings passed as bind parameters will be silently "
+                        "corrupted by cx_Oracle."
+            self.execution_ctx_cls = OracleExecutionContext_cx_oracle_with_unicode
+            # end Py2K
+        else:
+            self._cx_oracle_with_unicode = False
+
+        if cx_oracle_ver is None or \
+                    not self.auto_convert_lobs or \
+                    not hasattr(self.dbapi, 'CLOB'):
             self.dbapi_type_map = {}
-            self.ORACLE_BINARY_TYPES = []
         else:
             # only use this for LOB objects.  using it for strings, dates
             # etc. leads to a little too much magic, reflection doesn't know if it should
                 self.dbapi.BLOB: oracle.BLOB(),
                 self.dbapi.BINARY: oracle.RAW(),
             }
-            self.ORACLE_BINARY_TYPES = [getattr(self.dbapi, k) for k in ["BFILE", "CLOB", "NCLOB", "BLOB"] if hasattr(self.dbapi, k)]
     
     @classmethod
     def dbapi(cls):
             threaded=self.threaded,
             twophase=self.allow_twophase,
             )
+
+        # Py2K
+        if self._cx_oracle_with_unicode:
+            for k, v in opts.items():
+                if isinstance(v, str):
+                    opts[k] = unicode(v)
+        # end Py2K
+
         if 'mode' in url.query:
             opts['mode'] = url.query['mode']
             if isinstance(opts['mode'], basestring):
                     opts['mode'] = self.dbapi.SYSOPER
                 else:
                     util.coerce_kw_type(opts, 'mode', int)
-        # Can't set 'handle' or 'pool' via URL query args, use connect_args
-
         return ([], opts)
 
     def _get_server_version_info(self, connection):
     def do_recover_twophase(self, connection):
         pass
 
-dialect = Oracle_cx_oracle
+dialect = OracleDialect_cx_oracle

lib/sqlalchemy/dialects/oracle/zxjdbc.py

         return process
 
 
-class Oracle_zxjdbcCompiler(OracleCompiler):
+class OracleCompiler_zxjdbc(OracleCompiler):
 
     def returning_clause(self, stmt, returning_cols):
-        columnlist = list(expression._select_iterables(returning_cols))
+        self.returning_cols = list(expression._select_iterables(returning_cols))
 
         # within_columns_clause=False so that labels (foo AS bar) don't render
         columns = [self.process(c, within_columns_clause=False, result_map=self.result_map)
-                   for c in columnlist]
+                   for c in self.returning_cols]
 
         if not hasattr(self, 'returning_parameters'):
             self.returning_parameters = []
 
         binds = []
-        for i, col in enumerate(columnlist):
+        for i, col in enumerate(self.returning_cols):
             dbtype = col.type.dialect_impl(self.dialect).get_dbapi_type(self.dialect.dbapi)
             self.returning_parameters.append((i + 1, dbtype))
 
         return 'RETURNING ' + ', '.join(columns) +  " INTO " + ", ".join(binds)
 
 
-class Oracle_zxjdbcExecutionContext(OracleExecutionContext):
+class OracleExecutionContext_zxjdbc(OracleExecutionContext):
 
     def pre_exec(self):
         if hasattr(self.compiled, 'returning_parameters'):
         super(ReturningResultProxy, self).__init__(context)
 
     def _cursor_description(self):
-        returning = self.context.compiled.returning
-
         ret = []
-        for c in returning:
+        for c in self.context.compiled.returning_cols:
             if hasattr(c, 'name'):
                 ret.append((c.name, c.type))
             else:
                                                    self.type)
 
 
-class Oracle_zxjdbc(ZxJDBCConnector, OracleDialect):
+class OracleDialect_zxjdbc(ZxJDBCConnector, OracleDialect):
     jdbc_db_name = 'oracle'
     jdbc_driver_name = 'oracle.jdbc.OracleDriver'
 
-    statement_compiler = Oracle_zxjdbcCompiler
-    execution_ctx_cls = Oracle_zxjdbcExecutionContext
+    statement_compiler = OracleCompiler_zxjdbc
+    execution_ctx_cls = OracleExecutionContext_zxjdbc
 
     colspecs = util.update_copy(
         OracleDialect.colspecs,
     )
 
     def __init__(self, *args, **kwargs):
-        super(Oracle_zxjdbc, self).__init__(*args, **kwargs)
+        super(OracleDialect_zxjdbc, self).__init__(*args, **kwargs)
         global SQLException, zxJDBC
         from java.sql import SQLException
         from com.ziclix.python.sql import zxJDBC
         self.DataHandler = OracleReturningDataHandler
 
     def initialize(self, connection):
-        super(Oracle_zxjdbc, self).initialize(connection)
+        super(OracleDialect_zxjdbc, self).initialize(connection)
         self.implicit_returning = connection.connection.driverversion >= '10.2'
 
     def _create_jdbc_url(self, url):
         version = re.search(r'Release ([\d\.]+)', connection.connection.dbversion).group(1)
         return tuple(int(x) for x in version.split('.'))
 
-dialect = Oracle_zxjdbc
+dialect = OracleDialect_zxjdbc

lib/sqlalchemy/dialects/postgresql/base.py

         if not self.supports_native_enum:
             self.colspecs = self.colspecs.copy()
             del self.colspecs[ENUM]
+
+    def on_connect(self):
+        if self.isolation_level is not None:
+            def connect(conn):
+                cursor = conn.cursor()
+                cursor.execute("SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL %s"
+                               % self.isolation_level)
+                cursor.execute("COMMIT")
+                cursor.close()
+            return connect
+        else:
+            return None
             
-    def visit_pool(self, pool):
-        if self.isolation_level is not None:
-            class SetIsolationLevel(object):
-                def __init__(self, isolation_level):
-                    self.isolation_level = isolation_level
-
-                def connect(self, conn, rec):
-                    cursor = conn.cursor()
-                    cursor.execute("SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL %s"
-                                   % self.isolation_level)
-                    cursor.execute("COMMIT")
-                    cursor.close()
-            pool.add_listener(SetIsolationLevel(self.isolation_level))
-
     def do_begin_twophase(self, connection, xid):
         self.do_begin(connection.connection)
 

lib/sqlalchemy/dialects/postgresql/pg8000.py

             else:
                 raise exc.InvalidRequestError("Unknown PG numeric type: %d" % coltype)
 
-class PostgreSQL_pg8000ExecutionContext(PGExecutionContext):
+class PGExecutionContext_pg8000(PGExecutionContext):
     pass
 
 
-class PostgreSQL_pg8000Compiler(PGCompiler):
+class PGCompiler_pg8000(PGCompiler):
     def visit_mod(self, binary, **kw):
         return self.process(binary.left) + " %% " + self.process(binary.right)
 
         return text.replace('%', '%%')
 
 
-class PostgreSQL_pg8000IdentifierPreparer(PGIdentifierPreparer):
+class PGIdentifierPreparer_pg8000(PGIdentifierPreparer):
     def _escape_identifier(self, value):
         value = value.replace(self.escape_quote, self.escape_to_quote)
         return value.replace('%', '%%')
 
     
-class PostgreSQL_pg8000(PGDialect):
+class PGDialect_pg8000(PGDialect):
     driver = 'pg8000'
 
     supports_unicode_statements = True
     
     default_paramstyle = 'format'
     supports_sane_multi_rowcount = False
-    execution_ctx_cls = PostgreSQL_pg8000ExecutionContext
-    statement_compiler = PostgreSQL_pg8000Compiler
-    preparer = PostgreSQL_pg8000IdentifierPreparer
+    execution_ctx_cls = PGExecutionContext_pg8000
+    statement_compiler = PGCompiler_pg8000
+    preparer = PGIdentifierPreparer_pg8000
     
     colspecs = util.update_copy(
         PGDialect.colspecs,
     def is_disconnect(self, e):
         return "connection is closed" in str(e)
 
-dialect = PostgreSQL_pg8000
+dialect = PGDialect_pg8000

lib/sqlalchemy/dialects/postgresql/psycopg2.py

     r'\s*SELECT',
     re.I | re.UNICODE)
 
-class PostgreSQL_psycopg2ExecutionContext(PGExecutionContext):
+class PGExecutionContext_psycopg2(PGExecutionContext):
     def create_cursor(self):
         # TODO: coverage for server side cursors + select.for_update()
         
             return base.ResultProxy(self)
 
 
-class PostgreSQL_psycopg2Compiler(PGCompiler):
+class PGCompiler_psycopg2(PGCompiler):
     def visit_mod(self, binary, **kw):
         return self.process(binary.left) + " %% " + self.process(binary.right)
     
         return text.replace('%', '%%')
 
 
-class PostgreSQL_psycopg2IdentifierPreparer(PGIdentifierPreparer):
+class PGIdentifierPreparer_psycopg2(PGIdentifierPreparer):
     def _escape_identifier(self, value):
         value = value.replace(self.escape_quote, self.escape_to_quote)
         return value.replace('%', '%%')
 
-class PostgreSQL_psycopg2(PGDialect):
+class PGDialect_psycopg2(PGDialect):
     driver = 'psycopg2'
     supports_unicode_statements = False
     default_paramstyle = 'pyformat'
     supports_sane_multi_rowcount = False
-    execution_ctx_cls = PostgreSQL_psycopg2ExecutionContext
-    statement_compiler = PostgreSQL_psycopg2Compiler
-    preparer = PostgreSQL_psycopg2IdentifierPreparer
+    execution_ctx_cls = PGExecutionContext_psycopg2
+    statement_compiler = PGCompiler_psycopg2
+    preparer = PGIdentifierPreparer_psycopg2
 
     colspecs = util.update_copy(
         PGDialect.colspecs,
         psycopg = __import__('psycopg2')
         return psycopg
     
-    _unwrap_connection = None
-    
-    def visit_pool(self, pool):
+    def on_connect(self):
+        base_on_connect = super(PGDialect_psycopg2, self).on_connect()
         if self.dbapi and self.use_native_unicode:
             extensions = __import__('psycopg2.extensions').extensions
-            def connect(conn, rec):
-                if self._unwrap_connection:
-                    conn = self._unwrap_connection(conn)
-                    if conn is None:
-                        return
+            def connect(conn):
                 extensions.register_type(extensions.UNICODE, conn)
-            pool.add_listener({'first_connect': connect, 'connect':connect})
-        super(PostgreSQL_psycopg2, self).visit_pool(pool)
-        
+                if base_on_connect:
+                    base_on_connect(conn)
+            return connect
+        else:
+            return base_on_connect
+            
     def create_connect_args(self, url):
         opts = url.translate_connect_args(username='user')
         if 'port' in opts:
         else:
             return False
 
-dialect = PostgreSQL_psycopg2
+dialect = PGDialect_psycopg2
     

lib/sqlalchemy/dialects/postgresql/pypostgresql.py

         else:
             return processors.to_float
 
-class PostgreSQL_pypostgresqlExecutionContext(PGExecutionContext):
+class PGExecutionContext_pypostgresql(PGExecutionContext):
     pass
 
-class PostgreSQL_pypostgresql(PGDialect):
+class PGDialect_pypostgresql(PGDialect):
     driver = 'pypostgresql'
 
     supports_unicode_statements = True
     supports_sane_rowcount = True
     supports_sane_multi_rowcount = False
 
-    execution_ctx_cls = PostgreSQL_pypostgresqlExecutionContext
+    execution_ctx_cls = PGExecutionContext_pypostgresql
     colspecs = util.update_copy(
         PGDialect.colspecs,
         {
     def is_disconnect(self, e):
         return "connection is closed" in str(e)
 
-dialect = PostgreSQL_pypostgresql
+dialect = PGDialect_pypostgresql

lib/sqlalchemy/dialects/postgresql/zxjdbc.py

 from sqlalchemy.connectors.zxJDBC import ZxJDBCConnector
 from sqlalchemy.dialects.postgresql.base import PGDialect
 
-class PostgreSQL_zxjdbc(ZxJDBCConnector, PGDialect):
+class PGDialect_zxjdbc(ZxJDBCConnector, PGDialect):
     jdbc_db_name = 'postgresql'
     jdbc_driver_name = 'org.postgresql.Driver'
 
     def _get_server_version_info(self, connection):
         return tuple(int(x) for x in connection.connection.dbversion.split('.'))
 
-dialect = PostgreSQL_zxjdbc
+dialect = PGDialect_zxjdbc

lib/sqlalchemy/dialects/sqlite/base.py

         # hypothetical driver ?)
         self.native_datetime = native_datetime
         
-    def visit_pool(self, pool):
+    def on_connect(self):
         if self.isolation_level is not None:
-            class SetIsolationLevel(object):
-                def __init__(self, isolation_level):
-                    if isolation_level == 'READ UNCOMMITTED':
-                        self.isolation_level = 1
-                    else:
-                        self.isolation_level = 0
-
-                def connect(self, conn, rec):
-                    cursor = conn.cursor()
-                    cursor.execute("PRAGMA read_uncommitted = %d" % self.isolation_level)
-                    cursor.close()
-            pool.add_listener(SetIsolationLevel(self.isolation_level))
-
+            if self.isolation_level == 'READ UNCOMMITTED':
+                isolation_level = 1
+            else:
+                isolation_level = 0
+                
+            def connect(conn):
+                cursor = conn.cursor()
+                cursor.execute("PRAGMA read_uncommitted = %d" % isolation_level)
+                cursor.close()
+            return connect
+        else:
+            return None
+    
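The reworked `on_connect()` hook above returns a plain callable receiving the raw DBAPI connection. The same logic can be exercised directly against the stdlib sqlite3 module; this is a standalone sketch outside of SQLAlchemy, with `make_on_connect` as an illustrative name:

```python
import sqlite3

def make_on_connect(isolation_level):
    # mirror the hook above: map the symbolic level to the pragma's integer
    # value, close over it, and return the per-connection callable
    level = 1 if isolation_level == 'READ UNCOMMITTED' else 0
    def connect(conn):
        cursor = conn.cursor()
        cursor.execute("PRAGMA read_uncommitted = %d" % level)
        cursor.close()
    return connect

conn = sqlite3.connect(":memory:")
make_on_connect('READ UNCOMMITTED')(conn)
value = conn.execute("PRAGMA read_uncommitted").fetchone()[0]
```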
     def table_names(self, connection, schema):
         if schema is not None:
             qschema = self.identifier_preparer.quote_identifier(schema)

lib/sqlalchemy/dialects/sqlite/pysqlite.py

         else:
             return DATE.result_processor(self, dialect, coltype)
 
-class SQLite_pysqlite(SQLiteDialect):
+class SQLiteDialect_pysqlite(SQLiteDialect):
     default_paramstyle = 'qmark'
     poolclass = pool.SingletonThreadPool
 
     def is_disconnect(self, e):
         return isinstance(e, self.dbapi.ProgrammingError) and "Cannot operate on a closed database." in str(e)
 
-dialect = SQLite_pysqlite
+dialect = SQLiteDialect_pysqlite

lib/sqlalchemy/dialects/sybase/__init__.py

-from sqlalchemy.dialects.sybase import base, pyodbc
+from sqlalchemy.dialects.sybase import base, pysybase
+
+
+from base import CHAR, VARCHAR, TIME, NCHAR, NVARCHAR,\
+                 TEXT, DATE, DATETIME, FLOAT, NUMERIC,\
+                 BIGINT, INT, INTEGER, SMALLINT, BINARY,\
+                 VARBINARY, UNITEXT, UNICHAR, UNIVARCHAR,\
+                 IMAGE, BIT, MONEY, SMALLMONEY, TINYINT
 
 # default dialect
-base.dialect = pyodbc.dialect
+base.dialect = pysybase.dialect
+
+__all__ = (
+    'CHAR', 'VARCHAR', 'TIME', 'NCHAR', 'NVARCHAR',
+    'TEXT', 'DATE', 'DATETIME', 'FLOAT', 'NUMERIC',
+    'BIGINT', 'INT', 'INTEGER', 'SMALLINT', 'BINARY',
+    'VARBINARY', 'UNITEXT', 'UNICHAR', 'UNIVARCHAR',
+    'IMAGE', 'BIT', 'MONEY', 'SMALLMONEY', 'TINYINT',
+    'dialect'
+)

lib/sqlalchemy/dialects/sybase/base.py

 # This module is part of SQLAlchemy and is released under
 # the MIT License: http://www.opensource.org/licenses/mit-license.php
 
-"""Support for the Sybase iAnywhere database.  
+"""Support for Sybase Adaptive Server Enterprise (ASE).
 
-This is not (yet) a full backend for Sybase ASE.
+Note that this dialect is no longer specific to Sybase iAnywhere.
+ASE is now the primary supported platform.
 
-This dialect is *not* ported to SQLAlchemy 0.6.
-
-This dialect is *not* tested on SQLAlchemy 0.6.
-
-
-Known issues / TODO:
-
- * Uses the mx.ODBC driver from egenix (version 2.1.0)
- * The current version of sqlalchemy.databases.sybase only supports
-   mx.ODBC.Windows (other platforms such as mx.ODBC.unixODBC still need
-   some development)
- * Support for pyodbc has been built in but is not yet complete (needs
-   further development)
- * Results of running tests/alltests.py:
-     Ran 934 tests in 287.032s
-     FAILED (failures=3, errors=1)
- * Tested on 'Adaptive Server Anywhere 9' (version 9.0.1.1751)
 """
 
-import datetime, operator
-
-from sqlalchemy import util, sql, schema, exc
-from sqlalchemy.sql import compiler, expression
-from sqlalchemy.engine import default, base
+import operator
+from sqlalchemy.sql import compiler, expression, text, bindparam
+from sqlalchemy.engine import default, base, reflection
 from sqlalchemy import types as sqltypes
 from sqlalchemy.sql import operators as sql_operators
-from sqlalchemy import MetaData, Table, Column
-from sqlalchemy import String, Integer, SMALLINT, CHAR, ForeignKey
-from sqlalchemy.dialects.sybase.schema import *
+from sqlalchemy import schema as sa_schema
+from sqlalchemy import util, sql, exc
+
+from sqlalchemy.types import CHAR, VARCHAR, TIME, NCHAR, NVARCHAR,\
+                            TEXT, DATE, DATETIME, FLOAT, NUMERIC,\
+                            BIGINT, INT, INTEGER, SMALLINT, BINARY,\
+                            VARBINARY, DECIMAL, TIMESTAMP, Unicode
 
 RESERVED_WORDS = set([
     "add", "all", "alter", "and",
     ])
 
 
-class SybaseImage(sqltypes.LargeBinary):
-    __visit_name__ = 'IMAGE'
+class UNICHAR(sqltypes.Unicode):
+    __visit_name__ = 'UNICHAR'
 
-class SybaseBit(sqltypes.TypeEngine):
+class UNIVARCHAR(sqltypes.Unicode):
+    __visit_name__ = 'UNIVARCHAR'
+
+class UNITEXT(sqltypes.UnicodeText):
+    __visit_name__ = 'UNITEXT'
+
+class TINYINT(sqltypes.Integer):
+    __visit_name__ = 'TINYINT'
+
+class BIT(sqltypes.TypeEngine):
     __visit_name__ = 'BIT'
     
-class SybaseMoney(sqltypes.TypeEngine):
+class MONEY(sqltypes.TypeEngine):
     __visit_name__ = "MONEY"
 
-class SybaseSmallMoney(SybaseMoney):
+class SMALLMONEY(sqltypes.TypeEngine):
     __visit_name__ = "SMALLMONEY"
 
-class SybaseUniqueIdentifier(sqltypes.TypeEngine):
+class UNIQUEIDENTIFIER(sqltypes.TypeEngine):
     __visit_name__ = "UNIQUEIDENTIFIER"
-    
-class SybaseBoolean(sqltypes.Boolean):
-    pass
+  
+class IMAGE(sqltypes.LargeBinary):
+    __visit_name__ = 'IMAGE'
+ 
 
 class SybaseTypeCompiler(compiler.GenericTypeCompiler):
     def visit_large_binary(self, type_):
     
     def visit_boolean(self, type_):
         return self.visit_BIT(type_)
+
+    def visit_UNICHAR(self, type_):
+        return "UNICHAR(%d)" % type_.length
+
+    def visit_UNITEXT(self, type_):
+        return "UNITEXT"
+
+    def visit_TINYINT(self, type_):
+        return "TINYINT"
         
     def visit_IMAGE(self, type_):
         return "IMAGE"
         return "UNIQUEIDENTIFIER"
         
 colspecs = {
-    sqltypes.LargeBinary : SybaseImage,
-    sqltypes.Boolean : SybaseBoolean,
 }
 
 ischema_names = {
-    'integer' : sqltypes.INTEGER,
-    'unsigned int' : sqltypes.Integer,
-    'unsigned smallint' : sqltypes.SmallInteger,
-    'unsigned bigint' : sqltypes.BigInteger,
-    'bigint': sqltypes.BIGINT,
-    'smallint' : sqltypes.SMALLINT,
-    'tinyint' : sqltypes.SmallInteger,
-    'varchar' : sqltypes.VARCHAR,
-    'long varchar' : sqltypes.Text,
-    'char' : sqltypes.CHAR,
-    'decimal' : sqltypes.DECIMAL,
-    'numeric' : sqltypes.NUMERIC,
-    'float' : sqltypes.FLOAT,
-    'double' : sqltypes.Numeric,
-    'binary' : sqltypes.LargeBinary,
-    'long binary' : sqltypes.LargeBinary,
-    'varbinary' : sqltypes.LargeBinary,
-    'bit': SybaseBit,
-    'image' : SybaseImage,
-    'timestamp': sqltypes.TIMESTAMP,
-    'money': SybaseMoney,
-    'smallmoney': SybaseSmallMoney,
-    'uniqueidentifier': SybaseUniqueIdentifier,
+    'integer' : INTEGER,
+    'unsigned int' : INTEGER, # TODO: unsigned flags
+    'unsigned smallint' : SMALLINT, # TODO: unsigned flags
+    'unsigned bigint' : BIGINT, # TODO: unsigned flags
+    'bigint': BIGINT,
+    'smallint' : SMALLINT,
+    'tinyint' : TINYINT,
+    'varchar' : VARCHAR,
+    'long varchar' : TEXT, # TODO
+    'char' : CHAR,
+    'decimal' : DECIMAL,
+    'numeric' : NUMERIC,
+    'float' : FLOAT,
+    'double' : NUMERIC, # TODO
+    'binary' : BINARY,
+    'varbinary' : VARBINARY,
+    'bit': BIT,
+    'image' : IMAGE,
+    'timestamp': TIMESTAMP,
+    'money': MONEY,
+    'smallmoney': SMALLMONEY,
+    'uniqueidentifier': UNIQUEIDENTIFIER,
 
 }
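
The `ischema_names` mapping above is what reflection uses to turn a type-name string from the Sybase system catalog into a type class, falling back to a null type when the name is unrecognized. A minimal standalone sketch of that lookup (the class names here are illustrative stand-ins, not the real SQLAlchemy types):

```python
# Hypothetical stand-ins for the type classes in the mapping above.
class INTEGER(object): pass
class NUMERIC(object): pass
class NULLTYPE(object): pass   # fallback for unrecognized type names

ischema_names = {
    'integer': INTEGER,
    'double': NUMERIC,   # approximate mapping, as flagged by the TODOs above
}

def lookup_type(name):
    """Resolve a reflected type-name string against the mapping."""
    coltype = ischema_names.get(name)
    if coltype is None:
        # the real reflection code warns via util.warn() before falling back
        coltype = NULLTYPE
    return coltype
```

The dict-plus-fallback shape is the same one every SQLAlchemy dialect uses for reflection, which is why the keys are the literal lowercase strings the server reports.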
 
 
 class SybaseExecutionContext(default.DefaultExecutionContext):
+    _enable_identity_insert = False
+
+    def pre_exec(self):
+        if self.isinsert:
+            tbl = self.compiled.statement.table
+            seq_column = tbl._autoincrement_column
+            insert_has_sequence = seq_column is not None
+
+            if insert_has_sequence:
+                self._enable_identity_insert = (
+                    seq_column.key in self.compiled_parameters[0])
+            else:
+                self._enable_identity_insert = False
+
+            if self._enable_identity_insert:
+                self.cursor.execute(
+                    "SET IDENTITY_INSERT %s ON" %
+                    self.dialect.identifier_preparer.format_table(tbl))
 
     def post_exec(self):
-        if self.compiled.isinsert:
-            table = self.compiled.statement.table
-            # get the inserted values of the primary key
+        if self._enable_identity_insert:
+            self.cursor.execute(
+                "SET IDENTITY_INSERT %s OFF" %
+                self.dialect.identifier_preparer.format_table(
+                    self.compiled.statement.table))
 
-            # get any sequence IDs first (using @@identity)
-            self.cursor.execute("SELECT @@identity AS lastrowid")
-            row = self.cursor.fetchone()
-            lastrowid = int(row[0])
-            if lastrowid > 0:
-                # an IDENTITY was inserted, fetch it
-                # FIXME: always insert in front ? This only works if the IDENTITY is the first column, no ?!
-                if not hasattr(self, '_last_inserted_ids') or self._last_inserted_ids is None:
-                    self._last_inserted_ids = [lastrowid]
-                else:
-                    self._last_inserted_ids = [lastrowid] + self._last_inserted_ids[1:]
-
+    def get_lastrowid(self):
+        cursor = self.create_cursor()
+        cursor.execute("SELECT @@identity AS lastrowid")
+        lastrowid = cursor.fetchone()[0]
+        cursor.close()
+        return lastrowid
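
The `pre_exec()`/`post_exec()` pair above brackets an INSERT with `SET IDENTITY_INSERT ON/OFF` only when the statement explicitly supplies a value for the autoincrement column; the decision itself is a membership test against the compiled parameters. A standalone sketch of that decision (function and argument names are illustrative, not the real execution-context API):

```python
def needs_identity_insert(autoincrement_col, compiled_params):
    """True when an explicit value is being inserted into the IDENTITY
    column, which requires SET IDENTITY_INSERT <table> ON on Sybase."""
    if autoincrement_col is None:
        # no IDENTITY column on the table: nothing to toggle
        return False
    return autoincrement_col in compiled_params
```

When this returns True the dialect must remember to issue the matching `OFF` in `post_exec()`, which is why the flag is stored on the execution context rather than recomputed.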
 
 class SybaseSQLCompiler(compiler.SQLCompiler):
 
     def visit_mod(self, binary, **kw):
         return "MOD(%s, %s)" % (self.process(binary.left), self.process(binary.right))
 
-    def bindparam_string(self, name):
-        res = super(SybaseSQLCompiler, self).bindparam_string(name)
-        if name.lower().startswith('literal'):
-            res = 'STRING(%s)' % res
-        return res
-
     def get_select_precolumns(self, select):
         s = select._distinct and "DISTINCT " or ""
         if select._limit:
         # Limit in sybase is after the select keyword
         return ""
 
-    def visit_binary(self, binary):
+    def dont_visit_binary(self, binary):
         """Move bind parameters to the right-hand side of an operator, where possible."""
         if isinstance(binary.left, expression._BindParamClause) and binary.operator == operator.eq:
             return self.process(expression._BinaryExpression(binary.right, binary.left, binary.operator))
         else:
             return super(SybaseSQLCompiler, self).visit_binary(binary)
 
-    def label_select_column(self, select, column, asfrom):
+    def dont_label_select_column(self, select, column, asfrom):
         if isinstance(column, expression.Function):
             return column.label(None)
         else:
             return super(SybaseSQLCompiler, self).label_select_column(select, column, asfrom)
 
-    function_rewrites =  {'current_date': 'getdate',
-                         }
-    def visit_function(self, func):
-        func.name = self.function_rewrites.get(func.name, func.name)
-        res = super(SybaseSQLCompiler, self).visit_function(func)
-        if func.name.lower() == 'getdate':
-            # apply CAST operator
-            # FIXME: what about _pyodbc ?
-            cast = expression._Cast(func, SybaseDate_mxodbc)
-            # infinite recursion
-            # res = self.visit_cast(cast)
-            res = "CAST(%s AS %s)" % (res, self.process(cast.typeclause))
-        return res
+#    def visit_getdate_func(self, fn, **kw):
+         # TODO: need to cast? something ?
+#        pass
 
     def visit_extract(self, extract):
         field = self.extract_map.get(extract.field, extract.field)
 
 class SybaseDDLCompiler(compiler.DDLCompiler):
     def get_column_specification(self, column, **kwargs):
+        colspec = self.preparer.format_column(column) + " " + \
+            self.dialect.type_compiler.process(column.type)
 
-        colspec = self.preparer.format_column(column)
+        if column.table is None:
+            raise exc.InvalidRequestError(
+                "The Sybase dialect requires Table-bound "
+                "columns in order to generate DDL")
+        seq_col = column.table._autoincrement_column
 
-        if (not getattr(column.table, 'has_sequence', False)) and column.primary_key and \
-                column.autoincrement and isinstance(column.type, sqltypes.Integer):
-            if column.default is None or (isinstance(column.default, schema.Sequence) and column.default.optional):
-                column.sequence = schema.Sequence(column.name + '_seq')
+
 
-        if hasattr(column, 'sequence'):
-            column.table.has_sequence = column
-            #colspec += " numeric(30,0) IDENTITY"
-            colspec += " Integer IDENTITY"
+        # install a IDENTITY Sequence if we have an implicit IDENTITY column
+        if seq_col is column:
+            sequence = isinstance(column.default, sa_schema.Sequence) and column.default
+            if sequence:
+                start, increment = sequence.start or 1, sequence.increment or 1
+            else:
+                start, increment = 1, 1
+            if (start, increment) == (1, 1):
+                colspec += " IDENTITY"
+            else:
+                # TODO: need correct syntax for this
+                colspec += " IDENTITY(%s,%s)" % (start, increment)
         else:
-            colspec += " " + self.dialect.type_compiler.process(column.type)
+            if column.nullable is not None:
+                if not column.nullable or column.primary_key:
+                    colspec += " NOT NULL"
+                else:
+                    colspec += " NULL"
 
-        if not column.nullable:
-            colspec += " NOT NULL"
-
-        default = self.get_column_default_string(column)
-        if default is not None:
-            colspec += " DEFAULT " + default
+            default = self.get_column_default_string(column)
+            if default is not None:
+                colspec += " DEFAULT " + default
 
         return colspec
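
The IDENTITY branch above renders a bare `IDENTITY` for the default start/increment of (1, 1) and a parameterized form otherwise. Extracted into a standalone helper for illustration (the function name is hypothetical; the real method appends this suffix to the column spec, and the commit's own TODO notes the explicit-value syntax still needs verification):

```python
def identity_clause(start=None, increment=None):
    """Render the IDENTITY suffix for a Sybase column specification."""
    # mirror the Sequence handling above: missing values default to 1
    start, increment = start or 1, increment or 1
    if (start, increment) == (1, 1):
        return " IDENTITY"
    # TODO (per the dialect): confirm correct Sybase syntax for this form
    return " IDENTITY(%s,%s)" % (start, increment)
```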
 
     supports_unicode_statements = False
     supports_sane_rowcount = False
     supports_sane_multi_rowcount = False
+
+    supports_native_boolean = False
+    supports_unicode_binds = False
+    postfetch_lastrowid = True
+
     colspecs = colspecs
     ischema_names = ischema_names
 
     ddl_compiler = SybaseDDLCompiler
     preparer = SybaseIdentifierPreparer
 
-    ported_sqla_06 = False
+    def _get_default_schema_name(self, connection):
+        return connection.scalar(
+            text("SELECT user_name() as user_name",
+                 typemap={'user_name': Unicode})
+        )
 
-    schema_name = "dba"
-
-    def __init__(self, **params):
-        super(SybaseDialect, self).__init__(**params)
-        self.text_as_varchar = False
-
-    def last_inserted_ids(self):
-        return self.context.last_inserted_ids
-
-    def _get_default_schema_name(self, connection):
-        # TODO
-        return self.schema_name
+    @reflection.cache
+    def get_table_names(self, connection, schema=None, **kw):
+        if schema is None:
+            schema = self.default_schema_name
+        return self.table_names(connection, schema)
 
     def table_names(self, connection, schema):
-        """Ignore the schema and the charset for now."""
-        s = sql.select([tables.c.table_name],
-                       sql.not_(tables.c.table_name.like("SYS%")) and
-                       tables.c.creator >= 100
-                       )
-        rp = connection.execute(s)
-        return [row[0] for row in rp.fetchall()]
+
+        result = connection.execute(
+            text("select sysobjects.name from sysobjects, sysusers "
+                 "where sysobjects.uid=sysusers.uid and "
+                 "sysusers.name=:schemaname and "
+                 "sysobjects.type='U'",
+                 bindparams=[bindparam('schemaname', schema)])
+        )
+        return [r[0] for r in result]
 
     def has_table(self, connection, tablename, schema=None):
-        # FIXME: ignore schemas for sybase
-        s = sql.select([tables.c.table_name], tables.c.table_name == tablename)
-        return connection.execute(s).first() is not None
+        if schema is None:
+            schema = self.default_schema_name
+
+        result = connection.execute(
+            text("select sysobjects.name from sysobjects, sysusers "
+                 "where sysobjects.uid=sysusers.uid and "
+                 "sysobjects.name=:tablename and "
+                 "sysusers.name=:schemaname and "
+                 "sysobjects.type='U'",
+                 bindparams=[bindparam('tablename', tablename),
+                             bindparam('schemaname', schema)])
+        )
+        return result.scalar() is not None
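
Note that the reworked `has_table()` passes the table and schema names as bound parameters, where the removed `reflecttable()` code interpolated the table name directly into the SQL string. The same catalog-query-with-bound-parameters pattern, shown against the stdlib `sqlite3` driver purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")

def has_table(conn, tablename):
    """Check table existence via the catalog, binding the name as a
    parameter rather than formatting it into the statement."""
    row = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
        (tablename,),   # parameter binding, never string interpolation
    ).fetchone()
    return row is not None
```

Binding avoids both SQL injection and quoting bugs for table names containing unusual characters, which is the same motivation behind the `bindparam()` calls above.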
 
     def reflecttable(self, connection, table, include_columns):
-        # Get base columns
-        if table.schema is not None:
-            current_schema = table.schema
-        else:
-            current_schema = self.default_schema_name
+        raise NotImplementedError()
 
-        s = sql.select([columns, domains], tables.c.table_name==table.name, from_obj=[columns.join(tables).join(domains)], order_by=[columns.c.column_id])
-
-        c = connection.execute(s)
-        found_table = False
-        # makes sure we append the columns in the correct order
-        while True:
-            row = c.fetchone()
-            if row is None:
-                break
-            found_table = True
-            (name, type, nullable, charlen, numericprec, numericscale, default, primary_key, max_identity, table_id, column_id) = (
-                row[columns.c.column_name],
-                row[domains.c.domain_name],
-                row[columns.c.nulls] == 'Y',
-                row[columns.c.width],
-                row[domains.c.precision],
-                row[columns.c.scale],
-                row[columns.c.default],
-                row[columns.c.pkey] == 'Y',
-                row[columns.c.max_identity],
-                row[tables.c.table_id],
-                row[columns.c.column_id],
-            )
-            if include_columns and name not in include_columns:
-                continue
-
-            # FIXME: else problems with SybaseBinary(size)
-            if numericscale == 0:
-                numericscale = None
-
-            args = []
-            for a in (charlen, numericprec, numericscale):
-                if a is not None:
-                    args.append(a)
-            coltype = self.ischema_names.get(type, None)
-            if coltype == SybaseString and charlen == -1:
-                coltype = SybaseText()
-            else:
-                if coltype is None:
-                    util.warn("Did not recognize type '%s' of column '%s'" %
-                              (type, name))
-                    coltype = sqltypes.NULLTYPE
-                coltype = coltype(*args)
-            colargs = []
-            if default is not None:
-                colargs.append(schema.DefaultClause(sql.text(default)))
-
-            # any sequences ?
-            col = schema.Column(name, coltype, nullable=nullable, primary_key=primary_key, *colargs)
-            if int(max_identity) > 0:
-                col.sequence = schema.Sequence(name + '_identity')
-                col.sequence.start = int(max_identity)
-                col.sequence.increment = 1
-
-            # append the column
-            table.append_column(col)
-
-        # any foreign key constraint for this table ?
-        # note: no multi-column foreign keys are considered
-        s = "select st1.table_name, sc1.column_name, st2.table_name, sc2.column_name from systable as st1 join sysfkcol on st1.table_id=sysfkcol.foreign_table_id join sysforeignkey join systable as st2 on sysforeignkey.primary_table_id = st2.table_id join syscolumn as sc1 on sysfkcol.foreign_column_id=sc1.column_id and sc1.table_id=st1.table_id join syscolumn as sc2 on sysfkcol.primary_column_id=sc2.column_id and sc2.table_id=st2.table_id where st1.table_name='%(table_name)s';" % { 'table_name' : table.name }
-        c = connection.execute(s)
-        foreignKeys = {}
-        while True:
-            row = c.fetchone()
-            if row is None:
-                break
-            (foreign_table, foreign_column, primary_table, primary_column) = (
-                row[0], row[1], row[2], row[3],
-            )
-            if not primary_table in foreignKeys.keys():
-                foreignKeys[primary_table] = [['%s' % (foreign_column)], ['%s.%s'%(primary_table, primary_column)]]
-            else:
-                foreignKeys[primary_table][0].append('%s'%(foreign_column))
-                foreignKeys[primary_table][1].append('%s.%s'%(primary_table, primary_column))
-        for primary_table in foreignKeys.iterkeys():
-            #table.append_constraint(schema.ForeignKeyConstraint(['%s.%s'%(foreign_table, foreign_column)], ['%s.%s'%(primary_table,primary_column)]))
-            table.append_constraint(schema.ForeignKeyConstraint(foreignKeys[primary_table][0], foreignKeys[primary_table][1], link_to_name=True))
-
-        if not found_table:
-            raise exc.NoSuchTableError(table.name)
-

lib/sqlalchemy/dialects/sybase/mxodbc.py

+"""
+Support for Sybase via mxodbc.
+
+This dialect is a stub only and is likely non-functional at this time.
+
+"""
 from sqlalchemy.dialects.sybase.base import SybaseDialect, SybaseExecutionContext
 from sqlalchemy.connectors.mxodbc import MxODBCConnector
 
 class SybaseExecutionContext_mxodbc(SybaseExecutionContext):
     pass
 
-class Sybase_mxodbc(MxODBCConnector, SybaseDialect):
+class SybaseDialect_mxodbc(MxODBCConnector, SybaseDialect):
     execution_ctx_cls = SybaseExecutionContext_mxodbc
 
-dialect = Sybase_mxodbc
+dialect = SybaseDialect_mxodbc

lib/sqlalchemy/dialects/sybase/pyodbc.py

+"""
+Support for Sybase via pyodbc.
+