Schema Definition Language

Describing Databases with MetaData

The core of SQLAlchemy's query and object mapping operations is supported by database metadata, which is composed of Python objects that describe tables and other schema-level objects. These objects are at the core of three major types of operations - issuing CREATE and DROP statements (known as DDL), constructing SQL queries, and expressing information about structures that already exist within the database.

Database metadata can be expressed by explicitly naming the various components and their properties, using constructs such as :class:`~sqlalchemy.schema.Table`, :class:`~sqlalchemy.schema.Column`, :class:`~sqlalchemy.schema.ForeignKey` and :class:`~sqlalchemy.schema.Sequence`, all of which are imported from the sqlalchemy.schema package. It can also be generated by SQLAlchemy using a process called reflection, which means you start with a single object such as :class:`~sqlalchemy.schema.Table`, assign it a name, and then instruct SQLAlchemy to load all the additional information related to that name from a particular engine source.

A key feature of SQLAlchemy's database metadata constructs is that they are designed to be used in a declarative style which closely resembles that of real DDL. They are therefore most intuitive to those who have some background in creating real schema generation scripts.

A collection of metadata entities is stored in an object aptly named :class:`~sqlalchemy.schema.MetaData`:

from sqlalchemy import *

metadata = MetaData()

:class:`~sqlalchemy.schema.MetaData` is a container object that keeps together many different features of a database (or multiple databases) being described.

To represent a table, use the :class:`~sqlalchemy.schema.Table` class. Its two primary arguments are the table name, then the :class:`~sqlalchemy.schema.MetaData` object which it will be associated with. The remaining positional arguments are mostly :class:`~sqlalchemy.schema.Column` objects describing each column:

user = Table('user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('email_address', String(60)),
    Column('password', String(20), nullable=False)
)

Above, a table called user is described, which contains four columns. The primary key of the table consists of the user_id column. Multiple columns may be assigned the primary_key=True flag, which denotes a multi-column primary key known as a composite primary key, sketched below.
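As a minimal sketch of that case (the group_member table here is hypothetical and not used in later examples):

group_member = Table('group_member', metadata,
    # both columns are flagged primary_key=True, together forming
    # a composite primary key
    Column('group_id', Integer, primary_key=True),
    Column('user_id', Integer, primary_key=True)
)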

Note also that each column describes its datatype using objects corresponding to genericized types, such as :class:`~sqlalchemy.types.Integer` and :class:`~sqlalchemy.types.String`. SQLAlchemy features dozens of types of varying levels of specificity as well as the ability to create custom types. Documentation on the type system can be found at :ref:`types`.

Accessing Tables and Columns

The :class:`~sqlalchemy.schema.MetaData` object contains all of the schema constructs we've associated with it. It supports a few methods of accessing these table objects, such as the sorted_tables accessor which returns a list of each :class:`~sqlalchemy.schema.Table` object in order of foreign key dependency (that is, each table is preceded by all tables which it references):

>>> for t in metadata.sorted_tables:
...    print(t.name)
user
user_preference
invoice
invoice_item

In most cases, individual :class:`~sqlalchemy.schema.Table` objects have been explicitly declared, and these objects are typically accessed directly as module-level variables in an application. Once a :class:`~sqlalchemy.schema.Table` has been defined, it has a full set of accessors which allow inspection of its properties. Given the following :class:`~sqlalchemy.schema.Table` definition:

employees = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60), nullable=False),
    Column('employee_dept', Integer, ForeignKey("departments.department_id"))
)

Note the :class:`~sqlalchemy.schema.ForeignKey` object used in this table - this construct defines a reference to a remote table, and is fully described in :ref:`metadata_foreignkeys`. Methods of accessing information about this table include:

# access the column "employee_id":
employees.columns.employee_id

# or just
employees.c.employee_id

# via string
employees.c['employee_id']

# iterate through all columns
for c in employees.c:
    print(c)

# get the table's primary key columns
for primary_key in employees.primary_key:
    print(primary_key)

# get the table's foreign key objects:
for fkey in employees.foreign_keys:
    print(fkey)

# access the table's MetaData:
employees.metadata

# access the table's bound Engine or Connection, if its MetaData is bound:
employees.bind

# access a column's name, type, nullable, primary key, foreign key
employees.c.employee_id.name
employees.c.employee_id.type
employees.c.employee_id.nullable
employees.c.employee_id.primary_key
employees.c.employee_dept.foreign_keys

# get the "key" of a column, which defaults to its name, but can
# be any user-defined string:
employees.c.employee_name.key

# access a column's table:
employees.c.employee_id.table is employees

# get the table related by a foreign key
list(employees.c.employee_dept.foreign_keys)[0].column.table

Creating and Dropping Database Tables

Once you've defined some :class:`~sqlalchemy.schema.Table` objects, and assuming you're working with a brand new database, one thing you might want to do is issue CREATE statements for those tables and their related constructs (as an aside, it's also quite possible that you don't want to do this, if you already have some preferred methodology such as tools included with your database or an existing scripting system - if that's the case, feel free to skip this section - SQLAlchemy has no requirement that it be used to create your tables).

The usual way to issue CREATE is to use :func:`~sqlalchemy.schema.MetaData.create_all` on the :class:`~sqlalchemy.schema.MetaData` object. This method will issue queries that first check for the existence of each individual table, and if not found will issue the CREATE statements:

engine = create_engine('sqlite:///:memory:')

metadata = MetaData()

user = Table('user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('email_address', String(60), key='email'),
    Column('password', String(20), nullable=False)
)

user_prefs = Table('user_prefs', metadata,
    Column('pref_id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
    Column('pref_name', String(40), nullable=False),
    Column('pref_value', String(100))
)

{sql}metadata.create_all(engine)
PRAGMA table_info(user){}
CREATE TABLE user(
        user_id INTEGER NOT NULL PRIMARY KEY,
        user_name VARCHAR(16) NOT NULL,
        email_address VARCHAR(60),
        password VARCHAR(20) NOT NULL
)
PRAGMA table_info(user_prefs){}
CREATE TABLE user_prefs(
        pref_id INTEGER NOT NULL PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES user(user_id),
        pref_name VARCHAR(40) NOT NULL,
        pref_value VARCHAR(100)
)

:func:`~sqlalchemy.schema.MetaData.create_all` creates foreign key constraints between tables usually inline with the table definition itself, and for this reason it also generates the tables in order of their dependency. There are options to change this behavior such that ALTER TABLE is used instead.

Dropping all tables is similarly achieved using the :func:`~sqlalchemy.schema.MetaData.drop_all` method. This method does the exact opposite of :func:`~sqlalchemy.schema.MetaData.create_all` - the presence of each table is checked first, and tables are dropped in reverse order of dependency.
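Continuing the example above, a minimal sketch of the corresponding teardown:

# the presence of each table is checked first; user_prefs is
# dropped before user, in reverse order of dependency
metadata.drop_all(engine)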

Creating and dropping individual tables can be done via the create() and drop() methods of :class:`~sqlalchemy.schema.Table`. By default, these methods issue the CREATE or DROP regardless of whether the table is already present:

engine = create_engine('postgresql://scott:tiger@localhost/test')

meta = MetaData()

employees = Table('employees', meta,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60), nullable=False, key='name'),
    Column('employee_dept', Integer, ForeignKey("departments.department_id"))
)
{sql}employees.create(engine)
CREATE TABLE employees(
employee_id SERIAL NOT NULL PRIMARY KEY,
employee_name VARCHAR(60) NOT NULL,
employee_dept INTEGER REFERENCES departments(department_id)
)
{}

The drop() method issues the DROP statement:

{sql}employees.drop(engine)
DROP TABLE employees
{}

To enable the "check first for the table existing" logic, add the checkfirst=True argument to create() or drop():

employees.create(engine, checkfirst=True)   # emit CREATE only if the table is absent
employees.drop(engine, checkfirst=False)    # emit DROP unconditionally

Altering Schemas through Migrations

While SQLAlchemy directly supports emitting CREATE and DROP statements for schema constructs, the ability to alter those constructs, usually via the ALTER statement as well as other database-specific constructs, is outside of the scope of SQLAlchemy itself. While it's easy enough to emit ALTER statements and similar by hand, such as by passing a string to :meth:`.Connection.execute` or by using the :class:`.DDL` construct, it's a common practice to automate the maintenance of database schemas in relation to application code using schema migration tools.

There are two major migration tools available for SQLAlchemy:

  • Alembic - Written by the author of SQLAlchemy, Alembic features a highly customizable environment and a minimalistic usage pattern, supporting such features as transactional DDL, automatic generation of "candidate" migrations, an "offline" mode which generates SQL scripts, and support for branch resolution.
  • SQLAlchemy-Migrate - The original migration tool for SQLAlchemy, SQLAlchemy-Migrate is widely used and continues under active development. SQLAlchemy-Migrate includes features such as SQL script generation, ORM class generation, ORM model comparison, and extensive support for SQLite migrations.

Specifying the Schema Name

Some databases support the concept of multiple schemas. A :class:`~sqlalchemy.schema.Table` can reference this by specifying the schema keyword argument:

financial_info = Table('financial_info', meta,
    Column('id', Integer, primary_key=True),
    Column('value', String(100), nullable=False),
    schema='remote_banks'
)

Within the :class:`~sqlalchemy.schema.MetaData` collection, this table will be identified by the combination of financial_info and remote_banks. If another table called financial_info is referenced without the remote_banks schema, it will refer to a different :class:`~sqlalchemy.schema.Table`. :class:`~sqlalchemy.schema.ForeignKey` objects can specify references to columns in this table using the form remote_banks.financial_info.id.

The schema argument should be used for any name qualifiers required, including Oracle's "owner" attribute and similar. It can also accommodate a dotted name for longer schemes:

schema="dbo.scott"

Backend-Specific Options

:class:`~sqlalchemy.schema.Table` supports database-specific options. For example, MySQL has different table backend types, including "MyISAM" and "InnoDB". This can be expressed with :class:`~sqlalchemy.schema.Table` using mysql_engine:

addresses = Table('engine_email_addresses', meta,
    Column('address_id', Integer, primary_key=True),
    Column('remote_user_id', Integer, ForeignKey(users.c.user_id)),
    Column('email_address', String(20)),
    mysql_engine='InnoDB'
)

Other backends may support table-level options as well - these would be described in the individual documentation sections for each dialect.

Column, Table, MetaData API

Reflecting Database Objects

A :class:`~sqlalchemy.schema.Table` object can be instructed to load information about itself from the corresponding database schema object already existing within the database. This process is called reflection. In the simplest case you need only specify the table name, a :class:`~sqlalchemy.schema.MetaData` object, and the autoload=True flag. If the :class:`~sqlalchemy.schema.MetaData` is not persistently bound, also add the autoload_with argument:

>>> messages = Table('messages', meta, autoload=True, autoload_with=engine)
>>> [c.name for c in messages.columns]
['message_id', 'message_name', 'date']

The above operation will use the given engine to query the database for information about the messages table, and will then generate :class:`~sqlalchemy.schema.Column`, :class:`~sqlalchemy.schema.ForeignKey`, and other objects corresponding to this information as though the :class:`~sqlalchemy.schema.Table` object were hand-constructed in Python.

When tables are reflected, if a given table references another one via foreign key, a second :class:`~sqlalchemy.schema.Table` object representing the referenced table is created within the :class:`~sqlalchemy.schema.MetaData` object. Below, assume the table shopping_cart_items references a table named shopping_carts. Reflecting the shopping_cart_items table has the effect that the shopping_carts table will also be loaded:

>>> shopping_cart_items = Table('shopping_cart_items', meta, autoload=True, autoload_with=engine)
>>> 'shopping_carts' in meta.tables
True

:class:`~sqlalchemy.schema.MetaData` has an interesting "singleton-like" behavior, such that if you requested both tables individually, :class:`~sqlalchemy.schema.MetaData` will ensure that exactly one :class:`~sqlalchemy.schema.Table` object is created for each distinct table name. The :class:`~sqlalchemy.schema.Table` constructor returns the already-existing :class:`~sqlalchemy.schema.Table` object if one is present with the given name. For example, below we can access the already generated shopping_carts table just by naming it:

shopping_carts = Table('shopping_carts', meta)

Of course, it's a good idea to use autoload=True with the above table regardless, so that the table's attributes will be loaded if they have not been already. The autoload operation only occurs for the table if it hasn't already been loaded; once loaded, new calls to :class:`~sqlalchemy.schema.Table` with the same name will not re-issue any reflection queries.
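A minimal sketch of this idempotent behavior, continuing from the reflection example above:

# same name against the same MetaData - the existing Table is returned,
# and no new reflection queries are emitted
carts_again = Table('shopping_carts', meta, autoload=True, autoload_with=engine)
assert carts_again is shopping_carts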

Overriding Reflected Columns

Individual columns can be overridden with explicit values when reflecting tables; this is handy for specifying custom datatypes, constraints such as primary keys that may not be configured within the database, etc.:

>>> mytable = Table('mytable', meta,
... Column('id', Integer, primary_key=True),   # override reflected 'id' to have primary key
... Column('mydata', Unicode(50)),    # override reflected 'mydata' to be Unicode
... autoload=True)

Reflecting Views

The reflection system can also reflect views. Basic usage is the same as that of a table:

my_view = Table("some_view", metadata, autoload=True)

Above, my_view is a :class:`~sqlalchemy.schema.Table` object with :class:`~sqlalchemy.schema.Column` objects representing the names and types of each column within the view "some_view".

Usually, it's desired to have at least a primary key constraint when reflecting a view, if not foreign keys as well. View reflection doesn't extrapolate these constraints.

Use the "override" technique for this, specifying explicitly those columns which are part of the primary key or have foreign key constraints:

my_view = Table("some_view", metadata,
                Column("view_id", Integer, primary_key=True),
                Column("related_thing", Integer, ForeignKey("othertable.thing_id")),
                autoload=True
)

Reflecting All Tables at Once

The :class:`~sqlalchemy.schema.MetaData` object can also get a listing of tables and reflect the full set. This is achieved by using the :func:`~sqlalchemy.schema.MetaData.reflect` method. After calling it, all located tables are present within the :class:`~sqlalchemy.schema.MetaData` object's dictionary of tables:

meta = MetaData()
meta.reflect(bind=someengine)
users_table = meta.tables['users']
addresses_table = meta.tables['addresses']

:func:`~sqlalchemy.schema.MetaData.reflect` also provides a handy way to delete all the rows in a database:

meta = MetaData()
meta.reflect(bind=someengine)
for table in reversed(meta.sorted_tables):
    someengine.execute(table.delete())

Fine Grained Reflection with Inspector

A low level interface which provides a backend-agnostic system of loading lists of schema, table, column, and constraint descriptions from a given database is also available. This is known as the "Inspector":

from sqlalchemy import create_engine
from sqlalchemy.engine import reflection
engine = create_engine('...')
insp = reflection.Inspector.from_engine(engine)
print(insp.get_table_names())
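The :class:`~sqlalchemy.engine.reflection.Inspector` offers similar accessors for most schema-level constructs; a brief sketch, assuming a table named 'users' exists in the target database:

# column descriptions, as a list of dictionaries
print(insp.get_columns('users'))

# foreign key and primary key information for the table
print(insp.get_foreign_keys('users'))
print(insp.get_primary_keys('users'))

# names of views present in the database
print(insp.get_view_names())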

Column Insert/Update Defaults

SQLAlchemy provides a very rich feature set regarding column-level events which take place during INSERT and UPDATE statements. Options include:

  • Scalar values used as defaults during INSERT and UPDATE operations
  • Python functions which execute upon INSERT and UPDATE operations
  • SQL expressions which are embedded in INSERT statements (or in some cases execute beforehand)
  • SQL expressions which are embedded in UPDATE statements
  • Server side default values used during INSERT
  • Markers for server-side triggers used during UPDATE

The general rule for all insert/update defaults is that they only take effect if no value for a particular column is passed as an execute() parameter; otherwise, the given value is used.

Scalar Defaults

The simplest kind of default is a scalar value used as the default value of a column:

Table("mytable", meta,
    Column("somecolumn", Integer, default=12)
)

Above, the value "12" will be bound as the column value during an INSERT if no other value is supplied.
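A sketch of the rule stated earlier, assuming the table above were assigned to a variable mytable and given a :class:`~sqlalchemy.engine.Connection` named conn (both names hypothetical here):

# 'somecolumn' omitted - the default of 12 is bound into the INSERT
conn.execute(mytable.insert())

# 'somecolumn' supplied - the given value is used; the default does not fire
conn.execute(mytable.insert(), somecolumn=99)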

A scalar value may also be associated with an UPDATE statement, though this is not very common (as UPDATE statements are usually looking for dynamic defaults):

Table("mytable", meta,
    Column("somecolumn", Integer, onupdate=25)
)

Python-Executed Functions

The default and onupdate keyword arguments also accept Python functions. These functions are invoked at the time of insert or update if no other value for that column is supplied, and the value returned is used for the column's value. Below illustrates a crude "sequence" that assigns an incrementing counter to a primary key column:

# a function which counts upwards
i = 0
def mydefault():
    global i
    i += 1
    return i

t = Table("mytable", meta,
    Column('id', Integer, primary_key=True, default=mydefault),
)

It should be noted that for real "incrementing sequence" behavior, the built-in capabilities of the database should normally be used, which may include sequence objects or other autoincrementing capabilities. For primary key columns, SQLAlchemy will in most cases use these capabilities automatically. See the API documentation for :class:`~sqlalchemy.schema.Column` including the autoincrement flag, as well as the section on :class:`~sqlalchemy.schema.Sequence` later in this chapter for background on standard primary key generation techniques.

To illustrate onupdate, we assign the Python datetime function now to the onupdate attribute:

import datetime

t = Table("mytable", meta,
    Column('id', Integer, primary_key=True),

    # define 'last_updated' to be populated with datetime.now()
    Column('last_updated', DateTime, onupdate=datetime.datetime.now),
)

When an update statement executes and no value is passed for last_updated, the datetime.datetime.now() Python function is executed and its return value is used as the value for last_updated. Notice that we provide now as the function itself without calling it (i.e. there are no parentheses following it) - SQLAlchemy will execute the function at the time the statement executes.

Context-Sensitive Default Functions

The Python functions used by default and onupdate may also make use of the current statement's context in order to determine a value. The context of a statement is an internal SQLAlchemy object which contains all information about the statement being executed, including its source expression, the parameters associated with it and the cursor. The typical use case for this context with regards to default generation is to have access to the other values being inserted or updated on the row. To access the context, provide a function that accepts a single context argument:

def mydefault(context):
    return context.current_parameters['counter'] + 12

t = Table('mytable', meta,
    Column('counter', Integer),
    Column('counter_plus_twelve', Integer, default=mydefault, onupdate=mydefault)
)

Above we illustrate a default function which will execute for all INSERT and UPDATE statements where a value for counter_plus_twelve was otherwise not provided, and the value will be that of whatever value is present in the execution for the counter column, plus the number 12.

While the context object passed to the default function has many attributes, the current_parameters member is a special member provided only during the execution of a default function for the purposes of deriving defaults from its existing values. For a single statement that is executing many sets of bind parameters, the user-defined function is called for each set of parameters, and current_parameters will be provided with each individual parameter set for each execution.
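For example, a sketch of an "executemany" against the table above, assuming a :class:`~sqlalchemy.engine.Connection` named conn:

# mydefault() is invoked once per parameter set, and current_parameters
# reflects each individual row's 'counter' value
conn.execute(t.insert(), [
    {'counter': 5},     # counter_plus_twelve is inserted as 17
    {'counter': 10},    # counter_plus_twelve is inserted as 22
])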

SQL Expressions

The "default" and "onupdate" keywords may also be passed SQL expressions, including select statements or direct function calls:

t = Table("mytable", meta,
    Column('id', Integer, primary_key=True),

    # define 'create_date' to default to now()
    Column('create_date', DateTime, default=func.now()),

    # define 'key' to pull its default from the 'keyvalues' table
    Column('key', String(20),
           default=keyvalues.select().where(keyvalues.c.type == 'type1').limit(1)),

    # define 'last_modified' to use the current_timestamp SQL function on update
    Column('last_modified', DateTime, onupdate=func.utc_timestamp())
    )

Above, the create_date column will be populated with the result of the now() SQL function (which, depending on backend, compiles into NOW() or CURRENT_TIMESTAMP in most cases) during an INSERT statement, and the key column with the result of a SELECT subquery from another table. The last_modified column will be populated with the value of UTC_TIMESTAMP(), a function specific to MySQL, when an UPDATE statement is emitted for this table.

Note that when using func functions, unlike the Python-executed functions above, we do call the function, i.e. with parentheses "()" - this is because what we want in this case is the return value of the function, which is the SQL expression construct that will be rendered into the INSERT or UPDATE statement.

The above SQL functions are usually executed "inline" with the INSERT or UPDATE statement being executed, meaning, a single statement is executed which embeds the given expressions or subqueries within the VALUES or SET clause of the statement. In some cases, however, the function is "pre-executed" in a SELECT statement of its own beforehand. This happens when all of the following are true:

  • the column is a primary key column
  • the database dialect does not support a usable cursor.lastrowid accessor (or equivalent); this currently includes PostgreSQL, Oracle, and Firebird, as well as some MySQL dialects.
  • the dialect does not support the "RETURNING" clause or similar, or the implicit_returning flag is set to False for the dialect. Dialects which support RETURNING currently include Postgresql, Oracle, Firebird, and MS-SQL.
  • the statement is a single execution, i.e. only supplies one set of parameters and doesn't use "executemany" behavior
  • the inline=True flag is not set on the :class:`~sqlalchemy.sql.expression.Insert()` or :class:`~sqlalchemy.sql.expression.Update()` construct, and the statement has not defined an explicit returning() clause.

Whether or not the default generation clause "pre-executes" is not something that normally needs to be considered, unless it is being addressed for performance reasons.

When the statement is executed with a single set of parameters (that is, it is not an "executemany" style execution), the returned :class:`~sqlalchemy.engine.ResultProxy` will contain a collection accessible via result.postfetch_cols() which contains a list of all :class:`~sqlalchemy.schema.Column` objects which had an inline-executed default. Similarly, all parameters which were bound to the statement, including all Python and SQL expressions which were pre-executed, are present in the last_inserted_params() or last_updated_params() collections on :class:`~sqlalchemy.engine.ResultProxy`. The inserted_primary_key collection contains a list of primary key values for the row inserted (a list so that single-column and composite-column primary keys are represented in the same format).
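A brief sketch of these accessors, assuming a :class:`~sqlalchemy.engine.Connection` named conn and the table t from the previous example:

result = conn.execute(t.insert())

# primary key of the newly inserted row, as a list
print(result.inserted_primary_key)

# all parameters bound to the statement, including pre-executed defaults
print(result.last_inserted_params())

# columns whose defaults were rendered inline and would require a
# "post-fetch" to retrieve their values
print(result.postfetch_cols())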

Server Side Defaults

A variant on the SQL expression default is the server_default, which gets placed in the CREATE TABLE statement during a create() operation:

t = Table('test', meta,
    Column('abc', String(20), server_default='abc'),
    Column('created_at', DateTime, server_default=text("sysdate"))
)

A create call for the above table will produce:

CREATE TABLE test (
    abc varchar(20) default 'abc',
    created_at datetime default sysdate
)

The behavior of server_default is similar to that of a regular SQL default; if it's placed on a primary key column for a database which doesn't have a way to "postfetch" the ID, and the statement is not "inlined", the SQL expression is pre-executed; otherwise, SQLAlchemy lets the default fire off on the database side normally.

Triggered Columns

Columns with values set by a database trigger or other external process may be called out using :class:`.FetchedValue` as a marker:

t = Table('test', meta,
    Column('abc', String(20), server_default=FetchedValue()),
    Column('def', String(20), server_onupdate=FetchedValue())
)

These markers do not emit a "default" clause when the table is created, however they do set the same internal flags as a static server_default clause, providing hints to higher-level tools that a "post-fetch" of these rows should be performed after an insert or update.

Note

It's generally not appropriate to use :class:`.FetchedValue` in conjunction with a primary key column, particularly when using the ORM or any other scenario where the :attr:`.ResultProxy.inserted_primary_key` attribute is required. This is because the "post-fetch" operation requires that the primary key value already be available, so that the row can be selected on its primary key.

For a server-generated primary key value, all databases provide special accessors or other techniques in order to acquire the "last inserted primary key" column of a table. These mechanisms aren't affected by the presence of :class:`.FetchedValue`. For special situations where triggers are used to generate primary key values, and the database in use does not support the RETURNING clause, it may be necessary to forego the usage of the trigger and instead apply the SQL expression or function as a "pre execute" expression:

t = Table('test', meta,
        Column('abc', MyType, default=func.generate_new_value(), primary_key=True)
)

Where above, when :meth:`.Table.insert` is used, the func.generate_new_value() expression will be pre-executed in the context of a scalar SELECT statement, and the new value will be applied to the subsequent INSERT, while at the same time being made available to the :attr:`.ResultProxy.inserted_primary_key` attribute.

Defining Sequences

SQLAlchemy represents database sequences using the :class:`~sqlalchemy.schema.Sequence` object, which is considered to be a special case of "column default". It only has an effect on databases which have explicit support for sequences, which currently includes Postgresql, Oracle, and Firebird. The :class:`~sqlalchemy.schema.Sequence` object is otherwise ignored.

The :class:`~sqlalchemy.schema.Sequence` may be placed on any column as a "default" generator to be used during INSERT operations, and can also be configured to fire off during UPDATE operations if desired. It is most commonly used in conjunction with a single integer primary key column:

table = Table("cartitems", meta,
    Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
    Column("description", String(40)),
    Column("createdate", DateTime())
)

Where above, the table "cartitems" is associated with a sequence named "cart_id_seq". When INSERT statements take place for "cartitems", and no value is passed for the "cart_id" column, the "cart_id_seq" sequence will be used to generate a value.

When the :class:`~sqlalchemy.schema.Sequence` is associated with a table, CREATE and DROP statements issued for that table will also issue CREATE/DROP for the sequence object as well, thus "bundling" the sequence object with its parent table.

The :class:`~sqlalchemy.schema.Sequence` object also implements special functionality to accommodate Postgresql's SERIAL datatype. The SERIAL type in PG automatically generates a sequence that is used implicitly during inserts. This means that if a :class:`~sqlalchemy.schema.Table` object defines a :class:`~sqlalchemy.schema.Sequence` on its primary key column so that it works with Oracle and Firebird, the :class:`~sqlalchemy.schema.Sequence` would get in the way of the "implicit" sequence that PG would normally use. For this use case, add the flag optional=True to the :class:`~sqlalchemy.schema.Sequence` object - this indicates that the :class:`~sqlalchemy.schema.Sequence` should only be used if the database provides no other option for generating primary key identifiers.
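A sketch of the optional=True form, adapting the "cartitems" example from above:

table = Table("cartitems", meta,
    Column("cart_id", Integer,
           # used on Oracle and Firebird; skipped on Postgresql, where the
           # SERIAL datatype's implicit sequence takes over
           Sequence('cart_id_seq', optional=True),
           primary_key=True),
    Column("description", String(40)),
    Column("createdate", DateTime())
)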

The :class:`~sqlalchemy.schema.Sequence` object also has the ability to be executed standalone like a SQL expression, which has the effect of calling its "next value" function:

seq = Sequence('some_sequence')
nextid = connection.execute(seq)

Default Objects API

Defining Constraints and Indexes

Defining Foreign Keys

A foreign key in SQL is a table-level construct that constrains one or more columns in that table to only allow values that are present in a different set of columns, typically but not always located on a different table. We call the columns which are constrained the foreign key columns and the columns which they are constrained towards the referenced columns. The referenced columns almost always define the primary key for their owning table, though there are exceptions to this. The foreign key is the "joint" that connects together pairs of rows which have a relationship with each other, and SQLAlchemy assigns very deep importance to this concept in virtually every area of its operation.

In SQLAlchemy as well as in DDL, foreign key constraints can be defined as additional attributes within the table clause, or for single-column foreign keys they may optionally be specified within the definition of a single column. The single column foreign key is more common, and at the column level is specified by constructing a :class:`~sqlalchemy.schema.ForeignKey` object as an argument to a :class:`~sqlalchemy.schema.Column` object:

user_preference = Table('user_preference', metadata,
    Column('pref_id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
    Column('pref_name', String(40), nullable=False),
    Column('pref_value', String(100))
)

Above, we define a new table user_preference for which each row must contain a value in the user_id column that also exists in the user table's user_id column.

The argument to :class:`~sqlalchemy.schema.ForeignKey` is most commonly a string of the form <tablename>.<columnname>, or for a table in a remote schema or "owner" of the form <schemaname>.<tablename>.<columnname>. It may also be an actual :class:`~sqlalchemy.schema.Column` object, which as we'll see later is accessed from an existing :class:`~sqlalchemy.schema.Table` object via its c collection:

ForeignKey(user.c.user_id)

The advantage to using a string is that the in-python linkage between user and user_preference is resolved only when first needed, so that table objects can be easily spread across multiple modules and defined in any order.
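For example, the following sketch declares the tables in the "wrong" order, which works because the string reference is resolved only when first needed:

# references 'user' by name before that Table object exists
user_preference = Table('user_preference', metadata,
    Column('pref_id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False)
)

user = Table('user', metadata,
    Column('user_id', Integer, primary_key=True)
)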

Foreign keys may also be defined at the table level, using the :class:`~sqlalchemy.schema.ForeignKeyConstraint` object. This object can describe a single- or multi-column foreign key. A multi-column foreign key is known as a composite foreign key, and almost always references a table that has a composite primary key. Below we define a table invoice which has a composite primary key:

invoice = Table('invoice', metadata,
    Column('invoice_id', Integer, primary_key=True),
    Column('ref_num', Integer, primary_key=True),
    Column('description', String(60), nullable=False)
)

And then a table invoice_item with a composite foreign key referencing invoice:

invoice_item = Table('invoice_item', metadata,
    Column('item_id', Integer, primary_key=True),
    Column('item_name', String(60), nullable=False),
    Column('invoice_id', Integer, nullable=False),
    Column('ref_num', Integer, nullable=False),
    ForeignKeyConstraint(['invoice_id', 'ref_num'], ['invoice.invoice_id', 'invoice.ref_num'])
)

It's important to note that the :class:`~sqlalchemy.schema.ForeignKeyConstraint` is the only way to define a composite foreign key. While we could also have placed individual :class:`~sqlalchemy.schema.ForeignKey` objects on both the invoice_item.invoice_id and invoice_item.ref_num columns, SQLAlchemy would not be aware that these two values should be paired together - it would be two individual foreign key constraints instead of a single composite foreign key referencing two columns.

Creating/Dropping Foreign Key Constraints via ALTER

In all the above examples, the :class:`~sqlalchemy.schema.ForeignKey` object causes the "REFERENCES" keyword to be added inline to a column definition within a "CREATE TABLE" statement when :func:`~sqlalchemy.schema.MetaData.create_all` is issued, and :class:`~sqlalchemy.schema.ForeignKeyConstraint` invokes the "CONSTRAINT" keyword inline with "CREATE TABLE". There are some cases where this is undesirable, particularly when two tables reference each other mutually, each with a foreign key referencing the other. In such a situation at least one of the foreign key constraints must be generated after both tables have been built. To support such a scheme, :class:`~sqlalchemy.schema.ForeignKey` and :class:`~sqlalchemy.schema.ForeignKeyConstraint` offer the flag use_alter=True. When using this flag, the constraint will be generated using a definition similar to "ALTER TABLE <tablename> ADD CONSTRAINT <name> ...". Since a name is required, the name attribute must also be specified. For example:

node = Table('node', meta,
    Column('node_id', Integer, primary_key=True),
    Column('primary_element', Integer,
        ForeignKey('element.element_id', use_alter=True, name='fk_node_element_id')
    )
)

element = Table('element', meta,
    Column('element_id', Integer, primary_key=True),
    Column('parent_node_id', Integer),
    ForeignKeyConstraint(
        ['parent_node_id'],
        ['node.node_id'],
        use_alter=True,
        name='fk_element_parent_node_id'
    )
)

ON UPDATE and ON DELETE

Most databases support cascading of foreign key values: when a parent row is updated, the new value is placed in child rows, and when a parent row is deleted, all corresponding child rows are set to null or deleted. In data definition language these are specified using phrases like "ON UPDATE CASCADE", "ON DELETE CASCADE", and "ON DELETE SET NULL", corresponding to foreign key constraints. The phrase after "ON UPDATE" or "ON DELETE" may also allow other phrases that are specific to the database in use. The :class:`~sqlalchemy.schema.ForeignKey` and :class:`~sqlalchemy.schema.ForeignKeyConstraint` objects support the generation of this clause via the onupdate and ondelete keyword arguments. The value is any string which will be output after the appropriate "ON UPDATE" or "ON DELETE" phrase:

child = Table('child', meta,
    Column('id', Integer,
            ForeignKey('parent.id', onupdate="CASCADE", ondelete="CASCADE"),
            primary_key=True
    )
)

composite = Table('composite', meta,
    Column('id', Integer, primary_key=True),
    Column('rev_id', Integer),
    Column('note_id', Integer),
    ForeignKeyConstraint(
                ['rev_id', 'note_id'],
                ['revisions.id', 'revisions.note_id'],
                onupdate="CASCADE", ondelete="SET NULL"
    )
)

Note that these clauses are not supported on SQLite, and require InnoDB tables when used with MySQL. They may also not be supported on other databases.

UNIQUE Constraint

Unique constraints can be created anonymously on a single column using the unique keyword on :class:`~sqlalchemy.schema.Column`. Explicitly named unique constraints and/or those with multiple columns are created via the :class:`~sqlalchemy.schema.UniqueConstraint` table-level construct.

meta = MetaData()
mytable = Table('mytable', meta,

    # per-column anonymous unique constraint
    Column('col1', Integer, unique=True),

    Column('col2', Integer),
    Column('col3', Integer),

    # explicit/composite unique constraint.  'name' is optional.
    UniqueConstraint('col2', 'col3', name='uix_1')
    )

CHECK Constraint

Check constraints can be named or unnamed and can be created at the Column or Table level, using the :class:`~sqlalchemy.schema.CheckConstraint` construct. The text of the check constraint is passed directly through to the database, so there is limited "database independent" behavior. Column level check constraints generally should only refer to the column to which they are placed, while table level constraints can refer to any columns in the table.

Note that some databases, such as MySQL, do not actively support check constraints.

meta = MetaData()
mytable = Table('mytable', meta,

    # per-column CHECK constraint
    Column('col1', Integer, CheckConstraint('col1>5')),

    Column('col2', Integer),
    Column('col3', Integer),

    # table level CHECK constraint.  'name' is optional.
    CheckConstraint('col2 > col3 + 5', name='check1')
    )

{sql}mytable.create(engine)
CREATE TABLE mytable (
    col1 INTEGER  CHECK (col1>5),
    col2 INTEGER,
    col3 INTEGER,
    CONSTRAINT check1  CHECK (col2 > col3 + 5)
){stop}

Setting up Constraints when using the Declarative ORM Extension

The :class:`.Table` is the SQLAlchemy Core construct that allows one to define table metadata, which among other things can be used by the SQLAlchemy ORM as a target to map a class. The :ref:`Declarative <declarative_toplevel>` extension allows the :class:`.Table` object to be created automatically, given the contents of the table primarily as a mapping of :class:`.Column` objects.

To apply table-level constraint objects such as :class:`.ForeignKeyConstraint` to a table defined using Declarative, use the __table_args__ attribute, described at :ref:`declarative_table_args`.
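A brief sketch of that pattern, mapping the invoice_item table from earlier in this chapter (the declarative_base() usage here assumes the Declarative extension is available):

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class InvoiceItem(Base):
    __tablename__ = 'invoice_item'

    item_id = Column(Integer, primary_key=True)
    item_name = Column(String(60), nullable=False)
    invoice_id = Column(Integer, nullable=False)
    ref_num = Column(Integer, nullable=False)

    # table-level constraint objects are supplied via __table_args__
    __table_args__ = (
        ForeignKeyConstraint(
            ['invoice_id', 'ref_num'],
            ['invoice.invoice_id', 'invoice.ref_num']
        ),
    )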

Constraints API

Indexes

Indexes can be created anonymously (using an auto-generated name ix_<column label>) for a single column using the inline index keyword on :class:`~sqlalchemy.schema.Column`, which also modifies the usage of unique to apply the uniqueness to the index itself, instead of adding a separate UNIQUE constraint. For indexes with specific names or which encompass more than one column, use the :class:`~sqlalchemy.schema.Index` construct, which requires a name.

Below we illustrate a :class:`~sqlalchemy.schema.Table` with several :class:`~sqlalchemy.schema.Index` objects associated. The DDL for "CREATE INDEX" is issued right after the create statements for the table:

meta = MetaData()
mytable = Table('mytable', meta,
    # an indexed column, with index "ix_mytable_col1"
    Column('col1', Integer, index=True),

    # a uniquely indexed column with index "ix_mytable_col2"
    Column('col2', Integer, index=True, unique=True),

    Column('col3', Integer),
    Column('col4', Integer),

    Column('col5', Integer),
    Column('col6', Integer),
    )

# place an index on col3, col4
Index('idx_col34', mytable.c.col3, mytable.c.col4)

# place a unique index on col5, col6
Index('myindex', mytable.c.col5, mytable.c.col6, unique=True)

{sql}mytable.create(engine)
CREATE TABLE mytable (
    col1 INTEGER,
    col2 INTEGER,
    col3 INTEGER,
    col4 INTEGER,
    col5 INTEGER,
    col6 INTEGER
)
CREATE INDEX ix_mytable_col1 ON mytable (col1)
CREATE UNIQUE INDEX ix_mytable_col2 ON mytable (col2)
CREATE UNIQUE INDEX myindex ON mytable (col5, col6)
CREATE INDEX idx_col34 ON mytable (col3, col4){stop}

Note in the example above, the :class:`.Index` construct is created externally to the table to which it corresponds, using :class:`.Column` objects directly. :class:`.Index` also supports "inline" definition inside the :class:`.Table`, using string names to identify columns:

meta = MetaData()
mytable = Table('mytable', meta,
    Column('col1', Integer),

    Column('col2', Integer),

    Column('col3', Integer),
    Column('col4', Integer),

    # place an index on col1, col2
    Index('idx_col12', 'col1', 'col2'),

    # place a unique index on col3, col4
    Index('idx_col34', 'col3', 'col4', unique=True)
)

The :class:`~sqlalchemy.schema.Index` object also supports its own create() method:

i = Index('someindex', mytable.c.col5)
{sql}i.create(engine)
CREATE INDEX someindex ON mytable (col5){stop}

Customizing DDL

In the preceding sections we've discussed a variety of schema constructs including :class:`~sqlalchemy.schema.Table`, :class:`~sqlalchemy.schema.ForeignKeyConstraint`, :class:`~sqlalchemy.schema.CheckConstraint`, and :class:`~sqlalchemy.schema.Sequence`. Throughout, we've relied upon the create() and :func:`~sqlalchemy.schema.MetaData.create_all` methods of :class:`~sqlalchemy.schema.Table` and :class:`~sqlalchemy.schema.MetaData` in order to issue data definition language (DDL) for all constructs. When issued, a pre-determined order of operations is invoked, and DDL to create each table is emitted unconditionally, including all constraints and other objects associated with it. For more complex scenarios where database-specific DDL is required, SQLAlchemy offers two techniques which can be used to add any DDL based on any condition, either accompanying the standard generation of tables or by itself.

Controlling DDL Sequences

The sqlalchemy.schema package contains SQL expression constructs that provide DDL expressions. For example, to produce a CREATE TABLE statement:

from sqlalchemy.schema import CreateTable
{sql}engine.execute(CreateTable(mytable))
CREATE TABLE mytable (
    col1 INTEGER,
    col2 INTEGER,
    col3 INTEGER,
    col4 INTEGER,
    col5 INTEGER,
    col6 INTEGER
){stop}

Above, the :class:`~sqlalchemy.schema.CreateTable` construct works like any other expression construct (such as select(), table.insert(), etc.). A full reference of available constructs is in :ref:`schema_api_ddl`.

The DDL constructs all extend a common base class which provides the capability to be associated with an individual :class:`~sqlalchemy.schema.Table` or :class:`~sqlalchemy.schema.MetaData` object, to be invoked upon create/drop events. Consider the example of a table which contains a CHECK constraint:

users = Table('users', metadata,
               Column('user_id', Integer, primary_key=True),
               Column('user_name', String(40), nullable=False),
               CheckConstraint('length(user_name) >= 8', name="cst_user_name_length")
               )

{sql}users.create(engine)
CREATE TABLE users (
    user_id SERIAL NOT NULL,
    user_name VARCHAR(40) NOT NULL,
    PRIMARY KEY (user_id),
    CONSTRAINT cst_user_name_length  CHECK (length(user_name) >= 8)
){stop}

The above table contains a column "user_name" which is subject to a CHECK constraint that validates that the length of the string is at least eight characters. When a create() is issued for this table, DDL for the :class:`~sqlalchemy.schema.CheckConstraint` will also be issued inline within the table definition.

The :class:`~sqlalchemy.schema.CheckConstraint` construct can also be constructed externally and associated with the :class:`~sqlalchemy.schema.Table` afterwards:

constraint = CheckConstraint('length(user_name) >= 8', name="cst_user_name_length")
users.append_constraint(constraint)

So far, the effect is the same. However, if we create DDL elements corresponding to the creation and removal of this constraint, and associate them with the :class:`.Table` as events, these new events will take over the job of issuing DDL for the constraint. Additionally, the constraint will be added via ALTER:

from sqlalchemy import event
from sqlalchemy.schema import AddConstraint, DropConstraint

event.listen(
    users,
    "after_create",
    AddConstraint(constraint)
)
event.listen(
    users,
    "before_drop",
    DropConstraint(constraint)
)

{sql}users.create(engine)
CREATE TABLE users (
    user_id SERIAL NOT NULL,
    user_name VARCHAR(40) NOT NULL,
    PRIMARY KEY (user_id)
)

ALTER TABLE users ADD CONSTRAINT cst_user_name_length  CHECK (length(user_name) >= 8){stop}

{sql}users.drop(engine)
ALTER TABLE users DROP CONSTRAINT cst_user_name_length
DROP TABLE users{stop}

The real usefulness of the above becomes clearer once we illustrate the :meth:`.DDLElement.execute_if` method. This method returns a modified form of the DDL callable which will filter on criteria before responding to a received event. It accepts a parameter dialect, which is the string name of a dialect or a tuple of such, which will limit the execution of the item to just those dialects. It also accepts a callable_ parameter which may reference a Python callable which will be invoked upon event reception, returning True or False indicating if the event should proceed.

If our :class:`~sqlalchemy.schema.CheckConstraint` was only supported by Postgresql and not other databases, we could limit its usage to just that dialect:

event.listen(
    users,
    'after_create',
    AddConstraint(constraint).execute_if(dialect='postgresql')
)
event.listen(
    users,
    'before_drop',
    DropConstraint(constraint).execute_if(dialect='postgresql')
)

Or to any set of dialects:

event.listen(
    users,
    "after_create",
    AddConstraint(constraint).execute_if(dialect=('postgresql', 'mysql'))
)
event.listen(
    users,
    "before_drop",
    DropConstraint(constraint).execute_if(dialect=('postgresql', 'mysql'))
)

When using a callable, the callable is passed the ddl element, the :class:`.Table` or :class:`.MetaData` object whose "create" or "drop" event is in progress, and the :class:`.Connection` object being used for the operation, as well as additional information as keyword arguments. The callable can perform checks, such as whether or not a given item already exists. Below we define should_create() and should_drop() callables that check for the presence of our named constraint:

def should_create(ddl, target, connection, **kw):
    row = connection.execute("select conname from pg_constraint where conname='%s'" % ddl.element.name).scalar()
    return not bool(row)

def should_drop(ddl, target, connection, **kw):
    return not should_create(ddl, target, connection, **kw)

event.listen(
    users,
    "after_create",
    AddConstraint(constraint).execute_if(callable_=should_create)
)
event.listen(
    users,
    "before_drop",
    DropConstraint(constraint).execute_if(callable_=should_drop)
)

{sql}users.create(engine)
CREATE TABLE users (
    user_id SERIAL NOT NULL,
    user_name VARCHAR(40) NOT NULL,
    PRIMARY KEY (user_id)
)

select conname from pg_constraint where conname='cst_user_name_length'
ALTER TABLE users ADD CONSTRAINT cst_user_name_length  CHECK (length(user_name) >= 8){stop}

{sql}users.drop(engine)
select conname from pg_constraint where conname='cst_user_name_length'
ALTER TABLE users DROP CONSTRAINT cst_user_name_length
DROP TABLE users{stop}

Custom DDL

Custom DDL phrases are most easily achieved using the :class:`~sqlalchemy.schema.DDL` construct. This construct works like all the other DDL elements except it accepts a string which is the text to be emitted:

event.listen(
    metadata,
    "after_create",
    DDL("ALTER TABLE users ADD CONSTRAINT "
        "cst_user_name_length "
        " CHECK (length(user_name) >= 8)")
)

A more comprehensive method of creating libraries of DDL constructs is to use custom compilation - see :ref:`sqlalchemy.ext.compiler_toplevel` for details.
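As a brief sketch of that approach, a custom DDL element can be paired with a compilation rule via the @compiles decorator; the CreateView construct below is illustrative, not part of SQLAlchemy itself:

from sqlalchemy.schema import DDLElement
from sqlalchemy.ext.compiler import compiles

class CreateView(DDLElement):
    """Represent a CREATE VIEW statement."""
    def __init__(self, name, selectable):
        self.name = name
        self.selectable = selectable

@compiles(CreateView)
def visit_create_view(element, compiler, **kw):
    # render the element's SELECT using the dialect's SQL compiler
    return "CREATE VIEW %s AS %s" % (
        element.name,
        compiler.sql_compiler.process(element.selectable)
    )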

DDL Expression Constructs API