Add the ability to generate and run straight .sql files like sqlalchemy-migrate

Issue #52 closed
Yuen Ho Wong created an issue

No description provided.

Comments (7)

  1. Yuen Ho Wong reporter

    I know about --sql, but what I mean is generating versioned SQL scripts that Alembic knows how to linearize and run. I found that unless Alembic's Python migration script generation gets really good fast, I really need to run plain SQL migration scripts. Cases in point: triggers, indices, unique constraints, enum types for PostgreSQL, custom column data types, foreign key constraint cascades, GeoAlchemy DDLs and functions, etc.

    I really would like to use Alembic's way of versioning scripts; it's a clear advantage over sqlalchemy-migrate. But I would also like to be able to dump the data from the database and paste the INSERT statements into a generated, versioned SQL script, instead of converting the data back into possibly hundreds of Alembic inserts.

    I think this is a very useful use case.

  2. Michael Bayer repo owner

    You can put triggers, GeoAlchemy, constraint cascades, whatever you want, into an Alembic migration script right now. That's what op.execute() is for.

    You can build the whole script using all op.execute() and all literal SQL, if you like. Does that solve your issue?

  3. Michael Bayer repo owner

    yeah, I am now familiar with migrate's .sql file feature, and IMO it's not very useful. Such files would not be portable to Alembic anyway, as Alembic requires the file itself to include metadata about the version and its dependencies; that means we'd need to invent some new markup system (like YAML inside a SQL comment or something), and at that point it might as well remain a Python file with a SQL string inside of it.

  4. dx

    Well, FWIW, my idea was the usual Python file with something like op.execute_file("whatever.sql"), where execute_file does sqlparse.split() (or equivalent), since op.execute() doesn't like large payloads. I don't really care about using .sql files as if they were the whole script.
