Detect if limits on identifier names in the db are exceeded

Issue #22 resolved
Former user created an issue

(original reporter: rtl) Postgres has a limit of 63 characters on the length of any identifier name (table, column, etc.) and it silently discards any characters past this limit. The limit also applies to bind parameters, which are named as 'table_column', effectively limiting the combined length of the table name and column name to 63. This is a problem in queries: that name is used as a key in the ResultProxy, but Postgres returns the truncated names while ResultProxy does lookups on the full table_column names, so no match is found and the query fails. I'm not sure how Postgres-specific this is, but the safest option (rather than silently truncating as Postgres does) is to detect the excessive length and report an error when creating the table. Any other suggestions?
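
For illustration, a minimal standalone sketch of the mismatch (not SQLAlchemy's actual internals; all names here are made up): Postgres truncates the label it returns to 63 characters, so a lookup keyed on the full 'table_column' name misses.

```python
PG_MAX_IDENTIFIER = 63

def column_label(table_name, column_name):
    """Label generated for a column in a query, e.g. 'table_column'."""
    return "%s_%s" % (table_name, column_name)

def postgres_truncate(identifier):
    """Postgres discards everything past the 63-character limit."""
    return identifier[:PG_MAX_IDENTIFIER]

full = column_label("a_rather_long_table_name_for_demonstration",
                    "an_equally_long_column_name_for_demonstration")
returned = postgres_truncate(full)

# The row comes back keyed on the truncated label, so a lookup on the
# full label finds nothing -- this is the mismatch that breaks the query.
row = {returned: "some value"}
print(full in row)       # False
print(returned in row)   # True
```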

Comments (2)

  1. Mike Bayer repo owner

    well if the limit is 63 for everything then yeah, that effectively makes the limit a lot smaller, since column labels join together the table name + column name.

    adding a check for "greater than 63" is one thing; I'm thinking of a way to make those labels smaller if someone has a table + column that's greater than 63.

    the column label is determined in just one place, in sqlalchemy.sql.ColumnImpl. Postgres can override it to return its own version, PGColumnImpl, which when asked for its "label" can check for over 63 chars and, if so, truncate it to something short and append some kind of hash value or random string to it. this just makes it tricky for someone to write a from-scratch query to match the label.
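
    A minimal sketch of that idea, written as a standalone helper rather than the real PGColumnImpl code (the function name and suffix length here are assumptions): shorten the label to fit within 63 characters and append a short hash of the full name.

    ```python
    import hashlib

    PG_MAX_IDENTIFIER = 63

    def shortened_label(table_name, column_name, max_len=PG_MAX_IDENTIFIER):
        """Return 'table_column', shortened with a hash suffix if too long."""
        label = "%s_%s" % (table_name, column_name)
        if len(label) <= max_len:
            return label
        # Keep a short digest of the full label so the shortened form is
        # deterministic for the same input and unlikely to collide.
        digest = hashlib.md5(label.encode("utf-8")).hexdigest()[:8]
        return label[: max_len - len(digest) - 1] + "_" + digest
    ```

    Using a deterministic digest of the full label (rather than a random string) would let both the generated SQL and the result-row lookup arrive at the same shortened name independently.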

  2. Mike Bayer repo owner

    this has long been fixed, as pretty much all identifiers used in queries are chopped down to 30 chars in length
