add "dimension" attribute to pg.ARRAY - implement DDL, and when present optimize bind/result to this

Issue #2441 resolved
Mike Bayer repo owner created an issue

replaces #1591

Comments (7)

  1. Mike Bayer reporter

    Here's a stack based algorithm to process+copy any N-dimensional structure, given the dimension:

    def proc(arr, dim, itemproc):
        dest = []
        stack = [(arr, dim, dest)]
        while stack:
            collection, dim, sdest = stack.pop(0)
            if dim == 1:
                for elem in collection:
                    sdest.append(itemproc(elem))
            else:
                for elem in collection:
                    ssdest = []
                    stack.append((elem, dim - 1, ssdest))
                    sdest.append(ssdest)
        return dest


    data1d = [1, 2, 3]

    data3d = [
        [
            [1, 2, 3],
            [4, 5, 6],
        ],
        [
            [7, 8, 9],
            [10, 11, 12]
        ]
    ]

    print(proc(data1d, 1, lambda x: x + 1))
    print(proc(data3d, 3, lambda x: x + 1))
    
  2. Mike Bayer reporter

    I forgot that PG arrays do N dimensions implicitly. So the attached patch makes "dimensions" optional. Unfortunately we can't use the stack version if we need to convert the collection to a tuple, so we might want to combine all of these approaches; not sure. Needs new tests and docs in any case - the existing tests are all over the place. Tests should include dimensions=None to ensure we can still put different numbers of dimensions per individual row.
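
    The tuple case suggests a recursive variant instead of the stack: an immutable collection can only be built once its contents are known. The sketch below (hypothetical names, not from the attached patch) shows that, plus a possible fallback for dimensions=None that inspects each value so individual rows may carry different numbers of dimensions:

```python
def proc_recursive(coll, dim, itemproc, collection_cls=list):
    """Recursively process an N-dimensional collection.

    Unlike the stack version, this can build immutable collections
    such as tuples, because each inner collection is constructed
    only after all of its elements have been processed.
    """
    if dim == 1:
        return collection_cls(itemproc(elem) for elem in coll)
    return collection_cls(
        proc_recursive(elem, dim - 1, itemproc, collection_cls)
        for elem in coll
    )


def proc_any_depth(value, itemproc):
    """Sketch of a dimensions=None fallback: recurse on whatever
    nesting each value actually has, rather than a fixed depth."""
    if isinstance(value, (list, tuple)):
        return [proc_any_depth(elem, itemproc) for elem in value]
    return itemproc(value)
```

    With collection_cls=tuple, proc_recursive([[1, 2], [3, 4]], 2, lambda x: x + 1, tuple) yields ((2, 3), (4, 5)).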

  3. Mike Bayer reporter

    Let's also make the "collection" configurable, as "collection_cls=<somecls>" (as_tuple does this); then add docs pointing to sqlalchemy.ext.mutable for usage with the ORM, so that no TypeDecorator is needed. Probably add a specific example to the mutable docs.

    mutable_pg = MutableList.as_mutable(postgresql.ARRAY(Integer, collection_cls=MutableList))
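
    As a rough illustration of what collection_cls would mean for result rows (TrackedList below is just a stand-in for MutableList, and build_collection is a hypothetical name, not the patch's API), a result processor could rebuild every nesting level of an array value into the configured class:

```python
class TrackedList(list):
    """Hypothetical stand-in for sqlalchemy.ext.mutable.MutableList:
    a list subclass the ORM could watch for in-place mutation."""


def build_collection(value, dim, collection_cls):
    """Rebuild an N-dimensional array value so that each nesting
    level is an instance of collection_cls - a sketch of what an
    ARRAY result processor honoring collection_cls= might do."""
    if dim == 1:
        return collection_cls(value)
    return collection_cls(
        build_collection(elem, dim - 1, collection_cls) for elem in value
    )


rows = build_collection([[1, 2], [3, 4]], 2, TrackedList)
```

    Every level comes back as TrackedList, so a mutation-tracking subclass would see changes at any depth; collection_cls=tuple would similarly subsume the as_tuple behavior.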
    