I just found that for a (stripped down!) parameter file of my production simulations, HTTPD creates a hash table via the functions in CACTUS_HOME/src/util/Hash.c that consumes 1GB of memory (for the array of pointers that forms the top-level hash table structure) to hold only about 10000 entries. I am not sure whether this is due to a poor choice of hashing function (util_HashHash) or to the fact that the table size is doubled until the number of entries is smaller than the number of hash slots (in Util_HashRehash and Util_HashAdd). It was somewhat unexpected that a non-science thorn would use that much memory.
Alternatives that use less memory might be to increase the filling factor, i.e. only rehash if hash->keys > 10*hash->fill (perhaps only above some limit on the number of keys), or to use something like the binary tree implementation in BinaryTree.c (though not that one as it stands, since it is broken in at least two places).
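The relaxed growth policy could look something like the sketch below. This is hypothetical, not the actual Hash.c code: the field names follow the text above, and the factor and threshold constants are made-up placeholders.

```c
#include <assert.h>

/* Hypothetical struct; field names follow the discussion above,
   not necessarily the real Hash.c layout. */
typedef struct
{
  unsigned long keys;  /* number of entries stored     */
  unsigned long fill;  /* number of slots in the table */
} uHash;

/* Placeholder tuning constants (assumptions, not from Hash.c). */
#define REHASH_FACTOR   10  /* tolerate 10 entries per slot     */
#define REHASH_MIN_KEYS 64  /* never rehash tiny tables at all  */

/* Grow only when the table is REHASH_FACTOR times overfull and
   the entry count has passed the minimum threshold. */
static int should_rehash(const uHash *hash)
{
  return hash->keys > REHASH_MIN_KEYS
      && hash->keys > REHASH_FACTOR * hash->fill;
}
```

With this policy a 10000-entry table would stay at roughly a thousand slots rather than being doubled past the entry count, at the price of longer collision chains per slot.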
A simple linear list might also be sufficient, since HTTPD does not have to be lightning fast; I don't expect it to serve hundreds of requests per second.