Large cache items fail silently and defeat the dogpile lock
Memcached has a default item size limit of 1MB. The python-memcached library fails silently when it attempts to store an item larger than 1MB, and this causes major problems with the dogpile lock.
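Rather than raising, python-memcached's `set()` returns a falsy value when the store fails, so the failure is easy to miss unless the return value is checked. A minimal sketch of a guard (the `checked_set` helper and `FakeClient` stand-in are hypothetical, for illustration only):

```python
import logging

def checked_set(client, key, value, time=0):
    """Store a value and surface python-memcached's silent failure.

    Client.set() returns a falsy value instead of raising when the
    server rejects an item, e.g. one over the 1MB limit. This wrapper
    logs the rejection and returns an explicit bool.
    """
    ok = client.set(key, value, time)
    if not ok:
        logging.warning("memcached silently rejected key %r", key)
    return bool(ok)

class FakeClient:
    """Stand-in for memcache.Client: drops items over a 1MB limit,
    mimicking the silent-failure behaviour described above."""
    def set(self, key, value, time=0):
        return 0 if len(value) > 1024 * 1024 else 1
```

With a real `memcache.Client` the guard behaves the same way; the fake exists only so the sketch is self-contained.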
Basically, one thread acquires the lock, generates the data, sends it to memcached (where it fails silently), then releases the lock. All the other threads waiting to acquire the lock expect the item to now be cached. Since it is not, the next thread to acquire the lock has to regenerate the data, send it to memcached (it fails again), and so on.
Effectively the dogpile lock turns into a single-file line: only one thread gets through at a time while the rest wait.
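The serialization can be reproduced without a memcached server. In this sketch a stand-in cache silently drops oversized values (mimicking python-memcached), and a simplified dogpile pattern shows every thread regenerating the value one after another; the class and helper names are invented for the illustration:

```python
import threading

class DroppingCache:
    """Stand-in cache that silently drops values over a size limit."""
    def __init__(self, limit):
        self.limit = limit
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        if len(value) > self.limit:
            return 0          # silent failure, like python-memcached
        self.data[key] = value
        return 1

cache = DroppingCache(limit=10)
lock = threading.Lock()
regenerations = []

def get_or_create(key):
    # Simplified dogpile pattern: one thread regenerates at a time;
    # the rest wait on the lock, then expect to find the value cached.
    with lock:
        value = cache.get(key)
        if value is None:
            value = "x" * 100              # over the limit
            regenerations.append(1)
            cache.set(key, value)          # dropped silently
        return value

threads = [threading.Thread(target=get_or_create, args=("big",))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 5 threads regenerated the value serially instead of 1.
```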
I'd submit a pull request, but I really have no idea how to fix this. For our system we increased the max item size on the memcached server, then configured the python-memcached library with the new, higher max size. It's still possible to hit the problem this way, but we should never have an item that reaches the new max size.
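For reference, a sketch of that workaround, assuming a memcached version that supports the `-I` flag (1.4.2+) and the 2MB limit is only an example value:

```shell
# Raise the server-side item size limit (here to 2MB):
memcached -I 2m

# python-memcached must be told about the new limit as well, via the
# server_max_value_length argument on the client, e.g.:
#   memcache.Client(['127.0.0.1:11211'],
#                   server_max_value_length=2 * 1024 * 1024)
```

Both sides need the change: raising only the server limit leaves the client rejecting large items locally, and raising only the client limit leaves the server rejecting them remotely.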