Large cache item fail

Issue #23

Memcached has a default item size limit of 1MB. The python-memcached library will silently fail on an attempt to add an item > 1MB to memcached. This causes major problems with the dogpile lock.

Basically one thread gets the lock, gets the data, sends it to memcached (it fails silently), then releases the lock. All the other threads that are waiting to acquire the lock expect the cached item to be in memcached. Since it is not, the next thread to get the lock has to go get the data, send it to memcached (it fails), then continue.

Effectively the dogpile lock degrades into a single-file line: only one thread gets through at a time while the rest wait.

I'd submit a pull request, but I really have no idea how to fix this. For our system we increased the max item size on the memcached server, then updated the python-memcached library with the new, higher max size. It's still possible to hit the problem this way, but we shouldn't ever have an item that reaches our new max size.
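One way to make the failure loud instead of silent is to check the serialized size of a value before handing it to the client. This is a minimal sketch, not part of dogpile.cache or python-memcached; the function name `check_cacheable` and the use of pickle for sizing are assumptions for illustration:

```python
import pickle

# memcached's default per-item limit (hypothetical constant name)
MAX_ITEM_SIZE = 1024 * 1024  # 1MB

def check_cacheable(key, value, max_size=MAX_ITEM_SIZE):
    """Raise instead of failing silently when a value exceeds the item limit."""
    payload = pickle.dumps(value, protocol=pickle.HIGHEST_PROTOCOL)
    if len(payload) > max_size:
        raise ValueError(
            "cache value for %r is %d bytes, over the %d byte limit"
            % (key, len(payload), max_size))
    return payload
```

A guard like this turns the dogpile-lock stall described above into an immediate, debuggable exception, at the cost of one extra serialization per put.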

Comments (8)

  1. Michael Bayer repo owner

Seems like the problem is the silent failure; why not add routines to the backend to babysit for these uncaught failures in python-memcached (and perhaps submit bug reports to python-memcached)? I assume these issues are local to python-memcached; if you use pylibmc (a much better library), these failures should not be silent?

  2. brianfrantz reporter

From a memcached point of view, returning silently is the correct response: it causes a cache miss and a regeneration each time, so it defaults to a working system. It just doesn't fit the dogpile cache approach.

Haven't tried pylibmc; my assumption is that it handles this the same way (but maybe I'm wrong).

  3. Michael Bayer repo owner

python-memcached doesn't even return a "0" or something like that? I'd say we just add an option, "guard-on-max-size", to just that backend, and default it to 1M, though this adds a len() call to every cache put.

  4. Morgan Fainberg

python-memcache returns a zero when there is a failure (similar to when the server is down); see below, where "junk" is a 2MB file read from disk. So at the base level we can at least know that we failed, just not why we failed.

    >>> c.set('_junk', junk)   # 2MB value: set() returns 0, nothing is stored
    >>> c.get('_junk')         # cache miss
    >>> test = 'hi'
    >>> c.set('_junk', test)   # small value: succeeds
    >>> c.get('_junk')

This might apply only to newer versions of the python-memcache client.
