restarting memcached server and pylibmc backend

Issue #26 resolved
created an issue

I've experienced an issue where restarting the memcached server causes errors in the pylibmc backend.

the wonderful error reporting is this:

  File "/var/www/sites/MyApp.In-virtualenv/local/lib/python2.7/site-packages/dogpile.cache-0.4.0-py2.7.egg/dogpile/cache/", line 393, in delete
  File "/var/www/sites/MyApp.In-virtualenv/local/lib/python2.7/site-packages/dogpile.cache-0.4.0-py2.7.egg/dogpile/cache/backends/", line 159, in delete
_pylibmc.UnknownReadFailure: error 7 from memcached_delete(group:id:56): UNKNOWN READ FAILURE

This is totally an issue with pylibmc: they raise the error, and their reporting is meager.

I just wanted to put this on the radar that errors like this happen -- they will cause a webpage to error out. I'm wondering if there should be a config option to catch and suppress errors like this.

Comments (10)

  1. Michael Bayer repo owner

    I tracked down an email regarding a similar request, here was my response on that:

    the "don't die" thing is a little strange (if the cache goes down, the site typically goes down anyway if it's loaded up), I'd build that as a wrapper around the backend. I'd support adding a generic "ProxyBackend" class for this, it would include as a config parameter the name of the "real" backend and provide easily overrideable methods.

    because people seem to want to catch different kinds of failures and handle them in various ways. e.g. if a get() raises an exception, suppress that? return None? I'd rather dogpile not get into that directly.

  2. jvanasco reporter

    Yeah, I had the same concern (which is why I left my issue as the open-ended "I'm wondering").

    I originally thought of wrapping my requests -- but I have nearly 100 calls like cache_region[region].get(); I don't want to mess with that much code.

    In my case, the easiest solution would be to subclass the backend and just override the delete method. This only seems to happen on delete, and only with the pylibmc backend (I use DBM and Memory in dev).
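
    A minimal sketch of that subclassing approach, using a toy stand-in backend (in a real app you would subclass dogpile's PylibmcBackend instead; FlakyBackend and QuietDeleteBackend are made-up names for illustration):

```python
import logging

log = logging.getLogger(__name__)

class FlakyBackend:
    """Toy stand-in for a backend whose delete() can blow up
    (e.g. pylibmc after a memcached restart)."""
    def delete(self, key):
        raise RuntimeError("UNKNOWN READ FAILURE")

class QuietDeleteBackend(FlakyBackend):
    """Same backend, but delete() failures are logged, not raised."""
    def delete(self, key):
        try:
            super().delete(key)
        except Exception:
            log.warning("delete(%r) failed; ignoring", key)

# The failing delete is now swallowed instead of erroring out the page.
QuietDeleteBackend().delete("group:id:56")
```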

    I think your response would make a good FAQ item for the docs.

  3. Michael Bayer repo owner

    Well, I'd want to make it easy to augment an existing backend. ProxyBackend would be like TypeDecorator for backends. Another thing I'll welcome pull requests for :)

  4. Marcos Araujo Sobrinho

    Yep. Some of our apps are in a really bad datacenter (3rd world problems...) and our memcache connection sometimes dies. The exception is slightly different, but it happens.

    I'm thinking about falling back to DBM if memcache isn't available, for example.
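
    One way that fallback could look -- a hedged sketch, not anything dogpile.cache provides; FallbackBackend and the toy backends below are invented for illustration (a real version would wrap the memcached and DBM backends):

```python
class DictBackend:
    """Toy in-memory backend standing in for the DBM backend."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

class BrokenBackend:
    """Stand-in for a memcached backend whose connection has died."""
    def get(self, key):
        raise ConnectionError("memcached is unreachable")
    def set(self, key, value):
        raise ConnectionError("memcached is unreachable")

class FallbackBackend:
    """Route each call to the primary backend; on any error, retry
    against the fallback backend instead of propagating."""
    def __init__(self, primary, fallback):
        self.primary = primary
        self.fallback = fallback
    def get(self, key):
        try:
            return self.primary.get(key)
        except Exception:
            return self.fallback.get(key)
    def set(self, key, value):
        try:
            self.primary.set(key, value)
        except Exception:
            self.fallback.set(key, value)

# memcached is "down", so the DBM stand-in silently takes over.
backend = FallbackBackend(BrokenBackend(), DictBackend())
backend.set("k", "v")
```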

  5. Tim Hanus

    I've got some work on this that I'm trying to clean up for a pull request. As a matter of style, there are several ways this could hook into dogpile.cache. This comment is my way of soliciting feedback to figure out what would be most helpful / fit best into the overall scheme of dogpile.

    I have two types of proxies defined here. For the sake of argument, let's assume that these are both things that you would actually want to do:

    LogProxy -- all get/set/delete operations will be logged to a file.
    ExceptionProxy -- all get/set/delete operations will be wrapped in a try/except; in the event of a failure we just return NO_VALUE.
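
    A rough sketch of the LogProxy idea described above -- every get/set/delete is logged before being passed through. The class names and the in-memory stand-in backend are illustrative, not the actual pull-request code:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("cache")

class MemoryBackend:
    """Toy in-memory backend to proxy, for demonstration."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value
    def delete(self, key):
        self._data.pop(key, None)

class LogProxy:
    """Log every cache operation, then delegate to the real backend."""
    def __init__(self, proxied):
        self.proxied = proxied
    def get(self, key):
        log.info("get %r", key)
        return self.proxied.get(key)
    def set(self, key, value):
        log.info("set %r", key)
        self.proxied.set(key, value)
    def delete(self, key):
        log.info("delete %r", key)
        self.proxied.delete(key)

cache = LogProxy(MemoryBackend())
cache.set("k", "v")  # logged, then stored in the proxied backend
```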

    Option 1: Tie into the existing region creation

    You could specify proxies to use on region creation. New proxies are registered using something like the existing PluginLoader:

    region = dogpile.cache.make_region().configure(
        expiration_time = 600,
        arguments = {...},
        proxies = {
            'dogpile.cache.exception_proxy': [ args to exception proxy ],
            'dogpile.cache.logging_proxy': [ args to logging proxy ],
        }
    )

    Option 2: Provide Mixins for DIY Proxy Composition

    Provide a way to easily compose your own classes with some kind of mixin-like functionality. This would require a little more work on the user's part to use things like 'make_region', as each new backend class needs to be registered. This seems a little clumsy to me, but I'm including it here as a possibility.

    class MyBackend(ExceptionProxy, LoggingProxy, MemcachedBackend):
        def __init__(self, *args, **kwargs):
            MemcachedBackend.__init__(self, ...memcached arguments ... )
            LoggingProxy.__init__(self, logging args)
            ExceptionProxy.__init__(self, exception args)

    Option 3: Chain proxies together at runtime

    Create a region the same way as you would today, then build up your proxies at runtime. This has the advantage of being the most legible (to my eye anyway). The downsides to this approach are that it seems to blur the line between backend and region which I'm not crazy about.

    my_region = dogpile.cache.make_region(...).configure(...) 
    my_region = LoggingProxy(... args to logging proxy ...).wraps(my_region)
    my_region = ExceptionProxy(... args to exception proxy ...).wraps(my_region)

    Any comments or suggestions are most welcome. Additionally, if anyone else has use cases for the proxy, those may help provide some direction.


  6. Michael Bayer repo owner

    I had in mind an abstract class that knows how to wrap an existing backend. It would be fairly similar to SQLAlchemy's TypeDecorator though maybe a little more open ended, like a filter.

    If I wanted to build an ExceptionProxy I'd do this:

    class DontRaiseOnDelete(ProxyBackend):
        def delete(self, key):
            try:
                self.proxied.delete(key)
            except IDontCareException:
                log.error("Got an IDontCare but we don't care...", exc_info=True)

    plug it in:

    region = make_region().configure(
        "dogpile.cache.memcached",
        wrap=[DontRaiseOnDelete, SomeOtherProxy, ...]
    )

    ProxyBackend would do all the other work to handle the constructor and supplying "proxied".
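
    One plausible shape for that base class -- a simplified sketch of the idea as described in this thread, not the implementation that eventually shipped; the toy FailingBackend is invented for the demo:

```python
import logging

log = logging.getLogger(__name__)

class ProxyBackend:
    """Holds the wrapped backend in `proxied` and passes every call
    through, so a subclass only overrides the methods it cares about."""
    def __init__(self):
        self.proxied = None
    def wrap(self, backend):
        self.proxied = backend
        return self
    def get(self, key):
        return self.proxied.get(key)
    def set(self, key, value):
        self.proxied.set(key, value)
    def delete(self, key):
        self.proxied.delete(key)

class DontRaiseOnDelete(ProxyBackend):
    def delete(self, key):
        try:
            self.proxied.delete(key)
        except Exception:  # stand-in for IDontCareException
            log.error("delete failed, but we don't care", exc_info=True)

class FailingBackend:
    """Toy backend whose delete() always raises."""
    def delete(self, key):
        raise RuntimeError("UNKNOWN READ FAILURE")

# The proxy swallows the failure instead of propagating it.
proxy = DontRaiseOnDelete().wrap(FailingBackend())
proxy.delete("some-key")
```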
