I ran into an issue while troubleshooting my read-through cache.
In production we use pylibmc; in development we usually use dbm, but sometimes switch to memory because it is much faster. Our cached data has quite a bit of processing applied to it, and on a recent test run, unwanted per-request data was somehow persisting. I spent a few hours beefing up our logging and trying to track down phantom cache "sets".
Then I realized what was going on: we weren't calling set at all; the data persisted because of the memory backend. We weren't pulling a copy out of a cache, we were getting the same object back each time.
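The failure mode is easy to reproduce. Here's a minimal sketch with a plain dict standing in for the memory backend (the names are illustrative, but real in-memory backends store references the same way):

```python
# a plain dict stands in for an in-memory cache backend:
# it stores references, not copies.
cache = {}

record = {"user": "alice", "roles": ["admin"]}
cache["user:alice"] = record          # "set"

fetched = cache["user:alice"]         # "get"
fetched["request_id"] = "abc123"      # per-request mutation; no set() called

# the mutation is now visible on every subsequent get:
assert cache["user:alice"]["request_id"] == "abc123"
assert cache["user:alice"] is record  # it's the same object every time
```

A serializing backend (dbm, pylibmc) never has this problem, because the stored bytes are decoded into a fresh object on every get.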
To get our tests to pass and ensure some parity between development and production, I put together a quick "MemoryPickle" backend. I couldn't think of a better way to handle this; it's the memory backend with get/set wrapped in pickle dumps/loads.
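The gist is the authoritative version; as a minimal sketch of the idea (class and method names here are illustrative, not the gist's actual API), the pickled variant just subclasses the memory backend and round-trips values through pickle:

```python
import pickle


class MemoryBackend:
    """Plain in-memory backend: stores object references directly."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value


class MemoryPickleBackend(MemoryBackend):
    """Same store, but values round-trip through pickle, so every get
    returns a fresh copy; mutations after set() never leak back in."""

    def set(self, key, value):
        self._store[key] = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)

    def get(self, key):
        data = self._store.get(key)
        return data if data is None else pickle.loads(data)
```

The trade-off is a pickle dumps/loads on every operation, so it's slower than the raw memory backend but still avoids any network or disk I/O.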
It does nothing but ensure that the item you get out of the cache contains only "set" data. There are probably better ways to handle this, which is why I'm proposing it with a gist rather than a pull request.
This sort of thing could be handled with a wrapper or a custom backend, but I think something like it belongs in core, simply because it lets users leverage the speed of a memory backend while keeping the same behavior as the external backends (dbm, pylibmc, redis, etc.), and switching would only require a one-line config change.
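For illustration, assuming a dogpile.cache-style region configuration (the memory_pickle backend name is hypothetical until something like this is merged), the swap would look like:

```python
# before: raw memory backend, shared objects
# region = make_region().configure("dogpile.cache.memory")

# after: pickled variant, copy-on-get -- one line changed
region = make_region().configure("dogpile.cache.memory_pickle")
```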
here's the working copy.