Memoization for the fresco framework

Memoize the result of an expensive function. This caches results in memory and never expires anything:

from fresco_memoize import memoize

@memoize()
def expensive_func():
    ...
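
Under the hood this amounts to a dict keyed on the call arguments. A minimal sketch of the idea (illustrative only, not fresco-memoize's implementation):

```python
import functools

def simple_memoize(func):
    """Minimal sketch of a never-expiring memoizer (illustration only)."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

calls = []

@simple_memoize
def expensive_func(x):
    calls.append(x)   # records each time the real work actually runs
    return x * 2

expensive_func(3)
expensive_func(3)   # second call is served from the cache
```

Because nothing ever expires, the cache grows for as long as the process lives — which is exactly why the expiry options below exist.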

Store cached items in an LRU dict (this uses repoze.lru; other LRU implementations are available). Expire keys after 10 seconds:

from fresco_memoize import memoize, LRUCacheDict

@memoize(expires=10, storefactory=LRUCacheDict)
def expensive_func():
    ...

Modify the key used to store cached items. Here we are making the cache key case-insensitive:

from fresco_memoize import memoize

@memoize(keyfunc=lambda s: s.lower())
def function_with_a_string_arg(s):
    ...
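
The keyfunc transforms the arguments before they are used as the cache key, so 'hello' and 'HELLO' share one cache entry. A self-contained sketch of the idea (not the library's internals):

```python
import functools

def memoize_with_keyfunc(keyfunc):
    """Sketch: derive the cache key via keyfunc before lookup (illustration only)."""
    def decorator(func):
        cache = {}

        @functools.wraps(func)
        def wrapper(*args):
            key = keyfunc(*args)
            if key not in cache:
                cache[key] = func(*args)
            return cache[key]
        return wrapper
    return decorator

calls = []

@memoize_with_keyfunc(lambda s: s.lower())
def shout(s):
    calls.append(s)
    return s.upper() + '!'

shout('hello')
shout('HELLO')   # same cache entry: the key is lowercased first
```

Note the corollary: whichever casing arrives first determines the cached value that all later variants receive.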

Expire cached items after a certain number of hits:

from fresco_memoize import memoize

@memoize(max_hits=100)
def expensive_func(s):
    ...

In the current implementation cache hit counters are maintained per-process, giving rise to the following limitations:

  • Cache stores that are shared between processes will see repeated invalidation as each process's hit counter reaches zero, leading to unnecessary cache misses that are usually clustered together. Using max_hits in this configuration is not recommended.
  • Cache stores that are shared between threads may not count simultaneous accesses correctly, allowing extra cache hits before an item expires.
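
The per-process countdown described above can be sketched as follows (illustrative only; the exact counting semantics here are an assumption, not fresco-memoize's implementation):

```python
def memoize_max_hits(max_hits):
    """Sketch: per-process countdown; the entry is recomputed once exhausted."""
    def decorator(func):
        cache = {}   # args -> [remaining_hits, value]

        def wrapper(*args):
            entry = cache.get(args)
            if entry is None or entry[0] <= 0:
                # counter exhausted (or never set): recompute and reset it
                entry = cache[args] = [max_hits, func(*args)]
            entry[0] -= 1
            return entry[1]
        return wrapper
    return decorator

computations = []

@memoize_max_hits(2)
def expensive_func(s):
    computations.append(s)
    return len(s)

for _ in range(5):
    expensive_func('spam')
# With max_hits=2, five calls trigger three computations (calls 1, 3 and 5)
```

Because the counter lives in the process's own dict, a second process would keep its own independent countdown — which is the source of the limitations listed above.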

Cache values on the filesystem. Keys have to be strings in this case, so use picklekeys to convert arbitrary arguments to strings:

from fresco_memoize import FileCacher, memoize, picklekeys
store = FileCacher('/tmp/cachedir')

@memoize(keyfunc=picklekeys, storefactory=lambda: store)
def expensive_func():
    ...
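
The idea behind picklekeys can be sketched with pickle plus a hash (illustrative; the library's actual key format may differ):

```python
import hashlib
import pickle

def pickle_key(*args, **kwargs):
    """Sketch: serialize the arguments, then hash to a fixed-length string."""
    raw = pickle.dumps((args, sorted(kwargs.items())), protocol=2)
    return hashlib.sha256(raw).hexdigest()

key = pickle_key('a', 1, flag=True)
# Any picklable arguments map deterministically to a filesystem-safe string
```

Hashing keeps the key a fixed, safe length regardless of how large the arguments are, at the cost of not being able to recover the arguments from the key.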

Invalidate a key:

from fresco_memoize import Memoizer

Memoizer.of(expensive_func).invalidate_key('updated', 'key')

(Note that the arguments to invalidate_key must be the same as those originally passed to expensive_func.)
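
This is because the cache key is derived from the call arguments, so invalidation must rebuild the same key. A self-contained sketch of the principle (not fresco-memoize's code):

```python
calls = []

def memoize_sketch(func):
    """Sketch: invalidation recomputes the same key a call would use."""
    cache = {}

    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    # expose invalidation keyed on the original call arguments
    wrapper.invalidate_key = lambda *args: cache.pop(args, None)
    return wrapper

@memoize_sketch
def expensive_func(a, b):
    calls.append((a, b))
    return a + b

expensive_func(1, 2)
expensive_func.invalidate_key(1, 2)   # same args as the original call
expensive_func(1, 2)                  # recomputed after invalidation
```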

Invalidate all keys:

from fresco_memoize import Memoizer

Memoizer.of(expensive_func).invalidate_all()

Note that fresco-memoize assumes you might be sharing the cache backend between multiple memoized functions. invalidate_all is therefore slow because it must check every cache key.
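
The reason for the scan can be sketched with a shared dict whose keys are namespaced per function (an illustration of the constraint, not the library's storage format):

```python
# One store shared by two memoized functions; keys carry the function name
shared_store = {
    ('expensive_func', (1,)): 10,
    ('expensive_func', (2,)): 20,
    ('other_func', (1,)): 99,
}

def invalidate_all(store, funcname):
    """Sketch: every key must be inspected to find this function's entries."""
    for key in list(store):
        if key[0] == funcname:
            del store[key]

invalidate_all(shared_store, 'expensive_func')
# other_func's entry survives; the scan cost is O(total keys in the store)
```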

If you are not sharing the cache backend (or don't care about collateral damage) this would be more efficient:

from fresco_memoize import Memoizer


Cache zones

Each memoizer is associated with a named zone (if not set, a zone named 'default' is used).

This makes it easy to clear a group of related caches in one go. For example:

from fresco_memoize import Zone

book_memoizer = Zone('Books')

class Book(object):

    @book_memoizer()
    def get_synopsis(self):
        return fetch_synopsis_from_db()

    @book_memoizer()
    def get_reviews(self):
        return fetch_reviews_from_db()
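
Conceptually a zone is a registry of caches that can be emptied together. A minimal self-contained sketch (ZoneSketch is illustrative, not the library's Zone):

```python
class ZoneSketch:
    """Sketch of a zone: a named group of per-function caches (illustration only)."""

    def __init__(self, name):
        self.name = name
        self.caches = []

    def __call__(self):
        def decorator(func):
            cache = {}
            self.caches.append(cache)   # register this cache with the zone

            def wrapper(*args):
                if args not in cache:
                    cache[args] = func(*args)
                return cache[args]
            return wrapper
        return decorator

    def clear(self):
        for cache in self.caches:
            cache.clear()

book_zone = ZoneSketch('Books')

@book_zone()
def get_synopsis(book_id):
    return 'synopsis for %s' % book_id

@book_zone()
def get_reviews(book_id):
    return ['review of %s' % book_id]

get_synopsis(1)
get_reviews(1)
book_zone.clear()   # empties every cache registered in the zone
```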

A single command will then clear all memoizers in one go:


A variation on this is to clear only the cached items related to a single object. You can do this by passing a function to Zone that, given the memoized function's arguments, returns a dictionary of dependency information:

author_memoizer = Zone('Authors', lambda self: {'author_id': self.id})

class Author(object):

    @author_memoizer()
    def get_rating(self):
        return calculate_author_popularity()

You can then invalidate cached data for a given author by calling:


Individual memoizers can override the zone's key function if required:

class Book(object):

    @author_memoizer(zonekey=lambda self: {'author_id': self.author.id})
    def get_author_biography(self):
        return fetch_author_biography_from_db()
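
The dependency mechanism can be sketched as entries tagged with the dictionary returned by the key function; targeted invalidation then deletes the matching entries (illustrative names throughout, not the library's internals):

```python
class DependencyCache:
    """Sketch: entries tagged with dependency info for targeted invalidation."""

    def __init__(self):
        self.entries = {}   # cache key -> (value, deps_dict)

    def put(self, key, value, deps):
        self.entries[key] = (value, deps)

    def get(self, key):
        return self.entries[key][0]

    def invalidate_where(self, **deps):
        """Remove entries whose recorded dependencies include all given items."""
        for key in list(self.entries):
            _, entry_deps = self.entries[key]
            if all(entry_deps.get(k) == v for k, v in deps.items()):
                del self.entries[key]

cache = DependencyCache()
cache.put(('get_rating', 1), 4.5, {'author_id': 1})
cache.put(('get_rating', 2), 3.9, {'author_id': 2})
cache.invalidate_where(author_id=1)
# Only author 1's entries are removed; author 2's survive
```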