Using thread-local dictionaries for storage

Issue #115 (new)
Dan LaManna
created an issue

I'm trying to use a thread-local variable as the cache_dict argument to the MemoryBackend. The problem is that the region must exist at import time in order to decorate the functions, so this would result in different threads reconfiguring a cache region that is shared across threads.

Is there a mechanism for creating a lazy or "fake" region that can intercept all calls to it and delegate them to a thread-local region, creating and configuring that region if it doesn't already exist?

Does this make sense and/or sound sane?
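One possible shape for such a delegating region is a proxy object that lazily builds a per-thread instance on first use. This is only a sketch: the `ThreadLocalProxy` and `FakeRegion` names are hypothetical, and a plain stand-in object is used so the example is self-contained; with dogpile.cache the factory would instead call `make_region()` and `configure()` it.

```python
import threading


class ThreadLocalProxy:
    """Forward attribute access to a per-thread object, created
    lazily by `factory` on first use in each thread (hypothetical
    sketch)."""

    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()

    def __getattr__(self, name):
        # build this thread's delegate on first access
        if not hasattr(self._local, "obj"):
            self._local.obj = self._factory()
        # forward everything (get, set, cache_on_arguments, ...)
        return getattr(self._local.obj, name)


class FakeRegion:
    """Stand-in for a configured cache region."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


# the proxy itself is safe to create at import time; each thread
# lazily gets its own region the first time it touches the proxy
region = ThreadLocalProxy(FakeRegion)
region.set("k", 1)

seen = []

def worker():
    # a different thread sees its own, still-empty region
    seen.append(region.get("k"))

t = threading.Thread(target=worker)
t.start()
t.join()
```

Because the delegate is created inside `__getattr__`, nothing is configured until a thread actually uses the region, which sidesteps the import-time problem described above.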

Comments (2)

  1. jvanasco

    The closest things I've done in this area are:

    • The MemoryPickle backend. That was to get around an issue (mostly seen in unit tests) in which MemoryBackend values were stored as-is as Python objects, so mutating an object fetched from the cache changed the value in the cache without an explicit SET operation.

    • A lot of work with custom backends.

    If I understand the situation correctly, I have a few suggestions:

    1. A custom backend used with a standard region.
    2. Custom decorators that create/call the regions in a thread-safe way.

    I'd lean towards a custom backend, as @Michael Bayer suggested.

    The default MemoryBackend just accesses self._cache, so a subclass could expose _cache as a property and have that property call a thread-safe cache function. Alternatively, you could use the threading module to create and initialize a thread-local variable lazily.

    class ThreadsafeMemoryBackend(MemoryBackend):
        def __init__(self, arguments):
            # deliberately do not call super().__init__(); the parent
            # assigns to self._cache, which is now a read-only property
            self._f_thread_cache = arguments["thread_cache_function"]

        @property
        def _cache(self):
            # delegate to the caller-supplied function, which should
            # return the current thread's cache dict
            return self._f_thread_cache()
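For completeness, the thread_cache_function that subclass expects could be written with threading.local, e.g. (a sketch; the function name matches the "thread_cache_function" key in the arguments dict above, everything else is an assumption):

```python
import threading

_local = threading.local()


def thread_cache_function():
    # return this thread's cache dict, creating it on first use;
    # it would be handed to the backend via
    # arguments={"thread_cache_function": thread_cache_function}
    if not hasattr(_local, "cache_dict"):
        _local.cache_dict = {}
    return _local.cache_dict
```

Each thread that calls it gets its own stable dict, so entries written by one thread are invisible to the others.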