I've run into a situation with `distributed_lock` in the Redis backend (i.e., locks that Redis manages). After hitting a few keys backed by create functions that take too long or fail outright, I realized that a `lock_timeout` essentially must be set – otherwise you can jam your application with an eternal lock on a key whose value is never set.
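For reference, the region is configured along these lines – a minimal sketch, where the connection settings and timeout values are placeholders rather than my real ones:

```python
from dogpile.cache import make_region

region = make_region().configure(
    "dogpile.cache.redis",
    expiration_time=300,               # placeholder cache expiration
    arguments={
        "host": "localhost",           # placeholder connection settings
        "port": 6379,
        "distributed_lock": True,      # use a Redis-managed lock across processes
        "lock_timeout": 30,            # without this, a dead worker holds the lock forever
    },
)
```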
So I set `lock_timeout`, and that solves one problem but creates another: if the create function ends up completing after the `lock_timeout` has expired, a `LockError` is raised from the redis package's lock.py:

```python
raise LockError("Cannot release a lock that's no longer owned")
```
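The failure is easy to reproduce outside dogpile entirely; here is a minimal sketch using redis-py's Lock directly, with an artificially short timeout:

```python
import time

import redis
from redis.exceptions import LockError

client = redis.Redis()  # assumes a local Redis server
lock = client.lock("demo-lock", timeout=2)  # lock auto-expires after 2 seconds

lock.acquire()
time.sleep(3)   # simulate a create function that outlives the lock
lock.release()  # raises LockError: Cannot release a lock that's no longer owned
```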
For context, this is happening in a webapp, and the point at which the lock times out can vary: the overrun can be caused by a slow connection, a dropped connection, the process being killed by a long-process monitor, or something else. Since this is only being used for caching, I opted to expire the Redis lock using the shortest possible time.
An ideal fix would have been to configure the dogpile Redis backend to ignore this particular LockError. However, looking at the traceback and the source, that appears to be really tricky.
dogpile/cache/region.py", line 651, in `get_or_create` dogpile/core/dogpile.py", line 158, in `__enter__` dogpile/core/dogpile.py", line 98, in `_enter` dogpile/core/dogpile.py", line 153, in `_enter_create` redis/lock.py", line 135, in `release` (raised on line 264)
In order to keep the value AND suppress the LockError, the config data in dogpile.cache would need to be plumbed through into dogpile.core. That is a lot of updating.
Another approach might be catching this particular LockError in the region's `get_or_create` and then immediately getting the value again (which should succeed, since the value is stored before the lock is released). That looks tricky too, though; a rough external approximation of the idea is sketched below.
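This helper is hypothetical – `tolerant_get_or_create` is not part of dogpile.cache, just an illustration of the catch-and-re-read idea applied from the outside:

```python
from dogpile.cache.api import NO_VALUE
from redis.exceptions import LockError

def tolerant_get_or_create(region, key, creator):
    """Hypothetical helper: treat a stale-lock release failure as benign."""
    try:
        return region.get_or_create(key, creator)
    except LockError:
        # get_or_create stores the new value before releasing the lock, so
        # if release failed only because the lock expired, the value is
        # already in the cache and a plain get should return it
        value = region.get(key)
        if value is NO_VALUE:
            raise  # something else went wrong; don't mask it
        return value
```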
Does anyone have suggestions for an elegant fix?