I'm trying to integrate dogpile.cache with SQLAlchemy to build a real-time cache layer on top of an RDBMS.
To make it real-time, I need the ability to invalidate a key while its creator is still running. For example:
```python
@region.cache_on_arguments()
def get(user_id):
    return DBSession().using_bind("master").query(User).get(user_id)
```
There is also an event listener that listens for SQLAlchemy's after_commit signal; if a User model was updated in the commit, I issue a delete on the corresponding user_id key to expire it.
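A runnable sketch of that listener, with a plain dict standing in for the dogpile.cache region (a real setup would call `get.invalidate(user_id)` on the decorated function instead of popping the dict). The `before_flush` hook is needed because the session's new/dirty/deleted collections are already reset by the time after_commit fires:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

cache = {}  # stand-in for the dogpile.cache region

@event.listens_for(Session, "before_flush")
def _collect_changed_users(session, flush_context, instances):
    # Record the ids of users touched in this flush; these collections
    # are emptied again before after_commit fires.
    ids = session.info.setdefault("changed_user_ids", set())
    for obj in list(session.new) + list(session.dirty) + list(session.deleted):
        if isinstance(obj, User) and obj.id is not None:
            ids.add(obj.id)

@event.listens_for(Session, "after_commit")
def _invalidate_changed_users(session):
    for user_id in session.info.pop("changed_user_ids", set()):
        # Deleting a missing key is a no-op -- which is exactly the
        # race described below.
        cache.pop(user_id, None)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(id=1, name="alice"))
    session.commit()

cache[1] = "stale user 1"
with Session(engine) as session:
    user = session.get(User, 1)
    user.name = "bob"
    session.commit()

print(1 in cache)  # False: the commit evicted the cached entry
```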
So what if a commit issues that delete right in the middle of the creator generating a value? The delete is effectively ignored, since the key is not in the cache yet (only a lock is).
This problem would be solved if I could invalidate the stale value during its creation, and another get_or_create could then enter the creator once the previous one was marked invalid.
Currently I think it can be implemented with a customized lock that carries a unique id inside.
creator_a enters and acquires a lock with a unique id. If an invalidate is issued, it deletes the value and the lock (if either exists). Another creator_b can then acquire the lock and begin generating a value. When creator_a finishes and tries to release the lock, it finds the unique id doesn't match, so it returns without writing its value to the cache. Finally creator_b finishes and releases the lock; the unique id matches, so it writes its value to the cache and returns.
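A minimal in-process sketch of the tokenized-lock idea, using a dict-backed store; a real backend (Redis, memcached, ...) would need atomic operations, and blocking/waiting on a held lock is elided here:

```python
import threading
import uuid

class TokenStore:
    def __init__(self):
        self._data = {}   # key -> cached value
        self._locks = {}  # key -> unique token of the current creator
        self._mutex = threading.Lock()

    def acquire(self, key):
        # A creator takes the lock by writing a fresh unique token.
        # (A real lock would wait if the key is already held.)
        token = uuid.uuid4().hex
        with self._mutex:
            self._locks[key] = token
        return token

    def invalidate(self, key):
        # Delete the value AND the lock: the in-flight creator's token
        # no longer matches, so its result will be discarded.
        with self._mutex:
            self._data.pop(key, None)
            self._locks.pop(key, None)

    def release(self, key, token, value):
        with self._mutex:
            if self._locks.get(key) != token:
                # Invalidated (or taken over) mid-creation: drop the
                # stale value instead of writing it to the cache.
                return False
            self._data[key] = value
            del self._locks[key]
            return True

store = TokenStore()
t_a = store.acquire("user:1")                  # creator_a starts
store.invalidate("user:1")                     # a commit invalidates mid-creation
t_b = store.acquire("user:1")                  # creator_b starts fresh
print(store.release("user:1", t_a, "stale"))   # False: token mismatch, value dropped
print(store.release("user:1", t_b, "fresh"))   # True: value written
```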
This would also provide a better answer to the refresh fn in #36. The current implementation is not usable in a distributed setup: if refresh is called on multiple servers without a lock, it can leave the data in an inconsistent state. Replacing it with an 'invalidate -> get_or_create' would make it work as a real refresh.
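To illustrate the 'invalidate -> get_or_create' shape, here is a sketch against a hypothetical region object with `delete` and `get_or_create`; `DictRegion` is a toy stand-in for a real dogpile.cache CacheRegion:

```python
import threading

class DictRegion:
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def delete(self, key):
        self._data.pop(key, None)

    def get_or_create(self, key, creator):
        # The lock plays the role of dogpile's mutex: only one caller
        # runs the creator, the rest read the stored value.
        with self._lock:
            if key not in self._data:
                self._data[key] = creator()
            return self._data[key]

def refresh(region, key, creator):
    # Deleting first forces exactly one caller to regenerate; the others
    # block on the lock and then see the fresh value.
    region.delete(key)
    return region.get_or_create(key, creator)

region = DictRegion()
region.get_or_create("user:1", lambda: "v1")
print(refresh(region, "user:1", lambda: "v2"))  # prints v2
```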
What do you think about this idea, or do you have a better solution for this use case?