timeouts stacked. This is mostly used in the test cases; there is a
need for much better timeout handling in the tuplespace object itself.
-Additionally interests for tuples stay registered even if an inp/rd
-is aborted (for example with timeout()) - this will make tuples go
-away if they are sent to the aborted interest! This is mitigated in the
-code a little bit by reinserting tuples that are stuck in an aborted
-interest, but that only works if that process regularily makes
-new requests and so reinserts leftovers. There is currently a breaking
-test case for this problem.
+Currently there is no tuplespace locking - everything is done with
+pipes. That way, a process doing massive "out"s could move tuples into
+the gap on clients between the timeout on a receive and the unregister,
+as the manager might take a moment to process the unregister. This is
+mitigated by draining (and reinserting) tuples from a pipe prior to
+registering a new interest, but that only works if the process does
+regular inp/rd requests. So far I haven't come up with a good test case
+for triggering this. Of course you can always say "don't abort
+interests" and you should be fine, as the deregistration is then done
+on send, and that only happens in the manager. The only moment where
+deregistration runs concurrently with the manager's activities is on
+aborted interests.
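The drain-and-reinsert mitigation described above can be sketched in a few lines. This is an illustrative sketch only, not lindypy's actual code: `register_interest`, the queue-based stand-ins for pipes, and the `("out", ...)`/`("register", ...)` message shapes are all assumptions made for the example.

```python
# Sketch (NOT lindypy's real implementation): before registering a new
# interest, a client drains tuples that were delivered into the gap
# after a timed-out inp/rd and hands them back to the manager.
import queue

def register_interest(client_pipe, manager_queue, pattern):
    """Re-out leftover tuples stuck in our pipe, then register."""
    while True:
        try:
            leftover = client_pipe.get_nowait()
        except queue.Empty:
            break
        # Hand the stuck tuple back so other interests can match it.
        manager_queue.put(("out", leftover))
    manager_queue.put(("register", pattern))

# Minimal demonstration with plain queues standing in for pipes.
pipe = queue.Queue()
manager = queue.Queue()
pipe.put((1, "stuck"))           # a tuple sent into the gap
register_interest(pipe, manager, (int, str))
first = manager.get_nowait()     # the reinserted tuple
second = manager.get_nowait()    # the new registration
print(first)                     # ('out', (1, 'stuck'))
print(second)                    # ('register', (<class 'int'>, <class 'str'>))
```

As the text notes, this only helps if the process keeps issuing inp/rd requests; a client that aborts an interest and then goes silent never runs the drain step.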
Limits - or when not to use it
your tuples simple and use them for coordination; massive data is best
kept in a database that is shared by all processes.
+Since version 0.2 there are functional patterns - you can specify a
+callable in your interest and it will match by being called on the
+respective column of the tuples, so you can construct more complex
+patterns than plain value or type matching.
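A matcher supporting such functional patterns can be sketched as follows. This is a hypothetical illustration, not lindypy's actual matching code: the `matches` function and its branch order (callables first, then types, then literals) are assumptions for the example.

```python
# Sketch of callable column patterns: a pattern element may be a
# callable (functional pattern), a type, or a literal value.
def matches(pattern, tup):
    if len(pattern) != len(tup):
        return False
    for p, col in zip(pattern, tup):
        if callable(p) and not isinstance(p, type):
            if not p(col):              # functional pattern: call it
                return False
        elif isinstance(p, type):
            if not isinstance(col, p):  # type pattern
                return False
        elif p != col:                  # literal pattern
            return False
    return True

print(matches((int, lambda x: x > 10), (5, 42)))  # True
print(matches((int, lambda x: x > 10), (5, 3)))   # False
```

The `isinstance(p, type)` check keeps types like `int` on the type-matching branch even though types are themselves callable.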
Additionally lindypy keeps all data in memory (although its base could
maybe one day be hooked to different backends and then use for example
sqlite or some other database for persistent tuple spaces), so for now