Anonymous committed f298d09

make it less about the syncreader


Files changed (1)

docs/build/usage.rst

-Introduction
-============
+====================
+Dogpile Usage Guide
+====================
 
 At its core, Dogpile provides a locking interface around a "value creation" function.
 
 resource-usage system outside, or in addition to, the one 
 `dogpile.cache <http://bitbucket.org/zzzeek/dogpile.cache>`_ provides.
 
-Usage
-=====
+Rudimentary Usage
+==================
 
 A simple example::
 
 be altered to support any kind of locking as we'll see in a 
 later section.
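
As a rough illustration of the basic pattern (a hypothetical sketch, not
Dogpile's actual implementation), the idea of a lock wrapped around a
"value creation" function can be reduced to: one thread regenerates an
expired value while others either wait (if no value exists yet) or
proceed with the stale value::

```python
import threading
import time

class MiniDogpile:
    """Illustrative sketch only: coordinate regeneration of an
    expired value among many threads."""

    def __init__(self, expiretime):
        self.expiretime = expiretime
        self.createdtime = -1          # no value generated yet
        self.mutex = threading.Lock()  # guards the creation function

    def is_expired(self):
        return time.time() - self.createdtime > self.expiretime

    def acquire(self, creator):
        if self.is_expired():
            # If a (stale) value already exists, don't block; losers of
            # the race simply proceed with the stale value.
            have_value = self.createdtime > 0
            if self.mutex.acquire(blocking=not have_value):
                try:
                    if self.is_expired():  # re-check under the lock
                        creator()
                        self.createdtime = time.time()
                finally:
                    self.mutex.release()

store = {}

def create_value():
    store["value"] = "generated"

pile = MiniDogpile(3600)
pile.acquire(create_value)
```

The names here (``MiniDogpile``, ``create_value``) are invented for
illustration; the real :class:`.Dogpile` class provides this
coordination with a richer API.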
 
-Locking the "write" phase against the "readers"
-------------------------------------------------
-
-The dogpile lock can provide a mutex to the creation 
-function itself, so that the creation function can perform
-certain tasks only after all "stale reader" threads have finished.
-The example of this is when the creation function has prepared a new
-datafile to replace the old one, and would like to switch in the
-"new" file only when other threads have finished using it.
-
-To enable this feature, use :class:`.SyncReaderDogpile`.
-:meth:`.SyncReaderDogpile.acquire_write_lock` then provides a safe-write lock
-for the critical section where readers should be blocked::
-
-
-    from dogpile import SyncReaderDogpile
-
-    dogpile = SyncReaderDogpile(3600)
-
-    def some_creation_function(dogpile):
-        create_expensive_datafile()
-        with dogpile.acquire_write_lock():
-            replace_old_datafile_with_new()
-
-    # usage:
-    with dogpile.acquire(some_creation_function):
-        read_datafile()
-
-With the above pattern, :class:`.SyncReaderDogpile` will
-allow concurrent readers to read from the current version 
-of the datafile as 
-the ``create_expensive_datafile()`` function proceeds with its
-job of generating the information for a new version.
-When the data is ready to be written,  the 
-:meth:`.SyncReaderDogpile.acquire_write_lock` call will 
-block until all current readers of the datafile have completed
-(that is, they've finished their own :meth:`.Dogpile.acquire` 
-blocks).   The ``some_creation_function()`` function
-then proceeds, as new readers are blocked until
-this function finishes its work of 
-rewriting the datafile.
-
 Using a Value Function with a Cache Backend
--------------------------------------------
+=============================================
 
 The dogpile lock includes a more intricate mode of usage to optimize the
 usage of a cache like Memcached.   The difficulties Dogpile addresses
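The shape of this cache-integrated mode can be sketched as
double-checked locking around a cache lookup (a hypothetical
illustration; ``acquire``, ``create_and_cache_value``, and
``get_value_from_cache`` below are invented names, not Dogpile's API)::

```python
import threading

cache = {}                      # stands in for Memcached
mutex = threading.Lock()

def create_and_cache_value():
    cache["some_key"] = "the value"

def get_value_from_cache():
    # raises KeyError when not present, signaling "needs creation"
    return cache["some_key"]

def acquire(creator, value_fn):
    """Consult the cache first; only on a miss take the lock and
    re-check before creating, so at most one thread runs the creator."""
    try:
        return value_fn()
    except KeyError:
        with mutex:
            try:
                return value_fn()   # another thread may have filled it
            except KeyError:
                creator()
                return value_fn()

value = acquire(create_and_cache_value, get_value_from_cache)
```

The key point is that the "value function" is consulted again inside
the lock, so threads that lost the race read the freshly cached value
rather than regenerating it.
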
 .. _caching_decorator:
 
 Using Dogpile for Caching
---------------------------
+==========================
 
 Dogpile is part of an effort to "break up" the Beaker
 package into smaller, simpler components (which also work better). Here, we
 .. _scaling_on_keys:
 
 Scaling Dogpile against Many Keys
-----------------------------------
+===================================
 
 The patterns so far have illustrated how to use a single, persistently held
 :class:`.Dogpile` object which maintains a thread-based lock for the lifespan
     my_data = get_some_value("somekey")
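
The many-keys idea can be sketched as a registry that hands out one
mutex per cache key on demand, rather than one global lock (a
hypothetical illustration; the class and method names are invented)::

```python
import threading

class KeyedMutexRegistry:
    """Hand out one mutex per cache key, created lazily."""

    def __init__(self):
        self._registry = {}
        self._lock = threading.Lock()  # guards the registry itself

    def get(self, key):
        # setdefault under the lock ensures every caller asking for
        # the same key receives the same mutex object
        with self._lock:
            return self._registry.setdefault(key, threading.Lock())

registry = KeyedMutexRegistry()
```

A production version would also want to discard mutexes for keys no
longer in use (for example via weak references) so the registry does
not grow without bound.
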
 
 Using a File or Distributed Lock with Dogpile
-----------------------------------------------
+==============================================
+
 
 The final twist on the caching pattern is to fix the issue of the Dogpile mutex
 itself being local to the current process.   When a handful of threads all go 
 objects in various processes will now coordinate with each other, using this common 
 filename as the "baton" against which creation of a new value proceeds.
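
A minimal file-based mutex of this kind might look like the following
(a POSIX-only sketch using ``flock``; the ``FileLock`` class is
invented for illustration, not part of Dogpile)::

```python
import fcntl
import os

class FileLock:
    """A cross-process mutex: any process locking the same filename
    participates in the same critical section."""

    def __init__(self, filename):
        self.filename = filename
        self.fd = None

    def __enter__(self):
        self.fd = os.open(self.filename, os.O_CREAT | os.O_RDWR)
        fcntl.flock(self.fd, fcntl.LOCK_EX)   # blocks until exclusive
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)
        self.fd = None

with FileLock("/tmp/dogpile_demo.lock"):
    pass  # the "value creation" critical section goes here
```
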
 
+Locking the "write" phase against the "readers"
+================================================
 
+A less prominent feature of Dogpile ported from Beaker is the
+ability to provide a mutex against the actual resource being read
+and created, so that the creation function can perform
+certain tasks only after all reader threads have finished.
+An example of this is when the creation function has prepared a new
+datafile to replace the old one, and would like to switch in the
+new file only when other threads have finished using it.
+
+To enable this feature, use :class:`.SyncReaderDogpile`.
+:meth:`.SyncReaderDogpile.acquire_write_lock` then provides a safe-write lock
+for the critical section where readers should be blocked::
+
+
+    from dogpile import SyncReaderDogpile
+
+    dogpile = SyncReaderDogpile(3600)
+
+    def some_creation_function(dogpile):
+        create_expensive_datafile()
+        with dogpile.acquire_write_lock():
+            replace_old_datafile_with_new()
+
+    # usage:
+    with dogpile.acquire(some_creation_function):
+        read_datafile()
+
+With the above pattern, :class:`.SyncReaderDogpile` will
+allow concurrent readers to read from the current version 
+of the datafile as 
+the ``create_expensive_datafile()`` function proceeds with its
+job of generating the information for a new version.
+When the data is ready to be written,  the 
+:meth:`.SyncReaderDogpile.acquire_write_lock` call will 
+block until all current readers of the datafile have completed
+(that is, they've finished their own :meth:`.Dogpile.acquire` 
+blocks).   The ``some_creation_function()`` function
+then proceeds, as new readers are blocked until
+this function finishes its work of 
+rewriting the datafile.
+
+Note that the :class:`.SyncReaderDogpile` approach is useful
+when working with a resource that does not itself support concurrent
+access while being written, such as flat files and possibly some forms
+of DBM file.  It is **not** needed when dealing with a data source that
+already provides a high level of concurrency, such as a relational
+database, Memcached, or a NoSQL store.   Currently, the
+:class:`.SyncReaderDogpile` object only synchronizes among multiple
+threads within the current process; it won't at this time protect
+against concurrent access by multiple processes.   Beaker did, however,
+support this behavior using lock files, and this functionality may be
+re-added in a future release.
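
The reader/writer coordination described above can be sketched with a
condition variable (a hypothetical illustration of the mechanism, not
:class:`.SyncReaderDogpile`'s actual code)::

```python
import threading

class ReadWriteLatch:
    """Count active readers; a writer waits until the count drains to
    zero, while new readers block whenever a write is in progress."""

    def __init__(self):
        self.cond = threading.Condition()
        self.readers = 0
        self.writing = False

    def acquire_read(self):
        with self.cond:
            while self.writing:       # block new readers during a write
                self.cond.wait()
            self.readers += 1

    def release_read(self):
        with self.cond:
            self.readers -= 1
            self.cond.notify_all()    # wake a writer waiting to drain

    def acquire_write(self):
        with self.cond:
            self.writing = True       # stop admitting new readers
            while self.readers:       # wait for current readers to finish
                self.cond.wait()

    def release_write(self):
        with self.cond:
            self.writing = False
            self.cond.notify_all()

latch = ReadWriteLatch()
latch.acquire_read()
latch.release_read()
latch.acquire_write()   # would block here if a reader were still active
latch.release_write()
```
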
+
+