Commits

Pior Bastida committed 615896e Merge

merge


Files changed (31)

 dynamodb_mapper.egg-info
 
 build
+_build
 dist
 
 coverage.xml
 c2554662c08016e84b7b882bdcd5c16b8da8b2f6 1.4.2
 754277898e0b0bd76a4f33e2236ec5fe0cf3fbee 1.4.3
 d72628aecca4bf866145d3f2424475c975626c7a 1.5.0
+ce23d59bbbc2361d8db0f1362aca4f552c7120cc 1.6.0
+ebfa3e966160a3cb027438100f5270126a5ce535 1.6.1
+34934595a8c35bfe8e044a1d4f48294db4de5881 1.6.2
+739db7a1b75fd3ebe599468051821714db2e304a 1.6.3
+10624a205ab3850f6ccc0b9591a1b2cd85e04713 1.7.0
+c04a0b415b1bbf90b3f8ab9039a59f40ad65d3c2 1.7.1
+0265d4207dbbb80f96becc7848936b1c9e2e959b 1.8.0
+========================
+DynamoDBMapper 1.8.0.dev
+========================
+
+This section documents all user visible changes included between DynamoDBMapper
+versions 1.7.1 and 1.8.0
+
+Additions
+---------
+
+- add ``DynamoDBModel.validate()`` based on Onctuous
+- data are validated prior to any write operations
+- cache table objects in ``ConnectionBorg`` to avoid superfluous ``DescribeTable`` requests
+
+Changes
+-------
+
+- ``__schema__`` can now use Onctuous for deep definitions
+- ``DynamoDBModel.__init__`` sets members with no value and no default value to ``None`` instead of a "neutral" value
+- revert the fix for bug #17 (regressions).
+- move dev dependencies to ``requirements.dev.txt`` (``pip install -r requirements.dev.txt``)
+
+Upgrade
+-------
+
+- all functions relying on type coercion in ``__init__`` will now need to do it themselves
+- make sure all fields are either set before saving or marked as optional, as "neutral" values are no longer generated
+
+====================
+DynamoDBMapper 1.7.1
+====================
+
+This section documents all user visible changes included between DynamoDBMapper
+versions 1.7.0 and 1.7.1
+
+Changes
+-------
+
+- OverwriteError inherits from ConflictError so that ``raise_on_conflict`` always raises ``ConflictError`` while staying retro-compatible
+- fix bug #17: enforce type coercion in ``DynamoDBModel.__init__``. (thanks luckyasser)
+- (internal) no more "MAGIC_ITEM" initialisation for ``auto_inc_int``. It is not needed.
+
+====================
+DynamoDBMapper 1.7.0
+====================
+
+This section documents all user visible changes included between DynamoDBMapper
+versions 1.6.2 and 1.7.0
+
+Additions
+---------
+
+- migration engine - single object
+- method ``ConnectionBorg.set_region`` to specify Amazon's region (thanks kimscheibel)
+- method ``DynamoDBModel.from_db_dict`` which additionally saves ``_raw_data``
+- ``raise_on_conflict`` on ``DynamoDBModel.save``, defaults to ``False``
+- ``raise_on_conflict`` on ``DynamoDBModel.delete``, defaults to ``False``
+
+Changes
+-------
+
+- rename ``ExpectedValueError`` to ``ConflictError`` to reflect its true meaning
+- rename ``to_db_dict`` to ``_to_db_dict``. Should not be used anymore
+- rename ``from_dict`` to ``_from_db_dict``. Should not be used anymore
+- transactions may create new Items (side effect of ``raise_on_conflict`` refactor)
+- fix bug #13 in dates de-serialization. (thanks Merwok)
+- open only one shared boto connection per process instead of one per thread. Boto is thread-safe
+- re-implement ``get_batch`` to rely on boto's new generator. Fixes the 100 Items limitation and paging.
+- boto min version is 2.6.0
+
+Removal
+-------
+
+- ``expected_values`` feature was incompatible with the migration engine
+- ``allow_overwrite`` feature was not needed with ``raise_on_conflict``
+- ``to_db_dict`` and ``from_dict`` are no longer public
+- ``ThroughputError``. Throughput checks are delegated to Amazon's API (thanks kimscheibel)
+- ``new_batch_list_nominal`` is not needed anymore with boto>=2.6.0
+
+Upgrade
+-------
+
+conflict detection
+    Wherever ``save`` was called with ``expected_values=...`` and/or
+    ``allow_overwrite=False``, replace it with a call to save with
+    ``raise_on_conflict=True``. It should handle most if not all use cases. In
+    some places, you'll even be able to get rid of ``to_db_dict``. Also rename
+    all instances of ``ExpectedValueError`` to ``ConflictError``.
+
+    ``raise_on_conflict=True`` --> ``allow_overwrite=False`` for new objects
+    ``raise_on_conflict=True`` --> ``expected_values=...`` for existing objects
+
+data (de-)serialization
+    ``from_dict`` and ``to_db_dict`` have been moved to private ``_from_db_dict``
+    and ``_to_db_dict``. Any direct use of these should be avoided since
+    ``_from_db_dict`` *will* mark data as coming from the DB.
+
+    - ``from_dict(data_dict)`` for initialization should be replaced by ``__init__(**data_dict)``
+    - ``to_db_dict`` for data export should be replaced by ``to_json_dict``
+    - overloading for custom DB Item (de-)serialization can still be done provided that the function is renamed
+
+
+====================
+DynamoDBMapper 1.6.3
+====================
+
+This section documents all user visible changes included between DynamoDBMapper
+versions 1.6.2 and 1.6.3
+
+Changes
+-------
+
+- fix bug #11 in delete. Keys were not serialized
+
+
+====================
+DynamoDBMapper 1.6.2
+====================
+
+This section documents all user visible changes included between DynamoDBMapper
+versions 1.6.1 and 1.6.2
+
+Additions
+---------
+
+- transactions may generate a list of sub-transactions to run after the main one
+- log all successful queries
+- add parameter ``limit`` on ``query`` method defaulting to ``None``
+- extensive documentation
+
+Upgrade
+-------
+
+sub-transactions
+    If ``__init__()`` is overloaded in any of your transactions, make sure to call
+    ``super(MyTransactionClass, self).__init__(**kwargs)``
+
+
+Known bugs - limitations
+------------------------
+
+- #7 Can't save models where a datetime field is nested in a dict/list
+- Can't use ``datetime`` objects in ``scan`` and ``query`` filters
+- unlike ``__init__()``, ``DynamoDBModel.from_dict()`` does not check types
+
+====================
+DynamoDBMapper 1.6.1
+====================
+
+
+This section documents all user visible changes included between DynamoDBMapper
+version 1.6.0 and version 1.6.1
+
+Changes
+-------
+
+- fixed bug in scan
+
+====================
+DynamoDBMapper 1.6.0
+====================
+
+This section documents all user visible changes included between DynamoDBMapper
+versions 1.5.0 and 1.6.0
+
+Additions
+---------
+
+- support for default values in a ``__defaults__`` dict
+- specify instance members via ``__init__`` ``**kwargs``
+- autogenerated API documentation
+
+Changes
+-------
+
+- transactions engine rewrite to support multiple targets
+- transactions always persisted after first write attempt
+- transactions engine now embeds its own minimal schema
+- transactions can be set ``transient`` on a per-instance basis instead of per class
+- autoinc hash keys now rely on an ``atomic add`` to prevent race conditions
+- autoinc magic element moved to -1 instead of 0 to prevent accidental overwrite
+- autoinc magic element now hidden from scan results
+- factorized default value code
+- enforce batch size 100 limit
+- full inline documentation
+- fixed issue: All transactions fail if they have a bool field set to False
+- 99% test coverage
+
+Removal
+-------
+
+(None)
+
+
+Upgrade
+-------
+
+autoinc
+    For all tables relying on the autoinc feature, manually move the element
+    at ``'hash_key' = 0`` to ``'hash_key' = -1``
+
+transactions
+    Should be retro-compatible but you are strongly advised to adopt the
+    new API:
+
+    - specify ``targets`` and ``setters`` via ``Transactions._get_transactors``
+    - avoid any use of ``Transactions._get_target`` and ``Transactions._alter_target``
+    - save is now called automatically as long as at least 1 write was attempted
+    - ``__schema__`` might not be required anymore as ``Transaction`` embeds its own
+    - ``requester_id`` hash key must be set by the user
+
+    See these methods' documentation for more information.
+
+
+Known bugs
+----------
+
+(None)
-dynamodb-mapper -- a DynamoDB object mapper, based on boto.
+Dynamodb-mapper -- a DynamoDB object mapper, based on boto.
 
 Presentation
 ============
 
-The documentation currently assumes that you're running Boto 2.3.0 or later.
-If you're not, then the API for query and scan changes. You will have to supply
-raw condition dicts, as is done in boto itself.
+`DynamoDB <http://aws.amazon.com/dynamodb/>`_ is a minimalistic NoSQL engine
+provided by Amazon as a part of their AWS product.
 
-Also note that Boto 2.3.1 or later is required for autoincrement_int hash keys.
-Earlier versions will fail.
+**DynamoDB** allows you to store documents composed of unicode strings or numbers
+as well as sets of unicode strings and numbers. Each table must define a hash
+key and may define a range key. All other fields are optional.
+
+**Dynamodb-mapper** brings a tiny abstraction layer over DynamoDB to overcome some
+of its limitations with no performance compromise. It is highly inspired by the
+mature `MongoKit project <http://namlook.github.com/mongokit>`_.
+
+- **Full documentation**: http://dynamodb-mapper.readthedocs.org/en/latest/
+- **Report bugs**: https://bitbucket.org/Ludia/dynamodb-mapper/issues
+- **Download**: http://pypi.python.org/pypi/dynamodb-mapper
+
+Requirements
+============
+
+ - Boto >= 2.6.0
+ - AWS account
+
+Highlights
+==========
+
+- Python <--> DynamoDB type mapping
+- Deep schema definition and validation with ``Onctuous`` (new in 1.8.0)
+- Multi-target transaction (new in 1.6.0)
+- Sub-transactions (new in 1.6.2)
+- Migration engine (new in 1.7.0)
+- Smart conflict detection (new in 1.7.0)
+- Full low-level chunking abstraction for ``scan``, ``query`` and ``get_batch``
+- Default values
+- Auto-inc hash_key
+- Framework agnostic
 
 
 Example usage
 =============
 
-We assume you've correctly set your Boto credentials.
+We assume you've correctly set your Boto credentials or use ``ddbmock``.
 
-Your Model
-----------
+Quick install
+-------------
+
+::
+
+    $ pip install dynamodb-mapper
+
+If you have not yet configured Boto, you may simply
+
+::
+
+    $ export AWS_ACCESS_KEY_ID=<your id key here>
+    $ export AWS_SECRET_ACCESS_KEY=<your secret key here>
+
+
+First Model
+-----------
 
 ::
 
 
 
     class DoomMap(DynamoDBModel):
-        __table__ = "doom_map"
-        __hash_key__ = "episode"
-        __range_key__ = "map"
+        __table__ = u"doom_map"
+        __hash_key__ = u"episode"
+        __range_key__ = u"map"
         __schema__ = {
-            "episode": int,
-            "map": int,
-            "name": unicode,
-            "cheats": set,
+            u"episode": int,
+            u"map": int,
+            u"name": unicode,
+            u"cheats": set,
         }
         __defaults__ = {
-            "cheats": set(['Konami']),
+            "cheats": set([u"Konami"]),
         }
 
 
 Initial Table creation
 ----------------------
+
 ::
 
+    from dynamodb_mapper.model import ConnectionBorg
+
     conn = ConnectionBorg()
     conn.create_table(DoomMap, 10, 10, wait_for_active=True)
 
     e1m1.episode = 1
     e1m1.map = 1
     e1m1.name = u"Hangar"
-    e1m1.cheats = set(["idkfa", "iddqd", "idclip"])
+    e1m1.cheats = set([u"idkfa", u"iddqd", u"idclip"])
     e1m1.save()
 
 
     # Later on, retrieve that same object from the DB...
-    e1m1 = DoomMap.get((1, 1))
+    e1m1 = DoomMap.get(1, 1)
 
-    # query on hash+range-keyed tables
+    # query all maps of episode 1
     e1_maps = DoomMap.query(hash_key=1)
 
+    # query all maps of episode 1 with 'map' hash_key > 5
     from boto.dynamodb.condition import GT
     e1_maps_after_5 = DoomMap.query(
         hash_key=1,
         range_key_condition=GT(5))
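+
+Since 1.6.2, ``query`` also accepts a ``limit`` parameter (defaulting to
+``None``). As a quick sketch with the same model:
+
+::
+
+    # Only the first 3 maps of episode 1
+    e1_first_maps = DoomMap.query(hash_key=1, limit=3)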
+
+Contribute
+==========
+
+Want to contribute, report a bug or request a feature? The development goes on
+at Ludia's BitBucket account:
+
+Dynamodb-mapper
+---------------
+
+- **Report bugs**: https://bitbucket.org/Ludia/dynamodb-mapper/issues
+- **Fork the code**: https://bitbucket.org/Ludia/dynamodb-mapper/overview
+- **Download**: http://pypi.python.org/pypi/dynamodb-mapper
+
+Onctuous
+--------
+
+- **Full documentation**: https://onctuous.readthedocs.org/en/latest
+- **Report bugs**: https://bitbucket.org/Ludia/onctuous/issues
+- **Download**: http://pypi.python.org/pypi/onctuous
+# Makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS    =
+SPHINXBUILD   = sphinx-build
+PAPER         =
+BUILDDIR      = _build
+
+# Internal variables.
+PAPEROPT_a4     = -D latex_paper_size=a4
+PAPEROPT_letter = -D latex_paper_size=letter
+ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+# the i18n builder cannot share the environment and doctrees with the others
+I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+
+.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
+
+help:
+	@echo "Please use \`make <target>' where <target> is one of"
+	@echo "  html       to make standalone HTML files"
+	@echo "  dirhtml    to make HTML files named index.html in directories"
+	@echo "  singlehtml to make a single large HTML file"
+	@echo "  pickle     to make pickle files"
+	@echo "  json       to make JSON files"
+	@echo "  htmlhelp   to make HTML files and a HTML help project"
+	@echo "  qthelp     to make HTML files and a qthelp project"
+	@echo "  devhelp    to make HTML files and a Devhelp project"
+	@echo "  epub       to make an epub"
+	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
+	@echo "  text       to make text files"
+	@echo "  man        to make manual pages"
+	@echo "  texinfo    to make Texinfo files"
+	@echo "  info       to make Texinfo files and run them through makeinfo"
+	@echo "  gettext    to make PO message catalogs"
+	@echo "  changes    to make an overview of all changed/added/deprecated items"
+	@echo "  linkcheck  to check all external links for integrity"
+	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
+
+clean:
+	-rm -rf $(BUILDDIR)/*
+
+html:
+	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
+
+dirhtml:
+	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
+
+singlehtml:
+	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
+	@echo
+	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
+
+pickle:
+	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
+	@echo
+	@echo "Build finished; now you can process the pickle files."
+
+json:
+	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
+	@echo
+	@echo "Build finished; now you can process the JSON files."
+
+htmlhelp:
+	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
+	@echo
+	@echo "Build finished; now you can run HTML Help Workshop with the" \
+	      ".hhp project file in $(BUILDDIR)/htmlhelp."
+
+qthelp:
+	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
+	@echo
+	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
+	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
+	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/dynamodb-mapper.qhcp"
+	@echo "To view the help file:"
+	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/dynamodb-mapper.qhc"
+
+devhelp:
+	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
+	@echo
+	@echo "Build finished."
+	@echo "To view the help file:"
+	@echo "# mkdir -p $$HOME/.local/share/devhelp/dynamodb-mapper"
+	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/dynamodb-mapper"
+	@echo "# devhelp"
+
+epub:
+	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
+	@echo
+	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
+
+latex:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo
+	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
+	@echo "Run \`make' in that directory to run these through (pdf)latex" \
+	      "(use \`make latexpdf' here to do that automatically)."
+
+latexpdf:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo "Running LaTeX files through pdflatex..."
+	$(MAKE) -C $(BUILDDIR)/latex all-pdf
+	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
+
+text:
+	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
+	@echo
+	@echo "Build finished. The text files are in $(BUILDDIR)/text."
+
+man:
+	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
+	@echo
+	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
+
+texinfo:
+	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
+	@echo
+	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
+	@echo "Run \`make' in that directory to run these through makeinfo" \
+	      "(use \`make info' here to do that automatically)."
+
+info:
+	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
+	@echo "Running Texinfo files through makeinfo..."
+	make -C $(BUILDDIR)/texinfo info
+	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
+
+gettext:
+	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
+	@echo
+	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
+
+changes:
+	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
+	@echo
+	@echo "The overview file is in $(BUILDDIR)/changes."
+
+linkcheck:
+	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
+	@echo
+	@echo "Link check complete; look for any errors in the above output " \
+	      "or in $(BUILDDIR)/linkcheck/output.txt."
+
+doctest:
+	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
+	@echo "Testing of doctests in the sources finished, look at the " \
+	      "results in $(BUILDDIR)/doctest/output.txt."

docs/_include/intro.rst

+`DynamoDB <http://aws.amazon.com/dynamodb/>`_ is a minimalistic NoSQL engine
+provided by Amazon as a part of their AWS product.
+
+**DynamoDB** allows you to store documents composed of unicode strings or numbers
+as well as sets of unicode strings and numbers. Each table must define a hash
+key and may define a range key. All other fields are optional.
+
+**Dynamodb-mapper** brings a tiny abstraction layer over DynamoDB to overcome some
+of its limitations with no performance compromise. It is highly inspired by the
+mature `MongoKit project <http://namlook.github.com/mongokit>`_.

docs/api/alter.rst

+#################
+Data manipulation
+#################
+
+.. currentmodule:: dynamodb_mapper.model
+
+Amazon's DynamoDB offers the ability to both update and insert data with a single
+write operation, which Dynamodb-mapper exposes through the :py:meth:`~.DynamoDBModel.save` method.
+
+.. _saving:
+
+Saving
+======
+
+As Dynamodb-mapper directly exposes item fields as Python attributes,
+manipulating data is as easy as manipulating any Python object. Once done, just
+call :py:meth:`~.DynamoDBModel.save` on your model instance.
+
+Conflict detection
+------------------
+
+:py:meth:`~.DynamoDBModel.save` has an optional parameter ``raise_on_conflict``.
+When set to ``True``, ``save`` will ensure that:
+
+- saving a *new* object will not overwrite a pre-existing one at the same keys
+- when the object was read from the DB, it has not been altered in the DB since
+
+If the first scenario occurs, :py:class:`~.OverwriteError` is raised. In all
+other cases, it is :py:class:`~.ConflictError`.
+
+Please note that :py:class:`~.OverwriteError` inherits from :py:class:`~.ConflictError`.
+If you need to distinguish between both cases, the ``OverwriteError`` ``except``
+block must come first.
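+
+As a minimal sketch of that ordering (``user`` is a hypothetical model instance
+previously read from the DB):
+
+::
+
+    from dynamodb_mapper.model import ConflictError, OverwriteError
+
+    try:
+        user.save(raise_on_conflict=True)
+    except OverwriteError:
+        # Subclass first: another object already lives at these keys
+        print "key collision"
+    except ConflictError:
+        # The DB copy was altered since we read it
+        print "lost update avoided"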
+
+.. _save-use-case:
+
+Use case: Virtual coins
+-----------------------
+
+When a player purchases a virtual good in a game, virtual money is withdrawn
+from their internal account. After the operation, the balance must be >= 0. If
+multiple orders are being processed at the same time, we must prevent the `lost
+update` scenario:
+
+- initial balance = 200
+- purchase P1 150
+- purchase P2 100
+- read balance P1 -> 200
+- read balance P2 -> 200
+- update balance P1 -> 50
+- update balance P2 -> 100
+
+Indeed, when saving, you **expect** that the balance has not changed. This is
+what ``raise_on_conflict`` is for.
+
+::
+
+    from dynamodb_mapper.model import ConflictError, DynamoDBModel
+
+    class NotEnoughCreditException(Exception):
+        pass
+
+    class User(DynamoDBModel):
+        __table__ = u"game-dev-users"
+        __hash_key__ = u"login"
+        __schema__ = {
+            u"login": unicode,
+            u"firstname": unicode,
+            u"lastname": unicode,
+            u"email": unicode,
+            u"connections": int,
+            #...
+            u"balance": int,
+        }
+
+    user = User.get(u"waldo")
+    if user.balance - 150 < 0:
+        raise NotEnoughCreditException
+    user.balance -= 150
+
+    try:
+        user.save(raise_on_conflict=True)
+    except ConflictError:
+        print "Ooops: Lost update syndrome caught!"
+
+Note: In a real world application, this would most probably be wrapped in
+:ref:`transactions`, which transparently rely on the same mechanism and provide
+a way to persist state.
+
+Deleting
+========
+
+Just like :py:meth:`~.DynamoDBModel.save`, :py:meth:`~.DynamoDBModel.delete`
+features the ``raise_on_conflict`` option. When ``True``, it will ensure that:
+
+- deleting a *new* object does nothing; in other words, you are not accidentally deleting a random Item
+- when the object was read from the DB, it has not been altered in the DB since
+
+In all other cases, the delete operation proceeds as usual.
+
+Note: eventually consistent read operations might still successfully return the
+Item for a short while after deletion, usually under 1s.
+
+Use case: single operation user deletion
+----------------------------------------
+
+An item may be deleted in a single operation as long as the keys are known. The
+trick is to create an object with only these keys and to call delete on it. Of
+course, it will not work if ``raise_on_conflict=True``.
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel
+    from boto.dynamodb.exceptions import DynamoDBKeyNotFoundError
+
+    class User(DynamoDBModel):
+        __table__ = u"game-dev-users"
+        __hash_key__ = u"login"
+        __schema__ = {
+            u"login": unicode,
+            u"firstname": unicode,
+            u"lastname": unicode,
+            u"email": unicode,
+            u"connections": int,
+            #...
+            u"balance": int,
+        }
+
+    try:
+        user = User(login=u"waldo")
+        user.delete()
+    except DynamoDBKeyNotFoundError:
+        print "Ooops: user 'waldo' did not exist. Can't delete it!"
+
+.. _auto-increment-internals:
+
+Autoincrement technical background
+==================================
+
+When saving an Item with an :py:class:`~.autoincrement_int` ``hash_key``, the
+:py:meth:`~.DynamoDBModel.save` method will automatically add checks to prevent
+accidental overwrite of the "magic item". The magic item holds the last allocated
+ID and is saved at ``hash_key=-1``. If ``hash_key is None`` then a new ID is
+automatically and atomically allocated, meaning that no collision can occur even
+if the database connection is lost. Additionally, a check is performed to make
+sure no Item was manually inserted at this location. If applicable, a maximum
+of ``MAX_RETRIES=100`` attempts to allocate a new ID will be performed before
+raising :py:class:`~.MaxRetriesExceededError`. In all other cases, the Item will
+be saved exactly where requested.
+
+To make it short, Items involving an :py:class:`~.autoincrement_int` ``hash_key``
+will involve 2 write requests on first save. It is important to keep this in mind
+when dimensioning an insert-intensive application.
+
+:ref:`Know when to use it, when *not* to use it <auto-increment-when-to-use>`.
+
+Example:
+
+>>> model = MyModel() # model with an autoincrement_int 'id' hash_key
+>>> model.do_stuff()
+>>> model.save()
+>>> print model.id # An id field is automatically generated
+7
+
+
+About editing ``hash_key`` and/or ``range_key`` values
+======================================================
+
+Key fields specify the Item position. Amazon's DynamoDB has no support for
+"moving" an Item. This means that any edit to ``hash_key`` and/or ``range_key``
+values will preserve the original Item and insert a *new* one at the specified
+location. To prevent accidental key value changes, set ``raise_on_conflict=True``
+when calling ``save``.
+
+If you indeed meant to move the Item:
+
+- delete the item
+- save it to the new location
+
+Example:
+
+>>> model = MyModel.get(24)
+>>> model.delete() # Delete *first*
+>>> model.id = 42  # Then change the key(s)
+>>> model.save()   # Finally, save it
+
+Logically group data manipulations
+==================================
+
+Some data manipulations require a whole context to be kept consistent or their
+status to be saved. If your application requires any of these features, please go to the
+:ref:`transactions section <transactions>` of this guide.
+
+Limitations
+============
+
+Compared to Amazon's DynamoDB, some limitations currently apply to this mapper.
+:py:meth:`~.DynamoDBModel.save` has no support for:
+
+- returning data after a transaction
+- atomic increments
+
+Please, let us know if this is a blocker for you!
+
+Related exceptions
+==================
+
+OverwriteError
+--------------
+
+.. autoclass:: OverwriteError
+
+ConflictError
+------------------
+
+.. autoclass:: ConflictError

docs/api/migration.rst

+.. _migrations:
+
+##########
+Migrations
+##########
+
+As development goes on, the application's data schema evolves. As this is
+NoSQL, there is no notion of "column", hence no way to update a whole table at a
+time. In a sense, this is a good thing: migrations may be done lazily, with no
+need to lock the database for hours.
+
+The migration module aims to provide simple tools for the most common migration scenarios.
+
+Migration concepts
+==================
+
+Migrations involve 2 steps:
+
+ 1. detecting the current version
+ 2. if need be, perform operations
+
+Version detection will **always** be performed as long as a ``Migration`` class
+is associated with the ``DynamoDBModel`` to make sure the object is up to date.
+
+The version is detected by running ``check_N`` successively on the raw boto data.
+``N`` is a revision integer. Revision numbers do not need to be consecutive and
+are sorted in natural decreasing order. This means that ``N=11`` is considered
+bigger than ``N=2``.
+
+ - If ``check_N`` returns ``True``, the detected version will be ``N``.
+ - If ``check_N`` returns ``False``, detection goes on with the next lower version.
+ - If no ``check_N`` succeeds, :py:class:`~.VersionError` is raised.
+
+The migration itself is performed by successively running ``migrate_to_N`` on the
+raw boto data. This enables you to run incremental migrations. The first migrator
+run is the lowest with ``N > current_version``. Revision numbers ``N`` need not be
+consecutive nor have ``check_N`` equivalents.
+
+If your lowest possible version is ``n``, you need to have a ``check_n`` but no
+``migrate_to_n`` as there is no lower version to migrate from. Conversely,
+you need to have both a migrator and a version checker for the latest revision.
+The migrator will be needed to update older objects while the version checker
+will ensure the Item is at the latest revision. If it returns ``True``, no
+migration will be performed.
+
+At the end of the process, the version is assumed to be the latest. No additional
+check will be performed. The migrated object needs to be saved manually.
+
+When will the migration be useful?
+----------------------------------
+
+Non null field is added
+    - **detection**: no field in raw_data
+    - **migration**: add the field in raw_data
+    - Note: this is of no use if empty values are allowed as there is no distinction between empty and non-existing values in boto
+Renamed field
+    - **detection**: old field name in raw_data
+    - **migration**: insert a new field with the old value and ``del`` the old field in raw_data.
+Deleted field
+    - **detection**: the old field still exists in raw data
+    - **migration**: ``del`` old field from raw data
+Type change
+    - **detection**: if converting the raw data field to the expected type fails.
+    - **migration**: perform the type conversion manually and serialize it back *before* returning the data (see the sketch below)
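+
+As a sketch of the "type change" scenario, here is a hypothetical migrator
+turning an ``energy`` field that used to be stored as a string into a number
+(revision numbers and fields are illustrative):
+
+::
+
+    from dynamodb_mapper.migration import Migration
+
+    class EnergyTypeMigration(Migration):
+        # Up to date: energy is already a number
+        def check_2(self, raw_data):
+            return isinstance(raw_data.get(u"energy"), (int, long, float))
+
+        # Old revision: energy was stored as a string
+        def check_1(self, raw_data):
+            return isinstance(raw_data.get(u"energy"), basestring)
+
+        # Perform the conversion on the raw boto data and return it
+        def migrate_to_2(self, raw_data):
+            raw_data[u"energy"] = int(raw_data[u"energy"])
+            return raw_data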
+
+
+When will it be of no use?
+--------------------------
+
+Table rename
+    You need to manually fall back to the old table.
+Field migration between tables
+    You still need some high level magic.
+
+For complex use cases, you may consider freezing your application and running an
+EMR job on it.
+
+Use case: Rename field 'mail' to 'email'
+========================================
+
+Migration engine
+----------------
+
+::
+
+    from dynamodb_mapper.migration import Migration
+
+    class UserMigration(Migration):
+        # Is it at least compatible with the first revision?
+        def check_1(self, raw_data):
+            field_count = 0
+            field_count += u"id" in raw_data and isinstance(raw_data[u"id"], unicode)
+            field_count += u"energy" in raw_data and isinstance(raw_data[u"energy"], int)
+            field_count += u"mail" in raw_data and isinstance(raw_data[u"mail"], unicode)
+
+            return field_count == len(raw_data)
+
+        # No migrator to version 1: it cannot be older than version 1!
+
+        # Is the object up to date?
+        def check_2(self, raw_data):
+            field_count = 0
+            field_count += u"id" in raw_data and isinstance(raw_data[u"id"], unicode)
+            field_count += u"energy" in raw_data and isinstance(raw_data[u"energy"], int)
+            field_count += u"email" in raw_data and isinstance(raw_data[u"email"], unicode)
+
+            return field_count == len(raw_data)
+
+        # migrate from previous revision (1) to this one (the latest)
+        def migrate_to_2(self, raw_data):
+            raw_data[u"email"] = raw_data[u"mail"]
+            del raw_data[u"mail"]
+            return raw_data
+
+Enable migrations in model
+--------------------------
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel
+
+    class User(DynamoDBModel):
+        __table__ = "user"
+        __hash_key__ = "id"
+        __migrator__ = UserMigration # Single line to add!
+        __schema__ = {
+            "id": unicode,
+            "energy": int,
+            "email": unicode
+        }
+
+Example run
+-----------
+
+Let's say you have an object at revision 1 in the DB. It will look like this:
+
+::
+
+    raw_data_version_1 = {
+        u"id": u"Jackson",
+        u"energy": 6742348,
+        u"mail": u"jackson@tldr-ludia.com",
+    }
+
+Now, migrate it:
+
+>>> jackson = User.get(u"Jackson")
+# Done, jackson is migrated, but let's check it
+>>> print jackson.email
+u"jackson@tldr-ludia.com" # Alright!
+>>> jackson.save(raise_on_conflict=True)
+# Should go fine if no concurrent access
+
+``raise_on_conflict`` integration
+=================================
+
+Internally, ``raise_on_conflict`` relies on the raw data dict from boto to
+generate the conflict detection conditions. This dict is stored in the model instance
+*before* the migration engine is triggered so that the ``raise_on_conflict`` feature
+keeps working as expected.
+
+This behavior guarantees that :ref:`transactions` work as expected even when
+dealing with migrated objects.
+
+Related exceptions
+==================
+
+VersionError
+------------
+
+.. autoclass:: dynamodb_mapper.migration.VersionError

docs/api/model.rst

+.. _data-models:
+
+###########
+Data models
+###########
+
+.. currentmodule:: dynamodb_mapper.model
+
+Models are formal Python objects telling the mapper how to map DynamoDB data
+to regular Python objects and vice versa.
+
+Bare minimal model
+==================
+
+A bare minimal model with only a ``hash_key`` needs to define ``__table__``,
+``__hash_key__`` and ``__schema__``.
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel
+
+    class MyModel(DynamoDBModel):
+        __table__ = u"..."
+        __hash_key__ = u"key"
+        __schema__ = {
+            u"key": unicode,
+            #...
+        }
+
+The model can then be instantiated and used like any other Python class.
+
+>>> data = MyModel()
+>>> data.key = u"foo/bar"
+
+Initial values can even be specified directly in the constructor. Otherwise, unless
+:ref:`defaults are provided <using-default-values>`, all fields are set to ``None``.
+
+>>> data = MyModel(key=u"foo/bar")
+>>> repr(data.key)
+"u'foo/bar'"
+
+About keys
+==========
+
+While this is not strictly speaking related to the mapper itself, it seems important
+to clarify this point as this is a key feature of Amazon's DynamoDB.
+
+Amazon's DynamoDB has support for 1 or 2 keys per object. They must be specified
+at table creation time and can not be altered, renamed, added or removed.
+It is not even possible to change their values without deleting and re-inserting
+the object in the table.
+
+The first key is mandatory. It is called the ``hash_key``. The ``hash_key`` is
+used to access data and controls its replication among database partitions. To take
+advantage of all the provisioned R/W throughput, keys should be as random as
+possible. For more information about ``hash_key``, please see `Amazon's
+developer guide <http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/BestPractices.html#UniformWorkloadBestPractices>`_
+
+The second key is optional. It is called the ``range_key``. The ``range_key`` is
+used to logically group data under a given ``hash_key``. :ref:`More information
+below <range-key>`.
+
+Data access relying either on the ``hash_key`` alone or on both the ``hash_key`` and
+the ``range_key`` is fast and cheap. All other options are **very** expensive.
+
+We intend to add migration tools to Dynamodb-mapper in a later revision but do not
+expect miracles in this area.
+
+This is why correctly modeling your data is crucial with DynamoDB.
+
+Creating the table
+==================
+
+Unlike other NoSQL engines like MongoDB, tables must be created and managed
+explicitly. At the moment, dynamodb-mapper abstracts only the initial table
+creation. Other lifecycle management operations may be done directly via Boto.
+
+To create the table, use :py:meth:`~.ConnectionBorg.create_table` with the model
+class as first argument. When calling this method, you must specify how much
+throughput you want to provision for this table. Throughput is measured as the
+number of atomic KB requested or sent per second. For more information, please
+see `Amazon's official documentation
+<http://aws.amazon.com/dynamodb/faqs/#What_is_provisioned_throughput>`_.
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel, ConnectionBorg
+
+    conn = ConnectionBorg()
+    conn.create_table(MyModel, read_units=10, write_units=10, wait_for_active=True)
+
+Important note: unlike most databases, table creation may take up to 1 minute.
+During this time, the table is *not* usable. Also, you can not have more than 10
+tables in ``CREATING`` or ``DELETING`` state at any given time for your whole Amazon
+account. This is an Amazon DynamoDB limitation.
+
+The connection manager automatically reads your credentials from either:
+
+- ``/etc/boto.cfg``
+- ``~/.boto``
+- or ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables
+
+If none of these places defines them or if you want to overload them, please use
+:py:meth:`~.ConnectionBorg.set_credentials` before calling ``create_table``.
+
+For more information on the connection manager, please see :py:class:`~.ConnectionBorg`.
+
+Region
+------
+
+To change the AWS region from the default ``us-east-1``, use
+:py:meth:`~.ConnectionBorg.set_region` before any method that creates a
+connection.
+
+You can list the currently available regions like this:
+
+::
+
+    >>> import boto.dynamodb
+    >>> boto.dynamodb.regions()
+    [RegionInfo:us-east-1, RegionInfo:us-west-1, RegionInfo:us-west-2,
+    RegionInfo:ap-northeast-1, RegionInfo:ap-southeast-1, RegionInfo:eu-west-1]
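+
+As a sketch, assuming :py:meth:`~.ConnectionBorg.set_region` takes the region
+name as a string (an unknown name would raise :py:class:`~.InvalidRegionError`):
+
+::
+
+    from dynamodb_mapper.model import ConnectionBorg
+
+    conn = ConnectionBorg()
+    conn.set_region("eu-west-1")  # before any connection is created
+    conn.create_table(MyModel, read_units=10, write_units=10, wait_for_active=True)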
+
+.. TODO: more documentations/features on table lifecycle
+
+Advanced usage
+==============
+
+Namespacing the models
+----------------------
+
+This is more of an advice than a feature. In DynamoDB, each customer is allocated
+a single database. It is highly recommended to namespace your tables with names
+of the form ``<application>-<env>-<model>``.
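+
+For example, a ``users`` model of an application called "mygame" could live in
+``u"mygame-dev-users"`` during development and ``u"mygame-prod-users"`` in
+production.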
+
+Deep schema definition and validation with Onctuous
+---------------------------------------------------
+
+Onctuous (http://pypi.python.org/pypi/onctuous) has been integrated into
+Dynamodb-mapper as part of the 1.8.0 release cycle.
+
+Before writing any validator relying on Onctuous, there is a crucial point to
+take into account. Validators are run when loading from DynamoDB *and* when saving
+to DynamoDB. ``save`` stores the output of the validators, while reading functions
+feed the validators with raw DynamoDB values, that is to say, the serialized
+output of the validators.
+
+Hence, validators must accept both serialized and already de-serialized input.
+As of Onctuous 0.5.2, ``Coerce`` can safely do that as it checks the type before
+attempting anything.
+
+To sum up, schema entries behave as follows:
+
+ - base types (``int``, ``unicode``, ``float``, ``dict``, ``list``, ...) work seamlessly.
+ - ``datetime`` type: same special behavior as before
+ - ``[validators]`` and ``{'keyname': validators}`` are automatically (de-)serialized
+ - callable validators (``All``, ``Range``, ...) MUST accept both serialized and de-serialized input
+
+Here is a basic schema example using deep validation:
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel
+    from onctuous.validators import Match, Length, All, Coerce
+    from datetime import datetime
+
+    class Article(DynamoDBModel):
+        __table__ = "Article"
+        __hash_key__ = "slug"
+        __schema__ = {
+            # Regex validation. Input and output are unicode so no coercion problem
+            "slug": Match("^[a-z0-9-]+$"),
+
+            # Regular title and body definition
+            "title": unicode,
+            "body": unicode,
+
+            # Special case for dates. Note that you would have to handle
+            # (de-)serialization yourself if you wanted to apply conditions
+            "published_date": datetime,
+
+            # list of tags. I force unicode as an example even though it is not
+            # strictly speaking needed here
+            "tags": [All(Coerce(unicode), Length(min=3, max=15))],
+        }
+
+.. _auto-increment-when-to-use:
+
+Using auto-incrementing index
+-----------------------------
+
+For those coming from the SQL world or even MongoDB with its UUIDs, adding an
+ID field or using the default one has become automatic, but those environments
+are not limited to 2 indexes. Moreover, DynamoDB has no built-in support for it.
+Nonetheless, Dynamodb-mapper implements this feature at a higher level. For more
+technical background, see the :ref:`internal implementation <auto-increment-internals>`.
+
+If the field value is left unset (``None``), a new ``hash_key`` will
+automatically be generated when saving. Otherwise, the item is inserted at the
+specified ``hash_key``.
+
+Before using this feature, make sure you *really need it*. In most cases another
+field can be used in place. A good hint is "which field would I have marked
+UNIQUE in SQL?".
+
+- for users, the ``email`` or ``login`` field should do it.
+- for blogposts, ``permalink`` could do it too.
+- for orders, ``datetime`` is a good choice.
+
+In some applications, you need a combination of 2 fields to be unique. You may
+then consider using one as the ``hash_key`` and the other as the ``range_key``
+or, if the ``range_key`` is needed for another purpose, try combining them as
+sketched below.
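+
+A minimal sketch of such a combined key (model and fields are hypothetical):
+
+::
+
+    # Neither 'realm' nor 'login' is unique on its own, but their
+    # combination is, and makes a fine hash_key
+    realm, login = u"west-1", u"waldo"
+    player = Player()
+    player.realm_login = u"{0}:{1}".format(realm, login)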
+
+At Ludia, this is a feature we do not use anymore in our games at the time of
+writing.
+
+So, when to use it? Some applications still need a ticket-like approach and dates
+could be confusing for the end user. The best example of this is a bugtracking
+system.
+
+Use case: Bugtracking System
+----------------------------
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel, autoincrement_int
+
+    class Ticket(DynamoDBModel):
+        __table__ = u"bugtracker-dev-ticket"
+        __hash_key__ = u"ticket_number"
+        __schema__ = {
+            u"ticket_number": autoincrement_int,
+            u"title": unicode,
+            u"description": unicode,
+            u"tags": set, # target, version, priority, ..., order does not matter
+            u"comments": list, # probably not the best because of the 64KB limitation...
+            #...
+        }
+
+    # Create a new ticket and auto-generate an ID
+    ticket = Ticket()
+    ticket.title = u"Chuck Norris is the reason why Waldo hides"
+    ticket.tags = set([u'priority:critical', u'version:yesterday'])
+    ticket.description = u"Ludia needs to create a new social game to help people all around the world find him again. Where is Waldo?"
+    ticket.comments = [u"..."]  # no default value: the field starts as None
+    ticket.save()
+    print ticket.ticket_number # A new id has been generated
+
+    # Create a new ticket and force the ID
+    ticket = Ticket()
+    ticket.ticket_number = 42
+    ticket.payload = u"foo/bar"
+    ticket.save() # create or replace item #42
+    print ticket.ticket_number # id has not changed
+
+To prevent accidental data overwrite when saving to an arbitrary location, please
+see the detailed presentation of :ref:`saving`.
+
+.. Suggestion: remove the range_key limitation  when using `autoincrement_int`. might be useful to store revisions for ex
+
+Please note that ``hash_key=-1`` is currently reserved and nothing can be stored
+at this index.
+
+You can not use ``autoincrement_int`` and a ``range_key`` at the same time. In the
+bug tracker example above, it also means that ticket numbers are allocated at
+the application scope, not on a per-project scope.
+
+This feature is only part of Dynamodb-mapper. When using another mapper or
+direct data access, you might *corrupt* the counter. Please see the
+:py:class:`~.autoincrement_int` reference documentation for implementation
+details and technical limitations.
+
+.. _range-key:
+
+Using a range_key
+-----------------
+
+Models may define a second key index called the ``range_key``. While the ``hash_key``
+only allows dict-like access, the ``range_key`` allows grouping multiple items under
+a single ``hash_key`` and further filtering them.
+
+For example, let's say you have a customer and want to track all their orders. The
+naive/SQL-like implementation would be:
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel, autoincrement_int
+
+    class Customer(DynamoDBModel):
+        __table__ = u"myapp-dev-customers"
+        __hash_key__ = u"login"
+        __schema__ = {
+            u"login": unicode,
+            u"order_ids": set,
+            #...
+        }
+
+    class Order(DynamoDBModel):
+        __table__ = u"myapp-dev-orders"
+        __hash_key__ = u"order_id"
+        __schema__ = {
+            u"order_id": autoincrement_int,
+            #...
+        }
+
+    # Get all orders for customer "John Doe"
+    customer = Customer.get(u"John Doe")
+    order_generator = Order.get_batch(customer.order_ids)
+
+But this approach has many drawbacks.
+
+- It is expensive:
+    - An update to generate a new autoinc ID
+    - An insertion for the new order item
+    - An update to add the new order id to the customer
+- It is risky:
+    - Items are limited to 64KB but the ``order_ids`` set has no growth limit
+- To get all orders from a given customer, you need to read the customer first
+  and use a :py:meth:`~.DynamoDBModel.get_batch` request
+
+As a first enhancement and to spare a request, you can use ``datetime`` instead of
+``autoincrement_int`` for the key ``order_id`` but with the power of range keys,
+you can get all orders in a single request:
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel
+    from datetime import datetime
+
+    class Customer(DynamoDBModel):
+        __table__ = u"myapp-dev-customers"
+        __hash_key__ = u"login"
+        __schema__ = {
+            u"login": unicode,
+            #u"orders": set, => This field is not needed anymore
+            #...
+        }
+
+    class Order(DynamoDBModel):
+        __table__ = u"myapp-dev-orders"
+        __hash_key__ = u"login"
+        __range_key__ = u"order_id"
+        __schema__ = {
+            u"order_id": datetime,
+            #...
+        }
+
+    # Get all orders for customer "John Doe"
+    Order.query(u"John Doe")
+
+Not only is this approach better, it is also much more powerful. We could
+easily limit the result count, sort the results in reverse order or filter them by
+creation date if needed. For more background on the querying system, please see
+the :ref:`accessing data <accessing-data>` section of this manual.
+
+.. _using-default-values:
+
+Default values
+--------------
+
+When instantiating a model, fields with no value and no default are initialised
+to ``None``. Since 1.8.0, "neutral" values (the empty container for ``dict``,
+``set`` and ``list``, the empty string for ``unicode``, 0 for numbers) are no
+longer generated.
+
+It is also possible to specify the values taken by the fields at instantiation
+time, either with a ``__defaults__`` dict or directly in ``__init__``. The former
+applies to all new instances while the latter is obviously on a per-instance
+basis and takes precedence.
+
+``__defaults__`` is a ``{u'keyname': default_value}`` dict. The ``__init__`` syntax follows
+the same logic: ``Model(keyname=default_value, ...)``.
+
+``default_value`` can either be a scalar value or a callable with no argument
+returning a scalar value. The value must be of a type matching the schema
+definition, otherwise a ``TypeError`` exception is raised.
+
+Example:
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel, utc_tz
+    from datetime import datetime
+
+    # define a model with defaults
+    class PlayerStrength(DynamoDBModel):
+        __table__ = u"player_strength"
+        __hash_key__ = u"player_id"
+        __schema__ = {
+            u"player_id": int,
+            u"strength": unicode,
+            u"last_update": datetime,
+        }
+        __defaults__ = {
+            u"strength": u'weak', # scalar default value
+            u"last_update": lambda: datetime.now(utc_tz), # callable default value
+        }
+
+>>> player = PlayerStrength(strength=u"chuck norris") # overload one of the defaults
+>>> print player.strength
+chuck norris
+>>> print player.last_update
+2012-12-21 13:37:00.00000
+
+Related exceptions
+==================
+
+SchemaError
+-----------
+
+.. autoclass:: SchemaError
+
+InvalidRegionError
+------------------
+
+.. autoclass:: InvalidRegionError
+

docs/api/query.rst

+.. _accessing-data:
+
+##############
+Accessing data
+##############
+
+Amazon's DynamoDB offers 4 data access methods. Dynamodb-mapper directly exposes
+them. They are documented here from the fastest to the slowest. It is interesting
+to note that, because of Amazon's throughput credit, the slowest is also the most
+expensive.
+
+Strong vs eventual consistency
+==============================
+
+While this is not strictly speaking related to the mapper itself, it seems important
+to clarify this point as this is a key feature of Amazon's DynamoDB.
+
+Tables are spread across partitions for redundancy and performance purposes. When
+writing an item, it takes some time to replicate it to all partitions, usually
+less than a second according to the technical specifications. Accessing an item
+right after writing it might get you an outdated version.
+
+In most applications, this will not be an issue. In this case we say that data is
+'eventually consistent'. If this matters, you may request 'strong consistency',
+thus asking for the most up to date version. 'Strong consistency' is also
+twice as expensive in terms of capacity units as 'eventual consistency' and a bit
+slower too, so keeping this aspect in mind is important.
+
+'Eventual consistency' is the default behavior in all requests. It is also the only
+available option for ``scan`` and ``get_batch``.
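+
+As a sketch, assuming the read methods expose boto's usual ``consistent_read``
+keyword (an assumption; please check the API reference of your version):
+
+::
+
+    # Eventually consistent read: default, cheaper
+    user = MyUserModel.get(u"Chuck Norris")
+
+    # Strongly consistent read: most up to date, twice the capacity units
+    user = MyUserModel.get(u"Chuck Norris", consistent_read=True)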
+
+.. todo: get with update
+
+Querying
+========
+
+The 4 DynamoDB query methods are:
+
+- :py:meth:`~.DynamoDBModel.get`
+- :py:meth:`~.DynamoDBModel.get_batch`
+- :py:meth:`~.DynamoDBModel.query`
+- :py:meth:`~.DynamoDBModel.scan`
+
+They all are ``classmethods`` returning instance(s) of the model.
+To get object(s):
+
+>>> obj = MyModelClass.get(...)
+
+Use ``get`` or ``get_batch`` to get one or more items by exact id. If you need
+more than one item, it is highly recommended to use ``get_batch`` instead of
+``get`` in a loop as it avoids the cost of multiple network calls. However, if
+strong consistency is required, ``get`` is the only option as DynamoDB does not
+support it in batch mode.
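+
+A minimal sketch of the difference, using the hypothetical ``MyUserModel`` from
+the use case below:
+
+::
+
+    # One network call per item: avoid this
+    users = [MyUserModel.get(name) for name in (u"Chuck Norris", u"Waldo")]
+
+    # A single batched request; returns a generator
+    users = MyUserModel.get_batch([u"Chuck Norris", u"Waldo"])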
+
+When objects are logically grouped using a :ref:`range_key <range-key>` it is
+possible to get all of them in a single, fast query provided they all
+have the same known ``hash_key``. :py:meth:`~.DynamoDBModel.query` also supports
+`a couple of handy filters <http://docs.pythonboto.org/en/latest/ref/dynamodb.html#boto.dynamodb.layer2.Layer2.query>`_.
+
+When querying, you pay only for the results you really get; this is what makes
+filtering interesting. Filters work both for strings and for numbers. The
+``BEGINSWITH`` filter is extremely handy for namespaced ``range_key`` values (see
+the sketch below). When using the ``EQ(x)`` filter, it may be preferable for
+readability to rewrite it as a regular ``get``. The cost in terms of read units
+is strictly speaking the same.
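+
+A sketch of the ``BEGINSWITH`` idiom, assuming boto exposes the condition as
+``BEGINS_WITH`` and a hypothetical model whose range keys are namespaced like
+``u"comment:<timestamp>"``:
+
+::
+
+    from boto.dynamodb.condition import BEGINS_WITH
+
+    # All comment entries attached to item 42
+    comments = MyPostModel.query(
+        hash_key_value=42,
+        range_key_condition=BEGINS_WITH(u"comment:")
+    )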
+
+If needed, :py:meth:`~.DynamoDBModel.query` supports strong consistency,
+reversing the scan order and limiting the result count.
+
+The last function, ``scan``, is like a generalised version of ``query``. Any field
+can be filtered and more filters are available. There is a `complete list
+<http://docs.pythonboto.org/en/latest/ref/dynamodb.html#boto.dynamodb.layer2.Layer2.scan>`_
+on the Boto website. Nonetheless, ``scan`` results are *always* ``eventually
+consistent``.
+
+This said, ``scan`` is extremely expensive in terms of throughput and its use
+should be avoided as much as possible. It may even negatively impact pending
+regular requests, causing them to repeatedly fail. The underlying Boto tries to
+handle this gracefully but your overall application's performance and user
+experience might suffer a lot. For more information about the impact of ``scan``,
+please see `Amazon's developer guide
+<http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/BestPractices.html#ScanQueryConsiderationBestPractices>`_
+
+To retrieve results of :py:meth:`~.DynamoDBModel.get_batch`,
+:py:meth:`~.DynamoDBModel.query` and :py:meth:`~.DynamoDBModel.scan`, just loop
+over the result list. Technically, they all rely on high-level generators
+abstracting the query chunking logic.
+
+All querying methods persist the original raw object for
+:ref:`raise_on_conflict <saving>` and transactions.
+
+Use case: Get user ``Chuck Norris``
+-----------------------------------
+
+This first example is pretty straightforward.
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel
+
+    # Example model
+    class MyUserModel(DynamoDBModel):
+        __table__ = u"..."
+        __hash_key__ = u"fullname"
+        __schema__ = {
+            # This is probably *not* a good key in a real world application because of homonyms
+            u"fullname": unicode,
+            # [...]
+        }
+
+    # Get the user
+    myuser = MyUserModel.get(u"Chuck Norris")
+
+    # Do some work
+    print "myuser({})".format(myuser.fullname)
+
+
+Use case: Get only objects after ``2012-12-21 13:37``
+-----------------------------------------------------
+
+At the moment, filters only accept strings and numbers. To work around this
+limitation in time based applications that need to filter on dates, you need to
+export the ``datetime`` object to the internal W3CDTF representation.
+
+::
+
+    from datetime import datetime
+    from dynamodb_mapper.model import DynamoDBModel, utc_tz
+    from boto.dynamodb.condition import *
+
+    # Example model
+    class MyDataModel(DynamoDBModel):
+        __table__ = u"..."
+        __hash_key__ = u"h_key"
+        __range_key__ = u"r_key"
+        __schema__ = {
+            u"h_key": int,
+            u"r_key": datetime,
+            # [...]
+        }
+
+    # Build the date condition and export it to W3CDTF representation
+    date_obj = datetime(2012, 12, 21, 13, 31, 0, tzinfo=utc_tz)
+    date_str = date_obj.astimezone(utc_tz).strftime("%Y-%m-%dT%H:%M:%S.%f%z")
+
+    # Get the results generator
+    mydata_generator = MyDataModel.query(
+        hash_key_value=42,
+        range_key_condition=GT(date_str)
+    )
+
+    # Do some work
+    for data in mydata_generator:
+        print "data({}, {})".format(data.h_key, data.r_key)
+
+Use case: Query the most up to date revision of a blogpost
+----------------------------------------------------------
+
+There is no builtin filter but this can easily be achieved using a conjunction
+of the ``limit`` and ``reverse`` parameters. As ``query`` returns a generator, the
+``limit`` parameter could seem to be of no use. However, internally DynamoDB sends
+results in batches of 1MB and you pay for all the results, so... you'd better use it.
+
+::
+
+    from dynamodb_mapper.model import DynamoDBModel, utc_tz
+
+    # Example model
+    class MyBlogPosts(DynamoDBModel):
+        __table__ = u"..."
+        __hash_key__ = u"post_id"
+        __range_key__ = u"revision"
+        __schema__ = {
+            u"post_id": int,
+            u"revision": int,
+            u"title": unicode,
+            u"tags": set,
+            u"content": unicode,
+            # [...]
+        }
+
+    # Get the results generator
+    mypost_last_revision_generator = MyBlogPosts.query(
+        hash_key_value=42,
+        limit=1,
+        reverse=True
+    )
+
+    # Get the actual blog post to render
+    try:
+        mypost = mypost_last_revision_generator.next()
+    except StopIteration:
+        mypost = None # Not Found
+
+This example could easily be adapted to get the first revision or the ``n`` most
+recent revisions. You may also combine it with a condition to get pagination-like behavior.
+
+
+.. TODO: use case with range_key prefixing

docs/api/transaction.rst

+.. _transactions:
+
+############
+Transactions
+############
+
+.. currentmodule:: dynamodb_mapper.transactions
+
+The :ref:`save use case <save-use-case>` demonstrates the use of the
+``raise_on_conflict`` argument. What it does is actually implement a transaction
+by hand. Amazon's DynamoDB has no "out of the box" transaction engine but
+provides this parameter as an elementary building block for this purpose.
+
+Transaction concepts
+====================
+
+Transactions are a convenient way to logically group database operations while
+trying as much as possible to enforce consistency. In Dynamodb-mapper,
+transactions *are* plain ``DynamoDBModel`` subclasses, thus allowing them to persist
+their state. Dynamodb-mapper provides 2 grouping levels: targets and sub-transactions.
+
+Transactions operate on a list of 'targets'. For each target, they need a list of
+``transactors``. ``transactors`` are tuples of ``(getter, setter)``. The getter
+is responsible for either getting a fresh copy of the target or creating it, while
+the setter performs the modifications. The call to ``save`` is handled by the engine itself.
+
+For each target, the transaction engine will successively call ``getter`` and
+``setter`` until ``save()`` succeeds. ``save()`` will succeed if and only if
+the target has not been altered by another thread in the meantime, thus avoiding
+the lost update syndrome.
+
+Optionally, transactions may define a method :py:meth:`~.Transaction._setup`
+which will be called before any transactor.
+
+Sub-transactions, if applicable, are run after the main transactors if they all
+succeeded. Hence, :py:meth:`~.Transaction._setup` and the ``transactors`` may
+dynamically append sub-transactions to the main transactions.
+
+Unless the transaction is explicitly marked ``transient``, its state will be
+persisted to a dedicated table. ``Transaction`` base class embeds a minimal
+schema that should suit most applications but may be overloaded as long as a
+``datetime`` ``range_key`` is preserved along with a ``unicode`` ``status``
+field.
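+
+As a hedged sketch, an overloaded transaction schema could look like the
+following (the ``comment`` field is an illustrative assumption, and the
+hash/range key names are assumed to match the base layout):
+
+::
+
+    import datetime
+
+    from dynamodb_mapper.transactions import Transaction
+
+    class CommentedTransaction(Transaction):
+        __table__ = u"mygame-dev-commentedtransactions"
+        __hash_key__ = u"requester_id"
+        __range_key__ = u"datetime"
+        __schema__ = {
+            u"requester_id": int,
+            u"datetime": datetime.datetime, # range key: must be preserved
+            u"status": unicode,             # must be preserved
+            u"comment": unicode,            # application-specific addition
+        }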
+
+Since version 1.7.0, transactions may operate on new (not yet persisted) Items.
+
+Using the transaction engine
+============================
+
+To use the transaction engine, all you have to do is to define ``__table__``
+and overload ``_get_transactors()``. Of course, the transactors themselves will
+need to be implemented. Optionally, you may overload the whole schema or set
+``transient=True``. A ``_setup()`` method may also be implemented.
+
+During the transaction itself, please set the ``requester_id`` field to any
+relevant integer unless the transaction is ``transient``. ``_setup()`` is a
+good place to do it.
+
+Note: the ``transient`` flag may be toggled on a per-instance basis. It may
+even be toggled in one of the transactors.
+
+Use case: Bundle purchase
+-------------------------
+
+::
+
+    from dynamodb_mapper.transactions import Transaction, TargetNotFoundError
+
+    # define PlayerExperience, PlayerPowerUp, PlayerSkins, Players with user_id as hash_key
+
+    class InsufficientResourceError(Exception):
+        pass
+
+    bundle = {
+        u"cost": 150,
+        u"items": [
+            PlayerExperience,
+            PlayerPowerUp,
+            PlayerSkins
+        ]
+    }
+
+    class BundleTransaction(Transaction):
+        transient = False # Be explicit; this is the default anyway.
+        __table__ = u"mygame-dev-bundletransactions"
+
+        def __init__(self, user_id, bundle):
+            super(BundleTransaction, self).__init__()
+            self.requester_id = user_id
+            self.bundle = bundle
+
+        # _setup() is not needed here
+
+        def _get_transactors(self):
+            transactors = [(
+                lambda: Players.get(self.requester_id), # lambda
+                self.user_payment # regular callback
+            )]
+
+            for Item in self.bundle[u"items"]:
+                transactors.append((
+                    # bind Item as a default argument: a bare closure would
+                    # always reference the last Item of the loop
+                    lambda Item=Item: Item.get(self.requester_id),
+                    lambda item: item.do_stuff()
+                ))
+
+            return transactors
+
+        def user_payment(self, player):
+            if player.balance < self.bundle[u"cost"]:
+                raise InsufficientResourceError()
+            player.balance -= self.bundle[u"cost"]
+
+    # Run the transaction
+    try:
+        transaction = BundleTransaction(42, bundle)
+        transaction.commit()
+    except InsufficientResourceError:
+        print "Ooops, user {} has not enough coins to proceed...".format(42)
+
+    # That's it!
+
+This example has been kept simple on purpose. In a real-world application, you
+certainly would *not* model your data this way! Notice the power of this
+approach: it works with ``lambda`` niceties as well as with regular callbacks.
+
+Use case: PowerUp purchase
+--------------------------
+
+This example is a bit more subtle than the previous one. The customer may
+purchase a '*surprise*' bundle of powerups. The database knows what is in the
+pack while the client application does not. As bundles may change from time to
+time, we want to log exactly what was purchased. Also, the actual ``PowerUp``
+registration should not start until the ``Coins`` transaction has succeeded.
+
+To reach this goal, we could:
+
+- pre-load the ``Bundle``
+- dynamically use its content in ``_get_transactors()``
+- save the detailed status in a specially overloaded ``Transaction.__schema__``
+
+But that's a lot of hand work.
+
+A much better way is to split the transaction into ``PowerupTransaction`` and
+``UserPowerupTransaction``. The former handles the coins and the registration
+of the sub-transactions while the latter handles the PowerUp magic.
+
+::
+
+    from dynamodb_mapper.transactions import Transaction, TargetNotFoundError
+
+    # define PlayerPowerUp, Players with user_id as hash_key
+
+    class InsufficientResourceError(Exception):
+        pass
+
+    # Sub-transaction of PowerupTransaction. Will have its own status.
+    class UserPowerupTransaction(Transaction):
+        __table__ = u"mygame-dev-userpoweruptransactions"
+
+        def __init__(self, player, powerup):
+            super(UserPowerupTransaction, self).__init__()
+            self.requester_id = player.user_id
+            self.powerup = powerup
+
+        def _get_transactors(self):
+            return [(
+                lambda: PlayerPowerUp.get(self.requester_id, self.powerup),
+                lambda item: item.do_stuff() # setter: apply the powerup magic
+            )]
+
+    # Main Transaction class. Will have its own status.
+    class PowerupTransaction(Transaction):
+        __table__ = u"mygame-dev-poweruptransactions"
+
+        cost = 150 # hard-coded cost for the demo
+        powerups = ["..."] # hard-coded powerups for the demo
+
+        def _get_transactors(self):
+            return [(
+                lambda: Players.get(self.requester_id),
+                self.user_payment
+            )]
+
+        def user_payment(self, player):
+            # Payment logic
+            if player.balance < self.cost:
+                raise InsufficientResourceError()
+            player.balance -= self.cost
+
+            # Register (overwrite) sub-transactions
+            self.subtransactions = []
+            for powerupName in self.powerups:
+                self.subtransactions.append(UserPowerupTransaction(player, powerupName))
+
+
+    # Run the transaction
+    try:
+        transaction = PowerupTransaction(requester_id=42)
+        transaction.commit()
+    except InsufficientResourceError:
+        print "Ooops, user {} has not enough coins to proceed...".format(42)
+
+    # That's it!
+
+Note: in some special "real-world(tm)" situations, it may be necessary to
+modify the behavior of sub-transactions. It is possible to overload the method
+:py:meth:`.Transaction._apply_subtransactions` for this purpose. Use case:
+sub-transactions have been automatically/randomly generated by the main
+transaction and the application needs to know which ones were generated, or to
+perform some other application-specific tasks when applying them. A minimal
+sketch follows.
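+
+As a hedged sketch (the logging behavior is illustrative, and the exact
+signature of ``_apply_subtransactions`` should be checked against your
+version's API reference):
+
+::
+
+    import logging
+
+    log = logging.getLogger(__name__)
+
+    class AuditedPowerupTransaction(PowerupTransaction):
+        def _apply_subtransactions(self):
+            # Record which sub-transactions were generated by the main
+            # transaction before running them.
+            for subtransaction in self.subtransactions:
+                log.info("applying sub-transaction: %s", subtransaction)
+            # Delegate to the engine's default behavior.
+            super(AuditedPowerupTransaction, self)._apply_subtransactions()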
+
+Related exceptions
+==================
+
+MaxRetriesExceededError
+-----------------------
+
+.. autoclass:: dynamodb_mapper.model.MaxRetriesExceededError
+
+Note: ``MAX_RETRIES`` is currently hardcoded to ``100`` in the transactions module.
+
+TargetNotFoundError
+-------------------
+
+.. autoclass:: dynamodb_mapper.transactions.TargetNotFoundError

docs/conf.py

+# -*- coding: utf-8 -*-
+#
+# dynamodb-mapper documentation build configuration file, created by
+# sphinx-quickstart on Fri Aug  3 10:48:56 2012.
+#
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys, os
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+sys.path.insert(0, os.path.abspath('..'))
+
+# -- General configuration -----------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.coverage']
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'dynamodb-mapper'
+copyright = u'2012, Ludia Inc.'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = '1.8'
+# The full version, including alpha/beta/rc tags.
+release = '1.8.0.dev'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = ['_build', '_include']
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+
+# -- Options for HTML output ---------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+html_theme = 'default'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further.  For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+# The name for this set of Sphinx documents.  If None, it defaults to
+# "<project> v<release> documentation".
+#html_title = None
+
+# A shorter title for the navigation bar.  Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#html_logo = None
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_domain_indices = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it.  The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'dynamodb-mapperdoc'
+
+
+# -- Options for LaTeX output --------------------------------------------------
+
+latex_elements = {
+# The paper size ('letterpaper' or 'a4paper').
+#'papersize': 'letterpaper',
+
+# The font size ('10pt', '11pt' or '12pt').
+#'pointsize': '10pt',
+
+# Additional stuff for the LaTeX preamble.
+#'preamble': '',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, documentclass [howto/manual]).
+latex_documents = [
+  ('index', 'dynamodb-mapper.tex', u'dynamodb-mapper Documentation',
+   u'Max Noel', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_domain_indices = True
+
+
+# -- Options for manual page output --------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+    ('index', 'dynamodb-mapper', u'dynamodb-mapper Documentation',
+     [u'Max Noel'], 1)
+]
+
+# If true, show URL addresses after external links.
+#man_show_urls = False
+
+
+# -- Options for Texinfo output ------------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+#  dir menu entry, description, category)
+texinfo_documents = [
+  ('index', 'dynamodb-mapper', u'dynamodb-mapper Documentation',
+   u'Max Noel', 'dynamodb-mapper', 'One line description of project.',
+   'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#texinfo_appendices = []
+
+# If false, no module index is generated.
+#texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#texinfo_show_urls = 'footnote'
+
+
+# -- Options for Epub output ---------------------------------------------------
+
+# Bibliographic Dublin Core info.
+epub_title = u'dynamodb-mapper'
+epub_author = u'Max Noel'
+epub_publisher = u'Max Noel'
+epub_copyright = u'2012, Ludia Inc.'
+
+# The language of the text. It defaults to the language option
+# or en if the language is not set.
+#epub_language = ''
+
+# The scheme of the identifier. Typical schemes are ISBN or URL.
+#epub_scheme = ''
+
+# The unique identifier of the text. This can be a ISBN number
+# or the project homepage.
+#epub_identifier = ''
+
+# A unique identification for the text.
+#epub_uid = ''
+
+# A tuple containing the cover image and cover page html template filenames.
+#epub_cover = ()
+
+# HTML files that should be inserted before the pages created by sphinx.
+# The format is a list of tuples containing the path and title.
+#epub_pre_files = []
+
+# HTML files that should be inserted after the pages created by sphinx.
+# The format is a list of tuples containing the path and title.
+#epub_post_files = []
+
+# A list of files that should not be packed into the epub file.
+#epub_exclude_files = []
+
+# The depth of the table of contents in toc.ncx.
+#epub_tocdepth = 3
+
+# Allow duplicate toc entries.
+#epub_tocdup = True

docs/index.rst

+################################
+Dynamodb-mapper's documentation.
+################################
+
+Overview
+========
+
+.. include:: _include/intro.rst
+
+Documentation
+=============
+
+User guide
+----------
+
+.. toctree::
+   :maxdepth: 3
+
+   pages/overview
+   pages/getting_started
+
+   api/model
+   api/query
+   api/alter
+   api/transaction
+   api/migration
+
+   pages/changelog
+
+
+Api reference
+-------------
+
+.. toctree::
+   :maxdepth: 2
+   :glob:
+
+   raw_api/*
+
+Indices and tables
+------------------
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
+
+Contribute
+==========
+
+Want to contribute, report a bug or request a feature? The development goes on
+at Ludia's BitBucket account:
+
+- **Report bugs**: https://bitbucket.org/Ludia/dynamodb-mapper/issues
+- **Fork the code**: https://bitbucket.org/Ludia/dynamodb-mapper/overview

docs/make.bat

+@ECHO OFF
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set BUILDDIR=_build
+set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
+set I18NSPHINXOPTS=%SPHINXOPTS% .
+if NOT "%PAPER%" == "" (
+	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
+	set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
+)
+
+if "%1" == "" goto help
+
+if "%1" == "help" (
+	:help
+	echo.Please use `make ^<target^>` where ^<target^> is one of
+	echo.  html       to make standalone HTML files
+	echo.  dirhtml    to make HTML files named index.html in directories
+	echo.  singlehtml to make a single large HTML file
+	echo.  pickle     to make pickle files
+	echo.  json       to make JSON files
+	echo.  htmlhelp   to make HTML files and a HTML help project
+	echo.  qthelp     to make HTML files and a qthelp project
+	echo.  devhelp    to make HTML files and a Devhelp project
+	echo.  epub       to make an epub
+	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
+	echo.  text       to make text files
+	echo.  man        to make manual pages
+	echo.  texinfo    to make Texinfo files
+	echo.  gettext    to make PO message catalogs
+	echo.  changes    to make an overview over all changed/added/deprecated items
+	echo.  linkcheck  to check all external links for integrity
+	echo.  doctest    to run all doctests embedded in the documentation if enabled
+	goto end
+)
+
+if "%1" == "clean" (
+	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
+	del /q /s %BUILDDIR%\*
+	goto end
+)
+
+if "%1" == "html" (
+	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
+	goto end
+)
+
+if "%1" == "dirhtml" (
+	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
+	goto end
+)
+
+if "%1" == "singlehtml" (
+	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
+	goto end
+)
+
+if "%1" == "pickle" (
+	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
+	if errorlevel 1 exit /b 1
+	echo.
+	echo.Build finished; now you can process the pickle files.
+	goto end
+)