Commits

schugschug committed 9b4c0a4 Merge

merged with default

  • Parent commits 49661e0, e293ba4

Files changed (34)

 1784335f6a7c87c022eae7a938471b5cfc0e0e2e v0.3.1
 6df9c32239319b913860967f3fd8340fae82bcda v0.3.2
 50648927fc21fb092f2b7960d581b391f589ab92 v0.4.0
+7e6e50b0ab5ce8a1427a011a5197684148e5b709 v0.4.1
+279d4b8ab3f697ed3c79f80060a46c6a9344e0e8 v1.0.0
-=================
-ddbmock 0.4.1.dev
-=================
+=============
+ddbmock 1.0.0
+=============
+
+This section documents all user visible changes included between ddbmock
+versions 0.4.1 and 1.0.0.
+
+Additions
+---------
+
+ - Add documentation for ``Table`` internal API
+ - Add documentation for ``DynamoDB`` (database) internal API
+ - Add documentation for ``Key`` internal API
+ - Add documentation for ``Item`` and ``ItemSize`` internal API
+
+Changes
+-------
+
+  - Add a ``truncate`` method to the tables
+
+
+==========================
+ddbmock 0.4.1 aka 1.0.0 RC
+==========================
 
 This section documents all user visible changes included between ddbmock
 versions 0.4.0 and 0.4.1.
 
+This iteration was mostly focused on polishing and brings the last missing bits.
+
+Additions
+---------
+
+ - Add support for ``ExclusiveStartKey``, ``LastEvaluatedKey`` and ``Limit`` for ``Scan``
+
 Changes
 -------
 
  - Wrap all write operations in a table scope lock: each individual operation should be atomic
+ - Addressed thread safety issues
+ - Add option to disable status update timers (#8)
+ - Fix BETWEEN bug (#7)
+
 
 =============
 ddbmock 0.4.0
-include *.txt *.ini *.cfg *.rst
+include *.ini *.cfg *.rst
 recursive-include ddbmock *.py
 recursive-include docs *.py *.rst *.png Makefile make.bat
 recursive-include tests *.py
+prune docs/_build
 **DynamoDB** is really awesome but is terribly slooooow with management tasks.
 This makes it completely unusable in test environments.
 
-**ddbmock** brings a nice, tiny, in-memory (optionaly sqlite) implementation of
+**ddbmock** brings a nice, tiny, in-memory or sqlite implementation of
 DynamoDB along with much better and detailed error messages. Among its niceties,
 it features a double entry point:
 
 you've been warned! I currently recommend the "boto extension" mode for unit-tests
 and the "server" mode for functional tests.
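
For the "boto extension" mode, the wiring is a one-liner each way. A minimal
sketch (see the getting started guide for a complete test template):

::

    from ddbmock import connect_boto_patch, clean_boto_patch

    db = connect_boto_patch()  # boto now talks to ddbmock, no server needed
    # ... exercise db.layer1 in your tests ...
    clean_boto_patch()         # restore stock boto code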
 
-Yes, ddbmock *can persist your data* using the optional "sqlite" backend. Bt still,
-do *not* use it in production.
-
 Installation
 ============
 
     $ nosetests # --no-skip to run boto integration tests too
 
 
+What is ddbmock *not* useful for?
+==================================
+
+Do *not* use it in production or as a cheap DynamoDB replacement. I can
+never stress this enough.
+
+All the focus was on simplicity/hackability and simulation quality. Nothing else.
+
 What is ddbmock useful for?
 ============================
 
-- running unit test FAST. DONE
-- running functional test FAST. DONE
-- experiment with DynamoDB API. DONE
-- plan throughput usage. DONE
-- plan disk space requirements. DONE (describe table returns accurate size !)
-- perform simulations with accurate limitations.
+- FAST and RELIABLE unit testing
+- FAST and RELIABLE functional testing
+- experimenting with the DynamoDB API
+- RELIABLE throughput planning
+- RELIABLE disk space planning
+- almost any DynamoDB simulation!
+
+ddbmock can also persist your data in SQLite. This opens another vast range of
+possibilities :)
 
 Current status
 ==============
 - support full item life-cycle
 - support for all item limitations
 - accurate size, throughput reporting
-- ``Scan``, ``BatchGetItem`` and ``BatchWriteItem`` still lacks ``ExclusiveStartKey``
 - no limits on concurrent table operations
-- no limits for request/response size nor item count in those
+- no limits for request/response size nor item count in these
 
 See http://ddbmock.readthedocs.org/en/latest/pages/status.html for detailed
 up-to-date status.
 History
 =======
 
- - v0.4.1 (?): more analytics, schema persistence in sqlite, ...
+ - v1.0.0 (*): full documentation and bugfixes
+ - v0.4.1: schema persistence + thread safety, bugfixes
  - v0.4.0: sqlite backend + throughput statistics + refactoring, more documentation, more tests
  - v0.3.2: batchWriteItem support + pass boto integration tests
  - v0.3.1: accuracy in item/table sizes + full test coverage

File ddbmock/config.py

 
 ### slowness simulation ###
 
+# boolean: enable timers?
+ENABLE_DELAYS = True
 # seconds: simulate table creation delay. It will still be available immediately
 DELAY_CREATING = 1000
 # seconds: simulate table update delay. It will still be available immediately
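
Since every simulated delay now goes through this switch (see ``schedule_action``
in ``ddbmock/utils`` further down), a test suite can turn the timers off globally.
A minimal sketch, assuming ddbmock is importable:

::

    from ddbmock import config

    # tables now switch status synchronously: no threading.Timer involved
    config.ENABLE_DELAYS = False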

File ddbmock/database/db.py

 
 from .table import Table
 from .item import ItemSize
+from .storage import Store
 from collections import defaultdict
 from ddbmock import config
-from ddbmock.utils import push_write_throughput, push_read_throughput
+from ddbmock.utils import push_write_throughput, push_read_throughput, schedule_action
 from ddbmock.errors import (ResourceNotFoundException,
                             ResourceInUseException,
                             LimitExceededException,
 # All validations are performed on *incoming* data => already done :)
 
 class DynamoDB(object):
+    """
+    Main database. It behaves as a singleton in the sense that all instances share
+    the same data.
+
+    If the underlying store supports it, all table schemas are persisted at creation
+    time to a special table ``~*schema*~``, which is an invalid DynamoDB table name,
+    so no collisions are to be expected.
+    """
     _shared_data = {
-        'data': {}
+        'data': {},
+        'store': None,
     }
 
     def __init__(self):
+        """
+        When first instantiated, ``__init__`` checks the underlying store for
+        potentially persisted tables. If any are found, it reloads their schemas
+        to make them available to the application.
+
+        In all other cases, ``__init__`` simply loads the shared state.
+        """
         cls = type(self)
         self.__dict__ = cls._shared_data
 
+        # At first instantiation, attempt to reload the database schema
+        if self.store is None:
+            self.store = Store('~*schema*~')
+            for table in self.store:
+                self.data[table.name] = table
+
     def hard_reset(self):
+        """
+        Reset and drop all tables. If any data was persisted, it will be completely
+        lost after a call to this method. It is used in the ``tearDown`` of all
+        ddbmock tests to avoid any side effects.
+        """
         for table in self.data.values():
-            table.store.truncate() # FIXME: should be moved in table
+            table.truncate()
+
         self.data.clear()
+        self.store.truncate()
 
     def list_tables(self):
+        """
+        Get a list of all table names.
+        """
         return self.data.keys()
 
     def get_table(self, name):
+        """
+        Get a handle to :py:class:`ddbmock.database.table.Table` '``name``' if it exists.
+
+        :param name: Name of the table to load.
+
+        :return: :py:class:`ddbmock.database.table.Table` with name '``name``'
+
+        :raises: :py:exc:`ddbmock.errors.ResourceNotFoundException` if the table does not exist.
+        """
         if name in self.data:
             return self.data[name]
         raise ResourceNotFoundException("Table {} does not exist".format(name))
 
     def create_table(self, name, data):
+        """
+        Create a :py:class:`ddbmock.database.table.Table` named '``name``' using
+        parameters provided in ``data`` if it does not yet exist.
+
+        :param name: Valid table name. No further checks are performed.
+        :param data: raw DynamoDB request data.
+
+        :return: A reference to the newly created :py:class:`ddbmock.database.table.Table`
+
+        :raises: :py:exc:`ddbmock.errors.ResourceInUseException` if the table already exists.
+        :raises: :py:exc:`ddbmock.errors.LimitExceededException` if more than :py:const:`ddbmock.config.MAX_TABLES` already exist.
+        """
         if name in self.data:
             raise ResourceInUseException("Table {} already exists".format(name))
         if len(self.data) >= config.MAX_TABLES:
-            raise LimitExceededException("Table limit reached. You can have more than {} tables simultaneously".format(config.MAX_TABLES))
+            raise LimitExceededException("Table limit reached. You can not have more than {} tables simultaneously".format(config.MAX_TABLES))
 
         self.data[name] = Table.from_dict(data)
+        self.store[name, False] = self.data[name]
         return self.data[name]
 
-    def _internal_delete_table(self, name):
-        """This is ran only after the timer is exhausted"""
-        if name in self.data:
+    def _internal_delete_table(self, table):
+        """This is run only after the timer is exhausted.
+        Important note: this function is idempotent. If another table with the
+        same name has been created in the meantime, it won't be overwritten.
+        This is the kind of special case you might encounter in a testing
+        environment"""
+        name = table.name
+        if name in self.data and self.data[name] is table:
             self.data[name].store.truncate()  # FIXME: should be moved in table
             del self.data[name]
+            del self.store[name, False]
 
     def delete_table(self, name):
+        """
+        Triggers internal "realistic" table deletion. This implies changing
+        the status to ``DELETING``. Once :py:const:`ddbmock.config.DELAY_DELETING`
+        has expired, :py:meth:`_internal_delete_table` is called and the table
+        is de-referenced from :py:attr:`data`.
+
+        Since :py:attr:`data` only holds a reference, the table object might still
+        exist at that moment and possibly still handle pending requests. This also
+        makes it safe to return a handle to the table object.
+
+        :param name: Valid table name.
+
+        :return: A reference to :py:class:`ddbmock.database.table.Table` named ``name``
+        """
         if name not in self.data:
             raise ResourceNotFoundException("Table {} does not exist".format(name))
-        self.data[name].delete(callback=self._internal_delete_table)
 
-        return self.data[name]
+        table = self.data[name]
+        table.delete()
+        schedule_action(config.DELAY_DELETING, self._internal_delete_table, [table])
+
+        return table
 
     def get_batch(self, batch):
+        """
+        Batch processor. Dispatches call to appropriate :py:class:`ddbmock.database.table.Table`
+        methods. This low-level API directly pushes throughput usage.
+
+        :param batch: raw DynamoDB request batch.
+
+        :returns: dict compatible with DynamoDB API
+
+        :raises: :py:exc:`ddbmock.errors.ValidationException` if a ``range_key`` was provided while table has none.
+        :raises: :py:exc:`ddbmock.errors.ResourceNotFoundException` if a table does not exist.
+        """
         ret = defaultdict(dict)
 
         for tablename, batch in batch.iteritems():
         return ret
 
     def write_batch(self, batch):
+        """
+        Batch processor. Dispatches call to appropriate :py:class:`ddbmock.database.table.Table`
+        methods. This low-level API directly pushes throughput usage.
+
+        :param batch: raw DynamoDB request batch.
+
+        :returns: dict compatible with DynamoDB API
+
+        :raises: :py:exc:`ddbmock.errors.ValidationException` if a ``range_key`` was provided while table has none.
+        :raises: :py:exc:`ddbmock.errors.ResourceNotFoundException` if a table does not exist.
+        """
         ret = defaultdict(dict)
 
         for tablename, operations in batch.iteritems():
 
         return ret
 
-# reference instance
 dynamodb = DynamoDB()
+"""Reference :py:class:`DynamoDB` instance. You should use it directly."""
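
A quick illustration of the shared-state behavior described above, as a sketch:

::

    from ddbmock.database.db import DynamoDB, dynamodb

    # every instance shares the same data: DynamoDB() is a Borg-style singleton
    assert DynamoDB().data is dynamodb.data

    # full wipe, e.g. in a test tearDown; persisted schemas are dropped too
    dynamodb.hard_reset()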

File ddbmock/database/item.py

 
 
 def _decode_field(field):
+    """
+    Read a field's type and value
+
+    :param field: Raw DynamoDB request field of the form ``{'typename':'value'}``
+
+    :return: (typename, value) string tuple
+    """
     return field.items()[0]
 
 class ItemSize(int):
+    """
+    Utility class to represent an :py:class:`Item` size as bytes or capacity units
+    """
     def __add__(self, value):
+        """
+        Transparently allow addition of ``ItemSize`` values. This is useful for
+        all batch requests such as ``Scan``, ``Query``, ``BatchWriteItem`` and
+        ``BatchGetItem``.
+
+        :param value: foreign int compatible value to add
+
+        :return: new :py:class:`ItemSize` value
+
+        :raises: ``TypeError`` if ``value`` is not int compatible
+        """
         return ItemSize(int.__add__(self, value))
 
     def as_units(self):
-        """Get item size in terms of capacity units. This does *not* include the
-        index overhead. Units can *not* be bellow 1 ie: a "delete" on a non
+        """
+        Get item size in terms of capacity units. This does *not* include the
+        index overhead. Units can *not* be below 1, i.e. a ``DeleteItem`` on a non
         existing item is *not* free
+
+        :return: number of capacity units consumed by any operation on this ``ItemSize``
         """
         return max(1, int(ceil((self) / 1024.0)))
 
     def with_indexing_overhead(self):
-        """Take the indexing overhead into account. this is especially usefull
+        """
+        Take the indexing overhead into account. This is especially useful
         to compute the table disk size as DynamoDB would but it's not included
         in the capacity unit calculation.
+
+        :return: ``ItemSize`` + :py:const:`ddbmock.config.INDEX_OVERHEAD`
         """
         return self + config.INDEX_OVERHEAD
 
 
 class Item(dict):
+    """
+    Internal Item representation. The Item is stored in its raw DynamoDB request
+    form and no parsing is involved unless specifically needed.
+
+    It adds a couple of handy helpers to the dict class, such as DynamoDB actions,
+    condition validations and specific size computation.
+    """
     def __init__(self, dico={}):
+        """
+        Load a raw DynamoDB Item and enhance it with our helpers. Also set the
+        cached :py:class:`ItemSize` to ``None`` to mark it as not computed. This
+        avoids unnecessary computations on temporary Items.
+
+        :param dico: Raw DynamoDB request Item
+        """
         self.update(dico)
         self.size = None
 
         ``fields`` evaluates to False (None, empty, ...), the original dict is
         returned untouched.
 
-        :ivar fields: array of name of keys to keep
-        :return: filtered ``item``
+        Internal :py:class:`ItemSize` of the filtered Item is set to the original
+        Item size as you pay for the data you operated on, not for what was
+        actually sent over the wire.
+
+        :param fields: array of name of keys to keep
+
+        :return: filtered ``Item``
         """
         if fields:
             filtered = Item((k, v) for k, v in self.items() if k in fields)
         return self
 
     def _apply_action(self, fieldname, action):
+        """
+        Internal function. Applies a single action to a single field.
+
+        :param fieldname: Valid field name
+        :param action: Raw DynamoDB request action specification
+
+        :raises: :py:exc:`ddbmock.errors.ValidationException` whenever attempting an illegal action
+        """
         # Rewrite this function, it's disgusting code
         if action[u'Action'] == u"PUT":
             self[fieldname] = action[u'Value']
                 self[fieldname] = action[u'Value']
 
     def apply_actions(self, actions):
+        """
+        Apply ``actions`` to the current item. Mostly used by ``UpdateItem``.
+        This also resets the cached item size.
+
+        .. warning:: There is a corner case in the ``ADD`` action. It will always
+            behave as though the item already existed, that is to say, if the target
+            field is a non-existing set, it will always start a new one with this
+            single value in it. In real DynamoDB, if the Item was new, it would fail.
+
+        :param actions: Raw DynamoDB request actions specification
+
+        :raises: :py:exc:`ddbmock.errors.ValidationException` whenever attempting an illegal action
+        """
         map(self._apply_action, actions.keys(), actions.values())
         self.size = None  # reset cache
 
     def assert_match_expected(self, expected):
         """
-        Raise ConditionalCheckFailedException if ``self`` does not match ``expected``
-        values. ``expected`` schema is raw conditions as defined by DynamoDb.
+        Make sure this Item matches the ``expected`` values. This may be used
+        by any single item write operation such as ``DeleteItem``, ``UpdateItem``
+        and ``PutItem``.
 
-        :ivar expected: conditions to validate
-        :raises: ConditionalCheckFailedException
+        :param expected: Raw DynamoDB request expected values
+
+        :raises: :py:exc:`ddbmock.errors.ConditionalCheckFailedException` if any of the expected values is not valid
         """
         for fieldname, condition in expected.iteritems():
             if u'Exists' in condition and not condition[u'Exists']:
                     fieldname, condition[u'Value'], self[fieldname]))
 
     def match(self, conditions):
+        """
+        Check if the current item matches ``conditions``. Return False if a field
+        is not found or does not match. If a condition is None, it is considered
+        to match.
+
+        Condition names are assumed to be valid as Onctuous is in charge of input
+        validation. Expect crashes otherwise :)
+
+        :param conditions: Raw DynamoDB request conditions dict of the form ``{"fieldname": {"OPERATOR": FIELDDEFINITION}}``
+
+        :return: ``True`` on success or ``False`` on first failure
+        """
         for name, condition in conditions.iteritems():
             if not self.field_match(name, condition):
                 return False
 
         return True
 
-    def field_match(self, name, condition):
-        """Check if a field matches a condition. Return False when field not
+    def field_match(self, fieldname, condition):
+        """
+        Check if a field matches a condition. Return False when the field is not
         found or does not match. If the condition is None, it is considered to match.
 
-        :ivar name: name of the field to test
-        :ivar condition: raw dict describing the condition {"OPERATOR": FIELDDEFINITION}
-        :return: True on success
+        Condition names are assumed to be valid as Onctuous is in charge of input
+        validation. Expect crashes otherwise :)
+
+        :param fieldname: Valid field name
+        :param condition: Raw DynamoDB request condition of the form ``{"OPERATOR": FIELDDEFINITION}``
+
+        :return: ``True`` on success
         """
         # According to the spec, no condition means match
         if condition is None:
             return True
 
         # read the item
-        if name not in self:
+        if fieldname not in self:
             value = None
         else:
-            value = self[name]
+            value = self[fieldname]
 
-        # Load the test operator from the comparison module. Thamks to input
+        # Load the test operator from the comparison module. Thanks to input
         # validation, no try/except required
         condition_operator = condition[u'ComparisonOperator'].lower()
         operator = getattr(comparison, condition_operator)
         return operator(value, *condition[u'AttributeValueList'])
 
     def read_key(self, key, name=None, max_size=0):
-        """Provided ``key``, read field value at ``name`` or ``key.name`` if not
-        specified. If the field does not exist, this is a "ValueError". In case
-        it exists, also check the type compatibility. If it does not match, raise
-        TypeError.
+        """
+        Provided ``key``, read field value at ``name`` or ``key.name`` if not
+        specified.
 
-        :ivar key: ``Key`` or ``PrimaryKey`` to read
-        :ivar name: override name field of key
-        :ivar max_size: if specified, check that the item is bellow a treshold
-        :return: field value
+        :param key: ``Key`` or ``PrimaryKey`` to read
+        :param name: override name field of key
+        :param max_size: if specified, check that the value is below a threshold
+
+        :return: field value at ``key``
+
+        :raises: :py:exc:`ddbmock.errors.ValidationException` if the field does not exist, its type does not match or its size is above ``max_size``
         """
         if key is None:
             return False
         return key.read(field)
 
     def _internal_item_size(self, base_type, value):
+        """
+        Internal DynamoDB field size computation. ``base_type`` is assumed to
+        be valid as it went through Onctuous before and this helper is only
+        supposed to be called internally.
+
+        :param base_type: valid base type. Must be in ``['N', 'S', 'B']``.
+        :param value: compute the size of this value.
+        """
         if base_type == 'N': return 8 # assumes "double" internal type on ddb side
         if base_type == 'S': return len(value.encode('utf-8'))
-        if base_type == 'B': return len(value.encode('utf-8'))*3/4 # base64 overead
+        if base_type == 'B': return len(value.encode('utf-8'))*3/4 # base64 overhead
 
-    def get_field_size(self, key):
-        """Return value size in bytes or 0 if not found"""
-        if not key in self:
+    def get_field_size(self, fieldname):
+        """
+        Compute field size in bytes.
+
+        :param fieldname: Valid field name
+
+        :return: Size of the field in bytes or 0 if the field was not found. Remember that empty fields are represented as missing values in DynamoDB.
+        """
+        if fieldname not in self:
             return 0
 
-        typename, value = _decode_field(self[key])
+        typename, value = _decode_field(self[fieldname])
         base_type = typename[0]
 
         if len(typename) == 1:
         return value_size
 
     def get_size(self):
-        """Compute Item size as DynamoDB would. This is especially useful for
+        """
+        Compute Item size as DynamoDB would. This is especially useful for
         enforcing the 64kb per item limit as well as the capacityUnit cost.
 
-        note: the result is cached for efficiency. If you ever happend to directly
-        edit values for any reason, do not forget to invalidate it: ``self.size=None``
+        .. note:: the result is cached for efficiency. If you ever happen to
+            directly edit values for any reason, do not forget to invalidate the
+            cache: ``self.size=None``
 
-        :return: the computed size
+        :return: :py:class:`ItemSize` DynamoDB item size in bytes
         """
 
         # Check cache and compute
         return ItemSize(self.size)
 
     def __sub__(self, other):
+        """
+        Utility function to compute a 'diff' of 2 Items. All fields of ``self``
+        (left operand) identical to those of ``other`` (right operand) are discarded.
+        The other fields from ``self`` are kept. This proves to be extremely
+        useful to support ``ALL_NEW`` and ``ALL_OLD`` return specification of
+        ``UpdateItem`` in a clean and readable manner.
+
+        :param other: ``Item`` to be used as filter
+
+        :return: dict with fields of ``self`` not in or different from ``other``
+        """
         # Thanks mnoel :)
         return {k:v for k,v in self.iteritems() if k not in other or v != other[k]}
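
To make the ``__sub__`` helper above concrete, a small sketch with hypothetical
field names:

::

    from ddbmock.database.item import Item

    old = Item({u'name': {u'S': u'a'}, u'age': {u'N': u'10'}})
    new = Item({u'name': {u'S': u'a'}, u'age': {u'N': u'11'}})

    # keep only the fields of ``new`` that are missing from or differ in ``old``
    assert new - old == {u'age': {u'N': u'11'}}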

File ddbmock/database/key.py

 from ddbmock.errors import ValidationException
 
 class Key(object):
+    """
+    Abstraction layer over DynamoDB Keys in :py:class:`ddbmock.database.item.Item`
+    """
     def __init__(self, name, typename):
+        """
+        High level Python constructor
+
+        :param name: Valid key name. No further checks are performed.
+        :param typename: Valid key typename. No further checks are performed.
+        """
         self.name = name
         self.typename = typename
 
     def read(self, key):
-        """Parse a key as specified by DynamoDB API and return its value as long as
-            its typename matches self.typename
+        """
+        Parse a key as specified by DynamoDB API and return its value as long as
+        its typename matches :py:attr:`typename`
+
+        :param key: Raw DynamoDB request key.
+
+        :return: the value of the key
+
+        :raises: :py:exc:`ddbmock.errors.ValidationException` if the field type does not match
+
         """
         typename, value = key.items()[0]
         if self.typename != typename:
         return value
 
     def to_dict(self):
-        """Return the a dict form of the key, suitable for DynamoDb API
+        """
+        Return the key as a Python dict.
+
+        :return: Serialized version of the key definition metadata compatible with DynamoDB API syntax.
         """
         return {
             "AttributeName": self.name,
 
     @classmethod
     def from_dict(cls, data):
+        """
+        Alternate constructor which deciphers raw DynamoDB request data before
+        ultimately calling the regular ``__init__`` method.
+
+        See :py:meth:`__init__` for more insight.
+
+        :param data: raw DynamoDB request data.
+
+        :return: fully initialized :py:class:`Key` instance
+        """
         return cls(data[u'AttributeName'], data[u'AttributeType'])
 
 class PrimaryKey(Key):
+    """Special marker to distinguish between regular Keys and PrimaryKeys"""
     pass
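
A short sketch of the read/typecheck contract described above, with a
hypothetical key name and values:

::

    from ddbmock.database.key import Key
    from ddbmock.errors import ValidationException

    key = Key(u'user_id', u'N')
    assert key.read({u'N': u'42'}) == u'42'

    try:
        key.read({u'S': u'42'})  # typename mismatch
    except ValidationException:
        pass  # expected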

File ddbmock/database/storage/memory.py

 
 class Store(object):
     def __init__(self, name):
-        """ Initialize the in-memory store
+        """
+        Initialize the in-memory store
+
         :param name: Table name.
         """
         self.name = name
         self.data = defaultdict(dict)
 
     def __getitem__(self, (hash_key, range_key)):
-        """Get item at (``hash_key``, ``range_key``) or the dict at ``hash_key`` if
+        """
+        Get item at (``hash_key``, ``range_key``) or the dict at ``hash_key`` if
         ``range_key``  is None.
 
         :param key: (``hash_key``, ``range_key``) Tuple. If ``range_key`` is None, all keys under ``hash_key`` are returned
         return self.data[hash_key][range_key]
 
     def __setitem__(self, (hash_key, range_key), item):
-        """Set the item at (``hash_key``, ``range_key``). Both keys must be
+        """
+        Set the item at (``hash_key``, ``range_key``). Both keys must be
         defined and valid. By convention, ``range_key`` may be ``False`` to
         indicate a ``hash_key`` only key.
 
         self.data[hash_key][range_key] = item
 
     def __delitem__(self, (hash_key, range_key)):
-        """Delete item at key (``hash_key``, ``range_key``)
+        """
+        Delete item at key (``hash_key``, ``range_key``)
 
         :raises: KeyError if not found
         """
         del self.data[hash_key][range_key]
 
     def __iter__(self):
-        """ Iterate all over the table, abstracting the ``hash_key`` and
+        """
+        Iterate over the whole table, abstracting the ``hash_key`` and
         ``range_key`` complexity. Mostly used for ``Scan`` implementation.
         """
 
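The tuple-key protocol of the store, summarized as a sketch with a hypothetical
table and keys, assuming the in-memory backend is imported directly:

::

    from ddbmock.database.storage.memory import Store

    store = Store(u'Table-HR')

    store[u'h1', u'r1'] = {u'relevant_data': {u'S': u'foo'}}
    store[u'h1', u'r1']   # -> the single item
    store[u'h1', None]    # -> dict of all items under u'h1'
    # by convention, a hash_key only table would use False as range_key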

File ddbmock/database/storage/sqlite.py

 
 class Store(object):
     def __init__(self, name):
-        """ Initialize the sqlite store
+        """
+        Initialize the sqlite store
 
         By contract, we know the table name will only contain alphanum chars,
         '_', '.' or '-' so that this is ~ safe
         self.name = name
 
     def truncate(self):
-        """Perform a full table cleanup. Might be a good idea in tests :)"""
+        """
+        Perform a full table cleanup. Might be a good idea in tests :)
+        """
         conn.execute('DELETE FROM `{}`'.format(self.name))
         conn.commit()
 
         return ret
 
     def __getitem__(self, (hash_key, range_key)):
-        """Get item at (``hash_key``, ``range_key``) or the dict at ``hash_key`` if
+        """
+        Get item at (``hash_key``, ``range_key``) or the dict at ``hash_key`` if
         ``range_key``  is None.
 
         :param key: (``hash_key``, ``range_key``) Tuple. If ``range_key`` is None, all keys under ``hash_key`` are returned
         return self._get_by_hash_range(hash_key, range_key)
 
     def __setitem__(self, (hash_key, range_key), item):
-        """Set the item at (``hash_key``, ``range_key``). Both keys must be
+        """
+        Set the item at (``hash_key``, ``range_key``). Both keys must be
         defined and valid. By convention, ``range_key`` may be ``False`` to
         indicate a ``hash_key`` only key.
 
         conn.commit()
 
     def __delitem__(self, (hash_key, range_key)):
-        """Delete item at key (``hash_key``, ``range_key``)
+        """
+        Delete item at key (``hash_key``, ``range_key``)
 
         :raises: KeyError if not found
         """
                           .format(self.name), (hash_key, range_key))
 
     def __iter__(self):
-        """ Iterate all over the table, abstracting the ``hash_key`` and
+        """
+        Iterate over the whole table, abstracting the ``hash_key`` and
         ``range_key`` complexity. Mostly used for ``Scan`` implementation.
         """
         items = conn.execute('SELECT `data` FROM `{}`'.format(self.name))

File ddbmock/database/table.py

 from .item import Item, ItemSize
 from .storage import Store
 from collections import defaultdict, namedtuple
-from threading import Timer, Lock
+from threading import Lock
 from ddbmock import config
 from ddbmock.errors import ValidationException, LimitExceededException, ResourceInUseException
+from ddbmock.utils import schedule_action
 import time, copy, datetime
 
 # All validations are performed on *incoming* data => already done :)
 Results = namedtuple('Results', ['items', 'size', 'last_key', 'scanned'])
 
 class Table(object):
+    """
+    Table abstraction. Actual :py:class:`ddbmock.database.item.Item` objects are stored
+    in :py:attr:`store`.
+    """
     def __init__(self, name, rt, wt, hash_key, range_key, status='CREATING'):
+        """
+        Create a new ``Table``. When manually creating a table, make sure you
+        register it in :py:class:`ddbmock.database.db.DynamoDB` with something
+        like ``dynamodb.data[name] = Table(name, "...")``.
+
+        Even though there are :py:const:`DELAY_CREATING` seconds before the status
+        is updated to ``ACTIVE``, the table is immediately available. This is a
+        slight difference with real DynamoDB to ease unit and functional tests.
+
+        :param name: Valid table name. No further checks are performed.
+        :param rt: Provisioned read throughput.
+        :param wt: Provisioned write throughput.
+        :param hash_key: :py:class:`ddbmock.database.key.Key` instance describing the ``hash_key``
+        :param range_key: :py:class:`ddbmock.database.key.Key` instance describing the ``range_key``, or ``None`` if the table has no ``range_key``
+        :param status: (optional) Valid initial table status. If the Table needs to be available immediately, use ``ACTIVE``; otherwise, leave the default value.
+
+        .. note:: ``rt`` and ``wt`` are only used by ``DescribeTable`` and ``UpdateTable``. No throttling is done, nor ever will be.
+        """
         self.name = name
         self.rt = rt
         self.wt = wt
         self.last_decrease_time = 0
         self.count = 0
 
-        Timer(config.DELAY_CREATING, self.activate).start()
+        schedule_action(config.DELAY_CREATING, self.activate)
 
-    def delete(self, callback):
+    def truncate(self):
         """
-        Delete is really done when the timeout is exhausted, so we need a callback
-        for this
-        In Python, this table is not actually destroyed until all references are
-        dead. So, we shut down the links with the databases but not the table itself
-        until all requests are done. This is the reason why the lock is not acquired
-        here. Indeed, this would probably dead-lock the server !
+        Remove all Items from this table. This is like a reset. Might be very
+        useful in unit and functional tests.
+        """
+        self.store.truncate()
 
-        :ivar callback: real delete function
+    def delete(self):
+        """
+        If the table was ``ACTIVE``, update its state to ``DELETING``. This is
+        not a destructor, only a state updater, and the Table instance will still
+        be valid afterward. In all other cases, raise :py:exc:`ddbmock.errors.ResourceInUseException`.
+
+        If you want to perform the full table delete cycle, please use
+        :py:meth:`ddbmock.database.db.DynamoDB.delete_table` instead.
+
+        :raises: :py:exc:`ddbmock.errors.ResourceInUseException` if the table was not in ``ACTIVE`` state
         """
         if self.status != "ACTIVE":
-            raise ResourceInUseException("Table {} is in {} state. Can not UPDATE.".format(self.name, self.status))
+            raise ResourceInUseException("Table {} is in {} state. Can not DELETE.".format(self.name, self.status))
 
         self.status = "DELETING"
 
-        Timer(config.DELAY_DELETING, callback, [self.name]).start()
-
     def activate(self):
+        """
+        Unconditionally set Table status to ``ACTIVE``. This method is automatically
+        called by the constructor once :py:const:`DELAY_CREATING` is over.
+        """
         self.status = "ACTIVE"
 
     def update_throughput(self, rt, wt):
+        """
+        Update table throughput. The same conditions and limitations as real
+        DynamoDB apply:
+
+        - No more than 1 decrease operation per UTC day.
+        - No more than doubling throughput at once.
+        - Table must be in ``ACTIVE`` state.
+
+        Table status is then set to ``UPDATING`` until :py:const:`DELAY_UPDATING`
+        delay is over. Like real DynamoDB, the Table can still be used during
+        this period.
+
+        :param rt: New read throughput
+        :param wt: New write throughput
+
+        :raises: :py:exc:`ddbmock.errors.ResourceInUseException` if table was not in ``ACTIVE`` state
+        :raises: :py:exc:`ddbmock.errors.LimitExceededException` if the other above conditions are not met.
+
+        """
         if self.status != "ACTIVE":
             raise ResourceInUseException("Table {} is in {} state. Can not UPDATE.".format(self.name, self.status))
 
         self.rt = rt
         self.wt = wt
 
-        Timer(config.DELAY_UPDATING, self.activate).start()
+        schedule_action(config.DELAY_UPDATING, self.activate)
 
     def delete_item(self, key, expected):
+        """
+        Delete item at ``key`` from the database provided that it matches ``expected``
+        values.
+
+        This operation is atomic and blocks all other pending write operations.
+
+        :param key: Raw DynamoDB request hash and range key dict.
+        :param expected: Raw DynamoDB request conditions.
+
+        :return: deepcopy of :py:class:`ddbmock.database.item.Item` as it was before deletion.
+
+        :raises: :py:exc:`ddbmock.errors.ConditionalCheckFailedException` if conditions are not met.
+        """
         key = Item(key)
         hash_key = key.read_key(self.hash_key, u'HashKeyElement')
         range_key = key.read_key(self.range_key, u'RangeKeyElement')
 
         with self.write_lock:
             try:
-                old = self.store[hash_key, range_key]
+                old = copy.deepcopy(self.store[hash_key, range_key])
             except KeyError:
                 return Item()
 
         return old
 
     def update_item(self, key, actions, expected):
+        """
+        Apply ``actions`` to item at ``key`` provided that it matches ``expected``.
+
+        This operation is atomic and blocks all other pending write operations.
+
+        :param key: Raw DynamoDB request hash and range key dict.
+        :param actions: Raw DynamoDB request actions.
+        :param expected: Raw DynamoDB request conditions.
+
+        :return: both deepcopies of :py:class:`ddbmock.database.item.Item` as it was (before, after) the update.
+
+        :raises: :py:exc:`ddbmock.errors.ConditionalCheckFailedException` if conditions are not met.
+        :raises: :py:exc:`ddbmock.errors.ValidationException` if ``actions`` attempted to modify the key or the resulting Item is bigger than :py:const:`config.MAX_ITEM_SIZE`
+        """
         key = Item(key)
         hash_key = key.read_key(self.hash_key, u'HashKeyElement', max_size=config.MAX_HK_SIZE)
         range_key = key.read_key(self.range_key, u'RangeKeyElement', max_size=config.MAX_RK_SIZE)
         return old, new
 
     def put(self, item, expected):
+        """
+        Save ``item`` in the database provided that ``expected`` matches. Even
+        though the DynamoDB ``PutItem`` operation only supports returning ``ALL_OLD``
+        or ``NONE``, this method returns both ``old`` and ``new`` values, as the
+        throughput, computed in the view, takes the maximum of both sizes into
+        account.
+
+        This operation is atomic and blocks all other pending write operations.
+
+        :param item: Raw DynamoDB request item.
+        :param expected: Raw DynamoDB request conditions.
+
+        :return: both deepcopies of :py:class:`ddbmock.database.item.Item` as it was (before, after) the update or empty item if not found.
+
+        :raises: :py:exc:`ddbmock.errors.ConditionalCheckFailedException` if conditions are not met.
+        """
         item = Item(item)
 
         if item.get_size() > config.MAX_ITEM_SIZE:
         return old, new
 
     def get(self, key, fields):
+        """
+        Get ``fields`` from :py:class:`ddbmock.database.item.Item` at ``key``.
+
+        :param key: Raw DynamoDB request key.
+        :param fields: Raw DynamoDB request array of field names to return. Empty to return all.
+
+        :return: reference to :py:class:`ddbmock.database.item.Item` at ``key`` or ``None`` when not found
+
+        :raises: :py:exc:`ddbmock.errors.ValidationException` if a ``range_key`` was provided while table has none.
+        """
         key = Item(key)
         hash_key = key.read_key(self.hash_key, u'HashKeyElement')
         range_key = key.read_key(self.range_key, u'RangeKeyElement')
             return None
 
     def query(self, hash_key, rk_condition, fields, start, reverse, limit):
-        """Scans all items at hash_key and return matches as well as last
-        evaluated key if more than 1MB was scanned.
+        """
+        Return ``fields`` of all items with provided ``hash_key`` whose ``range_key``
+        matches ``rk_condition``.
 
-        :ivar hash_key: Element describing the hash_key, no type checkeing performed
-        :ivar rk_condition: Condition which must be matched by the range_key. If None, all is returned.
-        :ivar fields: return only these fields is applicable
-        :ivar start: key structure. where to start iteration
-        :ivar reverse: wether to scan the collection backward
-        :ivar limit: max number of items to parse in this batch
+        :param hash_key: Raw DynamoDB request hash_key.
+        :param rk_condition: Raw DynamoDB request ``range_key`` condition.
+        :param fields: Raw DynamoDB request array of field names to return. Empty to return all.
+        :param start: Raw DynamoDB request key of the first item to scan. Empty array to indicate first item.
+        :param reverse: Set to ``True`` to parse the range keys backward.
+        :param limit: Maximum number of items to return in this batch. Set to 0 or less for no maximum.
+
         :return: Results(results, cumulated_size, last_key)
+
+        :raises: :py:exc:`ddbmock.errors.ValidationException` if ``start['HashKeyElement']`` is not ``hash_key``
         """
         #FIXME: naive implementation
-        #FIXME: what is an item disappears during the operation ?
+        #FIXME: what if an item disappears during the operation?
         #TODO:
         # - size limit
 
         return Results(results, size, lek, -1)
 
     def scan(self, scan_conditions, fields, start, limit):
-        """Scans a whole table, no matter the structure, and return matches as
-        well as the the last_evaluated key if applicable and the actually scanned
-        item count.
+        """
+        Return ``fields`` of all items matching ``scan_conditions``. No matter the
+        ``start`` key, ``scan`` always starts from the beginning, so it might
+        be quite slow.
 
-        :ivar scan_conditions: Dict of key:conditions to match items against. If None, all is returned.
-        :ivar fields: return only these fields is applicable
-        :ivar start: key structure. where to start iteration
-        :ivar limit: max number of items to parse in this batch
-        :return: results, cumulated_scanned_size, last_key
+        :param scan_conditions: Raw DynamoDB request conditions.
+        :param fields: Raw DynamoDB request array of field names to return. Empty to return all.
+        :param start: Raw DynamoDB request key of the first item to scan. Empty array to indicate first item.
+        :param limit: Maximum number of items to return in this batch. Set to 0 or less for no maximum.
+
+        :return: Results(results, cumulated_size, last_key, scanned_count)
         """
         #FIXME: naive implementation (too)
         #TODO:
-        # - reverse
         # - esk
-        # - limit
         # - size limit
-        # - last evaluated key
 
         size = ItemSize(0)
         scanned = 0
+        lek = {}
         results = []
+        skip = bool(start)  # an empty start key means: begin at the first item
+        hk_name = self.hash_key.name
+        rk_name = self.range_key.name if self.range_key else None
 
         for item in self.store:
-            size += item.get_size()
-            scanned += 1
+            # first, skip all items before start
+            if skip:
+                if start['HashKeyElement'] != item[hk_name]:
+                    continue
+                if rk_name and start['RangeKeyElement'] != item[rk_name]:
+                    continue
+                skip = False
+                continue
+
+            # match filters ?
             if item.match(scan_conditions):
                 results.append(item.filter(fields))
 
-        return Results(results, size, None, scanned)
+            # update stats
+            size += item.get_size()
+            scanned += 1
+
+            # quit ?
+            if scanned == limit:
+                lek[u'HashKeyElement'] = item[hk_name]
+                if rk_name:
+                    lek[u'RangeKeyElement'] = item[rk_name]
+                break
+
+        return Results(results, size, lek, scanned)
 
     @classmethod
     def from_dict(cls, data):
+        """
+        Alternate constructor which deciphers raw DynamoDB request data before
+        ultimately calling the regular ``__init__`` method.
+
+        See :py:meth:`__init__` for more insight.
+
+        :param data: raw DynamoDB request data.
+
+        :return: fully initialized :py:class:`Table` instance
+        """
         hash_key = PrimaryKey.from_dict(data[u'KeySchema'][u'HashKeyElement'])
         range_key = None
         if u'RangeKeyElement' in data[u'KeySchema']:
                   )
 
     def get_size(self):
+        """
+        Compute the whole table size using the same rules as the real DynamoDB.
+        Actual memory usage in ddbmock will be much higher due to dict and Python
+        overhead.
+
+        .. note:: Real DynamoDB updates this result every 6 hours or so while this is an "on demand" call.
+
+        :return: cumulated size of all items following DynamoDB size computation.
+        """
         # TODO: update size only every 6 hours
         size = 0
 
         return size
 
     def to_dict(self, verbose=True):
-        """Serialize table metadata for the describe table method. ItemCount and
-        TableSizeBytes are accurate but highly depends on CPython > 2.6. Do not
-        rely on it to project the actual size on a real DynamoDB implementation.
+        """
+        Serialize this table to a DynamoDB compatible format. Every field is
+        realistic, including ``TableSizeBytes``, which relies on
+        :py:meth:`get_size`.
+
+        Some DynamoDB requests only send a minimal version of Table metadata. To
+        reproduce this behavior, just set ``verbose`` to ``False``.
+
+        :param verbose: Set to ``False`` to skip table size computation.
+
+        :return: Serialized version of table metadata compatible with DynamoDB API syntax.
         """
 
         ret = {
             ret[u'KeySchema'][u'RangeKeyElement'] = self.range_key.to_dict()
 
         return ret
+
+    # these 2 functions help to pickle the table schema (if the store supports it)
+    # please note that only the schema is persisted, not the state
+    def __getstate__(self):
+        return (
+            self.name,
+            self.rt,
+            self.wt,
+            self.hash_key,
+            self.range_key,
+            'ACTIVE',
+        )
+
+    def __setstate__(self, state):
+        self.__init__(*state)  # quick and dirty (tm), but it does the job
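
With ``ExclusiveStartKey``/``LastEvaluatedKey`` now wired into ``scan``, paginating
over a table through the internal API boils down to a loop. A sketch, using a
hypothetical previously-created table:

::

    from ddbmock.database.db import dynamodb

    table = dynamodb.get_table(u'Table-HR')
    items = []
    start = None  # empty start key: begin at the first item

    while True:
        results = table.scan({}, [], start, 25)  # no conditions, all fields, pages of 25
        items.extend(results.items)
        if not results.last_key:
            break  # the whole table was scanned
        start = results.last_key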

File ddbmock/operations/scan.py

         "Count": len(results.items),
         "ScannedCount": results.scanned,
         "ConsumedCapacityUnits": capacity,
-        #TODO: last evaluated key where applicable
     }
 
+    if results.last_key:
+        ret['LastEvaluatedKey'] = results.last_key
+
     if not post[u'Count']:
         ret[u'Items'] = results.items
 

File ddbmock/utils/__init__.py

 
 from .stat import Stat
 from ddbmock import config
+from threading import Timer
 import logging
 
 tp_stat = {
                                                 config.STAT_TP_AGGREG,
                                                 tp_logger)
     tp_stat['write'][table_name].push(value)
+
+def schedule_action(delay, callback, args=[], kwargs={}):
+    """Unless delays are explicitly disabled, start ``callback`` once ``delay``
+    has expired. Otherwise, call it immediately.
+    """
+    if config.ENABLE_DELAYS:
+        Timer(delay, callback, args, kwargs).start()
+    else:
+        callback(*args, **kwargs)
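
A sketch of both code paths, with a hypothetical callback:

::

    from ddbmock import config
    from ddbmock.utils import schedule_action

    fired = []
    config.ENABLE_DELAYS = False
    schedule_action(1000, fired.append, [u'done'])
    assert fired == [u'done']  # ran synchronously since delays are disabled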

File ddbmock/validators/types.py

 field_value.update(set_field_value)
 
 single_str_num_bin_list = All(Length(min=1, max=1), [simple_field_value])
+double_str_num_bin_list = All(Length(min=2, max=2), [simple_field_value])
 single_str_bin_list = All(Length(min=1, max=1), [{
     Optional(u'S'): field_string_value,
     Optional(u'B'): field_binary_value,
     field_name: update_action_schema
 }
 
-# Conditions shared by query and scan
+# Conditions supported by Query
 range_key_condition = Any(
     {
-        u"ComparisonOperator": Any(u"EQ", u"GT", u"GE", u"LT", u"LE", u"BETWEEN"),
+        u"ComparisonOperator": Any(u"EQ", u"GT", u"GE", u"LT", u"LE"),
         u"AttributeValueList": single_str_num_bin_list,
     },{
+        u"ComparisonOperator": u"BETWEEN",
+        u"AttributeValueList": double_str_num_bin_list,
+    },{
         u"ComparisonOperator": u"BEGINS_WITH",
         u"AttributeValueList": single_str_bin_list,
     },
 )
 
-# Conditions only implemented in scan
+# Conditions supported by Scan
 scan_condition = Any(
     {
-        u"ComparisonOperator": Any(u"EQ", u"GT", u"GE", u"LT", u"LE", u"BETWEEN"),
+        u"ComparisonOperator": Any(u"EQ", u"GT", u"GE", u"LT", u"LE"),
         u"AttributeValueList": single_str_num_bin_list,
     },{
+        u"ComparisonOperator": u"BETWEEN",
+        u"AttributeValueList": double_str_num_bin_list,
+    },{
         u"ComparisonOperator": u"BEGINS_WITH",
         u"AttributeValueList": single_str_bin_list,
     },{
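
For instance, a ``BETWEEN`` condition now only validates with exactly two values,
which is the fix for bug #7. A hypothetical payload:

::

    condition = {
        u"ComparisonOperator": u"BETWEEN",
        u"AttributeValueList": [{u"N": u"10"}, {u"N": u"20"}],  # exactly 2 values
    }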

File docs/_include/intro.rst

 `DynamoDB <http://aws.amazon.com/dynamodb/>`_ is a minimalistic NoSQL engine
 provided by Amazon as a part of their AWS product.
 
-**DynamoDB** is great in production environement but sucks when testing your
-application. Tables needs roughtly 1 min to be created, deleted or updated.
-Items operation rates depends on how much you pay and tests will conflict if
-2 developers run them at the same time.
+**DynamoDB** allows you to store documents composed of unicode, number or binary
+data as well as sets. Each table must define a ``hash_key`` and may define a
+``range_key``. All other fields are optional.
 
-**ddbmock** brings a tiny in-memory(tm) implementation of DynamoDB API. It can
-either be run as a stand alone server or as a regular library helping you to
-build lightning fast unit and functional tests :)
+**DynamoDB** is really awesome but is terribly slooooow with management tasks.
+This makes it completely unusable in test environments.
+
+**ddbmock** brings a nice, tiny, in-memory or sqlite implementation of
+DynamoDB along with much better and detailed error messages. Among its niceties,
+it features a double entry point:
+
+ - regular network-based entry-point with 1:1 correspondence with stock DynamoDB
+ - **embedded entry-point** with seamless boto integration, ideal to avoid spinning up yet another server.
 
 **ddbmock** is **not** intended for production use. It **will lose** your data.
 you've been warned! I currently recommend the "boto extension" mode for unit-tests

File docs/api/database/db.rst

+##############
+DynamoDB class
+##############
+
+.. currentmodule:: ddbmock.database.db
+
+.. autoclass:: DynamoDB
+
+Constructors
+============
+
+__init__
+--------
+
+.. automethod:: DynamoDB.__init__
+
+Batch data manipulations
+========================
+
+get_batch
+---------
+
+.. automethod:: DynamoDB.get_batch
+
+write_batch
+-----------
+
+.. automethod:: DynamoDB.write_batch
+
+Database management
+===================
+
+list_tables
+-----------
+
+.. automethod:: DynamoDB.list_tables
+
+create_table
+------------
+
+.. automethod:: DynamoDB.create_table
+
+delete_table
+------------
+
+.. automethod:: DynamoDB.delete_table
+
+get_table
+---------
+
+.. automethod:: DynamoDB.get_table
+
+hard_reset
+----------
+
+.. automethod:: DynamoDB.hard_reset

File docs/api/database/item.rst

+##########
+Item class
+##########
+
+.. currentmodule:: ddbmock.database.item
+
+.. autoclass:: Item
+
+Constructors
+============
+
+__init__
+--------
+
+.. automethod:: Item.__init__
+
+Item manipulations
+==================
+
+filter
+------
+
+.. automethod:: Item.filter
+
+apply_actions
+-------------
+
+.. automethod:: Item.apply_actions
+
+assert_match_expected
+---------------------
+
+.. automethod:: Item.assert_match_expected
+
+match
+-----
+
+.. automethod:: Item.match
+
+field_match
+-----------
+
+.. automethod:: Item.field_match
+
+read_key
+--------
+
+.. automethod:: Item.read_key
+
+get_field_size
+--------------
+
+.. automethod:: Item.get_field_size
+
+get_size
+--------
+
+.. automethod:: Item.get_size
+
+__sub__
+-------
+
+.. automethod:: Item.__sub__
+
+
+

File docs/api/database/itemsize.rst

+##############
+ItemSize class
+##############
+
+.. currentmodule:: ddbmock.database.item
+
+.. autoclass:: ItemSize
+
+ItemSize manipulations
+======================
+
+__add__
+-------
+
+.. automethod:: ItemSize.__add__
+
+as_units
+--------
+
+.. automethod:: ItemSize.as_units
+
+with_indexing_overhead
+----------------------
+
+.. automethod:: ItemSize.with_indexing_overhead
+

File docs/api/database/key.rst

+##########
+Key class
+##########
+
+.. currentmodule:: ddbmock.database.key
+
+.. autoclass:: Key
+
+Constructors
+============
+
+__init__
+--------
+
+.. automethod:: Key.__init__
+
+from_dict
+---------
+
+.. automethod:: Key.from_dict
+
+Key manipulations
+=================
+
+read
+----
+
+.. automethod:: Key.read
+
+to_dict
+-------
+
+.. automethod:: Key.to_dict
+
+PrimaryKey
+==========
+
+.. autoclass:: PrimaryKey

File docs/api/database/table.rst

+###########
+Table class
+###########
+
+.. currentmodule:: ddbmock.database.table
+
+.. autoclass:: Table
+
+Constructors
+============
+
+__init__
+--------
+
+.. automethod:: Table.__init__
+
+from_dict
+---------
+
+.. automethod:: Table.from_dict
+
+Table manipulations
+===================
+
+truncate
+--------
+
+.. automethod:: Table.truncate
+
+delete
+------
+
+.. automethod:: Table.delete
+
+activate
+--------
+
+.. automethod:: Table.activate
+
+update_throughput
+-----------------
+
+.. automethod:: Table.update_throughput
+
+get_size
+--------
+
+.. automethod:: Table.get_size
+
+to_dict
+-------
+
+.. automethod:: Table.to_dict
+
+Items manipulations
+===================
+
+delete_item
+-----------
+
+.. automethod:: Table.delete_item
+
+update_item
+-----------
+
+.. automethod:: Table.update_item
+
+put
+---
+
+.. automethod:: Table.put
+
+get
+---
+
+.. automethod:: Table.get
+
+query
+-----
+
+.. automethod:: Table.query
+
+scan
+----
+
+.. automethod:: Table.scan

File docs/conf.py

 # built documents.
 #
 # The short X.Y version.
-version = '0.4'
+version = '1.0'
 # The full version, including alpha/beta/rc tags.
-release = '0.4.1.dev'
+release = '1.0.0'
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.

File docs/index.rst

 
 .. include:: _include/intro.rst
 
+What is ddbmock *not* useful for?
+----------------------------------
+
+Do *not* use it in production or as a cheap DynamoDB replacement. I can
+never stress this enough.
+
+All the focus was on simplicity/hackability and simulation quality. Nothing else.
+
+What is ddbmock useful for?
+----------------------------
+
+- FAST and RELIABLE unit testing
+- FAST and RELIABLE functional testing
+- experimenting with the DynamoDB API
+- RELIABLE throughput planning
+- RELIABLE disk space planning
+- almost any DynamoDB simulation!
+
+ddbmock can also persist your data in SQLite. This opens another vast range of
+possibilities :)
+
+History
+-------
+
+ - v1.0.0 (*): full documentation and bugfixes
+ - v0.4.1: schema persistence + thread safety, bugfixes
+ - v0.4.0: sqlite backend + throughput statistics + refactoring, more documentation, more tests
+ - v0.3.2: batchWriteItem support + pass boto integration tests
+ - v0.3.1: accuracy in item/table sizes + full test coverage
+ - v0.3.0: first public release. Full table lifecycle + most items operations
+
+(*) indicates the latest release.
+
 Documentation
 =============
 
 
    pages/changelog
 
+Database API
+------------
+
+Describes the internal database structures. Should be extremely useful for tests.
+
+.. toctree::
+   :maxdepth: 4
+   :glob:
+
+   api/database/*
+
+
 Indices and tables
 ------------------
 

File docs/pages/extending.rst

  - There is a "catch all" in the router that maps to DynamoDB internal server error
 
 
-Adding a method
-===============
+Adding a custom action
+======================
 
 As long as the method follows DynamoDB request structure, it is mostly a matter of
 adding a file to ``ddbmock/routes`` with the following conventions:
     # -*- coding: utf-8 -*-
     # module: ddbmock.routes.hello_world.py
 
-    from ddbmock.utils import load_table()
+    from ddbmock.utils import load_table
 
     @load_table
     def hello_world(post):
 
     def hello_world(post):
         return {
-            'Hello': 'World (and "{you}" too!)'.format(you=post['Name']
+            'Hello': 'World (and "{you}" too!)'.format(you=post['Name'])
         }
 
 Wanna test it?

File docs/pages/getting_started.rst

 Note, to clean patches made in ``boto.dynamodb.layer1``, you can call
 ``clean_boto_patch()`` from  the same module.
 
+Using ddbmock for tests
+=======================
+
+Most tests share the same structure:
+
+ 1. Set the things up
+ 2. Test and validate
+ 3. Clean everything up and start again
+
+If you use ``ddbmock`` as a standalone library (which I recommend for this
+purpose), feel free to access any of the public methods in the ``database`` and
+``table`` modules to perform direct checks.
+
+Here is a template taken from the ``GetItem`` functional test using Boto.
+
+::
+
+    # -*- coding: utf-8 -*-
+
+    import unittest
+    import boto
+
+    TABLE_NAME = 'Table-HR'
+    TABLE_RT = 45
+    TABLE_WT = 123
+    TABLE_HK_NAME = u'hash_key'
+    TABLE_HK_TYPE = u'N'
+    TABLE_RK_NAME = u'range_key'
+    TABLE_RK_TYPE = u'S'
+
+    HK_VALUE = u'123'
+    RK_VALUE = u'Decode this data if you are a coder'
+
+
+    ITEM = {
+        TABLE_HK_NAME: {TABLE_HK_TYPE: HK_VALUE},
+        TABLE_RK_NAME: {TABLE_RK_TYPE: RK_VALUE},
+        u'relevant_data': {u'B': u'THVkaWEgaXMgdGhlIGJlc3QgY29tcGFueSBldmVyIQ=='},
+    }
+
+    class TestGetItem(unittest.TestCase):
+        def setUp(self):
+            from ddbmock import connect_boto_patch
+            from ddbmock.database.db import dynamodb
+            from ddbmock.database.table import Table
+            from ddbmock.database.key import PrimaryKey
+
+            # Do a full database wipe
+            dynamodb.hard_reset()
+
+            # Instantiate the keys
+            hash_key = PrimaryKey(TABLE_HK_NAME, TABLE_HK_TYPE)
+            range_key = PrimaryKey(TABLE_RK_NAME, TABLE_RK_TYPE)
+
+            # Create a test table and register it in ``self`` so that you can use it directly
+            self.t1 = Table(TABLE_NAME, TABLE_RT, TABLE_WT, hash_key, range_key)
+
+            # Very important: register the table in the DB
+            dynamodb.data[TABLE_NAME] = self.t1
+
+            # Unconditionally add some test data
+            self.t1.put(ITEM, {})
+
+            # Create the database connection, i.e. patch Boto
+            self.db = connect_boto_patch()
+
+        def tearDown(self):
+            from ddbmock.database.db import dynamodb
+            from ddbmock import clean_boto_patch
+
+            # Do a full database wipe
+            dynamodb.hard_reset()
+
+            # Remove the patch from Boto code (if any)
+            clean_boto_patch()
+
+        def test_get_hr(self):
+            from ddbmock.database.db import dynamodb
+
+            # Example test
+            expected = {
+                u'ConsumedCapacityUnits': 0.5,
+                u'Item': ITEM,
+            }
+
+            key = {
+                u"HashKeyElement":  {TABLE_HK_TYPE: HK_VALUE},
+                u"RangeKeyElement": {TABLE_RK_TYPE: RK_VALUE},
+            }
+
+            # Example check
+            self.assertEqual(expected, self.db.layer1.get_item(TABLE_NAME, key))
+
+
+If ddbmock is used as a standalone server, restarting it should do the job,
+unless SQLite persistence is used. In that case, the database file must be
+removed as well.
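+
+A sketch of that extra cleanup step (``/tmp/ddbmock.db`` is an assumption; use
+whatever path you configured):
+
+::
+
+    import os
+
+    # Remove the SQLite database file between runs so that the next
+    # run starts from a clean slate
+    if os.path.exists('/tmp/ddbmock.db'):
+        os.remove('/tmp/ddbmock.db')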
+
 
 Advanced usage
 ==============

File docs/pages/status.rst

 - ``PutItem`` DONE
 - ``DeleteItem`` DONE
 - ``UpdateItem`` ALMOST
-- ``BatchGetItem`` DONE*
-- ``BatchWriteItem`` DONE*
+- ``BatchGetItem`` DONE
+- ``BatchWriteItem`` DONE
 - ``Query`` DONE
-- ``Scan`` DONE*
+- ``Scan`` DONE
 
-There basically are no support for ``ExclusiveStartKey``, and their associated
-features at all in ddbmock. This affects all "*" operations. ``Query`` is the
-only exception.
+All "Bulk" actions will handle the whole batch in a single pass, unless instructed
+to otherwise through ``limit`` parameter. Beware that real dynamoDB will most
+likely split bigger one. If you rely on high level libraries such as Boto, don't
+worry about this.
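+
+For example, paging through a ``Scan`` with Boto's ``layer1`` looks like this
+(a minimal sketch; ``TABLE_NAME`` is assumed to be an existing table):
+
+::
+
+    from ddbmock import connect_boto_patch
+
+    db = connect_boto_patch()
+
+    # First page: return at most 4 items; 'LastEvaluatedKey' is set
+    # as long as there are more items to scan
+    page1 = db.layer1.scan(TABLE_NAME, limit=4)
+
+    # Next page: resume right after the last evaluated key
+    if 'LastEvaluatedKey' in page1:
+        page2 = db.layer1.scan(
+            TABLE_NAME, exclusive_start_key=page1['LastEvaluatedKey'])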
 
 ``UpdateItem`` has a different behavior when the target item did not exist prior to
 the update operation. In particular, the ``ADD`` operator will always behave as
 - ``NOT_CONTAINS`` DONE
 - ``IN`` DONE
 
-``IN`` operator is the only that can not be imported directly as it overlaps
-builtin ``in`` keyword. If you need it, either import it with ``getattr`` on the
-module or as ``in_test`` which, anyway, is its internal name.
+.. note::
+
+    The ``IN`` operator is the only one that can not be imported directly, as it
+    overlaps with the builtin ``in`` keyword. If you need it, either import it
+    with ``getattr`` on the module or as ``in_test``, which is its internal name
+    anyway.
 
 Return value specifications
 ---------------------------
 - ``UPDATED_OLD`` DONE
 - ``UPDATED_NEW`` DONE
 
-Please note that only ``UpdateItem`` supports the 5. Other compatible nethods
-understands only the 2 first.
+.. note::
+
+    Only ``UpdateItem`` recognizes all five of them. The other compatible
+    methods understand only the first two.
+
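+For example, with Boto's ``layer1`` (a sketch; the ``db`` connection, table and
+``key`` are assumed to exist, as in the getting started template):
+
+::
+
+    updates = {u'relevant_data': {u'Action': u'PUT', u'Value': {u'S': u'new'}}}
+
+    # Only UpdateItem accepts all five ReturnValues specifications
+    ret = db.layer1.update_item(TABLE_NAME, key, updates,
+                                return_values=u'UPDATED_NEW')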
 
 Rates and size limitations
 ==========================
 
-basically, none are supported yet
-
 Request rate
 ------------
 
 
 Dates are represented as float timestamps using scientific notation by DynamoDB
 but we only send them as plain numbers, not caring about the representation. Most
-parsers won't do any difference anyway.
+parsers won't spot any difference anyway.
 [metadata]
 name = ddbmock
-version = 0.4.1.dev
+version = 1.0.0
 summary = Amazon DynamoDB mock implementation
 description-file = README.rst
 author = Jean-Tiare Le Bigot
     Programming Language :: Python :: 2.7
     License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
 requires-dist =
-    Sphinx
+    pyramid
+    onctuous >= 0.5.1
+
+[noah]
+public = True
 
 [nosetests]
 where = tests/
 
 from setuptools import setup, find_packages
 
-install_requires = [
-    # d2to1 bootstrap
+setup_requires = [
     'd2to1',
-
-    'pyramid',
-    'waitress',
-    'onctuous >= 0.5.1',
-
-    'setuptools >= 0.6b1',
-]
-
-tests_requires = [
     'boto',
-
     'nose',
     'nosexcover',
     'coverage',
     'mock',
     'webtest',
+    'Sphinx',
 ]
 
 setup(
     packages=find_packages(),
     include_package_data=True,
     zip_safe=False,
-    install_requires=install_requires,
-    tests_require=tests_requires,
-    test_suite="tests",
-    entry_points = """\
+    setup_requires=setup_requires,
+    test_suite='nose.collector',
+    entry_points="""\
     [paste.app_factory]
     main = ddbmock:main
     """,

File tests/__init__.py

 from ddbmock import config, utils
 
 # Much too slow otherwise
-config.DELAY_CREATING = 0.01
-config.DELAY_UPDATING = 0.01
-config.DELAY_DELETING = 0.01
+config.DELAY_CREATING = 0.02
+config.DELAY_UPDATING = 0.02
+config.DELAY_DELETING = 0.02
 
 # Configure logging
 utils.tp_logger.addHandler(logging.StreamHandler())

File tests/functional/boto/test_scan.py

         ret = db.layer1.scan(TABLE_NAME, None)
         self.assertEqual(expected, ret)
 
+    def test_scan_paged(self):
+        from ddbmock import connect_boto_patch
+        from ddbmock.database.db import dynamodb
+
+        esk = {
+            u'HashKeyElement': {u'N': u'789'},
+            u'RangeKeyElement': {u'S': u'Waldo-5'},
+        }
+
+        expected1 = {
+            u"Count": 4,
+            u"ScannedCount": 4,
+            u"Items": [ITEM2, ITEM1, ITEM6, ITEM5],
+            u"ConsumedCapacityUnits": 1.5,
+            u'LastEvaluatedKey': esk,
+        }
+        expected2 = {
+            u"Count": 2,
+            u"ScannedCount": 2,
+            u"Items": [ITEM4, ITEM3],
+            u"ConsumedCapacityUnits": 0.5,
+        }
+
+        db = connect_boto_patch()
+
+        ret = db.layer1.scan(TABLE_NAME, limit=4)
+        self.assertEqual(expected1, ret)
+        ret = db.layer1.scan(TABLE_NAME, exclusive_start_key=esk)
+        self.assertEqual(expected2, ret)
+
     def test_scan_all_filter_fields(self):
         from ddbmock import connect_boto_patch
         from ddbmock.database.db import dynamodb

File tests/functional/internal/__init__.py

+# -*- coding: utf-8 -*-

File tests/functional/internal/test_db_schema_persistence.py

+# -*- coding: utf-8 -*-
+
+import unittest
+import mock
+
+# This module validates both schema persistence/reloading and that the sqlite
+# backend actually works
+
+TABLE_NAME = "tabloid"
+
+TABLE_RT = 45
+TABLE_WT = 123
+
+HASH_KEY = {"AttributeName":"hash_key","AttributeType":"N"}
+RANGE_KEY = {"AttributeName":"range_key","AttributeType":"S"}
+
+class TestDBSchemaPersist(unittest.TestCase):
+    @mock.patch('ddbmock.database.db.Store')
+    def test_db_schema_persistence(self, m_store):
+        from ddbmock.database.db import DynamoDB
+        from ddbmock.database import storage
+        from ddbmock.database.storage.sqlite import Store
+
+        data = {
+            "TableName": TABLE_NAME,
+            "KeySchema": {
+                "HashKeyElement": HASH_KEY,