1. Database - a database to use in a single process/thread environment
2. DatabaseThreadSafe - a database to use with threads; readers don't
   block writers etc. GeventDatabase is a 1:1 copy of that database.
3. DatabaseSuperThreadSafe - a database that can also be used with
   threads, but database operations are limited to one at a time.
4. CodernityDB-HTTP - an HTTP server version of the database, for
   multi-process / multi-user access.
Indexes in CodernityDB can also be compared with the CouchDB views mechanism (you will probably want to see :ref:`simple_index`). You can have as many indexes as you want, and a single record in the database can "exist" in more than one index.
The index itself does not store any information except its
*metadata*. You don't have to copy the full data into every index,
because all indexes other than the *id* one are bound to it by the
``_id`` value, and you can easily get the content from that *id* index by
adding ``with_doc=True`` to your get queries (please refer to
:ref:`database_indexes`).
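The binding between secondary indexes and the *id* index can be pictured with plain dictionaries (a toy model for illustration only, not CodernityDB's actual classes; the ``name_index`` and its field are made up for the example):

```python
# Toy model: a secondary index keeps only key -> _id metadata;
# the full document lives exactly once, in the id index.
id_index = {}    # _id -> full document
name_index = {}  # name -> set of _id values (metadata only)

def insert(doc):
    _id = str(len(id_index))  # simplistic id generation, for illustration
    doc = dict(doc, _id=_id)
    id_index[_id] = doc       # full data stored once
    name_index.setdefault(doc["name"], set()).add(_id)
    return _id

def get_by_name(name, with_doc=False):
    ids = name_index.get(name, set())
    if with_doc:              # resolve each _id through the id index
        return [id_index[i] for i in ids]
    return ids
```

After ``insert({"name": "alice", "age": 30})``, a call to ``get_by_name("alice", with_doc=True)`` returns the full document even though ``name_index`` only ever stored ids.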
Currently a *Hash*-based index (`Hash Table`_, separate chaining version) and a *B+Tree*-based one (`B Plus Tree`_) are available.
Both indexes make heavy use of `Sparse files`_.
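Separate chaining, the hash-table variant named above, resolves collisions by keeping a list of entries per bucket. A minimal self-contained sketch of the idea (not CodernityDB's actual index code):

```python
class ChainedHashIndex:
    """Minimal separate-chaining hash table, for illustration only."""

    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def insert(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # update an existing key in place
                return
        bucket.append((key, value))       # colliding keys chain in the list

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)
```

With only two buckets, ten inserted keys still all remain retrievable because each bucket simply chains its collisions.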
For more information about indexes visit :ref:`database_indexes`.
Also, please remember that more indexes affect write performance.
The storage needs to save a Python value to disk and return the position
and size, so that the index can record where the data lives. The default
implementation uses Python marshal_ to serialize and deserialize the
objects passed to it as values, so you will be able to store any object
that is serializable by the marshal_ module.
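The contract described above can be sketched as an append-only store whose ``save`` hands back ``(position, size)`` for the index to keep (a simplified toy, assuming an in-memory buffer instead of the real storage file):

```python
import io
import marshal

class ToyStorage:
    """Append-only storage: save() returns (position, size) metadata."""

    def __init__(self):
        self.f = io.BytesIO()  # a real storage would be a (sparse) file

    def save(self, value):
        data = marshal.dumps(value)        # marshal-serializable values only
        pos = self.f.seek(0, io.SEEK_END)  # append at the current end
        self.f.write(data)
        return pos, len(data)

    def get(self, pos, size):
        self.f.seek(pos)
        return marshal.loads(self.f.read(size))
```

An index never needs the value itself: it stores the ``(position, size)`` pair and asks the storage to deserialize on demand.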
During an insert into the database, the incoming data is passed to the
``make_key_value`` functions of *all* indexes, in the order in which they
were added to or changed in the database.
On query operations the ``make_key`` function is called to get the key.
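As standalone functions, that pair could look roughly like this (a sketch of the convention; in CodernityDB these are methods on an index class, and the ``name`` field is an assumption for the example):

```python
def make_key_value(data):
    """Called on insert/update: derive (key, value) from incoming data,
    or return None to leave the record out of this index."""
    if "name" in data:
        return data["name"], None  # None value: keep metadata only
    return None

def make_key(key):
    """Called on queries: transform the lookup key the same way the
    stored keys were transformed (identity here)."""
    return key
```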
Incoming data is at first processed in the *id* index. Then it goes
through the ``make_key_value`` method; in the next stage the value is
stored in the *storage*, and at last the metadata is stored in the
*index*. The procedure is then repeated for the other indexes.
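The whole insert path just described can be sketched in a few lines of toy code (assumed simplifications: a ``bytearray`` as the storage, plain dicts as indexes, caller-supplied ids):

```python
import marshal

storage = bytearray()  # append-only value storage
id_index = {}          # _id -> (position, size) metadata
name_index = {}        # secondary index: name -> _id

def insert(doc, _id):
    # 1. id index first: serialize the value and append it to storage...
    data = marshal.dumps(doc)
    pos = len(storage)
    storage.extend(data)
    # ...then store only the metadata in the index.
    id_index[_id] = (pos, len(data))
    # 2. repeat for the other indexes: compute the key, store metadata.
    if "name" in doc:
        name_index[doc["name"]] = _id

def fetch(_id):
    pos, size = id_index[_id]
    return marshal.loads(bytes(storage[pos:pos + size]))
```

Note that the secondary index ends up holding nothing but ``_id`` values; a lookup by name resolves to a full document only by going back through the *id* index and the storage.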