**pybuckets** is an open source library that helps you quickly and easily access cloud-based storage services, e.g. `Amazon S3 <http://aws.amazon.com/s3>`_ and `Rackspace Cloudfiles <http://www.rackspacecloud.com/cloud_hosting_products/files>`_.
*pybuckets* provides a simple, idiomatic interface to these storage providers by emulating the widely used `python dictionary <http://docs.python.org/tutorial/datastructures.html#dictionaries>`_ interface. It models a storage provider as a Bucket Server: a dictionary-like object containing buckets, each accessible through a key unique to that server. Each bucket is in turn dictionary-like, containing a number of arbitrary file-like or string-like objects, each identified by a key unique to that bucket.
While the simple dictionary interface is likely sufficient for the large majority of situations, there will clearly be scenarios where finer control is required. For these, *pybuckets* provides a more conventional object-oriented API. Finally, although *pybuckets* aims to offer a common interface across a variety of storage providers, some features will always remain specific to a particular provider, or have a provider-specific implementation. *pybuckets* lets users leverage such capabilities by using the storage provider's own API directly.
Each storage provider already has a set of Python packages offering an API to it. *pybuckets* does not attempt to rewrite such packages. Instead, it wraps them, giving its users a simple, portable API across all such packages and storage providers.
For easy testing, *pybuckets* also provides a built-in local filesystem provider, which implements the *pybuckets* interface on top of the local filesystem. This can assist in early development, or in scenarios where it is not practical to use the production storage providers during development or testing.
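To make the dictionary analogy concrete, the core idea behind such a filesystem-backed bucket can be sketched in plain Python. This is a hypothetical illustration only, not pybuckets' actual implementation::

    import os
    import tempfile

    class LocalBucket:
        """Toy dictionary-like bucket backed by a local directory.
        Illustrative only; not the pybuckets implementation."""

        def __init__(self, path):
            self.path = path
            os.makedirs(path, exist_ok=True)

        def __setitem__(self, key, value):
            # Store the value as the contents of a file named after the key.
            with open(os.path.join(self.path, key), 'w') as f:
                f.write(value)

        def __getitem__(self, key):
            # Read the value back from the file named after the key.
            with open(os.path.join(self.path, key)) as f:
                return f.read()

        def keys(self):
            # Each file in the directory corresponds to one key.
            return os.listdir(self.path)

    bucket = LocalBucket(os.path.join(tempfile.mkdtemp(), 'demo'))
    bucket['greeting'] = 'hello world'
    print(bucket['greeting'])

Mapping each key to a file in a directory is one obvious design choice for such a provider; the real implementation may differ.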
The following code snippets show how *pybuckets* can be used. For demonstration purposes, the code below accesses Amazon S3.
*pybuckets* comes with support for a set of storage providers. Activating support for a storage provider simply requires importing the relevant modules. For Amazon S3, this support is provided by the *pybuckets.botos3* module, which wraps the excellent boto libraries for accessing Amazon S3. The module is activated as follows::
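
    import pybuckets.botos3     # importing this module activates Amazon S3 support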
The primary class representing the storage provider is the Bucket Server. Initialise the bucket server as follows::
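
    bucket_server = pybuckets.botos3.get_server(
        aws_access_key_id='...',        # keyword argument names here are
        aws_secret_access_key='...')    # illustrative, not documented

(The call location and keyword argument names shown are an assumed shape; as described below, the arguments are specific to each storage provider.)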
Note that the keyword arguments passed to the *get_server()* method are specific to the particular storage provider. These could include path information, authentication credentials, default policies and so on.
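Given the dictionary-like interface, the bucket create and delete operations discussed in the notes below might take a shape such as the following. This is purely an assumed sketch; the actual method names are not shown in this excerpt::

    bucket = bucket_server['mybucket']      # assumed: look up a bucket by key
    del bucket_server['mybucket']           # assumed: delete the bucket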
.. note:: In many cases, if the bucket already exists, the operation will still succeed, provided the bucket was created earlier using the same credentials.
.. note:: In most cases, if the bucket is not empty, this operation will fail. You will then need to delete all the objects from the bucket individually before deleting the bucket itself.
The *__iter__()* method on the server returns a sequence of all the buckets. To iterate over all the buckets on the server, just access the iterator, for example with a for loop::
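
    for bucket in bucket_server:
        print(bucket)       # iteration yields bucket objects, not key strings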
.. note:: While *bucket_server.keys()* returns a sequence of keys (strings), iterating over the bucket server returns a sequence of bucket objects. Although atypical, this is a conscious decision: it provides an easy API both for traversing the bucket server by key to retrieve the buckets, and for traversing the buckets directly.
You can get a list of all the keys inside a bucket by again using the familiar *keys()* method, or by iterating over the bucket::
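
    for key in bucket.keys():
        print(key)

    # or, assuming that iterating over a bucket also yields its keys:
    for key in bucket:
        print(key)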
.. note:: *pybuckets* makes no assumptions about the compatibility of the provided value with the underlying storage provider. Simple string objects representing the file contents work with the S3, Cloudfiles and local filesystem providers.
This operation is the same as `Set an object corresponding to the key in a bucket`_ above. If the key does not already exist, a new key/object pair is created.
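In dictionary terms, this is again a plain item assignment (the key and value below are illustrative)::

    bucket['mykey'] = 'new file contents as a string'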
.. note:: The function call operator, i.e. *()* or *__call__*, on the bucket has been overloaded to return a file-like object that can be used to write to or read from other files. This allows convenient, stream-based file access.
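For example, the call operator might be used as follows. This is an assumed sketch; the exact stream methods available on the returned object are not shown in this excerpt::

    value_file = bucket('mykey')    # returns a file-like object
    data = value_file.read()        # assumed: read the value as from a file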
There is also another convenience method provided, by which you can write the value to a file by supplying the filename::
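
    bucket('mykey').write_to_filename('/tmp/mykey.txt')

(The method name *write_to_filename* above is hypothetical; the actual name is not shown in this excerpt.)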
The left-shift operator, i.e. *<<*, similarly supports reading in the value for a given key from a file. To read the contents from an open file::
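
    with open('input.txt') as f:
        bucket('mykey') << f        # assumed shape: stream the file into the key

(The line above is an assumed sketch of the operator's usage.)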