
Commits

paulswartz committed 0b597ec (merge)

Merging updates from upstream

  • Parent commits 9d38dfd, 8a50ff3
  • Branches default

Files changed (51)

File .hgignore

 syntax:glob
 
+*.DS_Store
+*.egg
+*.egg-info
 *.elc
+*.gz
+*.log
+*.orig
 *.pyc
-*~
-*.orig
-*.log
 *.swp
 *.tmp
-*.DS_Store
-testdb.sqlite
+*~
+_build/
+build/
+dist/*
 django
 local_settings.py
-dist/*
-django_storages.*
 setuptools*
+testdb.sqlite

File .hgtags

+ef009ac1d6a412b7a48bf367740841084b26d585 1.1
+0054d538ccb5e482e07fe43ae47b8d8c82f00818 1.1.1
+0054d538ccb5e482e07fe43ae47b8d8c82f00818 1.1.1
+374f6d14e3073d3f3ad4842fe906507493931713 1.1.1
+3e8e477cf67db87f3ac796e9743b81b71867fd0d 1.1.4

File AUTHORS

     * Jason Christa (patches)
     * Adam Nelson (patches)
     * Erik CW (S3 encryption)
+    * Axel Gembe (Hash path)
+    * Waldemar Kornewald (MongoDB)
 
-Extra thanks to Marty for adding this in Django, 
+Extra thanks to Marty for adding this in Django,
 you can buy his very interesting book (Pro Django).
 
 

File CHANGELOG.rst

+django-storages change log
+==========================
+
+1.1.4 (2012-01-06)
+******************
+
+* Added PendingDeprecationWarning for mosso backend
+* Merged pull request `#13`_ from marcoala, adds ``SFTP_KNOWN_HOST_FILE`` setting to SFTP storage backend
+* Merged pull request `#12`_ from ryankask, fixes HashPathStorage tests that delete remote media
+* Merged pull request `#10`_ from key, adds support for django-mongodb-engine 0.4.0 or later, fixes GridFS file deletion bug
+* Fixed S3BotoStorage performance problem calling modified_time()
+* Added deprecation warning for s3 backend, refs `#40`_
+* Fixed CLOUDFILES_CONNECTION_KWARGS import error, fixes `#78`_
+* Switched to sphinx documentation, set official docs up on http://django-storages.rtfd.org/
+* HashPathStorage uses self.exists now, fixes `#83`_
+
+.. _#13: https://bitbucket.org/david/django-storages/pull-request/13/a-version-of-sftp-storage-that-allows-you
+.. _#12: https://bitbucket.org/david/django-storages/pull-request/12/hashpathstorage-tests-deleted-my-projects
+.. _#10: https://bitbucket.org/david/django-storages/pull-request/10/support-django-mongodb-engine-040
+.. _#40: https://bitbucket.org/david/django-storages/issue/40/deprecate-s3py-backend
+.. _#78: https://bitbucket.org/david/django-storages/issue/78/import-error
+.. _#83: https://bitbucket.org/david/django-storages/issue/6/symlinkorcopystorage-new-custom-storage
+
+1.1.3 (2011-08-15)
+******************
+
+* Created this lovely change log
+* Fixed `#89`_: broken StringIO import in CloudFiles backend
+* Merged `pull request #5`_: HashPathStorage path bug
+
+.. _#89: https://bitbucket.org/david/django-storages/issue/89/112-broke-the-mosso-backend
+.. _pull request #5: https://bitbucket.org/david/django-storages/pull-request/5/fixed-path-bug-and-added-testcase-for
+

File S3.py

 import hashlib
 import hmac
 import httplib
-import re
-import sys
+try:
+    from hashlib import sha1 as sha
+except ImportError:
+    import sha
 import time
 import urllib
 import urlparse
             scheme, host, path, params, query, fragment = urlparse.urlparse(location)
             if scheme == "http":    is_secure = True
             elif scheme == "https": is_secure = False
-            else: raise invalidURL("Not http/https: " + location)
+            else: raise IOError("Not http/https: " + location)
             if query: path += "?" + query
             # retry with redirect
 

File backends/__init__.py

Empty file removed.

File backends/couchdb.py

-"""
-This is a Custom Storage System for Django with CouchDB backend.
-Created by Christian Klein.
-(c) Copyright 2009 HUDORA GmbH. All Rights Reserved.
-"""
-import os
-from cStringIO import StringIO
-from urlparse import urljoin
-from urllib import quote_plus
-
-from django.conf import settings
-from django.core.files import File
-from django.core.files.storage import Storage
-from django.core.exceptions import ImproperlyConfigured
-
-try:
-    import couchdb
-except ImportError:
-    raise ImproperlyConfigured, "Could not load couchdb dependency.\
-    \nSee http://code.google.com/p/couchdb-python/"
-
-DEFAULT_SERVER= getattr(settings, 'COUCHDB_DEFAULT_SERVER', 'http://couchdb.local:5984')
-STORAGE_OPTIONS= getattr(settings, 'COUCHDB_STORAGE_OPTIONS', {})
-
-
-class CouchDBStorage(Storage):
-    """
-    CouchDBStorage - a Django Storage class for CouchDB.
-
-    The CouchDBStorage can be configured in settings.py, e.g.::
-    
-        COUCHDB_STORAGE_OPTIONS = {
-            'server': "http://example.org", 
-            'database': 'database_name'
-        }
-
-    Alternatively, the configuration can be passed as a dictionary.
-    """
-    def __init__(self, **kwargs):
-        kwargs.update(STORAGE_OPTIONS)
-        self.base_url = kwargs.get('server', DEFAULT_SERVER)
-        server = couchdb.client.Server(self.base_url)
-        self.db = server[kwargs.get('database')]
-
-    def _put_file(self, name, content):
-        self.db[name] = {'size': len(content)}
-        self.db.put_attachment(self.db[name], content, filename='content')
-        return name
-
-    def get_document(self, name):
-        return self.db.get(name)
-
-    def _open(self, name, mode='rb'):
-        couchdb_file = CouchDBFile(name, self, mode=mode)
-        return couchdb_file
-
-    def _save(self, name, content):
-        content.open()
-        if hasattr(content, 'chunks'):
-            content_str = ''.join(chunk for chunk in content.chunks())
-        else:
-            content_str = content.read()
-        name = name.replace('/', '-')
-        return self._put_file(name, content_str)
-
-    def exists(self, name):
-        return name in self.db
-
-    def size(self, name):
-        doc = self.get_document(name)
-        if doc:
-            return doc['size']
-        return 0
-
-    def url(self, name):
-        return urljoin(self.base_url, 
-                       os.path.join(quote_plus(self.db.name), 
-                       quote_plus(name), 
-                       'content'))
-
-    def delete(self, name):
-        try:
-            del self.db[name]
-        except couchdb.client.ResourceNotFound:
-            raise IOError("File not found: %s" % name)
-
-    #def listdir(self, name):
-    # _all_docs?
-    #    pass
-
-
-class CouchDBFile(File):
-    """
-    CouchDBFile - a Django File-like class for CouchDB documents.
-    """
-
-    def __init__(self, name, storage, mode):
-        self._name = name
-        self._storage = storage
-        self._mode = mode
-        self._is_dirty = False
-
-        try:
-            self._doc = self._storage.get_document(name)
-
-            tmp, ext = os.path.split(name)
-            if ext:
-                filename = "content." + ext
-            else:
-                filename = "content"
-            attachment = self._storage.db.get_attachment(self._doc, filename=filename)
-            self.file = StringIO(attachment)
-        except couchdb.client.ResourceNotFound:
-            if 'r' in self._mode:
-                raise ValueError("The file cannot be reopened.")
-            else:
-                self.file = StringIO()
-                self._is_dirty = True
-
-    @property
-    def size(self):
-        return self._doc['size']
-
-    def write(self, content):
-        if 'w' not in self._mode:
-            raise AttributeError("File was opened for read-only access.")
-        self.file = StringIO(content)
-        self._is_dirty = True
-
-    def close(self):
-        if self._is_dirty:
-            self._storage._put_file(self._name, self.file.getvalue())
-        self.file.close()
-
-
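For context on what is being dropped here: the backend stored each file as a CouchDB document carrying a single attachment. A minimal usage sketch, assuming the settings names from the docstring above (the model and field names are hypothetical):

    # settings.py -- hypothetical project configuration
    COUCHDB_STORAGE_OPTIONS = {
        'server': 'http://couchdb.local:5984',
        'database': 'uploads',
    }

    # models.py
    from django.db import models
    from backends.couchdb import CouchDBStorage

    fs = CouchDBStorage()

    class Attachment(models.Model):
        file = models.FileField(upload_to='attachments', storage=fs)

Note that _save() flattens slashes to dashes, so upload_to paths end up as part of the document id rather than as directories.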

File backends/database.py

-# DatabaseStorage for django.
-# 2009 (c) GameKeeper Gambling Ltd, Ivanov E.
-import StringIO
-import urlparse
-
-from django.conf import settings
-from django.core.files import File
-from django.core.files.storage import Storage
-from django.core.exceptions import ImproperlyConfigured
-
-try:
-    import pyodbc
-except ImportError:
-    raise ImproperlyConfigured, "Could not load pyodbc dependency.\
-    \nSee http://code.google.com/p/pyodbc/"
-
-
-class DatabaseStorage(Storage):
-    """
-    Class DatabaseStorage provides storing files in the database. 
-    """
-
-    def __init__(self, option=settings.DB_FILES):
-        """Constructor. 
-        
-        Constructs object using dictionary either specified in contucotr or
-in settings.DB_FILES. 
-        
-        @param option dictionary with 'db_table', 'fname_column',
-'blob_column', 'size_column', 'base_url'  keys. 
-        
-        option['db_table']
-            Table to work with.
-        option['fname_column']
-            Column in the 'db_table' containing filenames (filenames can
-contain pathes). Values should be the same as where FileField keeps
-filenames. 
-            It is used to map filename to blob_column. In sql it's simply
-used in where clause. 
-        option['blob_column']
-            Blob column (for example 'image' type), created manually in the
-'db_table', used to store image.
-        option['size_column']
-            Column to store file size. Used for optimization of size()
-method (another way is to open file and get size)
-        option['base_url']
-            Url prefix used with filenames. Should be mapped to the view,
-that returns an image as result. 
-        """
-        
-        if not option or not (option.has_key('db_table') and option.has_key('fname_column') and option.has_key('blob_column')
-                              and option.has_key('size_column') and option.has_key('base_url') ):
-            raise ValueError("You didn't specify required options")
-        self.db_table = option['db_table']
-        self.fname_column = option['fname_column']
-        self.blob_column = option['blob_column']
-        self.size_column = option['size_column']
-        self.base_url = option['base_url']
-
-        #get database settings
-        self.DATABASE_ODBC_DRIVER = settings.DATABASE_ODBC_DRIVER
-        self.DATABASE_NAME = settings.DATABASE_NAME
-        self.DATABASE_USER = settings.DATABASE_USER
-        self.DATABASE_PASSWORD = settings.DATABASE_PASSWORD
-        self.DATABASE_HOST = settings.DATABASE_HOST
-        
-        self.connection = pyodbc.connect('DRIVER=%s;SERVER=%s;DATABASE=%s;UID=%s;PWD=%s'%(self.DATABASE_ODBC_DRIVER,self.DATABASE_HOST,self.DATABASE_NAME,
-                                                                                          self.DATABASE_USER, self.DATABASE_PASSWORD) )
-        self.cursor = self.connection.cursor()
-
-    def _open(self, name, mode='rb'):
-        """Open a file from database. 
-        
-        @param name filename or relative path to file based on base_url. path should contain only "/", but not "\". Apache sends pathes with "/".
-        If there is no such file in the db, returs None
-        """
-        
-        assert mode == 'rb', "You've tried to open binary file without specifying binary mode! You specified: %s"%mode
-
-        row = self.cursor.execute("SELECT %s from %s where %s = '%s'"%(self.blob_column,self.db_table,self.fname_column,name) ).fetchone()
-        if row is None:
-            return None
-        inMemFile = StringIO.StringIO(row[0])
-        inMemFile.name = name
-        inMemFile.mode = mode
-        
-        retFile = File(inMemFile)
-        return retFile
-
-    def _save(self, name, content):
-        """Save 'content' as file named 'name'.
-        
-        @note '\' in path will be converted to '/'. 
-        """
-        
-        name = name.replace('\\', '/')
-        binary = pyodbc.Binary(content.read())
-        size = len(binary)
-        
-        #todo: check result and do something (exception?) if failed.
-        if self.exists(name):
-            self.cursor.execute("UPDATE %s SET %s = ?, %s = ? WHERE %s = '%s'"%(self.db_table,self.blob_column,self.size_column,self.fname_column,name), 
-                                 (binary, size)  )
-        else:
-            self.cursor.execute("INSERT INTO %s VALUES(?, ?, ?)"%(self.db_table), (name, binary, size)  )
-        self.connection.commit()
-        return name
-
-    def exists(self, name):
-        row = self.cursor.execute("SELECT %s from %s where %s = '%s'"%(self.fname_column,self.db_table,self.fname_column,name)).fetchone()
-        return row is not None
-    
-    def get_available_name(self, name):
-        return name
-
-    def delete(self, name):
-        if self.exists(name):
-            self.cursor.execute("DELETE FROM %s WHERE %s = '%s'"%(self.db_table,self.fname_column,name))
-            self.connection.commit()
-
-    def url(self, name):
-        if self.base_url is None:
-            raise ValueError("This file is not accessible via a URL.")
-        return urlparse.urljoin(self.base_url, name).replace('\\', '/')
-    
-    def size(self, name):
-        row = self.cursor.execute("SELECT %s from %s where %s = '%s'"%(self.size_column,self.db_table,self.fname_column,name)).fetchone()
-        if row is None:
-            return 0
-        else:
-            return int(row[0])
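For reference, the docstring above was the only documentation this backend had; a sketch of the required configuration (table and column names are examples, not defaults -- every key must be present or __init__ raises ValueError):

    # settings.py
    DB_FILES = {
        'db_table': 'upload',        # table holding the files
        'fname_column': 'fname',     # varchar column matched against the FileField name
        'blob_column': 'blob',       # binary column holding the file contents
        'size_column': 'size',       # integer column read by size()
        'base_url': '/media/db/',    # URL prefix, resolved by a serving view
    }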

File backends/ftp.py

-# FTP storage class for Django pluggable storage system.
-# Author: Rafal Jonca <jonca.rafal@gmail.com>
-# License: MIT
-# Comes from http://www.djangosnippets.org/snippets/1269/
-#
-# Usage:
-#
-# Add below to settings.py:
-# FTP_STORAGE_LOCATION = '[a]ftp://<user>:<pass>@<host>:<port>/[path]'
-#
-# In models.py you can write:
-# from FTPStorage import FTPStorage
-# fs = FTPStorage()
-# class FTPTest(models.Model):
-#     file = models.FileField(upload_to='a/b/c/', storage=fs)
-
-import os
-import ftplib
-import urlparse
-
-try:
-    from cStringIO import StringIO
-except ImportError:
-    from StringIO import StringIO
-
-from django.conf import settings
-from django.core.files.base import File
-from django.core.files.storage import Storage
-from django.core.exceptions import ImproperlyConfigured
-
-
-class FTPStorageException(Exception): pass
-
-class FTPStorage(Storage):
-    """FTP Storage class for Django pluggable storage system."""
-
-    def __init__(self, location=settings.FTP_STORAGE_LOCATION, base_url=settings.MEDIA_URL):
-        self._config = self._decode_location(location)
-        self._base_url = base_url
-        self._connection = None
-
-    def _decode_location(self, location):
-        """Return splitted configuration data from location."""
-        splitted_url = urlparse.urlparse(location)
-        config = {}
-        
-        if splitted_url.scheme not in ('ftp', 'aftp'):
-            raise ImproperlyConfigured('FTPStorage works only with FTP protocol!')
-        if splitted_url.hostname == '':
-            raise ImproperlyConfigured('You must at least provide hostname!')
-            
-        if splitted_url.scheme == 'aftp':
-            config['active'] = True
-        else:
-            config['active'] = False
-        config['path'] = splitted_url.path
-        config['host'] = splitted_url.hostname
-        config['user'] = splitted_url.username
-        config['passwd'] = splitted_url.password
-        config['port'] = int(splitted_url.port)
-        
-        return config
-
-    def _start_connection(self):
-        # Check if connection is still alive and if not, drop it.
-        if self._connection is not None:
-            try:
-                self._connection.pwd()
-            except ftplib.all_errors, e:
-                self._connection = None
-        
-        # Real reconnect
-        if self._connection is None:
-            ftp = ftplib.FTP()
-            try:
-                ftp.connect(self._config['host'], self._config['port'])
-                ftp.login(self._config['user'], self._config['passwd'])
-                if self._config['active']:
-                    ftp.set_pasv(False)
-                if self._config['path'] != '':
-                    ftp.cwd(self._config['path'])
-                self._connection = ftp
-                return
-            except ftplib.all_errors, e:
-                raise FTPStorageException('Connection or login error using data %s' % repr(self._config))
-
-    def disconnect(self):
-        self._connection.quit()
-        self._connection = None
-
-    def _mkremdirs(self, path):
-        pwd = self._connection.pwd()
-        path_splitted = path.split('/')
-        for path_part in path_splitted:
-            try:
-                self._connection.cwd(path_part)
-            except:
-                try:
-                    self._connection.mkd(path_part)
-                    self._connection.cwd(path_part)
-                except ftplib.all_errors, e:
-                    raise FTPStorageException('Cannot create directory chain %s' % path)                    
-        self._connection.cwd(pwd)
-        return
-
-    def _put_file(self, name, content):
-        # Connection must be open!
-        try:
-            self._mkremdirs(os.path.dirname(name))
-            pwd = self._connection.pwd()
-            self._connection.cwd(os.path.dirname(name))
-            self._connection.storbinary('STOR ' + os.path.basename(name), content.file, content.DEFAULT_CHUNK_SIZE)
-            self._connection.cwd(pwd)
-        except ftplib.all_errors, e:
-            raise FTPStorageException('Error writing file %s' % name)
-
-    def _open(self, name, mode='rb'):
-        remote_file = FTPStorageFile(name, self, mode=mode)
-        return remote_file
-
-    def _read(self, name):
-        memory_file = StringIO()
-        try:
-            pwd = self._connection.pwd()
-            self._connection.cwd(os.path.dirname(name))
-            self._connection.retrbinary('RETR ' + os.path.basename(name), memory_file.write)
-            self._connection.cwd(pwd)
-            return memory_file
-        except ftplib.all_errors, e:
-            raise FTPStorageException('Error reading file %s' % name)
-        
-    def _save(self, name, content):
-        content.open()
-        self._start_connection()
-        self._put_file(name, content)
-        content.close()
-        return name
-
-    def _get_dir_details(self, path):
-        # Connection must be open!
-        try:
-            lines = []
-            self._connection.retrlines('LIST '+path, lines.append)
-            dirs = {}
-            files = {}
-            for line in lines:
-                words = line.split()
-                if len(words) < 6:
-                    continue
-                if words[-2] == '->':
-                    continue
-                if words[0][0] == 'd':
-                    dirs[words[-1]] = 0;
-                elif words[0][0] == '-':
-                    files[words[-1]] = int(words[-5]);
-            return dirs, files
-        except ftplib.all_errors, msg:
-            raise FTPStorageException('Error getting listing for %s' % path)
-
-    def listdir(self, path):
-        self._start_connection()
-        try:
-            dirs, files = self._get_dir_details(path)
-            return dirs.keys(), files.keys()
-        except FTPStorageException, e:
-            raise
-
-    def delete(self, name):
-        if not self.exists(name):
-            return
-        self._start_connection()
-        try:
-            self._connection.delete(name)
-        except ftplib.all_errors, e:
-            raise FTPStorageException('Error when removing %s' % name)                 
-
-    def exists(self, name):
-        self._start_connection()
-        try:
-            if os.path.basename(name) in self._connection.nlst(os.path.dirname(name) + '/'):
-                return True
-            else:
-                return False
-        except ftplib.error_temp, e:
-            return False
-        except ftplib.error_perm, e:
-            # error_perm: 550 Can't find file
-            return False
-        except ftplib.all_errors, e:
-            raise FTPStorageException('Error when testing existence of %s' % name)            
-
-    def size(self, name):
-        self._start_connection()
-        try:
-            dirs, files = self._get_dir_details(os.path.dirname(name))
-            if os.path.basename(name) in files:
-                return files[os.path.basename(name)]
-            else:
-                return 0
-        except FTPStorageException, e:
-            return 0
-
-    def url(self, name):
-        if self._base_url is None:
-            raise ValueError("This file is not accessible via a URL.")
-        return urlparse.urljoin(self._base_url, name).replace('\\', '/')
-
-class FTPStorageFile(File):
-    def __init__(self, name, storage, mode):
-        self._name = name
-        self._storage = storage
-        self._mode = mode
-        self._is_dirty = False
-        self.file = StringIO()
-        self._is_read = False
-    
-    @property
-    def size(self):
-        if not hasattr(self, '_size'):
-            self._size = self._storage.size(self._name)
-        return self._size
-
-    def read(self, num_bytes=None):
-        if not self._is_read:
-            self._storage._start_connection()
-            self.file = self._storage._read(self._name)
-            self._storage._end_connection()
-            self._is_read = True
-            
-        return self.file.read(num_bytes)
-
-    def write(self, content):
-        if 'w' not in self._mode:
-            raise AttributeError("File was opened for read-only access.")
-        self.file = StringIO(content)
-        self._is_dirty = True
-        self._is_read = True
-
-    def close(self):
-        if self._is_dirty:
-            self._storage._start_connection()
-            self._storage._put_file(self._name, self.file.getvalue())
-            self._storage._end_connection()
-        self.file.close()

File backends/image.py

-
-import os
-
-from django.core.files.storage import FileSystemStorage
-from django.core.exceptions import ImproperlyConfigured
-
-try:
-    from PIL import ImageFile as PILImageFile
-except ImportError:
-    raise ImproperlyConfigured, "Could not load PIL dependency.\
-    \nSee http://www.pythonware.com/products/pil/"
-
-
-class ImageStorage(FileSystemStorage):
-    """
-    A FileSystemStorage which normalizes extensions for images.
-    
-    Comes from http://www.djangosnippets.org/snippets/965/
-    """
-    
-    def find_extension(self, format):
-        """Normalizes PIL-returned format into a standard, lowercase extension."""
-        format = format.lower()
-        
-        if format == 'jpeg':
-            format = 'jpg'
-        
-        return format
-    
-    def save(self, name, content):
-        dirname = os.path.dirname(name)
-        basename = os.path.basename(name)
-        
-        # Use PIL to determine filetype
-        
-        p = PILImageFile.Parser()
-        while 1:
-            data = content.read(1024)
-            if not data:
-                break
-            p.feed(data)
-            if p.image:
-                im = p.image
-                break
-        
-        extension = self.find_extension(im.format)
-        
-        # Does the basename already have an extension? If so, replace it.
-        # bare as in without extension
-        bare_basename, _ = os.path.splitext(basename)
-        basename = bare_basename + '.' + extension
-        
-        name = os.path.join(dirname, basename)
-        return super(ImageStorage, self).save(name, content)
-    
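The practical effect of the PIL sniffing in save() above is that the stored name follows the detected image format, not the supplied extension. A sketch (paths and names hypothetical):

    from django.core.files.base import ContentFile
    from backends.image import ImageStorage

    storage = ImageStorage()
    jpeg_bytes = open('/tmp/example.jpg', 'rb').read()  # any real JPEG data
    # PIL reports the format as 'jpeg', which find_extension() normalizes
    # to 'jpg', so the file is stored as 'photo.jpg' despite the .png name.
    name = storage.save('photo.png', ContentFile(jpeg_bytes))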

File backends/mogile.py

-import urlparse
-import mimetypes
-from StringIO import StringIO
-
-from django.conf import settings
-from django.core.cache import cache
-from django.utils.text import force_unicode
-from django.core.files.storage import Storage
-from django.http import HttpResponse, HttpResponseNotFound
-from django.core.exceptions import ImproperlyConfigured
-
-try:
-    import mogilefs
-except ImportError:
-    raise ImproperlyConfigured, "Could not load mogilefs dependency.\
-    \nSee http://mogilefs.pbworks.com/Client-Libraries"
-
-
-class MogileFSStorage(Storage):
-    """MogileFS filesystem storage"""
-    def __init__(self, base_url=settings.MEDIA_URL):
-        
-        # the MOGILEFS_MEDIA_URL overrides MEDIA_URL
-        if hasattr(settings, 'MOGILEFS_MEDIA_URL'):
-            self.base_url = settings.MOGILEFS_MEDIA_URL
-        else:
-            self.base_url = base_url
-                
-        for var in ('MOGILEFS_TRACKERS', 'MOGILEFS_DOMAIN',):
-            if not hasattr(settings, var):
-                raise ImproperlyConfigured, "You must define %s to use the MogileFS backend." % var
-            
-        self.trackers = settings.MOGILEFS_TRACKERS
-        self.domain = settings.MOGILEFS_DOMAIN
-        self.client = mogilefs.Client(self.domain, self.trackers)
-    
-    def get_mogile_paths(self, filename):
-        return self.client.get_paths(filename)  
-    
-    # The following methods define the Backend API
-
-    def filesize(self, filename):
-        raise NotImplemented
-        #return os.path.getsize(self._get_absolute_path(filename))
-    
-    def path(self, filename):
-        paths = self.get_mogile_paths(filename)
-        if paths:
-            return self.get_mogile_paths(filename)[0]
-        else:
-            return None
-    
-    def url(self, filename):
-        return urlparse.urljoin(self.base_url, filename).replace('\\', '/')
-
-    def open(self, filename, mode='rb'):
-        raise NotImplemented
-        #return open(self._get_absolute_path(filename), mode)
-
-    def exists(self, filename):
-        return filename in self.client
-
-    def save(self, filename, raw_contents):
-        filename = self.get_available_filename(filename)
-        
-        if not hasattr(self, 'mogile_class'):
-            self.mogile_class = None
-
-        # Write the file to mogile
-        success = self.client.send_file(filename, StringIO(raw_contents), self.mogile_class)
-        if success:
-            print "Wrote file to key %s, %s@%s" % (filename, self.domain, self.trackers[0])
-        else:
-            print "FAILURE writing file %s" % (filename)
-
-        return force_unicode(filename.replace('\\', '/'))
-
-    def delete(self, filename):
-        
-        self.client.delete(filename)
-            
-        
-def serve_mogilefs_file(request, key=None):
-    """
-    Called when a user requests an image.
-    Either reproxy the path to perlbal, or serve the image outright
-    """
-    # not the best way to do this, since we create a client each time
-    mimetype = mimetypes.guess_type(key)[0] or "application/x-octet-stream"
-    client = mogilefs.Client(settings.MOGILEFS_DOMAIN, settings.MOGILEFS_TRACKERS)
-    if hasattr(settings, "SERVE_WITH_PERLBAL") and settings.SERVE_WITH_PERLBAL:
-        # we're reproxying with perlbal
-        
-        # check the path cache
-        
-        path = cache.get(key)
-
-        if not path:
-            path = client.get_paths(key)
-            cache.set(key, path, 60)
-    
-        if path:
-            response = HttpResponse(content_type=mimetype)
-            response['X-REPROXY-URL'] = path[0]
-        else:
-            response = HttpResponseNotFound()
-    
-    else:
-        # we don't have perlbal, let's just serve the image via django
-        file_data = client[key]
-        if file_data:
-            response = HttpResponse(file_data, mimetype=mimetype)
-        else:
-            response = HttpResponseNotFound()
-    
-    return response
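serve_mogilefs_file() is a plain view, so the backend relied on the project wiring it into its URLconf; a hypothetical pattern in the old-style syntax of the Django versions this code targeted:

    # urls.py -- hypothetical; the URL prefix must match the media URL setting
    from django.conf.urls.defaults import patterns

    urlpatterns = patterns('',
        (r'^media/(?P<key>.*)$', 'backends.mogile.serve_mogilefs_file'),
    )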

File backends/mosso.py

-"""
-Custom storage for django with Mosso Cloud Files backend.
-Created by Rich Leland <rich@richleland.com>.
-"""
-from django.conf import settings
-from django.core.exceptions import ImproperlyConfigured
-from django.core.files import File
-from django.core.files.storage import Storage
-from django.utils.text import get_valid_filename
-
-try:
-    import cloudfiles
-    from cloudfiles.errors import NoSuchObject
-except ImportError:
-    raise ImproperlyConfigured("Could not load cloudfiles dependency. See "
-                               "http://www.mosso.com/cloudfiles.jsp.")
-
-# TODO: implement TTL into cloudfiles methods
-CLOUDFILES_TTL = getattr(settings, 'CLOUDFILES_TTL', 600)
-
-
-def cloudfiles_upload_to(self, filename):
-    """
-    Simple, custom upload_to because Cloud Files doesn't support
-    nested containers (directories).
-
-    Actually found this out from @minter:
-    @richleland The Cloud Files APIs do support pseudo-subdirectories, by
-    creating zero-byte files with type application/directory.
-
-    May implement in a future version.
-    """
-    return get_valid_filename(filename)
-
-
-class CloudFilesStorage(Storage):
-    """
-    Custom storage for Mosso Cloud Files.
-    """
-    default_quick_listdir = True
-
-    def __init__(self, username=None, api_key=None, container=None,
-                 connection_kwargs=None):
-        """
-        Initialize the settings for the connection and container.
-        """
-        self.username = username or settings.CLOUDFILES_USERNAME
-        self.api_key = api_key or settings.CLOUDFILES_API_KEY
-        self.container_name = container or settings.CLOUDFILES_CONTAINER
-        self.connection_kwargs = connection_kwargs or {}
-
-    def __getstate__(self):
-        """
-        Return a picklable representation of the storage.
-        """
-        return dict(username=self.username,
-                    api_key=self.api_key,
-                    container_name=self.container_name,
-                    connection_kwargs=self.connection_kwargs)
-
-    def _get_connection(self):
-        if not hasattr(self, '_connection'):
-            self._connection = cloudfiles.get_connection(self.username,
-                                    self.api_key, **self.connection_kwargs)
-        return self._connection
-
-    def _set_connection(self, value):
-        self._connection = value
-
-    connection = property(_get_connection, _set_connection)
-
-    def _get_container(self):
-        if not hasattr(self, '_container'):
-            self.container = self.connection.get_container(
-                                                        self.container_name)
-        return self._container
-
-    def _set_container(self, container):
-        """
-        Set the container, making it publicly available (on Limelight CDN) if
-        it is not already.
-        """
-        if not container.is_public():
-            container.make_public()
-        if hasattr(self, '_container_public_uri'):
-            delattr(self, '_container_public_uri')
-        self._container = container
-
-    container = property(_get_container, _set_container)
-
-    def _get_container_url(self):
-        if not hasattr(self, '_container_public_uri'):
-            self._container_public_uri = self.container.public_uri()
-        return self._container_public_uri
-
-    container_url = property(_get_container_url)
-
-    def _get_cloud_obj(self, name):
-        """
-        Helper function to retrieve the requested Cloud Files object.
-        """
-        return self.container.get_object(name)
-
-    def _open(self, name, mode='rb'):
-        """
-        Return the CloudFilesStorageFile.
-        """
-        return CloudFilesStorageFile(storage=self, name=name)
-
-    def _save(self, name, content):
-        """
-        Use the Cloud Files service to write ``content`` to a remote file
-        (called ``name``).
-        """
-        content.open()
-        cloud_obj = self.container.create_object(name)
-        cloud_obj.size = content.file.size
-        # If the content type is available, pass it in directly rather than
-        # getting the cloud object to try to guess.
-        if hasattr(content.file, 'content_type'):
-            cloud_obj.content_type = content.file.content_type
-        cloud_obj.send(content)
-        content.close()
-        return name
-
-    def delete(self, name):
-        """
-        Deletes the specified file from the storage system.
-        """
-        self.container.delete_object(name)
-
-    def exists(self, name):
-        """
-        Returns True if a file referenced by the given name already exists in
-        the storage system, or False if the name is available for a new file.
-        """
-        try:
-            self._get_cloud_obj(name)
-            return True
-        except NoSuchObject:
-            return False
-
-    def listdir(self, path):
-        """
-        Lists the contents of the specified path, returning a 2-tuple; the
-        first being an empty list of directories (not available for quick-
-        listing), the second being a list of filenames.
-
-        If the list of directories is required, use the full_listdir method.
-        """
-        files = []
-        if path and not path.endswith('/'):
-            path = '%s/' % path
-        path_len = len(path)
-        for name in self.container.list_objects(path=path):
-            files.append(name[path_len:])
-        return ([], files)
-
-    def full_listdir(self, path):
-        """
-        Lists the contents of the specified path, returning a 2-tuple of lists;
-        the first item being directories, the second item being files.
-
-        On large containers, this may be a slow operation for root containers
-        because every single object must be returned (cloudfiles does not
-        provide an explicit way of listing directories).
-        """
-        dirs = set()
-        files = []
-        if path and not path.endswith('/'):
-            path = '%s/' % path
-        path_len = len(path)
-        for name in self.container.list_objects(prefix=path):
-            name = name[path_len:]
-            slash = name[1:-1].find('/') + 1
-            if slash:
-                dirs.add(name[:slash])
-            elif name:
-                files.append(name)
-        dirs = list(dirs)
-        dirs.sort()
-        return (dirs, files)
-
-    def size(self, name):
-        """
-        Returns the total size, in bytes, of the file specified by name.
-        """
-        return self._get_cloud_obj(name).size
-
-    def url(self, name):
-        """
-        Returns an absolute URL where the file's contents can be accessed
-        directly by a web browser.
-        """
-        return '%s/%s' % (self.container_url, name)
-
-
-class CloudFilesStorageFile(File):
-    closed = False
-
-    def __init__(self, storage, name, *args, **kwargs):
-        self._storage = storage
-        super(CloudFilesStorageFile, self).__init__(file=None, name=name,
-                                                    *args, **kwargs)
-
-    def _get_size(self):
-        if not hasattr(self, '_size'):
-            self._size = self._storage.size(self.name)
-        return self._size
-
-    def _set_size(self, size):
-        self._size = size
-
-    size = property(_get_size, _set_size)
-
-    def _get_file(self):
-        if not hasattr(self, '_file'):
-            self._file = self._storage._get_cloud_obj(self.name)
-        return self._file
-
-    def _set_file(self, value):
-        if value is None:
-            if hasattr(self, '_file'):
-                del self._file
-        else:
-            self._file = value
-
-    file = property(_get_file, _set_file)
-
-    def read(self, num_bytes=None):
-        data = self.file.read(size=num_bytes or -1, offset=self._pos)
-        self._pos += len(data)
-        return data
-
-    def open(self, *args, **kwargs):
-        """
-        Open the cloud file object.
-        """
-        self.file
-        self._pos = 0
-
-    def close(self, *args, **kwargs):
-        self._pos = 0
-
-    @property
-    def closed(self):
-        return not hasattr(self, '_file')
-
-    def seek(self, pos):
-        self._pos = pos
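When constructed without arguments, the storage above falls back to three settings; a minimal sketch of that configuration (values are placeholders):

    # settings.py -- placeholder credentials
    CLOUDFILES_USERNAME = 'account'
    CLOUDFILES_API_KEY = 'secret-api-key'
    CLOUDFILES_CONTAINER = 'media'

    # Then, e.g., as the project-wide default storage backend:
    DEFAULT_FILE_STORAGE = 'backends.mosso.CloudFilesStorage'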

File backends/overwrite.py

-import os
-
-from django.conf import settings
-from django.core.files.storage import FileSystemStorage
-
-class OverwriteStorage(FileSystemStorage):
-    """
-    Comes from http://www.djangosnippets.org/snippets/976/
-    (though the same behavior has existed in S3Storage for ages).
-    
-    See also Django #4339, which might add this functionality to core.
-    """
-    
-    def get_available_name(self, name):
-        """
-        Returns a filename that's free on the target storage system, and
-        available for new content to be written to.
-        """
-        if self.exists(name):
-            self.delete(name)
-        return name
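The override means a second save under the same name replaces the first instead of being given a new name; a sketch:

    from django.core.files.base import ContentFile
    from backends.overwrite import OverwriteStorage

    storage = OverwriteStorage()
    storage.save('report.txt', ContentFile('first'))
    storage.save('report.txt', ContentFile('second'))
    # Plain FileSystemStorage would store the second file under a modified
    # name; OverwriteStorage deletes the old 'report.txt' and reuses it.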

File backends/s3.py

-import os
-import mimetypes
-import random
-import time
-
-try:
-    from cStringIO import StringIO
-except ImportError:
-    from StringIO import StringIO
-
-from django.conf import settings
-from django.core.exceptions import ImproperlyConfigured
-from django.core.files.base import File
-from django.core.files.storage import Storage
-from django.utils.functional import curry
-
-try:
-    from S3 import AWSAuthConnection, QueryStringAuthGenerator
-except ImportError:
-    raise ImproperlyConfigured, "Could not load amazon's S3 bindings.\
-    \nSee http://developer.amazonwebservices.com/connect/entry.jspa?externalID=134"
-
-ACCESS_KEY_NAME = 'AWS_ACCESS_KEY_ID'
-SECRET_KEY_NAME = 'AWS_SECRET_ACCESS_KEY'
-HEADERS = 'AWS_HEADERS'
-
-DEFAULT_ACL= getattr(settings, 'AWS_DEFAULT_ACL', 'public-read')
-QUERYSTRING_ACTIVE= getattr(settings, 'AWS_QUERYSTRING_ACTIVE', False)
-QUERYSTRING_EXPIRE= getattr(settings, 'AWS_QUERYSTRING_EXPIRE', 60)
-SECURE_URLS= getattr(settings, 'AWS_S3_SECURE_URLS', False)
-BUCKET_PREFIX = lambda: getattr(settings, 'AWS_BUCKET_PREFIX', '')
-
-IS_GZIPPED= getattr(settings, 'AWS_IS_GZIPPED', False) 
-GZIP_CONTENT_TYPES = (
-    'text/css',
-    'application/javascript',
-    'application/x-javascript'
-)
-GZIP_CONTENT_TYPES = getattr(settings, 'GZIP_CONTENT_TYPES', GZIP_CONTENT_TYPES)
-
-if IS_GZIPPED:
-    from gzip import GzipFile
-
-class S3Storage(Storage):
-    """Amazon Simple Storage Service"""
-
-    def __init__(self, bucket=settings.AWS_STORAGE_BUCKET_NAME,
-            access_key=None, secret_key=None, acl=DEFAULT_ACL,
-            calling_format=settings.AWS_CALLING_FORMAT, encrypt=False,
-            gzip=IS_GZIPPED, gzip_content_types=GZIP_CONTENT_TYPES):
-        self.bucket = bucket
-        self.acl = acl
-        self.encrypt = encrypt
-        self.gzip = gzip
-        self.gzip_content_types = gzip_content_types
-        
-        if encrypt:
-            try:
-                import ezPyCrypto
-            except ImportError:
-                raise ImproperlyConfigured, "Could not load ezPyCrypto.\
-                \nSee http://www.freenet.org.nz/ezPyCrypto/ to install it."
-            self.crypto_key = ezPyCrypto.key
-
-        if not access_key and not secret_key:
-            access_key, secret_key = self._get_access_keys()
-
-        self.connection = AWSAuthConnection(access_key, secret_key,
-                            calling_format=calling_format)
-        self._original_make_request = self.connection._make_request
-        self.connection._make_request = self._make_request
-        self.generator = QueryStringAuthGenerator(access_key, secret_key, 
-                            calling_format=calling_format,
-                            is_secure=SECURE_URLS)
-        self.generator.set_expires_in(QUERYSTRING_EXPIRE)
-        
-        self.headers = getattr(settings, HEADERS, {})
-
-    def _get_access_keys(self):
-        access_key = getattr(settings, ACCESS_KEY_NAME, None)
-        secret_key = getattr(settings, SECRET_KEY_NAME, None)
-        if (access_key or secret_key) and (not access_key or not secret_key):
-            access_key = os.environ.get(ACCESS_KEY_NAME)
-            secret_key = os.environ.get(SECRET_KEY_NAME)
-
-        if access_key and secret_key:
-            # Both were provided, so use them
-            return access_key, secret_key
-
-        return None, None
-
-    def _get_connection(self):
-        return AWSAuthConnection(*self._get_access_keys())
-
-    def _clean_name(self, name, prefix=True):
-        # Useful for Windows paths
-        bucket_prefix = BUCKET_PREFIX()
-        if not name.startswith('%s/' % bucket_prefix):
-            prefix_string = bucket_prefix
-        else:
-            prefix_string = ''
-        return os.path.join(prefix_string, os.path.normpath(name).replace('\\', '/'))
-
-    def _compress_string(self, s):
-        """Gzip a given string."""
-        zbuf = StringIO()
-        zfile = GzipFile(mode='wb', compresslevel=6, fileobj=zbuf)
-        zfile.write(s)
-        zfile.close()
-        return zbuf.getvalue()
-        
-    def _put_file(self, name, content):
-        if self.encrypt:
-        
-            # Create a key object
-            key = self.crypto_key()
-        
-            # Read in a public key
-            fd = open(settings.CRYPTO_KEYS_PUBLIC, "rb")
-            public_key = fd.read()
-            fd.close()
-        
-            # import this public key
-            key.importKey(public_key)
-        
-            # Now encrypt some text against this public key
-            content = key.encString(content)
-        
-        content_type = mimetypes.guess_type(name)[0] or "application/x-octet-stream"
-        
-        if self.gzip and content_type in self.gzip_content_types:
-            content = self._compress_string(content)
-            self.headers.update({'Content-Encoding': 'gzip'})
-        
-        self.headers.update({
-            'x-amz-acl': self.acl, 
-            'Content-Type': content_type,
-            'Content-Length' : len(content),
-        })
-        response = self.connection.put(self.bucket, name, content, self.headers)
-        if response.http_response.status not in (200, 206):
-            raise IOError("S3StorageError: %s" % response.message)
-
-    def _open(self, name, mode='rb'):
-        name = self._clean_name(name)
-        remote_file = S3StorageFile(name, self, mode=mode)
-        return remote_file
-
-    def _read(self, name, start_range=None, end_range=None):
-        name = self._clean_name(name)
-        if start_range is None:
-            headers = {}
-        else:
-            headers = {'Range': 'bytes=%s-%s' % (start_range, end_range)}
-        response = self.connection.get(self.bucket, name, headers)
-        if response.http_response.status not in (200, 206):
-            raise IOError("S3StorageError: %s" % response.message)
-        headers = response.http_response.msg
-        
-        if self.encrypt:
-            # Read in a private key
-            fd = open(settings.CRYPTO_KEYS_PRIVATE, "rb")
-            private_key = fd.read()
-            fd.close()
-        
-            # Create a key object, and auto-import private key
-            key = self.crypto_key(private_key)
-        
-            # Decrypt this file
-            response.object.data = key.decString(response.object.data)
-        
-        return response.object.data, headers.get('etag', None), headers.get('content-range', None)
-        
-    def _save(self, name, content):
-        name = self._clean_name(name)
-        content.open()
-        if hasattr(content, 'chunks'):
-            content_str = ''.join(chunk for chunk in content.chunks())
-        else:
-            content_str = content.read()
-        self._put_file(name, content_str)
-        return name
-    
-    def delete(self, name):
-        name = self._clean_name(name)
-        response = self.connection.delete(self.bucket, name)
-        if response.http_response.status != 204:
-            raise IOError("S3StorageError: %s" % response.message)
-
-    def exists(self, name):
-        name = self._clean_name(name)
-        response = self.connection._make_request('HEAD', self.bucket, name)
-        return response.status == 200
-
-    def size(self, name):
-        name = self._clean_name(name)
-        response = self.connection._make_request('HEAD', self.bucket, name)
-        content_length = response.getheader('Content-Length')
-        return content_length and int(content_length) or 0
-    
-    def url(self, name):
-        name = self._clean_name(name)
-        if QUERYSTRING_ACTIVE:
-            return self.generator.generate_url('GET', self.bucket, name)
-        else:
-            return self.generator.make_bare_url(self.bucket, name).replace(
-                '%2F', '/')
-
-    def _make_request(self, *args):
-        for i in range(5):
-            try:
-                response = self._original_make_request(*args)
-                if response.status in (500, 503): # some error we should retry for
-                    raise IOError
-                return response
-            except IOError:
-                if i == 4: # last try, still doesn't work
-                    raise
-                # 0-0.5s after 1 try, up to 0-8s on the last retry
-                sleep_range = ((i + 1) ** 2) /  2.0
-                time.sleep(random.uniform(0, sleep_range))
-
-    ## UNCOMMENT BELOW IF NECESSARY
-    #def get_available_name(self, name):
-    #    """ Overwrite existing file with the same name. """
-    #    name = self._clean_name(name)
-    #    return name
-
-
-class S3StorageFile(File):
-    def __init__(self, name, storage, mode):
-        self._name = name
-        self._storage = storage
-        self._mode = mode
-        self._is_dirty = False
-        self.file = StringIO()
-        self.start_range = 0
-    
-    @property
-    def size(self):
-        if not hasattr(self, '_size'):
-            self._size = self._storage.size(self._name)
-        return self._size
-
-    def read(self, num_bytes=None):
-        if num_bytes is None:
-            args = []
-            self.start_range = 0
-        else:
-            args = [self.start_range, self.start_range+num_bytes-1]
-        data, etags, content_range = self._storage._read(self._name, *args)
-        if content_range is not None:
-            current_range, size = content_range.split(' ', 1)[1].split('/', 1)
-            start_range, end_range = current_range.split('-', 1)
-            self._size, self.start_range = int(size), int(end_range)+1
-        self.file = StringIO(data)
-        return self.file.getvalue()
-
-    def write(self, content):
-        if 'w' not in self._mode:
-            raise AttributeError("File was opened for read-only access.")
-        self.file = StringIO(content)
-        self._is_dirty = True
-
-    def close(self):
-        if self._is_dirty:
-            self._storage._put_file(self._name, self.file.getvalue())
-        self.file.close()
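The _make_request() wrapper above retries 500/503 responses with a randomized quadratic backoff: after attempt i (0-based) it sleeps uniform(0, ((i + 1) ** 2) / 2.0) seconds, i.e. caps of 0.5, 2, 4.5 and 8 seconds, and re-raises after the fifth failure. An equivalent standalone sketch:

    import random
    import time

    def retry(request, attempts=5):
        # request() is any callable returning a response with a .status attribute
        for i in range(attempts):
            response = request()
            if response.status not in (500, 503):
                return response
            if i == attempts - 1:
                raise IOError("S3StorageError: still failing after %d tries" % attempts)
            # sleep caps grow quadratically: 0.5s, 2s, 4.5s, 8s
            time.sleep(random.uniform(0, ((i + 1) ** 2) / 2.0))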

File backends/s3boto.py

-import os
-import mimetypes
-
-try:
-    from cStringIO import StringIO
-except ImportError:
-    from StringIO import StringIO
-
-from django.conf import settings
-from django.core.files.base import File
-from django.core.files.storage import Storage
-from django.core.files.base import ContentFile
-from django.utils.functional import curry
-from django.core.exceptions import ImproperlyConfigured
-
-try:
-    from boto.s3.connection import S3Connection
-    from boto.s3.key import Key
-    from boto.exception import S3CreateError
-except ImportError:
-    raise ImproperlyConfigured, "Could not load Boto's S3 bindings.\
-    \nSee http://code.google.com/p/boto/"
-
-ACCESS_KEY_NAME = 'AWS_ACCESS_KEY_ID'
-SECRET_KEY_NAME = 'AWS_SECRET_ACCESS_KEY'
-HEADERS         = 'AWS_HEADERS'
-BUCKET_NAME     = 'AWS_STORAGE_BUCKET_NAME'
-DEFAULT_ACL     = 'AWS_DEFAULT_ACL'
-QUERYSTRING_AUTH = 'AWS_QUERYSTRING_AUTH'
-QUERYSTRING_EXPIRE = 'AWS_QUERYSTRING_EXPIRE'
-IS_GZIPPED         = 'AWS_IS_GZIPPED'
-LOCATION           = 'AWS_LOCATION'
-
-BUCKET_PREFIX     = getattr(settings, BUCKET_NAME, {})
-HEADERS           = getattr(settings, HEADERS, {})
-DEFAULT_ACL       = getattr(settings, DEFAULT_ACL, 'public-read')
-QUERYSTRING_AUTH  = getattr(settings, QUERYSTRING_AUTH, True)
-QUERYSTRING_EXPIRE= getattr(settings, QUERYSTRING_EXPIRE, 3600)
-IS_GZIPPED        = getattr(settings, IS_GZIPPED, False) 
-GZIP_CONTENT_TYPES = (
-    'text/css',
-    'application/javascript',
-    'application/x-javascript'
-)
-GZIP_CONTENT_TYPES = getattr(settings, 'GZIP_CONTENT_TYPES', GZIP_CONTENT_TYPES)
-LOCATION          = getattr(settings, LOCATION, '')
-
-if IS_GZIPPED:
-    from gzip import GzipFile
-
-class S3BotoStorage(Storage):
-    """Amazon Simple Storage Service using Boto"""
-    
-    def __init__(self, bucket="root", bucketprefix=BUCKET_PREFIX, 
-            access_key=None, secret_key=None, acl=DEFAULT_ACL, headers=HEADERS,
-            gzip=IS_GZIPPED, gzip_content_types=GZIP_CONTENT_TYPES):
-
-
-    def __init__(self, bucket="root", bucketprefix=BUCKET_PREFIX,
-            access_key=None, secret_key=None, acl=DEFAULT_ACL, headers=HEADERS):
-
-        self.acl = acl
-        self.headers = headers
-        self.gzip = gzip
-        self.gzip_content_types = gzip_content_types
-        
-
-        if not access_key and not secret_key:
-             access_key, secret_key = self._get_access_keys()
-
-        self.connection = S3Connection(access_key, secret_key)
-        bucket_name = bucketprefix + bucket
-        try:
-            self.bucket = self.connection.create_bucket(bucket_name, {}, LOCATION)
-            self.bucket.set_acl(self.acl)
-        except S3CreateError:
-            # assuming we own it
-            self.bucket = self.connection.get_bucket(bucket_name)
-
-    def _get_access_keys(self):
-        access_key = getattr(settings, ACCESS_KEY_NAME, None)
-        secret_key = getattr(settings, SECRET_KEY_NAME, None)
-        if (access_key or secret_key) and (not access_key or not secret_key):
-            access_key = os.environ.get(ACCESS_KEY_NAME)
-            secret_key = os.environ.get(SECRET_KEY_NAME)
-
-        if access_key and secret_key:
-            # Both were provided, so use them
-            return access_key, secret_key
-
-        return None, None
-
-    def _clean_name(self, name):
-        # Useful for Windows paths
-        return os.path.normpath(name).replace('\\', '/')
-
-    def _compress_content(self, content):
-        """Gzip a given string."""
-        zbuf = StringIO()
-        zfile = GzipFile(mode='wb', compresslevel=6, fileobj=zbuf)
-        zfile.write(content.read())
-        zfile.close()
-        content.file = zbuf
-        return content
-        
-    def _open(self, name, mode='rb'):
-        name = self._clean_name(name)
-        return S3BotoStorageFile(name, mode, self)
-
-    def _save(self, name, content):
-        name = self._clean_name(name)
-        headers = self.headers
-
-        if hasattr(content.file, 'content_type'):
-            content_type = content.file.content_type
-        else:
-            content_type = mimetypes.guess_type(name)[0] or "application/x-octet-stream"
-            
-        if self.gzip and content_type in self.gzip_content_types:
-            content = self._compress_content(content)
-            headers.update({'Content-Encoding': 'gzip'})
-
-        headers.update({
-            'Content-Type': content_type,
-            'Content-Length' : len(content),
-        })
-        
-        content.name = name
-        k = self.bucket.get_key(name)
-        if not k:
-            k = self.bucket.new_key(name)
-        k.set_contents_from_file(content, headers=headers, policy=self.acl)
-        return name
-
-    def delete(self, name):
-        name = self._clean_name(name)
-        self.bucket.delete_key(name)
-
-    def exists(self, name):
-        name = self._clean_name(name)
-        k = Key(self.bucket, name)
-        return k.exists()
-
-    def listdir(self, name):
-        name = self._clean_name(name)
-        return [l.name for l in self.bucket.list() if not len(name) or l.name[:len(name)] == name]
-
-    def size(self, name):
-        name = self._clean_name(name)
-        return self.bucket.get_key(name).size
-
-    def url(self, name):
-        name = self._clean_name(name)
-        if self.bucket.get_key(name) is None:
-            return ''
-        return self.bucket.get_key(name).generate_url(QUERYSTRING_EXPIRE, method='GET', query_auth=QUERYSTRING_AUTH)
-
-    def get_available_name(self, name):
-        """ Overwrite existing file with the same name. """
-        name = self._clean_name(name)
-        return name
-
-
-class S3BotoStorageFile(File):
-    def __init__(self, name, mode, storage):
-        self._storage = storage
-        self.name = name
-        self._mode = mode
-        self.key = storage.bucket.get_key(name)
-        self._is_dirty = False
-        self.file = StringIO()
-
-    @property
-    def size(self):
-        return self.key.size
-
-    def read(self, *args, **kwargs):
-        self.file = StringIO()
-        self._is_dirty = False
-        self.key.get_contents_to_file(self.file)
-        return self.file.getvalue()
-
-    def write(self, content):
-        if 'w' not in self._mode:
-            raise AttributeError("File was opened for read-only access.")
-        self.file = StringIO(content)
-        self._is_dirty = True
-
-    def close(self):
-        if self._is_dirty:
-            self.key.set_contents_from_string(self.file.getvalue(), headers=self._storage.headers, acl=self._storage.acl)
-        self.key.close()
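For reference, the settings this backend reads via the getattr() calls at the top, with the code's defaults (note the quirk that AWS_STORAGE_BUCKET_NAME is consumed as BUCKET_PREFIX and concatenated with the bucket argument, which defaults to "root"):

    # settings.py -- defaults as coded above; the bucket prefix is the one
    # value a project has to supply
    AWS_STORAGE_BUCKET_NAME = 'mysite-'   # prefixed to bucket="root"
    AWS_DEFAULT_ACL = 'public-read'
    AWS_QUERYSTRING_AUTH = True           # url() returns signed, expiring URLs
    AWS_QUERYSTRING_EXPIRE = 3600         # signature lifetime in seconds
    AWS_IS_GZIPPED = False                # gzip matching content types on save
    AWS_LOCATION = ''                     # region passed to create_bucket()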

File backends/symlinkorcopy.py

-import os
-
-from django.conf import settings
-from django.core.files.storage import FileSystemStorage
-
-__doc__ = """
-I needed to efficiently create a mirror of a directory tree (so that 
-"origin pull" CDNs can automatically pull files). The trick was that 
-some files could be modified, and some could be identical to the original. 
-Of course it doesn't make sense to store the exact same data twice on the 
-file system. So I created SymlinkOrCopyStorage.
-
-SymlinkOrCopyStorage allows you to symlink a file when it's identical to 
-the original file and to copy the file if it's modified.
-Of course, it's impossible to know if a file is modified just by looking 
-at the file, without knowing what the original file was.
-That's what the symlinkWithin parameter is for. It accepts one or more paths 
-(if multiple, they should be concatenated using a colon (:)). 
-Files that will be saved using SymlinkOrCopyStorage are then checked on their 
-location: if they are within one of the symlink_within directories, 
-they will be symlinked, otherwise they will be copied.
-
-The rationale is that unmodified files will exist in their original location, 
-e.g. /htdocs/example.com/image.jpg and modified files will be stored in 
-a temporary directory, e.g. /tmp/image.jpg.
-"""
-
-class SymlinkOrCopyStorage(FileSystemStorage):
-    """Stores symlinks to files instead of actual files whenever possible
-    
-    When a file that's being saved is currently stored in the symlink_within
-    directory, then symlink the file. Otherwise, copy the file.
-    """
-    def __init__(self, location=settings.MEDIA_ROOT, base_url=settings.MEDIA_URL, 
-            symlink_within=None):
-        super(SymlinkOrCopyStorage, self).__init__(location, base_url)
-        self.symlink_within = symlink_within.split(":")
-
-    def _save(self, name, content):
-        full_path_dst = self.path(name)
-
-        directory = os.path.dirname(full_path_dst)
-        if not os.path.exists(directory):
-            os.makedirs(directory)
-        elif not os.path.isdir(directory):
-            raise IOError("%s exists and is not a directory." % directory)
-
-        full_path_src = os.path.abspath(content.name)
-
-        symlinked = False
-        # Only symlink if the current platform supports it.
-        if getattr(os, "symlink", False):
-            for path in self.symlink_within:
-                if full_path_src.startswith(path):
-                    os.symlink(full_path_src, full_path_dst)
-                    symlinked = True
-                    break
-
-        if not symlinked:
-            super(SymlinkOrCopyStorage, self)._save(name, content)
-
-        return name
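The colon-separated contract described in the module docstring translates to instantiation like this (paths are examples):

    from backends.symlinkorcopy import SymlinkOrCopyStorage

    # Files already living under either directory are symlinked into
    # MEDIA_ROOT; anything else is copied as a normal save.
    storage = SymlinkOrCopyStorage(
        symlink_within="/htdocs/example.com:/var/originals")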

File docs/Makefile

+# Makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS    =
+SPHINXBUILD   = sphinx-build
+PAPER         =
+BUILDDIR      = _build
+
+# Internal variables.
+PAPEROPT_a4     = -D latex_paper_size=a4
+PAPEROPT_letter = -D latex_paper_size=letter
+ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+
+.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest
+
+help:
+	@echo "Please use \`make <target>' where <target> is one of"
+	@echo "  html       to make standalone HTML files"
+	@echo "  dirhtml    to make HTML files named index.html in directories"
+	@echo "  singlehtml to make a single large HTML file"
+	@echo "  pickle     to make pickle files"
+	@echo "  json       to make JSON files"
+	@echo "  htmlhelp   to make HTML files and a HTML help project"
+	@echo "  qthelp     to make HTML files and a qthelp project"
+	@echo "  devhelp    to make HTML files and a Devhelp project"
+	@echo "  epub       to make an epub"
+	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
+	@echo "  text       to make text files"
+	@echo "  man        to make manual pages"
+	@echo "  changes    to make an overview of all changed/added/deprecated items"
+	@echo "  linkcheck  to check all external links for integrity"
+	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
+
+clean:
+	-rm -rf $(BUILDDIR)/*
+
+html:
+	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
+
+dirhtml:
+	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
+
+singlehtml:
+	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
+	@echo
+	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
+
+pickle:
+	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
+	@echo
+	@echo "Build finished; now you can process the pickle files."
+
+json:
+	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
+	@echo
+	@echo "Build finished; now you can process the JSON files."
+
+htmlhelp:
+	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
+	@echo
+	@echo "Build finished; now you can run HTML Help Workshop with the" \
+	      ".hhp project file in $(BUILDDIR)/htmlhelp."
+
+qthelp:
+	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
+	@echo
+	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
+	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
+	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/django-storages.qhcp"
+	@echo "To view the help file:"
+	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/django-storages.qhc"
+
+devhelp:
+	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
+	@echo
+	@echo "Build finished."
+	@echo "To view the help file:"
+	@echo "# mkdir -p $$HOME/.local/share/devhelp/django-storages"
+	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/django-storages"
+	@echo "# devhelp"
+
+epub:
+	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
+	@echo
+	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
+
+latex:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo
+	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
+	@echo "Run \`make' in that directory to run these through (pdf)latex" \
+	      "(use \`make latexpdf' here to do that automatically)."
+
+latexpdf:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo "Running LaTeX files through pdflatex..."
+	make -C $(BUILDDIR)/latex all-pdf
+	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
+
+text:
+	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
+	@echo
+	@echo "Build finished. The text files are in $(BUILDDIR)/text."
+
+man:
+	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
+	@echo
+	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
+
+changes:
+	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
+	@echo
+	@echo "The overview file is in $(BUILDDIR)/changes."
+
+linkcheck:
+	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
+	@echo
+	@echo "Link check complete; look for any errors in the above output " \
+	      "or in $(BUILDDIR)/linkcheck/output.txt."
+
+doctest:
+	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
+	@echo "Testing of doctests in the sources finished, look at the " \
+	      "results in $(BUILDDIR)/doctest/output.txt."

File docs/backends/amazon-S3.rst

View file
+Amazon S3
+=========
+
+Usage
+*****
+
+There are two backends for interacting with S3. The first is the s3 backend (in storages/backends/s3.py), which is simple and based on the Amazon S3 Python library. The second is the s3boto backend (in storages/backends/s3boto.py), which is well-maintained by the community and generally more robust, with features such as connection pooling. s3boto requires the python-boto library.
+
+Settings
+--------
+
+``DEFAULT_FILE_STORAGE``
+
+This setting sets the import path to the S3 storage class: everything up to the last dot is the module path, and the last part is the class name. If you have example.com on your PYTHONPATH and keep your storage file in example.com/libs/storages/S3Storage.py, the resulting setting will be::
+
+    DEFAULT_FILE_STORAGE = 'libs.storages.S3Storage.S3Storage'
+
+or if you installed using setup.py::
+
+    DEFAULT_FILE_STORAGE = 'storages.backends.s3.S3Storage'
+
+If you keep the same filename as in the repository, the setting will always end with S3Storage.S3Storage.
+
+To use s3boto, this setting will be::
+
+    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
+
+``AWS_ACCESS_KEY_ID``
+
+Your Amazon Web Services access key, as a string.
+
+``AWS_SECRET_ACCESS_KEY``
+
+Your Amazon Web Services secret access key, as a string.
+
+``AWS_STORAGE_BUCKET_NAME``
+
+Your Amazon Web Services storage bucket name, as a string.
+
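+A minimal settings.py sketch combining the settings above (the key and
+bucket values are placeholders)::
+
+    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
+    AWS_ACCESS_KEY_ID = 'your-access-key-id'
+    AWS_SECRET_ACCESS_KEY = 'your-secret-access-key'
+    AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
+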
+``AWS_CALLING_FORMAT`` (s3 backend only; the subdomain format is hardcoded in s3boto)
+
+The way you'd like to call the Amazon Web Services API, for instance if you prefer subdomains::
+
+    from S3 import CallingFormat
+    AWS_CALLING_FORMAT = CallingFormat.SUBDOMAIN
+
+``AWS_HEADERS`` (optional)
+
+If you'd like to set the headers sent with each file of the storage::
+
+    # see http://developer.yahoo.com/performance/rules.html#expires
+    AWS_HEADERS = {
+        'Expires': 'Thu, 15 Apr 2010 20:00:00 GMT',
+        'Cache-Control': 'max-age=86400',
+    }
+
+To let ``django-admin.py collectstatic`` automatically put your static files in your bucket, set the following in your settings.py::
+
+    STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
+
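+If you want static files in a bucket of their own, one option is a small
+subclass (a sketch: it relies on the ``bucket`` argument accepted by
+S3BotoStorage's constructor, and the bucket name and module path below are
+placeholders)::
+
+    # myproject/storage.py (hypothetical module)
+    from storages.backends.s3boto import S3BotoStorage
+
+    class StaticS3BotoStorage(S3BotoStorage):
+        def __init__(self, *args, **kwargs):
+            # send static files to a dedicated bucket instead of the default
+            kwargs.setdefault('bucket', 'my-static-bucket')
+            super(StaticS3BotoStorage, self).__init__(*args, **kwargs)
+
+and point ``STATICFILES_STORAGE`` at ``'myproject.storage.StaticS3BotoStorage'``.
+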
+Fields
+------
+
+Once you're done, default_storage will be the S3 storage::
+
+    >>> from django.core.files.storage import default_storage
+    >>> print default_storage.__class__
+    <class 'S3Storage.S3Storage'>
+
+The above doesn't seem to hold for Django 1.3+; instead, inspect the connection::
+
+    >>> from django.core.files.storage import default_storage
+    >>> print default_storage.connection
+    S3Connection:s3.amazonaws.com
+
+This way, if you define a new FileField, it will use the S3 storage::
+
+    >>> from django.db import models
+    >>> class Resume(models.Model):
+    ...     pdf = models.FileField(upload_to='pdfs')
+    ...     photos = models.ImageField(upload_to='photos')
+    ...
+    >>> resume = Resume()
+    >>> print resume.pdf.storage
+    <S3Storage.S3Storage object at ...>
+
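+Saving through the field then yields an S3-backed URL (a sketch: the bucket
+name below is a placeholder, and s3boto normally appends signed query
+parameters, elided here)::
+
+    >>> from django.core.files.base import ContentFile
+    >>> resume.pdf.save('resume.pdf', ContentFile('fake PDF bytes'), save=False)
+    >>> resume.pdf.url
+    'https://mybucket.s3.amazonaws.com/pdfs/resume.pdf?...'
+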
+Tests
+*****
+
+Initialization::
+
+    >>> from django.core.files.storage import default_storage
+    >>> from django.core.files.base import ContentFile
+    >>> from django.core.cache import cache
+    >>> from models import MyStorage
+
+Storage
+-------
+
+Standard file access options are available, and work as expected::
+
+    >>> default_storage.exists('storage_test')
+    False
+    >>> file = default_storage.open('storage_test', 'w')
+    >>> file.write('storage contents')
+    >>> file.close()
+
+    >>> default_storage.exists('storage_test')
+    True
+    >>> file = default_storage.open('storage_test', 'r')
+    >>> file.read()
+    'storage contents'
+    >>> file.close()
+
+    >>> default_storage.delete('storage_test')
+    >>> default_storage.exists('storage_test')
+    False
+
+Model
+-----
+
+An object without a file has limited functionality::
+
+    >>> obj1 = MyStorage()
+    >>> obj1.normal
+    <FieldFile: None>
+    >>> obj1.normal.size
+    Traceback (most recent call last):
+    ...
+    ValueError: The 'normal' attribute has no file associated with it.
+
+Saving a file enables full functionality::
+
+    >>> obj1.normal.save('django_test.txt', ContentFile('content'))
+    >>> obj1.normal
+    <FieldFile: tests/django_test.txt>
+    >>> obj1.normal.size
+    7
+    >>> obj1.normal.read()
+    'content'
+
+Files can be read in a little at a time, if necessary::
+
+    >>> obj1.normal.open()
+    >>> obj1.normal.read(3)
+    'con'
+    >>> obj1.normal.read()
+    'tent'
+    >>> '-'.join(obj1.normal.chunks(chunk_size=2))
+    'co-nt-en-t'
+
+Save another file with the same name::
+
+    >>> obj2 = MyStorage()
+    >>> obj2.normal.save('django_test.txt', ContentFile('more content'))
+    >>> obj2.normal
+    <FieldFile: tests/django_test_.txt>
+    >>> obj2.normal.size
+    12
+
+Push the objects into the cache to make sure they pickle properly::
+
+    >>> cache.set('obj1', obj1)
+    >>> cache.set('obj2', obj2)
+    >>> cache.get('obj2').normal
+    <FieldFile: tests/django_test_.txt>
+
+Deleting an object deletes the file it uses, if there are no other objects still using that file::
+
+    >>> obj2.delete()
+    >>> obj2.normal.save('django_test.txt', ContentFile('more content'))
+    >>> obj2.normal
+    <FieldFile: tests/django_test_.txt>
+
+Default values allow an object to access a single file::
+
+    >>> obj3 = MyStorage.objects.create()
+    >>> obj3.default
+    <FieldFile: tests/default.txt>
+    >>> obj3.default.read()
+    'default content'
+
+But it shouldn't be deleted, even if there are no more objects using it::
+
+    >>> obj3.delete()
+    >>> obj3 = MyStorage()
+    >>> obj3.default.read()
+    'default content'
+
+Verify the fix for #5655, making sure the directory is only determined once::
+
+    >>> obj4 = MyStorage()
+    >>> obj4.random.save('random_file', ContentFile('random content'))
+    >>> obj4.random
+    <FieldFile: .../random_file>
+
+Clean up the temporary files::
+
+    >>> obj1.normal.delete()
+    >>> obj2.normal.delete()
+    >>> obj3.default.delete()
+    >>> obj4.random.delete()

File docs/backends/couchdb.rst

View file
+CouchDB
+=======
+
+A custom storage system for Django with a CouchDB backend.
+

File docs/backends/database.rst

View file
+Database
+========
+
+The DatabaseStorage class can be used with either FileField or ImageField. It maps filenames to database blobs, so you have to use it with a special additional table created manually. The table should contain a pk column for filenames (preferably the same type that FileField uses: nvarchar(100)), a blob column (e.g. of type image), and a size column (bigint). You can't simply add a blob column to the same table where the FileField is defined, since there is no way to find the required row in the save() method. The size column is also required for better performance (see the size() method).
+
+You can therefore use it with several FileFields, even ones with different "upload_to" values. In effect it implements a kind of root filesystem, where you define directories via "upload_to" on each FileField and store any files in those directories.
+
+It uses either settings.DB_FILES_URL or the constructor parameter 'base_url' (see __init__()) to build URLs to files. The base URL should be mapped to a view that provides access to the files. To store files in the same table where the FileField is defined, you have to define your own field and provide an extra argument (e.g. the pk) to save().
+
+Raw SQL is used for all operations. In the constructor, or in DB_FILES in settings.py, specify a dictionary with db_table, fname_column, blob_column, size_column, and base_url. For example, I just put the following into settings.py::
+
+    DB_FILES = {
+        'db_table': 'FILES',
+        'fname_column':  'FILE_NAME',
+        'blob_column': 'BLOB',
+        'size_column': 'SIZE',
+        'base_url': 'http://localhost/dbfiles/'
+    }
+
+And use it with ImageField as following::
+
+    player_photo = models.ImageField(upload_to="player_photos", storage=DatabaseStorage() )
+
+The DatabaseStorage class uses your settings.py file to make a custom connection to your database.
+
+The reason for using a custom connection is http://code.djangoproject.com/ticket/5135. The connection string looks like::
+
+    cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=testdb;UID=me;PWD=pass')
+
+It's based on the pyodbc module, so it can be used with any database supported by pyodbc. I've tested it with MS SQL Express 2005.
+
+Note: It returns a special path, which should be mapped to a special view that returns the requested file::
+
+    def image_view(request, filename):
+        import os
+        from django.http import HttpResponse
+        from django.conf import settings
+        from django.utils._os import safe_join
+        from filestorage import DatabaseStorage
+        from django.core.exceptions import ObjectDoesNotExist
+
+        storage = DatabaseStorage()
+
+        try:
+            image_file = storage.open(filename, 'rb')
+            file_content = image_file.read()
+        except Exception:
+            # fall back to a placeholder image if the file cannot be read
+            filename = 'no_image.gif'
+            path = safe_join(os.path.abspath(settings.MEDIA_ROOT), filename)
+            if not os.path.exists(path):
+                raise ObjectDoesNotExist
+            no_image = open(path, 'rb')
+            file_content = no_image.read()
+
+        response = HttpResponse(file_content, mimetype="image/jpeg")
+        response['Content-Disposition'] = 'inline; filename=%s'%filename
+        return response
+
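+To wire this up, map the base_url prefix from DB_FILES to the view in your
+urls.py (a sketch: 'myapp.views.image_view' is a hypothetical path; point it
+at wherever you defined image_view)::
+
+    urlpatterns += patterns('',
+        (r'^dbfiles/(?P<filename>.+)$', 'myapp.views.image_view'),
+    )
+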
+Note: If the filename already exists, the blob will be overwritten. To change this, remove get_available_name(self, name), so that Storage.get_available_name(self, name) will be used to generate a new filename.
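+
+If you'd rather keep both files, a subclass sketch that defers to Django's
+default renaming behaviour (the subclass name is hypothetical)::
+
+    from django.core.files.storage import Storage
+    from filestorage import DatabaseStorage
+
+    class RenamingDatabaseStorage(DatabaseStorage):
+        def get_available_name(self, name):
+            # Django's default keeps altering the name until it is unused,
+            # instead of overwriting the existing blob
+            return Storage.get_available_name(self, name)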

File docs/backends/ftp.rst

View file
+FTP
+===
+
+.. warning:: This FTP storage is not prepared to work with large files, because it uses memory for temporary data storage. It also does not close the FTP connection automatically (it opens the connection lazily and tries to reestablish it when disconnected).
+
+This implementation was written primarily for uploading files from the admin to a remote FTP location and reading them back on the site over HTTP. It has mostly been tested in that configuration, so reading/writing through the FTPStorageFile class may break.
+
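+A minimal configuration sketch (assuming the ``FTP_STORAGE_LOCATION`` setting
+read by storages/backends/ftp.py; host and credentials are placeholders)::
+
+    DEFAULT_FILE_STORAGE = 'storages.backends.ftp.FTPStorage'
+    FTP_STORAGE_LOCATION = 'ftp://user:password@ftp.example.com:21/media/'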

File docs/backends/image.rst

View file
+Image
+=====
+
+A custom FileSystemStorage made for normalizing extensions. It lets PIL look at the file to determine the format and appends an always-lowercase extension based on the result.
+
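+A usage sketch, assuming the class is importable as
+storages.backends.image.ImageStorage (matching the repository layout
+referenced elsewhere in these docs)::
+
+    from django.db import models
+
+    from storages.backends.image import ImageStorage
+
+    class Photo(models.Model):
+        # PIL inspects the uploaded file, and a normalized, always
+        # lower-case extension is appended based on the detected format
+        image = models.ImageField(upload_to='photos', storage=ImageStorage())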

File docs/backends/mogilefs.rst

View file
+MogileFS
+========
+
+This storage allows you to use MogileFS; it comes from this blog post.
+
+The MogileFS storage backend is fairly simple: it uses URLs (or, rather, parts of URLs) as keys into the mogile database. When the user requests a file stored by mogile (say, an avatar), the URL gets passed to a view which, using a client to the mogile tracker, retrieves the "correct" path (the path that points to the actual file data). The view then either returns the path(s) to perlbal to reproxy or, if you're not using perlbal to reproxy (which you should), serves the file data directly from Django.
+
+* ``MOGILEFS_DOMAIN``: The mogile domain that files should read from/written to, e.g "production"
+* ``MOGILEFS_TRACKERS``: A list of trackers to connect to, e.g. ["foo.sample.com:7001", "bar.sample.com:7001"]
+* ``MOGILEFS_MEDIA_URL`` (optional): The prefix for URLs that point to mogile files. This is used in a similar way to ``MEDIA_URL``, e.g. "/mogilefs/"
+* ``SERVE_WITH_PERLBAL``: Boolean that, when True, will pass the paths back in the response in the ``X-REPROXY-URL`` header. If False, django will serve all mogile media files itself (bad idea for production, but useful if you're testing on a setup that doesn't have perlbal running)
+* ``DEFAULT_FILE_STORAGE``: This is the class that's used for the backend. You'll want to set this to ``project.app.storages.MogileFSStorage`` (or wherever you've installed the backend)
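+
+A settings.py sketch pulling these together (hostnames and the storage
+module path are the example values from the descriptions above)::
+
+    MOGILEFS_DOMAIN = 'production'
+    MOGILEFS_TRACKERS = ['foo.sample.com:7001', 'bar.sample.com:7001']
+    MOGILEFS_MEDIA_URL = '/mogilefs/'
+    SERVE_WITH_PERLBAL = True
+    DEFAULT_FILE_STORAGE = 'project.app.storages.MogileFSStorage'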
+
+Getting files into mogile
+*************************
+
+The great thing about file backends is that we just need to specify the backend in the model file and everything is taken care of for us - all the default save() methods work correctly.
+
+For Fluther, we have two main media types we use mogile for: avatars and thumbnails. Mogile defines "classes" that dictate how each type of file is replicated - so you can make sure you have 3 copies of the original avatar but only 1 of the thumbnail.
+
+In order for classes to behave nicely with the backend framework, we've had to do a little tomfoolery. (This is something that may change in future versions of the filestorage framework).
+
+Here's what the models.py file looks like for the avatars::
+
+    from django.contrib.auth.models import User
+    from django.core.files.storage import default_storage
+    from django.db import models
+
+    # TODO: Find a better way to deal with classes. Maybe a generator?
+    class AvatarStorage(default_storage.__class__):
+        mogile_class = 'avatar'
+
+    class ThumbnailStorage(default_storage.__class__):
+        mogile_class = 'thumb'
+
+    class Avatar(models.Model):
+        user = models.ForeignKey(User, null=True, blank=True)
+        image = models.ImageField(storage=AvatarStorage())
+        thumb = models.ImageField(storage=ThumbnailStorage())
+
+Each of the custom storage classes defines a class attribute which gets passed to the mogile backend behind the scenes. If you don't want to worry about mogile classes, you don't need to define a custom storage engine or specify one on the field - the default should work just fine.
+
+Serving files from mogile
+*************************
+
+Now, all we need to do is plug in the view that serves up mogile data.
+
+Here's what we use::
+
+    urlpatterns += patterns('',
+        (r'^%s(?P<key>.*)' % settings.MOGILEFS_MEDIA_URL[1:],
+            'MogileFSStorage.serve_mogilefs_file')
+    )
+
+Any URL beginning with the value of ``MOGILEFS_MEDIA_URL`` will get passed to our view. Since ``MOGILEFS_MEDIA_URL`` requires a leading slash (like ``MEDIA_URL``), we strip that off and pass the rest of the URL over to the view.
+
+That's it! Happy mogiling!

File docs/backends/mongodb.rst

View file
+MongoDB
+=======
+
+A GridFS backend that works with django_mongodb_engine and the upcoming GSoC 2010 MongoDB backend, which is being developed by Alex Gaynor.
+
+Usage (in settings.py)::
+