Commits

Anonymous committed 68ca430 Merge with conflicts

Merge branch 'master' of http://github.com/sjl/django-hoptoad

Conflicts:
README.rst
setup.py

  • Parent commits f9f18d2, 85cf908


Files changed (17)

 *.swp
 *.swo
 .DS_Store
-
 docs/.html
 docs/.tmp
+.git*
-Copyright (c) 2009 Steve Losh
+Copyright (c) 2009-2010 Steve Losh and contributors
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
 
 django-hoptoad requires:
 
-* Python_ 2.5+ (preferably 2.6+ as that's what I've tested it with)
-* PyYAML_ (`pip install pyyaml` or `easy_install pyyaml`)
+* Python_ 2.6
 * Django_ 1.0+
 * A Hoptoad_ account
 
 .. _Python: http://python.org/
-.. _PyYAML: http://pyyaml.org/
 
 
 Installation
     HOPTOAD_API_KEY = 'Your Hoptoad API key.'
 
 
+Advanced Usage
+--------------
+
+There are more advanced options available to customize your Hoptoad_ notification process; see the `Configuration guide <http://sjl.bitbucket.org/django-hoptoad/config/>`_ for details.
+
 Documentation
 -------------
 

File docs/wiki/config/index.mdown

 
     HOPTOAD_NOTIFY_WHILE_DEBUG = True
 
+Specify an Environment Name
+---------------------------
+
+If your application is deployed in multiple places, an environment name distinguishes production servers from QA or staging servers, so you know which server an error occurred on. Hoptoad's API appears to accept any environment name, but typical examples are 'Production', 'QA', 'Test', and 'Development'. If you have one `settings.py` per environment, you can set this quite simply:
+
+    HOPTOAD_ENV_NAME = 'Production'
+
+If you have a single `settings.py` shared between environments, you may want to set this more dynamically; how you do so is limited only by Python itself. For example:
+
+    HOPTOAD_ENV_NAME = 'Test' if DEBUG else 'Production'
+
+Or:
+
+    import platform
+    HOPTOAD_ENV_NAME = platform.node()
+
+If `HOPTOAD_ENV_NAME` is not set, the middleware will send 'Unknown' as the environment name by default.
+
 Specify a Default Timeout
 -------------------------
 
 
     HOPTOAD_IGNORE_AGENTS = ['Googlebot', 'Yahoo! Slurp', 'YahooSeeker']
 
+Use SSL to POST to Hoptoad
+--------------------------
+
+If you want to use SSL (and your account plan supports it) you can use the following setting to enable SSL POSTs:
+
+    HOPTOAD_USE_SSL = True
+
+This will force all HTTP requests to use SSL. There is always a possibility, due to an account downgrade or an expired SSL certificate, that Hoptoad will return an error code of `403` on a POST. There is built-in support for automatically re-POSTing the same error message without SSL. To enable this feature, just add this option:
+
+    HOPTOAD_NO_SSL_FALLBACK = True
+
+This will force a fallback to a non-SSL HTTP post to Hoptoad if the SSL post fails.
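Taken together, the two settings amount to something like the following sketch (the `post` callable is hypothetical, standing in for the actual HTTP request; the real logic lives in the middleware's reporting code):

```python
def notify(post, use_ssl, allow_fallback):
    """Try an SSL POST first; on a 403, optionally retry over plain HTTP.

    `post(scheme)` is a hypothetical stand-in for the real HTTP POST;
    it returns an HTTP status code.
    """
    status = post('https' if use_ssl else 'http')
    if status == 403 and use_ssl and allow_fallback:
        # SSL was rejected (e.g. the account plan no longer supports it),
        # so fall back to a plain HTTP POST as described above
        status = post('http')
    return status
```
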
+
+Hide Sensitive Request Parameters
+---------------------------------
+
+If a user submits important data (credit card numbers, for example) with a GET
+or POST request and an error occurs, that data will be passed along to
+Hoptoad. If you want to blank out the contents of certain parameters you can
+use this option:
+
+    HOPTOAD_PROTECTED_PARAMS = ['credit_card_number', 'ssn']
+
+Any parameter in this list will have its contents replaced with
+`********************` before it is sent to Hoptoad.
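The masking behavior can be sketched roughly as follows (a simplified, illustrative stand-in for the middleware's internal logic, not its exact implementation):

```python
def mask_params(params, protected):
    """Return a copy of `params` with protected keys blanked out.

    The middleware applies this kind of masking to request GET/POST
    data before the payload is sent to Hoptoad.
    """
    masked = dict(params)
    for key in set(protected).intersection(masked):
        masked[key] = '*' * 20  # twenty asterisks, as documented
    return masked
```
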
+
+Asynchronous POSTs and Request Handlers
+---------------------------------------
+
+On a highly trafficked website there can be a noticeable delay when POSTing to Hoptoad -- whether due to rate limiting, network instability, or other acts of God that can slow down or fail an HTTP request. To mitigate this, django-hoptoad spawns a daemon thread by default. That thread maintains a thread pool (with 4 threads) which queues up all errors for maximum throughput. This behavior is configurable to your heart's content, including replacing the notification handler completely.
+
+To change the number of threads spawned per thread pool from the default of 4, set the following variable to your desired thread count:
+
+    HOPTOAD_THREAD_COUNT = 2
+
+There is also built-in support for communicating with Hoptoad **synchronously**:
+
+    HOPTOAD_HANDLER = "blocking"
+
+This variable is set to "threadpool" by default. 
+
+There are a few handlers to choose from (i.e. possible `HOPTOAD_HANDLER` settings):
+
+### "threadpool" 
+
+This is the default setting. It spawns a daemonized thread with a pool of 4 worker threads to handle all enqueued errors.
+
+### "blocking" 
+
+This switches from the thread pool approach to a blocking HTTP POST, where the entire Django process is halted until the call returns.
+
+Over time there will be more custom handlers with various options to control them.
+
+Writing and Using Custom Handlers
+---------------------------------
+
+There is support for drop-in replacement handlers, so you can write your own. All you need to do is write a class with an `enqueue` method that takes two parameters: `payload` and `timeout`. You'll also need to import the API module used to report errors.
+
+For example:
+
+    from hoptoad.api import htv2
+    
+    class SomeAwesomeReporting(object):
+        def enqueue(self, payload, timeout):
+            """This enqueue method is your own implementation"""
+            htv2.report(payload, timeout)
+
+You'll need to set two variables in `settings.py` to use your custom handler:
+
+    HOPTOAD_HANDLER = "/path/to/the/custom/implementation.py"
+    HOPTOAD_HANDLER_CLASS = "SomeAwesomeReporting"
+
+`HOPTOAD_HANDLER` is the file location to the module that contains your implementation of the custom handler and `HOPTOAD_HANDLER_CLASS` is the name of the actual handler class.
+
+Change the Hoptoad Notification URL
+-----------------------------------
+
+Hoptoad currently has its notification API at `http://hoptoadapp.com/notifier_api/v2/notices`, but this URL has already changed twice. It may change again, so it's configurable (in case you need to fix the problem before we have a chance to update django-hoptoad with the new URL):
+
+    HOPTOAD_NOTIFICATION_URL = "Hoptoad Notification URL here."
+
+This defaults to `http://hoptoadapp.com/notifier_api/v2/notices`.
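In other words, an explicit setting wins and the default is used otherwise. A minimal sketch of that lookup (the function name is illustrative):

```python
DEFAULT_NOTIFICATION_URL = 'http://hoptoadapp.com/notifier_api/v2/notices'

def notification_url(hoptoad_settings):
    """Return the configured notification URL, falling back to the default."""
    return hoptoad_settings.get('HOPTOAD_NOTIFICATION_URL',
                                DEFAULT_NOTIFICATION_URL)
```
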
+
+Group django-hoptoad Settings
+-----------------------------
+
+As you've probably noticed, the django-hoptoad settings are becoming numerous, so to help you keep your `settings.py` organized, we've included support for grouping them in a dictionary. You can group them using `HOPTOAD_SETTINGS`:
+
+    HOPTOAD_SETTINGS = {
+        'HOPTOAD_API_KEY': 'abc12345...',
+        'HOPTOAD_HANDLER': 'threadpool',
+        'HOPTOAD_THREAD_COUNT': 2,
+        'HOPTOAD_USE_SSL': True,
+        # ...
+    }
+
+
 Problems?
 ---------
 
 If you're having trouble you might want to take a look at the [Troubleshooting Guide][troubleshooting].
 
-[troubleshooting]: /troubleshooting/
+[troubleshooting]: /troubleshooting/

File hoptoad/__init__.py

+from django.conf import settings
+from django.core.exceptions import MiddlewareNotUsed
+from itertools import ifilter
+
+
+__version__ = '0.3'
+VERSION = __version__
+NAME = "django-hoptoad"
+URL = "http://sjl.bitbucket.org/django-hoptoad/"
+
+
+def get_hoptoad_settings():
+    hoptoad_settings = getattr(settings, 'HOPTOAD_SETTINGS', {})
+    
+    if not hoptoad_settings:
+        # do some backward compatibility work to combine all hoptoad
+        # settings in a dictionary
+        
+        # for every attribute that starts with hoptoad
+        for attr in ifilter(lambda x: x.startswith('HOPTOAD'), dir(settings)):
+            hoptoad_settings[attr] = getattr(settings, attr)
+        
+        if not hoptoad_settings:
+            # there were no settings for hoptoad at all..
+            # should probably log here
+            raise MiddlewareNotUsed
+    
+    return hoptoad_settings
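The backward-compatibility scan above can be illustrated outside Django with a plain object in place of `django.conf.settings` (the `FakeSettings` class is purely for illustration):

```python
class FakeSettings(object):
    """Illustrative stand-in for django.conf.settings."""
    HOPTOAD_API_KEY = 'abc123'
    HOPTOAD_USE_SSL = True
    DEBUG = False

def collect_hoptoad_settings(settings):
    """Gather every HOPTOAD_* attribute into a dict, mirroring the
    fallback branch of get_hoptoad_settings above."""
    return dict((attr, getattr(settings, attr))
                for attr in dir(settings) if attr.startswith('HOPTOAD'))
```
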

File hoptoad/api/__init__.py

Empty file added.

File hoptoad/api/htv1.py

+import traceback
+import urllib2
+import yaml
+
+from django.views.debug import get_safe_settings
+from django.conf import settings
+
+from hoptoad import get_hoptoad_settings
+
+
+PROTECTED_PARAMS = frozenset(get_hoptoad_settings().get('HOPTOAD_PROTECTED_PARAMS', []))
+
+def _parse_environment(request):
+    """Return an environment mapping for a notification from the given request."""
+    env = dict( (str(k), str(v)) for (k, v) in get_safe_settings().items() )
+    env.update( dict( (str(k), str(v)) for (k, v) in request.META.items() ) )
+    
+    env['REQUEST_URI'] = request.build_absolute_uri()
+    
+    return env
+
+def _parse_traceback(trace):
+    """Return the given traceback string formatted for a notification."""
+    p_traceback = [ "%s:%d:in `%s'" % (filename, lineno, funcname)
+                    for filename, lineno, funcname, _
+                    in traceback.extract_tb(trace) ]
+    p_traceback.reverse()
+    
+    return p_traceback
+
+def _parse_message(exc):
+    """Return a message for a notification from the given exception."""
+    return '%s: %s' % (exc.__class__.__name__, str(exc))
+
+def _parse_request(request):
+    """Return a request mapping for a notification from the given request."""
+    request_get = dict( (str(k), str(v)) for (k, v) in request.GET.items() )
+    request_post = dict( (str(k), str(v)) for (k, v) in request.POST.items() )
+    
+    data = request_post or request_get
+    for k in PROTECTED_PARAMS.intersection(data.keys()):
+        data[k] = '********************'
+    
+    return data
+
+def _parse_session(session):
+    """Return a request mapping for a notification from the given session."""
+    return dict( (str(k), str(v)) for (k, v) in session.items() )
+
+
+def _generate_payload(request, exc=None, trace=None, message=None, error_class=None):
+    """Generate a YAML payload for a Hoptoad notification.
+    
+    Parameters:
+    request -- A Django HTTPRequest.  This is required.
+    
+    Keyword parameters:
+    exc -- A Python Exception object.  If this is not given, the
+           message parameter must be.
+    trace -- A Python Traceback object.  This is not required.
+    message -- A string representing the error message.  If this is not
+               given, the exc parameter must be.
+    error_class -- A string representing the error class.  If this is not
+                   given, the exc parameter must be.
+    """
+    p_message = message if message else _parse_message(exc)
+    p_error_class = error_class if error_class else exc.__class__.__name__
+    p_traceback = _parse_traceback(trace) if trace else []
+    p_environment = _parse_environment(request)
+    p_request = _parse_request(request)
+    p_session = _parse_session(request.session)
+    
+    return yaml.dump({ 'notice': {
+        'api_key':       settings.HOPTOAD_API_KEY,
+        'error_class':   p_error_class,
+        'error_message': p_message,
+        'backtrace':     p_traceback,
+        'request':       { 'url': request.build_absolute_uri(),
+                           'params': p_request },
+        'session':       { 'key': '', 'data': p_session },
+        'environment':   p_environment,
+    }}, default_flow_style=False)
+
+def _ride_the_toad(payload, timeout):
+    """Send a notification (an HTTP POST request) to Hoptoad.
+    
+    Parameters:
+    payload -- the YAML payload for the request from _generate_payload()
+    timeout -- the maximum timeout, in seconds, or None to use the default
+    """
+    headers = { 'Content-Type': 'application/x-yaml',
+                'Accept': 'text/xml, application/xml', }
+    r = urllib2.Request('http://hoptoadapp.com/notices', payload, headers)
+    try:
+        if timeout:
+            urllib2.urlopen(r, timeout=timeout)
+        else:
+            urllib2.urlopen(r)
+    except urllib2.URLError:
+        pass
+
+def report(payload, timeout):
+    return _ride_the_toad(payload, timeout)

File hoptoad/api/htv2.py

+import sys
+import traceback
+import urllib2
+from xml.dom.minidom import getDOMImplementation
+
+from django.views.debug import get_safe_settings
+from django.conf import settings
+
+from hoptoad import VERSION, NAME, URL
+from hoptoad import get_hoptoad_settings
+from hoptoad.api.htv1 import _parse_environment, _parse_request, _parse_session
+from hoptoad.api.htv1 import _parse_message
+
+
+def _class_name(class_):
+    return class_.__class__.__name__
+
+def _handle_errors(request, response):
+    if response:
+        code = "Http%s" % response
+        msg = "%(code)s: %(response)s at %(uri)s" % {
+                   'code': code,
+                   'response': { 'Http403': "Forbidden",
+                                 'Http404': "Page not found" }[code],
+                   'uri': request.build_absolute_uri()
+        }
+        return (code, msg)
+    
+    exc, inst = sys.exc_info()[:2]
+    return _class_name(inst), _parse_message(inst)
+
+
+def generate_payload(request_tuple):
+    """Generate an XML payload for a Hoptoad notification.
+    
+    Parameters:
+    
+    request_tuple -- A tuple containing a Django HTTPRequest and a possible
+                     response code.
+    """
+    request, response = request_tuple
+    hoptoad_settings = get_hoptoad_settings()
+    
+    p_error_class, p_message = _handle_errors(request, response)
+    
+    # api v2 from: http://help.hoptoadapp.com/faqs/api-2/notifier-api-v2
+    xdoc = getDOMImplementation().createDocument(None, "notice", None)
+    notice = xdoc.firstChild
+    
+    # /notice/@version -- should be 2.0
+    notice.setAttribute('version', '2.0')
+    
+    # /notice/api-key
+    api_key = xdoc.createElement('api-key')
+    api_key_data = xdoc.createTextNode(hoptoad_settings['HOPTOAD_API_KEY'])
+    api_key.appendChild(api_key_data)
+    notice.appendChild(api_key)
+    
+    # /notice/notifier/name
+    # /notice/notifier/version
+    # /notice/notifier/url
+    notifier = xdoc.createElement('notifier')
+    for key, value in zip(["name", "version", "url"], [NAME, VERSION, URL]):
+        key = xdoc.createElement(key)
+        value = xdoc.createTextNode(str(value))
+        key.appendChild(value)
+        notifier.appendChild(key)
+    notice.appendChild(notifier)
+    
+    # /notice/error/class
+    # /notice/error/message
+    error = xdoc.createElement('error')
+    for key, value in zip(["class", "message"], [p_error_class, p_message]):
+        key = xdoc.createElement(key)
+        value = xdoc.createTextNode(value)
+        key.appendChild(value)
+        error.appendChild(key)
+    
+    # /notice/error/backtrace/error/line
+    backtrace = xdoc.createElement('backtrace')
+    
+    # I do this here because I'm afraid of a circular reference.
+    reversed_backtrace = list(
+        reversed(traceback.extract_tb(sys.exc_info()[2]))
+    )
+    
+    if reversed_backtrace:
+        for filename, lineno, funcname, text in reversed_backtrace:
+            line = xdoc.createElement('line')
+            line.setAttribute('file', str(filename))
+            line.setAttribute('number', str(lineno))
+            line.setAttribute('method', str(funcname))
+            backtrace.appendChild(line)
+    else:
+        line = xdoc.createElement('line')
+        line.setAttribute('file', 'unknown')
+        line.setAttribute('number', '0')
+        line.setAttribute('method', 'unknown')
+        backtrace.appendChild(line)
+    error.appendChild(backtrace)
+    notice.appendChild(error)
+    
+    # /notice/request
+    xrequest = xdoc.createElement('request')
+    
+    # /notice/request/url -- request.build_absolute_uri()
+    xurl = xdoc.createElement('url')
+    xurl_data = xdoc.createTextNode(request.build_absolute_uri())
+    xurl.appendChild(xurl_data)
+    xrequest.appendChild(xurl)
+    
+    # /notice/request/component -- not sure..
+    comp = xdoc.createElement('component')
+    #comp_data = xdoc.createTextNode('')
+    xrequest.appendChild(comp)
+    
+    # /notice/request/action -- the action in which the error occurred
+    
+    # sjl: "actions" are the Rails equivalent of Django's views
+    #      Is there a way to figure out which view a request object went to
+    #      (if any)?  Anyway, it's not GET/POST so I'm commenting it for now.
+    
+    #action = xdoc.createElement('action') # maybe GET/POST??
+    #action_data = u"%s %s" % (request.method, request.META['PATH_INFO'])
+    #action_data = xdoc.createTextNode(action_data)
+    #action.appendChild(action_data)
+    #xrequest.appendChild(action)
+    
+    # /notice/request/params/var -- check request.GET/request.POST
+    req_params = _parse_request(request).items()
+    if req_params:
+        params = xdoc.createElement('params')
+        for key, value in req_params:
+            var = xdoc.createElement('var')
+            var.setAttribute('key', key)
+            value = xdoc.createTextNode(str(value))
+            var.appendChild(value)
+            params.appendChild(var)
+        xrequest.appendChild(params)
+    
+    # /notice/request/session/var -- check if sessions is enabled..
+    sessions = xdoc.createElement('session')
+    for key, value in _parse_session(request.session).iteritems():
+        var = xdoc.createElement('var')
+        var.setAttribute('key', key)
+        value = xdoc.createTextNode(str(value))
+        var.appendChild(value)
+        sessions.appendChild(var)
+    xrequest.appendChild(sessions)
+    
+    # /notice/request/cgi-data/var -- all meta data
+    cgidata = xdoc.createElement('cgi-data')
+    for key, value in _parse_environment(request).iteritems():
+        var = xdoc.createElement('var')
+        var.setAttribute('key', key)
+        value = xdoc.createTextNode(str(value))
+        var.appendChild(value)
+        cgidata.appendChild(var)
+    xrequest.appendChild(cgidata)
+    notice.appendChild(xrequest)
+    
+    # /notice/server-environment
+    serverenv = xdoc.createElement('server-environment')
+    
+    # /notice/server-environment/project-root -- default to sys.path[0] 
+    projectroot = xdoc.createElement('project-root')
+    projectroot.appendChild(xdoc.createTextNode(sys.path[0]))
+    serverenv.appendChild(projectroot)
+    
+    # /notice/server-environment/environment-name
+    envname = xdoc.createElement('environment-name')
+    
+    # sjl: This is supposed to be set to something like "test", "staging",
+    #      or "production" to help you group the errors in the web interface.
+    #      I'm still thinking about the best way to support this.
+    
+    # bmjames: Taking this from a settings variable. I personally have a
+    #          different settings.py for every environment and my deploy
+    #          script puts the correct one in place, so this makes sense.
+    #          But even if one had a single settings.py shared among
+    #          environments, it should be possible to set this variable
+    #          dynamically. It would simply be the responsibility of
+    #          settings.py to do it, rather than the hoptoad middleware.
+
+    envname_text = hoptoad_settings.get('HOPTOAD_ENV_NAME', 'Unknown')
+    envname_data = xdoc.createTextNode(envname_text)
+    envname.appendChild(envname_data)
+    serverenv.appendChild(envname)
+    notice.appendChild(serverenv)
+    
+    return xdoc.toxml('utf-8')
+
+def _ride_the_toad(payload, timeout, use_ssl):
+    """Send a notification (an HTTP POST request) to Hoptoad.
+    
+    Parameters:
+    payload -- the XML payload for the request from generate_payload()
+    timeout -- the maximum timeout, in seconds, or None to use the default
+    use_ssl -- whether to POST to Hoptoad over HTTPS
+    
+    """
+    headers = { 'Content-Type': 'text/xml' }
+    
+    url_template = '%s://hoptoadapp.com/notifier_api/v2/notices'
+    notification_url = url_template % ("https" if use_ssl else "http")
+    
+    # allow the settings to override all urls
+    notification_url = get_hoptoad_settings().get('HOPTOAD_NOTIFICATION_URL',
+                                                   notification_url)
+    
+    r = urllib2.Request(notification_url, payload, headers)
+    try:
+        if timeout:
+            # timeout is 2.6 addition!
+            response = urllib2.urlopen(r, timeout=timeout)
+        else:
+            response = urllib2.urlopen(r)
+    except urllib2.URLError, err:
+        pass
+    else:
+        try:
+            # getcode is 2.6 addition!!
+            status = response.getcode()
+        except AttributeError:
+            # default to just code
+            status = response.code
+        
+        if status == 403 and use_ssl:
+            if get_hoptoad_settings().get('HOPTOAD_NO_SSL_FALLBACK', False):
+                # if we can not use SSL, re-invoke w/o using SSL
+                _ride_the_toad(payload, timeout, use_ssl=False)
+        if status == 403 and not use_ssl:
+            # we were not trying to use SSL but got a 403 anyway
+            # something else must be wrong (bad API key?)
+            pass
+        if status == 422:
+            # couldn't send to hoptoad..
+            pass
+        if status == 500:
+            # hoptoad is down
+            pass
+
+def report(payload, timeout):
+    use_ssl = get_hoptoad_settings().get('HOPTOAD_USE_SSL', False)
+    return _ride_the_toad(payload, timeout, use_ssl)

File hoptoad/handlers/__init__.py

+"""Implementations of different handlers that communicate with hoptoad in
+various different protocols.
+"""
+import logging
+import os
+import imp
+
+from django.core.exceptions import MiddlewareNotUsed
+
+from hoptoad import get_hoptoad_settings
+from hoptoad.handlers.threaded import ThreadedNotifier
+from hoptoad.handlers.blocking import BlockingNotifier
+
+
+logger = logging.getLogger(__name__)
+
+def get_handler(*args, **kwargs):
+    """Returns an initialized handler object"""
+    hoptoad_settings = get_hoptoad_settings()
+    handler = hoptoad_settings.get("HOPTOAD_HANDLER", "threadpool")
+    if handler.lower() == 'threadpool':
+        threads = hoptoad_settings.get("HOPTOAD_THREAD_COUNT", 4)
+        return ThreadedNotifier(threads, *args, **kwargs)
+    elif handler.lower() == 'blocking':
+        return BlockingNotifier(*args, **kwargs)
+    else:
+        _class_module = hoptoad_settings.get('HOPTOAD_HANDLER_CLASS', None)
+        if not _class_module:
+            # not defined, abort setting up hoptoad, skip it.
+            raise MiddlewareNotUsed
+        # module name that we should import from
+        _module_name = os.path.splitext(os.path.basename(handler))[0]
+        # load the module!
+        m = imp.load_module(_module_name,
+                            *imp.find_module(_module_name,
+                                             [os.path.dirname(handler)]))
+
+        # instantiate the class
+        return getattr(m, _class_module)(*args, **kwargs)

File hoptoad/handlers/blocking.py

+import os
+import time
+import logging
+
+from hoptoad.api import htv2
+
+logger = logging.getLogger(__name__)
+
+class BlockingNotifier(object):
+    """A blocking Hoptoad notifier.  """
+    def __init__(self):
+        _threadname = "Hoptoad%s-%d" % (self.__class__.__name__, os.getpid())
+
+    def enqueue(self, payload, timeout):
+        htv2.report(payload, timeout)

File hoptoad/handlers/threaded.py

+import logging
+import os
+import sys
+import threading
+import time
+
+from hoptoad.api import htv2
+from hoptoad.handlers.utils.threadpool import WorkRequest, ThreadPool
+from hoptoad.handlers.utils.threadpool import NoResultsPending
+
+
+logger = logging.getLogger(__name__)
+
+def _exception_handler(request, exc_info):
+    """Rudimentary exception handler: simply logs the exception and moves on.
+    
+    If there's no tuple, it means something went really wrong. Critically log
+    and exit.
+    
+    """
+    if not isinstance(exc_info, tuple):
+        logger.critical(str(request))
+        logger.critical(str(exc_info))
+        sys.exit(1)
+    logger.warn(
+        "* Exception occurred in request #%s: %s" % (request.requestID, exc_info)
+    )
+
+
+class ThreadedNotifier(threading.Thread):
+    """A daemon thread that spawns a threadpool of worker threads.
+    
+    Waits for queue additions through the enqueue method.
+    """
+    def __init__(self, threadpool_threadcount, cb=None, exc_cb=None):
+        _threadname = "Hoptoad%s-%d" % (self.__class__.__name__, os.getpid())
+        threading.Thread.__init__(self, name=_threadname)
+        self.threads = threadpool_threadcount
+        self.daemon = True # daemon thread... important!
+        self.callback = cb
+        self.exc_callback = exc_cb or _exception_handler
+        self.pool = ThreadPool(self.threads)
+        self.start()
+    
+    def enqueue(self, payload, timeout):
+        request = WorkRequest(
+            htv2.report,
+            args=(payload, timeout),
+            callback=self.callback,
+            exc_callback=self.exc_callback
+        )
+        
+        # Put the request into the queue where the detached 'run' method will
+        # poll its queue every 0.5 seconds and start working.
+        self.pool.putRequest(request)
+    
+    def run(self):
+        """Actively poll the queue for requests and process them."""
+        while True:
+            try:
+                time.sleep(0.5) # TODO: configure for tuning
+                self.pool.poll()
+            except KeyboardInterrupt:
+                logger.info("* Interrupted!")
+                break
+            except NoResultsPending:
+                pass
+
+

File hoptoad/handlers/utils/__init__.py

Empty file added.

File hoptoad/handlers/utils/threadpool.py

+# -*- coding: UTF-8 -*-
+"""Easy to use object-oriented thread pool framework.
+
+A thread pool is an object that maintains a pool of worker threads to perform
+time consuming operations in parallel. It assigns jobs to the threads
+by putting them in a work request queue, where they are picked up by the
+next available thread. This then performs the requested operation in the
+background and puts the results in another queue.
+
+The thread pool object can then collect the results from all threads from
+this queue as soon as they become available or after all threads have
+finished their work. It's also possible, to define callbacks to handle
+each result as it comes in.
+
+The basic concept and some code was taken from the book "Python in a Nutshell,
+2nd edition" by Alex Martelli, O'Reilly 2006, ISBN 0-596-10046-9, from section
+14.5 "Threaded Program Architecture". I wrapped the main program logic in the
+ThreadPool class, added the WorkRequest class and the callback system and
+tweaked the code here and there. Kudos also to Florent Aide for the exception
+handling mechanism.
+
+Basic usage::
+
+    >>> pool = ThreadPool(poolsize)
+    >>> requests = makeRequests(some_callable, list_of_args, callback)
+    >>> [pool.putRequest(req) for req in requests]
+    >>> pool.wait()
+
+See the end of the module code for a brief, annotated usage example.
+
+Website : http://chrisarndt.de/projects/threadpool/
+
+"""
+__docformat__ = "restructuredtext en"
+
+__all__ = [
+    'makeRequests',
+    'NoResultsPending',
+    'NoWorkersAvailable',
+    'ThreadPool',
+    'WorkRequest',
+    'WorkerThread'
+]
+
+__author__ = "Christopher Arndt"
+__version__ = '1.2.7'
+__revision__ = "$Revision: 416 $"
+__date__ = "$Date: 2009-10-07 05:41:27 +0200 (Wed, 07 Oct 2009) $"
+__license__ = "MIT license"
+
+
+# standard library modules
+import sys
+import threading
+import Queue
+import traceback
+
+
+# exceptions
+class NoResultsPending(Exception):
+    """All work requests have been processed."""
+    pass
+
+class NoWorkersAvailable(Exception):
+    """No worker threads available to process remaining requests."""
+    pass
+
+
+# internal module helper functions
+def _handle_thread_exception(request, exc_info):
+    """Default exception handler callback function.
+
+    This just prints the exception info via ``traceback.print_exception``.
+
+    """
+    traceback.print_exception(*exc_info)
+
+
+# utility functions
+def makeRequests(callable_, args_list, callback=None,
+        exc_callback=_handle_thread_exception):
+    """Create several work requests for same callable with different arguments.
+
+    Convenience function for creating several work requests for the same
+    callable where each invocation of the callable receives different values
+    for its arguments.
+
+    ``args_list`` contains the parameters for each invocation of callable.
+    Each item in ``args_list`` should be either a 2-item tuple of the list of
+    positional arguments and a dictionary of keyword arguments or a single,
+    non-tuple argument.
+
+    See docstring for ``WorkRequest`` for info on ``callback`` and
+    ``exc_callback``.
+
+    """
+    requests = []
+    for item in args_list:
+        if isinstance(item, tuple):
+            requests.append(
+                WorkRequest(callable_, item[0], item[1], callback=callback,
+                    exc_callback=exc_callback)
+            )
+        else:
+            requests.append(
+                WorkRequest(callable_, [item], None, callback=callback,
+                    exc_callback=exc_callback)
+            )
+    return requests
+
+
+# classes
+class WorkerThread(threading.Thread):
+    """Background thread connected to the requests/results queues.
+
+    A worker thread sits in the background and picks up work requests from
+    one queue and puts the results in another until it is dismissed.
+
+    """
+
+    def __init__(self, requests_queue, results_queue, poll_timeout=5, **kwds):
+        """Set up thread in daemonic mode and start it immediately.
+
+        ``requests_queue`` and ``results_queue`` are instances of
+        ``Queue.Queue`` passed by the ``ThreadPool`` class when it creates a new
+        worker thread.
+
+        """
+        threading.Thread.__init__(self, **kwds)
+        self.setDaemon(1)
+        self._requests_queue = requests_queue
+        self._results_queue = results_queue
+        self._poll_timeout = poll_timeout
+        self._dismissed = threading.Event()
+        self.start()
+
+    def run(self):
+        """Repeatedly process the job queue until told to exit."""
+        while True:
+            if self._dismissed.isSet():
+                # we are dismissed, break out of loop
+                break
+            # get next work request. If we don't get a new request from the
+            # queue after self._poll_timout seconds, we jump to the start of
+            # the while loop again, to give the thread a chance to exit.
+            try:
+                request = self._requests_queue.get(True, self._poll_timeout)
+            except Queue.Empty:
+                continue
+            else:
+                if self._dismissed.isSet():
+                    # we are dismissed, put back request in queue and exit loop
+                    self._requests_queue.put(request)
+                    break
+                try:
+                    result = request.callable(*request.args, **request.kwds)
+                    self._results_queue.put((request, result))
+                except:
+                    request.exception = True
+                    self._results_queue.put((request, sys.exc_info()))
+
+    def dismiss(self):
+        """Sets a flag to tell the thread to exit when done with current job."""
+        self._dismissed.set()
+
+
+class WorkRequest:
+    """A request to execute a callable for putting in the request queue later.
+
+    See the module function ``makeRequests`` for the common case
+    where you want to build several ``WorkRequest`` objects for the same
+    callable but with different arguments for each call.
+
+    """
+
+    def __init__(self, callable_, args=None, kwds=None, requestID=None,
+            callback=None, exc_callback=_handle_thread_exception):
+        """Create a work request for a callable and attach callbacks.
+
+        A work request consists of a callable to be executed by a worker
+        thread, a list of positional arguments, and a dictionary of keyword
+        arguments.
+
+        A ``callback`` function can be specified; it is called when the
+        results of the request are picked up from the result queue. It must
+        accept two anonymous arguments, the ``WorkRequest`` object and the
+        results of the callable, in that order. If you want to pass additional
+        information to the callback, just stick it on the request object.
+
+        You can also supply a custom callback for when an exception occurs
+        via the ``exc_callback`` keyword parameter. It should also accept two
+        anonymous arguments, the ``WorkRequest`` and a tuple with the exception
+        details as returned by ``sys.exc_info()``. The default implementation
+        of this callback just prints the exception info via
+        ``traceback.print_exception``. If you want no exception handler
+        callback, just pass in ``None``.
+
+        ``requestID``, if given, must be hashable since it is used by the
+        ``ThreadPool`` object to store the results of that work request in a
+        dictionary. It defaults to the return value of ``id(self)``.
+
+        """
+        if requestID is None:
+            self.requestID = id(self)
+        else:
+            try:
+                self.requestID = hash(requestID)
+            except TypeError:
+                raise TypeError("requestID must be hashable.")
+        self.exception = False
+        self.callback = callback
+        self.exc_callback = exc_callback
+        self.callable = callable_
+        self.args = args or []
+        self.kwds = kwds or {}
+
+    def __str__(self):
+        return "<WorkRequest id=%s args=%r kwargs=%r exception=%s>" % \
+            (self.requestID, self.args, self.kwds, self.exception)
+
+class ThreadPool:
+    """A thread pool, distributing work requests and collecting results.
+
+    See the module docstring for more information.
+
+    """
+
+    def __init__(self, num_workers, q_size=0, resq_size=0, poll_timeout=5):
+        """Set up the thread pool and start num_workers worker threads.
+
+        ``num_workers`` is the number of worker threads to start initially.
+
+        If ``q_size > 0``, the size of the work *request queue* is limited,
+        and the thread pool blocks when the queue is full and more work
+        requests are put in it (see the ``putRequest`` method), unless you
+        also use a positive ``timeout`` value for ``putRequest``.
+
+        If ``resq_size > 0``, the size of the *results queue* is limited, and
+        the worker threads block when the queue is full and they try to put
+        new results in it.
+
+        .. warning::
+            If you set both ``q_size`` and ``resq_size`` to ``!= 0``, there is
+            the possibility of a deadlock when the results queue is not pulled
+            regularly and too many jobs are put in the work requests queue.
+            To prevent this, always set ``timeout > 0`` when calling
+            ``ThreadPool.putRequest()`` and catch ``Queue.Full`` exceptions.
+
+        """
+        self._requests_queue = Queue.Queue(q_size)
+        self._results_queue = Queue.Queue(resq_size)
+        self.workers = []
+        self.dismissedWorkers = []
+        self.workRequests = {}
+        self.createWorkers(num_workers, poll_timeout)
+
+    def createWorkers(self, num_workers, poll_timeout=5):
+        """Add num_workers worker threads to the pool.
+
+        ``poll_timeout`` sets the interval in seconds (int or float) for how
+        often threads should check whether they are dismissed while waiting
+        for requests.
+
+        """
+        for i in range(num_workers):
+            self.workers.append(WorkerThread(self._requests_queue,
+                self._results_queue, poll_timeout=poll_timeout))
+
+    def dismissWorkers(self, num_workers, do_join=False):
+        """Tell num_workers worker threads to quit after their current task."""
+        dismiss_list = []
+        for i in range(min(num_workers, len(self.workers))):
+            worker = self.workers.pop()
+            worker.dismiss()
+            dismiss_list.append(worker)
+
+        if do_join:
+            for worker in dismiss_list:
+                worker.join()
+        else:
+            self.dismissedWorkers.extend(dismiss_list)
+
+    def joinAllDismissedWorkers(self):
+        """Perform Thread.join() on all worker threads that have been dismissed.
+        """
+        for worker in self.dismissedWorkers:
+            worker.join()
+        self.dismissedWorkers = []
+
+    def putRequest(self, request, block=True, timeout=None):
+        """Put work request into work queue and save its id for later."""
+        assert isinstance(request, WorkRequest)
+        # don't reuse old work requests
+        assert not getattr(request, 'exception', None)
+        self._requests_queue.put(request, block, timeout)
+        self.workRequests[request.requestID] = request
+
+    def poll(self, block=False):
+        """Process any new results in the queue."""
+        while True:
+            # still results pending?
+            if not self.workRequests:
+                raise NoResultsPending
+            # are there still workers to process remaining requests?
+            elif block and not self.workers:
+                raise NoWorkersAvailable
+            try:
+                # get back next results
+                request, result = self._results_queue.get(block=block)
+                # has an exception occurred?
+                if request.exception and request.exc_callback:
+                    request.exc_callback(request, result)
+                # hand results to callback, if any
+                if request.callback and not \
+                       (request.exception and request.exc_callback):
+                    request.callback(request, result)
+                del self.workRequests[request.requestID]
+            except Queue.Empty:
+                break
+
+    def wait(self):
+        """Wait for results, blocking until all have arrived."""
+        while 1:
+            try:
+                self.poll(True)
+            except NoResultsPending:
+                break
+
+
+################
+# USAGE EXAMPLE
+################
+
+if __name__ == '__main__':
+    import random
+    import time
+
+    # the work the threads will have to do (rather trivial in our example)
+    def do_something(data):
+        time.sleep(random.randint(1,5))
+        result = round(random.random() * data, 5)
+        # just to show off, we throw an exception once in a while
+        if result > 5:
+            raise RuntimeError("Something extraordinary happened!")
+        return result
+
+    # this will be called each time a result is available
+    def print_result(request, result):
+        print "**** Result from request #%s: %r" % (request.requestID, result)
+
+    # this will be called when an exception occurs within a thread
+    # this example exception handler does little more than the default handler
+    def handle_exception(request, exc_info):
+        if not isinstance(exc_info, tuple):
+            # Something is seriously wrong...
+            print request
+            print exc_info
+            raise SystemExit
+        print "**** Exception occurred in request #%s: %s" % \
+          (request.requestID, exc_info)
+
+    # assemble the arguments for each job into a list...
+    data = [random.randint(1,10) for i in range(20)]
+    # ... and build a WorkRequest object for each item in data
+    requests = makeRequests(do_something, data, print_result, handle_exception)
+    # to use the default exception handler, uncomment next line and comment out
+    # the preceding one.
+    #requests = makeRequests(do_something, data, print_result)
+
+    # or the other form of args_lists accepted by makeRequests: ((,), {})
+    data = [((random.randint(1,10),), {}) for i in range(20)]
+    requests.extend(
+        makeRequests(do_something, data, print_result, handle_exception)
+        # to use the default exception handler, uncomment the next line and
+        # comment out the preceding one:
+        #makeRequests(do_something, data, print_result)
+    )
+
+    # we create a pool of 3 worker threads
+    print "Creating thread pool with 3 worker threads."
+    main = ThreadPool(3)
+
+    # then we put the work requests in the queue...
+    for req in requests:
+        main.putRequest(req)
+        print "Work request #%s added." % req.requestID
+    # or shorter:
+    # [main.putRequest(req) for req in requests]
+
+    # ...and wait for the results to arrive in the result queue
+    # by using ThreadPool.wait(). This would block until results for
+    # all work requests have arrived:
+    # main.wait()
+
+    # instead we can poll for results while doing something else:
+    i = 0
+    while True:
+        try:
+            time.sleep(0.5)
+            main.poll()
+            print "Main thread working...",
+            print "(active worker threads: %i)" % (threading.activeCount()-1, )
+            if i == 10:
+                print "**** Adding 3 more worker threads..."
+                main.createWorkers(3)
+            if i == 20:
+                print "**** Dismissing 2 worker threads..."
+                main.dismissWorkers(2)
+            i += 1
+        except KeyboardInterrupt:
+            print "**** Interrupted!"
+            break
+        except NoResultsPending:
+            print "**** No pending results."
+            break
+    if main.dismissedWorkers:
+        print "Joining all dismissed worker threads..."
+        main.joinAllDismissedWorkers()
+
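The pattern the vendored module implements above — a request queue feeding daemonic worker threads that push results onto a second queue — can be sketched with the standard library alone. This is an illustrative reduction, not the module's API (Python 3 names are used here; the vendored code targets Python 2's `Queue`):

```python
import threading
import queue  # spelled "Queue" in the Python 2 of this codebase

def worker(requests, results):
    # Pull jobs until a None sentinel arrives, mirroring WorkerThread.run().
    while True:
        job = requests.get()
        if job is None:
            break
        func, arg = job
        try:
            results.put((arg, func(arg)))
        except Exception as exc:
            # Report failures through the results queue, like the pool does.
            results.put((arg, exc))

requests = queue.Queue()
results = queue.Queue()
threads = [threading.Thread(target=worker, args=(requests, results))
           for _ in range(3)]
for t in threads:
    t.daemon = True
    t.start()

# Enqueue five jobs, then one sentinel per worker so each thread exits.
for n in range(5):
    requests.put((lambda x: x * x, n))
for _ in threads:
    requests.put(None)
for t in threads:
    t.join()

collected = sorted(results.get() for _ in range(5))
```

Because the queue is FIFO, all five jobs are consumed before any sentinel, so `collected` ends up as the five `(arg, arg * arg)` pairs.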

File hoptoad/middleware.py

-import sys
-import traceback
-import urllib2
-import yaml
+import itertools
+import logging
 import re
-import os
-import threading
-import logging
-import time
 
-from threadpool import WorkRequest, ThreadPool
-from threadpool import NoResultsPending
+from django.conf import settings
+from django.core.exceptions import MiddlewareNotUsed
 
-from django.core.exceptions import MiddlewareNotUsed
-from django.views.debug import get_safe_settings
-from django.conf import settings
+from hoptoad import get_hoptoad_settings
+from hoptoad.api import htv2
+from hoptoad.handlers import get_handler
 
 
 logger = logging.getLogger(__name__)
 
-def _parse_environment(request):
-    """Return an environment mapping for a notification from the given request."""
-    env = dict( (str(k), str(v)) for (k, v) in get_safe_settings().items() )
-    env.update( dict( (str(k), str(v)) for (k, v) in request.META.items() ) )
-    
-    env['REQUEST_URI'] = request.build_absolute_uri()
-    
-    return env
-
-def _parse_traceback(trace):
-    """Return the given traceback string formatted for a notification."""
-    p_traceback = [ "%s:%d:in `%s'" % (filename, lineno, funcname) 
-                    for filename, lineno, funcname, _
-                    in traceback.extract_tb(trace) ]
-    p_traceback.reverse()
-    
-    return p_traceback
-
-def _parse_message(exc):
-    """Return a message for a notification from the given exception."""
-    return '%s: %s' % (exc.__class__.__name__, str(exc))
-
-def _parse_request(request):
-    """Return a request mapping for a notification from the given request."""
-    request_get = dict( (str(k), str(v)) for (k, v) in request.GET.items() )
-    request_post = dict( (str(k), str(v)) for (k, v) in request.POST.items() )
-    return request_post if request_post else request_get
-
-def _parse_session(session):
-    """Return a request mapping for a notification from the given session."""
-    return dict( (str(k), str(v)) for (k, v) in session.items() )
-
-
-def _generate_payload(request, exc=None, trace=None, message=None, error_class=None):
-    """Generate a YAML payload for a Hoptoad notification.
-    
-    Parameters:
-    request -- A Django HTTPRequest.  This is required.
-    
-    Keyword parameters:
-    exc -- A Python Exception object.  If this is not given the 
-           mess parameter must be.
-    trace -- A Python Traceback object.  This is not required.
-    message -- A string representing the error message.  If this is not
-               given, the exc parameter must be.
-    error_class -- A string representing the error class.  If this is not
-                   given the excc parameter must be.
-    """
-    p_message = message if message else _parse_message(exc)
-    p_error_class = error_class if error_class else exc.__class__.__name__
-    p_traceback = _parse_traceback(trace) if trace else []
-    p_environment = _parse_environment(request)
-    p_request = _parse_request(request)
-    p_session = _parse_session(request.session)
-    
-    return yaml.dump({ 'notice': {
-        'api_key':       settings.HOPTOAD_API_KEY,
-        'error_class':   p_error_class,
-        'error_message': p_message,
-        'backtrace':     p_traceback,
-        'request':       { 'url': request.build_absolute_uri(),
-                           'params': p_request },
-        'session':       { 'key': '', 'data': p_session },
-        'environment':   p_environment,
-    }}, default_flow_style=False)
-
-def _ride_the_toad(payload, timeout):
-    """Send a notification (an HTTP POST request) to Hoptoad.
-    
-    Parameters:
-    payload -- the YAML payload for the request from _generate_payload()
-    timeout -- the maximum timeout, in seconds, or None to use the default
-    """
-    headers = { 'Content-Type': 'application/x-yaml', 
-                'Accept': 'text/xml, application/xml', }
-    r = urllib2.Request('http://hoptoadapp.com/notices', payload, headers)
-    try:
-        if timeout:
-            urllib2.urlopen(r, timeout=timeout)
-        else:
-            urllib2.urlopen(r)
-    except urllib2.URLError:
-        pass
-
-def _exception_handler(request, exc_info):
-    """Rudimentary exception handler, simply log and moves on.
-    
-    If there's no tuple, it means something went really wrong. Critically log
-    and exit.
-    """
-    if not isinstance(exc_info, tuple):
-        logger.critical(str(request))
-        logger.critical(str(exc_info))
-        sys.exit(1)
-    logger.warn(
-        "* Exception occured in request #%s: %s" % (request.requestID, exc_info)
-    )
-
-
-class Runnable(threading.Thread):
-    """A daemon thread that spawns a threadpool of worker threads.
-    
-    Waits for queue additions through the enqueue method.
-    
-    # TODO: Consider using asyncore instead of a threadpool
-    """
-    def __init__(self, threadpool_threadcount):
-        threading.Thread.__init__(self,
-            name="HoptoadThreadRunner-%d" % os.getpid())
-        
-        self.threads = threadpool_threadcount
-        self.daemon = True # daemon thread... important!
-        self.pool = ThreadPool(self.threads)
-    
-    def enqueue(self, payload, timeout, callback=None, exc_callback=_exception_handler):
-        request = WorkRequest(
-            _ride_the_toad,
-            args=(payload, timeout),
-            callback=callback,
-            exc_callback=exc_callback
-        )
-        
-        # Put the request into the queue where the detached 'run' method will
-        # poll its queue every 0.5 seconds and start working.
-        self.pool.putRequest(request)
-    
-    def run(self):
-        """Actively poll the queue for requests and process them."""
-        while True:
-            try:
-                time.sleep(0.5) # TODO: configure for tuning
-                self.pool.poll()
-            except KeyboardInterrupt:
-                logger.info("* Interrupted!")
-                break
-            except NoResultsPending:
-                pass
-    
-
 class HoptoadNotifierMiddleware(object):
     def __init__(self):
         """Initialize the middleware."""
-        all_settings = dir(settings)
-        
-        if 'HOPTOAD_API_KEY' not in all_settings or not settings.HOPTOAD_API_KEY:
+        hoptoad_settings = get_hoptoad_settings()
+        self._init_middleware(hoptoad_settings)
+    
+    def _init_middleware(self, hoptoad_settings):
+        if 'HOPTOAD_API_KEY' not in hoptoad_settings:
             raise MiddlewareNotUsed
         
-        if settings.DEBUG and \
-           (not 'HOPTOAD_NOTIFY_WHILE_DEBUG' in all_settings
-            or not settings.HOPTOAD_NOTIFY_WHILE_DEBUG ):
-            raise MiddlewareNotUsed
+        if settings.DEBUG:
+            if not hoptoad_settings.get('HOPTOAD_NOTIFY_WHILE_DEBUG', False):
+                raise MiddlewareNotUsed
         
-        self.timeout = ( settings.HOPTOAD_TIMEOUT 
-                         if 'HOPTOAD_TIMEOUT' in all_settings else None )
+        self.timeout = hoptoad_settings.get('HOPTOAD_TIMEOUT', None)
+        self.notify_404 = hoptoad_settings.get('HOPTOAD_NOTIFY_404', False)
+        self.notify_403 = hoptoad_settings.get('HOPTOAD_NOTIFY_403', False)
         
-        self.notify_404 = ( settings.HOPTOAD_NOTIFY_404 
-                            if 'HOPTOAD_NOTIFY_404' in all_settings else False )
-        self.notify_403 = ( settings.HOPTOAD_NOTIFY_403 
-                            if 'HOPTOAD_NOTIFY_403' in all_settings else False )
-        self.ignore_agents = ( map(re.compile, settings.HOPTOAD_IGNORE_AGENTS)
-                            if 'HOPTOAD_IGNORE_AGENTS' in all_settings else [] )
-            
-        # Creates a self.thread attribute and starts it.
-        self.initialize_threadpool(all_settings)
+        ignore_agents = hoptoad_settings.get('HOPTOAD_IGNORE_AGENTS', [])
+        self.ignore_agents = map(re.compile, ignore_agents)
+        
+        self.handler = get_handler()
     
     def _ignore(self, request):
-        """Return True if the given request should be ignored, False otherwise."""
+        """Return True if the given request should be ignored,
+        False otherwise.
+
+        """
         ua = request.META.get('HTTP_USER_AGENT', '')
         return any(i.search(ua) for i in self.ignore_agents)
     
-    def initialize_threadpool(self, all_settings):
-        """Initialize an internal threadpool asynchronous POST requests.
-        
-        Also creates a thread attribute and starts the threadpool.
-        """
-        
-        if 'HOPTOAD_THREAD_COUNT' in all_settings:
-            threads = settings.HOPTOAD_THREAD_COUNT
-        else:
-            threads = 4
-        
-        self.thread = Runnable(threads)
-        self.thread.start()
-    
     def process_response(self, request, response):
         """Process a response object.
         
         
         Regardless of whether Hoptoad is notified, the response object will
         be returned unchanged.
+        
         """
         if self._ignore(request):
             return response
         
-        if self.notify_404 and response.status_code == 404:
-            error_class = 'Http404'
-            
-            message = 'Http404: Page not found at %s' % request.build_absolute_uri()
-            payload = _generate_payload(request, error_class=error_class, message=message)
-            
-            self.thread.enqueue(payload, self.timeout)
-        
-        if self.notify_403 and response.status_code == 403:
-            error_class = 'Http403'
-            
-            message = 'Http403: Forbidden at %s' % request.build_absolute_uri()
-            payload = _generate_payload(request, error_class=error_class, message=message)
-            
-            self.thread.enqueue(payload, self.timeout)
+        sc = response.status_code
+        if sc in [404, 403] and getattr(self, "notify_%d" % sc):
+            self.handler.enqueue(htv2.generate_payload((request, sc)),
+                                 self.timeout)
         
         return response
     
         Hoptoad will be notified of the exception and None will be
         returned so that Django's normal exception handling will then
         be used.
+        
         """
         if self._ignore(request):
             return None
         
-        excc, _, tb = sys.exc_info()
-        
-        payload = _generate_payload(request, exc, tb)
-        self.thread.enqueue(payload, self.timeout)
-        
+        self.handler.enqueue(htv2.generate_payload((request, None)),
+                             self.timeout)
         return None
+    
 

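The status-code dispatch added to `process_response` above derives the attribute name from the code itself via `getattr(self, "notify_%d" % sc)`. A minimal sketch of that pattern in isolation (the class and method names here are illustrative, not the middleware's real API):

```python
class Notifier(object):
    def __init__(self, notify_404=False, notify_403=False):
        self.notify_404 = notify_404
        self.notify_403 = notify_403
        self.sent = []  # stand-in for enqueueing a Hoptoad payload

    def process(self, status_code):
        # Only 404/403 are considered; the flag name is built from the code,
        # so one branch covers both cases.
        if status_code in (404, 403) and getattr(self, "notify_%d" % status_code):
            self.sent.append(status_code)

n = Notifier(notify_404=True)
for code in (200, 403, 404, 500):
    n.process(code)
```

With only `notify_404` enabled, just the 404 response triggers a notification; 403 is filtered out by its flag and other codes never match.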
File hoptoad/tests.py

 import urllib2
 from django.test import TestCase
-from django.conf import settings
+from hoptoad import get_hoptoad_settings
 
 class BasicTests(TestCase):
     """Basic tests like setup and connectivity."""
     
     def test_api_key_present(self):
         """Test to make sure an API key is present."""
-        self.assertTrue('HOPTOAD_API_KEY' in dir(settings),
+        hoptoad_settings = get_hoptoad_settings()
+        self.assertTrue('HOPTOAD_API_KEY' in hoptoad_settings,
             msg='The HOPTOAD_API_KEY setting is not present.')
-        self.assertTrue(settings.HOPTOAD_API_KEY,
+        self.assertTrue(hoptoad_settings['HOPTOAD_API_KEY'],
             msg='The HOPTOAD_API_KEY setting is blank.')
     
     def test_hoptoad_connectivity(self):

File hoptoad/threadpool.py

-# -*- coding: UTF-8 -*-
-"""Easy to use object-oriented thread pool framework.
-
-A thread pool is an object that maintains a pool of worker threads to perform
-time consuming operations in parallel. It assigns jobs to the threads
-by putting them in a work request queue, where they are picked up by the
-next available thread. This then performs the requested operation in the
-background and puts the results in another queue.
-
-The thread pool object can then collect the results from all threads from
-this queue as soon as they become available or after all threads have
-finished their work. It's also possible, to define callbacks to handle
-each result as it comes in.
-
-The basic concept and some code was taken from the book "Python in a Nutshell,
-2nd edition" by Alex Martelli, O'Reilly 2006, ISBN 0-596-10046-9, from section
-14.5 "Threaded Program Architecture". I wrapped the main program logic in the
-ThreadPool class, added the WorkRequest class and the callback system and
-tweaked the code here and there. Kudos also to Florent Aide for the exception
-handling mechanism.
-
-Basic usage::
-
-    >>> pool = ThreadPool(poolsize)
-    >>> requests = makeRequests(some_callable, list_of_args, callback)
-    >>> [pool.putRequest(req) for req in requests]
-    >>> pool.wait()
-
-See the end of the module code for a brief, annotated usage example.
-
-Website : http://chrisarndt.de/projects/threadpool/
-
-"""
-__docformat__ = "restructuredtext en"
-
-__all__ = [
-    'makeRequests',
-    'NoResultsPending',
-    'NoWorkersAvailable',
-    'ThreadPool',
-    'WorkRequest',
-    'WorkerThread'
-]
-
-__author__ = "Christopher Arndt"
-__version__ = '1.2.7'
-__revision__ = "$Revision: 416 $"
-__date__ = "$Date: 2009-10-07 05:41:27 +0200 (Wed, 07 Oct 2009) $"
-__license__ = "MIT license"
-
-
-# standard library modules
-import sys
-import threading
-import Queue
-import traceback
-
-
-# exceptions
-class NoResultsPending(Exception):
-    """All work requests have been processed."""
-    pass
-
-class NoWorkersAvailable(Exception):
-    """No worker threads available to process remaining requests."""
-    pass
-
-
-# internal module helper functions
-def _handle_thread_exception(request, exc_info):
-    """Default exception handler callback function.
-
-    This just prints the exception info via ``traceback.print_exception``.
-
-    """
-    traceback.print_exception(*exc_info)
-
-
-# utility functions
-def makeRequests(callable_, args_list, callback=None,
-        exc_callback=_handle_thread_exception):
-    """Create several work requests for same callable with different arguments.
-
-    Convenience function for creating several work requests for the same
-    callable where each invocation of the callable receives different values
-    for its arguments.
-
-    ``args_list`` contains the parameters for each invocation of callable.
-    Each item in ``args_list`` should be either a 2-item tuple of the list of
-    positional arguments and a dictionary of keyword arguments or a single,
-    non-tuple argument.
-
-    See docstring for ``WorkRequest`` for info on ``callback`` and
-    ``exc_callback``.
-
-    """
-    requests = []
-    for item in args_list:
-        if isinstance(item, tuple):
-            requests.append(
-                WorkRequest(callable_, item[0], item[1], callback=callback,
-                    exc_callback=exc_callback)
-            )
-        else:
-            requests.append(
-                WorkRequest(callable_, [item], None, callback=callback,
-                    exc_callback=exc_callback)
-            )
-    return requests
-
-
-# classes
-class WorkerThread(threading.Thread):
-    """Background thread connected to the requests/results queues.
-
-    A worker thread sits in the background and picks up work requests from
-    one queue and puts the results in another until it is dismissed.
-
-    """
-
-    def __init__(self, requests_queue, results_queue, poll_timeout=5, **kwds):
-        """Set up thread in daemonic mode and start it immediatedly.
-
-        ``requests_queue`` and ``results_queue`` are instances of
-        ``Queue.Queue`` passed by the ``ThreadPool`` class when it creates a new
-        worker thread.
-
-        """
-        threading.Thread.__init__(self, **kwds)
-        self.setDaemon(1)
-        self._requests_queue = requests_queue
-        self._results_queue = results_queue
-        self._poll_timeout = poll_timeout
-        self._dismissed = threading.Event()
-        self.start()
-
-    def run(self):
-        """Repeatedly process the job queue until told to exit."""
-        while True:
-            if self._dismissed.isSet():
-                # we are dismissed, break out of loop
-                break
-            # get next work request. If we don't get a new request from the
-            # queue after self._poll_timout seconds, we jump to the start of
-            # the while loop again, to give the thread a chance to exit.
-            try:
-                request = self._requests_queue.get(True, self._poll_timeout)
-            except Queue.Empty:
-                continue
-            else:
-                if self._dismissed.isSet():
-                    # we are dismissed, put back request in queue and exit loop
-                    self._requests_queue.put(request)
-                    break
-                try:
-                    result = request.callable(*request.args, **request.kwds)
-                    self._results_queue.put((request, result))
-                except:
-                    request.exception = True
-                    self._results_queue.put((request, sys.exc_info()))
-
-    def dismiss(self):
-        """Sets a flag to tell the thread to exit when done with current job."""
-        self._dismissed.set()
-
-
-class WorkRequest:
-    """A request to execute a callable for putting in the request queue later.
-
-    See the module function ``makeRequests`` for the common case
-    where you want to build several ``WorkRequest`` objects for the same
-    callable but with different arguments for each call.
-
-    """
-
-    def __init__(self, callable_, args=None, kwds=None, requestID=None,
-            callback=None, exc_callback=_handle_thread_exception):
-        """Create a work request for a callable and attach callbacks.
-
-        A work request consists of the a callable to be executed by a
-        worker thread, a list of positional arguments, a dictionary
-        of keyword arguments.
-
-        A ``callback`` function can be specified, that is called when the
-        results of the request are picked up from the result queue. It must
-        accept two anonymous arguments, the ``WorkRequest`` object and the
-        results of the callable, in that order. If you want to pass additional
-        information to the callback, just stick it on the request object.
-
-        You can also give custom callback for when an exception occurs with
-        the ``exc_callback`` keyword parameter. It should also accept two
-        anonymous arguments, the ``WorkRequest`` and a tuple with the exception
-        details as returned by ``sys.exc_info()``. The default implementation
-        of this callback just prints the exception info via
-        ``traceback.print_exception``. If you want no exception handler
-        callback, just pass in ``None``.
-
-        ``requestID``, if given, must be hashable since it is used by
-        ``ThreadPool`` object to store the results of that work request in a
-        dictionary. It defaults to the return value of ``id(self)``.
-
-        """
-        if requestID is None:
-            self.requestID = id(self)
-        else:
-            try:
-                self.requestID = hash(requestID)
-            except TypeError:
-                raise TypeError("requestID must be hashable.")
-        self.exception = False
-        self.callback = callback
-        self.exc_callback = exc_callback
-        self.callable = callable_
-        self.args = args or []
-        self.kwds = kwds or {}
-
-    def __str__(self):
-        return "<WorkRequest id=%s args=%r kwargs=%r exception=%s>" % \
-            (self.requestID, self.args, self.kwds, self.exception)
-
-class ThreadPool:
-    """A thread pool, distributing work requests and collecting results.
-
-    See the module docstring for more information.
-
-    """
-
-    def __init__(self, num_workers, q_size=0, resq_size=0, poll_timeout=5):
-        """Set up the thread pool and start num_workers worker threads.
-
-        ``num_workers`` is the number of worker threads to start initially.
-
-        If ``q_size > 0`` the size of the work *request queue* is limited and
-        the thread pool blocks when the queue is full and it tries to put
-        more work requests in it (see ``putRequest`` method), unless you also
-        use a positive ``timeout`` value for ``putRequest``.
-
-        If ``resq_size > 0`` the size of the *results queue* is limited and the
-        worker threads will block when the queue is full and they try to put
-        new results in it.
-
-        .. warning:
-            If you set both ``q_size`` and ``resq_size`` to ``!= 0`` there is
-            the possibilty of a deadlock, when the results queue is not pulled
-            regularly and too many jobs are put in the work requests queue.
-            To prevent this, always set ``timeout > 0`` when calling
-            ``ThreadPool.putRequest()`` and catch ``Queue.Full`` exceptions.
-
-        """
-        self._requests_queue = Queue.Queue(q_size)
-        self._results_queue = Queue.Queue(resq_size)
-        self.workers = []
-        self.dismissedWorkers = []
-        self.workRequests = {}
-        self.createWorkers(num_workers, poll_timeout)
-
-    def createWorkers(self, num_workers, poll_timeout=5):
-        """Add num_workers worker threads to the pool.
-
-        ``poll_timout`` sets the interval in seconds (int or float) for how
-        ofte threads should check whether they are dismissed, while waiting for
-        requests.
-
-        """
-        for i in range(num_workers):
-            self.workers.append(WorkerThread(self._requests_queue,
-                self._results_queue, poll_timeout=poll_timeout))
-
-    def dismissWorkers(self, num_workers, do_join=False):
-        """Tell num_workers worker threads to quit after their current task."""
-        dismiss_list = []
-        for i in range(min(num_workers, len(self.workers))):
-            worker = self.workers.pop()
-            worker.dismiss()
-            dismiss_list.append(worker)
-
-        if do_join:
-            for worker in dismiss_list:
-                worker.join()
-        else:
-            self.dismissedWorkers.extend(dismiss_list)
-
-    def joinAllDismissedWorkers(self):
-        """Perform Thread.join() on all worker threads that have been dismissed.
-        """
-        for worker in self.dismissedWorkers:
-            worker.join()
-        self.dismissedWorkers = []
-
-    def putRequest(self, request, block=True, timeout=None):
-        """Put work request into work queue and save its id for later."""
-        assert isinstance(request, WorkRequest)
-        # don't reuse old work requests
-        assert not getattr(request, 'exception', None)
-        self._requests_queue.put(request, block, timeout)
-        self.workRequests[request.requestID] = request
-
-    def poll(self, block=False):
-        """Process any new results in the queue."""
-        while True:
-            # still results pending?
-            if not self.workRequests:
-                raise NoResultsPending
-            # are there still workers to process remaining requests?
-            elif block and not self.workers:
-                raise NoWorkersAvailable
-            try:
-                # get back next results
-                request, result = self._results_queue.get(block=block)
-                # has an exception occurred?
-                if request.exception and request.exc_callback:
-                    request.exc_callback(request, result)
-                # hand results to callback, if any
-                if request.callback and not \
-                       (request.exception and request.exc_callback):
-                    request.callback(request, result)
-                del self.workRequests[request.requestID]
-            except Queue.Empty:
-                break
-
-    def wait(self):
-        """Wait for results, blocking until all have arrived."""
-        while 1:
-            try:
-                self.poll(True)
-            except NoResultsPending:
-                break
-
-
-################
-# USAGE EXAMPLE
-################
-
-if __name__ == '__main__':
-    import random
-    import time
-
-    # the work the threads will have to do (rather trivial in our example)
-    def do_something(data):
-        time.sleep(random.randint(1,5))
-        result = round(random.random() * data, 5)
-        # just to show off, we throw an exception once in a while
-        if result > 5:
-            raise RuntimeError("Something extraordinary happened!")
-        return result
-
-    # this will be called each time a result is available
-    def print_result(request, result):
-        print "**** Result from request #%s: %r" % (request.requestID, result)
-
-    # this will be called when an exception occurs within a thread
-    # this example exception handler does little more than the default handler
-    def handle_exception(request, exc_info):
-        if not isinstance(exc_info, tuple):
-            # Something is seriously wrong...
-            print request
-            print exc_info
-            raise SystemExit
-        print "**** Exception occurred in request #%s: %s" % \
-          (request.requestID, exc_info)
-
-    # assemble the arguments for each job to a list...
-    data = [random.randint(1,10) for i in range(20)]
-    # ... and build a WorkRequest object for each item in data
-    requests = makeRequests(do_something, data, print_result, handle_exception)
-    # to use the default exception handler, uncomment next line and comment out
-    # the preceding one.
-    #requests = makeRequests(do_something, data, print_result)
-
-    # or the other form of args_lists accepted by makeRequests: ((,), {})
-    data = [((random.randint(1,10),), {}) for i in range(20)]
-    requests.extend(
-        makeRequests(do_something, data, print_result, handle_exception)
-        # to use the default exception handler, uncomment the next line and
-        # comment out the preceding one.
-        #makeRequests(do_something, data, print_result)
-    )
-
-    # we create a pool of 3 worker threads
-    print "Creating thread pool with 3 worker threads."
-    main = ThreadPool(3)
-
-    # then we put the work requests in the queue...
-    for req in requests:
-        main.putRequest(req)
-        print "Work request #%s added." % req.requestID
-    # or shorter:
-    # [main.putRequest(req) for req in requests]
-
-    # ...and wait for the results to arrive in the result queue
-    # by using ThreadPool.wait(). This would block until results for
-    # all work requests have arrived:
-    # main.wait()
-
-    # instead we can poll for results while doing something else:
-    i = 0
-    while True:
-        try:
-            time.sleep(0.5)
-            main.poll()
-            print "Main thread working...",
-            print "(active worker threads: %i)" % (threading.activeCount()-1, )
-            if i == 10:
-                print "**** Adding 3 more worker threads..."
-                main.createWorkers(3)
-            if i == 20:
-                print "**** Dismissing 2 worker threads..."
-                main.dismissWorkers(2)
-            i += 1
-        except KeyboardInterrupt:
-            print "**** Interrupted!"
-            break
-        except NoResultsPending:
-            print "**** No pending results."
-            break
-    if main.dismissedWorkers:
-        print "Joining all dismissed worker threads..."
-        main.joinAllDismissedWorkers()
-
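The deadlock warning in the removed ``ThreadPool.__init__`` docstring boils down to one pattern: when the work queue is bounded, always pass a positive ``timeout`` to the put and catch the queue-full exception instead of blocking forever. Here is a minimal, illustrative Python 3 sketch of that pattern — the stdlib ``queue`` module stands in for the pool's internal request queue, and ``put_request`` is a hypothetical helper, not part of the module above:

```python
import queue

# Bounded work queue, standing in for the pool's internal request queue
# created when ThreadPool is built with q_size > 0.
work_queue = queue.Queue(maxsize=2)

def put_request(item, timeout=0.05):
    """Try to enqueue a work item; fail fast instead of blocking forever."""
    try:
        work_queue.put(item, block=True, timeout=timeout)
        return True
    except queue.Full:
        # The caller can now drain the results queue and retry,
        # instead of deadlocking against full queues on both sides.
        return False

results = [put_request(n) for n in range(4)]
# With maxsize=2 and no consumer running, the third and fourth puts time out.
print(results)
```

Returning a flag (rather than letting ``Queue.Full`` propagate) keeps the retry decision with the caller, which is the same reason the original docstring tells users to catch the exception around ``putRequest``.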
 import os
 from setuptools import setup, find_packages
 
+README_PATH = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'README.rst')
+
 setup(
     name='django-hoptoad',
-    version='0.2',
-    description='django-hoptoad is some simple Middleware for letting Django-driven websites report their errors to Hoptoad.',
-    long_description=open(os.path.join(os.path.abspath(os.path.dirname(__file__)), 'README.rst')).read(),
-    author='Steve Losh',
-    author_email='steve@stevelosh.com',
-    url='http://stevelosh.com/projects/django-hoptoad/',
+    version='0.3',
+    description='django-hoptoad is some simple Middleware for letting '
+                'Django-driven websites report their errors to Hoptoad.',
+    long_description=open(README_PATH).read(),
+    author='Steve Losh, Mahmoud Abdelkader',
+    author_email='steve@stevelosh.com, mahmoud@linux.com',
+    url='http://sjl.bitbucket.org/django-hoptoad/',
     packages=find_packages(),
-    install_requires=['pyyaml'],
     classifiers=[
         'Development Status :: 4 - Beta',
         'Environment :: Web Environment',
         'Programming Language :: Python',
         'Programming Language :: Python :: 2.6',
     ],
-)
+)