Commits

Steve Losh committed faaed67 Merge

Merge the v2.0 changes with the others.

Comments (0)

Files changed (14)

README

--*- markdown -*-
-
-django-hoptoad
-==============
-
-django-hoptoad is some simple Middleware for letting [Django][]-driven websites report their errors to [Hoptoad][].  Now [ponies][] can ride the toad too.
-
-[Django]: http://djangoproject.com/
-[Hoptoad]: http://hoptoadapp.com/
-[ponies]: http://djangopony.com/
-
-
-Requirements
-------------
-
-django-hoptoad requires:
-
-* [Python][] 2.5+ (preferably 2.6+ as that's what I've tested it with)
-* [PyYAML][] (`pip install pyyaml` or `easy_install pyyaml`)
-* [Django][] 1.0+
-* A [Hoptoad][] account
-
-[Python]: http://python.org/
-[PyYAML]: http://pyyaml.org/
-
-
-Installation
-------------
-
-Grab the the django-hoptoad code by cloning the [Mercurial][] repository (or just [download the latest version][tip-dl] and unzip it somewhere):
-
-    hg clone http://bitbucket.org/sjl/django-hoptoad/
-
-There's a git mirror too if you *really* want it.
-
-    git clone git://github.com/sjl/django-hoptoad.git
-
-Once you download it, you can install it in the usual manner:
-
-    cd django-hoptoad
-    python setup.py install
-
-If you'd prefer to be able to update at any time by pulling down changes with Mercurial or git, you can symlink the module into your `site-packages` directory instead of using `python setup.py install`:
-
-    ln -s /full/path/to/django-hoptoad/hoptoad /full/path/to/site-packages/
-
-To make sure it works you can run:
-
-    python -c 'import hoptoad'
-
-[Mercurial]: http://mercurial.selenic.com/
-[tip-dl]: http://bitbucket.org/sjl/django-hoptoad/get/tip.zip
-
-
-Usage
------
-
-To set up a Django project to notify Hoptoad of its errors, you need to do two things in its `settings.py` file.
-
-First, add the `HoptoadNotifierMiddleware` as the last item in the `MIDDLEWARE_CLASSES` setting:
-
-    MIDDLEWARE_CLASSES = (
-        # ... other middleware classes ...
-        'hoptoad.middleware.HoptoadNotifierMiddleware',
-    )
-
-Next, you'll need to add a `HOPTOAD_API_KEY` setting.  You can get the key from the Hoptoad project page.
-
-    HOPTOAD_API_KEY = 'Your Hoptoad API key.'
-
-
-Documentation
--------------
-
-The documentation for django-hoptoad is at the [project page][project].  There's a [Quick Start guide][quickstart], [Configuration guide][config], [Troubleshooting guide][troubleshooting], and a few other things there.
-
-The documentation is stored in the `docs/` directory of the repository if you prefer to read it offline.
-
-[project]: http://sjl.bitbucket.org/django-hoptoad/
-[quickstart]: http://sjl.bitbucket.org/django-hoptoad/quickstart/
-[config]: http://sjl.bitbucket.org/django-hoptoad/config/
-[troubleshooting]: http://sjl.bitbucket.org/django-hoptoad/troubleshooting/
-
-
-Suggestions
------------
-
-This Middleware is a work in progress.  If you have a suggestion or find a bug please [add an issue][issues] and let me know.
-
-[issues]: http://bitbucket.org/sjl/django-hoptoad/issues/?status=new&status=open
+
+django-hoptoad
+==============
+
+django-hoptoad is some simple Middleware for letting Django_-driven websites report their errors to Hoptoad_.  Now ponies_ can ride the toad too.
+
+.. _Django: http://djangoproject.com/
+.. _Hoptoad: http://hoptoadapp.com/
+.. _ponies: http://djangopony.com/
+
+
+Requirements
+------------
+
+django-hoptoad requires:
+
+* Python_ 2.5+ (preferably 2.6+ as that's what I've tested it with)
+* PyYAML_ (``pip install pyyaml`` or ``easy_install pyyaml``)
+* Django_ 1.0+
+* A Hoptoad_ account
+
+.. _Python: http://python.org/
+.. _PyYAML: http://pyyaml.org/
+
+
+Installation
+------------
+
+Grab the django-hoptoad code by cloning the Mercurial_ repository (or just `download the latest version <http://bitbucket.org/sjl/django-hoptoad/get/tip.zip>`_ and unzip it somewhere)::
+
+    hg clone http://bitbucket.org/sjl/django-hoptoad/
+
+There's a git mirror too if you *really* want it::
+
+    git clone git://github.com/sjl/django-hoptoad.git
+
+Once you download it, you can install it in the usual manner::
+
+    cd django-hoptoad
+    python setup.py install
+
+If you'd prefer to be able to update at any time by pulling down changes with Mercurial or git, you can symlink the module into your ``site-packages`` directory instead of using ``python setup.py install``::
+
+    ln -s /full/path/to/django-hoptoad/hoptoad /full/path/to/site-packages/
+
+To make sure it works you can run::
+
+    python -c 'import hoptoad'
+
+.. _Mercurial: http://mercurial.selenic.com/
+
+
+Usage
+-----
+
+To set up a Django project to notify Hoptoad of its errors, you need to do two things in its ``settings.py`` file.
+
+First, add the ``HoptoadNotifierMiddleware`` as the last item in the ``MIDDLEWARE_CLASSES`` setting::
+
+    MIDDLEWARE_CLASSES = (
+        # ... other middleware classes ...
+        'hoptoad.middleware.HoptoadNotifierMiddleware',
+    )
+
+Next, you'll need to add a ``HOPTOAD_API_KEY`` setting.  You can get the key from the Hoptoad project page::
+
+    HOPTOAD_API_KEY = 'Your Hoptoad API key.'
+
+
+Documentation
+-------------
+
+The documentation for django-hoptoad is at the `project page <http://sjl.bitbucket.org/django-hoptoad/>`_. There's a `Quick Start guide <http://sjl.bitbucket.org/django-hoptoad/quickstart/>`_, `Configuration guide <http://sjl.bitbucket.org/django-hoptoad/config/>`_, `Troubleshooting guide <http://sjl.bitbucket.org/django-hoptoad/troubleshooting/>`_, and a few other things there.
+
+The documentation is stored in the ``docs/`` directory of the repository if you prefer to read it offline.
+
+
+Suggestions
+-----------
+
+This Middleware is a work in progress.  If you have a suggestion or find a bug please `add an issue <http://bitbucket.org/sjl/django-hoptoad/issues/?status=new&status=open>`_ and let me know.

hoptoad/__init__.py

+import itertools
+
+from django.conf import settings
+from django.core.exceptions import MiddlewareNotUsed
+
+
+__version__ = 0.3
+VERSION = __version__
+NAME = "django-hoptoad"
+URL = "http://bitbucket.org/sjl/django-hoptoad"
+
+
+def get_hoptoad_settings():
+    hoptoad_settings = getattr(settings, "HOPTOAD_SETTINGS", None)
+
+    if not hoptoad_settings:
+        # do some backward compatibility work to combine all hoptoad
+        # settings in a dictionary
+        hoptoad_settings = {}
+        # for every attribute that starts with hoptoad
+        for attr in itertools.ifilter(lambda x: x.startswith('HOPTOAD'),
+                                      dir(settings)):
+            hoptoad_settings[attr] = getattr(settings, attr)
+
+        if not hoptoad_settings:
+            # there were no settings for hoptoad at all..
+            # should probably log here
+            raise MiddlewareNotUsed
+
+    return hoptoad_settings
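The backward-compatibility lookup in `get_hoptoad_settings` can be exercised without Django. A minimal sketch of the same dir()-filter pattern, where `FakeSettings` and `collect_hoptoad_settings` are hypothetical stand-ins for `django.conf.settings` and the real function:

```python
class FakeSettings(object):
    """Stand-in for django.conf.settings (illustration only)."""
    HOPTOAD_API_KEY = 'abc123'
    HOPTOAD_TIMEOUT = 5
    DEBUG = True  # non-HOPTOAD attributes are ignored


def collect_hoptoad_settings(settings):
    # gather every attribute whose name starts with HOPTOAD into one dict,
    # mirroring the loop in get_hoptoad_settings above
    return dict(
        (attr, getattr(settings, attr))
        for attr in dir(settings)
        if attr.startswith('HOPTOAD')
    )


print(collect_hoptoad_settings(FakeSettings()))
# {'HOPTOAD_API_KEY': 'abc123', 'HOPTOAD_TIMEOUT': 5}
```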

hoptoad/api/__init__.py

Empty file added.

hoptoad/api/htv1.py

+import traceback
+import urllib2
+import yaml
+
+from django.views.debug import get_safe_settings
+from django.conf import settings
+
+
+def _parse_environment(request):
+    """Return an environment mapping for a notification from the given request."""
+    env = dict( (str(k), str(v)) for (k, v) in get_safe_settings().items() )
+    env.update( dict( (str(k), str(v)) for (k, v) in request.META.items() ) )
+
+    env['REQUEST_URI'] = request.build_absolute_uri()
+
+    return env
+
+def _parse_traceback(trace):
+    """Return the given traceback string formatted for a notification."""
+    p_traceback = [ "%s:%d:in `%s'" % (filename, lineno, funcname)
+                    for filename, lineno, funcname, _
+                    in traceback.extract_tb(trace) ]
+    p_traceback.reverse()
+
+    return p_traceback
+
+def _parse_message(exc):
+    """Return a message for a notification from the given exception."""
+    return '%s: %s' % (exc.__class__.__name__, str(exc))
+
+def _parse_request(request):
+    """Return a request mapping for a notification from the given request."""
+    request_get = dict( (str(k), str(v)) for (k, v) in request.GET.items() )
+    request_post = dict( (str(k), str(v)) for (k, v) in request.POST.items() )
+    return request_post if request_post else request_get
+
+def _parse_session(session):
+    """Return a request mapping for a notification from the given session."""
+    return dict( (str(k), str(v)) for (k, v) in session.items() )
+
+
+def _generate_payload(request, exc=None, trace=None, message=None, error_class=None):
+    """Generate a YAML payload for a Hoptoad notification.
+
+    Parameters:
+    request -- A Django HttpRequest.  This is required.
+
+    Keyword parameters:
+    exc -- A Python Exception object.  If this is not given, the
+           message parameter must be.
+    trace -- A Python Traceback object.  This is not required.
+    message -- A string representing the error message.  If this is not
+               given, the exc parameter must be.
+    error_class -- A string representing the error class.  If this is not
+                   given, the exc parameter must be.
+    """
+    p_message = message if message else _parse_message(exc)
+    p_error_class = error_class if error_class else exc.__class__.__name__
+    p_traceback = _parse_traceback(trace) if trace else []
+    p_environment = _parse_environment(request)
+    p_request = _parse_request(request)
+    p_session = _parse_session(request.session)
+
+    return yaml.dump({ 'notice': {
+        'api_key':       settings.HOPTOAD_API_KEY,
+        'error_class':   p_error_class,
+        'error_message': p_message,
+        'backtrace':     p_traceback,
+        'request':       { 'url': request.build_absolute_uri(),
+                           'params': p_request },
+        'session':       { 'key': '', 'data': p_session },
+        'environment':   p_environment,
+    }}, default_flow_style=False)
+
+def _ride_the_toad(payload, timeout):
+    """Send a notification (an HTTP POST request) to Hoptoad.
+
+    Parameters:
+    payload -- the YAML payload for the request from _generate_payload()
+    timeout -- the maximum timeout, in seconds, or None to use the default
+    """
+    headers = { 'Content-Type': 'application/x-yaml',
+                'Accept': 'text/xml, application/xml', }
+    r = urllib2.Request('http://hoptoadapp.com/notices', payload, headers)
+    try:
+        if timeout:
+            urllib2.urlopen(r, timeout=timeout)
+        else:
+            urllib2.urlopen(r)
+    except urllib2.URLError:
+        pass
+
+def report(payload, timeout):
+    return _ride_the_toad(payload, timeout)
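The backtrace formatting in `_parse_traceback` above can be demonstrated with the standard library alone. `parse_traceback` and `boom` below are hypothetical names for illustration, written so it also runs on modern Python:

```python
import sys
import traceback


def parse_traceback(trace):
    # mirror of _parse_traceback above: one "file:line:in `func'" string
    # per frame, reversed so the innermost frame comes first
    frames = ["%s:%d:in `%s'" % (filename, lineno, funcname)
              for filename, lineno, funcname, _ in traceback.extract_tb(trace)]
    frames.reverse()
    return frames


def boom():
    raise ValueError("nope")


try:
    boom()
except ValueError:
    lines = parse_traceback(sys.exc_info()[2])

print(lines[0])  # innermost frame, ends with: in `boom'
```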

hoptoad/api/htv2.py

+import sys
+import traceback
+import urllib2
+import yaml
+from xml.dom.minidom import getDOMImplementation
+
+from django.views.debug import get_safe_settings
+from django.conf import settings
+
+from hoptoad import VERSION, NAME, URL
+from hoptoad import get_hoptoad_settings
+from hoptoad.api.htv1 import _parse_environment, _parse_request, _parse_session
+from hoptoad.api.htv1 import _parse_message
+
+def _class_name(class_):
+    return class_.__class__.__name__
+
+def _handle_errors(request, response, exc):
+    if response:
+        code = "Http%s" % response
+        msg = "%(code)s: %(response)s at %(uri)s" % {
+                   'code' : code,
+                   'response' : {'Http403' : "Forbidden",
+                                 'Http404' : "Page not found"}[code],
+                   'uri' : request.build_absolute_uri()
+                }
+        return (code, msg)
+
+    excc, inst = sys.exc_info()[:2]
+    if exc:
+        excc = exc
+    return _class_name(excc), _parse_message(excc)
+
+
+def generate_payload(request, response=None, exc=None):
+    """Generate an XML payload for a Hoptoad notification.
+
+    Parameters:
+    request -- A Django HTTPRequest.
+
+    """
+    hoptoad_settings = get_hoptoad_settings()
+    p_error_class, p_message = _handle_errors(request, response, exc)
+
+    # api v2 from: http://help.hoptoadapp.com/faqs/api-2/notifier-api-v2
+    xdoc = getDOMImplementation().createDocument(None, "notice", None)
+    notice = xdoc.firstChild
+
+    # /notice/@version -- should be 2.0
+    notice.setAttribute('version', '2.0')
+
+    # /notice/api-key
+    api_key = xdoc.createElement('api-key')
+    api_key_data = xdoc.createTextNode(hoptoad_settings['HOPTOAD_API_KEY'])
+    api_key.appendChild(api_key_data)
+    notice.appendChild(api_key)
+
+    # /notice/notifier/name
+    # /notice/notifier/version
+    # /notice/notifier/url
+    notifier = xdoc.createElement('notifier')
+    for key, value in zip(["name", "version", "url"], [NAME, VERSION, URL]):
+        key = xdoc.createElement(key)
+        value = xdoc.createTextNode(str(value))
+        key.appendChild(value)
+        notifier.appendChild(key)
+    notice.appendChild(notifier)
+
+    # /notice/error/class
+    # /notice/error/message
+    error = xdoc.createElement('error')
+    for key, value in zip(["class", "message"], [p_error_class, p_message]):
+        key = xdoc.createElement(key)
+        value = xdoc.createTextNode(value)
+        key.appendChild(value)
+        error.appendChild(key)
+
+    # /notice/error/backtrace/error/line
+    backtrace = xdoc.createElement('backtrace')
+    # extract the traceback here to avoid holding a circular reference
+    reversed_backtrace = reversed(traceback.extract_tb(sys.exc_info()[2]))
+    for filename, lineno, funcname, text in reversed_backtrace:
+        line = xdoc.createElement('line')
+        line.setAttribute('file', str(filename))
+        line.setAttribute('number', str(lineno))
+        line.setAttribute('method', str(funcname))
+        backtrace.appendChild(line)
+    error.appendChild(backtrace)
+    notice.appendChild(error)
+
+    # /notice/request
+    xrequest = xdoc.createElement('request')
+
+    # /notice/request/url -- request.build_absolute_uri()
+    xurl = xdoc.createElement('url')
+    xurl_data = xdoc.createTextNode(request.build_absolute_uri())
+    xurl.appendChild(xurl_data)
+    xrequest.appendChild(xurl)
+
+    # /notice/request/component -- left empty for now
+    comp = xdoc.createElement('component')
+    #comp_data = xdoc.createTextNode('')
+    xrequest.appendChild(comp)
+
+    # /notice/request/action -- the action in which the error occurred;
+    # the request method and path stand in for it here
+    action = xdoc.createElement('action')
+    action_data = u"%s %s" % (request.method, request.META['PATH_INFO'])
+    action_data = xdoc.createTextNode(action_data)
+    action.appendChild(action_data)
+    xrequest.appendChild(action)
+
+    # /notice/request/params/var -- check request.GET/request.POST
+    params = xdoc.createElement('params')
+    for key, value in _parse_request(request).iteritems():
+        var = xdoc.createElement('var')
+        var.setAttribute('key', key)
+        value = xdoc.createTextNode(str(value))
+        var.appendChild(value)
+        params.appendChild(var)
+    xrequest.appendChild(params)
+
+    # /notice/request/session/var -- check if sessions is enabled..
+    sessions = xdoc.createElement('session')
+    for key, value in _parse_session(request.session).iteritems():
+        var = xdoc.createElement('var')
+        var.setAttribute('key', key)
+        value = xdoc.createTextNode(str(value))
+        var.appendChild(value)
+        sessions.appendChild(var)
+    xrequest.appendChild(sessions)
+
+    # /notice/request/cgi-data/var -- all meta data
+    cgidata = xdoc.createElement('cgi-data')
+    for key, value in _parse_environment(request).iteritems():
+        var = xdoc.createElement('var')
+        var.setAttribute('key', key)
+        value = xdoc.createTextNode(str(value))
+        var.appendChild(value)
+        cgidata.appendChild(var)
+    xrequest.appendChild(cgidata)
+    notice.appendChild(xrequest)
+
+    serverenv = xdoc.createElement('server-environment')
+    # /notice/server-environment/project-root -- default to sys.path[0] 
+    projectroot = xdoc.createElement('project-root')
+    projectroot.appendChild(xdoc.createTextNode(sys.path[0]))
+    serverenv.appendChild(projectroot)
+    # /notice/server-environment/environment-name -- left empty for now
+    envname = xdoc.createElement('environment-name')
+    serverenv.appendChild(envname)
+    notice.appendChild(serverenv)
+
+    return xdoc.toxml('utf-8')
+
+def _ride_the_toad(payload, timeout, use_ssl):
+    """Send a notification (an HTTP POST request) to Hoptoad.
+
+    Parameters:
+    payload -- the XML payload for the request from generate_payload()
+    timeout -- the maximum timeout, in seconds, or None to use the default
+    use_ssl -- whether to POST over HTTPS instead of HTTP
+
+    """
+    headers = { 'Content-Type': 'text/xml' }
+
+    # url calculation
+    url_template = '%s://hoptoadapp.com/notifier_api/v2/notices'
+    notification_url = url_template % ("https" if use_ssl else "http")
+    # allow the settings to override all urls
+    notification_url = get_hoptoad_settings().get('HOPTOAD_NOTIFICATION_URL',
+                                                   notification_url)
+
+    r = urllib2.Request(notification_url, payload, headers)
+    try:
+        if timeout:
+            # the timeout argument requires Python 2.6
+            response = urllib2.urlopen(r, timeout=timeout)
+        else:
+            response = urllib2.urlopen(r)
+    except urllib2.URLError:
+        pass
+    else:
+        # getcode() requires Python 2.6
+        status = response.getcode()
+
+        if status == 403:
+            # if we cannot use SSL, re-invoke without SSL
+            _ride_the_toad(payload, timeout, use_ssl=False)
+        if status == 422:
+            # couldn't send to hoptoad..
+            pass
+        if status == 500:
+            # hoptoad is down
+            pass
+
+def report(payload, timeout):
+    use_ssl = get_hoptoad_settings().get('HOPTOAD_USE_SSL', False)
+    return _ride_the_toad(payload, timeout, use_ssl)
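The minidom construction used by `generate_payload` boils down to a few repeated steps: create an element, attach a text node, append it to its parent. A runnable sketch of the first two nodes of the notice document (the API key is a placeholder, not a real notification):

```python
from xml.dom.minidom import getDOMImplementation

# build an empty <notice> document, as generate_payload does
xdoc = getDOMImplementation().createDocument(None, "notice", None)
notice = xdoc.firstChild

# /notice/@version -- should be 2.0
notice.setAttribute('version', '2.0')

# /notice/api-key -- placeholder value for illustration
api_key = xdoc.createElement('api-key')
api_key.appendChild(xdoc.createTextNode('your-key-here'))
notice.appendChild(api_key)

payload = xdoc.toxml('utf-8')
print(payload)
```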

hoptoad/handlers/__init__.py

+"""Implementations of different handlers that communicate with hoptoad in
+various different protocols.
+"""
+import logging
+
+from hoptoad import get_hoptoad_settings
+from hoptoad.handlers.threaded import ThreadedNotifier
+
+logger = logging.getLogger(__name__)
+
+
+def get_handler(*args, **kwargs):
+    """Returns an initialized handler object"""
+    hoptoad_settings = get_hoptoad_settings()
+    handler = hoptoad_settings.get("HOPTOAD_HANDLER", "threadpool")
+    if handler.lower() == 'threadpool':
+        threads = hoptoad_settings.get("HOPTOAD_THREAD_COUNT", 4)
+        return ThreadedNotifier(threads, *args, **kwargs)

hoptoad/handlers/threaded.py

+import os
+import sys
+import threading
+import time
+import logging
+
+from hoptoad.api import htv2
+
+from hoptoad.handlers.utils.threadpool import WorkRequest, ThreadPool
+from hoptoad.handlers.utils.threadpool import NoResultsPending
+
+
+logger = logging.getLogger(__name__)
+
+
+def _exception_handler(request, exc_info):
+    """Rudimentary exception handler, simply log and moves on.
+
+    If there's no tuple, it means something went really wrong. Critically log
+    and exit.
+
+    """
+    if not isinstance(exc_info, tuple):
+        logger.critical(str(request))
+        logger.critical(str(exc_info))
+        sys.exit(1)
+    logger.warn(
+        "* Exception occurred in request #%s: %s" % (request.requestID, exc_info)
+    )
+
+
+class ThreadedNotifier(threading.Thread):
+    """A daemon thread that spawns a threadpool of worker threads.
+
+    Waits for queue additions through the enqueue method.
+
+    """
+    def __init__(self, threadpool_threadcount, cb=None, exc_cb=None):
+        _threadname = "Hoptoad%s-%d" % (self.__class__.__name__, os.getpid())
+        threading.Thread.__init__(self, name=_threadname)
+        self.threads = threadpool_threadcount
+        self.daemon = True # daemon thread... important!
+        self.callback = cb
+        self.exc_callback = exc_cb or _exception_handler
+        self.pool = ThreadPool(self.threads)
+        # start the thread pool
+        self.start()
+
+    def enqueue(self, payload, timeout):
+        request = WorkRequest(
+            htv2.report,
+            args=(payload, timeout),
+            callback=self.callback,
+            exc_callback=self.exc_callback
+        )
+
+        # Put the request into the queue where the detached 'run' method will
+        # poll its queue every 0.5 seconds and start working.
+        self.pool.putRequest(request)
+
+    def run(self):
+        """Actively poll the queue for requests and process them."""
+        while True:
+            try:
+                time.sleep(0.5) # TODO: configure for tuning
+                self.pool.poll()
+            except KeyboardInterrupt:
+                logger.info("* Interrupted!")
+                break
+            except NoResultsPending:
+                pass
+
+
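The enqueue/worker pattern of `ThreadedNotifier` can be sketched with the standard library's queue and a single daemon thread. This is a modern-Python illustration of the pattern, not the module's actual thread pool; `results` stands in for the side effect of `htv2.report`:

```python
import queue
import threading

results = []
work = queue.Queue()


def worker():
    # drain the queue of payloads, like the pool's worker threads do
    while True:
        payload = work.get()
        if payload is None:      # sentinel tells the worker to stop
            break
        results.append(payload)  # stand-in for the real report() call


t = threading.Thread(target=worker, daemon=True)
t.start()

# enqueue work, then shut the worker down and wait for it
for p in ('payload-1', 'payload-2'):
    work.put(p)
work.put(None)
t.join()

print(results)  # ['payload-1', 'payload-2']
```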

hoptoad/handlers/utils/__init__.py

Empty file added.

hoptoad/handlers/utils/threadpool.py

+# -*- coding: UTF-8 -*-
+"""Easy to use object-oriented thread pool framework.
+
+A thread pool is an object that maintains a pool of worker threads to perform
+time consuming operations in parallel. It assigns jobs to the threads
+by putting them in a work request queue, where they are picked up by the
+next available thread. This then performs the requested operation in the
+background and puts the results in another queue.
+
+The thread pool object can then collect the results from all threads from
+this queue as soon as they become available or after all threads have
+finished their work. It's also possible, to define callbacks to handle
+each result as it comes in.
+
+The basic concept and some code was taken from the book "Python in a Nutshell,
+2nd edition" by Alex Martelli, O'Reilly 2006, ISBN 0-596-10046-9, from section
+14.5 "Threaded Program Architecture". I wrapped the main program logic in the
+ThreadPool class, added the WorkRequest class and the callback system and
+tweaked the code here and there. Kudos also to Florent Aide for the exception
+handling mechanism.
+
+Basic usage::
+
+    >>> pool = ThreadPool(poolsize)
+    >>> requests = makeRequests(some_callable, list_of_args, callback)
+    >>> [pool.putRequest(req) for req in requests]
+    >>> pool.wait()
+
+See the end of the module code for a brief, annotated usage example.
+
+Website : http://chrisarndt.de/projects/threadpool/
+
+"""
+__docformat__ = "restructuredtext en"
+
+__all__ = [
+    'makeRequests',
+    'NoResultsPending',
+    'NoWorkersAvailable',
+    'ThreadPool',
+    'WorkRequest',
+    'WorkerThread'
+]
+
+__author__ = "Christopher Arndt"
+__version__ = '1.2.7'
+__revision__ = "$Revision: 416 $"
+__date__ = "$Date: 2009-10-07 05:41:27 +0200 (Wed, 07 Oct 2009) $"
+__license__ = "MIT license"
+
+
+# standard library modules
+import sys
+import threading
+import Queue
+import traceback
+
+
+# exceptions
+class NoResultsPending(Exception):
+    """All work requests have been processed."""
+    pass
+
+class NoWorkersAvailable(Exception):
+    """No worker threads available to process remaining requests."""
+    pass
+
+
+# internal module helper functions
+def _handle_thread_exception(request, exc_info):
+    """Default exception handler callback function.
+
+    This just prints the exception info via ``traceback.print_exception``.
+
+    """
+    traceback.print_exception(*exc_info)
+
+
+# utility functions
+def makeRequests(callable_, args_list, callback=None,
+        exc_callback=_handle_thread_exception):
+    """Create several work requests for same callable with different arguments.
+
+    Convenience function for creating several work requests for the same
+    callable where each invocation of the callable receives different values
+    for its arguments.
+
+    ``args_list`` contains the parameters for each invocation of callable.
+    Each item in ``args_list`` should be either a 2-item tuple of the list of
+    positional arguments and a dictionary of keyword arguments or a single,
+    non-tuple argument.
+
+    See docstring for ``WorkRequest`` for info on ``callback`` and
+    ``exc_callback``.
+
+    """
+    requests = []
+    for item in args_list:
+        if isinstance(item, tuple):
+            requests.append(
+                WorkRequest(callable_, item[0], item[1], callback=callback,
+                    exc_callback=exc_callback)
+            )
+        else:
+            requests.append(
+                WorkRequest(callable_, [item], None, callback=callback,
+                    exc_callback=exc_callback)
+            )
+    return requests
+
+
+# classes
+class WorkerThread(threading.Thread):
+    """Background thread connected to the requests/results queues.
+
+    A worker thread sits in the background and picks up work requests from
+    one queue and puts the results in another until it is dismissed.
+
+    """
+
+    def __init__(self, requests_queue, results_queue, poll_timeout=5, **kwds):
+        """Set up thread in daemonic mode and start it immediatedly.
+
+        ``requests_queue`` and ``results_queue`` are instances of
+        ``Queue.Queue`` passed by the ``ThreadPool`` class when it creates a new
+        worker thread.
+
+        """
+        threading.Thread.__init__(self, **kwds)
+        self.setDaemon(1)
+        self._requests_queue = requests_queue
+        self._results_queue = results_queue
+        self._poll_timeout = poll_timeout
+        self._dismissed = threading.Event()
+        self.start()
+
+    def run(self):
+        """Repeatedly process the job queue until told to exit."""
+        while True:
+            if self._dismissed.isSet():
+                # we are dismissed, break out of loop
+                break
+            # get next work request. If we don't get a new request from the
+            # queue after self._poll_timeout seconds, we jump to the start of
+            # the while loop again, to give the thread a chance to exit.
+            try:
+                request = self._requests_queue.get(True, self._poll_timeout)
+            except Queue.Empty:
+                continue
+            else:
+                if self._dismissed.isSet():
+                    # we are dismissed, put back request in queue and exit loop
+                    self._requests_queue.put(request)
+                    break
+                try:
+                    result = request.callable(*request.args, **request.kwds)
+                    self._results_queue.put((request, result))
+                except:
+                    request.exception = True
+                    self._results_queue.put((request, sys.exc_info()))
+
+    def dismiss(self):
+        """Sets a flag to tell the thread to exit when done with current job."""
+        self._dismissed.set()
+
+
+class WorkRequest:
+    """A request to execute a callable for putting in the request queue later.
+
+    See the module function ``makeRequests`` for the common case
+    where you want to build several ``WorkRequest`` objects for the same
+    callable but with different arguments for each call.
+
+    """
+
+    def __init__(self, callable_, args=None, kwds=None, requestID=None,
+            callback=None, exc_callback=_handle_thread_exception):
+        """Create a work request for a callable and attach callbacks.
+
+        A work request consists of a callable to be executed by a
+        worker thread, a list of positional arguments, and a dictionary
+        of keyword arguments.
+
+        A ``callback`` function can be specified, that is called when the
+        results of the request are picked up from the result queue. It must
+        accept two anonymous arguments, the ``WorkRequest`` object and the
+        results of the callable, in that order. If you want to pass additional
+        information to the callback, just stick it on the request object.
+
+        You can also give a custom callback for when an exception occurs with
+        the ``exc_callback`` keyword parameter. It should also accept two
+        anonymous arguments, the ``WorkRequest`` and a tuple with the exception
+        details as returned by ``sys.exc_info()``. The default implementation
+        of this callback just prints the exception info via
+        ``traceback.print_exception``. If you want no exception handler
+        callback, just pass in ``None``.
+
+        ``requestID``, if given, must be hashable since it is used by the
+        ``ThreadPool`` object to store the results of that work request in a
+        dictionary. It defaults to the return value of ``id(self)``.
+
+        """
+        if requestID is None:
+            self.requestID = id(self)
+        else:
+            try:
+                self.requestID = hash(requestID)
+            except TypeError:
+                raise TypeError("requestID must be hashable.")
+        self.exception = False
+        self.callback = callback
+        self.exc_callback = exc_callback
+        self.callable = callable_
+        self.args = args or []
+        self.kwds = kwds or {}
+
+    def __str__(self):
+        return "<WorkRequest id=%s args=%r kwargs=%r exception=%s>" % \
+            (self.requestID, self.args, self.kwds, self.exception)
+
+class ThreadPool:
+    """A thread pool, distributing work requests and collecting results.
+
+    See the module docstring for more information.
+
+    """
+
+    def __init__(self, num_workers, q_size=0, resq_size=0, poll_timeout=5):
+        """Set up the thread pool and start num_workers worker threads.
+
+        ``num_workers`` is the number of worker threads to start initially.
+
+        If ``q_size > 0`` the size of the work *request queue* is limited and
+        the thread pool blocks when the queue is full and it tries to put
+        more work requests in it (see ``putRequest`` method), unless you also
+        use a positive ``timeout`` value for ``putRequest``.
+
+        If ``resq_size > 0`` the size of the *results queue* is limited and the
+        worker threads will block when the queue is full and they try to put
+        new results in it.
+
+        .. warning:
+            If you set both ``q_size`` and ``resq_size`` to ``!= 0`` there is
+            the possibility of a deadlock, when the results queue is not pulled
+            regularly and too many jobs are put in the work requests queue.
+            To prevent this, always set ``timeout > 0`` when calling
+            ``ThreadPool.putRequest()`` and catch ``Queue.Full`` exceptions.
+
+        """
+        self._requests_queue = Queue.Queue(q_size)
+        self._results_queue = Queue.Queue(resq_size)
+        self.workers = []
+        self.dismissedWorkers = []
+        self.workRequests = {}
+        self.createWorkers(num_workers, poll_timeout)
+
+    def createWorkers(self, num_workers, poll_timeout=5):
+        """Add num_workers worker threads to the pool.
+
+        ``poll_timeout`` sets the interval in seconds (int or float) for how
+        often threads should check whether they are dismissed, while waiting for
+        requests.
+
+        """
+        for i in range(num_workers):
+            self.workers.append(WorkerThread(self._requests_queue,
+                self._results_queue, poll_timeout=poll_timeout))
+
+    def dismissWorkers(self, num_workers, do_join=False):
+        """Tell num_workers worker threads to quit after their current task."""
+        dismiss_list = []
+        for i in range(min(num_workers, len(self.workers))):
+            worker = self.workers.pop()
+            worker.dismiss()
+            dismiss_list.append(worker)
+
+        if do_join:
+            for worker in dismiss_list:
+                worker.join()
+        else:
+            self.dismissedWorkers.extend(dismiss_list)
+
+    def joinAllDismissedWorkers(self):
+        """Perform Thread.join() on all worker threads that have been dismissed.
+        """
+        for worker in self.dismissedWorkers:
+            worker.join()
+        self.dismissedWorkers = []
+
+    def putRequest(self, request, block=True, timeout=None):
+        """Put work request into work queue and save its id for later."""
+        assert isinstance(request, WorkRequest)
+        # don't reuse old work requests
+        assert not getattr(request, 'exception', None)
+        self._requests_queue.put(request, block, timeout)
+        self.workRequests[request.requestID] = request
+
+    def poll(self, block=False):
+        """Process any new results in the queue."""
+        while True:
+            # still results pending?
+            if not self.workRequests:
+                raise NoResultsPending
+            # are there still workers to process remaining requests?
+            elif block and not self.workers:
+                raise NoWorkersAvailable
+            try:
+                # get back next results
+                request, result = self._results_queue.get(block=block)
+                # has an exception occurred?
+                if request.exception and request.exc_callback:
+                    request.exc_callback(request, result)
+                # hand results to callback, if any
+                if request.callback and not \
+                       (request.exception and request.exc_callback):
+                    request.callback(request, result)
+                del self.workRequests[request.requestID]
+            except Queue.Empty:
+                break
+
+    def wait(self):
+        """Wait for results, blocking until all have arrived."""
+        while 1:
+            try:
+                self.poll(True)
+            except NoResultsPending:
+                break
+
+
+################
+# USAGE EXAMPLE
+################
+
+if __name__ == '__main__':
+    import random
+    import time
+
+    # the work the threads will have to do (rather trivial in our example)
+    def do_something(data):
+        time.sleep(random.randint(1,5))
+        result = round(random.random() * data, 5)
+        # just to show off, we throw an exception once in a while
+        if result > 5:
+            raise RuntimeError("Something extraordinary happened!")
+        return result
+
+    # this will be called each time a result is available
+    def print_result(request, result):
+        print "**** Result from request #%s: %r" % (request.requestID, result)
+
+    # this will be called when an exception occurs within a thread
+    # this example exception handler does little more than the default handler
+    def handle_exception(request, exc_info):
+        if not isinstance(exc_info, tuple):
+            # Something is seriously wrong...
+            print request
+            print exc_info
+            raise SystemExit
+        print "**** Exception occurred in request #%s: %s" % \
+          (request.requestID, exc_info)
+
+    # assemble the arguments for each job to a list...
+    data = [random.randint(1,10) for i in range(20)]
+    # ... and build a WorkRequest object for each item in data
+    requests = makeRequests(do_something, data, print_result, handle_exception)
+    # to use the default exception handler, uncomment next line and comment out
+    # the preceding one.
+    #requests = makeRequests(do_something, data, print_result)
+
+    # or the other form of args_list accepted by makeRequests: ((args,), {kwds})
+    data = [((random.randint(1,10),), {}) for i in range(20)]
+    requests.extend(
+        makeRequests(do_something, data, print_result, handle_exception)
+        # to use the default exception handler, uncomment the next line and
+        # comment out the preceding call.
+        #makeRequests(do_something, data, print_result)
+    )
+
+    # we create a pool of 3 worker threads
+    print "Creating thread pool with 3 worker threads."
+    main = ThreadPool(3)
+
+    # then we put the work requests in the queue...
+    for req in requests:
+        main.putRequest(req)
+        print "Work request #%s added." % req.requestID
+    # or shorter:
+    # [main.putRequest(req) for req in requests]
+
+    # ...and wait for the results to arrive in the result queue
+    # by using ThreadPool.wait(). This would block until results for
+    # all work requests have arrived:
+    # main.wait()
+
+    # instead we can poll for results while doing something else:
+    i = 0
+    while True:
+        try:
+            time.sleep(0.5)
+            main.poll()
+            print "Main thread working...",
+            print "(active worker threads: %i)" % (threading.activeCount()-1, )
+            if i == 10:
+                print "**** Adding 3 more worker threads..."
+                main.createWorkers(3)
+            if i == 20:
+                print "**** Dismissing 2 worker threads..."
+                main.dismissWorkers(2)
+            i += 1
+        except KeyboardInterrupt:
+            print "**** Interrupted!"
+            break
+        except NoResultsPending:
+            print "**** No pending results."
+            break
+    if main.dismissedWorkers:
+        print "Joining all dismissed worker threads..."
+        main.joinAllDismissedWorkers()
+
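The deadlock warning in ``ThreadPool.__init__`` above is worth making concrete. The sketch below is hypothetical and uses Python 3's stdlib ``queue``/``threading`` directly (the module above targets Python 2's ``Queue``), but it illustrates the recommended pattern: with a bounded request queue, always ``put()`` with a timeout and handle ``Full`` instead of blocking forever.

```python
import queue
import threading

requests_q = queue.Queue(maxsize=2)   # bounded work request queue
results_q = queue.Queue()             # unbounded, so the worker never blocks on results

def worker():
    while True:
        job = requests_q.get()
        if job is None:               # sentinel: shut the worker down
            break
        results_q.put(job * job)      # the "work": square the number

t = threading.Thread(target=worker, daemon=True)
t.start()

submitted, dropped = [], []
for job in range(10):
    try:
        # never block indefinitely on a bounded queue
        requests_q.put(job, timeout=0.5)
        submitted.append(job)
    except queue.Full:
        dropped.append(job)           # back off or retry instead of deadlocking

requests_q.put(None)                  # stop the worker
t.join()

results = []
while not results_q.empty():
    results.append(results_q.get())
```

The same discipline applies to ``ThreadPool.putRequest(request, block=True, timeout=...)``: pass a positive ``timeout`` and catch ``Queue.Full`` whenever ``q_size`` is nonzero.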

hoptoad/middleware.py

-import sys
-import traceback
-import urllib2
-import yaml
 import re
-import os
-import threading
 import logging
-import time
-
-from threadpool import WorkRequest, ThreadPool
-from threadpool import NoResultsPending
+import itertools
 
 from django.core.exceptions import MiddlewareNotUsed
-from django.views.debug import get_safe_settings
 from django.conf import settings
 
+from hoptoad import get_hoptoad_settings
+from hoptoad.handlers import get_handler
+from hoptoad.api import htv2
+
 
 logger = logging.getLogger(__name__)
 
-def _parse_environment(request):
-    """Return an environment mapping for a notification from the given request."""
-    env = dict( (str(k), str(v)) for (k, v) in get_safe_settings().items() )
-    env.update( dict( (str(k), str(v)) for (k, v) in request.META.items() ) )
-    
-    env['REQUEST_URI'] = request.build_absolute_uri()
-    
-    return env
-
-def _parse_traceback(trace):
-    """Return the given traceback string formatted for a notification."""
-    p_traceback = [ "%s:%d:in `%s'" % (filename, lineno, funcname) 
-                    for filename, lineno, funcname, _
-                    in traceback.extract_tb(trace) ]
-    p_traceback.reverse()
-    
-    return p_traceback
-
-def _parse_message(exc):
-    """Return a message for a notification from the given exception."""
-    return '%s: %s' % (exc.__class__.__name__, str(exc))
-
-def _parse_request(request):
-    """Return a request mapping for a notification from the given request."""
-    request_get = dict( (str(k), str(v)) for (k, v) in request.GET.items() )
-    request_post = dict( (str(k), str(v)) for (k, v) in request.POST.items() )
-    return request_post if request_post else request_get
-
-def _parse_session(session):
-    """Return a request mapping for a notification from the given session."""
-    return dict( (str(k), str(v)) for (k, v) in session.items() )
-
-
-def _generate_payload(request, exc=None, trace=None, message=None, error_class=None):
-    """Generate a YAML payload for a Hoptoad notification.
-    
-    Parameters:
-    request -- A Django HTTPRequest.  This is required.
-    
-    Keyword parameters:
-    exc -- A Python Exception object.  If this is not given, the
-           message parameter must be.
-    trace -- A Python Traceback object.  This is not required.
-    message -- A string representing the error message.  If this is not
-               given, the exc parameter must be.
-    error_class -- A string representing the error class.  If this is not
-                   given, the exc parameter must be.
-    """
-    p_message = message if message else _parse_message(exc)
-    p_error_class = error_class if error_class else exc.__class__.__name__
-    p_traceback = _parse_traceback(trace) if trace else []
-    p_environment = _parse_environment(request)
-    p_request = _parse_request(request)
-    p_session = _parse_session(request.session)
-    
-    return yaml.dump({ 'notice': {
-        'api_key':       settings.HOPTOAD_API_KEY,
-        'error_class':   p_error_class,
-        'error_message': p_message,
-        'backtrace':     p_traceback,
-        'request':       { 'url': request.build_absolute_uri(),
-                           'params': p_request },
-        'session':       { 'key': '', 'data': p_session },
-        'environment':   p_environment,
-    }}, default_flow_style=False)
-
-def _ride_the_toad(payload, timeout):
-    """Send a notification (an HTTP POST request) to Hoptoad.
-    
-    Parameters:
-    payload -- the YAML payload for the request from _generate_payload()
-    timeout -- the maximum timeout, in seconds, or None to use the default
-    """
-    headers = { 'Content-Type': 'application/x-yaml', 
-                'Accept': 'text/xml, application/xml', }
-    r = urllib2.Request('http://hoptoadapp.com/notices', payload, headers)
-    try:
-        if timeout:
-            urllib2.urlopen(r, timeout=timeout)
-        else:
-            urllib2.urlopen(r)
-    except urllib2.URLError:
-        pass
-
-def _exception_handler(request, exc_info):
-    """Rudimentary exception handler; simply logs and moves on.
-    
-    If there's no tuple, it means something went really wrong. Critically log
-    and exit.
-    """
-    if not isinstance(exc_info, tuple):
-        logger.critical(str(request))
-        logger.critical(str(exc_info))
-        sys.exit(1)
-    logger.warn(
-        "* Exception occured in request #%s: %s" % (request.requestID, exc_info)
-    )
-
-
-class Runnable(threading.Thread):
-    """A daemon thread that spawns a threadpool of worker threads.
-    
-    Waits for queue additions through the enqueue method.
-    
-    # TODO: Consider using asyncore instead of a threadpool
-    """
-    def __init__(self, threadpool_threadcount):
-        threading.Thread.__init__(self,
-            name="HoptoadThreadRunner-%d" % os.getpid())
-        
-        self.threads = threadpool_threadcount
-        self.daemon = True # daemon thread... important!
-        self.pool = ThreadPool(self.threads)
-    
-    def enqueue(self, payload, timeout, callback=None, exc_callback=_exception_handler):
-        request = WorkRequest(
-            _ride_the_toad,
-            args=(payload, timeout),
-            callback=callback,
-            exc_callback=exc_callback
-        )
-        
-        # Put the request into the queue where the detached 'run' method will
-        # poll its queue every 0.5 seconds and start working.
-        self.pool.putRequest(request)
-    
-    def run(self):
-        """Actively poll the queue for requests and process them."""
-        while True:
-            try:
-                time.sleep(0.5) # TODO: configure for tuning
-                self.pool.poll()
-            except KeyboardInterrupt:
-                logger.info("* Interrupted!")
-                break
-            except NoResultsPending:
-                pass
-    
 
 class HoptoadNotifierMiddleware(object):
     def __init__(self):
         """Initialize the middleware."""
-        all_settings = dir(settings)
-        
-        if 'HOPTOAD_API_KEY' not in all_settings or not settings.HOPTOAD_API_KEY:
+
+        hoptoad_settings = get_hoptoad_settings()
+        self._init_middleware(hoptoad_settings)
+
+    def _init_middleware(self, hoptoad_settings):
+
+        if 'HOPTOAD_API_KEY' not in hoptoad_settings:
+            # no api key, abort!
             raise MiddlewareNotUsed
-        
-        if settings.DEBUG and \
-           (not 'HOPTOAD_NOTIFY_WHILE_DEBUG' in all_settings
-            or not settings.HOPTOAD_NOTIFY_WHILE_DEBUG ):
-            raise MiddlewareNotUsed
-        
-        self.timeout = ( settings.HOPTOAD_TIMEOUT 
-                         if 'HOPTOAD_TIMEOUT' in all_settings else None )
-        
-        self.notify_404 = ( settings.HOPTOAD_NOTIFY_404 
-                            if 'HOPTOAD_NOTIFY_404' in all_settings else False )
-        self.notify_403 = ( settings.HOPTOAD_NOTIFY_403 
-                            if 'HOPTOAD_NOTIFY_403' in all_settings else False )
-        self.ignore_agents = ( map(re.compile, settings.HOPTOAD_IGNORE_AGENTS)
-                            if 'HOPTOAD_IGNORE_AGENTS' in all_settings else [] )
-            
-        # Creates a self.thread attribute and starts it.
-        self.initialize_threadpool(all_settings)
-    
+
+        if settings.DEBUG:
+            if not hoptoad_settings.get('HOPTOAD_NOTIFY_WHILE_DEBUG', None):
+                # do not use Hoptoad while in debug mode unless explicitly enabled
+                raise MiddlewareNotUsed
+
+        self.timeout = hoptoad_settings.get('HOPTOAD_TIMEOUT', None)
+        self.notify_404 = hoptoad_settings.get('HOPTOAD_NOTIFY_404', False)
+        self.notify_403 = hoptoad_settings.get('HOPTOAD_NOTIFY_403', False)
+
+        ignorable_agents = hoptoad_settings.get('HOPTOAD_IGNORE_AGENTS', [])
+        self.ignore_agents = map(re.compile, ignorable_agents)
+
+        self.handler = get_handler()
+
     def _ignore(self, request):
         """Return True if the given request should be ignored, False otherwise."""
         ua = request.META.get('HTTP_USER_AGENT', '')
         return any(i.search(ua) for i in self.ignore_agents)
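Since ``_ignore`` uses ``search()`` rather than ``match()``, a pattern anywhere in the ``User-Agent`` header is enough to suppress a notification. A small standalone sketch of the same logic (the agent patterns here are hypothetical examples, not defaults shipped with the middleware):

```python
import re

# Hypothetical HOPTOAD_IGNORE_AGENTS values; the middleware compiles each
# entry with re.compile and skips a request if any compiled pattern
# search()-matches the request's HTTP_USER_AGENT header.
ignorable_agents = [r'Googlebot', r'Yahoo! Slurp']
ignore_agents = [re.compile(p) for p in ignorable_agents]

def should_ignore(user_agent):
    # mirrors HoptoadNotifierMiddleware._ignore
    return any(p.search(user_agent) for p in ignore_agents)
```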
-    
-    def initialize_threadpool(self, all_settings):
-        """Initialize an internal threadpool for asynchronous POST requests.
-        
-        Also creates a thread attribute and starts the threadpool.
-        """
-        
-        if 'HOPTOAD_THREAD_COUNT' in all_settings:
-            threads = settings.HOPTOAD_THREAD_COUNT
-        else:
-            threads = 4
-        
-        self.thread = Runnable(threads)
-        self.thread.start()
-    
+
     def process_response(self, request, response):
        """Process a response object.
-        
+
         Hoptoad will be notified of a 404 error if the response is a 404
         and 404 tracking is enabled in the settings.
-        
+
         Hoptoad will be notified of a 403 error if the response is a 403
         and 403 tracking is enabled in the settings.
-        
+
        Regardless of whether Hoptoad is notified, the response object will
         be returned unchanged.
+
         """
         if self._ignore(request):
             return response
-        
-        if self.notify_404 and response.status_code == 404:
-            error_class = 'Http404'
-            
-            message = 'Http404: Page not found at %s' % request.build_absolute_uri()
-            payload = _generate_payload(request, error_class=error_class, message=message)
-            
-            self.thread.enqueue(payload, self.timeout)
-        
-        if self.notify_403 and response.status_code == 403:
-            error_class = 'Http403'
-            
-            message = 'Http403: Forbidden at %s' % request.build_absolute_uri()
-            payload = _generate_payload(request, error_class=error_class, message=message)
-            
-            self.thread.enqueue(payload, self.timeout)
-        
+
+        sc = response.status_code
+        if sc in [404, 403] and getattr(self, "notify_%d" % sc):
+            self.handler.enqueue(htv2.generate_payload(request, response=sc),
+                                 self.timeout)
+
         return response
-    
+
     def process_exception(self, request, exc):
         """Process an exception.
-        
+
         Hoptoad will be notified of the exception and None will be
         returned so that Django's normal exception handling will then
         be used.
+
         """
         if self._ignore(request):
             return None
-        
-        excc, _, tb = sys.exc_info()
-        
-        payload = _generate_payload(request, exc, tb)
-        self.thread.enqueue(payload, self.timeout)
-        
+
+        self.handler.enqueue(htv2.generate_payload(request, exc=exc),
+                             self.timeout)
         return None
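For reference, the settings this middleware consults would live in a project's ``settings.py`` along these lines. The values below are illustrative placeholders; only ``HOPTOAD_API_KEY`` is required, and the middleware itself must still be listed last in ``MIDDLEWARE_CLASSES`` as shown in the README:

```python
# settings.py (illustrative values; only HOPTOAD_API_KEY is required)
HOPTOAD_API_KEY = 'your-hoptoad-api-key'  # placeholder, not a real key
HOPTOAD_NOTIFY_WHILE_DEBUG = False  # stay silent while DEBUG = True
HOPTOAD_TIMEOUT = 5                 # seconds before a notification POST is abandoned
HOPTOAD_NOTIFY_404 = True           # also report Http404 responses
HOPTOAD_NOTIFY_403 = True           # also report Http403 responses
HOPTOAD_IGNORE_AGENTS = [r'MSIE']   # regexes matched against User-Agent
```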
 

hoptoad/tests.py

File contents unchanged.

hoptoad/threadpool.py

-# -*- coding: UTF-8 -*-
-"""Easy to use object-oriented thread pool framework.
-
-A thread pool is an object that maintains a pool of worker threads to perform
-time consuming operations in parallel. It assigns jobs to the threads
-by putting them in a work request queue, where they are picked up by the
-next available thread. This then performs the requested operation in the
-background and puts the results in another queue.
-
-The thread pool object can then collect the results from all threads from
-this queue as soon as they become available or after all threads have
-finished their work. It's also possible, to define callbacks to handle
-each result as it comes in.
-
-The basic concept and some code was taken from the book "Python in a Nutshell,
-2nd edition" by Alex Martelli, O'Reilly 2006, ISBN 0-596-10046-9, from section
-14.5 "Threaded Program Architecture". I wrapped the main program logic in the
-ThreadPool class, added the WorkRequest class and the callback system and
-tweaked the code here and there. Kudos also to Florent Aide for the exception
-handling mechanism.
-
-Basic usage::
-
-    >>> pool = ThreadPool(poolsize)
-    >>> requests = makeRequests(some_callable, list_of_args, callback)
-    >>> [pool.putRequest(req) for req in requests]
-    >>> pool.wait()
-
-See the end of the module code for a brief, annotated usage example.
-
-Website : http://chrisarndt.de/projects/threadpool/
-
-"""
-__docformat__ = "restructuredtext en"
-
-__all__ = [
-    'makeRequests',
-    'NoResultsPending',
-    'NoWorkersAvailable',
-    'ThreadPool',
-    'WorkRequest',
-    'WorkerThread'
-]
-
-__author__ = "Christopher Arndt"
-__version__ = '1.2.7'
-__revision__ = "$Revision: 416 $"
-__date__ = "$Date: 2009-10-07 05:41:27 +0200 (Wed, 07 Oct 2009) $"
-__license__ = "MIT license"
-
-
-# standard library modules
-import sys
-import threading
-import Queue
-import traceback
-
-
-# exceptions
-class NoResultsPending(Exception):
-    """All work requests have been processed."""
-    pass
-
-class NoWorkersAvailable(Exception):
-    """No worker threads available to process remaining requests."""
-    pass
-
-
-# internal module helper functions
-def _handle_thread_exception(request, exc_info):
-    """Default exception handler callback function.
-
-    This just prints the exception info via ``traceback.print_exception``.
-
-    """
-    traceback.print_exception(*exc_info)
-
-
-# utility functions
-def makeRequests(callable_, args_list, callback=None,
-        exc_callback=_handle_thread_exception):
-    """Create several work requests for same callable with different arguments.
-
-    Convenience function for creating several work requests for the same
-    callable where each invocation of the callable receives different values
-    for its arguments.
-
-    ``args_list`` contains the parameters for each invocation of callable.
-    Each item in ``args_list`` should be either a 2-item tuple of the list of
-    positional arguments and a dictionary of keyword arguments or a single,
-    non-tuple argument.
-
-    See docstring for ``WorkRequest`` for info on ``callback`` and
-    ``exc_callback``.
-
-    """
-    requests = []
-    for item in args_list:
-        if isinstance(item, tuple):
-            requests.append(
-                WorkRequest(callable_, item[0], item[1], callback=callback,
-                    exc_callback=exc_callback)
-            )
-        else:
-            requests.append(
-                WorkRequest(callable_, [item], None, callback=callback,
-                    exc_callback=exc_callback)
-            )
-    return requests
-
-
-# classes
-class WorkerThread(threading.Thread):
-    """Background thread connected to the requests/results queues.
-
-    A worker thread sits in the background and picks up work requests from
-    one queue and puts the results in another until it is dismissed.
-
-    """
-
-    def __init__(self, requests_queue, results_queue, poll_timeout=5, **kwds):
-        """Set up thread in daemonic mode and start it immediately.
-
-        ``requests_queue`` and ``results_queue`` are instances of
-        ``Queue.Queue`` passed by the ``ThreadPool`` class when it creates a new
-        worker thread.
-
-        """
-        threading.Thread.__init__(self, **kwds)
-        self.setDaemon(1)
-        self._requests_queue = requests_queue
-        self._results_queue = results_queue
-        self._poll_timeout = poll_timeout
-        self._dismissed = threading.Event()
-        self.start()
-
-    def run(self):
-        """Repeatedly process the job queue until told to exit."""
-        while True:
-            if self._dismissed.isSet():
-                # we are dismissed, break out of loop
-                break
-            # get next work request. If we don't get a new request from the
-            # queue after self._poll_timout seconds, we jump to the start of
-            # the while loop again, to give the thread a chance to exit.
-            try:
-                request = self._requests_queue.get(True, self._poll_timeout)
-            except Queue.Empty:
-                continue
-            else:
-                if self._dismissed.isSet():
-                    # we are dismissed, put back request in queue and exit loop
-                    self._requests_queue.put(request)
-                    break
-                try:
-                    result = request.callable(*request.args, **request.kwds)
-                    self._results_queue.put((request, result))
-                except:
-                    request.exception = True
-                    self._results_queue.put((request, sys.exc_info()))
-
-    def dismiss(self):
-        """Sets a flag to tell the thread to exit when done with current job."""
-        self._dismissed.set()
-
-
-class WorkRequest:
-    """A request to execute a callable for putting in the request queue later.
-
-    See the module function ``makeRequests`` for the common case
-    where you want to build several ``WorkRequest`` objects for the same
-    callable but with different arguments for each call.
-
-    """
-
-    def __init__(self, callable_, args=None, kwds=None, requestID=None,
-            callback=None, exc_callback=_handle_thread_exception):
-        """Create a work request for a callable and attach callbacks.
-
-        A work request consists of a callable to be executed by a
-        worker thread, a list of positional arguments, and a dictionary
-        of keyword arguments.
-
-        A ``callback`` function can be specified, that is called when the
-        results of the request are picked up from the result queue. It must
-        accept two anonymous arguments, the ``WorkRequest`` object and the
-        results of the callable, in that order. If you want to pass additional
-        information to the callback, just stick it on the request object.
-
-        You can also give a custom callback for when an exception occurs with
-        the ``exc_callback`` keyword parameter. It should also accept two
-        anonymous arguments, the ``WorkRequest`` and a tuple with the exception
-        details as returned by ``sys.exc_info()``. The default implementation
-        of this callback just prints the exception info via
-        ``traceback.print_exception``. If you want no exception handler
-        callback, just pass in ``None``.
-
-        ``requestID``, if given, must be hashable since it is used by
-        ``ThreadPool`` object to store the results of that work request in a
-        dictionary. It defaults to the return value of ``id(self)``.
-
-        """
-        if requestID is None:
-            self.requestID = id(self)
-        else:
-            try:
-                self.requestID = hash(requestID)
-            except TypeError:
-                raise TypeError("requestID must be hashable.")
-        self.exception = False
-        self.callback = callback
-        self.exc_callback = exc_callback
-        self.callable = callable_
-        self.args = args or []
-        self.kwds = kwds or {}
-
-    def __str__(self):
-        return "<WorkRequest id=%s args=%r kwargs=%r exception=%s>" % \
-            (self.requestID, self.args, self.kwds, self.exception)
-
-class ThreadPool:
-    """A thread pool, distributing work requests and collecting results.
-
-    See the module docstring for more information.
-
-    """
-
-    def __init__(self, num_workers, q_size=0, resq_size=0, poll_timeout=5):
-        """Set up the thread pool and start num_workers worker threads.
-
-        ``num_workers`` is the number of worker threads to start initially.
-
-        If ``q_size > 0`` the size of the work *request queue* is limited and
-        the thread pool blocks when the queue is full and it tries to put
-        more work requests in it (see ``putRequest`` method), unless you also
-        use a positive ``timeout`` value for ``putRequest``.
-
-        If ``resq_size > 0`` the size of the *results queue* is limited and the
-        worker threads will block when the queue is full and they try to put
-        new results in it.
-
-        .. warning::
-            If you set both ``q_size`` and ``resq_size`` to nonzero values,
-            there is the possibility of a deadlock when the results queue is
-            not pulled regularly and too many jobs are put into the work
-            requests queue. To prevent this, always set ``timeout > 0`` when
-            calling ``ThreadPool.putRequest()`` and catch ``Queue.Full``
-            exceptions.
-
-        """
-        self._requests_queue = Queue.Queue(q_size)
-        self._results_queue = Queue.Queue(resq_size)
-        self.workers = []
-        self.dismissedWorkers = []
-        self.workRequests = {}
-        self.createWorkers(num_workers, poll_timeout)
-
-    def createWorkers(self, num_workers, poll_timeout=5):
-        """Add num_workers worker threads to the pool.
-
-        ``poll_timeout`` sets the interval in seconds (int or float) at which
-        threads should check whether they have been dismissed while waiting
-        for requests.
-
-        """
-        for i in range(num_workers):
-            self.workers.append(WorkerThread(self._requests_queue,
-                self._results_queue, poll_timeout=poll_timeout))
-
-    def dismissWorkers(self, num_workers, do_join=False):
-        """Tell num_workers worker threads to quit after their current task."""
-        dismiss_list = []
-        for i in range(min(num_workers, len(self.workers))):
-            worker = self.workers.pop()
-            worker.dismiss()
-            dismiss_list.append(worker)
-
-        if do_join:
-            for worker in dismiss_list:
-                worker.join()
-        else:
-            self.dismissedWorkers.extend(dismiss_list)
-
-    def joinAllDismissedWorkers(self):
-        """Perform Thread.join() on all worker threads that have been dismissed.
-        """
-        for worker in self.dismissedWorkers:
-            worker.join()
-        self.dismissedWorkers = []
-
-    def putRequest(self, request, block=True, timeout=None):
-        """Put work request into work queue and save its id for later."""
-        assert isinstance(request, WorkRequest)
-        # don't reuse old work requests
-        assert not getattr(request, 'exception', None)
-        self._requests_queue.put(request, block, timeout)
-        self.workRequests[request.requestID] = request
-
-    def poll(self, block=False):
-        """Process any new results in the queue."""
-        while True:
-            # still results pending?
-            if not self.workRequests:
-                raise NoResultsPending
-            # are there still workers to process remaining requests?
-            elif block and not self.workers:
-                raise NoWorkersAvailable
-            try:
-                # get back next results
-                request, result = self._results_queue.get(block=block)
-                # has an exception occurred?
-                if request.exception and request.exc_callback:
-                    request.exc_callback(request, result)
-                # hand results to callback, if any
-                if request.callback and not \
-                       (request.exception and request.exc_callback):
-                    request.callback(request, result)
-                del self.workRequests[request.requestID]
-            except Queue.Empty:
-                break
-
-    def wait(self):
-        """Wait for results, blocking until all have arrived."""
-        while 1:
-            try:
-                self.poll(True)
-            except NoResultsPending:
-                break
-
-
-################
-# USAGE EXAMPLE
-################
-
-if __name__ == '__main__':
-    import random
-    import time
-
-    # the work the threads will have to do (rather trivial in our example)
-    def do_something(data):
-        time.sleep(random.randint(1,5))
-        result = round(random.random() * data, 5)
-        # just to show off, we throw an exception once in a while
-        if result > 5:
-            raise RuntimeError("Something extraordinary happened!")
-        return result
-
-    # this will be called each time a result is available
-    def print_result(request, result):
-        print "**** Result from request #%s: %r" % (request.requestID, result)
-
-    # this will be called when an exception occurs within a thread
-    # this example exception handler does little more than the default handler
-    def handle_exception(request, exc_info):
-        if not isinstance(exc_info, tuple):
-            # Something is seriously wrong...
-            print request
-            print exc_info
-            raise SystemExit
-        print "**** Exception occurred in request #%s: %s" % \
-          (request.requestID, exc_info)
-
-    # assemble the arguments for each job to a list...
-    data = [random.randint(1,10) for i in range(20)]
-    # ... and build a WorkRequest object for each item in data
-    requests = makeRequests(do_something, data, print_result, handle_exception)
-    # to use the default exception handler, uncomment next line and comment out
-    # the preceding one.
-    #requests = makeRequests(do_something, data, print_result)
-
-    # or the other form of args_list accepted by makeRequests: ((args,), {kwds})
-    data = [((random.randint(1,10),), {}) for i in range(20)]
-    requests.extend(
-        makeRequests(do_something, data, print_result, handle_exception)
-        #makeRequests(do_something, data, print_result)
-        # to use the default exception handler, uncomment next line and comment
-        # out the preceding one.
-    )
-
-    # we create a pool of 3 worker threads
-    print "Creating thread pool with 3 worker threads."
-    main = ThreadPool(3)
-
-    # then we put the work requests in the queue...
-    for req in requests:
-        main.putRequest(req)
-        print "Work request #%s added." % req.requestID
-    # or shorter:
-    # [main.putRequest(req) for req in requests]
-
-    # ...and wait for the results to arrive in the result queue
-    # by using ThreadPool.wait(). This would block until results for
-    # all work requests have arrived:
-    # main.wait()
-
-    # instead we can poll for results while doing something else:
-    i = 0
-    while True:
-        try:
-            time.sleep(0.5)
-            main.poll()
-            print "Main thread working...",
-            print "(active worker threads: %i)" % (threading.activeCount()-1, )
-            if i == 10:
-                print "**** Adding 3 more worker threads..."
-                main.createWorkers(3)
-            if i == 20:
-                print "**** Dismissing 2 worker threads..."
-                main.dismissWorkers(2)
-            i += 1
-        except KeyboardInterrupt:
-            print "**** Interrupted!"
-            break
-        except NoResultsPending:
-            print "**** No pending results."
-            break
-    if main.dismissedWorkers:
-        print "Joining all dismissed worker threads..."
-        main.joinAllDismissedWorkers()
-

setup.py
 
 setup(
     name='django-hoptoad',
-    version='0.2',
+    version='0.3',
     description='django-hoptoad is some simple Middleware for letting Django-driven websites report their errors to Hoptoad.',
-    long_description=open(os.path.join(os.path.abspath(os.path.dirname(__file__)), 'README')).read(),
+    long_description=open(os.path.join(os.path.abspath(os.path.dirname(__file__)), 'README.rst')).read(),
     author='Steve Losh',
     author_email='steve@stevelosh.com',
     url='http://stevelosh.com/projects/django-hoptoad/',
     packages=find_packages(),
-    requires='pyyaml',
+    install_requires=['pyyaml'],
     classifiers=[
         'Development Status :: 4 - Beta',
         'Environment :: Web Environment',
         'Programming Language :: Python',
         'Programming Language :: Python :: 2.6',
     ],
-)
+)