Anonymous committed 0a96f16 Merge

merge with which_linden tip


Files changed (30)

 d93641aa79ba6bb83134b023ddf4466f91f9f9a4 0.9.4
 6f60cccc3dff2b973f502c9125a98395304b94d4 0.9.5
 3deba93213ac50802682d2867d2ad1d08f9d1427 0.9.6
+6fe4d22d4cdc31fb71cf01318e7c6f09a17f8795 0.9.7
 * Cesar Alaniz, for uncovering bugs of great import
 * the grugq, for contributing patches, suggestions, and use cases
 * Ralf Schmitt, for wsgi/webob incompatibility bug report and suggested fix
-* Benoit Chesneau, bug report on green.os and patch to fix it
+* Benoit Chesneau, bug report on green.os and patch to fix it
+* Slant, better iterator implementation in tpool
+* Ambroff, nice pygtk hub
+0.9.7
+=====
+* GreenPipe is now a context manager (thanks, quad)
+* tpool.Proxy supports iterators properly
+* bug fixes in eventlet.green.os (thanks, Benoit)
+* much code cleanup from Tavis
+* a few more example apps
+* multitudinous improvements in Py3k compatibility from amajorek
+
+
 0.9.6
 =====
 * new EVENTLET_HUB environment variable allows you to select a hub without code
-Getting Started
+Eventlet is a concurrent networking library for Python that allows you to change how you run your code, not how you write it.
+
+It uses epoll or libevent for highly scalable non-blocking I/O.  Coroutines let the developer use a blocking style of programming that is similar to threading, but with the benefits of non-blocking I/O.  The event dispatch is implicit, which means you can easily use Eventlet from the Python interpreter, or as a small part of a larger application.
+
+It's easy to get started using Eventlet, and easy to convert existing 
+applications to use it.  Start off by looking at the `examples`_, 
+`common design patterns`_, and the list of `basic API primitives`_.
+
+.. _examples: http://eventlet.net/doc/examples.html
+.. _common design patterns: http://eventlet.net/doc/design_patterns.html
+.. _basic API primitives: http://eventlet.net/doc/basic_usage.html
+
+Quick Example
 ===============
 
-There's some good documentation up at: http://eventlet.net/doc/
+Here's something you can try right on the command line::
 
-Here's something you can try right on the command line:
+    % python
+    >>> import eventlet 
+    >>> from eventlet.green import urllib2
+    >>> gt = eventlet.spawn(urllib2.urlopen, 'http://eventlet.net')
+    >>> gt2 = eventlet.spawn(urllib2.urlopen, 'http://secondlife.com')
+    >>> gt2.wait()
+    >>> gt.wait()
 
-% python
->>> import eventlet 
->>> from eventlet.green import urllib2
->>> gt = eventlet.spawn(urllib2.urlopen, 'http://eventlet.net')
->>> gt2 = eventlet.spawn(urllib2.urlopen, 'http://secondlife.com')
->>> gt2.wait()
->>> gt.wait()
 
-Also, look at the examples in the examples directory.
+Getting Eventlet
+==================
+
+The easiest way to get Eventlet is to use easy_install or pip::
+
+  easy_install eventlet
+  pip install eventlet
+
+The development `tip`_ is available via easy_install as well::
+
+  easy_install 'eventlet==dev'
+  pip install 'eventlet==dev'
+
+.. _tip: http://bitbucket.org/which_linden/eventlet/get/tip.zip#egg=eventlet-dev
 
 Building the Docs Locally
 =========================

benchmarks/localhost_socket.py

 BYTES=1000
 SIZE=1
 CONCURRENCY=50
+TRIES=5
 
 def reader(sock):
     expect = BYTES
                       default=SIZE)
     parser.add_option('-c', '--concurrency', type='int', dest='concurrency', 
                       default=CONCURRENCY)
+    parser.add_option('-t', '--tries', type='int', dest='tries', 
+                      default=TRIES)
+
     
     opts, args = parser.parse_args()
     BYTES=opts.bytes
     SIZE=opts.size
     CONCURRENCY=opts.concurrency
+    TRIES=opts.tries
     
     funcs = [launch_green_threads]
     if opts.threading:
         funcs = [launch_green_threads, launch_heavy_threads]
-    results = benchmarks.measure_best(3, 3,
+    results = benchmarks.measure_best(TRIES, 3,
                                       lambda: None, lambda: None,
                                       *funcs)
     print "green:", results[launch_green_threads]
 
 # You can set these variables from the command line.
 SPHINXOPTS    =
-SPHINXBUILD   = PYTHONPATH=../:$PYTHONPATH sphinx-build
+SPHINXBUILD   = PYTHONPATH=../:$(PYTHONPATH) sphinx-build
 PAPER         =
 
 # Internal variables.
 
 .. literalinclude:: ../examples/feedscraper.py
 
+.. _forwarder_example:
+
 Port Forwarder
 -----------------------
 ``examples/forwarder.py``
 
+.. literalinclude:: ../examples/forwarder.py
+
 .. _producer_consumer_example:
 
 Producer Consumer/Recursive Web Crawler

doc/real_index.html

 
 <p>Alternatively, you can download the source tarball:
 <ul>
-<li><a href="http://pypi.python.org/packages/source/e/eventlet/eventlet-0.9.6.tar.gz">eventlet-0.9.6.tar.gz</a></li>
+<li><a href="http://pypi.python.org/packages/source/e/eventlet/eventlet-0.9.7.tar.gz">eventlet-0.9.7.tar.gz</a></li>
 </ul>
 </p>
 
   
 That will run all the tests, though the output will be a little weird because it will look like Nose is running about 20 tests, each of which consists of a bunch of sub-tests.  Not all test modules are present in all versions of Python, so there will be an occasional printout of "Not importing %s, it doesn't exist in this installation/version of Python".
 
+If you see "Ran 0 tests in 0.001s", it means that your Python installation lacks its own tests.  This is usually the case for Linux distributions.  One way to get the missing tests is to download a source tarball (of the same version you have installed on your system!) and copy its Lib/test directory into the correct place on your PYTHONPATH.
+
 
 Testing Eventlet Hubs
 ---------------------
 
   coverage html -d cover --omit='tempmod,<console>,tests'
  
-(``tempmod`` and ``console`` are omitted because they gets thrown away at the completion of their unit tests and coverage.py isn't smart enough to detect this.)
+(``tempmod`` and ``console`` are omitted because they get thrown away at the completion of their unit tests and coverage.py isn't smart enough to detect this.)

eventlet/__init__.py

-version_info = (0, 9, 7, 'dev1')
+version_info = (0, 9, 7, "dev1")
 __version__ = ".".join(map(str, version_info))
 
 try:
     serve = convenience.serve
     StopServe = convenience.StopServe
 
-    getcurrent = greenlet.getcurrent
+    getcurrent = greenlet.greenlet.getcurrent
     
     # deprecated    
     TimeoutError = timeout.Timeout

eventlet/debug.py

         result.append(repr(l))
     return os.linesep.join(result)
     
-def hub_listener_stacks(state):
+def hub_listener_stacks(state=False):
     """Toggles whether or not the hub records the stack when clients register 
     listeners on file descriptors.  This can be useful when trying to figure 
     out what the hub is up to at any given moment.  To inspect the stacks
     from eventlet import hubs
     hubs.get_hub().set_debug_listeners(state)
     
-def hub_timer_stacks(state):
+def hub_timer_stacks(state=False):
     """Toggles whether or not the hub records the stack when timers are set.  
     To inspect the stacks of the current timers, call :func:`format_hub_timers` 
     at critical junctures in the application logic.
     """
     from eventlet.hubs import timer
     timer._g_debug = state
+
+def hub_prevent_multiple_readers(state=True):
+    """Toggles whether the hub prevents multiple greenthreads from waiting
+    on the same file descriptor at the same time.  Enabled by default; when
+    enabled, the hub raises a RuntimeError if a second simultaneous reader
+    or writer registers on a fileno.
+    """
+    from eventlet.hubs import hub
+    hub.g_prevent_multiple_readers = state
     
-def hub_exceptions(state):
+def hub_exceptions(state=True):
     """Toggles whether the hub prints exceptions that are raised from its
     timers.  This can be useful to see how greenthreads are terminating.
     """
     from eventlet import greenpool
     greenpool.DEBUG = state
     
-def tpool_exceptions(state):
+def tpool_exceptions(state=False):
     """Toggles whether tpool itself prints exceptions that are raised from 
     functions that are executed in it, in addition to raising them like
     it normally does."""

eventlet/green/ssl.py

                     return func(*a, **kw)
                 except SSLError, exc:
                     if get_errno(exc) == SSL_ERROR_WANT_READ:
-                        trampoline(self.fileno(), 
+                        trampoline(self, 
                                    read=True, 
                                    timeout=self.gettimeout(), 
                                    timeout_exc=timeout_exc('timed out'))
                     elif get_errno(exc) == SSL_ERROR_WANT_WRITE:
-                        trampoline(self.fileno(), 
+                        trampoline(self, 
                                    write=True, 
                                    timeout=self.gettimeout(), 
                                    timeout_exc=timeout_exc('timed out'))
             raise ValueError("sendto not allowed on instances of %s" %
                              self.__class__)
         else:
-            trampoline(self.fileno(), write=True, timeout_exc=timeout_exc('timed out'))
+            trampoline(self, write=True, timeout_exc=timeout_exc('timed out'))
             return socket.sendto(self, data, addr, flags)
 
     def sendall (self, data, flags=0):
                     if self.act_non_blocking:
                         raise
                     if get_errno(e) == errno.EWOULDBLOCK:
-                        trampoline(self.fileno(), write=True, 
+                        trampoline(self, write=True, 
                                    timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
                     if get_errno(e) in SOCKET_CLOSED:
                         return ''
                     if self.act_non_blocking:
                         raise
                     if get_errno(e) == errno.EWOULDBLOCK:
-                        trampoline(self.fileno(), read=True, 
+                        trampoline(self, read=True, 
                                    timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
                     if get_errno(e) in SOCKET_CLOSED:
                         return ''
         
     def recv_into (self, buffer, nbytes=None, flags=0):
         if not self.act_non_blocking:
-            trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
+            trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
         return super(GreenSSLSocket, self).recv_into(buffer, nbytes, flags)
 
     def recvfrom (self, addr, buflen=1024, flags=0):
         if not self.act_non_blocking:
-            trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
+            trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
         return super(GreenSSLSocket, self).recvfrom(addr, buflen, flags)
         
     def recvfrom_into (self, buffer, nbytes=None, flags=0):
         if not self.act_non_blocking:
-            trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
+            trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
         return super(GreenSSLSocket, self).recvfrom_into(buffer, nbytes, flags)
 
     def unwrap(self):
                         return real_connect(self, addr)
                     except orig_socket.error, exc:
                         if get_errno(exc) in CONNECT_ERR:
-                            trampoline(self.fileno(), write=True)
+                            trampoline(self, write=True)
                         elif get_errno(exc) in CONNECT_SUCCESS:
                             return
                         else:
                         real_connect(self, addr)
                     except orig_socket.error, exc:
                         if get_errno(exc) in CONNECT_ERR:
-                            trampoline(self.fileno(), write=True, 
+                            trampoline(self, write=True, 
                                        timeout=end-time.time(), timeout_exc=timeout_exc('timed out'))
                         elif get_errno(exc) in CONNECT_SUCCESS:
                             return
                 except orig_socket.error, e:
                     if get_errno(e) != errno.EWOULDBLOCK:
                         raise
-                    trampoline(self.fileno(), read=True, timeout=self.gettimeout(),
+                    trampoline(self, read=True, timeout=self.gettimeout(),
                                    timeout_exc=timeout_exc('timed out'))
 
         new_ssl = type(self)(newsock,

eventlet/greenpool.py

 
     def waitall(self):
         """Waits until all greenthreads in the pool are finished working."""
+        assert greenthread.getcurrent() not in self.coroutines_running, \
+                          "waitall() called from within one of the "\
+                          "GreenPool's own greenthreads would never "\
+                          "terminate, so it is disallowed."
         if self.running():
             self.no_coros_running.wait()
 
     def imap(self, function, *iterables):
         """This is the same as :func:`itertools.imap`, and has the same
         concurrency and memory behavior as :meth:`starmap`.
+        
+        It's quite convenient for, e.g., farming out jobs from a file::
+           
+           def worker(line):
+               return do_something(line)
+           pool = GreenPool()
+           for result in pool.imap(worker, open("filename", 'r')):
+               print result
         """
         return self.starmap(function, itertools.izip(*iterables))
 

eventlet/hubs/__init__.py

     current = greenlet.getcurrent()
     assert hub.greenlet is not current, 'do not call blocking functions from the mainloop'
     assert not (read and write), 'not allowed to trampoline for reading and writing'
-    fileno = getattr(fd, 'fileno', lambda: fd)()
+    try:
+        fileno = fd.fileno()
+    except AttributeError:
+        fileno = fd
     if timeout is not None:
         t = hub.schedule_call_global(timeout, current.throw, timeout_exc)
     try:

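The descriptor normalization above follows the EAFP style: ask the object for `fileno()`, and fall back to treating it as a raw descriptor. As a standalone sketch (`as_fileno` and `FakeSocket` are hypothetical names, not part of eventlet):

```python
def as_fileno(fd):
    # Hypothetical helper mirroring the duck-typed lookup in trampoline():
    # file-like objects yield their descriptor, bare ints pass through.
    try:
        return fd.fileno()
    except AttributeError:
        return fd

class FakeSocket(object):
    # Stand-in for anything exposing fileno(), e.g. a socket.
    def fileno(self):
        return 42

print(as_fileno(FakeSocket()))  # 42
print(as_fileno(7))             # 7
```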
eventlet/hubs/hub.py

 import heapq
 import sys
 import traceback
+import warnings
 
 from eventlet.support import greenlets as greenlet, clear_sys_exc_info
 from eventlet.hubs import timer
 from eventlet import patcher
 time = patcher.original('time')
 
+g_prevent_multiple_readers = True
+
 READ="read"
 WRITE="write"
 
         self.evtype = evtype
         self.fileno = fileno
         self.cb = cb
-    def __call__(self, *args, **kw):
-        return self.cb(*args, **kw)
     def __repr__(self):
         return "%s(%r, %r, %r)" % (type(self).__name__, self.evtype, self.fileno, self.cb)
     __str__ = __repr__
 
 
+noop = FdListener(READ, 0, lambda x: None)
+
 # in debug mode, track the call site that created the listener
 class DebugListener(FdListener):
     def __init__(self, evtype, fileno, cb):
 
     def __init__(self, clock=time.time):
         self.listeners = {READ:{}, WRITE:{}}
+        self.secondaries = {READ:{}, WRITE:{}}
 
         self.clock = clock
         self.greenlet = greenlet.greenlet(self.run)
         is ready for reading/writing.
         """
         listener = self.lclass(evtype, fileno, cb)
-        self.listeners[evtype].setdefault(fileno, []).append(listener)
+        bucket = self.listeners[evtype]
+        if fileno in bucket:
+            if g_prevent_multiple_readers:
+               raise RuntimeError("Second simultaneous %s on fileno %s "\
+                     "detected.  Unless you really know what you're doing, "\
+                     "make sure that only one greenthread can %s any "\
+                     "particular socket.  Consider using a pools.Pool. "\
+                     "If you do know what you're doing and want to disable "\
+                     "this error, call "\
+                     "eventlet.debug.hub_multiple_reader_prevention(False)" % (
+                     evtype, fileno, evtype))
+            # store off the second listener in another structure
+            self.secondaries[evtype].setdefault(fileno, []).append(listener)
+        else:
+            bucket[fileno] = listener
         return listener
 
     def remove(self, listener):
-        listener_list = self.listeners[listener.evtype].pop(listener.fileno, [])
-        try:
-            listener_list.remove(listener)
-        except ValueError:
-            pass
-        if listener_list:
-            self.listeners[listener.evtype][listener.fileno] = listener_list
+        fileno = listener.fileno
+        evtype = listener.evtype
+        self.listeners[evtype].pop(fileno, None)
+        # migrate a secondary listener to be the primary listener
+        if fileno in self.secondaries[evtype]:
+            sec = self.secondaries[evtype].get(fileno, ())
+            if not sec:
+                return
+            self.listeners[evtype][fileno] = sec.pop(0)
+            if not sec:
+                del self.secondaries[evtype][fileno]
 
     def remove_descriptor(self, fileno):
         """ Completely remove all listeners for this fileno.  For internal use
         only."""
         self.listeners[READ].pop(fileno, None)
         self.listeners[WRITE].pop(fileno, None)
+        self.secondaries[READ].pop(fileno, None)
+        self.secondaries[WRITE].pop(fileno, None)
 
     def stop(self):
         self.abort()

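The primary/secondary bookkeeping introduced in the hub above can be sketched with plain dicts; the class and names here are illustrative, not eventlet's actual hub:

```python
class ListenerTable(object):
    # Illustrative sketch of the add/remove discipline above: one primary
    # listener per fileno, later arrivals queued as secondaries, and the
    # oldest secondary promoted when the primary is removed.
    def __init__(self):
        self.primary = {}      # fileno -> listener
        self.secondaries = {}  # fileno -> [listener, ...]

    def add(self, fileno, listener):
        if fileno in self.primary:
            self.secondaries.setdefault(fileno, []).append(listener)
        else:
            self.primary[fileno] = listener

    def remove(self, fileno):
        self.primary.pop(fileno, None)
        sec = self.secondaries.get(fileno)
        if sec:
            self.primary[fileno] = sec.pop(0)  # promote the oldest secondary
            if not sec:
                del self.secondaries[fileno]

table = ListenerTable()
table.add(5, "a")
table.add(5, "b")        # queued as a secondary
table.remove(5)          # "a" removed, "b" promoted
print(table.primary[5])  # b
```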
eventlet/hubs/poll.py

 sleep = time.sleep
 
 from eventlet.support import get_errno, clear_sys_exc_info
-from eventlet.hubs.hub import BaseHub, READ, WRITE
+from eventlet.hubs.hub import BaseHub, READ, WRITE, noop
 
 EXC_MASK = select.POLLERR | select.POLLHUP
 READ_MASK = select.POLLIN | select.POLLPRI
         else: 
             try:
                 self.poll.unregister(fileno)
-            except KeyError:
-                pass
-            except (IOError, OSError):
+            except (KeyError, IOError, OSError):
                 # raised if we try to remove a fileno that was
                 # already removed/invalid
                 pass
         super(Hub, self).remove_descriptor(fileno)
         try:
             self.poll.unregister(fileno)
-        except (KeyError, ValueError):
-            pass
-        except (IOError, OSError):
+        except (KeyError, ValueError, IOError, OSError):
             # raised if we try to remove a fileno that was
             # already removed/invalid
             pass
                 sleep(seconds)
             return
         try:
-            presult = self.poll.poll(seconds * self.WAIT_MULTIPLIER)
+            presult = self.poll.poll(int(seconds * self.WAIT_MULTIPLIER))
         except select.error, e:
             if get_errno(e) == errno.EINTR:
                 return
 
         for fileno, event in presult:
             try:
-                listener = None
-                try:
-                    if event & READ_MASK:
-                        listener = readers[fileno][0]
-                    if event & WRITE_MASK:
-                        listener = writers[fileno][0]
-                except KeyError:
-                    pass
-                else:
-                    if listener:
-                        listener(fileno)
+                if event & READ_MASK:
+                    readers.get(fileno, noop).cb(fileno)
+                if event & WRITE_MASK:
+                    writers.get(fileno, noop).cb(fileno)
                 if event & select.POLLNVAL:
                     self.remove_descriptor(fileno)
                     continue
                 if event & EXC_MASK:
-                    for listeners in (readers.get(fileno, []), 
-                                      writers.get(fileno, [])):
-                        for listener in listeners:
-                            listener(fileno)
+                    readers.get(fileno, noop).cb(fileno)
+                    writers.get(fileno, noop).cb(fileno)
             except SYSTEM_EXCEPTIONS:
                 raise
             except:

eventlet/hubs/pyevent.py

         elif evtype is WRITE:
             evt = event.write(fileno, cb, fileno)
 
-        listener = FdListener(evtype, fileno, evt)
-        self.listeners[evtype].setdefault(fileno, []).append(listener)
-        return listener
+        return super(Hub, self).add(evtype, fileno, evt)
 
     def signal(self, signalnum, handler):
         def wrapper():
 
     def remove_descriptor(self, fileno):
         for lcontainer in self.listeners.itervalues():
-            l_list = lcontainer.pop(fileno, None)
-            for listener in l_list:
+            listener = lcontainer.pop(fileno, None)
+            if listener:
                 try:
                     listener.cb.delete()
                 except self.SYSTEM_EXCEPTIONS:

eventlet/hubs/selects.py

 select = patcher.original('select')
 time = patcher.original('time')
 
-from eventlet.hubs.hub import BaseHub, READ, WRITE
+from eventlet.hubs.hub import BaseHub, READ, WRITE, noop
 
 try:
     BAD_SOCK = set((errno.EBADF, errno.WSAENOTSOCK))
                 raise
 
         for fileno in er:
-            for reader in readers.get(fileno, ()):
-                reader(fileno)
-            for writer in writers.get(fileno, ()):
-                writer(fileno)
+            readers.get(fileno, noop).cb(fileno)
+            writers.get(fileno, noop).cb(fileno)
             
         for listeners, events in ((readers, r), (writers, w)):
             for fileno in events:
                 try:
-                    l_list = listeners[fileno]
-                    if l_list:
-                        l_list[0](fileno)
+                    listeners.get(fileno, noop).cb(fileno)
                 except self.SYSTEM_EXCEPTIONS:
                     raise
                 except:

eventlet/pools.py

 
 class Pool(object):
     """
-    Pool is a base class that implements resource limitation and construction.
-    It is meant to be subclassed.  When subclassing, define only
-    the :meth:`create` method to implement the desired resource::
+    Pool class implements resource limitation and construction.
+
+    There are two ways of using Pool: passing a `create` argument or
+    subclassing. In either case you must provide a way to create
+    the resource.
+
+    When using the `create` argument, pass a function with no arguments::
+
+        http_pool = pools.Pool(create=httplib2.Http)
+
+    If you need to pass arguments, build a nullary function with either a
+    `lambda` expression::
+
+        http_pool = pools.Pool(create=lambda: httplib2.Http(timeout=90))
+
+    or :func:`functools.partial`::
+
+        from functools import partial
+        http_pool = pools.Pool(create=partial(httplib2.Http, timeout=90))
+
+    When subclassing, define only the :meth:`create` method
+    to implement the desired resource::
 
         class MyPool(pools.Pool):
             def create(self):
     greenthread calling :meth:`get` to cooperatively yield until an item
     is :meth:`put` in.
     """
-    def __init__(self, min_size=0, max_size=4, order_as_stack=False):
+    def __init__(self, min_size=0, max_size=4, order_as_stack=False, create=None):
         """*order_as_stack* governs the ordering of the items in the free pool.
         If ``False`` (the default), the free items collection (of items that
         were created and were put back in the pool) acts as a round-robin,
         self.current_size = 0
         self.channel = queue.LightQueue(0)
         self.free_items = collections.deque()
+        if create is not None:
+            self.create = create
+
         for x in xrange(min_size):
             self.current_size += 1
             self.free_items.append(self.create())
         return max(0, self.channel.getting() - self.channel.putting())
 
     def create(self):
-        """Generate a new pool item.  This method must be overridden in order
-        for the pool to function.  It accepts no arguments and returns a single
-        instance of whatever thing the pool is supposed to contain.
+        """Generate a new pool item.  In order for the pool to function,
+        either this method must be overriden in a subclass or pool must be
+        created with `create`=callable argument.  It accepts no arguments
+        and returns a single instance of whatever thing the pool is supposed
+        to contain.
 
         In general, :meth:`create` is called whenever the pool exceeds its
         previous high-water mark of concurrently-checked-out-items.  In other

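The `create` argument added to Pool above works by letting an instance attribute shadow the class method of the same name. A minimal sketch of just that pattern (the class name is hypothetical, not eventlet's Pool):

```python
class MiniPool(object):
    # Sketch of the override pattern only; not eventlet's Pool.
    def __init__(self, create=None):
        if create is not None:
            # Instance attribute lookup wins over the class method.
            self.create = create

    def create(self):
        raise NotImplementedError("subclass or pass create=<callable>")

p = MiniPool(create=lambda: {"timeout": 90})
print(p.create())  # {'timeout': 90}
```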
eventlet/support/greenlets.py

 try:
     import greenlet
-    getcurrent = greenlet.getcurrent
-    GreenletExit = greenlet.GreenletExit
+    getcurrent = greenlet.greenlet.getcurrent
+    GreenletExit = greenlet.greenlet.GreenletExit
     greenlet = greenlet.greenlet
 except ImportError, e:
     raise

eventlet/tpool.py

     # the following are a bunch of methods that the python interpreter
     # doesn't use getattr to retrieve and therefore have to be defined
     # explicitly
-    def __iter__(self):
-        return proxy_call(self._autowrap, self._obj.__iter__)
     def __getitem__(self, key):
         return proxy_call(self._autowrap, self._obj.__getitem__, key)    
     def __setitem__(self, key, value):
         return len(self._obj)
     def __nonzero__(self):
         return bool(self._obj)
+    def __iter__(self):
+        if iter(self._obj) == self._obj:
+            return self
+        else:
+            return Proxy(iter(self._obj))
+    def next(self):
+        return proxy_call(self._autowrap, self._obj.next)
 
 
 

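The `__iter__`/`next` pair added to tpool.Proxy above distinguishes objects that are their own iterator from those that are not. The wrapping decision can be sketched without tpool (the class name is hypothetical):

```python
class IterProxy(object):
    # Sketch of the dispatch in tpool.Proxy.__iter__ above: if the wrapped
    # object is its own iterator, hand back self; otherwise wrap iter(obj)
    # in a new proxy so next() calls also go through the proxy.
    def __init__(self, obj):
        self._obj = obj

    def __iter__(self):
        if iter(self._obj) is self._obj:
            return self
        return IterProxy(iter(self._obj))

    def __next__(self):
        return next(self._obj)
    next = __next__  # Python 2 spelling

print(list(IterProxy(iter("abc"))))  # ['a', 'b', 'c']
print(list(IterProxy([1, 2])))       # [1, 2]
```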
examples/forwarder.py

-""" This is an incredibly simple port forwarder from port 7000 to 22 on localhost.  It calls a callback function when the socket is closed, to demonstrate one way that you could start to do interesting things by
+""" This is an incredibly simple port forwarder from port 7000 to 22 on 
+localhost.  It calls a callback function when the socket is closed, to 
+demonstrate one way that you could start to do interesting things by
 starting from a simple framework like this.
 """
 

examples/producer_consumer.py

 number of "workers", so you don't have to write that tedious management code 
 yourself.
 """
+from __future__ import with_statement
 
 from eventlet.green import urllib2
 import eventlet

examples/websocket.html

 </head>
 <body>
 <h3>Plot</h3>
+<p>(Only tested in Chrome)</p>
 <div id="holder" style="width:600px;height:300px"></div>
 </body>
-</html>
+</html>

examples/websocket.py

 import collections
 import errno
+import eventlet
 from eventlet import wsgi
 from eventlet import pools
-import eventlet
-from eventlet.common import get_errno
+from eventlet.support import get_errno
 
 class WebSocketWSGI(object):
     def __init__(self, handler, origin):
 
 from setuptools import find_packages, setup
 from eventlet import __version__
+from os import path
 import sys
 
 requirements = []
     packages=find_packages(exclude=['tests']),
     install_requires=requirements,
     zip_safe=False,
-    long_description="""
-    Eventlet is a networking library written in Python. It achieves
-    high scalability by using non-blocking io while at the same time
-    retaining high programmer usability by using coroutines to make
-    the non-blocking io operations appear blocking at the source code
-    level.""",
+    long_description=open(
+        path.join(
+            path.dirname(__file__),
+            'README'
+        )
+    ).read(),
     test_suite = 'nose.collector',
     classifiers=[
     "License :: OSI Approved :: MIT License",

tests/greenio_test.py

 
         gt.wait()
 
+    @skip_with_pyevent
+    def test_raised_multiple_readers(self):
+        debug.hub_prevent_multiple_readers(True)
+
+        def handle(sock, addr):
+            sock.recv(1)
+            sock.sendall("a")
+            raise eventlet.StopServe()
+        listener = eventlet.listen(('127.0.0.1', 0))
+        server = eventlet.spawn(eventlet.serve, 
+                                listener,
+                                handle)
+        def reader(s):
+            s.recv(1)
+
+        s = eventlet.connect(('127.0.0.1', listener.getsockname()[1]))
+        a = eventlet.spawn(reader, s)
+        eventlet.sleep(0)
+        self.assertRaises(RuntimeError, s.recv, 1)
+        s.sendall('b')
+        a.wait()
+        
 
 class TestGreenIoLong(LimitedTestCase):
     TEST_TIMEOUT=10  # the test here might take a while depending on the OS
     @skip_with_pyevent
     def test_multiple_readers(self, clibufsize=False):
+        debug.hub_prevent_multiple_readers(False)
         recvsize = 2 * min_buf_size()
         sendsize = 10 * recvsize
         # test that we can have multiple coroutines reading
         listener.close()
         self.assert_(len(results1) > 0)
         self.assert_(len(results2) > 0)
+        debug.hub_prevent_multiple_readers()
 
     @skipped  # by rdw because it fails but it's not clear how to make it pass
     @skip_with_pyevent

tests/greenpool_test.py

     def test_waitall_on_nothing(self):
         p = greenpool.GreenPool()
         p.waitall()
+        
+    def test_recursive_waitall(self):
+        p = greenpool.GreenPool()
+        gt = p.spawn(p.waitall)
+        self.assertRaises(AssertionError, gt.wait)
 
             
 class GreenPile(tests.LimitedTestCase):

tests/stdlib/all.py

 Many of these tests make connections to external servers, and all.py tries to skip these tests rather than failing them, so you can get some work done on a plane.
 """
 
+from eventlet import debug
+debug.hub_prevent_multiple_readers(False)
 
 def restart_hub():
     from eventlet import hubs

tests/tpool_test.py

         for i in prox:
             result.append(i)
         self.assertEquals(range(10), result)
+        
+    @skip_with_pyevent
+    def test_wrap_iterator2(self):
+        def foo():
+            import time
+            for x in xrange(10):
+                yield x
+                time.sleep(0.01)
+                
+        counter = [0]
+        def tick():
+            for i in xrange(100):
+                counter[0]+=1
+                eventlet.sleep(0.001)
+                
+        gt = eventlet.spawn(tick)
+        previtem = 0
+        for item in tpool.Proxy(foo()):
+            self.assert_(item >= previtem)
+            previtem = item
+        # make sure the tick happened at least a few times so that we know
+        # that our iterations in foo() were actually tpooled
+        self.assert_(counter[0] > 10)
+        gt.wait()
+
 
     @skip_with_pyevent
     def test_raising_exceptions(self):