CherryPy becomes unresponsive under heavy load; looks like a DoS attack
This issue is a continuation of https://bitbucket.org/cherrypy/cherrypy/issue/1205/cherrypy-dropping-connection-under-heavy. There is a serious bug in the thread pool's shrink(). Under heavy connection load, CherryPy is so busy creating and destroying threads that it becomes really slow. The real problem, however, is that it then becomes unresponsive and never recovers, even when the number of connections drops to 0. In shrink(), far too many shutdown requests are being added via self._queue.put(_SHUTDOWNREQUEST).
For example, suppose that at a given moment the number of idle threads exceeds max_spare. Then idle - max_spare shutdown requests are put on the queue, so idle - max_spare threads will terminate. But if those threads are still running when the monitor thread calls shrink() again, another idle - max_spare shutdown requests are added to the queue, resulting in 2*(idle - max_spare) threads being terminated and culled. In other words, shrink() can bring the thread pool down to 0 worker threads, and the pool never grows back. Hence the complete unresponsiveness.
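The accumulation described above can be sketched with a toy model (names like naive_shrink are hypothetical, not CherryPy's; this only simulates enqueuing shutdown requests without checking for pending ones):

```python
import queue

_SHUTDOWNREQUEST = None  # sentinel object, as in CherryPy's wsgiserver

def naive_shrink(q, idle, max_spare):
    # Original behaviour: blindly enqueue one shutdown request per
    # surplus idle thread, ignoring requests already pending in q.
    for _ in range(idle - max_spare):
        q.put(_SHUTDOWNREQUEST)

q = queue.Queue()
idle, max_spare = 10, 5

# The monitor thread fires twice before any worker has consumed
# a request from the first call:
naive_shrink(q, idle, max_spare)   # enqueues 5 requests
naive_shrink(q, idle, max_spare)   # enqueues 5 more: 10 pending

# All 10 idle workers will now shut down instead of 5,
# draining the pool to zero:
print(q.qsize())  # 10
```

This is why the fix below refuses to add requests while the queue is non-empty: pending shutdown requests already account for the surplus.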
Besides this, there is another bug in shrink(): a worker thread is removed from the thread list while that list is being iterated.
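Mutating a list while iterating over it silently skips elements in Python, which is why the fix below marks dead threads first and rebuilds the list afterwards. A minimal illustration (plain strings stand in for worker threads):

```python
# Removing items from a list while iterating over it skips elements,
# because the iterator's internal index keeps advancing:
workers = ['w1', 'w2', 'w3', 'w4']
for w in workers:
    workers.remove(w)       # mutates the list mid-iteration
print(workers)              # ['w2', 'w4'] -- half were never visited

# Safe pattern, as used in the patch: mark, then rebuild.
workers = [w for w in workers if not w.endswith('2')]
```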
```python
def shrink(self, amount):
    """Kill off worker threads (not below self.min)."""
    # Remove any dead threads from our list. If any are found,
    # set amount = 0 to avoid putting duplicate shutdown
    # requests on the queue.
    for t in self._threads:
        if not t.isAlive():
            t.handled = True
            amount = 0
    self._threads = [t for t in self._threads if not t.handled]

    # If the queue already contains shutdown requests, don't add
    # more; they could be duplicates of requests still pending.
    if not self._queue.empty():
        amount = 0

    if amount > 0:
        for i in xrange(min(amount, len(self._threads) - self.min)):
            # Put a number of shutdown requests on the queue equal
            # to 'amount'. Once each of those is processed by a worker,
            # that worker will terminate and be culled from our list
            # in self.put.
            self._queue.put(_SHUTDOWNREQUEST)
```
This fix, together with the fix for issue 1205, resolved the problem completely.