Caching has a performance-killing race condition?
If I cache a page that takes 2 seconds to generate, caching solves the performance problem nicely as long as requests arrive less often than every 2 seconds. But suppose someone hits a freshly-expired page and starts the 2-second wait, and another request arrives before the first finishes: that thread goes to work regenerating the page as well. This snowballs, because the more threads regenerate the page simultaneously, the longer each one takes. All of a sudden I have a bunch of threads generating the same thing, and the server can become unresponsive.
Ideally, I think only one thread should regenerate the content, and any other threads would wait on a lock or something of that nature while that one thread works. Then, when the content is done generating, the lock would be released and they would all get their content instantly, without having done redundant work.
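To make the idea concrete, here is a minimal sketch in Python of the "one thread regenerates, the rest wait" pattern, using a `threading.Event` per cache key. This is not CherryPy's actual caching code; the names (`get_or_generate`, the `generate` callable) are my own, and error handling for a failing `generate()` is omitted for brevity:

```python
import threading

_cache = {}              # key -> generated content
_pending = {}            # key -> Event set when regeneration finishes
_guard = threading.Lock()  # protects _cache/_pending bookkeeping

def get_or_generate(key, generate):
    """Return cached content for key, letting only one thread call generate()."""
    with _guard:
        if key in _cache:
            return _cache[key]
        event = _pending.get(key)
        if event is None:
            # This thread wins the race and will do the work.
            event = threading.Event()
            _pending[key] = event
            owner = True
        else:
            owner = False
    if owner:
        try:
            _cache[key] = generate()  # the expensive 2-second generation
        finally:
            event.set()               # wake every waiting thread
            with _guard:
                del _pending[key]
        return _cache[key]
    # Losing threads block here instead of regenerating redundantly.
    event.wait()
    return _cache[key]
```

With this in place, five simultaneous requests for an expired page would trigger a single `generate()` call; the other four threads block on `event.wait()` and read the freshly cached result the moment it is ready. A real implementation would also need expiry handling and a way to propagate exceptions from the owner thread to the waiters.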
My understanding of the tools and the caching code is still forming, so even if the CherryPy developers don't have the desire or time to fix this, I might try to implement it myself if someone can offer suggestions/tips/ideas on how to code it up.