Here's a quote from the JSR166y Overview:
> Like all lightweight-task frameworks, ForkJoin (FJ) does not explicitly cope with blocked IO: If a worker thread blocks on IO, then (1) it is not available to help process other tasks (2) Other worker threads waiting for the task to complete (i.e., to join it) may run out of work and waste CPU cycles. Neither of these issues completely eliminates potential use, but they do require a lot of explicit care. For example, you could place tasks that may experience sustained blockages in their own small ForkJoinPools. (The Fortress folks do something along these lines mapping fortress "fair" threads onto forkjoin.) You can also dynamically increase worker pool sizes (we have methods to do this) when blockages may occur. All in all though, the reason for the restrictions and advice are that we do not have good automated support for these kinds of cases, and don't yet know of the best practices, or whether it is a good idea at all.
Basically this boils down to: Don't block or do anything else that causes a single task to run for an extended period.
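When blocking is unavoidable, the pool-growth mechanism the quote alludes to is exposed in later ForkJoin releases as `ForkJoinPool.ManagedBlocker`: wrapping a blocking call in `managedBlock` tells the pool it may create a compensation worker so other tasks aren't starved. Here is a minimal sketch; `SleepBlocker` is a hypothetical blocker that stands in for real blocked IO, and the pool size of 2 is deliberately small to make the point.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ManagedBlockDemo {
    // Hypothetical blocker standing in for blocked IO: sleeps once, then
    // reports itself releasable so the pool stops calling block().
    static class SleepBlocker implements ForkJoinPool.ManagedBlocker {
        private final long millis;
        private boolean done = false;
        SleepBlocker(long millis) { this.millis = millis; }
        public boolean block() throws InterruptedException {
            if (!done) { TimeUnit.MILLISECONDS.sleep(millis); done = true; }
            return true;
        }
        public boolean isReleasable() { return done; }
    }

    public static void main(String[] args) throws Exception {
        ForkJoinPool pool = new ForkJoinPool(2); // deliberately undersized
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final int n = i;
            results.add(pool.submit(() -> {
                // managedBlock lets the pool add a compensation thread
                // instead of losing this worker for the duration.
                ForkJoinPool.managedBlock(new SleepBlocker(100));
                return n;
            }));
        }
        int sum = 0;
        for (Future<Integer> f : results) sum += f.get();
        System.out.println(sum); // 0 + 1 + 2 + 3
        pool.shutdown();
    }
}
```

This doesn't remove the need for care, but it is the closest thing to the "automated support" the overview says was missing at the time.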
That's fine for the FJ pool itself, but problems arise when the FJ pool backs a higher-level concurrency framework such as Scala Actors, because these caveats are no longer visible to the framework's users. Consequently, people frequently use actors for IO, or have an actor run an essentially infinite computation, and are then baffled when their other actors starve. There are two workarounds:
- Increase the size of the thread pool, which means you have to know in advance how many threads you need to avoid starvation-induced deadlock
- Turn off the FJ-based scheduler, which potentially means a substantial reduction in performance (EPFL has consistently resisted moving to the existing java.util.concurrent classes, citing performance)
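The starvation-induced deadlock behind the first workaround is easy to reproduce. The sketch below uses a plain one-thread executor as a stand-in for an undersized actor scheduler: the outer task blocks waiting on an inner task, but the only worker is busy running the outer task, so the inner one can never be scheduled.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class StarvationDemo {
    public static void main(String[] args) throws Exception {
        // One worker thread stands in for an undersized scheduler.
        ExecutorService pool = Executors.newFixedThreadPool(1);
        Future<String> outer = pool.submit(() -> {
            // The inner task is queued, but the only worker is occupied
            // right here, blocked on the inner task's result.
            Future<String> inner = pool.submit(() -> "done");
            return inner.get(); // starvation-induced deadlock
        });
        try {
            outer.get(500, TimeUnit.MILLISECONDS);
            System.out.println("completed");
        } catch (TimeoutException e) {
            System.out.println("starved"); // the inner task never ran
        } finally {
            pool.shutdownNow();
        }
    }
}
```

With a pool of two threads the same program completes, which is exactly the problem: the minimum safe pool size depends on how deeply tasks (or actors) block on one another, and that is what you have to know in advance.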