This is part 6 of my notes from reading Java Concurrency in Practice.
NOTE: These summaries are NOT meant to replace the book. I highly recommend buying your own copy of the book if you haven't already read it.
Chapter 8 - Applying Thread Pools
- In the Executor framework, there is an implicit coupling between tasks and execution policies. Not all tasks are compatible with all execution policies.
- If a task depends on the results of other tasks, then the execution policy must be carefully managed to avoid liveness problems. Deadlocks can happen if the thread pool is bounded, i.e. thread starvation deadlock.
- With Executors.newSingleThreadExecutor(), a task that submits another task to the same executor and waits for its result will always deadlock (a sketch of this appears below).
- Other resources like JDBC connections may also be a bottleneck.
- Document any pool sizing or configuration constraints.
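A minimal sketch of thread starvation deadlock with a single-thread executor (class and task names here are illustrative, not from the book):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StarvationDeadlock {
    private static final ExecutorService exec = Executors.newSingleThreadExecutor();

    public static void main(String[] args) {
        exec.submit(() -> {
            // The subtask is queued, but the only pool thread is busy running this task,
            // so sub.get() blocks forever: thread starvation deadlock.
            Future<String> sub = exec.submit(() -> "subtask result");
            return sub.get();
        });
    }
}
```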
- Tasks that rely on thread confinement for thread-safety will not work well with thread pools.
- Responsiveness of time-sensitive tasks may suffer if we use a single-threaded executor or submit several long-running tasks to a small thread pool. Use timed resource waits instead of unbounded waits.
- Tasks that use ThreadLocal do not work well with the standard Executor implementations, since executors may reuse or kill threads. Do not use ThreadLocal to communicate values between tasks.
- For compute-intensive tasks, an Ncpu-processor system achieves optimum utilization with a thread pool of Ncpu + 1 threads. For tasks that include I/O or other blocking operations, use a larger thread pool since not all threads will be schedulable at all times.
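A minimal sketch of the Ncpu + 1 heuristic for a compute-bound pool (the I/O-heavy case needs a larger size, chosen from measurement):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static ExecutorService newComputePool() {
        // Ncpu + 1 threads keeps all CPUs busy for compute-bound tasks.
        int nCpu = Runtime.getRuntime().availableProcessors();
        return Executors.newFixedThreadPool(nCpu + 1);
    }
}
```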
- ThreadPoolExecutor is the base class of the executors returned by Executors.newCachedThreadPool, newFixedThreadPool and newScheduledThreadPool. It is highly configurable (a construction sketch appears after the queue options below).
- We can specify the type of BlockingQueue that holds tasks awaiting execution.
- unbounded LinkedBlockingQueue is the default for newFixedThreadPool and newSingleThreadExecutor.
- Another option is to use a bounded LinkedBlockingQueue, ArrayBlockingQueue or PriorityBlockingQueue.
- SynchronousQueue - not really a queue, but a mechanism for managing handoffs between threads. Another thread must be waiting to accept the handoff; if none is available and the pool's maximum size has not been reached, a new thread is created, otherwise the task is rejected. Handoff is more efficient because we don't have to place the Runnable in a queue first. newCachedThreadPool uses a SynchronousQueue.
- newCachedThreadPool is a good default choice for an Executor.
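A sketch of constructing a ThreadPoolExecutor directly with a bounded work queue (the pool and queue sizes here are arbitrary, illustrative values):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueuePool {
    public static ThreadPoolExecutor newPool() {
        // Core size 4, maximum size 8, idle threads above the core die after 60s,
        // and at most 100 tasks wait in the bounded queue before the
        // saturation policy (see below) kicks in.
        return new ThreadPoolExecutor(
                4, 8,
                60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100));
    }
}
```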
- Saturation Policy for a ThreadPoolExecutor can be modified by calling setRejectedExecutionHandler().
- abort - causes execute() to throw the unchecked RejectedExecutionException; the caller can catch this exception and implement its own overflow handling. This is the default policy.
- discard - silently discard the newly submitted task.
- discard-oldest - discards the task that would otherwise be executed next and tries to resubmit the new task.
- caller-runs - Tries to slow down the flow of new task submission by pushing some of the work to the caller. It executes the newly submitted task not in a pool thread, but in the thread that calls execute().
- There is no predefined saturation policy to make execute() block when the work queue is full. However, this can be achieved using a Semaphore to bound the task injection rate.
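A sketch of bounding the task injection rate with a Semaphore, in the spirit of the book's BoundedExecutor example (the class and method names here are mine):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;

public class BoundedExecutor {
    private final Executor exec;
    private final Semaphore semaphore;

    public BoundedExecutor(Executor exec, int bound) {
        this.exec = exec;
        this.semaphore = new Semaphore(bound);
    }

    public void submitTask(Runnable command) throws InterruptedException {
        semaphore.acquire();                      // blocks callers once the bound is reached
        try {
            exec.execute(() -> {
                try {
                    command.run();
                } finally {
                    semaphore.release();          // free a slot when the task finishes
                }
            });
        } catch (RejectedExecutionException e) {
            semaphore.release();                  // task was never accepted, give the permit back
            throw e;
        }
    }
}
```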
- Thread Factories - whenever a thread pool needs to create a thread, it uses a thread factory; ThreadFactory.newThread() is called each time. The default thread factory creates a new non-daemon thread with no special configuration. Use a custom thread factory to specify an UncaughtExceptionHandler for pool threads, to instantiate a custom Thread class that does debug logging, or to give pool threads more meaningful names.
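A minimal custom ThreadFactory sketch covering the naming and UncaughtExceptionHandler points above (the class name and the logging are placeholders):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
    private final String poolName;
    private final AtomicInteger counter = new AtomicInteger(1);

    public NamedThreadFactory(String poolName) {
        this.poolName = poolName;
    }

    @Override
    public Thread newThread(Runnable r) {
        // Give pool threads meaningful names and a handler for uncaught exceptions.
        Thread t = new Thread(r, poolName + "-" + counter.getAndIncrement());
        t.setUncaughtExceptionHandler(
                (thread, ex) -> System.err.println(thread.getName() + " died: " + ex));
        return t;
    }
}
```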
- Most ThreadPoolExecutor options can be changed after construction via setters. Executors.unconfigurableExecutorService wraps an existing ExecutorService to ensure that its configuration cannot be changed further. newSingleThreadExecutor() returns such a wrapped Executor rather than a raw ThreadPoolExecutor. This is because newSingleThreadExecutor is implemented as a thread pool with one thread, and no one should be able to increase the pool size.
- ThreadPoolExecutor was designed for extension.
- beforeExecute and afterExecute hooks are called in the thread that executes the task. Used for logging, timing, monitoring, statistics gathering. Use ThreadLocal to share values between beforeExecute and afterExecute.
- afterExecute is not called if the task completes with an Error (it is still called for a regular exception).
- If beforeExecute throws a RuntimeException, the task is not executed and afterExecute() is not called.
- The terminated hook is called after the thread pool has shut down - all tasks have finished and all worker threads have terminated. Useful for releasing resources allocated by the Executor, notification, logging, or finalizing statistics gathering (a sketch of these hooks follows).
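A sketch of the extension hooks above, using a ThreadLocal to pass the start time from beforeExecute to afterExecute (similar in spirit to the book's timing example; the constructor arguments and logging here are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TimingThreadPool extends ThreadPoolExecutor {
    private final ThreadLocal<Long> startTime = new ThreadLocal<>();

    public TimingThreadPool(int poolSize) {
        super(poolSize, poolSize, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        startTime.set(System.nanoTime());        // runs in the pool thread, just before the task
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        try {
            long elapsedNanos = System.nanoTime() - startTime.get();
            System.out.printf("%s ran in %d ns%n", r, elapsedNanos);
        } finally {
            super.afterExecute(r, t);
        }
    }

    @Override
    protected void terminated() {
        System.out.println("pool terminated");   // all tasks done, all worker threads gone
        super.terminated();
    }
}
```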
- Parallelizing recursive algorithms
- Sequential loops are suitable for parallelization when each iteration is independent of the others and the work done in each iteration is significant enough to offset the cost of task creation.
- Loops within recursive algorithms can also be parallelized. This is easier when an iteration does not need the results of the recursive calls it invokes.
- To wait for all results, create a new Executor, submit the parallel tasks, call executor.shutdown(), and then awaitTermination(), as sketched below.
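A sketch of parallelizing independent iterations and then waiting for all of them with shutdown() and awaitTermination() (processElement and the timeout are placeholders):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelLoop {
    static void processAll(List<String> elements) throws InterruptedException {
        ExecutorService exec = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        for (String element : elements) {
            exec.execute(() -> processElement(element));   // each iteration is independent
        }
        exec.shutdown();                                    // stop accepting new tasks
        exec.awaitTermination(1, TimeUnit.HOURS);           // wait for submitted tasks to finish
    }

    private static void processElement(String element) {
        // placeholder for per-iteration work
    }
}
```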