I recently gave a talk at AMPCamp 2012 about how Conviva uses Spark; Spark has helped Conviva speed up online video data analysis by 30x. My talk video is embedded below.
Saturday, October 06, 2012
Tuesday, October 02, 2012
Java Concurrency in Practice - Summary
I recently read Java Concurrency in Practice by Brian Goetz. It is a fantastic book that every Java programmer should read. To firmly internalize the concepts described in the book, I read it a second time and took notes summarizing each chapter. My notes were very popular back in school and college; hopefully these will be useful to others as well.
NOTE: These summaries are NOT meant to replace the book. I highly recommend buying your own copy of the book if you haven't already read it.
- Part 1 - Introduction, Thread Safety
- Part 2 - Sharing Objects
- Part 3 - Composing Objects, Building Blocks
- Part 4 - Task Execution
- Part 5 - Cancellation & Shutdown
- Part 6 - Applying Thread Pools
- Part 7 - GUI Applications, Avoiding Liveness Hazards
- Part 8 - Performance & Scalability, Testing Concurrent Programs
- Part 9 - Explicit Locks, Building Custom Synchronizers
- Part 10 - Atomic Variables & Non-blocking Synchronization, Java Memory Model
Monday, October 01, 2012
Java Concurrency in Practice - Summary - Part 10
This is part 10 of my notes from reading Java Concurrency in Practice.
NOTE: These summaries are NOT meant to replace the book. I highly recommend buying your own copy of the book if you haven't already read it.
Chapter 15 - Atomic Variables and Non-blocking Synchronization
- Non-blocking algorithms use low-level atomic machine instructions like compare and swap (CAS) instead of locks to ensure data integrity under concurrent access. An algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread. An algorithm is called lock-free if some thread can make progress at each step.
- Locking has many disadvantages. For fine-grained operations like incrementing a counter, the overhead imposed by locking (possible thread suspension, context switching) is disproportionate to the work being done.
- Almost every modern processor supports atomic Compare and Swap (CAS) instructions. CAS atomically updates a memory location to a new value if the old value matches what we expect. Otherwise it does nothing. In either case, it returns the original value of the memory location.
- CAS: "I think V should have the value A. If it does put B there, otherwise don't change it but tell me I was wrong."
- CAS is an optimistic technique: it proceeds with the update and detects whether another thread got there first.
- A thread that loses a CAS race is not suspended; it can choose to retry, take some other recovery action, or do nothing (see the counter sketch after this list).
- Primary disadvantage of CAS is that the caller must deal with contention. Locking automatically deals with contention by blocking.
- Rule of thumb: on a multiprocessor system, the cost of uncontended lock acquisition and release is roughly twice the cost of a CAS.
- Atomic Variables
- AtomicInteger, AtomicLong, AtomicBoolean, AtomicReference
- Elements in the atomic array classes (available in Integer, Long and Reference versions) can be updated atomically.
- Not good candidates for keys in hash-based collections since they are mutable.
- At low to moderate contention, atomic variables scale better than locks; at high contention, locks perform better.
- Refer to the book for the non-blocking stack and linked-list code listings; a rough Treiber-style stack sketch follows this list.
- Atomic field updaters can be used to atomically update existing volatile fields. Use them only when there is a real performance problem with creating many separate Atomic* objects (for example, one per node of a linked list).
- ABA problem - during an update, the value of a reference field can change from A to B and back to A. In this case we usually do not want the update to proceed, but a plain CAS will let it. The problem typically arises when we manage our own object pools. One way to avoid it is to use versioned references via AtomicMarkableReference or AtomicStampedReference (a small sketch follows this list).
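To make the CAS retry pattern above concrete, here is a minimal sketch of a non-blocking counter built on AtomicInteger. The class and method names are my own for illustration; AtomicInteger.incrementAndGet() already implements exactly this loop internally.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the optimistic CAS retry pattern using AtomicInteger.
// Read the current value, compute the new value, and attempt the swap;
// if another thread got there first, compareAndSet fails and we retry.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next; // our update won; no thread was ever blocked
            }
            // lost the race: another thread updated value first; retry
        }
    }

    public int get() {
        return value.get();
    }
}
```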
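The non-blocking stack mentioned above uses the same retry pattern on a single head reference. This is a rough Treiber-style sketch of my own, not the book's listing, just to show the general shape:

```java
import java.util.concurrent.atomic.AtomicReference;

// Treiber-style non-blocking stack sketch: push and pop retry a CAS on the
// head reference until the head they observed is still the current one.
public class NonBlockingStack<E> {
    private static class Node<E> {
        final E item;
        Node<E> next;
        Node(E item) { this.item = item; }
    }

    private final AtomicReference<Node<E>> head = new AtomicReference<>();

    public void push(E item) {
        Node<E> newHead = new Node<>(item);
        Node<E> oldHead;
        do {
            oldHead = head.get();
            newHead.next = oldHead;
        } while (!head.compareAndSet(oldHead, newHead)); // retry on contention
    }

    public E pop() {
        Node<E> oldHead;
        Node<E> newHead;
        do {
            oldHead = head.get();
            if (oldHead == null) {
                return null; // stack is empty
            }
            newHead = oldHead.next;
        } while (!head.compareAndSet(oldHead, newHead)); // retry on contention
        return oldHead.item;
    }
}
```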
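For the ABA problem, here is a small sketch showing how AtomicStampedReference catches an A -> B -> A change that a plain CAS would accept; the values and stamps are made up purely for illustration:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// The stamp (a version number) is bumped on every update, so a reference
// that went A -> B -> A is still detected as changed because its stamp differs.
public class AbaExample {
    public static void main(String[] args) {
        String a = "A";
        String b = "B";
        AtomicStampedReference<String> ref = new AtomicStampedReference<>(a, 0);

        int[] stampHolder = new int[1];
        String observed = ref.get(stampHolder);   // observed = "A", stamp = 0
        int observedStamp = stampHolder[0];

        // Meanwhile, another thread changes A -> B -> A, bumping the stamp each time.
        ref.compareAndSet(a, b, 0, 1);
        ref.compareAndSet(b, a, 1, 2);

        // A CAS on the reference alone would succeed here; the stamped CAS fails.
        boolean updated = ref.compareAndSet(observed, "C", observedStamp, observedStamp + 1);
        System.out.println("update succeeded? " + updated);  // prints false
    }
}
```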
Chapter 16 - The Java Memory Model
- A memory model is needed because compilers can reorder instructions, variables can be kept in registers or processor caches instead of main memory, and so on. As a result, a value assigned to a variable by one thread may not be immediately visible to another thread. The Java Memory Model (JMM) specifies the minimal guarantees the JVM must make about when writes to variables become visible to other threads.
- The JMM shields application developers from differences in memory models across different processor architectures.
- Within a single thread, the JVM must maintain as-if-serial semantics: the thread must appear to execute in program order.
- Actions - reads/writes to variables, locks/unlocks, starting/joining threads.
- The JMM defines a partial ordering called happens-before between actions:
- Program order rule - Each action in a thread happens-before every subsequent action in that thread.
- Monitor lock rule - An unlock of a monitor lock happens-before every subsequent lock of that same monitor lock.
- Volatile variable rule - A write to a volatile variable happens-before every subsequent read of that same variable (see the visibility sketch after this list).
- Thread start rule - A call to Thread.start happens-before every action in the started thread.
- Thread termination rule - Any action in a thread happens-before any other thread detects that it has terminated (either by successfully returning from Thread.join or by Thread.isAlive() returning false).
- Interruption rule - A thread calling interrupt() on another thread happens-before the interrupted thread detecting the interrupt.
- Finalizer rule - The end of an object's constructor happens-before the start of its finalizer.
- Transitivity - A happens-before B and B happens-before C implies A happens-before C.
- The class library guarantees additional happens-before orderings:
- Placing an item in a thread-safe collection happens-before another thread retrieves that item.
- Counting down on a CountDownLatch happens-before a thread returns from await().
- Releasing a permit to a Semaphore happens-before acquiring a permit from the same Semaphore.
- The computation of a FutureTask happens-before another thread successfully returns from Future.get().
- Submitting a Runnable/Callable to an Executor happens-before the task begins execution.
- A thread arriving at a CyclicBarrier or Exchanger happens-before the barrier action, which happens-before threads are released from the barrier.
- Piggybacking - an advanced technique that uses an existing happens-before ordering to ensure visibility of an object X, rather than creating a happens-before ordering specifically for publishing X. Do not use it unless absolutely necessary for performance.
- Static initialization provides safe publication because static initializers are run by the JVM at class initialization time. You can get lazy initialization from this by using a private static holder class (see the holder idiom sketch after this list).
- Double-checked locking - a broken idiom for avoiding synchronization during lazy initialization: first check whether the resource is non-null without holding the lock; if it is, use it; if it is null, synchronize and initialize it. The problem is that reading the resource without any happens-before relationship can expose it in a partially constructed state (a sketch, fixed with volatile, follows this list).
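As a concrete illustration of the volatile variable rule (combined with program order and transitivity), here is a minimal sketch; the class and field names are my own:

```java
// The write to `ready` in writer() happens-before the read of `ready` in
// reader(), so once reader() sees ready == true it is also guaranteed to
// see number == 42 (program order + volatile rule + transitivity).
public class VolatileVisibility {
    private int number = 0;             // plain field
    private volatile boolean ready = false;

    public void writer() {
        number = 42;                    // (1) ordinary write
        ready = true;                   // (2) volatile write: publishes (1)
    }

    public void reader() {
        if (ready) {                    // (3) volatile read
            System.out.println(number); // guaranteed to print 42, not 0
        }
    }
}
```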
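Here is a minimal sketch of the lazy initialization holder class idiom mentioned above; Resource is just a placeholder type for illustration:

```java
// The JVM defers initializing ResourceHolder until getResource() first
// references it, and class initialization is guaranteed to publish the
// Resource safely to all threads.
public class ResourceFactory {
    private static class ResourceHolder {
        static final Resource RESOURCE = new Resource(); // runs at class init
    }

    public static Resource getResource() {
        return ResourceHolder.RESOURCE; // triggers ResourceHolder init on first call
    }

    // Placeholder type for illustration only.
    static class Resource { }
}
```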
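And here is a minimal sketch of double-checked locking in the form that is correct on Java 5 and later because the field is declared volatile; the holder idiom above is usually the simpler choice. Names are again my own for illustration:

```java
// Without `volatile` on the field this idiom is broken: a thread could
// observe a non-null but partially constructed Resource. Declaring the
// field volatile restores the needed happens-before edge.
public class DoubleCheckedLocking {
    private static volatile Resource resource;   // volatile is essential here

    public static Resource getInstance() {
        if (resource == null) {                  // first check, no lock
            synchronized (DoubleCheckedLocking.class) {
                if (resource == null) {          // second check, with lock
                    resource = new Resource();
                }
            }
        }
        return resource;
    }

    // Placeholder type for illustration only.
    static class Resource { }
}
```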