Serial programs are sequentially structured so that later calculations can depend on the results of earlier calculations. In parallel programs, you cannot depend on the order of statements in the code to match the execution order; you must recognize any data dependencies and ensure that the computations are executed in the correct order. Managing data dependency in parallel code involves isolating dependencies within individual tasks or coordinating the execution of multiple tasks.

When data dependencies exist across parallel tasks, you must coordinate the execution of the tasks. Without explicit coordination, parallel tasks may read stale or uninitialized values from memory and use them instead of the intended values. If you are lucky, these unexpected values will cause the program to crash. In the worst case, your program will incorporate these incorrect values and produce incorrect results.

Recall the earlier shared memory example in which two parallel workers (threads, in this case) both write to the same memory location during their respective computations. The illustration is reproduced below. This is an example of a race condition because two or more threads access the same location in memory and at least one of the threads writes to the shared location.

Figure: Two threads accessing a single array element in shared memory. Thread 0 reads the array element, adds 2 to the value, and then writes the newly computed sum back to the array element. Thread 1 sets the value of the array element to 3. In this example, the two threads of execution each have read and write access to the same vector in the memory of the parent task.

In the example above, the two threads read and write to the same location in memory, creating a data dependency. The programmer probably intends these read and write operations to occur in a particular order but has not taken any steps to manage the data dependency. The computer will execute some interleaving of the operations in the two threads, but the particular interleaving is not defined and could change each time the program executes. In this case, each interleaving produces a different result (for simplicity, assume the vector was initialized with zeros)! The figure below illustrates three possible interleavings.

Figure: Three possible interleavings of the operations within two threads. The operation in the second thread could occur before, between, or after the operations in the first thread, leading to different results.
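
To make the picture concrete, here is a minimal sketch of the race in C with POSIX threads. The single-element array, the "add 2" operation, and the value 3 come from the figure; the function and variable names are illustrative, not from the source.

```c
#include <pthread.h>
#include <stdio.h>

static int vec[1] = {0};   /* shared vector, initialized to zero */

/* Thread 0: read the array element, add 2, write the sum back. */
void *add_two(void *arg) {
    int tmp = vec[0];
    vec[0] = tmp + 2;
    return NULL;
}

/* Thread 1: overwrite the array element with 3. */
void *set_three(void *arg) {
    vec[0] = 3;
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, add_two, NULL);
    pthread_create(&t1, NULL, set_three, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    /* Depending on the interleaving, this prints 2, 3, or 5. */
    printf("vec[0] = %d\n", vec[0]);
    return 0;
}
```

Running this program repeatedly can print any of the three results, corresponding to thread 1's write landing before, between, or after thread 0's read and write.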

Suppose you want the first thread (thread 0) to use the value placed in the vector by the second thread (thread 1); then you need a mechanism that ensures thread 1 has written the value before thread 0 reads it. Without some mechanism to coordinate when the threads execute their instructions, you could get a different result each time the program runs!
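
One way to enforce that order in shared memory is a mutex paired with a condition variable. The sketch below is illustrative (the names producer, consumer, and ready are assumptions, not from the source): thread 0 blocks until thread 1 signals that the value has been written.

```c
#include <pthread.h>
#include <stdio.h>

static int vec[1] = {0};
static int ready = 0;                      /* set once thread 1 has written */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  written = PTHREAD_COND_INITIALIZER;

/* Thread 1: write the value, then announce that it is ready. */
void *producer(void *arg) {
    pthread_mutex_lock(&lock);
    vec[0] = 3;
    ready = 1;
    pthread_cond_signal(&written);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Thread 0: wait until the value is ready, then read it and add 2. */
void *consumer(void *arg) {
    pthread_mutex_lock(&lock);
    while (!ready)                         /* guard against spurious wakeups */
        pthread_cond_wait(&written, &lock);
    vec[0] = vec[0] + 2;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t1, NULL, producer, NULL);
    pthread_create(&t0, NULL, consumer, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("vec[0] = %d\n", vec[0]);       /* always 5 */
    return 0;
}
```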

The mechanisms available to coordinate parallel workers depend on the system architecture. In shared memory programming, use a semaphore or other locking mechanism to ensure that the results produced by one thread are written to memory before another thread attempts to read those results. In distributed memory programming, use messages to communicate the results of earlier calculations to the workers that depend on those results.
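
The shared memory case is sketched above. For the distributed memory case, the message itself carries the dependency: the receiving worker cannot proceed until the value arrives. The following sketch uses MPI (an assumption; the source does not name a particular message-passing library) and is meant to be run with at least two processes, for example with mpirun -np 2.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        value = 3;                                     /* result of an earlier calculation */
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        /* The receive blocks until rank 1's result arrives, enforcing the order. */
        MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        value += 2;                                    /* dependent calculation */
        printf("rank 0 computed %d\n", value);         /* always 5 */
    }

    MPI_Finalize();
    return 0;
}
```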

The existence of data dependencies between separate threads of execution necessarily affects program efficiency because satisfying the dependencies forces synchronization among the threads. It is very important to identify all of the data dependencies early in the development of a parallel program. Discovering a dependency after most of the parallelism has been introduced can be very disruptive.

It is best to do calculations that involve data dependencies within the same thread, where the order of the instructions enforces the dependency. While isolating data dependencies within individual threads is not always possible, a program that minimizes the dependencies across threads will usually be more efficient.
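
As an illustration of this guideline, the sketch below (names and sizes are illustrative) restructures the earlier example so that the write and the dependent read-modify-write happen in the same thread, on that thread's own array element. Program order alone enforces the dependency, and no locking or signaling is needed.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
static int vec[NTHREADS];

/* Each thread writes and then updates only its own element, so the
   dependency between the two steps stays inside one thread. */
void *work(void *arg) {
    int i = *(int *)arg;
    vec[i] = 3;             /* "producer" step */
    vec[i] = vec[i] + 2;    /* dependent "consumer" step, same thread */
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    int ids[NTHREADS];
    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, work, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    for (int i = 0; i < NTHREADS; i++)
        printf("vec[%d] = %d\n", i, vec[i]);   /* always 5 */
    return 0;
}
```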
