Consider a parallel program running on a single node with eight cores. The table below shows the wallclock time and the speedup achieved when the program uses additional cores. The two figures show visualizations of the table data.

Wallclock time and speedup achieved when the program uses additional cores.

    Core Count   Wallclock Time [s]   Speedup
        1              11.26            1.00
        2               5.69            1.98
        3               3.87            2.91
        4               2.99            3.76
        5               2.53             4.45
        6               2.26             4.98
        7               2.17             5.18
        8               2.23             5.04
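
Speedup on n cores is the single-core time divided by the n-core time, and parallel efficiency is that speedup divided by n. The short Python sketch below (an illustration, not part of the original exercise) recomputes both quantities from the measured times in the table:

    # Recompute speedup and efficiency from the measured wallclock times.
    times = {1: 11.26, 2: 5.69, 3: 3.87, 4: 2.99,
             5: 2.53, 6: 2.26, 7: 2.17, 8: 2.23}

    t1 = times[1]  # single-core baseline
    for cores, t in sorted(times.items()):
        speedup = t1 / t              # S(n) = T(1) / T(n)
        efficiency = speedup / cores  # E(n) = S(n) / n
        print(f"{cores} cores: speedup {speedup:.2f}, efficiency {efficiency:.2f}")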
Figure: Visualization of speedup as core count increases (see the speedup column in the data table for raw numbers).
Figure: Visualization of walltime as core count increases (see the wallclock column in the data table for raw numbers).
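
For readers following along without the images, a minimal matplotlib sketch (hypothetical; the original figures may have been produced differently) can recreate both plots from the table data, including the ideal y = x line discussed in question 3 below:

    import matplotlib.pyplot as plt

    cores = list(range(1, 9))
    walltime = [11.26, 5.69, 3.87, 2.99, 2.53, 2.26, 2.17, 2.23]
    speedup = [walltime[0] / t for t in walltime]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(cores, speedup, "o-", label="measured speedup")
    ax1.plot(cores, cores, "--", label="perfect speedup (y = x)")
    ax1.set(xlabel="core count", ylabel="speedup")
    ax1.legend()
    ax2.plot(cores, walltime, "o-")
    ax2.set(xlabel="core count", ylabel="wallclock time [s]")
    fig.tight_layout()
    plt.show()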
Exercise Questions:


  1. At what number of cores is the program fastest?
    Answer: The computation is fastest with seven cores, where the wallclock time reaches its minimum of 2.17 s; adding an eighth core actually slows it down slightly (2.23 s).
  2. At what number of cores is the calculation most efficient?
    Answer: Efficiency is speedup divided by core count. By that measure, 1, 2, and 3 cores are nearly tied for most efficient (1.00, 0.99, and 0.97, respectively), and efficiency falls off steadily at higher core counts.
  3. What would "perfect speedup" look like on the speedup graph?
    Answer: Perfect speedup would be a straight line with slope 1 (y = x) on the speedup graph: running on n cores would yield a speedup of exactly n.
  4. There could be various reasons why a parallel application does not exhibit perfect speedup. It could be due to the inherently serial parts of the calculation, or the overhead of copying and transferring data among the parallel tasks, or the competition among individual tasks for shared resources like memory or disk. What could you do to test whether the poor speedup of this application was the result of memory contention?
    Answer: One could profile the code to see whether time spent waiting on memory grows as cores are added, or run a memory-bound micro-benchmark at increasing core counts to check whether per-core performance degrades; a sketch of the second approach follows this list.
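
One way to probe for memory contention, sketched below under the assumption that NumPy and a multi-core machine are available: run a deliberately memory-bandwidth-bound kernel in a growing number of worker processes. Each worker streams through its own private array, so if the per-worker sweep time rises as workers are added, the workers are competing for shared memory bandwidth rather than for CPU time.

    import time
    from multiprocessing import Pool

    import numpy as np

    N = 20_000_000  # ~160 MB of float64 per worker, far larger than any cache

    def sweep(_):
        # Each worker streams through its own array, so any slowdown at
        # higher worker counts points to shared memory bandwidth, not CPU.
        a = np.ones(N)
        t0 = time.perf_counter()
        a.sum()
        return time.perf_counter() - t0

    if __name__ == "__main__":
        for workers in (1, 2, 4, 8):
            with Pool(workers) as pool:
                t = pool.map(sweep, range(workers))
            print(f"{workers} workers: mean sweep time {sum(t) / len(t):.3f} s")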
 