Abstract
Task parallelism raises the level of abstraction in shared-memory parallel programming, simplifying the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation: additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contribution of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. On NUMA systems, increased data access latency can cause significant work time inflation. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance by up to 3X compared to the Intel OpenMP task scheduler.
Original language | English |
---|---|
Pages (from-to) | 123-136 |
Number of pages | 14 |
Journal | Scientific Programming |
Volume | 21 |
Issue number | 3-4 |
DOIs | |
State | Published - 2013 |
Externally published | Yes |
Keywords
- NUMA
- OpenMP
- Task parallel programming
- affinity
- locality
- task scheduling