Arrays are mapped to processors through a two-step process, alignment followed by distribution, in data-parallel languages such as High Performance Fortran. This mapping creates disjoint pieces of the array, each locally owned by one processor. An HPF compiler that generates code for ar
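The ownership computation described above can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes a simple HPF-style BLOCK distribution and shows how each processor's disjoint, locally owned slice of an array can be derived from the array size, processor count, and processor rank.

```python
def block_range(n, p, rank):
    """Return the half-open [lo, hi) index range of an n-element array
    owned by processor `rank` under a BLOCK distribution over p processors.
    Slices are contiguous, disjoint, and together cover the whole array."""
    base, rem = divmod(n, p)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

# Example: a 10-element array over 3 processors yields disjoint pieces
# [0, 4), [4, 7), [7, 10), which partition indices 0..9.
ranges = [block_range(10, 3, r) for r in range(3)]
```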
On the Utility of Communication–Computation Overlap in Data-Parallel Programs
✍ By Michael J. Quinn; Philip J. Hatcher
- Publisher: Elsevier Science
- Year: 1996
- Language: English
- Size: 260 KB
- Volume: 33
- Category: Article
- ISSN: 0743-7315
✦ Synopsis
In modern systems, however, interprocessor communication costs often limit the speedup achieved through parallelism. It is no surprise, then, that developers of compilers for data-parallel languages have hypothesized that optimizations which overlap communication with computation are important for reducing execution times and improving speedups [1,3,4,8,9,12,13].
In this paper we explore the benefits to be gained through communication-computation overlap.
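The overlap idea can be sketched in miniature. The code below is a hypothetical illustration, not the authors' implementation: it mimics the non-blocking receive / compute / wait pattern (in the style of `MPI_Irecv` followed by `MPI_Wait`) using a background thread, so that interior computation proceeds while simulated boundary ("halo") data is still in flight.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def recv_halo():
    """Stand-in for a non-blocking receive of boundary values;
    the sleep models network latency."""
    time.sleep(0.05)
    return [0.0, 0.0]  # hypothetical halo values for the two ends

def update_interior(a):
    """Work that depends only on locally owned interior elements,
    so it can safely proceed while the halo is in flight."""
    return [x * 0.5 for x in a[1:-1]]

def relax(a):
    with ThreadPoolExecutor(max_workers=1) as pool:
        fut = pool.submit(recv_halo)     # post the "receive"
        interior = update_interior(a)    # overlapped computation
        halo = fut.result()              # wait for completion
    return [halo[0]] + interior + [halo[1]]
```

Run sequentially, the communication latency would add directly to the execution time; with the overlap, it is hidden behind the interior update whenever that update takes at least as long as the transfer.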
📜 SIMILAR VOLUMES
An important problem in heterogeneous computing (HC) is predicting task execution time. A methodology is introduced for determining the execution time distribution for a given data parallel program that is to be executed in a SIMD, MIMD (SPMD), and/or mixed-mode SIMD/MIMD (SPMD) HC environment. The
The reasons which sometimes cause failure of convergence in the iteration of NMR spectra by LAOCOON 3 are discussed. The problem can be avoided by retaining the sequence of the eigenfunctions throughout the iterative process. In addition, assignment of the lines to start the iterative a
The past decade has seen explosive growth in database technology and the amount of data collected. Advances in data collection, use of bar codes in commercial outlets, and the computerization of business transactions have flooded us with lots of data. We have an unprecedented opportunity to analyze