High Performance Fortran (HPF) is a data-parallel language that provides a high-level interface for programming scientific applications, while delegating to the compiler the task of generating explicitly parallel message-passing programs. This paper provides an overview of HPF compilation and runtime …
Compiling programs for distributed-memory multiprocessors
by David Callahan and Ken Kennedy
- Publisher
- Springer US
- Year
- 1988
- Language
- English
- Size
- 962 KB
- Volume
- 2
- Category
- Article
- ISSN
- 0920-8542
Synopsis
We describe a new approach to programming distributed-memory computers. Rather than having each node in the system explicitly programmed, we derive an efficient message-passing program from a sequential shared-memory program annotated with directions on how elements of shared arrays are distributed to processors. This article describes one possible input language for describing distributions and then details the compilation process and the optimization necessary to generate an efficient program.
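The synopsis describes annotating a shared-memory program with directions on how shared arrays are distributed, and letting the compiler partition the work accordingly. A minimal sketch of the idea, assuming a simple BLOCK distribution and the owner-computes rule (the function names are illustrative, not taken from the paper):

```python
# Sketch (not the paper's implementation): under a BLOCK distribution,
# an n-element array is split into contiguous blocks, one per processor,
# and each processor computes only the elements it owns.

def block_owner(i, n, p):
    """Processor owning element i of an n-element array BLOCK-distributed over p processors."""
    block = -(-n // p)          # ceiling division: elements per block
    return i // block

def local_indices(rank, n, p):
    """Global indices assigned to processor `rank` under the same distribution."""
    block = -(-n // p)
    lo = rank * block
    hi = min(lo + block, n)
    return range(lo, hi)

# Example: 10 elements over 4 processors -> blocks of sizes 3, 3, 3, 1.
owners = [block_owner(i, 10, 4) for i in range(10)]
# owners == [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
```

The compiler uses this ownership map both to restrict each node's loop bounds and to insert message-passing calls whenever a computation reads an element owned by another processor.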
SIMILAR VOLUMES
… and/or devising improved routing disciplines in the case of distributed-memory architectures, to reduce the expected access time for a variable. Extensive work has been done on both cache design and message routing. In this paper a new shared-data approach is taken to attack the problem. We consider …
Algorithm 1 (Compiling ALIGN directives)
Input: Fortran 90D/HPF syntax tree with some alignment functions to template
Output: Fortran 90D/HPF syntax tree with identical alignment functions to template
Method: For each aligned array, and for each dimension of that array, carry out the following steps …
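The idea behind such an algorithm can be sketched as follows: an ALIGN directive maps array index i to template cell a*i + b, and recording this affine mapping per dimension lets the compiler reason about which elements of different arrays are co-located. This is a hedged illustration under those assumptions, not the paper's actual code; all names here are invented:

```python
# Illustrative model: each aligned array dimension carries an affine
# alignment i -> stride*i + offset onto a shared template.

from dataclasses import dataclass

@dataclass
class Align:
    stride: int   # a in a*i + b
    offset: int   # b in a*i + b

    def to_template(self, i):
        """Template cell that array index i maps to."""
        return self.stride * i + self.offset

# Two arrays aligned to the same template T:
#   ALIGN A(i) WITH T(i)        -> stride 1, offset 0
#   ALIGN B(i) WITH T(2*i + 1)  -> stride 2, offset 1
A = Align(1, 0)
B = Align(2, 1)

# A(5) and B(2) map to the same template cell (1*5+0 == 2*2+1 == 5),
# so they are guaranteed to reside on the same processor.
```

Once every array dimension carries such a canonical alignment to the template, distributing the template distributes all aligned arrays consistently.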
… data-parallelism in these languages. Array expressions involve array sections, which consist of array elements from a lower index to an upper index at a fixed stride. In order to generate high-performance target code, compilers for distributed-memory machines should produce efficient code for array sections …
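An array section l:u:s denotes the index set {l, l+s, l+2s, …} up to u, and on a distributed-memory machine the compiler must intersect that set with each processor's local block. A minimal model of this intersection, with invented function names, assuming Fortran-style inclusive bounds:

```python
# Sketch: enumerating a strided array section and its local part on one processor.

def section(l, u, s):
    """Global indices of the array section l:u:s (inclusive bounds, stride s)."""
    return list(range(l, u + 1, s))

def local_section(l, u, s, lo, hi):
    """Elements of the section owned by a processor holding global indices [lo, hi)."""
    return [i for i in section(l, u, s) if lo <= i < hi]

# A(1:9:2) is indices [1, 3, 5, 7, 9]; on a processor owning [4, 8)
# the local part is [5, 7].
```

A real compiler computes these local index sets in closed form rather than by enumeration, precisely so that the generated loops touch only locally owned elements.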
Massively parallel processors have begun using commodity operating systems that support demand-paged virtual memory. To evaluate the utility of virtual memory, we measured the behavior of seven shared-memory parallel application programs on a simulated distributed-shared-memory machine. Our results …