A data-parallel approach to multiblock flow computations
✍ Authors: M. L. Sawley; J. K. Tegnér
- Publisher
- John Wiley and Sons
- Year
- 1994
- Language
- English
- File size
- 869 KB
- Volume
- 19
- Category
- Article
- ISSN
- 0271-2091
No payment or registration required. For personal study only.
✦ Synopsis
Multiblock methods are often employed to compute flows in complex geometries. While such methods lend themselves in a natural way to coarse-grain parallel processing by the distribution of different blocks to different processors, in some situations a fine-grain data-parallel implementation may be more appropriate. A study is presented of the resolution of the Euler equations for compressible flow on a block-structured mesh, illustrating the advantages of the data-parallel approach. Particular emphasis is placed on a dynamic block management strategy that allows computations to be undertaken only for blocks where useful work is to be performed. In addition, appropriate choices of initial and boundary conditions that enhance solution convergence are presented. Finally, code portability between five different massively parallel computer systems is examined and an analysis of the performance results obtained on different parallel systems is presented.
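The dynamic block management strategy described above can be sketched as follows. This is a minimal illustration, not the paper's actual scheme: the block names, the stand-in residual model, and the convergence tolerance are all assumptions. The essential idea is that each block carries an activity flag, and the solver advances only blocks where useful work remains, deactivating a block once its residual falls below tolerance.

```python
# Hypothetical sketch of dynamic block management: advance only
# "active" blocks and retire a block once it has converged.
# Block names, the relaxation model, and TOL are illustrative assumptions.

TOL = 1e-6  # convergence tolerance (assumed value)

def relax(residual):
    """Stand-in for one solver sweep on a block: damp its residual."""
    return residual * 0.5

def solve(blocks, max_iters=100):
    """Advance only active blocks; deactivate each block as it converges."""
    active = set(blocks)
    iters = 0
    while active and iters < max_iters:
        for name in list(active):
            blocks[name] = relax(blocks[name])
            if blocks[name] < TOL:
                active.discard(name)  # no useful work left on this block
        iters += 1
    return blocks, iters

# Blocks with very different initial residuals converge at different
# times; converged blocks stop consuming solver sweeps early.
blocks = {"B0": 1.0, "B1": 1e-4, "B2": 1e-7}
final, n = solve(blocks)
```

In a data-parallel setting the same effect is obtained with an activity mask over the block index, so that masked-out blocks perform no arithmetic while the remaining blocks proceed in lockstep.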
📜 SIMILAR VOLUMES
Developing an efficient algorithm for solving a large linear system in a parallel computing environment is the major problem associated with the application of parallel processing to the numerical solution of large-scale engineering problems. This paper presents a new algorithm called Multiple Sequenti…
Abstract. Purpose: To exploit the capabilities of parallel processing in applying the space‐time adaptive processing (STAP) algorithm, previously explored on a small scale for functional magnetic resonance imaging (fMRI) applications, to conventional size fMRI data sets. Materials and Meth…
Data-parallel languages allow programmers to easily express parallel computations by means of high-level constructs. To reduce overheads, the compiler partitions the computations among the processors at compile-time, on the basis of the static data distribution suggested by the programmer. When exec…