𝔖 Scriptorium
✦   LIBER   ✦


Data-Parallel Programming on MIMD Computers

โœ Scribed by Philip J. Hatcher, Michael J. Quinn


Publisher: The MIT Press
Year: 1991
Tongue: English
Leaves: 240
Series: Scientific and Engineering Computation
Category: Library


โœฆ Synopsis


MIMD computers are notoriously difficult to program. Data-Parallel Programming demonstrates that architecture-independent parallel programming is possible by describing in detail how programs written in a high-level SIMD programming language may be compiled and efficiently executed on both shared-memory multiprocessors and distributed-memory multicomputers.

The authors provide enough data for the reader to judge the feasibility of architecture-independent programming in a data-parallel language. For each benchmark program they give the source code listing, the absolute execution time on both a multiprocessor and a multicomputer, and the speedup relative to a sequential program. They often present multiple solutions to the same problem, the better to illustrate the strengths and weaknesses of these compilers.

The language presented is Dataparallel C, a variant of the original C* language developed by Thinking Machines Corporation for its Connection Machine processor array. Separate chapters describe the compilation of Dataparallel C programs for execution on the Sequent multiprocessor and on the Intel and nCUBE hypercubes. The authors document the performance of these compilers on a variety of benchmark programs and present several case studies.

Philip J. Hatcher is Assistant Professor in the Department of Computer Science at the University of New Hampshire. Michael J. Quinn is Associate Professor of Computer Science at Oregon State University.

Contents: Introduction. Dataparallel C Programming Language Description. Design of a Multicomputer Dataparallel C Compiler. Design of a Multiprocessor Dataparallel C Compiler. Writing Efficient Programs. Benchmarking the Compilers. Case Studies. Conclusions.


📜 SIMILAR VOLUMES


Foundations of Parallel Programming (Cam
โœ David Skillicorn ๐Ÿ“‚ Library ๐Ÿ“… 2005 ๐ŸŒ English

Using parallel machines is difficult because of their inherent complexity and because their architecture changes frequently. This book presents an integrated approach to developing software for parallel machines that addresses software issues and performance issues together. The author describes a m…

Parallel Computers. Architecture and Pro
โœ V. Rajaraman, C. Siva Ram Murthy ๐Ÿ“‚ Library ๐Ÿ“… 2016 ๐Ÿ› Prentice-Hall ๐ŸŒ English

Today all computers, from tablet and desktop computers to supercomputers, work in parallel. A basic knowledge of the architecture of parallel computers, and of how to program them, is thus essential for students of computer science and IT professionals. In its second edition, the book retains the lucidity…

Data Organization in Parallel Computers
โœ Harry A. G. Wijshoff (auth.) ๐Ÿ“‚ Library ๐Ÿ“… 1988 ๐Ÿ› Springer US ๐ŸŒ English

The organization of data is clearly of great importance in the design of high performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computa…