Automatic Parallelization: New Approaches to Code Generation, Data Distribution, and Performance Prediction

✍ Scribed by Thomas Fahringer (auth.), Christoph W. Keßler (ed.)


Publisher: Vieweg+Teubner Verlag
Year: 1994
Language: English
Pages: 234
Edition: 1
Category: Library


✦ Synopsis


Distributed-memory multiprocessing systems (DMS), such as Intel's hypercubes, the Paragon, Thinking Machines' CM-5, and the Meiko Computing Surface, have rapidly gained user acceptance and promise to deliver the computing power required to solve the grand-challenge problems of science and engineering. These machines are relatively inexpensive to build and are potentially scalable to large numbers of processors. They are, however, difficult to program: because memory is non-uniform, local accesses are much faster than transfers of non-local data via message-passing operations, so the locality of algorithms must be exploited to achieve acceptable performance.

The management of data, with the twin goals of spreading the computational workload and minimizing the delays incurred when a processor must wait for non-local data, therefore becomes of paramount importance. When a code is parallelized by hand, the programmer distributes the program's work and data to the processors that will execute it. A common approach exploits the regularity of most numerical computations: the Single Program Multiple Data (SPMD), or data-parallel, model of computation. Under this model, each data array in the original program is distributed across the processors, establishing an ownership relation, and the computations that define a data item are performed by the processor that owns it.
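The ownership relation and owner-computes rule described above can be sketched in a few lines. The following is a minimal Python simulation, not code from the book: it assumes a simple block distribution, and the names `block_owner` and the doubling "computation" are purely illustrative.

```python
P = 4           # number of processors (assumed for this sketch)
N = 16          # global array size
a = list(range(N))

def block_owner(i, n=N, p=P):
    """Owner of global index i under a block distribution:
    the array is split into p contiguous blocks of equal size."""
    block = (n + p - 1) // p   # ceiling division: elements per block
    return i // block

# SPMD owner-computes rule: every "processor" runs the same loop,
# but each one executes only the assignments whose left-hand side
# it owns.  Here the loop over proc stands in for parallel execution.
result = [None] * N
for proc in range(P):
    for i in range(N):
        if block_owner(i) == proc:
            result[i] = a[i] * 2   # the computation defining element i
```

In a real message-passing program each processor would allocate only its own block and fetch any non-local right-hand-side operands via communication; the guard `block_owner(i) == proc` is what turns one program text into P cooperating processes.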

✦ Table of Contents


Front Matter....Pages i-6
The Weight Finder β€” An Advanced Profiler for Fortran Programs....Pages 7-31
Predicting Execution Times of Sequential Scientific Kernels....Pages 32-44
Isolating the Reasons for the Performance of Parallel Machines on Numerical Programs....Pages 45-77
Targeting Transputer Systems, Past and Future....Pages 78-83
Adaptor: A Compilation System for Data Parallel Fortran Programs....Pages 84-98
SNAP! Prototyping a Sequential and Numerical Application Parallelizer....Pages 99-109
Knowledge-Based Automatic Parallelization by Pattern Recognition....Pages 110-135
Automatic Data Layout for Distributed-Memory Machines in the D Programming Environment....Pages 136-152
Subspace Optimizations....Pages 153-176
Data and Process Alignment in Modula-2*....Pages 177-191
Automatic Parallelization for Distributed Memory Multiprocessors....Pages 192-217
Back Matter....Pages 218-224

✦ Subjects


Computer Science, general


πŸ“œ SIMILAR VOLUMES


Automatic Performance Prediction of Parallel Programs
✍ Thomas Fahringer (auth.) πŸ“‚ Library πŸ“… 1996 πŸ› Springer US 🌐 English

Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis…

High Performance Parallelism Pearls: Multicore and Many-core Programming Approaches
✍ James Reinders, James Jeffers πŸ“‚ Library πŸ“… 2014 πŸ› Morgan Kaufmann 🌐 English

High Performance Parallelism Pearls shows how to leverage parallelism on processors and coprocessors with the same programming, illustrating the most effective ways to better tap the computational potential of systems with Intel Xeon Phi coprocessors and Intel Xeon processors or other multicore…

High Performance Parallelism Pearls: Multicore and Many-core Programming Approaches
✍ James Reinders, Jim Jeffers πŸ“‚ Library πŸ“… 2015 🌐 English

High Performance Parallelism Pearls shows how to leverage parallelism on processors and coprocessors with the same programming, illustrating the most effective ways to better tap the computational potential of systems with Intel Xeon Phi coprocessors and Intel Xeon processors or other multicore processors…

High performance parallelism pearls: multicore and many-core programming approaches
✍ Jeffers, Jim; Reinders, James πŸ“‚ Library πŸ“… 2015 πŸ› Morgan Kaufmann Publishers 🌐 English

High Performance Parallelism Pearls Volume 2 offers another set of examples that demonstrate how to leverage parallelism. Similar to Volume 1, the techniques included here explain how to use processors and coprocessors with the same programming, illustrating the most effective ways to combine…

Musical Networks: Parallel Distributed Perception and Performance
✍ Niall Griffith; Peter M. Todd (eds.) πŸ“‚ Library πŸ“… 1999 πŸ› The MIT Press 🌐 English

This volume presents the most up-to-date collection of neural network models of music and creativity gathered together in one place. Chapters by leaders in the field cover new connectionist models of pitch perception, tonality, musical streaming, sequential and hierarchical melodic structure…