
Special Issue on Compilation and Architectural Support for Parallel Applications: Guest Editor's Introduction

✍ Scribed by David J. Lilja


Publisher: Elsevier Science
Year: 1999
Language: English
Size: 48 KB
Volume: 58
Category: Article
ISSN: 0743-7315


✦ Synopsis


The parallelizing compilation techniques and associated system architectures developed to date have proven to be very effective in improving the performance of regularly structured programs, such as array-based applications written in Fortran. Now, however, it is the irregularly structured nonnumerical applications written in such languages as C, C++, and Java that pose the greatest challenges to system architects and compiler writers.

These types of applications share several characteristics that make them very difficult to parallelize using traditional approaches. For example, many of these programs contain loops that may terminate as a function of run-time conditions, such as do-while loops; they make extensive use of pointers and recursion, which are very difficult to analyze at compile time; and they contain many small basic blocks with complex branching behavior. Together, these characteristics make such programs very difficult, if not impossible, to parallelize with traditional architectures and compiler techniques.

Simultaneously, continual advances in VLSI technology are also influencing the design of parallel systems. Transistor densities have increased to the point where entire systems can now be integrated on a single chip. However, interconnection delays within a chip are beginning to overwhelm device delays in conventional technologies, which is limiting the rate of improvement in system clock speeds. Similar trends are expected for copper-based technologies within the next one to two semiconductor device generations. This combination of greater transistor densities with only limited improvements in processor clock speeds will force future systems to exploit higher degrees of parallelism at all levels of granularity to maintain the expected improvements in overall performance. They also will have to rely more on speculation and prediction. These changes will require even greater compiler and run-time support than is typical in existing systems.

The papers in this special issue offer potential solutions to several of these important problems. The overall focus is on parallelizing general-purpose application programs through improvements to processor and system architectures, compilers, and run-time systems. We were able to accept for publication fewer than half of the papers submitted for consideration in this special issue. Each paper was

