Parallel logic programming (PLP) systems are sophisticated examples of symbolic computing systems. PLP systems address problems such as allocating dynamic memory, scheduling irregular computations, and managing different types of implicit parallelism. Most PLP systems have been developed for bus-based …
Program Repartitioning on Varying Communication Cost Parallel Architectures
By Santosh Pande; Kleanthis Psarris
- Publisher
- Elsevier Science
- Year
- 1996
- Language
- English
- Size
- 280 KB
- Volume
- 33
- Category
- Article
- ISSN
- 0743-7315
SIMILAR VOLUMES
A key measure of the performance of a distributed memory parallel program is its communication overhead. On most current parallel systems, sending data from a local to a remote processor still takes one to two orders of magnitude longer than accessing data on the local processor. The behavior …
Communication latency therefore often limits the speedup achievable through parallelism on modern systems. It is no surprise, then, that developers of compilers for data-parallel languages have stressed the importance of optimizations that overlap communication with computation in order to reduce execution times …
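The overlap idea mentioned above can be illustrated with a minimal sketch. The `fetch_remote` function and its fixed delay are hypothetical stand-ins for a non-blocking message operation on real hardware; the point is only that independent local work proceeds while the "transfer" is in flight, and the program waits only when the remote data is actually needed.

```python
import threading
import time

def fetch_remote(buf, result):
    # Simulated remote fetch: a stand-in for a non-blocking receive.
    # The sleep models communication latency.
    time.sleep(0.05)
    result.append(sum(buf))

def overlapped(remote_buf, local_data):
    # Start the "communication" in the background ...
    result = []
    t = threading.Thread(target=fetch_remote, args=(remote_buf, result))
    t.start()
    # ... and do independent local computation while it is in flight.
    local_total = sum(x * x for x in local_data)
    # Wait only at the point where the remote data is needed.
    t.join()
    return local_total + result[0]

print(overlapped([1, 2, 3], [4, 5]))  # 41 (local) + 6 (remote) = 47
```

In a compiler setting the same transformation hoists the communication start as early as the data dependences allow and sinks the wait as late as possible, so the latency is hidden behind computation rather than added to it.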
Within the framework of distributed object-oriented programming, this paper illustrates the main features of a communication micro-kernel able to perform, in a transparent way, both local and remote communication among objects located on a network of closely coupled microcomputers. The communication …