Hierarchical Scheduling in Parallel and Cluster Systems
By Sivarama Dandamudi (auth.)
- Publisher
- Springer US
- Year
- 2003
- Language
- English
- Pages
- 262
- Series
- Series in Computer Science
- Edition
- 1
- Category
- Library
Synopsis
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high-performance computing. These architectures span a wide variety of system types.

At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform memory access to all processors. They also provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors, due to bus bandwidth limitations.

To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to the non-uniform memory access (NUMA) architecture.

Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
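The two communication models the synopsis contrasts can be illustrated in miniature. The sketch below is not from the book; it uses Python's `multiprocessing` module to show a shared-memory update (all workers operate on one address space, as in UMA/NUMA machines) next to explicit message passing over a pipe (no shared state, as in a distributed-memory multicomputer). The function names are hypothetical.

```python
from multiprocessing import Process, Value, Pipe

def shared_memory_worker(counter):
    # Shared-memory model: every worker sees the same memory location,
    # so updates must be synchronized with a lock.
    with counter.get_lock():
        counter.value += 1

def message_passing_worker(conn, rank):
    # Message-passing model: no shared address space; the result is
    # copied into an explicit message sent back to the parent.
    conn.send(rank * rank)
    conn.close()

def run_demo(n_workers=4):
    # Shared-memory style: one counter, updated in place by all workers.
    counter = Value("i", 0)
    procs = [Process(target=shared_memory_worker, args=(counter,))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    # Message-passing style: each worker sends its result over a pipe.
    results = []
    for rank in range(n_workers):
        parent_conn, child_conn = Pipe()
        p = Process(target=message_passing_worker, args=(child_conn, rank))
        p.start()
        results.append(parent_conn.recv())
        p.join()
    return counter.value, results

if __name__ == "__main__":
    total, squares = run_demo()
    print(total, squares)
```

The same trade-off the book discusses shows up even here: the shared counter needs a lock to stay consistent, while the pipe version needs no lock but pays the cost of copying data between address spaces.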
Table of Contents
Front Matter....Pages i-xxv
Front Matter....Pages 1-1
Introduction....Pages 3-11
Parallel and Cluster Systems....Pages 13-48
Parallel Job Scheduling....Pages 49-84
Front Matter....Pages 85-85
Hierarchical Task Queue Organization....Pages 87-119
Performance of Scheduling Policies....Pages 121-139
Performance with Synchronization Workloads....Pages 141-164
Front Matter....Pages 165-165
Scheduling in Shared-Memory Multiprocessors....Pages 167-191
Scheduling in Distributed-Memory Multicomputers....Pages 193-211
Scheduling in Cluster Systems....Pages 213-229
Front Matter....Pages 231-231
Conclusions....Pages 233-237
Back Matter....Pages 239-251
Subjects
Processor Architectures; Computer Systems Organization and Communication Networks; Operating Systems; Theory of Computation
SIMILAR VOLUMES
This monograph consists of two volumes and provides a unified comprehensive presentation of a new hierarchic paradigm and discussions of various applications of hierarchical methods for nonlinear electrodynamic problems. Volume 1 is the first book, in which a new hierarchical model for dynamic
A new model for task scheduling that dramatically improves the efficiency of parallel systems. Task scheduling for parallel systems can become a quagmire of heuristics, models, and methods that have been developed over the past decades. The author of this innovative text cuts through the confusion and