
Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers

✍ Scribed by Wilkinson, Barry; Allen, C. Michael


Publisher
Pearson/Prentice Hall
Year
2004/2005
Tongue
English
Leaves
488
Edition
2nd ed.
Category
Library


✦ Synopsis


This accessible text covers the techniques of parallel programming in a practical manner that enables readers to write and evaluate their own parallel programs. Supported by the National Science Foundation and exhaustively class-tested, it is the first text of its kind that does not require access to a special multiprocessor system, concentrating instead on parallel programs that can be executed on networked computers using freely available parallel software tools.

KEY TOPICS: The book covers the timely topic of cluster programming, of interest to many programmers due to the recent availability of low-cost computers. It uses MPI pseudocode to describe algorithms, allows different programming tools to be used in implementations, and provides readers with thorough coverage of shared-memory programming, including Pthreads and OpenMP.

MARKET: Useful as a professional reference for programmers and system administrators.

✦ Table of Contents


Cover......Page 1
Contents......Page 14
Preface......Page 6
About the Authors......Page 12
PART I: BASIC TECHNIQUES......Page 22
1.1 The Demand for Computational Speed......Page 24
1.2 Potential for Increased Computational Speed......Page 27
1.3 Types of Parallel Computers......Page 34
1.4 Cluster Computing......Page 47
Further Reading......Page 59
Bibliography......Page 60
Problems......Page 62
2.1 Basics of Message-Passing Programming......Page 63
2.2 Using a Cluster of Computers......Page 72
2.3 Evaluating Parallel Programs......Page 83
2.4 Debugging and Evaluating Parallel Programs Empirically......Page 91
Further Reading......Page 96
Bibliography......Page 97
Problems......Page 98
3.1 Ideal Parallel Computation......Page 100
3.2 Embarrassingly Parallel Examples......Page 102
3.3 Summary......Page 119
Bibliography......Page 120
Problems......Page 121
4.1 Partitioning......Page 127
4.2 Partitioning and Divide-and-Conquer Examples......Page 138
Further Reading......Page 152
Bibliography......Page 153
Problems......Page 154
5.1 Pipeline Technique......Page 161
5.2 Computing Platform for Pipelined Applications......Page 165
5.3 Pipeline Program Examples......Page 166
5.4 Summary......Page 178
Problems......Page 179
6.1 Synchronization......Page 184
6.2 Synchronized Computations......Page 191
6.3 Synchronous Iteration Program Examples......Page 195
6.4 Partially Synchronous Methods......Page 212
Bibliography......Page 214
Problems......Page 215
7.1 Load Balancing......Page 222
7.2 Dynamic Load Balancing......Page 224
7.3 Distributed Termination Detection Algorithms......Page 231
7.4 Program Example......Page 235
Further Reading......Page 244
Bibliography......Page 245
Problems......Page 246
8.1 Shared Memory Multiprocessors......Page 251
8.2 Constructs for Specifying Parallelism......Page 253
8.3 Sharing Data......Page 260
8.4 Parallel Programming Languages and Constructs......Page 268
8.5 OpenMP......Page 274
8.6 Performance Issues......Page 279
8.7 Program Examples......Page 286
8.8 Summary......Page 292
Bibliography......Page 293
Problems......Page 294
9.1 Distributed Shared Memory......Page 300
9.2 Implementing Distributed Shared Memory......Page 302
9.3 Achieving Consistent Memory in a DSM System......Page 305
9.4 Distributed Shared Memory Programming Primitives......Page 307
9.5 Distributed Shared Memory Programming......Page 311
9.6 Implementing a Simple DSM system......Page 312
Bibliography......Page 318
Problems......Page 319
PART II: ALGORITHMS AND APPLICATIONS......Page 322
10.1 General......Page 324
10.2 Compare-and-Exchange Sorting Algorithms......Page 325
10.3 Sorting on Specific Networks......Page 341
10.4 Other Sorting Algorithms......Page 348
Further Reading......Page 356
Bibliography......Page 357
Problems......Page 358
11.1 Matricesβ€”A Review......Page 361
11.2 Implementing Matrix Multiplication......Page 363
11.3 Solving a System of Linear Equations......Page 373
11.4 Iterative Methods......Page 377
Bibliography......Page 386
Problems......Page 387
12.1 Low-level Image Processing......Page 391
12.2 Point Processing......Page 393
12.3 Histogram......Page 394
12.4 Smoothing, Sharpening, and Noise Reduction......Page 395
12.5 Edge Detection......Page 400
12.6 The Hough Transform......Page 404
12.7 Transformation into the Frequency Domain......Page 408
12.8 Summary......Page 421
Bibliography......Page 422
Problems......Page 424
13.1 Applications and Techniques......Page 427
13.2 Branch-and-Bound Search......Page 428
13.3 Genetic Algorithms......Page 432
13.4 Successive Refinement......Page 444
13.5 Hill Climbing......Page 445
Further Reading......Page 449
Bibliography......Page 450
Problems......Page 451
APPENDIX A: BASIC MPI ROUTINES......Page 458
APPENDIX B: BASIC PTHREAD ROUTINES......Page 465
APPENDIX C: OPENMP DIRECTIVES, LIBRARY FUNCTIONS, AND ENVIRONMENT VARIABLES......Page 470
Index......Page 481

✦ Subjects


Textbooks; Science; Computer Science


📜 SIMILAR VOLUMES



Parallel Computers. Architecture and Pro
✍ V. Rajaraman, C. Siva Ram Murthy 📂 Library 📅 2016 🏛 Prentice-Hall 🌐 English

Today all computers, from tablet/desktop computers to super computers, work in parallel. A basic knowledge of the architecture of parallel computers, and how to program them, is thus essential for students of computer science and IT professionals. In its second edition, the book retains the lucidity

Using OpenCL: Programming Massively Par
✍ J. Kowalik, T. Puzniakowski 📂 Library 📅 2012 🏛 IOS Press 🌐 English

In 2011 many computer users were exploring the opportunities and the benefits of the massive parallelism offered by heterogeneous computing. In 2000 the Khronos Group, a not-for-profit industry consortium, was founded to create standard open APIs for parallel computing, graphics and dynamic media. A

Models for Parallel and Distributed Comp
✍ Michel Cosnard (auth.), Ricardo Corrêa, Inês Dutra, Mario Fiallos, Fernando Gome 📂 Library 📅 2002 🏛 Springer US 🌐 English

Parallel and distributed computation has been gaining a great deal of attention in the last decades. During this period, the advances attained in computing and communication technologies, and the reduction in the costs of those technologies, played a central role in the rapid growth of the inter