
SPMD programming in Java

By Hummel, Susan Flynn; Ngo, Ton; Srinivasan, Harini


Publisher: John Wiley and Sons
Year: 1997
Language: English
File size: 83 KB
Volume: 9
Category: Article
ISSN: 1040-3108


SYNOPSIS


We consider the suitability of the Java concurrent constructs for writing high-performance SPMD code for parallel machines. More specifically, we investigate implementing a financial application in Java on a distributed-memory parallel machine. Although Java was not expressly targeted at such applications and architectures, we conclude that efficient implementations are feasible. Finally, we propose a library of Java methods to facilitate SPMD programming. © 1997 by John Wiley & Sons, Ltd.

MOTIVATION

Although Java was not specifically designed as a high-performance parallel-computing language, it does include concurrent objects (threads), and its widespread acceptance makes it an attractive candidate for writing portable, computationally intensive parallel applications. In particular, Java has become a popular choice for numerical financial codes, an example of which is arbitrage: detecting when the buying and selling of securities is temporarily profitable. These applications involve sophisticated modeling techniques such as successive over-relaxation (SOR) and Monte Carlo methods [1]. Other numerical financial applications include data mining (pattern discovery) and cryptography (secure transactions).

In this paper, we use an SOR code for evaluating American options (see Figure 1) [1] to explore the suitability of Java as a high-performance parallel-computing language. This work is being conducted in the context of a research effort to implement a Java runtime system (RTS) for the IBM POWERparallel System SP machine [2], which is designed to scale effectively to large numbers of processors. The RTS is being written in C with calls to MPI (Message Passing Interface) [3] routines. We plan to move to a Java-plus-MPI version when one becomes available.

The typical programming idiom for highly parallel machines is called data-parallel or single-program multiple-data (SPMD), where the data provide the parallel dimension. Parallelism is conceptually specified as a loop whose iterates operate on elements of a, perhaps multidimensional, array. Data dependences between parallel-loop iterates lead to a producer-consumer type of sharing, wherein one iterate writes variables that are later read by another, or collective communication, wherein all iterates participate. The communication pattern between iterates is often very regular, for example a bidirectional flow of variables between consecutive iterates (as in the code in Figure 1). This paper explores the suitability of the Java concurrency constructs for writing SPMD programs. In particular, the paper:

  1. identifies the differences between the parallelism supported by Java and data parallelism
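As an illustration of the SPMD idiom described above, the following sketch partitions a one-dimensional Jacobi-style relaxation block-wise across Java threads, with each thread updating its own block and exchanging boundary values with its neighbors between sweeps. This is not code from the paper: class and parameter names are invented for illustration, and it uses java.util.concurrent.CyclicBarrier (added in Java 5, long after this 1997 article, which had only the raw Thread, synchronized, wait, and notify constructs available).

```java
import java.util.concurrent.CyclicBarrier;

// Illustrative sketch (not from the paper): an SPMD-style 1-D relaxation,
// partitioned block-wise across threads. Each sweep, a thread updates only
// its own block of `next` from `cur`; the barrier ensures all writes are
// visible before buffers are swapped, so each thread sees its neighbors'
// previous-sweep values -- the bidirectional flow described in the text.
public class SpmdRelax {
    static final int N = 16, WORKERS = 4, ITERS = 200;
    static double[] cur, next;   // double buffering: read cur, write next

    static double[] run() throws InterruptedException {
        cur = new double[N + 2];
        next = new double[N + 2];
        cur[N + 1] = next[N + 1] = 1.0;       // boundary: u[0]=0, u[N+1]=1
        // The barrier action runs once per sweep, after all workers arrive
        // and before any is released, so the swap is race-free.
        CyclicBarrier barrier = new CyclicBarrier(WORKERS, () -> {
            double[] t = cur; cur = next; next = t;
        });
        Thread[] workers = new Thread[WORKERS];
        int chunk = N / WORKERS;              // assume WORKERS divides N
        for (int w = 0; w < WORKERS; w++) {
            final int lo = 1 + w * chunk, hi = lo + chunk - 1;
            workers[w] = new Thread(() -> {
                try {
                    for (int it = 0; it < ITERS; it++) {
                        for (int i = lo; i <= hi; i++)
                            next[i] = 0.5 * (cur[i - 1] + cur[i + 1]);
                        barrier.await();      // neighbor exchange point
                    }
                } catch (Exception e) { throw new RuntimeException(e); }
            });
            workers[w].start();
        }
        for (Thread t : workers) t.join();
        return cur;                           // final swap left the result here
    }

    public static void main(String[] args) throws InterruptedException {
        double[] u = run();
        System.out.printf("u[1]=%.4f u[N/2]=%.4f u[N]=%.4f%n",
                          u[1], u[N / 2], u[N]);
    }
}
```

The barrier plays the role of the collective communication mentioned above: no thread begins sweep k+1 until every thread has finished sweep k, which is exactly the producer-consumer discipline the neighboring iterates require.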

SIMILAR VOLUMES


Performance modeling for SPMD message-passing …
Brehm, Jürgen; Worley, Patrick H.; Madhukar, Manish · Article · 1998 · John Wiley and Sons · English · 279 KB

Today's massively parallel machines are typically message-passing systems consisting of hundreds or thousands of processors. Implementing parallel applications efficiently in this environment is a challenging task, and poor parallel design decisions can be expensive to correct. Tools and techniques …

Prophet: automated scheduling of SPMD programs …
Weissman, Jon B. · Article · 1999 · John Wiley and Sons · English · 209 KB

Obtaining efficient execution of parallel programs in workstation networks is a difficult problem for the user. Unlike dedicated parallel computer resources, network resources are shared, heterogeneous, vary in availability, and offer communication performance that is still an order of magnitude slower …