An evaluation of Java implementations of message-passing
By Nenad Stankovic; Kang Zhang
- Publisher
- John Wiley and Sons
- Year
- 2000
- Language
- English
- File size
- 344 KB
- Volume
- 30
- Category
- Article
- ISSN
- 0038-0644
Synopsis
As an object-oriented programming language and a platform-independent environment, Java has attracted much attention. However, the trade-off between portability and performance has not spared Java: the initial performance of Java programs was poor, due to the interpretive nature of the environment. In this paper we present communication performance results for three types of message-passing programs: native, Java with native communications, and pure Java. Despite concerns about performance and numerical issues, we believe the results confirm that high-performance parallel computing in Java is possible, provided the technology matures and the approach is pragmatic.
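The "pure Java" category above refers to message passing implemented entirely with Java's standard libraries, with no native code. As a hedged illustration (not the paper's actual benchmark code), the following sketch measures round-trip message latency between two threads over a TCP socket, in the style of the ping-pong tests commonly used to evaluate communication performance; the class name, message size, and round count are assumptions for the example:

```java
import java.io.*;
import java.net.*;

// Minimal pure-Java ping-pong latency sketch over TCP sockets.
// A hypothetical stand-in for the "pure Java" message-passing category:
// one thread echoes messages, the other times round trips.
public class PingPong {
    static final int ROUNDS = 1000;    // assumed round count
    static final int MSG_SIZE = 1024;  // assumed message size in bytes

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral local port

        // Echo side: receive each message in full, send it straight back.
        Thread echo = new Thread(() -> {
            try (Socket s = server.accept();
                 DataInputStream in = new DataInputStream(s.getInputStream());
                 DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                byte[] buf = new byte[MSG_SIZE];
                for (int i = 0; i < ROUNDS; i++) {
                    in.readFully(buf); // blocking receive
                    out.write(buf);    // echo back
                    out.flush();
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        echo.start();

        // Timing side: send a message and wait for its echo, ROUNDS times.
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
             DataInputStream in = new DataInputStream(s.getInputStream());
             DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
            byte[] buf = new byte[MSG_SIZE];
            long start = System.nanoTime();
            for (int i = 0; i < ROUNDS; i++) {
                out.write(buf);
                out.flush();
                in.readFully(buf); // wait for the echo
            }
            long elapsed = System.nanoTime() - start;
            System.out.printf("avg round-trip: %.1f us%n",
                    elapsed / 1000.0 / ROUNDS);
        }
        echo.join();
        server.close();
    }
}
```

A native or Java-plus-native variant would replace the socket calls with JNI wrappers around an MPI library; the pure-Java version trades that raw performance for portability, which is the tension the paper evaluates.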
SIMILAR VOLUMES
We present a new computing approach for the parallelization on message-passing computer architectures of the DNAml algorithm, one of the most powerful tools available for constructing phylogenetic trees from DNA sequences. An analysis of the data dependencies of the method gave little chance to dev
Cluster computers that use personal computers (PCs) as their processing nodes have been widely adopted as parallel computing platforms due to improvements in their cost/performance ratio. To achieve higher performance from a cluster computer, it is important to consider not on
The inherent structure of cellular automata is trivially parallelizable and can directly benefit from massively parallel machines in computationally intensive problems. This paper presents both block synchronous and block pipeline (with asynchronous message passing) parallel implementations of cellu
Application development for distributed-computing ''Grids'' can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation o