High performance computing in lattice QCD
Edited by N. Cabibbo; Y. Iwasaki; K. Schilling
- Publisher
- Elsevier Science
- Year
- 1999
- Language
- English
- Size
- 42 KB
- Volume
- 25
- Category
- Article
- ISSN
- 0167-8191
Synopsis
Modern research in elementary particle physics relies not only on large accelerators but also on large computers: as an essential part of experimental research in the simulation of detectors and in event analysis, and on the theoretical side for the evaluation of nonperturbative aspects of quantum chromodynamics (QCD), the fundamental theory of the strong interactions.
We are grateful to the editors of Parallel Computing for the opportunity to act as guest editors for this special issue, which is devoted to the high performance computing issues of `solving' QCD on the lattice. The timing for such a review project is well chosen since considerable progress has been achieved recently in the field. This advance is closely related to the advent of high performance, cost-effective parallel systems.
It is indeed remarkable that over the last 15 years the development of lattice gauge theory has been substantially promoted by `home made' special purpose parallel systems, like the APE family of machines of the Italian research agency INFN, various systems at the Tsukuba Center for Computational Physics in Japan, and a series of QCD computers built in the Physics Department of Columbia University.
At present lattice QCD projects are approaching the threshold of teracomputing, the lead being taken in late 1996 by the Center for Computational Physics in Tsukuba, Japan, with their dedicated CP-PACS parallel system. They achieved milestone results in simulating QCD to unprecedented accuracy in the quenched approximation, where internal fermion loops are neglected. This sets the stage for unraveling polarization effects in the QCD vacuum by turning to the simulation of full QCD on the lattice. Exploratory projects in this direction are being pursued by three European groups, SESAM, TvL, and UKQCD (at present these groups have computing resources which are only about 20% of those available to the Japanese group) as well as by the CP-PACS collaboration itself. In the United States our colleagues at Columbia University and Brookhaven National Laboratory took a significant step forward last year by putting special purpose computers into operation that are based on signal processors. These computers enable them to explore in practical simulation novel ideas for putting nearly massless quarks on a lattice, the so-called domain wall fermion formulation. In Europe APEmille is the upcoming QCD teracomputer.
Sampling of QCD vacuum configurations is a major task in itself, due to the nonlocal forces from fermion loops. Hence, in parallel to the development of high performance low cost computers, much research effort has been devoted to
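To make the sampling task concrete, here is a minimal toy sketch (not from the special issue itself): quenched Metropolis sampling of a 2D U(1) lattice gauge theory with the Wilson plaquette action. All names, the lattice size, and the coupling `beta` are illustrative choices. Production lattice-QCD codes instead use SU(3) link matrices in four dimensions and, for full QCD, must also account for the fermion determinant, which is exactly the nonlocal contribution mentioned above.

```python
import numpy as np

# Toy 2D U(1) lattice gauge theory, quenched Metropolis sampling.
# Links are angles theta[mu, x, y]; each plaquette contributes -beta*cos(theta_P).
rng = np.random.default_rng(0)
L, beta, n_sweeps = 8, 2.0, 200
theta = np.zeros((2, L, L))  # cold start: every plaquette angle is zero

def plaquette(th, x, y):
    """Plaquette angle at (x, y): sum of link angles around the unit square."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return th[0, x, y] + th[1, xp, y] - th[0, x, yp] - th[1, x, y]

def local_action(th, mu, x, y):
    """Wilson action of the two plaquettes that contain link (mu, x, y)."""
    if mu == 0:  # x-link appears in plaquettes at (x, y) and (x, y-1)
        return -beta * (np.cos(plaquette(th, x, y)) +
                        np.cos(plaquette(th, x, (y - 1) % L)))
    else:        # y-link appears in plaquettes at (x, y) and (x-1, y)
        return -beta * (np.cos(plaquette(th, x, y)) +
                        np.cos(plaquette(th, (x - 1) % L, y)))

accepted = 0
for _ in range(n_sweeps):
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                old = theta[mu, x, y]
                s_old = local_action(theta, mu, x, y)
                theta[mu, x, y] = old + rng.uniform(-0.5, 0.5)
                # Metropolis test: accept with probability min(1, e^{-(S_new - S_old)})
                if rng.random() >= np.exp(s_old - local_action(theta, mu, x, y)):
                    theta[mu, x, y] = old  # reject, restore old link
                else:
                    accepted += 1

acceptance = accepted / (n_sweeps * 2 * L * L)
mean_plaq = np.mean([np.cos(plaquette(theta, x, y))
                     for x in range(L) for y in range(L)])
print(f"acceptance = {acceptance:.2f}, <cos P> = {mean_plaq:.3f}")
```

The purely local update above is cheap precisely because the quenched action is a sum over plaquettes; including dynamical fermions couples every link to every other through the determinant, which is why full-QCD sampling demands the algorithmic and hardware effort this issue surveys.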
SIMILAR VOLUMES
An analytical overview of the state of the art, open problems, and future trends in heterogeneous parallel and distributed computing. This book provides an overview of the ongoing academic research, development, and uses of heterogeneous parallel and distributed computing in the context of scientifi
With the rapid growth in computing technology and the ever increasing needs of present and future computation-intensive applications, the past decade has witnessed the proliferation of high performance computing architectures and powerful parallel and distributed systems. In this special issue, the
The CP-PACS is a massively parallel MIMD computer with the theoretical peak speed of 614 GFLOPS which has been developed for computational physics applications at the University of Tsukuba, Japan. We report on the performance of the CP-PACS computer measured during recent production runs using our q