Parallelization and optimization of Mfold on shared memory system
By Qiankun Miao; Guangzhong Sun; Jiulong Shan; Guoliang Chen
- Publisher: Elsevier Science
- Year: 2010
- Language: English
- File size: 433 KB
- Volume: 36
- Category: Article
- ISSN: 0167-8191
## Similar volumes
The use of ILU(0) factorization as a preconditioner is quite frequent when solving linear systems in CFD computations, because of its efficiency and moderate memory requirements. For a small number of processors, this preconditioner, parallelized through coloring methods, shows little saving…
## Abstract

OpenMP offers a high-level interface for parallel programming on scalable shared memory (SMP) architectures. It provides the user with simple work-sharing directives while it relies on the compiler to generate parallel programs based on thread parallelism. However, the lack of language…
## Abstract

The programs ESCF, EGRAD, and AOFORCE are parts of the TURBOMOLE program package and compute excited-state properties and ground-state geometric hessians, respectively, for Hartree-Fock and density functional methods. The range of applicability of these programs has been extended by all…