Cluster computing for digital microscopy
By Walter A. Carrington; Dimitri Lisin
- Publisher
- John Wiley and Sons
- Year
- 2004
- Language
- English
- File size
- 163 KB
- Volume
- 64
- Category
- Article
- ISSN
- 1059-910X
Abstract
Microscopy is becoming increasingly digital and dependent on computation. Some of the computational tasks in microscopy are computationally intense, such as image restoration (deconvolution), some optical calculations, image segmentation, and image analysis. Several modern microscope technologies enable the acquisition of very large data sets. 3D imaging of live cells over time, multispectral imaging, very large tiled 3D images of thick samples, or images from high-throughput biology all can produce extremely large images. These large data sets place a very large burden on laboratory computer resources. This combination of computationally intensive tasks and larger data sizes can easily exceed the capability of single personal computers. The large multiprocessor computers that are the traditional technology for larger tasks are too expensive for most laboratories. An alternative approach is to use a number of inexpensive personal computers as a cluster; that is, use multiple networked computers programmed to run the problem in parallel on all the computers in the cluster. By the use of relatively inexpensive over-the-counter hardware and open source software, this approach can be much more cost-effective for many tasks. We discuss the different computer architectures available, and their advantages and disadvantages. Microsc. Res. Tech. 64:204–213, 2004. © 2004 Wiley-Liss, Inc.
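The parallelism the abstract describes is largely tile-level: a large image is split into independent pieces that can be processed on different machines of the cluster. A minimal sketch of that idea, using Python's standard-library `multiprocessing` pool on one machine as a stand-in for networked cluster nodes (the `restore_tile` function and its pixel-doubling step are hypothetical placeholders for a real restoration step such as deconvolution, not the authors' code):

```python
from multiprocessing import Pool

def restore_tile(tile):
    # Placeholder for a compute-heavy per-tile restoration step
    # (e.g. deconvolution); here it just scales and clamps pixels.
    return [min(255, p * 2) for p in tile]

def restore_image(tiles, workers=2):
    # Each tile is independent, so the work distributes naturally:
    # a pool of local processes here, networked cluster nodes in practice.
    with Pool(workers) as pool:
        return pool.map(restore_tile, tiles)

if __name__ == "__main__":
    tiles = [[10, 20], [30, 200]]
    print(restore_image(tiles))  # [[20, 40], [60, 255]]
```

Because the tiles share no state, the same pattern maps onto a cluster of commodity PCs with a message-passing layer (e.g. MPI) in place of the local process pool, which is the cost-effective arrangement the article advocates.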