Competitive analysis of randomized paging algorithms
- Authors
- Dimitris Achlioptas; Marek Chrobak; John Noga
- Publisher
- Elsevier Science
- Year
- 2000
- Language
- English
- File size
- 130 KB
- Volume
- 234
- Category
- Article
- ISSN
- 0304-3975
Synopsis
The paging problem is defined as follows: we are given a two-level memory system, in which one level is a fast memory, called the cache, capable of holding k items, and the second level is an unbounded but slow memory. At each time step, a request to an item is issued. Given a request to an item p, a miss occurs if p is not present in the fast memory. In response to a miss, we need to choose an item q in the cache and replace it by p. The choice of q must be made on-line, without knowledge of future requests. The objective is to design a replacement strategy with a small number of misses. In this paper we use competitive analysis to study the performance of randomized on-line paging algorithms. Our goal is to show how the concept of work functions, used previously mostly for the analysis of deterministic algorithms, can also be applied, in a systematic fashion, to the randomized case. We present two results: we first show that the competitive ratio of the marking algorithm is exactly 2H_k - 1. Previously, it was known to be between H_k and 2H_k. Then we provide a new, H_k-competitive algorithm for paging. Our algorithm, as well as its analysis, is simpler than the known algorithm by McGeoch and Sleator. Another advantage of our algorithm is that it can be implemented with complexity bounds independent of the number of past requests: O(k^2 log k) memory and O(k^2) time per request.
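The randomized marking algorithm analyzed in the synopsis can be sketched as follows. This is a minimal illustrative implementation, not the paper's own pseudocode: the function name, the set-based cache model, and the miss counter are assumptions made for the example. On a miss with a full cache, the algorithm evicts a uniformly random unmarked item; when every cached item is marked and a miss occurs, a new phase begins and all marks are cleared.

```python
import random

def marking_paging(requests, k):
    """Serve a request sequence with a cache of size k using the
    randomized marking algorithm; return the number of misses."""
    cache = set()    # items currently in fast memory
    marked = set()   # items marked during the current phase
    misses = 0
    for p in requests:
        if p not in cache:
            misses += 1
            if len(cache) == k:
                if marked == cache:
                    # all cached items are marked: a new phase starts
                    marked = set()
                # evict a uniformly random unmarked cached item
                q = random.choice(sorted(cache - marked))
                cache.remove(q)
            cache.add(p)
        marked.add(p)  # a requested item is marked, on hit or miss
    return misses
```

For instance, `marking_paging([1, 2, 3, 1, 2, 3], 3)` incurs exactly 3 misses (the three cold misses), while with k = 2 every request in `[1, 2, 3]` misses regardless of the random choices.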
SIMILAR VOLUMES
In this paper we study randomized algorithms with random input. We adapt to such algorithms the notion of probability of a false positive which is common in epidemiological studies. The probability of a false positive takes into account both the (controlled) error of the randomization and the random
Nearest neighbor load balancing algorithms, like diffusion, are popular due to their simplicity, flexibility, and robustness. We show that they are also asymptotically very efficient when a random rather than a worst case initial load distribution is considered. We show that diffusion needs Hlog n 2ad balan