Tight Bound on Johnson's Algorithm for Maximum Satisfiability
- Authors
- Jianer Chen; Donald K. Friesen; Hao Zheng
- Publisher
- Elsevier Science
- Year
- 1999
- Language
- English
- File Size
- 223 KB
- Volume
- 58
- Category
- Article
- ISSN
- 0022-0000
Synopsis
We present new techniques that give a more thorough analysis of Johnson's classical algorithm for the Maximum Satisfiability problem. In contrast to the common belief for two decades that Johnson's algorithm has performance ratio 1/2, we show that the performance ratio is 2/3 and that this bound is tight. Moreover, we show that simple generalizations of Johnson's algorithm do not improve the performance ratio bound 2/3.
© 1999 Academic Press
1. Introduction
The Maximum Satisfiability problem (Max-Sat) is one of the fundamental optimization problems. Max-Sat is known to be NP-hard. Hence, it is unlikely that there is a polynomial-time algorithm that solves Max-Sat optimally.
Approximation algorithms for Max-Sat have been considered. About two decades ago, Johnson [7] proposed his famous approximation algorithm for Max-Sat and proved that his algorithm guarantees a performance ratio of at least 1/2. Follow-up investigation of approximation algorithms for Max-Sat has been a very active research area. However, no approximation algorithm known before 1992 had a proven performance ratio better than 1/2. For studies on the problem before 1990, readers are referred to Hansen and Jaumard's survey [5].
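The greedy scheme behind Johnson's algorithm can be sketched as follows. This is a minimal illustration, not the paper's own presentation: each active clause of length ℓ carries weight 2^(-ℓ); variables are assigned in order, each to the side with the larger total weight, and a clause whose literal is falsified shortens, doubling its weight. Clause encoding (sets of signed integers) and the function name are assumptions for this sketch.

```python
from fractions import Fraction

def johnson_max_sat(num_vars, clauses):
    """Greedy Johnson-style heuristic for Max-Sat (illustrative sketch).

    `clauses` is a list of clauses; each clause is a set of nonzero
    integers, where +i / -i denote the literals x_i / not-x_i.
    Returns (assignment dict, number of satisfied clauses).
    """
    # Each active clause C starts with weight 2^(-|C|).
    active = [(set(c), Fraction(1, 2 ** len(c))) for c in clauses]
    assignment = {}
    satisfied = 0
    for v in range(1, num_vars + 1):
        # Total weight of clauses that setting v true / false would satisfy.
        pos = sum(w for c, w in active if v in c)
        neg = sum(w for c, w in active if -v in c)
        value = pos >= neg          # break ties toward True
        assignment[v] = value
        sat_lit = v if value else -v
        new_active = []
        for c, w in active:
            if sat_lit in c:
                satisfied += 1      # clause satisfied; drop it
            elif -sat_lit in c:
                c2 = c - {-sat_lit}
                if c2:
                    new_active.append((c2, w * 2))  # shortened: weight doubles
                # an emptied clause can no longer be satisfied; drop it
            else:
                new_active.append((c, w))
        active = new_active
    return assignment, satisfied
```

For example, on the three clauses {x1}, {¬x1 ∨ x2}, {¬x2} (of which at most two can be satisfied simultaneously), the sketch satisfies two.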
Recent research [1, 9] has shown that Max-Sat is a generic complete problem under an approximation-preserving reduction for the class APX, which consists of the optimization problems that can be approximated in polynomial time to within some constant factor.
SIMILAR VOLUMES
The nearest neighbor rule or k-nearest neighbor rule is a technique of nonparametric pattern recognition. Its algorithm is simple and the error is smaller than twice the Bayes error if there are enough training samples. However, it requires an enormous amount of computation, proportional to the numb…