Nearest prototype classifier designs: An experimental study
By James C. Bezdek; Ludmila I. Kuncheva
- Book ID
- 102279852
- Publisher
- John Wiley and Sons
- Year
- 2001
- Language
- English
- File size
- 374 KB
- Volume
- 16
- Category
- Article
- ISSN
- 0884-8173
- DOI
- 10.1002/int.1068
Synopsis
We compare eleven methods for finding prototypes upon which to base the nearest prototype classifier. Four methods for prototype selection are discussed: Wilson+Hart (a condensation + error-editing method) and three types of combinatorial search (random search, genetic algorithm, and tabu search). Seven methods for prototype extraction are discussed: unsupervised vector quantization, supervised learning vector quantization (with and without training counters), decision surface mapping, a fuzzy version of vector quantization, c-means clustering, and bootstrap editing. These eleven methods can be usefully divided in two other ways: by whether they employ pre- or post-supervision, and by whether the number of prototypes found is user-defined or "automatic." Generalization error rates of the 11 methods are estimated on two synthetic and two real data sets. Offering the usual disclaimer that these are just a limited set of experiments, we feel confident in asserting that presupervised extraction methods offer a better chance for success to the casual user than postsupervised selection schemes. Finally, our calculations do not suggest that methods which find the "best" number of prototypes "automatically" are superior to methods for which the user simply specifies the number of prototypes.
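The nearest prototype (1-np) rule itself is simple: a point receives the label of its closest prototype. Below is a minimal sketch, not the authors' code, of one presupervised extraction design from the family the abstract surveys: hard c-means clustering run separately on each labeled class, whose cluster centers then serve as labeled prototypes for the 1-np rule. The toy data, the choice of two prototypes per class, and all function names are illustrative assumptions.

```python
# Minimal sketch: presupervised prototype extraction (per-class c-means)
# followed by nearest-prototype classification. Illustrative only.
import numpy as np

def cmeans(X, c, iters=50, seed=0):
    """Plain hard c-means: return c cluster centers of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest current center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        # Move each center to the mean of its points (skip empty clusters).
        for k in range(c):
            if np.any(nearest == k):
                centers[k] = X[nearest == k].mean(axis=0)
    return centers

def extract_prototypes(X, y, c_per_class=2):
    """Presupervised extraction: cluster each class separately, so every
    prototype inherits the label of the class it came from."""
    protos, labels = [], []
    for cls in np.unique(y):
        centers = cmeans(X[y == cls], c_per_class)
        protos.append(centers)
        labels += [cls] * len(centers)
    return np.vstack(protos), np.array(labels)

def predict_1np(protos, proto_labels, X):
    """Nearest prototype rule: the closest prototype's label wins."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[d.argmin(axis=1)]

# Toy usage: two well-separated Gaussian classes in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
P, Py = extract_prototypes(X, y)
print("resubstitution error:", np.mean(predict_1np(P, Py, X) != y))
```

A postsupervised selection scheme, by contrast, would search over subsets of the training points themselves (e.g., by random search, a genetic algorithm, or tabu search) and keep the subset of prototypes with the lowest classification error.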
SIMILAR VOLUMES
By appropriate editing of the reference set and judicious selection of features, we can obtain an optimal nearest neighbor (NN) classifier that maximizes the accuracy of classification and saves computational time and memory resources. In this paper, we propose a new method for simultaneous referenc…
In classifier combination, it is believed that diverse ensembles have a better potential for improvement on the accuracy than nondiverse ensembles. We put this hypothesis to a test for two methods for building the ensembles: Bagging and Boosting, with two linear classifier models: the nearest mean c…