Improved k-nearest neighbor classification
✍ Authors: Yingquan Wu; Krassimir Ianakiev; Venu Govindaraju
- Publisher
- Elsevier Science
- Year
- 2002
- Language
- English
- File size
- 113 KB
- Volume
- 35
- Category
- Article
- ISSN
- 0031-3203
✦ Synopsis
k-nearest neighbor (k-NN) classification is a well-known decision rule that is widely used in pattern classification. However, the traditional implementation of this method is computationally expensive. In this paper we develop two effective techniques, namely, template condensing and preprocessing, to significantly speed up k-NN classification while maintaining the level of accuracy. Our template condensing technique aims at "sparsifying" dense homogeneous clusters of prototypes of any single class. This is implemented by iteratively eliminating patterns which exhibit high attractive capacities. Our preprocessing technique filters out a large portion of prototypes which are unlikely to match against the unknown pattern. This again accelerates the classification procedure considerably, especially in cases where the dimensionality of the feature space is high. One of our case studies shows that incorporating these two techniques into the k-NN rule achieves a seven-fold speed-up without sacrificing accuracy.
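To make the speed-up idea concrete, the sketch below contrasts brute-force k-NN with a version that prefilters candidate prototypes using a cheap proxy distance before computing full distances. Note this is a minimal illustration of the general prefiltering idea, not the paper's actual condensing or preprocessing algorithms; the one-dimensional projection proxy and the `keep` fraction are assumptions chosen for clarity.

```python
import numpy as np

def knn_classify(X_train, y_train, x, k=3):
    # Brute-force k-NN: compute the squared Euclidean distance from x to
    # every prototype, then take a majority vote among the k closest.
    d = np.sum((X_train - x) ** 2, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

def knn_classify_prefiltered(X_train, y_train, x, k=3, keep=0.5):
    # Illustrative prefilter (NOT the paper's method): rank prototypes by a
    # cheap one-dimensional proxy (distance along the first feature) and keep
    # only the closest fraction, then run full-distance k-NN on the survivors.
    # The proxy costs O(n) scalar ops versus O(n * dim) for full distances,
    # which is where the savings come from in high-dimensional feature spaces.
    proxy = np.abs(X_train[:, 0] - x[0])
    m = max(k, int(keep * len(X_train)))   # never keep fewer than k candidates
    cand = np.argsort(proxy)[:m]
    return knn_classify(X_train[cand], y_train[cand], x, k)
```

On well-separated data the prefiltered version returns the same label as the brute-force rule while examining only a fraction of the prototypes; a more aggressive filter trades a small risk of discarding a true nearest neighbor for a larger speed-up.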
📜 SIMILAR ARTICLES
A novel neural-net-based method of constructing optimized prototypes for nearest-neighbor classifiers is proposed. Based on an effective classification-oriented error function containing class classification and class separation components, the corresponding prototype and feature weight update rules are …
A technique is presented for adapting nearest-neighbor classification to the case of categorical variables. The set of categories is mapped onto the real line in such a way as to maximize the ratio of total sum of squares to within-class sum of squares, aggregated over classes. The resulting real va…