A dynamic optimization approach for adaptive incremental learning
By Marcelo N. Kapp; Robert Sabourin; Patrick Maupin
- Publisher
- John Wiley and Sons
- Year
- 2011
- Language
- English
- File size
- 448 KB
- Volume
- 26
- Category
- Article
- ISSN
- 0884-8173
Synopsis
A fundamental problem in incremental learning is that the best set of parameters for a classification system can change as the data evolve. Consequently, unless the system adapts itself to such changes, it becomes obsolete, even if the application environment seems static. To address this problem, in this paper we propose a dynamic optimization approach that performs incremental learning in an adaptive fashion by tracking, evolving, and combining optimum hypotheses over time. The approach incorporates several techniques: dynamic particle swarm optimization, incremental support vector machine classifiers, change detection, and dynamic ensemble selection based on classifiers' confidence levels. Experiments carried out on synthetic and real-world databases demonstrate that the proposed approach outperforms the classification methods commonly used in incremental learning scenarios.
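The synopsis names four ingredients: hyperparameter optimization, an incrementally trained classifier, change detection, and confidence-based ensemble selection. The following is a deliberately simplified sketch of how such a loop can fit together, not the paper's method: a one-dimensional threshold classifier stands in for the incremental SVM, an exhaustive search over candidate thresholds stands in for dynamic PSO, and change detection is a plain accuracy threshold. All names and numbers are hypothetical.

```python
# Toy sketch of an adaptive incremental-learning loop (illustrative only).

class ThresholdClassifier:
    """Stand-in for an incremental SVM: predicts 1 when x >= threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, x):
        return 1 if x >= self.threshold else 0

    def confidence(self, x):
        # Distance from the decision boundary as a crude confidence score.
        return abs(x - self.threshold)

def fit_best_threshold(chunk):
    # Stand-in for hyperparameter optimization (the paper uses dynamic PSO):
    # pick the candidate threshold with the highest accuracy on the chunk.
    candidates = [x for x, _ in chunk]
    best = max(candidates, key=lambda t: sum(
        (1 if x >= t else 0) == y for x, y in chunk))
    return ThresholdClassifier(best)

def accuracy(clf, chunk):
    return sum(clf.predict(x) == y for x, y in chunk) / len(chunk)

def ensemble_predict(ensemble, x):
    # Dynamic ensemble selection: defer to the most confident member.
    return max(ensemble, key=lambda c: c.confidence(x)).predict(x)

def run_stream(chunks, drift_threshold=0.7):
    ensemble = [fit_best_threshold(chunks[0])]
    for chunk in chunks[1:]:
        # Change detection: if the latest hypothesis degrades on the new
        # chunk, re-optimize and keep the old member in the ensemble.
        if accuracy(ensemble[-1], chunk) < drift_threshold:
            ensemble.append(fit_best_threshold(chunk))
    return ensemble

# Toy stream: the decision boundary drifts from 0.5 to 1.5 in the third chunk.
chunks = [
    [(0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1)],
    [(0.1, 0), (0.3, 0), (0.7, 1), (0.9, 1)],
    [(1.0, 0), (1.2, 0), (1.6, 1), (1.8, 1)],
]
ensemble = run_stream(chunks)  # a second member is added after the drift
```

The key design point this sketch illustrates is that old hypotheses are retained rather than discarded, so the ensemble can still answer confidently in regions of the input space the newest model has not seen.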
SIMILAR VOLUMES
This paper presents an adaptive iterative learning control scheme that is applicable to a class of nonlinear systems. The control scheme guarantees system stability and boundedness by using the feedback controller coupled with the fuzzy compensator and achieves precise tracking by using the iterative…
This paper describes an on-line adaptation method that combines maximum a posteriori (MAP) estimation for intra-class training (the training scheme incorporates new training samples with prior information) with vector field smoothing (VFS) for inter-class smoothing. Results of experiments comparing…