Trainable fusion rules. II. Small sample-size effects
✍ By Šarūnas Raudys
- Publisher
- Elsevier Science
- Year
- 2006
- Language
- English
- File size
- 496 KB
- Volume
- 19
- Category
- Article
- ISSN
- 0893-6080
✦ Synopsis
A thorough theoretical analysis of the small-sample properties of trainable fusion rules is performed to determine in which situations neural network ensembles can improve or degrade classification results. We consider small-sample effects, specific to multiple-classifier system design, in the two-category case for two important fusion rules: (1) the linear weighted average (weighted voting), realized either by the standard Fisher classifier or by a single-layer perceptron, and (2) the non-linear Behavior-Knowledge-Space method. The small-sample effects include: (i) training bias, i.e. the influence of learning-sample size on the generalization error of the base experts or of the fusion rule; (ii) optimistically biased outputs of the experts (the self-boasting effect); and (iii) the impact of sample size on determining the optimal complexity of the fusion rule. Correction terms developed to reduce the self-boasting effect are studied. It is shown that small learning sets increase the classification error of the expert classifiers and damage the correlation structure between their outputs. If the learning sets used to develop the expert classifiers are too small, non-trainable fusion rules can outperform more sophisticated trainable ones. A practical technique for fighting sample-size problems is noise injection: injecting noise reduces the fusion rule's complexity and diminishes the experts' boasting bias.
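To make the two fusion rules named in the synopsis concrete, the following is a minimal NumPy sketch: several "expert" linear classifiers are trained on small learning sets, then combined (1) by a linear weighted-average fusion rule fit on noise-injected expert outputs and (2) by a simple Behavior-Knowledge-Space lookup table over the experts' crisp votes. All data dimensions, sample sizes, and the noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy two-category Gaussian data (illustrative assumption, not from the paper)
def make_data(n, d=5):
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, d)) + y[:, None] * 0.8
    return X, y

def train_expert(X, y):
    # Fisher-style linear discriminant: w = S^-1 (m1 - m0), midpoint threshold
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    S = np.cov(X.T) + 1e-3 * np.eye(X.shape[1])   # regularized pooled scatter
    w = np.linalg.solve(S, m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b

def expert_scores(experts, X):
    return np.column_stack([X @ w + b for w, b in experts])

# Small learning sets for the experts (the source of the "training bias")
experts = [train_expert(*make_data(30)) for _ in range(3)]

# (1) Linear weighted-average fusion rule, trained on expert outputs with
# noise injection to reduce the fusion rule's effective complexity
Xf, yf = make_data(60)
F = expert_scores(experts, Xf)
F_noisy = np.vstack([F + rng.normal(0.0, 0.3, F.shape) for _ in range(10)])
y_noisy = np.tile(yf, 10)
A = np.column_stack([F_noisy, np.ones(len(F_noisy))])
v, *_ = np.linalg.lstsq(A, 2 * y_noisy - 1, rcond=None)  # +/-1 targets

# (2) Behavior-Knowledge-Space rule: for each combination of the experts'
# crisp votes, remember the majority class seen on the fusion training set
bks = {}
for key, label in zip(map(tuple, (F > 0).astype(int)), yf):
    bks.setdefault(key, Counter())[label] += 1

# Evaluate both rules on an independent test set
Xt, yt = make_data(2000)
Ft = expert_scores(experts, Xt)
pred_lin = (np.column_stack([Ft, np.ones(len(Ft))]) @ v > 0).astype(int)
pred_bks = np.array([bks.get(tuple(k), Counter({0: 1})).most_common(1)[0][0]
                     for k in (Ft > 0).astype(int)])
acc = (pred_lin == yt).mean()
acc_bks = (pred_bks == yt).mean()
print(round(acc, 2), round(acc_bks, 2))
```

The sketch mirrors the abstract's setup only in outline: the BKS table is trainable but data-hungry (8 cells for 3 experts), while the noise-injected linear rule smooths the small fusion training set, which is the trade-off the paper analyzes.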