𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Derivation of the multilayer perceptron weight constraints for direct network interpretation and knowledge discovery

โœ Scribed by M.L. Vaughn


Publisher: Elsevier Science
Year: 1999
Tongue: English
Weight: 171 KB
Volume: 12
Category: Article
ISSN: 0893-6080

No coin nor oath required. For personal study only.

✦ Synopsis


This paper examines the multilayer perceptron (MLP) network from a hidden layer decision region perspective and derives the output layer and hidden layer weight constraints that the network must satisfy when performing a general classification task. This provides a foundation for direct knowledge discovery from the MLP, using a new method previously published by the author that finds the key inputs the MLP uses to classify an input case. The knowledge that the MLP network learns from the training examples is represented as ranked data relationships and induced rules, which can be used to validate the network. The bounds of the network knowledge are established in the n-dimensional input space, and a measure of the limit of the MLP network knowledge is proposed. An algorithm is presented for calculating the maximum number of hidden layer decision regions in the MLP input space.
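The paper's own algorithm for counting hidden layer decision regions is not reproduced in this listing. As a hedged illustration of the kind of count involved: if each hidden unit defines a hyperplane in the n-dimensional input space, the classical hyperplane-arrangement bound (Cover, 1965) gives the maximum number of regions that h hyperplanes in general position can create. The function below is a minimal sketch of that standard bound, not the author's method.

```python
from math import comb

def max_decision_regions(h, n):
    """Upper bound on the number of regions that h hyperplanes
    (one per hidden unit, in general position) can carve out of
    n-dimensional input space: sum of C(h, i) for i = 0..n.
    Note this is the classical arrangement bound, assumed here as
    an illustration; the paper derives its own algorithm."""
    return sum(comb(h, i) for i in range(n + 1))

# Three hidden units over a 2-D input space: three lines in a
# plane in general position create at most 7 regions.
print(max_decision_regions(3, 2))  # -> 7
```

When h <= n the sum collapses to 2**h, i.e. every subset of hidden-unit half-spaces can be realized, which matches the intuition that few hyperplanes in a high-dimensional space never constrain one another.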


📜 SIMILAR VOLUMES


Introduction of linear constraints on th
โœ Masaki Ishii; Itsuo Kumazawa ๐Ÿ“‚ Article ๐Ÿ“… 2003 ๐Ÿ› John Wiley and Sons ๐ŸŒ English โš– 675 KB

Abstract: In the application of layered neural networks to practical problems, a high generalization power is required. This paper discusses a method of improving the generalization power of neural networks. The knowledge of the object to be learned is assumed to include the fact that the output