Bayesian Learning for Neural Networks
by Radford M. Neal (auth.)
- Publisher
- Springer-Verlag New York
- Year
- 1996
- Language
- English
- Pages
- 193
- Series
- Lecture Notes in Statistics 118
- Edition
- 1
- Category
- Library
Synopsis
Artificial "neural networks" are widely used as flexible models for classification and regression applications, but questions remain about how the power of these models can be safely exploited when training data is limited. This book demonstrates how Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional training methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. A practical implementation of Bayesian neural network learning using Markov chain Monte Carlo methods is also described, and software for it is freely available over the Internet. Presupposing only basic knowledge of probability and statistics, this book should be of interest to researchers in statistics, engineering, and artificial intelligence.
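The synopsis describes learning a posterior distribution over network weights with Markov chain Monte Carlo, then averaging predictions over posterior samples. A minimal sketch of that idea is below, using random-walk Metropolis on a tiny one-hidden-layer regression network; the book's actual implementation uses the more efficient hybrid (Hamiltonian) Monte Carlo, and the dataset, network size, prior, and noise level here are all illustrative assumptions, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration): y = sin(x) + noise
X = np.linspace(-3.0, 3.0, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)

H = 5                    # hidden units (arbitrary choice)
n_params = 3 * H + 1     # input weights, hidden biases, output weights, output bias

def unpack(w):
    a = w[:H]            # input-to-hidden weights
    b = w[H:2 * H]       # hidden biases
    v = w[2 * H:3 * H]   # hidden-to-output weights
    c = w[-1]            # output bias
    return a, b, v, c

def predict(w, x):
    a, b, v, c = unpack(w)
    h = np.tanh(np.outer(x, a) + b)  # hidden activations, shape (n, H)
    return h @ v + c

def log_post(w):
    # Standard-normal prior on weights + Gaussian likelihood (sigma = 0.1)
    log_prior = -0.5 * np.sum(w ** 2)
    resid = y - predict(w, X)
    log_lik = -0.5 * np.sum(resid ** 2) / 0.1 ** 2
    return log_prior + log_lik

# Random-walk Metropolis over the full weight vector
w = np.zeros(n_params)
lp = log_post(w)
samples = []
for i in range(20_000):
    w_prop = w + 0.05 * rng.standard_normal(n_params)
    lp_prop = log_post(w_prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        w, lp = w_prop, lp_prop
    if i >= 10_000 and i % 10 == 0:           # discard burn-in, thin the chain
        samples.append(w.copy())

# Bayesian prediction: average the network's output over posterior samples,
# rather than using a single "best-fit" weight vector
preds = np.mean([predict(s, X) for s in samples], axis=0)
```

Averaging over the posterior is what guards against overfitting: no single weight setting is trusted, so predictions reflect the uncertainty left by the limited training data.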
Table of Contents
Front Matter....Pages i-xiv
Introduction....Pages 1-28
Priors for Infinite Networks....Pages 29-53
Monte Carlo Implementation....Pages 55-98
Evaluation of Neural Network Models....Pages 99-143
Conclusions and Further Work....Pages 145-152
Back Matter....Pages 153-185
Subjects
Statistics, general; Artificial Intelligence (incl. Robotics)
SIMILAR VOLUMES
In this first-edition book, methods are discussed for doing inference in Bayesian networks and influence diagrams. Hundreds of examples and problems allow readers to grasp the information. Some of the topics discussed include Pearl's message-passing algorithm, Parameter Learning: 2 Alternatives, P