We performed several simulations with feed-forward neural networks using an idealized tracking apparatus whose tracks are invariant under translation and roto-translation transformations. Input information was provided to the networks without any preprocessing. We implemented 2- and 3-layer architectures
Invariance priors for Bayesian feed-forward neural networks
Authors: Udo v. Toussaint; Silvio Gori; Volker Dose
- Publisher: Elsevier Science
- Year: 2006
- Language: English
- File size: 1001 KB
- Volume: 19
- Category: Article
- ISSN: 0893-6080
Synopsis
Neural networks (NN) are valued for their flexibility on problems where there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can cause overfitting and can hamper the generalization ability of neural networks. Many approaches to regularizing NN have been suggested, but most of them are based on ad hoc arguments. Employing the principle of transformation invariance, we derive a general prior for feed-forward networks in accordance with Bayesian probability theory. An optimal network is determined by Bayesian model comparison, verifying the applicability of this approach. Additionally, the presented prior affords node pruning.
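The synopsis describes regularizing a feed-forward network through a prior on its weights, with the posterior combining a data-misfit (likelihood) term and a prior term. The paper's invariance-derived prior is not reproduced here; as a minimal sketch of the general Bayesian setup it refers to, the following uses a simple Gaussian weight prior (equivalent to classic weight decay), where the network shape, data, and hyperparameters `alpha` and `beta` are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch only: a Gaussian weight prior (weight decay), not
# the invariance prior derived in the paper. The negative log-posterior
# of a feed-forward net then splits into a data-misfit term and a
# regularization term (alpha/2) * ||w||^2.

rng = np.random.default_rng(0)

def forward(x, w1, w2):
    """Two-layer feed-forward network with tanh hidden units."""
    return np.tanh(x @ w1) @ w2

def neg_log_posterior(x, t, w1, w2, beta=1.0, alpha=0.1):
    """-log p(w|D) up to a constant:
    (beta/2) * sum-of-squared-errors + (alpha/2) * ||w||^2."""
    err = forward(x, w1, w2) - t
    misfit = 0.5 * beta * np.sum(err ** 2)
    prior = 0.5 * alpha * (np.sum(w1 ** 2) + np.sum(w2 ** 2))
    return misfit + prior

# Toy data and weights (purely for illustration).
x = rng.normal(size=(20, 3))
t = rng.normal(size=(20, 1))
w1 = rng.normal(size=(3, 5))
w2 = rng.normal(size=(5, 1))

loss = neg_log_posterior(x, t, w1, w2)
```

Setting `alpha = 0` recovers the unregularized likelihood; a positive `alpha` penalizes large weights, which is the flexibility-limiting effect the synopsis attributes (in a principled, invariance-derived form) to the Bayesian prior.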
SIMILAR ARTICLES
In this paper we present a new algorithm, which is orders of magnitude faster than the delta rule, for training feed-forward neural networks. It provides a substantial improvement over the method of Scalero and Tepedelenlioglu (IEEE Trans. Signal Process. 40(1) (1992)) in both training time and nume
Artificial intelligence techniques involving neural networks have become vital modeling tools where model dynamics are difficult to track with conventional techniques. The paper makes use of feed-forward neural networks (FFNN) to model the charged multiplicity distribution of K-P interactions at high