Feed forward neural networks modeling for K–P interactions
✍ By M.Y. El-Bakry
- Publisher
- Elsevier Science
- Year
- 2003
- Language
- English
- File size
- 102 KB
- Volume
- 18
- Category
- Article
- ISSN
- 0960-0779
No payment or registration required. For personal study only.
✦ Synopsis
Artificial intelligence techniques based on neural networks have become vital modeling tools where the model dynamics are difficult to track with conventional techniques. The paper makes use of feed-forward neural networks (FFNN) to model the charged-multiplicity distribution of K-p interactions at high energies. The FFNN was trained on experimental data for the multiplicity distributions at different lab momenta. Results of the FFNN model were compared with those generated by the parton two-fireball model and with the experimental data. The proposed FFNN model showed a good fit to the experimental data. The network's performance was also tested outside the trained region and was found to be in good agreement with the experimental data.
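The kind of model the synopsis describes can be sketched in a few lines: a small feed-forward network taking (lab momentum, multiplicity) as input and regressing the probability of that multiplicity. This is a minimal illustration only; the dataset below is a synthetic Poisson-shaped stand-in, and the feature scaling, layer size, and learning rate are all assumptions, not the paper's actual data or architecture.

```python
# Hedged sketch of an FFNN for a multiplicity distribution P(n | p_lab).
# The "experimental" values here are synthetic Poisson probabilities,
# NOT the K-p data used in the paper; momenta and shapes are illustrative.
import math
import numpy as np

rng = np.random.default_rng(0)

# Build a toy dataset: P(n) at two assumed lab momenta (GeV/c).
momenta = [32.0, 70.0]
X_list, y_list = [], []
for p in momenta:
    mean = 0.08 * p                     # toy mean multiplicity vs momentum
    for n in range(16):
        prob = math.exp(-mean) * mean**n / math.factorial(n)
        X_list.append([p / 100.0, n / 16.0])   # simple feature scaling
        y_list.append(prob)
X = np.array(X_list)
y = np.array(y_list).reshape(-1, 1)

# One hidden layer of tanh units, linear output, trained by plain
# gradient descent on mean-squared error.
H = 8
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.5

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

_, pred0 = forward(X)
initial_mse = mse(pred0, y)

for step in range(2000):
    h, pred = forward(X)
    err = (pred - y) / len(X)           # d(MSE)/d(pred), constant factor absorbed in lr
    gW2 = h.T @ err; gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)    # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, fitted = forward(X)
final_mse = mse(fitted, y)
print("MSE before/after training:", initial_mse, final_mse)
```

Fitting the smooth toy distribution drives the error down by orders of magnitude; the paper's point is analogous, that such a network, once trained on measured distributions, interpolates well even at momenta it was not trained on.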
📜 SIMILAR VOLUMES
This paper presents the results of a blind test of the ability of a feed-forward artificial neural network to provide out-of-sample forecasting of rainfall run-off using real data. The results obtained are comparable with the results obtained using the best methods currently available. The focus of the paper…
In this paper we present a new algorithm, which is orders of magnitude faster than the delta rule, for training feed-forward neural networks. It provides a substantial improvement over the method of Scalero and Tepedelenlioglu (IEEE Trans. Signal Process. 40(1) (1992)) in both training time and…