This paper presents a message-passing architecture for simulating multilayer neural networks, adjusting the weights for each training pair consisting of an input vector and a desired output vector. First, the multilayer neural network is defined, and the difficulties arising from parallel implementation are cla…
✦ LIBER ✦
A Lamarckian Hybrid of Differential Evolution and Conjugate Gradients for Neural Network Training
Scribed by Krzysztof Bandurski; Wojciech Kwedlo
- Publisher
- Springer US
- Year
- 2010
- Tongue
- English
- Weight
- 407 KB
- Volume
- 32
- Category
- Article
- ISSN
- 1370-4621
No coin nor oath required. For personal study only.
SIMILAR VOLUMES
A parallel algorithm for gradient traini…
Zdeněk Hanzálek · Article · 1998 · Elsevier Science · English · 742 KB
Differential Evolution and Levenberg Mar…
Bidyadhar Subudhi; Debashisha Jena · Article · 2008 · Springer US · English · 891 KB
Predictability and forecasting automotiv…
M. Reza Peyghami; R. Khanduzi · Article · 2011 · Springer-Verlag · English · 473 KB
Training backpropagation and CMAC neural…
Santosh Ananthraman; Devendra P. Garg · Article · 1993 · Elsevier Science · English · 674 KB
Design and training of a neural network…
Shandar Ahmad; M. Michael Gromiha · Article · 2003 · John Wiley and Sons · English · 69 KB
## Abstract A feed-forward neural network has been developed to predict the solvent accessibility/accessible surface area (ASA) of proteins using improved design and training methods. Several network issues, ranging from the coding of ASA states to the problem of local minima in the learning curve, have…
B-spline neural network design using imp…
Leandro dos Santos Coelho; Fabio A. Guerra · Article · 2008 · Elsevier Science · English · 653 KB