This paper considers a class of online gradient learning methods for backpropagation (BP) neural networks with a single hidden layer. We assume that in each training cycle, each sample in the training set is supplied in a stochastic order to the network exactly once. It is interesting that these sto
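The training scheme the abstract describes (each sample presented exactly once per cycle, in a fresh random order) can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the dataset, network size, and learning rate are all arbitrary choices for demonstration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid overflow in math.exp for extreme inputs.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

# Tiny illustrative dataset (assumed): learn XOR of two binary inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units (arbitrary)
# Single hidden layer: input->hidden weights (with bias column),
# hidden->output weights (with bias term).
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    xb = list(x) + [1.0]                      # append bias input
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1]
    hb = h + [1.0]                            # append bias unit
    y = sigmoid(sum(w * v for w, v in zip(W2, hb)))
    return xb, hb, y

def epoch_loss():
    return sum((forward(x)[2] - t) ** 2 for x, t in data) / len(data)

eta = 0.5
loss0 = epoch_loss()
for cycle in range(2000):
    # One training cycle: every sample exactly once, in stochastic order.
    for x, t in random.sample(data, len(data)):
        xb, hb, y = forward(x)
        dy = (y - t) * y * (1 - y)            # output-layer delta
        # Hidden deltas computed before W2 is updated (standard BP).
        dh = [dy * W2[j] * hb[j] * (1 - hb[j]) for j in range(H)]
        for j in range(H + 1):
            W2[j] -= eta * dy * hb[j]
        for j in range(H):
            for i in range(3):
                W1[j][i] -= eta * dh[j] * xb[i]
```

The key point matching the abstract is the inner loop: `random.sample(data, len(data))` visits the whole training set once per cycle in a random order, rather than drawing samples with replacement.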
Linearization learning method of BP neural networks
Authors: Zhou Shaoqian; Ding Lixin; Zhang Jian; Tang Xinhua
- Publisher: Wuhan University
- Year: 1997
- Language: English
- File size: 297 KB
- Volume: 2
- Category: Article
- ISSN: 1007-1202
SIMILAR VOLUMES
Convergence analysis of online gradient
Wei Wu; Jian Wang; Mingsong Cheng; Zhengxue Li
Article · 2011 · Elsevier Science · English · 297 KB

A Method of Accelerating Neural Network
Sotir Sotirov
Article · 2005 · Springer US · English · 134 KB

A learning method of immune multi-agent
Takumi Ichimura; Shinichi Oeda; Machi Suka; Katsumi Yoshida
Article · 2004 · Springer-Verlag · English · 953 KB

Anti-Hebbian learning in a non-linear ne
A. Carlson
Article · 1990 · Springer-Verlag · English · 604 KB

A new learning method using prior inform
Baiquan Lu; Kotaro Hirasawa; Junichi Murata
Article · 2000 · Springer Japan · English · 439 KB

A method of BP network learning by expan
Naoki Tanaka; Toshiaki Koreyeda; Takeshi Inoue; Koji Kajitani
Article · 1999 · John Wiley and Sons · English · 300 KB
In backpropagation networks, unlearned regions are left between categories when the number of learning samples is comparatively small. Such unlearned regions are one reason for the degradation of the network's generalization ability. To improve generalization, it is preferable that the boundaries