๐”– Bobbio Scriptorium
โœฆ   LIBER   โœฆ

Orthogonal considerations in the design of neural networks for function approximation

โœ Scribed by B. Francois


Publisher
Elsevier Science
Year
1996
Tongue
English
Weight
604 KB
Volume
41
Category
Article
ISSN
0378-4754

No coin nor oath required. For personal study only.

✦ Synopsis


Two problems occur in the design of feedforward neural networks: the choice of the optimal architecture and the initialization. Generally, the input and output data of a system (or a function) are measured and recorded. Experimenters then wish to design a neural network that maps the measured inputs exactly to these output values.

By formulating this as a continuous approximation problem, this paper shows that the use of orthogonal functions is a partial optimization in the choice of hidden functions.

Parameter initialization is obtained by using the knowledge of input and output data in the calculation of a discrete approximation. The hidden weights are found by constructing orthogonal directions on which the input values are represented. The pseudo-inverse is used to determine output weights such that the Euclidean distances between neural responses and output values are minimised.
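The output-weight step described above can be sketched in a few lines. This is a minimal illustration, not the paper's construction: it assumes a one-dimensional input, uses Chebyshev polynomials as a stand-in orthogonal hidden-function family, and takes f(x) = sin(πx) as hypothetical recorded data; the pseudo-inverse then gives the least-squares output weights.

```python
import numpy as np

# Hypothetical recorded input/output data (the paper treats generic
# measured data; sin(pi x) on [-1, 1] is only an illustrative choice).
X = np.linspace(-1.0, 1.0, 50)
y = np.sin(np.pi * X)

# Hidden responses: the first 8 Chebyshev polynomials T_0..T_7 evaluated
# at the inputs -- a classical orthogonal family standing in for the
# paper's choice of hidden functions.  Shape (50, 8).
H = np.polynomial.chebyshev.chebvander(X, 7)

# Output weights via the pseudo-inverse: the least-squares solution
# minimising the Euclidean distance between neural responses H @ v
# and the recorded outputs y.
v = np.linalg.pinv(H) @ y

y_hat = H @ v
print(float(np.max(np.abs(y_hat - y))))  # worst-case residual
```

Because the hidden responses are well-conditioned when the hidden functions are orthogonal, the pseudo-inverse solve is numerically stable, which is one practical payoff of the orthogonality argument.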


📜 SIMILAR VOLUMES


Properties and performance of orthogonal
โœ Chieh F. Sher; Ching-Shiow Tseng; Chen-San Chen ๐Ÿ“‚ Article ๐Ÿ“… 2001 ๐Ÿ› John Wiley and Sons ๐ŸŒ English โš– 216 KB

Backpropagation neural network has been applied successfully to solving uncertain problems in many fields. However, unsolved drawbacks still exist such as the problems of local minimum, slow convergence speed, and the determination of initial weights and the number of processing elements. In this pa

Performance comparison between the train
โœ Chen-San Chen; Ching-Shiow Tseng ๐Ÿ“‚ Article ๐Ÿ“… 2004 ๐Ÿ› John Wiley and Sons ๐ŸŒ English โš– 355 KB

The orthogonal neural network is a recently developed neural network based on the properties of orthogonal functions. It can avoid the drawbacks of traditional feedforward neural networks such as initial values of weights, number of processing elements, and slow convergence speed. Nevertheless, it n

A unified approach for neural network-li
โœ Tianping Chen ๐Ÿ“‚ Article ๐Ÿ“… 1998 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 56 KB

In this paper, we give a universal approach to approximation of non-linear functionals and so called myopic input-output maps by neural network-like architectures. Strong theorems on equi-uniform approximation to functionals in abstract spaces are given. As applications, theorems on identification o