On the complexity of approximating mappings using feedforward networks
- Author
- Pascal Koiran
- Publisher
- Elsevier Science
- Year
- 1993
- Language
- English
- File size
- 359 KB
- Volume
- 6
- Category
- Article
- ISSN
- 0893-6080
Synopsis
We study the approximation of continuous mappings and dichotomies by one-hidden-layer networks from a computational point of view. Our approach is based on a new approximation method specially designed for constructing "small" networks. We give upper bounds on the size of these networks.
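As a concrete illustration of the setting the synopsis describes (this sketch is not the paper's construction), a one-hidden-layer sigmoid network can approximate a continuous mapping on a compact interval. Here the hidden weights and biases are drawn at random and only the output weights are fit by least squares; the target function, the unit count, and the weight scales are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target: an arbitrary continuous mapping on [0, 1] (illustrative choice).
def f(x):
    return np.sin(2 * np.pi * x)

n = 50                                # number of hidden units
x = np.linspace(0.0, 1.0, 200)        # sample points on the interval
W = rng.normal(scale=10.0, size=n)    # random hidden weights
b = rng.normal(scale=10.0, size=n)    # random hidden biases
H = sigmoid(np.outer(x, W) + b)       # hidden-layer activations, shape (200, n)

# Fit the output layer by least squares on the sampled points.
c, *_ = np.linalg.lstsq(H, f(x), rcond=None)

err = np.max(np.abs(H @ c - f(x)))
print(f"max approximation error on the sample: {err:.4f}")
```

With enough hidden units the fitted network tracks the target closely on the sample; the paper's question is how the required network size scales, which this sketch does not address.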
SIMILAR VOLUMES
Probabilistic inference and maximum a posteriori (MAP) explanation are two important and related problems on Bayesian belief networks. Both problems are known to be NP-hard for both approximation and exact solution. In 1997, Dagum and Luby showed that efficiently approximating probabilistic inferenc
We study the complexity of approximating the VC dimension of a collection of sets, when the sets are encoded succinctly by a small circuit. We show that this problem is: * Σ₃ᵖ-hard to approximate to within a factor 2 − ε for all ε > 0; * approximable in AM to within a factor 2; and * AM-hard to appr
In this paper, we prove that any continuous mapping can be approximately realized by Rumelhart-Hinton-Williams' multilayer neural networks with at least one hidden layer whose output functions are sigmoid functions. The starting point of the proof for the one hidden layer case is an integral formula