𝔖 Bobbio Scriptorium
✦   LIBER   ✦

An approximation theory approach to learning with regularization

✍ Scribed by Wang, Hong-Yan; Xiao, Quan-Wu; Zhou, Ding-Xuan


Book ID: 118257257
Publisher: Elsevier Science
Year: 2013
Tongue: English
Weight: 265 KB
Volume: 167
Category: Article
ISSN: 0021-9045

No coin nor oath required. For personal study only.


πŸ“œ SIMILAR VOLUMES


A regularization approach to continuous
✍ D. Ormoneit πŸ“‚ Article πŸ“… 1999 πŸ› Elsevier Science 🌐 English βš– 101 KB

We consider the training of neural networks in cases where the nonlinear relationship of interest gradually changes over time. One way to deal with this problem is regularization, in which a variation penalty is added to the usual mean squared error criterion. To learn the regularized network
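The variation-penalty idea sketched in this abstract can be written out as a toy loss function: the ordinary mean squared error plus a term that penalizes how much the parameters change between time steps. The function and all its names (`weights`, `lam`) are an illustrative assumption, not code from the paper.

```python
# Toy sketch: MSE over time steps plus a variation penalty on the
# change of the per-step weight vectors. Illustrative only.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def variation_penalized_loss(weights, data, lam):
    """weights[t] is the parameter vector at time step t;
    data[t] = (x_t, y_t) is the observation at that step."""
    # Ordinary mean squared error over all time steps.
    mse = sum((dot(w, x) - y) ** 2 for w, (x, y) in zip(weights, data)) / len(data)
    # Variation penalty: squared change of the weights between steps.
    penalty = lam * sum(
        sum((a - b) ** 2 for a, b in zip(w_next, w_prev))
        for w_prev, w_next in zip(weights, weights[1:])
    )
    return mse + penalty
```

With a perfect per-step fit, only the penalty remains, so the regularization trades fit against smoothness of the weight trajectory.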

Learning approximately regular languages
✍ Satoshi Kobayashi; Takashi Yokomori πŸ“‚ Article πŸ“… 1997 πŸ› Elsevier Science 🌐 English βš– 490 KB

In this note, we consider the problem of learning approximately regular languages in the limit from positive data using the class of k-reversible languages. The class of k-reversible languages was introduced by Angluin (1982), and proved to be efficiently identifiable in the limit from positive data
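For the simplest case of the class this abstract uses, k = 0, Angluin-style identification from positive data can be sketched as: build a prefix-tree acceptor from the sample, merge all final states, then keep merging states until the automaton and its reverse are both deterministic. This is an illustrative reconstruction of the zero-reversible case, not code from the paper.

```python
# Sketch of zero-reversible (k = 0) inference from positive data.
# Assumes a non-empty sample; illustrative reconstruction only.

def learn_zero_reversible(samples):
    # Prefix-tree acceptor: one state per distinct prefix of the sample.
    state_of = {"": 0}
    trans = []          # (source, symbol, target) triples
    finals = set()
    for word in samples:
        prefix = ""
        for ch in word:
            nxt = prefix + ch
            if nxt not in state_of:
                state_of[nxt] = len(state_of)
                trans.append((state_of[prefix], ch, state_of[nxt]))
            prefix = nxt
        finals.add(state_of[word])

    # Union-find over states.
    parent = list(range(len(state_of)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(b)] = find(a)

    # In a zero-reversible acceptor all final states coincide.
    f = next(iter(finals))
    for g in finals:
        union(f, g)

    # Merge until forward- and backward-determinism both hold.
    changed = True
    while changed:
        changed = False
        fwd, bwd = {}, {}
        for s, a, d in trans:
            s, d = find(s), find(d)
            if (s, a) in fwd and find(fwd[(s, a)]) != d:
                union(fwd[(s, a)], d); changed = True
            else:
                fwd[(s, a)] = d
            if (d, a) in bwd and find(bwd[(d, a)]) != s:
                union(bwd[(d, a)], s); changed = True
            else:
                bwd[(d, a)] = s

    def accepts(word):
        delta = {}
        for s, a, d in trans:
            delta[(find(s), a)] = find(d)
        cur = find(0)
        for ch in word:
            if (cur, ch) not in delta:
                return False
            cur = delta[(cur, ch)]
        return cur in {find(q) for q in finals}
    return accepts
```

From the sample {"", "aa", "aaaa"} the merging collapses the prefix tree to two states, i.e. the acceptor for even-length runs of `a` — the smallest zero-reversible language containing the sample.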

New approach to gridding using regulariz
✍ Daniel Rosenfeld πŸ“‚ Article πŸ“… 2002 πŸ› John Wiley and Sons 🌐 English βš– 448 KB

When sampling under time-varying gradients, data are acquired over a non-equally spaced grid in k-space. The most computationally efficient method of reconstruction is first to interpolate the data onto a Cartesian grid, enabling the subsequent use of the inverse fast Fourier transform (IFFT).
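The conventional gridding pipeline this abstract describes can be sketched in one dimension: deposit the non-Cartesian k-space samples onto a Cartesian grid, then invert the Fourier transform. A nearest-grid-point assignment stands in for the usual convolution kernel and a plain inverse DFT for the IFFT; this illustrates the pipeline only, not the paper's regularized method.

```python
# Toy 1-D gridding sketch: nearest-neighbor deposition onto a Cartesian
# k-space grid, followed by a plain inverse DFT. Illustrative only.
import cmath

def grid_and_reconstruct(samples, n):
    """samples: (k, value) pairs with possibly non-integer k in [0, n)."""
    # "Gridding": deposit each sample on the nearest Cartesian grid point.
    grid = [0j] * n
    for k, value in samples:
        grid[round(k) % n] += value
    # Inverse DFT of the gridded data (the IFFT in the fast version).
    return [
        sum(grid[k] * cmath.exp(2j * cmath.pi * k * x / n) for k in range(n)) / n
        for x in range(n)
    ]
```

As a sanity check, samples that already lie on the grid and are constant across k-space reconstruct to an impulse at the origin, matching the usual DFT pair.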