𝔖 Scriptorium
✦   LIBER   ✦

📝

Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks

โœ Scribed by Klampfl S., Maass W.


Tongue
English
Leaves
9
Category
Library

⬇  Acquire This Volume

No coin nor oath required. For personal study only.

✦ Synopsis


It is an open question how neurons in the brain are able to learn, without supervision, to discriminate between spatio-temporal firing patterns of presynaptic neurons. We show that a known unsupervised learning algorithm, Slow Feature Analysis (SFA), is able to acquire the classification capability of Fisher's Linear Discriminant (FLD), a powerful algorithm for supervised learning, provided that temporally adjacent samples are likely to come from the same class. We also demonstrate that SFA enables linear readout neurons of cortical microcircuits to learn, in an unsupervised manner, both the detection of repeating firing patterns within a stream of spike trains with the same firing statistics and the discrimination of spoken digits.

Since the presence of supervision in biological learning mechanisms is rare, organisms often have
to rely on the ability of these mechanisms to extract statistical regularities from their environment.
Recent neurobiological experiments have suggested that the brain uses some type of slowness
objective to learn the categorization of external objects without a supervisor. Slow Feature Analysis (SFA) could be one such mechanism. We establish a relationship between the unsupervised SFA learning method and a commonly used method for supervised classification learning: Fisher's Linear Discriminant (FLD). More precisely, we show that SFA approximates the classification capability of FLD by replacing the supervisor with the simple heuristic that two temporally adjacent samples in the input time series are likely to come from the same class. Furthermore, we demonstrate in simulations of a cortical microcircuit model that SFA could also be an important ingredient for extracting temporally stable information from trajectories of network states, and that it supports the idea of anytime computing, i.e., it provides information about stimulus identity not only at the end of a trajectory of network states, but already much earlier.
This paper is structured as follows. We start in section 2 with brief recaps of the definitions of SFA and FLD. We discuss the relationship between these methods for unsupervised and supervised learning in section 3, and investigate the application of SFA to trajectories in section 4. In section 5 we report results of computer simulations of several SFA readouts of a cortical microcircuit model. Section 6 concludes with a discussion.
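As a rough illustration of the heuristic described in the synopsis, consider linear SFA applied to a toy two-class time series in which class membership changes only once. The sketch below is not the authors' code; it is a minimal NumPy implementation of linear SFA (whiten the input, then take the direction along which the temporal derivative has least variance), with all data and names purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time series: two classes presented in temporally adjacent blocks,
# so consecutive samples are almost always from the same class.
n_per_class, dim = 200, 5
class_means = [np.zeros(dim), 3.0 * np.ones(dim)]
x = np.vstack([m + rng.normal(size=(n_per_class, dim)) for m in class_means])

# Linear SFA, step 1: center and whiten the input.
x = x - x.mean(axis=0)
cov = np.cov(x, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
whiten = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
z = x @ whiten

# Step 2: find the unit direction minimizing the variance of the
# temporal derivative (eigenvector of the smallest eigenvalue).
dz = np.diff(z, axis=0)
dcov = np.cov(dz, rowvar=False)
w_eigval, w_eigvec = np.linalg.eigh(dcov)   # ascending eigenvalues
w = w_eigvec[:, 0]                          # slowest direction

# Because class membership changes rarely over time, the slowest
# feature separates the two classes, much like FLD would.
y = z @ w
m0, m1 = y[:n_per_class].mean(), y[n_per_class:].mean()
print(f"class separation along slowest feature: {abs(m1 - m0):.2f}")
```

After whitening, the direction with the least within-class (hence least derivative) variance is the one carrying the most between-class variance, which is why the slowest feature aligns with the discriminant direction here.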

✦ Subjects


Computer Science and Computer Engineering; Artificial Intelligence; Neural Networks


📜 SIMILAR VOLUMES


Neural Smithing: Supervised Learning in
โœ Russell Reed, Robert J MarksII ๐Ÿ“‚ Library ๐Ÿ“… 1999 ๐Ÿ› A Bradford Book ๐ŸŒ English

Artificial neural networks are nonlinear mapping systems whose structure is loosely based on principles observed in the nervous systems of humans and animals. The basic idea is that massive systems of simple units linked together in appropriate ways can generate many complex and interesting behav…

Supervised Learning with Complex-valued
โœ Sundaram Suresh, Narasimhan Sundararajan, Ramasamy Savitha (auth.) ๐Ÿ“‚ Library ๐Ÿ“… 2013 ๐Ÿ› Springer-Verlag Berlin Heidelberg ๐ŸŒ English

Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural netwo…

Neural Network Learning in Humans
โœ Giselher Schalow ๐Ÿ“‚ Library ๐Ÿ“… 2015 ๐Ÿ› Nova Science ๐ŸŒ English

Based on human neurophysiology, it has been shown that the human brain and spinal cord can partly be repaired by movement-based learning. It seems that, even if only to a very limited extent, new nerve cells can be built anew in the human central nervous system. Neural network learning starts with the knowle…

On-Line Learning in Neural Networks
โœ Saad David ๐Ÿ“‚ Library ๐Ÿ“… 1999 ๐ŸŒ English

On-line learning is one of the most commonly used techniques for training neural networks. Though it has been used successfully in many real-world applications, most training methods are based on heuristic observations. The lack of theoretical support damages the credibility as well as the efficienc…