Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks
Authors: Klampfl S., Maass W.
- Language: English
- Pages: 9
- Category: Library
Synopsis
It is open how neurons in the brain are able to learn without supervision to discriminate between spatio-temporal firing patterns of presynaptic neurons. We show that a known unsupervised learning algorithm, Slow Feature Analysis (SFA), is able to acquire the classification capability of Fisher's Linear Discriminant (FLD), a powerful algorithm for supervised learning, if temporally adjacent samples are likely to be from the same class. We also demonstrate that it enables linear readout neurons of cortical microcircuits to learn the detection of repeating firing patterns within a stream of spike trains with the same firing statistics, as well as discrimination of spoken digits, in an unsupervised manner.
Since the presence of supervision in biological learning mechanisms is rare, organisms often have to rely on the ability of these mechanisms to extract statistical regularities from their environment.
Recent neurobiological experiments have suggested that the brain uses some type of slowness
objective to learn the categorization of external objects without a supervisor. Slow Feature Analysis (SFA) could be a possible mechanism for that. We establish a relationship between the unsupervised SFA learning method and a commonly used method for supervised classification learning: Fisher's Linear Discriminant (FLD). More precisely, we show that SFA approximates the classification capability of FLD by replacing the supervisor with the simple heuristics that two temporally adjacent samples in the input time series are likely to be from the same class. Furthermore, we demonstrate in simulations of a cortical microcircuit model that SFA could also be an important ingredient in extracting temporally stable information from trajectories of network states and that it supports the idea of anytime computing, i.e., it provides information about the stimulus identity not only at the end of a trajectory of network states, but already much earlier.
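The slowness objective described above has a compact linear formulation: whiten the input time series, then project onto the directions along which the finite-difference temporal derivative has the least variance. The following NumPy sketch and toy signals are our own illustration of that linear SFA idea, not the paper's spiking-network implementation; the function name and the mixing matrix are arbitrary choices for the demo.

```python
import numpy as np

def linear_sfa(X, n_components=1):
    """Linear Slow Feature Analysis on a time series X of shape (T, d).

    Returns the n_components slowest output signals (unit variance,
    decorrelated) and the projection applied to the centered data.
    """
    Xc = X - X.mean(axis=0)

    # Whiten: rotate and rescale so the data has identity covariance.
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    whiten = evecs / np.sqrt(evals)      # scale each principal axis
    Z = Xc @ whiten

    # Slowness objective: minimize the variance of the temporal
    # derivative, approximated here by finite differences.
    dZ = np.diff(Z, axis=0)
    dcov = np.cov(dZ, rowvar=False)
    dvals, dvecs = np.linalg.eigh(dcov)  # eigenvalues in ascending order

    # The smallest-eigenvalue directions are the slowest features.
    P = dvecs[:, :n_components]
    return Z @ P, whiten @ P

# Toy demo: a slow source mixed with a fast "distractor" source.
t = np.linspace(0, 2 * np.pi, 2000)
slow = np.sin(t)           # slowly varying source
fast = np.sin(29 * t)      # rapidly varying source
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])

y, w = linear_sfa(X)
# The slowest extracted signal closely tracks the slow source
# (the sign of the correlation is arbitrary).
corr = np.corrcoef(y[:, 0], slow)[0, 1]
```

After whitening, both sources have unit variance, but the fast source's derivative variance is far larger, so the minimum-derivative direction recovers the slow one; this is the sense in which temporal adjacency can substitute for class labels when adjacent samples tend to share a class.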
This paper is structured as follows. We start in section 2 with brief recaps of the definitions of SFA and FLD. We discuss the relationship between these methods for unsupervised and supervised learning in section 3, and investigate the application of SFA to trajectories in section 4. In section 5 we report results of computer simulations of several SFA readouts of a cortical microcircuit model. Section 6 concludes with a discussion.
Subjects
Computer Science and Computing Technology; Artificial Intelligence; Neural Networks
SIMILAR VOLUMES
Artificial neural networks are nonlinear mapping systems whose structure is loosely based on principles observed in the nervous systems of humans and animals. The basic idea is that massive systems of simple units linked together in appropriate ways can generate many complex and interesting behaviors…
Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks…
Based on human neurophysiology, it has been shown that the human brain and spinal cord can partly be repaired by movement-based learning. It seems that even to a very limited extent, new nerve cells can be built anew in the human central nervous system. Neural network learning starts with the knowledge…
On-line learning is one of the most commonly used techniques for training neural networks. Though it has been used successfully in many real-world applications, most training methods are based on heuristic observations. The lack of theoretical support damages the credibility as well as the efficiency…