
Hebbian learning in linear–nonlinear networks with tuning curves leads to near-optimal, multi-alternative decision making

Authors: Tyler McMillen; Patrick Simen; Sam Behseta


Publisher: Elsevier Science
Year: 2011
Language: English
File size: 672 KB
Volume: 24
Category: Article
ISSN: 0893-6080


✦ Synopsis


Optimal performance and physically plausible mechanisms for achieving it have been completely characterized for a general class of two-alternative, free-response decision-making tasks, and data suggest that humans can implement the optimal procedure. The situation is more complicated when the number of alternatives is greater than two and subjects are free to respond at any time, partly because there is no generally applicable statistical test for deciding optimally in such cases. However, here too, analytical approximations to optimality that are physically and psychologically plausible have been analyzed. These analyses leave open questions that have begun to be addressed: (1) How are near-optimal model parameterizations learned from experience? (2) What if a continuum of decision alternatives exists? (3) How can neurons' broad tuning curves be incorporated into an optimal-performance theory? We present a possible answer to all of these questions in the form of an extremely simple, reward-modulated Hebbian learning rule by which a neural network learns to approximate the multi-hypothesis sequential probability ratio test.
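To make the two ingredients of the abstract concrete, the sketch below illustrates (a) the multi-hypothesis sequential probability ratio test (MSPRT) for Gaussian observations and (b) a generic reward-modulated Hebbian weight update of the form Δw ∝ reward × pre × post. This is a minimal illustration of the standard definitions, not the paper's model: the Gaussian likelihoods, the threshold value, the learning rate `eta`, and the function names are all assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def msprt(means, true_idx, threshold=0.99, sigma=1.0, max_steps=10_000):
    """Multi-hypothesis SPRT for Gaussian observations (illustrative).

    Each step draws one sample under the true hypothesis, adds its
    log-likelihood under every hypothesised mean, and stops as soon as
    the posterior probability of the leading hypothesis exceeds
    `threshold` (uniform prior assumed). Returns (choice, decision time).
    """
    means = np.asarray(means, dtype=float)
    logL = np.zeros(len(means))
    for t in range(1, max_steps + 1):
        x = rng.normal(means[true_idx], sigma)
        # Gaussian log-likelihood up to a hypothesis-independent constant.
        logL += -((x - means) ** 2) / (2.0 * sigma**2)
        # Posterior via a numerically stable softmax of the log-likelihoods.
        post = np.exp(logL - logL.max())
        post /= post.sum()
        if post.max() >= threshold:
            return int(post.argmax()), t
    return int(post.argmax()), max_steps

def hebbian_update(w, pre, post, reward, eta=0.01):
    """Generic reward-modulated Hebbian rule (not the paper's equations):
    each weight grows in proportion to reward x presynaptic x postsynaptic
    activity."""
    return w + eta * reward * np.outer(post, pre)

# One simulated trial with three alternatives; hypothesis 2 is true.
choice, rt = msprt(means=[0.0, 0.5, 1.0], true_idx=2)
```

With a 0.99 threshold the test trades longer decision times for high accuracy; lowering the threshold speeds decisions at the cost of more errors, which is the speed-accuracy trade-off such models are built around.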