Advances in neural information processing systems 11: Proceedings of the 1998 conference. Edited by Michael S. Kearns, Sara A. Solla and David A. Cohn. MIT Press, Cambridge, MA (1999). 1090 pages. $65.00
- Publisher: Elsevier Science
- Year: 1999
- Language: English
- File size: 363 KB
- Volume: 38
- Category: Article
- ISSN: 0898-1221
Synopsis
Contents: Preface. NIPS committees. Reviewers.

I. Cognitive science. Evidence for a forward dynamics model in human adaptive motor control (Nikhil Bhushan and Reza Shadmehr). Perceiving without learning: From spirals to inside/outside relations (Ke Chen and DeLiang L. Wang). A model for associative multiplication (G. Björn Christianson and Suzanna Becker). Facial memory is kernel density estimation (almost) (Matthew N. Dailey, Garrison W. Cottrell and Thomas A. Busey). Multiple paired forward-inverse models for human motor learning and control (Masahiko Haruno, Daniel M. Wolpert and Mitsuo Kawato). Utilizing time: Asynchronous binding (Bradley C. Love). Mechanisms of generalization in perceptual learning (Zili Liu and Daphna Weinshall). A principle for unsupervised hierarchical decomposition of visual scenes (Michael C. Mozer). Bayesian modeling of human concept learning (Joshua B. Tenenbaum).

II. Neuroscience. Temporally asymmetric Hebbian learning, spike timing and neural response variability (L.F. Abbott and Sen Song). Contrast adaptation in simple cells by changing the transmitter release probability (Péter Adorján and Klaus Obermayer). Where does the population vector of motor cortical cells point during reaching movements? (Pierre Baraduc, Emmanuel Guigon and Yves Burnod). Recurrent cortical amplification produces complex cell responses (Frances S. Chance, Sacha B. Nelson and L.F. Abbott). Neuronal regulation implements efficient synaptic pruning (Gal Chechik, Isaac Meilijson and Eytan Ruppin). Divisive normalization, line attractor networks and ideal observers (Sophie Deneve, Alexandre Pouget and Peter E. Latham). Synergy and redundancy among brain cells of behaving monkeys (Itay Gat and Naftali Tishby). Analyzing and visualizing single-trial event-related potentials (Tzyy-Ping Jung, Scott Makeig, Marissa Westerfield, Jeanne Townsend, Eric Courchesne and Terrence J. Sejnowski). Spike-based compared to rate-based Hebbian learning (Richard Kempter, Wulfram Gerstner and J. Leo van Hemmen). Signal detection in noisy weakly-active dendrites (Amit Manwani and Christof Koch). The role of lateral cortical competition in ocular dominance development (Christian Piepenbrock and Klaus Obermayer). Multi-electrode spike sorting by clustering transfer functions (Dmitry Rinberg, Hanan Davidowitz and Naftali Tishby). Modeling surround suppression in V1 neurons with a statistically derived normalization model (Eero P. Simoncelli and Odelia Schwartz). Information maximization in single neurons (Martin Stemmler and Christof Koch). The effect of correlations on the Fisher information of population codes (Hyoungsoo Yoon and Haim Sompolinsky). Distributional population codes and multiple motion models (Richard S. Zemel and Peter Dayan).

III. Theory. Tractable variational structures for approximating graphical models (David Barber and Wim Wiegerinck). Almost linear VC dimension bounds for piecewise polynomial networks (Peter L. Bartlett, Vitaly Maiorov and Ron Meir). Dynamics of supervised learning with restricted training sets (A.C.C. Coolen and David Saad). Dynamically adapting kernels in support vector machines (Nello Cristianini, Colin Campbell and John Shawe-Taylor). Phase diagram and storage capacity of sequence-storing neural networks (A. Düring, A.C.C. Coolen and D. Sherrington).
Finite-dimensional approximation of Gaussian processes (Giancarlo Ferrari-Trecate, Christopher K.I. Williams and Manfred Opper). Linear hinge loss and average margin (Claudio Gentile and Manfred K. Warmuth). Unsupervised and supervised clustering: The mutual information between parameters and observations (Didier Herschkowitz and Jean-Pierre Nadal). Convergence of the wake-sleep algorithm (Shiro Ikeda, Shun-ichi Amari and Hiroyuki Nakahara). The belief in TAP (Yoshiyuki Kabashima and David Saad). Optimizing classifiers for imbalanced training sets (Grigoris Karakoulas and John Shawe-Taylor). Inference in multilayer networks via large deviation bounds (Michael Kearns and Lawrence Saul). Stationarity and stability of autoregressive neural network processes (Friedrich Leisch, Adrian Trapletti and Kurt Hornik). Computational differences between asymmetrical and symmetrical networks (Zhaoping Li and Peter Dayan). A precise characterization of the class of languages recognized by neural nets under Gaussian and other common noise distributions (Wolfgang Maass and Eduardo D. Sontag). Direct optimization of margins improves generalization in combined classifiers (Llew Mason, Peter L. Bartlett and Jonathan Baxter). On the optimality of incremental neural network algorithms (Ron Meir and Vitaly Maiorov). General bounds on Bayes errors for regression with Gaussian processes (Manfred Opper and Ole Winther). On-line learning with restricted training sets: Exact solution as benchmark for general theories (H.C. Rae, Peter Sollich and A.C.C. Coolen). Tight bounds for the VC-dimension of piecewise polynomial networks (Akito Sakurai). Shrinking the tube: A new support vector regression algorithm (Bernhard Schölkopf, Peter L. Bartlett, Alex J. Smola and Robert Williamson). Discontinuous recall transitions induced by competition between short- and long-range interactions in recurrent networks (N.S. Skantzos, C.F. Beckmann and A.C.C. Coolen). Learning curves for Gaussian processes (Peter Sollich). A theory of mean field approximation (Toshiyuki Tanaka).
IV. Algorithms and architecture. Learning a hierarchical belief network of independent factor analyzers (Hagai Attias). Semi-supervised support vector machines (Kristin Bennett and Ayhan Demiriz). Lazy learning meets the recursive least squares algorithm (Mauro Birattari, Gianluca Bontempi and Hugues Bersini). Bayesian PCA (Christopher M. Bishop). Learning multi-class dynamics (Andrew Blake, Ben North and Michael Isard). Approximate learning of dynamic models (Xavier Boyen and Daphne Koller). Fisher scoring and a mixture of modes approach for approximate inference and learning in nonlinear state space models (Thomas Briegel and Volker Tresp). Global optimisation of neural network models via sequential sampling (João F.G. de Freitas, Mahesan Niranjan, Arnaud Doucet and Andrew H. Gee). Efficient Bayesian parameter estimation in large discrete domains (Nir Friedman and Yoram Singer). A randomized algorithm for pairwise clustering (Yoram Gdalyahu, Daphna Weinshall and Michael Werman). Learning nonlinear dynamical systems using an EM algorithm (Zoubin Ghahramani and Sam T. Roweis). Classification on pairwise proximity data (Thore Graepel, Ralf Herbrich, Peter Bollmann-Sdorra and Klaus Obermayer).
SIMILAR VOLUMES
Contents: Preface. Committees. I. Cognitive science. Learning the structure of similarity (J.B. Tenenbaum). A model of spatial representations in parietal cortex explains hemineglect (A. Pouget and T.J. Sejnowski). Human reading and the curse of dimensionality (G.L. Martin). Extracting tree-structured representations of trained networks (M.W. Craven and J.W. Shavlik).