Neural Networks and Learning Machines
✍ Author: Haykin, Simon
- Publisher
- Pearson Education
- Year
- 2008; 2009
- Language
- English
- Pages
- 937
- Edition
- 3rd ed.
- Category
- Library
No payment or registration required. For personal study only.
✦ Synopsis
Fluid and authoritative, this well-organized book represents the first comprehensive treatment of neural networks and learning machines from an engineering perspective. It provides extensive, state-of-the-art coverage that exposes readers to the myriad facets of neural networks and helps them appreciate the technology's origins, capabilities, and potential applications.

Key topics: The book examines all the important aspects of this emerging technology, covering the learning process, back propagation, radial basis functions, recurrent networks, self-organizing systems, modular networks, temporal processing, neurodynamics, and VLSI implementation. Computer experiments are integrated throughout to demonstrate how neural networks are designed and how they perform in practice. Chapter objectives, problems, worked examples, a bibliography, photographs, illustrations, and a thorough glossary reinforce concepts throughout. New chapters delve into such areas as support vector machines; reinforcement learning and neurodynamic programming; Rosenblatt's perceptron; the least-mean-square algorithm; regularization theory; kernel methods and radial-basis function (RBF) networks; and Bayesian filtering for state estimation of dynamic systems. An entire chapter of case studies illustrates the real-life, practical applications of neural networks, and a highly detailed bibliography is included for easy reference.

Market: For professional engineers and research scientists. MATLAB code for the computer experiments in the text is available for download at http://www.pearsonhighered.com/haykin/
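As a small taste of the material, the sketch below implements Rosenblatt's perceptron error-correction rule (the subject of Chapter 1) on a toy linearly separable problem. It is an illustrative sketch, not code from the book; the function names, the learning-rate value, and the logical-AND example are all invented for this illustration.

```python
# Hedged sketch of Rosenblatt's perceptron learning rule:
#   w <- w + eta * (d - y) * x,  b <- b + eta * (d - y)
# where d is the desired +/-1 label and y is the current output.

def train_perceptron(samples, labels, eta=0.1, epochs=50):
    """Learn weights w and bias b by error correction on +/-1 labels."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, d in zip(samples, labels):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
            if y != d:  # update only on a misclassification
                w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
                b += eta * (d - y)
    return w, b

def predict(w, b, x):
    """Threshold the induced local field to a +/-1 decision."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Logical AND with +/-1 labels: only (1, 1) is the positive class.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
d = [-1, -1, -1, 1]
w, b = train_perceptron(X, d)
print([predict(w, b, x) for x in X])  # -> [-1, -1, -1, 1]
```

Because the AND classes are linearly separable, the perceptron convergence theorem (Section 1.3 of the book) guarantees that this error-correction procedure terminates after a finite number of updates.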
✦ Table of Contents
Cover......Page 1
Contents......Page 6
Preface......Page 12
Introduction......Page 31
1. What is a Neural Network?......Page 32
2. The Human Brain......Page 37
3. Models of a Neuron......Page 41
4. Neural Networks Viewed As Directed Graphs......Page 46
5. Feedback......Page 49
6. Network Architectures......Page 52
7. Knowledge Representation......Page 55
8. Learning Processes......Page 65
9. Learning Tasks......Page 69
10. Concluding Remarks......Page 76
Notes and References......Page 77
Chapter 1 Rosenblatt's Perceptron......Page 77
1.1 Introduction......Page 78
1.2. Perceptron......Page 79
1.3. The Perceptron Convergence Theorem......Page 81
1.4. Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment......Page 86
1.5. Computer Experiment: Pattern Classification......Page 91
1.6. The Batch Perceptron Algorithm......Page 93
1.7. Summary and Discussion......Page 96
Problems......Page 97
Chapter 2 Model Building through Regression......Page 98
2.1 Introduction......Page 99
2.2 Linear Regression Model: Preliminary Considerations......Page 100
2.3 Maximum a Posteriori Estimation of the Parameter Vector......Page 102
2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation......Page 107
2.5 Computer Experiment: Pattern Classification......Page 108
2.6 The Minimum-Description-Length Principle......Page 110
2.7 Finite Sample-Size Considerations......Page 113
2.8 The Instrumental-Variables Method......Page 117
2.9 Summary and Discussion......Page 119
Problems......Page 120
Chapter 3 The Least-Mean-Square Algorithm......Page 121
3.1 Introduction......Page 122
3.2 Filtering Structure of the LMS Algorithm......Page 123
3.3 Unconstrained Optimization: a Review......Page 125
3.4 The Wiener Filter......Page 131
3.5 The Least-Mean-Square Algorithm......Page 133
3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter......Page 135
3.7 The Langevin Equation: Characterization of Brownian Motion......Page 137
3.8 Kushner's Direct-Averaging Method......Page 138
3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter......Page 139
3.10 Computer Experiment I: Linear Prediction......Page 141
3.11 Computer Experiment II: Pattern Classification......Page 143
3.12 Virtues and Limitations of the LMS Algorithm......Page 144
3.13 Learning-Rate Annealing Schedules......Page 146
3.14 Summary and Discussion......Page 148
Notes and References......Page 149
Problems......Page 150
Chapter 4 Multilayer Perceptrons......Page 153
4.1 Introduction......Page 154
4.2 Some Preliminaries......Page 155
4.3 Batch Learning and On-Line Learning......Page 157
4.4 The Back-Propagation Algorithm......Page 160
4.5 XOR Problem......Page 172
4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better......Page 175
4.7 Computer Experiment: Pattern Classification......Page 181
4.8 Back Propagation and Differentiation......Page 184
4.9 The Hessian and Its Role in On-Line Learning......Page 186
4.10 Optimal Annealing and Adaptive Control of the Learning Rate......Page 188
4.11 Generalization......Page 195
4.12 Approximations of Functions......Page 197
4.13 Cross-Validation......Page 202
4.14 Complexity Regularization and Network Pruning......Page 206
4.15 Virtues and Limitations of Back-Propagation Learning......Page 211
4.16 Supervised Learning Viewed as an Optimization Problem......Page 217
4.17 Convolutional Networks......Page 232
4.18 Nonlinear Filtering......Page 234
4.19 Small-Scale Versus Large-Scale Learning Problems......Page 240
4.20 Summary and Discussion......Page 248
Notes and References......Page 250
Problems......Page 252
Chapter 5 Kernel Methods and Radial-Basis Function Networks......Page 260
5.1 Introduction......Page 261
5.2 Cover's Theorem on the Separability of Patterns......Page 262
5.3 The Interpolation Problem......Page 267
5.4 Radial-Basis-Function Networks......Page 270
5.5 K-Means Clustering......Page 273
5.6 Recursive Least-Squares Estimation of the Weight Vector......Page 276
5.7 Hybrid Learning Procedure for RBF Networks......Page 280
5.8 Computer Experiment: Pattern Classification......Page 281
5.9 Interpretations of the Gaussian Hidden Units......Page 283
5.10 Kernel Regression and Its Relation to RBF Networks......Page 286
5.11 Summary and Discussion......Page 290
Notes and References......Page 292
Problems......Page 294
Chapter 6 Support Vector Machines......Page 298
6.1 Introduction......Page 299
6.2 Optimal Hyperplane for Linearly Separable Patterns......Page 300
6.3 Optimal Hyperplane for Nonseparable Patterns......Page 307
6.4 The Support Vector Machine Viewed as a Kernel Machine......Page 312
6.5 Design of Support Vector Machines......Page 315
6.6 XOR Problem......Page 317
6.8 Regression: Robustness Considerations......Page 320
6.9 Optimal Solution of the Linear Regression Problem......Page 324
6.10 The Representer Theorem and Related Issues......Page 327
6.11 Summary and Discussion......Page 333
Notes and References......Page 335
Problems......Page 338
Chapter 7 Regularization Theory......Page 343
7.1 Introduction......Page 344
7.2 Hadamard's Conditions for Well-Posedness......Page 345
7.3 Tikhonov's Regularization Theory......Page 346
7.4 Regularization Networks......Page 357
7.5 Generalized Radial-Basis-Function Networks......Page 358
7.6 The Regularized Least-Squares Estimator: Revisited......Page 362
7.7 Additional Notes of Interest on Regularization......Page 366
7.8 Estimation of the Regularization Parameter......Page 367
7.9 Semisupervised Learning......Page 373
7.10 Manifold Regularization: Preliminary Considerations......Page 374
7.11 Differentiable Manifolds......Page 376
7.12 Generalized Regularization Theory......Page 379
7.13 Spectral Graph Theory......Page 381
7.14 Generalized Representer Theorem......Page 383
7.15 Laplacian Regularized Least-Squares Algorithm......Page 385
7.16 Experiments on Pattern Classification Using Semisupervised Learning......Page 387
7.17 Summary and Discussion......Page 390
Notes and References......Page 392
Problems......Page 394
Chapter 8 Principal-Components Analysis......Page 397
8.1 Introduction......Page 398
8.2 Principles of Self-Organization......Page 399
8.3 Self-Organized Feature Analysis......Page 403
8.4 Principal-Components Analysis: Perturbation Theory......Page 404
8.5 Hebbian-Based Maximum Eigenfilter......Page 414
8.6 Hebbian-Based Principal-Components Analysis......Page 423
8.7 Case Study: Image Coding......Page 429
8.8 Kernel Principal-Components Analysis......Page 432
8.9 Basic Issues Involved in the Coding of Natural Images......Page 437
8.10 Kernel Hebbian Algorithm......Page 438
8.11 Summary and Discussion......Page 443
Notes and References......Page 446
Problems......Page 449
Chapter 9 Self-Organizing Maps......Page 455
9.1 Introduction......Page 456
9.2 Two Basic Feature-Mapping Models......Page 457
9.3 Self-Organizing Map......Page 459
9.4 Properties of the Feature Map......Page 468
9.5 Computer Experiments I: Disentangling Lattice Dynamics Using SOM......Page 476
9.6 Contextual Maps......Page 478
9.7 Hierarchical Vector Quantization......Page 481
9.8 Kernel Self-Organizing Map......Page 485
9.9 Computer Experiment II: Disentangling Lattice Dynamics Using Kernel SOM......Page 493
9.10 Relationship Between Kernel SOM and Kullback–Leibler Divergence......Page 495
9.11 Summary and Discussion......Page 497
Notes and References......Page 499
Problems......Page 501
Chapter 10 Information-Theoretic Learning Models......Page 506
10.1 Introduction......Page 507
10.2 Entropy......Page 508
10.3 Maximum-Entropy Principle......Page 512
10.4 Mutual Information......Page 515
10.5 Kullback–Leibler Divergence......Page 517
10.6 Copulas......Page 520
10.7 Mutual Information as an Objective Function to be Optimized......Page 524
10.8 Maximum Mutual Information Principle......Page 525
10.9 Infomax and Redundancy Reduction......Page 530
10.10 Spatially Coherent Features......Page 532
10.11 Spatially Incoherent Features......Page 535
10.12 Independent-Components Analysis......Page 539
10.13 Sparse Coding of Natural Images and Comparison with ICA Coding......Page 545
10.14 Natural-Gradient Learning for Independent-Components Analysis......Page 547
10.15 Maximum-Likelihood Estimation for Independent-Components Analysis......Page 557
10.16 Maximum-Entropy Learning for Blind Source Separation......Page 560
10.17 Maximization of Negentropy for Independent-Components Analysis......Page 565
10.18 Coherent Independent-Components Analysis......Page 572
10.19 Rate Distortion Theory and Information Bottleneck......Page 580
10.20 Optimal Manifold Representation of Data......Page 584
10.21 Computer Experiment: Pattern Classification......Page 591
10.22 Summary and Discussion......Page 592
Notes and References......Page 595
Problems......Page 603
Chapter 11 Stochastic Methods Rooted in Statistical Mechanics......Page 610
11.1 Introduction......Page 610
11.2 Statistical Mechanics......Page 611
11.3 Markov Chains......Page 613
11.4 Metropolis Algorithm......Page 622
11.5 Simulated Annealing......Page 625
11.6 Gibbs Sampling......Page 627
11.7 Boltzmann Machine......Page 629
11.8 Logistic Belief Nets......Page 635
11.9 Deep Belief Nets......Page 637
11.10 Deterministic Annealing......Page 641
11.11 Analogy of Deterministic Annealing with Expectation-Maximization Algorithm......Page 647
11.12 Summary and Discussion......Page 648
Notes and References......Page 650
Problems......Page 652
Chapter 12 Dynamic Programming......Page 657
12.1 Introduction......Page 658
12.2 Markov Decision Process......Page 660
12.3 Bellman's Optimality Criterion......Page 662
12.4 Policy Iteration......Page 666
12.5 Value Iteration......Page 668
12.6 Approximate Dynamic Programming: Direct Methods......Page 673
12.7 Temporal-Difference Learning......Page 674
12.8 Q-Learning......Page 679
12.9 Approximate Dynamic Programming: Indirect Methods......Page 683
12.10 Least-Squares Policy Evaluation......Page 686
12.11 Approximate Policy Iteration......Page 691
12.12 Summary and Discussion......Page 694
Notes and References......Page 696
Problems......Page 699
Chapter 13 Neurodynamics......Page 702
13.1 Introduction......Page 703
13.2 Dynamic Systems......Page 705
13.3 Stability of Equilibrium States......Page 709
13.4 Attractors......Page 715
13.5 Neurodynamic Models......Page 717
13.6 Manipulation of Attractors as a Recurrent Network Paradigm......Page 720
13.7 Hopfield Model......Page 721
13.8 The Cohen–Grossberg Theorem......Page 734
13.9 Brain-State-In-A-Box Model......Page 736
13.10 Strange Attractors and Chaos......Page 742
13.11 Dynamic Reconstruction of a Chaotic Process......Page 747
13.12 Summary and Discussion......Page 753
Notes and References......Page 755
Problems......Page 758
Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems......Page 761
14.1 Introduction......Page 762
14.2 State-Space Models......Page 763
14.3 Kalman Filters......Page 767
14.4 The Divergence Phenomenon and Square-Root Filtering......Page 775
14.5 The Extended Kalman Filter......Page 781
14.6 The Bayesian Filter......Page 786
14.7 Cubature Kalman Filter: Building on the Kalman Filter......Page 790
14.8 Particle Filters......Page 796
14.9 Computer Experiment: Comparative Evaluation of Extended Kalman and Particle Filters......Page 806
14.10 Kalman Filtering in Modeling of Brain Functions......Page 808
14.11 Summary and Discussion......Page 811
Notes and References......Page 813
Problems......Page 815
Chapter 15 Dynamically Driven Recurrent Networks......Page 820
15.1 Introduction......Page 821
15.2 Recurrent Network Architectures......Page 822
15.3 Universal Approximation Theorem......Page 828
15.4 Controllability and Observability......Page 830
15.5 Computational Power of Recurrent Networks......Page 835
15.6 Learning Algorithms......Page 837
15.7 Back Propagation Through Time......Page 839
15.8 Real-Time Recurrent Learning......Page 843
15.9 Vanishing Gradients in Recurrent Networks......Page 849
15.10 Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators......Page 853
15.11 Computer Experiment: Dynamic Reconstruction of Mackey–Glass Attractor......Page 860
15.12 Adaptivity Considerations......Page 862
15.13 Case Study: Model Reference Applied to Neurocontrol......Page 864
15.14 Summary and Discussion......Page 866
Notes and References......Page 870
Problems......Page 873
Bibliography......Page 878
Index......Page 919
A......Page 919
B......Page 920
C......Page 921
D......Page 922
E......Page 923
G......Page 924
I......Page 925
K......Page 926
L......Page 927
M......Page 929
N......Page 930
P......Page 931
Q......Page 932
R......Page 933
S......Page 934
U......Page 936
Z......Page 937
✦ Subjects
Science;Computer Science;Artificial Intelligence;Nonfiction;Textbooks;Programming;Engineering;Mathematics
📜 SIMILAR VOLUMES
Would you achieve more if you could envision your success? A neural network is a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs. All of this sounds fancy, but what d
Are you ready to embark on an exhilarating journey into the world of artificial intelligence, deep learning, and computer vision? Look no further! Our carefully curated book bundle, "DEEP LEARNING: COMPUTER VISION, PYTHON MACHINE LEARNING AND NEURAL NETWORKS," offers you a comprehensive roadmap to A