𝔖 Scriptorium
✦   LIBER   ✦


Machine Learning: A Constraint-Based Approach

✍ Scribed by Marco Gori.


Year: 2017
Tongue: English
Leaves: 569
Category: Library


✦ Table of Contents


Cover
Half-Title Page
Machine Learning: A Constraint-Based Approach
Copyright
Dedication
Contents
Preface
Acknowledgments
Reading guidelines
Notes on the Exercises
1 The Big Picture
1.1 Why Do Machines Need to Learn?
1.1.1 Learning Tasks
1.1.2 Symbolic and Subsymbolic Representations of the Environment
1.1.3 Biological and Artificial Neural Networks
1.1.4 Protocols of Learning
1.1.5 Constraint-Based Learning
1.2 Principles and Practice
1.2.1 The Puzzling Nature of Induction
1.2.2 Learning Principles
1.2.3 The Role of Time in Learning Processes
1.2.4 Focus of Attention
1.3 Hands-on Experience
1.3.1 Measuring the Success of Experiments
1.3.2 Handwritten Character Recognition
1.3.3 Setting up a Machine Learning Experiment
1.3.4 Test and Experimental Remarks
1.4 Challenges in Machine Learning
1.4.1 Learning to See
1.4.2 Speech Understanding
1.4.3 Agents Living in Their Own Environment
1.5 Scholia
2 Learning Principles
2.1 Environmental Constraints
2.1.1 Loss and Risk Functions
2.1.2 Ill-Position of Constraint-Induced Risk Functions
2.1.3 Risk Minimization
2.1.4 The Bias-Variance Dilemma
2.2 Statistical Learning
2.2.1 Maximum Likelihood Estimation
2.2.2 Bayesian Inference
2.2.3 Bayesian Learning
2.2.4 Graphical Models
2.2.5 Frequentist and Bayesian Approach
2.3 Information-Based Learning
2.3.1 A Motivating Example
2.3.2 Principle of Maximum Entropy
2.3.3 Maximum Mutual Information
2.4 Learning Under the Parsimony Principle
2.4.1 The Parsimony Principle
2.4.2 Minimum Description Length
2.4.3 MDL and Regularization
2.4.4 Statistical Interpretation of Regularization
2.5 Scholia
3 Linear Threshold Machines
3.1 Linear Machines
3.1.1 Normal Equations
3.1.2 Undetermined Problems and Pseudoinversion
3.1.3 Ridge Regression
3.1.4 Primal and Dual Representations
3.2 Linear Machines With Threshold Units
3.2.1 Predicate-Order and Representational Issues
3.2.2 Optimality for Linearly-Separable Examples
3.2.2.1 Sigmoidal Unit and Quadratic Loss
3.2.2.2 Linear Unit and Hinge Function
3.2.3 Failing to Separate
3.3 Statistical View
3.3.1 Bayesian Decision and Linear Discrimination
3.3.2 Logistic Regression
3.3.3 The Parsimony Principle Meets the Bayesian Decision
3.3.4 LMS in the Statistical Framework
3.4 Algorithmic Issues
3.4.1 Gradient Descent
3.4.2 Stochastic Gradient Descent
3.4.3 The Perceptron Algorithm
3.4.4 Complexity Issues
3.5 Scholia
4 Kernel Machines
4.1 Feature Space
4.1.1 Polynomial Preprocessing
4.1.2 Boolean Enrichment
4.1.3 Invariant Feature Maps
4.1.4 Linear-Separability in High-Dimensional Spaces
4.2 Maximum Margin Problem
4.2.1 Classification Under Linear-Separability
4.2.2 Dealing With Soft-Constraints
4.2.3 Regression
4.3 Kernel Functions
4.3.1 Similarity and Kernel Trick
4.3.2 Characterization of Kernels
4.3.3 The Reproducing Kernel Map
4.3.4 Types of Kernels
4.4 Regularization
4.4.1 Regularized Risks
4.4.2 Regularization in RKHS
4.4.3 Minimization of Regularized Risks
4.4.4 Regularization Operators
4.5 Scholia
5 Deep Architectures
5.1 Architectural Issues
5.1.1 Digraphs and Feedforward Networks
5.1.2 Deep Paths
5.1.2.1 Heaviside Function
5.1.2.2 Rectifier
5.1.2.3 Polynomial Functions
5.1.2.4 Squash Functions
5.1.2.5 Exponential Functions
5.1.3 From Deep to Relaxation-Based Architectures
5.1.4 Classifiers, Regressors, and Auto-Encoders
5.2 Realization of Boolean Functions
5.2.1 Canonical Realizations by and-or Gates
5.2.2 Universal nand Realization
5.2.3 Shallow vs Deep Realizations
5.2.4 LTU-Based Realizations and Complexity Issues
5.3 Realization of Real-Valued Functions
5.3.1 Computational Geometry-Based Realizations
5.3.2 Universal Approximation
5.3.3 Solution Space and Separation Surfaces
5.3.4 Deep Networks and Representational Issues
5.4 Convolutional Networks
5.4.1 Kernels, Convolutions, and Receptive Fields
5.4.2 Incorporating Invariance
5.4.3 Deep Convolutional Networks
5.5 Learning in Feedforward Networks
5.5.1 Supervised Learning
5.5.2 Backpropagation
5.5.3 Symbolic and Automatic Differentiation
5.5.4 Regularization Issues
5.6 Complexity Issues
5.6.1 On the Problem of Local Minima
5.6.2 Facing Saturation
5.6.3 Complexity and Numerical Issues
5.7 Scholia
6 Learning and Reasoning With Constraints
6.1 Constraint Machines
6.1.1 Walking Through Learning and Inference
6.1.2 A Unified View of Constrained Environments
6.1.3 Functional Representation of Learning Tasks
6.1.4 Reasoning With Constraints
6.2 Logic Constraints in the Environment
6.2.1 Formal Logic and Complexity of Reasoning
6.2.2 Environments With Symbols and Subsymbols
6.2.3 T-Norms
6.2.4 Łukasiewicz Propositional Logic
6.3 Diffusion Machines
6.3.1 Data Models
6.3.2 Diffusion in Spatiotemporal Environments
6.3.3 Recurrent Neural Networks
6.4 Algorithmic Issues
6.4.1 Pointwise Content-Based Constraints
6.4.2 Propositional Constraints in the Input Space
6.4.3 Supervised Learning With Linear Constraints
6.4.4 Learning Under Diffusion Constraints
6.5 Life-Long Learning Agents
6.5.1 Cognitive Action and Temporal Manifolds
6.5.2 Energy Balance
6.5.3 Focus of Attention, Teaching, and Active Learning
6.5.4 Developmental Learning
6.6 Scholia
7 Epilogue
8 Answers to Exercises
Section 1.1
Section 1.2
Section 1.3
Section 2.1
Section 2.2
Section 3.1
Section 3.2
Section 3.3
Section 3.4
Section 4.1
Section 4.2
Section 4.3
Section 4.4
Section 5.1
Section 5.2
Section 5.3
Section 5.4
Section 5.5
Section 5.7
Section 6.1
Section 6.2
Section 6.3
Section 6.4
A. Constrained Optimization in Finite Dimensions
B. Regularization Operators
C. Calculus of Variations
C.1 Functionals and Variations
C.2 Basic Notion on Variations
C.3 Euler-Lagrange Equations
C.4 Variational Problems With Subsidiary Conditions
D. Index to Notation
Bibliography
Index
Back Cover


📜 SIMILAR VOLUMES


Machine Learning: A Constraint-Based Approach
✍ Marco Gori Ph.D. 📂 Library 📅 2017 🏛 Morgan Kaufmann 🌐 English

Machine Learning: A Constraint-Based Approach provides readers with a refreshing look at the basic models and algorithms of machine learning, with an emphasis on current topics of interest that include neural networks and kernel machines. The book presents the information in a tru…

Learning Teleneurology Basics: A Case-Ba…
✍ Swathi Beladakere Ramaswamy (editor), Sachin M. Bhagavan (editor), Raghav Govind… 📂 Library 📅 2021 🏛 Springer 🌐 English

This book focuses on the basics of teleneurology and provides an outline of curriculum and practice with the help of clinical vignettes. It fills the gap for a text that reflects the rapidly evolving nature of the teleneurology field, with specific attention paid to examining how this can b…

Learning Strabismus Surgery: A Case-Based Approach
✍ Dean M. Cestari, David G. Hunter 📂 Library 📅 2012 🏛 LWW 🌐 English

Ideal for both the student seeking a firmer understanding of strabismus surgery and the experienced surgeon looking to improve clinical decision-making, this practical resource uses a case-based approach to help readers conceptualize, plan, and perform complex strabismus procedures at every dif…

Machine Vision Inspection Systems, Machine Learning-Based Approaches
✍ Muthukumaran Malarvel; Soumya Ranjan Nayak; Prasant Kumar Pattnaik; Surya Naraya… 📂 Library 📅 2021 🏛 John Wiley & Sons 🌐 English

Machine Vision Inspection Systems (MVIS) is a multidisciplinary research field that emphasizes image processing, machine vision, and pattern recognition for industrial applications. Inspection techniques are generally used in the destructive and non-destructive evaluation industry. Now a day's…