𝔖 Scriptorium
✦   LIBER   ✦


From Motor Learning to Interaction Learning in Robots (Studies in Computational Intelligence, 264)

✍ Scribed by Olivier Sigaud (editor), Jan Peters (editor)


Publisher: Springer
Year: 2010
Tongue: English
Leaves: 534
Category: Library


✦ Synopsis


From an engineering standpoint, the increasing complexity of robotic systems and the increasing demand for more autonomous robots have made learning essential. This book is largely based on the successful workshop β€œFrom motor to interaction learning in robots” held at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Its major aim is to give students interested in the topics described above a chance to get started faster, and to give researchers a helpful compendium.

✦ Table of Contents


Title Page
Preface
Contents
From Motor Learning to Interaction Learning in Robots
Introduction
The Need for Robot Learning Approaches
Motor Learning, Imitation Learning and Interaction Learning
Overview of the Book
Biologically Inspired Models for Learning in Robots
Learning Models and Policies for Motor Control
Imitation and Interaction Learning
Conclusion and Perspectives
References
Part I: Biologically Inspired Models for Motor Learning
Distributed Adaptive Control: A Proposal on the Neuronal Organization of Adaptive Goal Oriented Behavior
Introduction
Formal Description of DAC
Reactive and Adaptive Layer
Contextual Layer
Results
Behavioral Feedback
DAC as an Approximation of an Optimal Bayesian Decision Making System
The Reactive Layer and the Construction of a Synthetic Insect
The β€œTwo-Stage” Theory of Classical Conditioning
General Principles for Perceptual Learning
Discussion
References
Proprioception and Imitation: On the Road to Agent Individuation
Introduction
From Visuo-motor Learning to Low Level Imitation
Robot Response to an Expressive Human
Learning a Path as a Sequence of Sensori-Motor Associations
Place-Movement Associations
The Role of Internal Dynamics for More Complex Behavior Learning
Discussion
References
Adaptive Optimal Feedback Control with Learned Internal Dynamics Models
Introduction
Optimal Feedback Control
Adaptive Optimal Feedback Control
ILQG with Learned Dynamics (ILQG–LD)
Learning the Dynamics
Reducing the Computational Cost
Evaluation
Planar Arm with 2 Torque-Controlled Joints
Anthropomorphic 6 DoF Robot Arm
Antagonistic Planar Arm
Discussion
References
The SURE REACH Model for Motor Learning and Control of a Redundant Arm: From Modeling Human Behavior to Applications in Robotics
Introduction
Theories of Motor Learning, Movement Preparation, and Control
Description of the Model
Neural Network Implementation
Simulation of Reaching Movements
Posture and Trajectory Redundancy
Multimodal Feedback Control
Theoretical, Biological and Psychological Implications
Application in Robotics
Conclusion
References
Intrinsically Motivated Exploration for Developmental and Active Sensorimotor Learning
Intrinsically Motivated Exploration and Learning
The Problem of Exploration in Open-Ended Learning
Intrinsic Motivations
IAC and R-IAC for Intrinsically Motivated Active Learning
Developmental Active Learning
Prediction Machine and Analysis of Error Rate
The Split Machine
Action Selection Machine
Pseudo-code of R-IAC
Software
Remarks
The Prediction Machine: Incremental Regression Algorithms for Learning Forward and Inverse Models
Self-organizing Developmental Trajectories with IAC and Motor Primitives in the Playground Experiment
Motor Primitives
Perceptual Primitives
The Sensorimotor Loop
Results
Experimenting and Comparing R-IAC and IAC with a Simple Simulated Robot
Robotics Configuration
Environment Configuration
Results: Exploration Areas
Results: Active Learning
The Hand-Eye-Clouds Experiment
Robotics Configuration
Results
Conclusion
References
Learning to Exploit Proximal Force Sensing: A Comparison Approach
Introduction
Robot Setup and Problem Formulation
Proposed Approaches
Model-Based Approach
Least Squares Support Vector Machines for Regression
Neural Networks
Results and Discussion
Number of Training Samples
Contribution of Velocity and Acceleration on the Estimation
Selective Subsampling
Conclusions
References
Learning Forward Models for the Operational Space Control of Redundant Robots
Introduction
Background in Operational Space Control
Joint Space to Task Space Mappings
Model-Based Control at the Velocity Level
Redundancy Resolution Schemes
Learning Forward and Inverse Velocity Kinematics Models
An Overview of Neural Networks Function Approximation
Locally Weighted Projection Regression
Learning Inverse Kinematics
Learning Forward Kinematics
Experimental Study
Control Architecture
Choice of Parameters for the LWPR Algorithm
Experiments
Results
Babbling Phase
Under-Constrained Case
Fully Constrained Case
Over-Constrained Case
Discussion
Conclusion
References
Part II: Learning Policies for Motor Control
Real-Time Local GP Model Learning
Introduction
Model-Based Control
Regression with Standard GPR
Local Gaussian Process Regression
Partitioning of Training Data
Incremental Update of Local Models
Prediction Using Local Models
Learning Inverse Dynamics for Model-Based Control
Learning Accuracy Comparison
Online Learning for Model-Based Control
Performance on a Complex Test Setting
Conclusion
References
Imitation and Reinforcement Learning for Motor Primitives with Perceptual Coupling
Introduction
Augmented Motor Primitives with Perceptual Coupling
Perceptual Coupling for Motor Primitives
Realization for Discrete Movements
Learning for Perceptually Coupled Motor Primitives
Imitation Learning with Perceptual Coupling
Reinforcement Learning for Perceptually Coupled Motor Primitives
Reinforcement Learning Setup
Policy Learning by Weighting Exploration with the Returns (PoWER)
Evaluation and Application
Robot Application: Ball-in-a-Cup
Conclusion
References
A Bayesian View on Motor Control and Planning
Introduction
A Bayesian View on Classical Control
Kinematic Case
Multiple Task Variables
Dynamic Case
A Bayesian View on Motion Planning
Kinematic Case
Experiments
Kinematic Control
Kinematic Motion Planning
Planning in More Structured Models
Coupling with Collision Constraints
Discussion
Conclusion
References
Methods for Learning Control Policies from Variable-Constraint Demonstrations
Introduction
Effect of Variable Dynamic Constraints on Learning
Learning Control Policies from Data
Formal Constraint Model
Learning from Constrained Motion Data
Constraint-Consistent Learning of Potential-Based Policies
Potential-Based Policies
Learning from Constrained Potential-Based Policies
Learning the Potential through Local Model Alignment
Constraint-Consistent Learning of Generic Policies
Optimisation of the Standard Risk, UPE and CPE
Learning Generic Policies by Minimising Inconsistency
Parametric Policy Models
Locally Linear Policy Models
Constraint-Consistent Learning Performance
Two-Dimensional Constrained System
Reaching for a Ball
Washing a Car
Discussion
References
Motor Learning at Intermediate Reynolds Number: Experiments with Policy Gradient on the Flapping Flight of a Rigid Wing
Introduction
Experimental Setup
Optimal Control Formulation
Reward Function
Policy Parameterization
The Learning Algorithm
The Weight Perturbation Update
The Shell Distribution
Signal-to-Noise Ratio Analysis
Definition of the Signal-to-Noise Ratio
Weight Perturbation with Gaussian Distributions
SNR with Parameter-Independent Additive Noise
Non-Gaussian Distributions
Experimental Evaluation of Shell Distributions
Implications for Learning at Intermediate Reynolds Numbers
Learning Results
Policy Representation Viewed through SNR
Reward Function Viewed through SNR
Implementation of Online Learning
Performance of Learning
Interpretation of Optimal Solution
Conclusion
References
Abstraction Levels for Robotic Imitation: Overview and Computational Approaches
Introduction
Imitation in Natural Systems
Neurophysiology
Psychology
Remarks
Imitation in Artificial Systems
Imitating by Motor Resonance
Visual Transformations
Mimicking Behaviors and Automatic Imitation
Imitation through Motor Primitives
Learning of New Task Solutions
Object Mediated Imitation
Bayesian Networks as Models for Affordances
Experiments
Imitating by Inferring the Goal
Goal Inference from Demonstration
Inverse Reinforcement Learning as Goal Inference
Experiments
Other Imitation Settings
Discussion
References
Part III: Imitation and Interaction Learning
Learning to Imitate Human Actions through Eigenposes
Related Work
3-D Eigenposes
Eigenposes as Low-Dimensional Representation of Postures
Action Subspace Embedding
Action Subspace Scaling
Learning to Predict Sensory Consequences of Actions
Predictive Model Motion Optimization
One-Dimensional Motion Optimization
Three-Dimensional Motion Optimization
Learning Human Motion through Imitation
Optimization of Motion Capture Data
Lossless Motion Optimization
Human Motion Capture of Sidestepping Motion
Large-Dimensional Cylindrical Coordinate System Transformation
Motion-Phase Optimization of Hyperdimensional Eigenposes
Conclusion
References
Incremental Learning of Full Body Motion Primitives
Introduction
Related Work
Factorial Hidden Markov Models
Human Motion Pattern Representation Using FHMMs
Incremental Behavior Learning and Hierarchy Formation
Step 1: Observation Sequence Encoding
Steps 2 and 3: Intra-model Distance Calculation
Steps 4 and 5: Clustering and New Group Formation
Step 6: New Behavior Instantiation
Deterministic Motion Generation
Interactive Motion Learning Environment
Experiments
Incremental Clustering Experiments
Experiments with the Interactive Training System
Conclusions
References
Can We Learn Finite State Machine Robot Controllers from Interactive Demonstration?
Introduction
Subtasks
Tutelage Based Robot Learning
Interactivity
Dogged Learning
Regression-Based Learning
SOGP
Experiments
Platform
Learned Behaviors
Perceptual Aliasing from Subtask Switching
Analysis
Finite State Machines from Demonstration
Conclusion
References
Mobile Robot Motion Control from Demonstration and Corrective Feedback
Introduction
Mobile Robot Motion Control
Learning from Demonstration
Policy Refinement within LfD
Corrective Feedback for Policy Refinement
Policy Corrections
Advice-Operators
Focused Feedback for Mobile Robot Policies
Algorithm Advice-Operator Policy Improvement
Empirical Validation of A-OPI
Experimental Setup
Results
Conclusion
References
Learning Continuous Grasp Affordances by Sensorimotor Exploration
Introduction
Related Work
Methods
Early Cognitive Vision
Grasp Reflex From Co-planar ECV Descriptors
Feature Hierarchies for 3D Visual Object Representation
Pose Estimation
Representing Grasp Densities
Learning Grasp Densities
Hypothesis Densities from Examples
Empirical Densities through Familiarization
Updating Empirical Densities in Long-Term Interaction
Results
Conclusion and Future Work
References
Multimodal Language Acquisition Based on Motor Learning and Interaction
Introduction
The Role of Interaction
Infant Directed Speech
Multimodal Interaction
Babbling and Imitation
Embodiment
Speech Production Unit
Sensing Units
Sensorimotor Maps
Short Term Memory
Long Term Memory
Humanoid Robot Experiments
Experiment 1: Word Learning
Learning Target Positions
Conclusions
References
Human-Robot Cooperation Based on Interaction Learning
Introduction
Linking Words to Actions – Spoken Language Programming
A Scenario for Human-Robot Cooperation
The Robotic Platforms
The Development of Spoken Language Programming
Macro-Style Programming
Vision Based Procedures with Arguments
Flexible Common Ground
Discussion
Beyond SLP – Anticipation and Cooperation
Anticipation – Extracting Behavior Regularities
Shared Plans – Cooperation towards a Common Goal
Discussion
Conclusion
References
Author Index


📜 SIMILAR VOLUMES


Machine Learning for Robotics Applications
✍ Monica Bianchini (editor), Milan Simic (editor), Ankush Ghosh (editor), Rabindra 📂 Library 📅 2021 🏛 Springer 🌐 English

Machine learning has become one of the most prevalent topics in recent years. The application of machine learning we see today is the tip of the iceberg. The machine learning revolution has just started to roll out. It is becoming an integral part of all modern electronic devices. …

Learning Motor Skills: From Algorithms to Robot Experiments
✍ Jens Kober, Jan Peters (auth.) 📂 Library 📅 2014 🏛 Springer International Publishing 🌐 English

This book presents the state of the art in reinforcement learning applied to robotics, both in terms of novel algorithms and applications. It discusses recent approaches that allow robots to learn motor skills and presents tasks that need to take into account the dynamic behavior of the …

Meta-Learning in Computational Intelligence
✍ Norbert Jankowski, Krzysztof Grąbczewski (auth.), Norbert Jankowski, Włodzisław 📂 Library 📅 2011 🏛 Springer-Verlag Berlin Heidelberg 🌐 English

The Computational Intelligence (CI) community has developed hundreds of algorithms for intelligent data analysis, but many hard problems in computer vision, signal processing, or text and multimedia understanding, problems that require deep learning techniques, remain open. …

Machine Learning Approaches for Urban Computing
✍ Mainak Bandyopadhyay (editor), Minakhi Rout (editor), Suresh Chandra Satapathy ( 📂 Library 🏛 Springer 🌐 English

This book discusses various machine learning applications and models, developed using heterogeneous data, which help in comprehensive prediction, optimization, association analysis, cluster analysis, and classification-related applications for various activities in urban areas. It details …