Mixture and Hidden Markov Models with R (Use R!)
- Authors: Ingmar Visser, Maarten Speekenbrink
- Publisher: Springer
- Year: 2022
- Language: English
- Pages: 277
- Category: Library
Synopsis
This book discusses mixture and hidden Markov models for modeling behavioral data. Mixture and hidden Markov models are statistical models which are useful when an observed system occupies a number of distinct "regimes" or unobserved (hidden) states. These models are widely used in a variety of fields, including artificial intelligence, biology, finance, and psychology. Hidden Markov models can be viewed as an extension of mixture models, to model transitions between states over time. Covering both mixture and hidden Markov models in a single book allows main concepts and issues to be introduced in the relatively simpler context of mixture models. After a thorough treatment of the theory and practice of mixture modeling, the conceptual leap towards hidden Markov models is relatively straightforward.
This book provides many practical examples illustrating the wide variety of uses of the models. These examples are drawn from our own work in psychology, as well as other areas such as financial time series and climate data. Most examples illustrate the use of the authors' depmixS4 package, which provides a flexible framework to construct and estimate mixture and hidden Markov models. All examples are fully reproducible and the accompanying hmmR package provides all the datasets used, as well as additional functionality. This book is suitable for advanced students and researchers with an applied background.
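To give a flavor of the workflow the synopsis describes, a two-state Gaussian hidden Markov model can be specified and fitted with depmixS4 roughly as follows. This is a minimal sketch, assuming depmixS4 is installed; the `speed` response-time data used here ship with the package:

```r
## Sketch: two-state Gaussian HMM with depmixS4 (assumes the package
## and its bundled 'speed' dataset are available).
library(depmixS4)
data(speed)  # response times from a speed-accuracy experiment

# Specify a 2-state HMM with a Gaussian model for the response times
mod <- depmix(rt ~ 1, data = speed, nstates = 2, family = gaussian())

set.seed(1)
fm <- fit(mod)       # estimate parameters by EM
summary(fm)          # state-dependent means/sds and transition matrix
head(posterior(fm))  # posterior state probabilities (local decoding)
```

The `posterior()` call returns, for each observation, the most likely state and the posterior probability of each state, which is how the "hidden" regimes are recovered from the data.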
Table of Contents
Preface
Chapter Outlines and Reading Guide
Acknowledgments
Settings, Appearance, and Notation
Contents
1 Introduction and Preliminaries
1.1 What Are Mixture and Hidden Markov Models?
1.1.1 Outline
1.2 Getting Started with R
1.2.1 Help!
1.2.2 Loading Packages and Data
1.2.3 Object Types and Manipulation
1.2.3.1 Functions
1.2.3.2 S4 Objects
1.2.4 Visualizing Data
1.2.5 Summarizing Data
1.2.6 Linear and Generalized Linear Models
1.2.7 Multinomial Logistic Regression
1.2.8 Time-Series
1.3 Datasets Used in the Book
1.3.1 Speed-Accuracy Data
1.3.1.1 Description
1.3.2 S&P 500
1.3.2.1 Description
1.3.3 Perth Dams Data
1.3.3.1 Description
1.3.4 Discrimination Learning Data
1.3.4.1 Description
1.3.5 Balance Data
1.3.5.1 Description
1.3.6 Repeated Measures on the Balance Scale Task
1.3.6.1 Description
1.3.7 Dimensional Change Card Sorting Task Data
1.3.7.1 Description
1.3.8 Weather Prediction Task Data
1.3.8.1 Description
1.3.9 Conservation of Liquid Data
1.3.9.1 Description
1.3.10 Iowa Gambling Task Data
1.3.10.1 Description
2 Mixture and Latent Class Models
2.1 Introduction and Motivating Example
2.2 Definitions and Notation
2.2.1 Mixture Distribution
2.2.2 Example: Generating Data from a Mixture Distribution
2.2.3 Parameters of the Mixture Model
2.2.4 Mixture Likelihood
2.2.5 Posterior Probabilities
2.3 Parameter Estimation
2.3.1 Maximum Likelihood Estimation
2.3.2 Numerical Optimization of the Likelihood
2.3.3 Expectation Maximization (EM)
2.3.3.1 EM for a Gaussian Mixture
2.3.3.2 Mixtures of Generalized Linear Models
2.3.3.3 Why Does the EM Algorithm Work?
2.3.4 Optimizing Parameters Subject to Constraints
2.3.5 EM or Numerical Optimization?
2.3.6 Starting Values for Parameters in Mixture Models
2.4 Parameter Inference: Likelihood Ratio Tests
2.4.1 Example: Equality Constraint on Standard Deviations
2.5 Parameter Inference: Standard Errors and Confidence Intervals
2.5.1 Finite Difference Approximation of the Hessian
2.5.2 Parametric Bootstrap
2.5.3 Correcting the Hessian for Linear Constraints
2.6 Model Selection
2.6.1 Likelihood-Ratio Tests
2.6.2 Information Criteria
2.6.2.1 Akaike Information Criterion
2.6.2.2 Bayesian Information Criterion
2.6.2.3 Which to Use?
2.6.3 Example: Model Selection for the Speed1 RT Data
2.7 Covariates on the Prior Probabilities
2.8 Identifiability of Mixture Models
2.9 Further Reading
3 Mixture and Latent Class Models: Applications
3.1 Gaussian Mixture for the S&P500 Data
3.2 Gaussian Mixture Model for Conservation Data
3.3 Bivariate Gaussian Mixture Model for Conservation Data
3.4 Latent Class Model for Balance Scale Data
3.4.1 Model Selection and Checking
3.4.2 Testing Item Homogeneity Using Parameter Constraints
3.5 Binomial Mixture Model for Balance Scale Data
3.5.1 Binomial Logistic Regression
3.5.2 Mixture Models
3.5.3 Model Selection and Model Checking
3.6 Model Selection with the Bootstrap Likelihood Ratio
3.7 Further Reading
4 Hidden Markov Models
4.1 Preliminaries: Markov Models
4.1.1 Definitions
4.1.1.1 Markov Property
4.1.1.2 Transition Matrix
4.1.1.3 Homogeneity
4.1.1.4 Initial State Probabilities
4.1.2 Properties of Markov Models
4.1.2.1 Stationary Distribution
4.1.2.2 Ergodicity
4.1.2.3 Transient and Absorbing States
4.1.2.4 Dwell Time Distribution
4.1.2.5 Markov Models and Grammars: The Golden Mean Model
4.2 Introducing the Hidden Markov Model
4.2.1 Definitions
4.2.2 Relation Between Hidden Markov and Mixture Model
4.2.3 Example: Bernoulli Hidden Markov Model
4.2.4 Likelihood and Inference Problems
4.3 Filtering, Likelihood, Smoothing and Prediction
4.3.1 Filtering
4.3.2 Likelihood
4.3.3 Smoothing
4.3.4 Scaling
4.3.5 The Likelihood Revisited
4.3.6 Multiple Time Series
4.3.7 Prediction
4.4 Parameter Estimation
4.4.1 Numerical Optimization of the Likelihood
4.4.2 Expectation Maximization (EM)
4.5 Decoding
4.5.1 Local Decoding
4.5.2 Global Decoding
4.6 Parameter Inference
4.6.1 Standard Errors
4.7 Covariates on Initial and Transition Probabilities
4.8 Missing Data
4.8.1 Missing Data in Hidden Markov Models
4.8.2 Missing at Random
4.8.3 State-Dependent Missingness
4.8.3.1 Simulation Study
5 Univariate Hidden Markov Models
5.1 Gaussian Hidden Markov Model for Financial Time Series
5.2 Bernoulli HMM for the DCCS Data
5.3 Accounting for Autocorrelation Between Response Times
5.3.1 Response Times
5.3.2 Models for Response Times
5.3.3 Model Assessment and Selection of RT Models
5.4 Change Point HMM for Climate Data
5.5 Generalized Linear Hidden Markov Models for Multiple Cue Learning
6 Multivariate Hidden Markov Models
6.1 Latent Transition Model for Balance Scale Data
6.1.1 Learning and Regression
6.2 Switching Between Speed and Accuracy
6.2.1 Modeling Hysteresis
6.2.2 Testing Conditional Independence and Further Extensions
6.3 Dependency Between Binomial and Multinomial Responses: The IGT Data
7 Extensions
7.1 Higher-Order Markov Models
7.1.1 Reformulating a Higher-Order HMM as a First-Order HMM
7.1.2 Example: A Two-State Second-Order HMM for Discrimination Learning
7.2 Models with a Distributed State Representation
7.3 Dealing with Practical Issues in Estimation
7.3.1 Unbounded Likelihood
7.4 The Classification Likelihood
7.4.1 Mixture Models
7.4.2 Hidden Markov Models
7.5 Bayesian Estimation
7.5.1 Sampling States and Model Parameters
7.5.1.1 FFBS for the Speed Data
7.5.2 Sampling Model Parameters by Marginalizing Over Hidden States
References
Epilogue
The Production of the Book
Index
SIMILAR VOLUMES
Reveals how HMMs can be used as general-purpose time series models and implements all methods in R. Hidden Markov Models for Time Series: An Introduction Using R applies hidden Markov models (HMMs) to a wide range of time series types, from continuous-valued, c…
This book offers an introduction to graphical modeling using R and the main features of some of these packages. It provides examples of how more advanced aspects of graphical modeling can be represented and handled within R.
In the current age of information technology, the issues of distributing and utilizing images efficiently and effectively are of substantial concern. Solutions to many of the problems arising from these issues are provided by techniques of image processing, among which segmentation and compression…