Let x(t) denote a Markov process evolving on some state-space X and consider observations given by y(t) = h(x(t)), where h is a real-valued function on X. If φ is another real-valued function on X, the best estimate (in the mean square sense) of φ(x(t)) given y(τ), 0 ≤ τ ≤ t, is the conditional expectation.
Estimation for mixtures of Markov processes
By Jeong-gun Park; I.V. Basawa
- Publisher
- Elsevier Science
- Year
- 2002
- Language
- English
- Size
- 121 KB
- Volume
- 59
- Category
- Article
- ISSN
- 0167-7152
Synopsis
Finite mixtures of Markov processes with densities belonging to exponential families are introduced. Quasi-likelihood and maximum likelihood methods are used to estimate the parameters of the mixing distributions and of the component distributions. The E-M algorithm is used to compute the ML estimates. Mixtures of autoregressive processes and of two-state Markov chains are discussed as specific examples. Simulation results comparing the quasi-likelihood and ML estimates are reported.
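To make the E-M recipe concrete for one of the examples the synopsis names (a finite mixture of two-state Markov chains), here is a minimal illustrative sketch. It is not the paper's implementation: all function and variable names are our own, the number of components and states are assumptions, and the small Dirichlet smoothing constant is added purely for numerical safety.

```python
import numpy as np

def em_markov_mixture(seqs, K=2, S=2, n_iter=50, seed=0):
    """Sketch of EM for a K-component mixture of S-state Markov chains.

    seqs: list of integer state sequences, each assumed to be drawn
    from one (unknown) component chain. Returns the estimated mixing
    weights pi (length K) and transition matrices P (K x S x S).
    """
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)                     # mixing weights
    P = rng.dirichlet(np.ones(S), size=(K, S))   # K random transition matrices
    # Sufficient statistics: transition counts for each sequence.
    counts = np.zeros((len(seqs), S, S))
    for i, s in enumerate(seqs):
        for a, b in zip(s[:-1], s[1:]):
            counts[i, a, b] += 1
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sequence.
        logL = np.einsum('iab,kab->ik', counts, np.log(P)) + np.log(pi)
        logL -= logL.max(axis=1, keepdims=True)  # stabilize before exp
        gamma = np.exp(logL)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: reweighted mixing proportions and transition matrices.
        pi = gamma.mean(axis=0)
        num = np.einsum('ik,iab->kab', gamma, counts) + 1e-9
        P = num / num.sum(axis=2, keepdims=True)
    return pi, P
```

The E-step exploits the fact that a Markov chain's log-likelihood is linear in the transition counts, so each sequence can be summarized once up front; the M-step is then just a responsibility-weighted count normalization, which is the closed form one would expect for exponential-family components.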
SIMILAR VOLUMES
Mixtures of recurrent semi-Markov processes are characterized through a partial exchangeability condition of the array of successor states and holding times. A stronger invariance condition on the joint law of successor states and holding times leads to mixtures of Markov laws.