It is well known that the EM algorithm generally converges to a local maximum of the likelihood. However, there is considerable evidence that the EM algorithm converges to the true parameters as long as the overlap between the Gaussian components in the sample data is small enough. This paper st…
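The quoted claim is easy to check numerically. Below is a minimal sketch (plain NumPy; the data, starting values, and iteration count are hypothetical choices, not taken from the paper) that fits a two-component univariate normal mixture by EM on well-separated samples; with this little overlap the estimates typically land close to the generating parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated components: the overlap between the Gaussians is tiny.
x = np.concatenate([rng.normal(-5.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])

# Deliberately poor starting values (hypothetical).
pi = np.array([0.5, 0.5])      # mixing proportions
mu = np.array([-1.0, 1.0])     # component means
sigma = np.array([2.0, 2.0])   # component standard deviations

for _ in range(200):
    # E-step: responsibility of each component for each observation.
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
           / (sigma * np.sqrt(2.0 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate proportions, means, and standard deviations.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(pi, mu, sigma)  # should come out close to [0.5, 0.5], [-5, 5], [1, 1]
```

With heavily overlapping components the same code can stall at poor local maxima, which is exactly the regime the claim above excludes.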
On the choice of the number of blocks with the incremental EM algorithm for the fitting of normal mixtures
Authors: S. K. Ng; G. J. McLachlan
- ID: 110413251
- Publisher: Springer US
- Year: 2003
- Language: English
- File size: 111 KB
- Volume: 13
- Category: Article
- ISSN: 0960-3174
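The article concerns the incremental (IEM) variant of EM, in which the data are split into B blocks and the parameters are refreshed after the E-step on each block rather than after a full pass; the choice of B trades update frequency against per-update overhead. As a rough illustration only, and not the authors' procedure, here is a minimal Neal–Hinton-style sketch for a univariate normal mixture; all names are hypothetical.

```python
import numpy as np

def incremental_em(x, n_blocks, pi, mu, sigma, n_passes=20):
    """Block-incremental EM for a univariate normal mixture.

    Keeps per-block sufficient statistics and re-estimates the
    parameters after visiting each block (Neal-Hinton style).
    """
    blocks = np.array_split(x, n_blocks)
    K = len(pi)
    stats = [np.zeros((3, K)) for _ in range(n_blocks)]  # per-block (n, sum, sumsq)
    total = np.zeros((3, K))
    for _ in range(n_passes):
        for b, xb in enumerate(blocks):
            # Partial E-step on this block only.
            dens = pi * np.exp(-0.5 * ((xb[:, None] - mu) / sigma) ** 2) \
                   / (sigma * np.sqrt(2.0 * np.pi))
            r = dens / dens.sum(axis=1, keepdims=True)
            new = np.stack([r.sum(0),
                            (r * xb[:, None]).sum(0),
                            (r * xb[:, None] ** 2).sum(0)])
            total += new - stats[b]  # swap in this block's fresh statistics
            stats[b] = new
            # M-step from the current global statistics.
            n0, n1, n2 = total
            pi = n0 / n0.sum()
            mu = n1 / n0
            sigma = np.sqrt(np.maximum(n2 / n0 - mu ** 2, 1e-12))
    return pi, mu, sigma

# e.g. incremental_em(x, n_blocks=10, pi=np.array([0.5, 0.5]),
#                     mu=np.array([-1.0, 1.0]), sigma=np.array([2.0, 2.0]))
```

Setting n_blocks=1 recovers standard batch EM; larger values give more frequent parameter updates per sweep, which is the trade-off behind the paper's block-choice question.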
SIMILAR VOLUMES
On the correct convergence of the EM algorithm for Gaussian mixtures
Jinwen Ma; Shuqun Fu
Article · 2005 · Elsevier Science · English · 235 KB
On Convergence Properties of the EM Algorithm for Gaussian Mixtures
Xu, Lei; Jordan, Michael I.
Article · 1996 · MIT Press · English · 929 KB
Automatic determination of the number of …
D. P. Vetrov; D. A. Kropotov; A. A. Osokin
Article · 2010 · SP MAIK Nauka/Interperiodica · English · 303 KB
The EM algorithm with gradient function …
Dankmar Böhning
Article · 2003 · Springer US · English · 100 KB
Another interpretation of the EM algorithm for mixture distributions
Richard J. Hathaway
Article · 1986 · Elsevier Science · English · 241 KB
Learning mixtures of point distribution models with the EM algorithm
Abdullah A. Al-Shaher; Edwin R. Hancock
Article · 2003 · Elsevier Science · English · 409 KB
This paper demonstrates how the EM algorithm can be used for learning and matching mixtures of point distribution models. We make two contributions. First, we show how shape-classes can be learned in an unsupervised manner. We present a fast procedure for training point distribution models using the…
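For context, a point distribution model is a linear shape model obtained by principal component analysis of aligned landmark coordinates. The sketch below shows only the basic single-class construction, under the assumption that the input shapes are already aligned; the names are hypothetical, and the paper's actual contribution, an EM-fitted mixture of such models, adds class responsibilities on top of this.

```python
import numpy as np

def fit_pdm(shapes, n_modes=3):
    """Fit a point distribution model by PCA.

    shapes: array (N, K, 2) of N pre-aligned shapes with K landmarks each.
    Returns the mean shape vector, the leading modes, and their variances.
    """
    n, k, _ = shapes.shape
    x = shapes.reshape(n, 2 * k)        # flatten each shape to a 2K-vector
    mean = x.mean(axis=0)
    xc = x - mean
    cov = xc.T @ xc / (n - 1)           # landmark covariance matrix
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_modes]
    return mean, evecs[:, order], evals[order]

def reconstruct(mean, modes, b):
    """Generate a shape from mode weights b: x = mean + P @ b."""
    return (mean + modes @ b).reshape(-1, 2)
```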