On computing the largest fraction of missing information for the EM algorithm and the worst linear function for data augmentation
✍ Written by Chris Fraley
- Publisher
- Elsevier Science
- Year
- 1999
- Language
- English
- File size
- 172 KB
- Volume
- 31
- Category
- Article
- ISSN
- 0167-9473
✦ Synopsis
We address the problem of computing the largest fraction of missing information for the EM algorithm and the worst linear function for data augmentation. These are the largest eigenvalue and its associated eigenvector for the Jacobian of the EM operator at a maximum likelihood estimate, which are important for assessing convergence in iterative simulation. An estimate of the largest fraction of missing information is available from the EM iterates; this is often adequate since only a few figures of accuracy are needed. In some instances the EM iteration also gives an estimate of the worst linear function. We show that improved estimates can be essential for proper inference. In order to obtain improved estimates efficiently, we use the power method for eigencomputation. Unlike eigenvalue decomposition, the power method computes only the largest eigenvalue and eigenvector of a matrix; it can take advantage of a good eigenvector estimate as an initial value, and it can be terminated after only a few figures of accuracy are achieved. Moreover, the matrix products needed in the power method can be computed by extrapolation, obviating the need to form the Jacobian of the EM operator. We give results of simulation studies on multivariate normal data showing that this approach becomes more efficient as the data dimension increases than methods that use a finite-difference approximation to the Jacobian, which is the only general-purpose alternative available.
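The Jacobian-free power iteration described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `em_map` is a hypothetical user-supplied function performing one EM iteration θ → M(θ), and the product Jv is approximated by the extrapolation (M(θ* + εv) − θ*)/ε, which uses the fact that θ* is a fixed point of M.

```python
import numpy as np

def largest_missing_information(em_map, theta_star, v0=None,
                                eps=1e-6, tol=1e-8, max_iter=200):
    """Power-method sketch: estimate the largest eigenvalue (largest
    fraction of missing information) and its eigenvector (worst linear
    function) of the Jacobian of the EM operator at the MLE theta_star,
    without ever forming the Jacobian.

    em_map     : one EM iteration, theta -> M(theta)  (hypothetical API)
    theta_star : fixed point of em_map (the MLE)
    v0         : optional initial eigenvector guess (e.g. from EM iterates)
    """
    d = len(theta_star)
    v = np.ones(d) / np.sqrt(d) if v0 is None else v0 / np.linalg.norm(v0)
    lam = 0.0
    for _ in range(max_iter):
        # Matrix-vector product J v by extrapolation of one EM step:
        # M(theta* + eps v) - M(theta*) ~= eps * J v, with M(theta*) = theta*.
        w = (em_map(theta_star + eps * v) - theta_star) / eps
        lam_new = np.linalg.norm(w)
        v_new = w / lam_new
        if abs(lam_new - lam) < tol * max(lam_new, 1.0):
            return lam_new, v_new
        lam, v = lam_new, v_new
    return lam, v
```

Because only the dominant eigenpair is needed and a good starting vector is often available from the EM trajectory itself, the loop typically terminates after a handful of EM-map evaluations, each costing one EM step rather than the d evaluations a finite-difference Jacobian would require.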