𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Improved Nonnegative Estimation of Variance Components in Balanced Multivariate Mixed Models

✍ Scribed by T. Mathew; A. Niyogi; B.K. Sinha


Publisher: Elsevier Science
Year: 1994
Tongue: English
Weight: 704 KB
Volume: 51
Category: Article
ISSN: 0047-259X


✦ Synopsis


Consider the independent Wishart matrices \(S_1 \sim W(\Sigma + \lambda\theta, q_1)\) and \(S_2 \sim W(\Sigma, q_2)\), where \(\Sigma\) is an unknown positive definite (p.d.) matrix, \(\theta\) is an unknown nonnegative definite (n.n.d.) matrix, and \(\lambda\) is a known positive scalar. For the estimation of \(\theta\), a class of estimators of the form \(\hat{\theta}_{(c,\varepsilon)} = (c/\lambda)\{S_1/q_1 - \varepsilon(S_2/q_2)\}\) (\(c \geqslant 0\), \(\varepsilon \leqslant 1\)), uniformly better than the unbiased estimator \(\hat{\theta}_U = (1/\lambda)\{S_1/q_1 - S_2/q_2\}\), is derived (for the squared error loss function). Necessary and sufficient conditions are obtained for the existence of an n.n.d. estimator of the form \(\hat{\theta}_{(c,\varepsilon)}\) uniformly better than \(\hat{\theta}_U\). It turns out that such an n.n.d. estimator exists only under restrictive conditions. However, for a suitable choice of \(c > 0\), \(\varepsilon > 0\), taking the positive part of \(\hat{\theta}_{(c,\varepsilon)}\) yields an n.n.d. estimator, say \(\hat{\theta}_{(c,\varepsilon)+}\), that is uniformly better than \(\hat{\theta}_U\). Numerical results indicate that in terms of mean squared error, \(\hat{\theta}_{(c,\varepsilon)+}\) performs much better than both \(\hat{\theta}_U\) and the restricted maximum likelihood estimator \(\hat{\theta}_{\mathrm{REML}}\) of \(\theta\). Similar results are also obtained for the nonnegative estimation of \(\operatorname{tr}\theta\) and \(\mathbf{a}'\theta\mathbf{a}\), where \(\mathbf{a}\) is an arbitrary nonzero vector. For estimating \(\Sigma\), we have derived estimators that are claimed to be uniformly better than the unbiased estimator \(\hat{\Sigma}_U = S_2/q_2\) under the squared error loss function and the entropy loss function. We have been able to establish the claim only in the bivariate case.
Numerical results are reported showing the risk improvement of our proposed estimators of \(\Sigma\). © 1994 Academic Press, Inc.
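As a rough illustration of the construction described in the synopsis (not the authors' code), the sketch below implements the unbiased estimator and a positive-part estimator in NumPy. The positive part is taken in the usual spectral sense: truncate negative eigenvalues at zero. All function names and example parameter values here are our own assumptions.

```python
import numpy as np

def positive_part(M):
    """Project a symmetric matrix onto the n.n.d. cone by
    zeroing its negative eigenvalues (spectral truncation)."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

def theta_hat_unbiased(S1, S2, q1, q2, lam):
    """Unbiased estimator: (1/lambda) * (S1/q1 - S2/q2).
    May fail to be n.n.d. in finite samples."""
    return (S1 / q1 - S2 / q2) / lam

def theta_hat_ce_plus(S1, S2, q1, q2, lam, c, eps):
    """Positive part of the shrinkage-type estimator
    (c/lambda) * (S1/q1 - eps * S2/q2), guaranteed n.n.d."""
    return positive_part((c / lam) * (S1 / q1 - eps * (S2 / q2)))
```

Unlike the unbiased estimator, the positive-part estimator is nonnegative definite by construction, which is the property the paper exploits to obtain uniform risk improvement for suitable \(c\) and \(\varepsilon\).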


📜 SIMILAR VOLUMES


Estimation of Variance Components in Mix…
✍ T. Kubokawa 📂 Article 📅 1995 🏛 Elsevier Science 🌐 English ⚖ 775 KB

In mixed linear models with two variance components, classes of estimators improving on ANOVA estimators for the variance components and the ratio of variances are constructed on the basis of the invariant statistics. Out of the classes, consistent, improved and positive estimators are singled out.

A New Algorithm for Explicit Determinati…
✍ D. Holomek 📂 Article 📅 1978 🏛 John Wiley and Sons 🌐 English ⚖ 662 KB

This paper deals with the balanced case of the analysis of variance. The use of a classification function leads to an easy determination of all possible sources of variation of any mixed classification. For mixed models a new method is derived, which allows an explicit representation of the ANO…

Variance Component Estimation for Mixed…
✍ Barbara Sarholz; Hans-Peter Piepho 📂 Article 📅 2008 🏛 John Wiley and Sons 🌐 English ⚖ 406 KB 👁 2 views

Microarrays provide a valuable tool for the quantification of gene expression. Usually, however, there is a limited number of replicates leading to unsatisfying variance estimates in a gene‐wise mixed model analysis. As thousands of genes are available, it is desirable to combine inform…