In this paper, we address the problem of estimating θ₁ when Yⱼ ∼ ind N(θⱼ, σⱼ²), j = 1, 2, are observed, the σⱼ are known and |θ₁ − θ₂| ≤ c for a known constant c. Assuming the loss is squared error, we derive a generalized Bayes estimator which is admissible. It uses Y₂ to achieve a uniformly smaller …
Combining the data from two normal populations to estimate the mean of one when their means difference is bounded
✍ By Constance van Eeden; James V. Zidek
- Publisher: Elsevier Science
- Year: 2004
- Language: English
- Size: 357 KB
- Volume: 88
- Category: Article
- ISSN: 0047-259X
✦ Synopsis
In this paper we address the problem of estimating θ₁ when Yᵢ ∼ ind N(θᵢ, σᵢ²), i = 1, 2, are observed and |θ₁ − θ₂| ≤ c for a known constant c. Clearly Y₂ contains information about θ₁. We show how the so-called weighted likelihood function may be used to generate a class of estimators that exploit that information. We discuss how the weights in the weighted likelihood may be selected to successfully trade bias for precision and thus use the information effectively. In particular, we consider adaptively weighted likelihood estimators, where the weights are selected using the data. One approach selects such weights in accord with Akaike's entropy-maximization criterion. We describe several estimators obtained in this way. The maximum likelihood estimator is investigated as a competitor to these estimators, along with a Bayes estimator, a class of robust Bayes estimators and (when c is sufficiently small) a minimax estimator. We assess their properties both numerically and theoretically. Finally, we show how all of these estimators may be viewed as adaptively weighted likelihood estimators. In fact, an overriding theme of the paper is that the adaptively weighted likelihood method provides a powerful extension of its classical counterpart.
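The weighted-likelihood idea behind the synopsis can be sketched in a few lines: maximizing log f(Y₁; θ) + λ·log f(Y₂; θ) over θ for two normal observations yields a precision-weighted average of Y₁ and Y₂, with the weight λ ∈ [0, 1] controlling how much of Y₂'s information is borrowed. This is a minimal illustrative sketch, not one of the paper's estimators; the function name and the choice of a fixed λ are assumptions.

```python
def weighted_likelihood_estimate(y1, y2, s1, s2, lam):
    """Maximizer of log f(y1; t, s1) + lam * log f(y2; t, s2) over t,
    where f(.; t, s) is the N(t, s^2) density.

    With lam = 0 this reduces to y1 alone (no borrowing); with lam = 1
    it is the usual precision-weighted MLE based on both observations.
    """
    w1 = 1.0 / s1**2        # precision of Y1
    w2 = lam / s2**2        # down-weighted precision of Y2
    return (w1 * y1 + w2 * y2) / (w1 + w2)

# Example: equal variances, full weight -> simple average of the data.
theta_hat = weighted_likelihood_estimate(1.0, 3.0, 1.0, 1.0, 1.0)  # 2.0
```

An adaptive version would replace the fixed λ with a data-driven choice, e.g. shrinking λ toward 0 as |Y₁ − Y₂| grows relative to c, which is the bias-for-precision trade the abstract refers to.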