ANALYSIS OF MULTIVARIATE RELIABILITY STRUCTURES AND THE INDUCED BIAS IN LINEAR MODEL ESTIMATION
By MIKEL AICKIN; CHERYL RITENBAUGH
- Publisher
- John Wiley and Sons
- Year
- 1996
- Language
- English
- File size
- 884 KB
- Volume
- 15
- Category
- Article
- ISSN
- 0277-6715
Synopsis
Least squares provides consistent estimates of the regression coefficients β in the model E[Y | x] = βx when fully accurate measurements of x are available. However, in biomedical studies one must frequently substitute unreliable measurements X in place of x. This induces bias in the least squares coefficient estimates. In the univariate case, the bias manifests itself as a shrinkage toward zero, but this result does not generalize. When x is multivariate, there are no predictable relationships between the signs or magnitudes of actual and estimated regression coefficients. In this article, we characterize the estimation bias and review a relatively simple adjustment procedure to correct it. We also show that several natural conjectures about the bias are false. We present three definitions of reliability coefficient matrices that generalize the univariate case, and we illustrate their application to dietary intake data from a cancer prevention study.
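The univariate attenuation described in the synopsis can be sketched in a short simulation. The sample size, noise levels, and variable names below are illustrative assumptions, not values from the article: regressing Y on a mismeasured X shrinks the slope toward zero by the reliability coefficient λ = var(x)/var(X), and dividing by a known λ recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0
sigma_x, sigma_u = 1.0, 1.0  # sd of true covariate and of measurement error (assumed)

x = rng.normal(0.0, sigma_x, n)           # true but unobservable covariate
Y = beta * x + rng.normal(0.0, 0.5, n)    # outcome generated by the true model
X = x + rng.normal(0.0, sigma_u, n)       # unreliable measurement substituted for x

# Naive least squares slope of Y on the mismeasured X
b_naive = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)

# Reliability coefficient: lambda = var(x) / var(X); attenuation gives E[b_naive] ~ lambda * beta
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)

# Simple univariate correction when lambda is known (or estimated from replicate measurements)
b_adjusted = b_naive / lam
```

Here λ = 0.5, so the naive slope lands near 1.0 rather than the true 2.0; the adjusted slope undoes the shrinkage. In the multivariate setting discussed in the article, λ becomes a reliability coefficient matrix and this sign- and magnitude-preserving behavior no longer holds.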