A robust regression technique using compound estimation
By James R. Simpson; Douglas C. Montgomery
- Publisher
- John Wiley and Sons
- Year
- 1998
- Language
- English
- Size
- 92 KB
- Volume
- 45
- Category
- Article
- ISSN
- 0894-069X
Free of charge; no registration required. For personal study only.
Synopsis
Least squares fitting of regression models is a widely used technique. The presence of outliers in the data can have an adverse effect on the method of least squares, resulting in a model that does not adequately fit the bulk of the data. For this situation, robust regression techniques have been proposed as an improvement to the method of least squares. We propose a robust regression procedure that performs well relative to current robust methods across a variety of dataset types. Evaluations are performed using datasets without outliers (testing efficiency), with a large percentage of outliers (testing breakdown), and with high-leverage outliers (testing bounded influence). The datasets are based on 2-level factorial designs that include axial points to evaluate leverage effects. A Monte Carlo simulation approach is used to evaluate the estimating capability of the proposed procedure relative to several competing methods. We also provide an application to estimating costs for government satellites.
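To illustrate the core idea the synopsis describes (outliers pulling least squares away from the bulk of the data, and a robust estimator resisting that pull), here is a minimal sketch. It is not the authors' compound estimation procedure; it uses a standard Huber M-estimator fitted by iteratively reweighted least squares (IRLS) on simulated data with injected outliers, with all names and parameter values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = 1 + 2x with Gaussian noise, plus a few gross outliers.
n = 50
x = np.linspace(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
y[::10] += 30.0  # inject 5 large outliers

X = np.column_stack([np.ones(n), x])  # design matrix: intercept + slope

# Ordinary least squares for comparison.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

def huber_irls(X, y, k=1.345, iters=50):
    """Huber M-estimation via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        # Robust scale estimate from the median absolute deviation (MAD).
        s = np.median(np.abs(r - np.median(r))) / 0.6745
        u = r / (s + 1e-12)
        # Huber weights: full weight for small residuals, downweight large ones.
        w = np.where(np.abs(u) <= k, 1.0, k / np.abs(u))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

beta_rob = huber_irls(X, y)
print("OLS   (intercept, slope):", beta_ols)
print("Huber (intercept, slope):", beta_rob)
```

With the outliers present, the OLS slope and intercept drift away from the true values (1, 2), while the Huber fit downweights the contaminated points and stays close to the bulk of the data. Simpson and Montgomery's evaluation criteria map directly onto such experiments: efficiency (no outliers), breakdown (many outliers), and bounded influence (high-leverage outliers).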
SIMILAR VOLUMES
A diagnostic procedure for detecting additive and innovation outliers as well as level shifts in a regression model with ARIMA errors is introduced. The procedure is based on a robust estimate of the model parameters and on innovation residuals computed by means of robust filtering. A M…
In computing eigenvalues for a large finite element system, it has been observed that the eigenvalue extractors produce eigenvectors that are in some sense more accurate than their corresponding eigenvalues. In this paper, computational examples are presented to validate this observation. From this…