Does automation bias decision-making?
By LINDA J. SKITKA; KATHLEEN L. MOSIER; MARK BURDICK
- Publisher
- Elsevier Science
- Year
- 1999
- Language
- English
- File size
- 183 KB
- Volume
- 51
- Category
- Article
- ISSN
- 1071-5819
Synopsis
Computerized system monitors and decision aids are increasingly common additions to critical decision-making contexts such as intensive care units, nuclear power plants and aircraft cockpits. These aids are introduced with the ubiquitous goal of "reducing human error". The present study compared error rates in a simulated flight task with and without a computer that monitored system states and made decision recommendations. Participants in non-automated settings out-performed their counterparts with a very but not perfectly reliable automated aid on a monitoring task. Participants with an aid made errors of omission (missed events when not explicitly prompted about them by the aid) and commission (did what an automated aid recommended, even when it contradicted their training and other 100% valid and available indicators). Possible causes and consequences of automation bias are discussed. © 1999 Academic Press
Although generally introduced to guard against human error, automated devices can fundamentally change how people approach their work, which in turn can lead to new and different kinds of error. The present study explored the extent to which errors of omission (failures to respond to system irregular