
Benchmarking Automation-Aided Signal Detection

Authors
Prof. Jason McCarley
Oregon State University, School of Psychological Science
Abstract

Human operators often perform signal detection tasks with assistance from automated aids. Unfortunately, users tend to disuse aids that are less than perfectly accurate (Parasuraman & Riley, 1997), disregarding the aids' advice even when it might be helpful. To facilitate cost-benefit analyses of automated signal detection aids, we benchmarked the performance of human-automation teams against the predictions of various models of information integration. Participants performed a binary signal detection task, with and without assistance from an automated aid. On each trial, the aid provided the participant with a binary judgment along with an estimate of its certainty. The comparison models ranged from perfectly efficient to highly inefficient. Even with an automated aid of fairly high sensitivity (d' = 3), performance of the human-automation teams was poor, approaching the predictions of the least efficient comparison models, and the efficiency of the human-automation teams was substantially lower than that achieved by pairs of human collaborators. The data indicate strong automation disuse and provide guidance for estimating the benefits of automated detection aids.
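The abstract does not spell out the comparison models, but the standard upper benchmark in this literature is optimal integration of two statistically independent Gaussian observers, for which team sensitivity is d'_team = sqrt(d'_human^2 + d'_aid^2) (Sorkin & Dai, 1994), and efficiency can be expressed as the squared ratio of obtained to ideal d'. A minimal sketch under those assumptions; the aid's d' = 3 is taken from the abstract, while the human and team d' values below are illustrative placeholders, not results from the study:

```python
from math import sqrt

def ideal_team_dprime(d_human: float, d_aid: float) -> float:
    """Sensitivity of an ideal team that optimally combines two
    statistically independent Gaussian observers (Sorkin & Dai, 1994)."""
    return sqrt(d_human**2 + d_aid**2)

def efficiency(d_obtained: float, d_ideal: float) -> float:
    """Statistical efficiency: squared ratio of obtained to ideal d'."""
    return (d_obtained / d_ideal) ** 2

# Hypothetical values for illustration only.
d_aid = 3.0      # aid sensitivity reported in the abstract
d_human = 1.5    # placeholder unaided human sensitivity
d_team = 2.5     # placeholder obtained team sensitivity

d_ideal = ideal_team_dprime(d_human, d_aid)
print(f"ideal team d'  = {d_ideal:.2f}")                   # ~3.35
print(f"efficiency     = {efficiency(d_team, d_ideal):.2f}")  # ~0.56
```

On this benchmark, an efficiency near 1 means the team extracts nearly all the information available from human and aid combined; values well below 1, as the abstract reports, indicate that the aid's advice is being heavily discounted.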


Cite this as:

McCarley, J. (2020, November). Benchmarking Automation-Aided Signal Detection. Paper presented at MathPsych at Virtual Psychonomics 2020. Via mathpsych.org/presentation/303.