Benchmarking Automation-Aided Signal Detection
Human operators often perform signal detection tasks with assistance from automated aids. Unfortunately, users tend to disuse aids that are less than perfectly accurate (Parasuraman & Riley, 1997), disregarding the aids' advice even when it might be helpful. To facilitate cost-benefit analyses of automated signal detection aids, we benchmarked the performance of human-automation teams against the predictions of several models of information integration. Participants performed a binary signal detection task, with and without assistance from an automated aid. On each trial, the aid provided the participant with a binary judgment along with an estimate of its certainty. The comparison models ranged from perfectly efficient to highly inefficient. Even with an automated aid of fairly high sensitivity (d' = 3), performance of the human-automation teams was poor, approaching the predictions of the least efficient comparison models, and the efficiency of the human-automation teams was substantially lower than that achieved by pairs of human collaborators. The data indicate strong automation disuse and provide guidance for estimating the benefits of automated detection aids.
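The benchmarking logic can be made concrete with standard equal-variance Gaussian signal detection theory. The Python sketch below is illustrative, not the paper's actual model set: it computes d' from hit and false-alarm rates and contrasts two textbook team benchmarks, optimal integration of two independent detectors (d'_team = sqrt(d'_human^2 + d'_aid^2)) and an inefficient "best member" benchmark. Only the aid's d' = 3 comes from the abstract; the unaided human d' of 1.5 is an assumed value for illustration.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    # Equal-variance Gaussian SDT sensitivity: d' = z(H) - z(FA)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def optimal_team_d(d_human, d_aid):
    # Ideal integration of two independent observers:
    # d'_team = sqrt(d'_human**2 + d'_aid**2)
    return float(np.hypot(d_human, d_aid))

def best_member_d(d_human, d_aid):
    # Inefficient benchmark: the team does no better than its
    # more sensitive member (no information integration)
    return max(d_human, d_aid)

# Aid d' = 3 is from the abstract; human d' = 1.5 is an assumed,
# illustrative value for the unaided operator.
d_h, d_a = 1.5, 3.0
print(f"Optimal-integration prediction: d' = {optimal_team_d(d_h, d_a):.2f}")  # ~3.35
print(f"Best-member prediction: d' = {best_member_d(d_h, d_a):.2f}")           # 3.00
```

Under these assumptions, efficient teams should exceed the aid's d' of 3, whereas teams that discount the aid will fall toward (or below) the best-member prediction, which is the pattern the abstract describes.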