A Statistical Foundation for Derived Attention
According to derived attention theory, organisms attend to cues with strong associations (Le Pelley, Mitchell, Beesley, George, & Wills, 2016). Combined with a Rescorla-Wagner style learning mechanism, derived attention explains phenomena such as learned predictiveness (Lochmann & Wills, 2003), inattention to blocked cues (Beesley & Le Pelley, 2011), and value-based salience (Le Pelley, Mitchell, & Johnson, 2013). However, existing derived attention models cannot explain the inverse base rate effect (Medin & Edelson, 1988) or retrospective revaluation (Shanks, 1985). We have developed a Bayesian derived attention model that explains a wider array of results and gives further insight into the principle of derived attention. Our approach combines Bayesian linear regression with the assumption that the associations of any one cue with its various outcomes share the same prior variance. The new model simultaneously estimates cue-outcome associations and prior variance through approximate Bayesian learning. A significant cue will develop large associations, leading the model to estimate a high prior variance for that cue and hence to develop larger associations from that cue to novel outcomes: this provides a normative, statistical explanation for derived attention. Through simulation, we show that the Bayesian derived attention model not only explains the same phenomena as existing derived attention models, but also retrospective revaluation and the inverse base rate effect. We hope that further development of the Bayesian derived attention model will shed light on the complex relationship between uncertainty and predictiveness effects on attention (Pearce & Mackintosh, 2010).
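The shared-prior-variance idea can be illustrated with a minimal sketch: Bayesian linear regression in which each cue's associations across outcomes share one prior variance, re-estimated from the current association estimates. This is a rough empirical-Bayes approximation, not the authors' model; the two-outcome toy task, noise level, and all variable names are illustrative assumptions.

```python
import numpy as np

def posterior_weights(X, y, prior_var, noise_var=0.01):
    """Posterior mean of regression weights under a diagonal
    Gaussian prior with per-cue variance prior_var."""
    precision = np.diag(1.0 / prior_var) + X.T @ X / noise_var
    return np.linalg.inv(precision) @ X.T @ y / noise_var

rng = np.random.default_rng(0)
n, d = 200, 2
X = rng.binomial(1, 0.5, size=(n, d)).astype(float)
# Toy task: cue 0 predicts both outcomes; cue 1 is irrelevant.
y1 = 1.0 * X[:, 0] + rng.normal(0, 0.1, n)
y2 = 0.8 * X[:, 0] + rng.normal(0, 0.1, n)

prior_var = np.ones(d)  # one shared prior variance per cue
for _ in range(5):      # alternate weight and variance updates
    W = np.column_stack(
        [posterior_weights(X, y, prior_var) for y in (y1, y2)]
    )
    # Re-estimate each cue's prior variance from its associations
    # across outcomes (the shared-variance assumption), with a
    # small floor for numerical stability.
    prior_var = np.mean(W ** 2, axis=1) + 1e-6
```

After learning, the predictive cue carries a much larger estimated prior variance, so associations from it to any novel outcome would start from a broader prior and grow faster, which is the statistical reading of derived attention sketched in the abstract.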