How well can an individual's conclusion endorsement be predicted?
Reasoning about conditional statements is relevant in science, culture, and everyday life. It has been shown that humans deviate from a classical logical interpretation of conditionals. Consequently, in recent years a number of cognitive models based on Bayesian or mental model approaches have been developed, whose performance is typically judged by their ability to fit aggregated participant data. Here, we diverge by focusing on the individual instead. Moreover, we propose a different model testing paradigm by analyzing, on an existing large data set, how well current models predict an individual reasoner's endorsement on a scale from 0 to 100%. Towards this goal, we reanalyze the data while rigorously distinguishing between training and test sets, and we adapt existing models of conditional reasoning, such as the Dual Source Model (Singmann, Klauer, & Beller, 2016) and the model by Oaksford, Chater, and Larkin (2000), so that they yield predictions. We also implement a modeling idea of Pearl based on possible worlds. We show that all three models perform equally well in predicting an individual reasoner's endorsement and that they meet an empirical baseline (the median of the most frequent answer). A discussion of the insights gained into conditional reasoning concludes the paper.