Dr. Peter Lane
Dr. Laura Bartlett
Dr. Noman Javed
Dr. Angelo Pirrone
Dr. Fernand Gobet
Cognitive models that explain and predict human performance in experimental settings are often challenging to develop and verify. We describe a process for automatically generating the programs of cognitive models from a user-supplied specification, using genetic programming (GP). We first construct a suitable fitness function that takes into account observed errors and reaction times. We then introduce post-processing techniques to transform the large number of candidate models produced by GP into a smaller set of models, whose diversity can be depicted graphically and which can be studied individually through pseudo-code. These techniques are demonstrated on a typical neuroscientific task, the Delayed Match to Sample task, with the final set of symbolic models separated into two types, each employing a different attentional strategy.
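The abstract does not give the form of the fitness function; a minimal sketch of one plausible form, assuming a weighted combination of root-mean-square discrepancies between model and human error rates and reaction times (all names here are illustrative, not from the paper), might look like:

```python
import math

def fitness(model_errors, human_errors, model_rts, human_rts, w_rt=0.5):
    """Hypothetical GP fitness: weighted RMSE between model and human
    error rates and reaction times across conditions (lower is better)."""
    def rmse(xs, ys):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))
    return (1 - w_rt) * rmse(model_errors, human_errors) + w_rt * rmse(model_rts, human_rts)
```

A candidate model scoring 0 would match the human data exactly on both measures; the weight `w_rt` trades off the two sources of fit.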
Previous research using goal-directed computational models has demonstrated that microlapses, or brief disruptions in effortful cognitive processing, are related to decreases in vigilance as a function of time-on-task in the psychomotor vigilance test (PVT) (Veksler and Gunzelmann, 2018). We extended these computational accounts of fatigue to model performance in two vigilance tasks that differ with respect to demands on working memory, i.e., successive vs. simultaneous discrimination (Davies and Parasuraman, 1982). While task performance was not affected by working memory demands, simulations show that fatigue moderators successfully capture decreases in vigilance over time. Additionally, participants showed greater individual differences in model parameters related to task performance, but not in the effects of fatigue across time. These results highlight the importance of fatigue moderators in computational accounts of vigilance tasks.
Ms. Nicole Tan
Dr. Yiyun Shou
Dr. Junwen Chen
To date, little is known about the role of social anxiety in the assignment of evidence weights, which could contribute to the jumping-to-conclusions bias. The present study used a Bayesian computational method to understand the mechanism of the jumping-to-conclusions bias in social anxiety, specifically through the weights assigned to sampled information. The study also investigated the specificity of the jumping-to-conclusions bias in social anxiety using three variations of the beads task, covering neutral and socially threatening situations. A sample of 210 participants was recruited from online communities to complete the beads tasks and a set of questionnaires measuring trait variables, including social anxiety and the fears of positive and negative evaluation. The Bayesian model estimations indicated that social anxiety and fears of evaluation did not significantly bias the assignment of evidence weights to information received, except when mostly positive feedback was shown. Our results did not support a significant association between the jumping-to-conclusions bias and social anxiety or fears of evaluation.
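The abstract does not spell out the model equations; one common formalisation of evidence weighting in the beads task, sketched here under the assumption of a weighted Bayesian log-odds update (the parameter names are illustrative), is:

```python
import math

def posterior_prob(draws, p_majority=0.6, weight=1.0, prior=0.5):
    """Posterior probability of the majority-colour jar after a sequence
    of bead draws (1 = majority colour, 0 = minority). The evidence
    weight scales the per-bead log-likelihood ratio: values above 1
    over-weight each bead, consistent with jumping to conclusions."""
    log_odds = math.log(prior / (1 - prior))
    llr = math.log(p_majority / (1 - p_majority))  # per-bead log-likelihood ratio
    for d in draws:
        log_odds += weight * (llr if d == 1 else -llr)
    return 1 / (1 + math.exp(-log_odds))
```

Under this kind of model, estimating the weight parameter per participant and per task variant is what allows the analysis above to test whether social anxiety biases the weighting of sampled information.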
Argumentation is a widely studied topic in AI, philosophy, and psychology. In this paper we are particularly interested in its psychological implications. On the basis of several experiments, Mercier and Sperber argued that argumentation is the basis of human reasoning. Yet how can a cognitively plausible argumentation process be implemented such that it accounts for the lower levels of cognition? Taking Cognitive Argumentation as our theoretical foundation, we propose two models of conditional reasoning implemented in the cognitive architecture ACT-R and evaluate them against human responses to a well-known reasoning task.