ICCM: Logic & Learning
André Brechmann
Marcel Lommerzheim
During learning, humans often test new hypotheses to infer causal relations between objects and actions. One very common example is category learning, in which humans learn to differentiate between stimuli based on their features. The rational aspects of category learning, in the form of hypothesis testing, need to be taken into account to improve computational models. Compared to reinforcement learning models, which assume gradual learning, cognitive modeling makes it possible to implement hypothesis testing and thus enables steep transitions in learning. Here we extend our previously developed ACT-R model in a systematic way to further improve its fit to an auditory category learning and reversal learning experiment. For the initial category learning phase, we optimized the model by enabling it to use two stimulus features right from the start. To improve the model's performance in the reversal phase, we introduced an additional mechanism that switches the motor response for a given categorization. With these two changes we significantly increased the model's performance on our task. By comparing the backward learning curves of the participants to those of our model, we observed that our model exhibits steep transitions during the initial category learning phase, a feature that reinforcement learning models have difficulty reproducing.
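The two mechanisms described in the abstract can be illustrated with a schematic sketch. This is not the authors' ACT-R code: the toy environment, names, and switch probability below are purely illustrative, and the model's two-feature rule is simplified here to single-feature hypotheses. The point is the control structure: on an error in the reversal phase, the learner may first switch the motor response for its current rule before abandoning the rule itself.

    import random

    def make_trial(t, n_trials):
        """Toy stimuli with two binary features; the category is defined by
        feature "f1" and the response mapping reverses halfway through."""
        stim = {"f1": random.randint(0, 1), "f2": random.randint(0, 1)}
        label = stim["f1"] if t < n_trials // 2 else 1 - stim["f1"]
        return stim, label

    def simulate(n_trials=200):
        feature = "f1"          # current feature hypothesis
        mapping = {0: 0, 1: 1}  # motor mapping: feature value -> response key
        history = []
        for t in range(n_trials):
            stim, label = make_trial(t, n_trials)
            correct = mapping[stim[feature]] == label
            history.append(correct)
            if not correct:
                if random.random() < 0.5:
                    # Reversal mechanism: keep the rule, but switch the
                    # motor response assigned to each category.
                    mapping = {v: 1 - r for v, r in mapping.items()}
                else:
                    # Hypothesis testing: try a different stimulus feature.
                    feature = random.choice(["f1", "f2"])
        return history

Because an error can be repaired by a single response switch rather than relearning, a learner like this can show the steep backward learning curves mentioned above.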
This is an in-person presentation on July 19, 2023 (15:20 ~ 16:00 UTC).
Michael Collins
Michael Krusmark
Tiffany (Jastrzembski) Myers
Computational models of human memory have largely been developed in laboratory settings, using data from tightly controlled experiments that were designed to test specific assumptions of a small set of models. This approach has resulted in a range of models that explain experimental data very well. Over the last decade, more and more large-scale data sets from outside the laboratory have been made available, and researchers have been extending their model comparisons to include such real-life data. We follow this example and conduct a simulation study in which we compare a number of model variants across eight data sets that include both experimental and naturalistic data. Specifically, we test the Predictive Performance Equation (PPE), a lab-grown model, and its ability to predict performance across the entire range of data sets depending on whether one or both of its crucial components are included in the model. These components were specifically designed to account for spacing effects in learning and are theory-inspired summaries of the entire learning history for a given user-item pair. By replacing these terms with simple lag times (rather than full histories) or a single free parameter, we reduce the PPE's complexity. The results, broadly speaking, suggest that the full PPE performs best on experimental data but that little predictive accuracy is lost if the terms are omitted from the model where naturalistic data are concerned. A possible reason is that spacing effects matter much less in real-life data than in experiments specifically designed to produce them.
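For orientation, here is a rough sketch of a PPE-style prediction for a single user-item pair, with the two history-summarizing components marked. The exact equations and parameter values in Walsh et al. (2018) differ in detail, and all names and numbers below are placeholders; the reduced variants tested in the study replace these components with a single lag time or a free parameter.

    import math

    def ppe_predict(ages, c=0.1, x=0.6, b=0.04, m=0.08, tau=-0.7, s=0.3):
        """Sketch of a PPE-style prediction for one user-item pair.

        `ages` are positive times since each previous exposure, most
        recent last (so the list is descending). Parameter names and
        defaults are illustrative only.
        """
        n = len(ages)
        # Component 1: model time T, a weighted summary of the full
        # exposure history that emphasizes recent exposures.
        w = [t ** -x for t in ages]
        T = sum(wi * t for wi, t in zip(w, ages)) / sum(w)
        # Component 2: a decay rate modulated by the spacing of prior
        # lags; wider spacing -> slower decay (the spacing effect).
        lags = [ages[i] - ages[i + 1] for i in range(n - 1)] or [1.0]
        stability = sum(1.0 / math.log(lag + math.e) for lag in lags) / len(lags)
        d = b + m * stability
        # Power-law learning in N combined with power-law decay in T.
        M = n ** c * T ** -d
        # Map the memory strength onto observed performance (e.g., accuracy).
        return 1.0 / (1.0 + math.exp((tau - M) / s))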
This is an in-person presentation on July 19, 2023 (15:40 ~ 16:00 UTC).
In contrast to rationalist accounts, people do not always have consistent goals, nor do they always explain other people's behaviour as driven by rational goal pursuit. Elsewhere, counterfactual accounts have shown how a situation model can be perturbed to measure the explanatory power of different causes. We take this approach to explore how people explain others' behaviour in two online experiments and a computational model. First, 90 UK-based adults rated the likelihood of various scenarios combining short biographies with trajectories through a gridworld. Then 49 others saw each scenario and outcome, and verbally gave their best explanations for why the character moved the way they did. Participants generated a range of explanations for even the most incongruous behaviour. We present an expanded version of a counterfactual effect size model that uses innovative features (crowdsourced parameters and free-text responses) and not only generalises to human situations and handles a range of surprising behaviours but also performs better than the existing model it is based on.
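The underlying counterfactual effect-size idea can be sketched as a Monte Carlo procedure over perturbed situation models. This is a minimal sketch under assumed binary variables, with hypothetical names throughout; the expanded model in the paper adds crowdsourced parameters and free-text responses on top of a measure of roughly this kind.

    import random

    def counterfactual_effect_size(world, outcome_fn, cause, n=10000, stability=0.8):
        """Estimate how often the outcome changes when the candidate cause
        is counterfactually perturbed. Variables are assumed binary (0/1);
        each keeps its actual value with probability `stability`."""
        actual = outcome_fn(world)
        flipped, changed = 0, 0
        for _ in range(n):
            cf = {k: v if random.random() < stability else random.choice([0, 1])
                  for k, v in world.items()}
            if cf[cause] != world[cause]:
                flipped += 1
                changed += outcome_fn(cf) != actual
        return changed / flipped if flipped else 0.0

    # Hypothetical gridworld abstraction: did the character reach the goal?
    world = {"knows_shortcut": 1, "path_blocked": 0}
    reaches_goal = lambda w: w["knows_shortcut"] and not w["path_blocked"]
    print(counterfactual_effect_size(world, reaches_goal, "path_blocked"))

A cause scores high when perturbing it reliably flips the outcome, which is why such measures can rank competing explanations even for surprising behaviour.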
This is an in-person presentation on July 19, 2023 (16:00 ~ 16:20 UTC).
Florian Sense
Michael Krusmark
Tiffany (Jastrzembski) Myers
To explain the performance history of individuals over time, particular features of memory are posited, such as the power law of learning, the power law of decay, and the spacing effect. When these features of memory are integrated into a model of learning and retention, they have been able to account for human performance across a wide range of both applied and laboratory domains. However, these models of learning and retention assume that performance is best accounted for by a continuous performance curve. In contrast to this standard assumption, other researchers have argued that, over time, individuals display sudden discrete shifts in their performance due to changes in strategy and/or memory representation. To compare these two accounts of memory, the standard Predictive Performance Equation (PPE; Walsh, Gluck, Gunzelmann, Jastrzembski, & Krusmark, 2018) was compared to a Change PPE on fits to human performance in a naturalistic data set. We make several hypotheses about the expected characteristics of individual learning curves and the different abilities of the models to account for human performance. Our results show that the Change PPE not only fit the data better than the standard PPE, but also that inferred changes in a participant's performance were associated with greater learning outcomes.
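The contrast between the two accounts can be made concrete with a toy model comparison: fit one continuous power-law curve, fit a curve with a single discrete shift, and compare penalized fits. This is an illustrative sketch, not the Change PPE itself, which embeds the idea in the full PPE learning-and-retention machinery.

    import numpy as np

    def sse_power_law(trials, perf):
        """Sum of squared errors for a power-law fit P = a * t^b,
        via linear regression in log-log space (perf must be > 0)."""
        b, log_a = np.polyfit(np.log(trials), np.log(perf), 1)
        return np.sum((perf - np.exp(log_a) * trials ** b) ** 2)

    def compare_accounts(perf, k_params_extra=3):
        """Continuous account: one power law over all trials.
        Change account: two power laws joined at the best changepoint."""
        perf = np.asarray(perf, dtype=float)
        n = len(perf)
        t = np.arange(1, n + 1)
        sse_cont = sse_power_law(t, perf)
        # Grid-search the changepoint; trial counts restart after the shift.
        sse_change = min(
            sse_power_law(t[:k], perf[:k]) + sse_power_law(t[:n - k], perf[k:])
            for k in range(3, n - 3)
        )
        # Crude AIC-style penalty for the changepoint and extra parameters.
        aic_cont = n * np.log(sse_cont / n)
        aic_change = n * np.log(sse_change / n) + 2 * k_params_extra
        return "change" if aic_change < aic_cont else "continuous"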
This is an in-person presentation on July 19, 2023 (16:20 ~ 16:40 UTC).
Nicolas Riesterer
Marco Ragni
Syllogistic reasoning is one of the core domains of human reasoning research. Over its century of active research, various theories have been proposed that attempt to disentangle and explain the strategies human reasoners rely on. In this article we propose a data-driven approach that behaviorally clusters reasoners into archetypal groups based on non-negative matrix factorization. The identified clusters are interpreted in the context of state-of-the-art theories in the field and analyzed with respect to their posited key assumptions, e.g., the dual-processing account. We show interesting contradictions that add to a growing body of evidence suggesting shortcomings of the current state of the art in syllogistic reasoning research, and we discuss possibilities for overcoming them.
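As a pointer to the method, a minimal sketch of the clustering step follows. The response matrix, number of components, and cluster assignment rule here are hypothetical; only the use of non-negative matrix factorization is taken from the abstract.

    import numpy as np
    from sklearn.decomposition import NMF

    # Hypothetical data: rows = reasoners, columns = response frequencies
    # per (syllogism, conclusion) pair; NMF requires non-negative entries.
    rng = np.random.default_rng(0)
    X = rng.random((100, 64))

    # Factorize X ~= W @ H: rows of H are archetypal response profiles,
    # rows of W say how strongly each reasoner loads on each archetype.
    model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)
    H = model.components_

    # Assign each reasoner to the archetype they load on most strongly.
    clusters = W.argmax(axis=1)

Unlike hard clustering, the factorization lets a reasoner mix several archetypal strategies, which is what makes the loadings interpretable against accounts such as dual processing.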
This is an in-person presentation on July 19, 2023 (16:40 ~ 17:00 UTC).