VMP Fast Talks
Mr. Sicheng Liu
Dr. Michael Frank
Alexander Fengler
Classical versions of sequential sampling models (SSMs) assume that the rate of evidence accumulation is constant over a given trial. Empirical evidence, however, suggests that moment-by-moment attention, indexed for example by eye-gaze patterns, can shift the rate of accumulation so that it vacillates over the course of single trials. These dynamics are captured by models such as the attentional Drift Diffusion Model (aDDM). However, parameter inference for such models, in a way that faithfully tracks the generative process, remains a challenge. Specifically, the attention process, captured as arbitrary saccades and gaze times, forms a time-point-wise covariate that cannot be reduced to a fixed-dimensional summary statistic and thus poses a challenge even for likelihood-free methods at the research frontier. We propose a method for fast computation of likelihoods for a class of models that subsumes the aDDM. The method divides each trial into discrete time stages with fixed attention, uses fast analytical methods to compute stage-wise likelihoods, and integrates these to calculate overall trial-wise likelihoods. Operationalizing this method, we characterize parameter recovery in a variety of settings and compare it to widely used approximations to the aDDM, which instead use only fixation proportions to maintain tractable likelihoods. We characterize the space of experiments in which such approximations may be appropriate and point out which settings drive the model formulations apart. Our method will be made available to the community as a small Python package that integrates seamlessly into the wider probabilistic programming ecosystem around the PyMC Python library.
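The generative process underlying the stage-wise view can be illustrated by simulation: within each fixation the drift is constant, switching when gaze shifts. The sketch below is a minimal aDDM-style simulator under assumed conventions (the function name, parameterization, and the folding of the drift-scaling constant into the value inputs are all illustrative, not the authors' implementation):

```python
import numpy as np

def simulate_addm_trial(v_left, v_right, theta, sigma, bound,
                        fixations, dt=0.001, rng=None):
    """Simulate one aDDM-style trial with piecewise-constant attention.

    fixations: list of (item, duration_s) pairs, item in {"left", "right"}.
    theta: discount on the unattended item's value (0 <= theta <= 1).
    Returns (choice, rt); choice is None if no bound is hit before the
    fixation sequence runs out.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    for item, duration in fixations:
        # Attended item contributes fully; unattended item is discounted.
        if item == "left":
            drift = v_left - theta * v_right
        else:
            drift = -(v_right - theta * v_left)
        for eps in rng.normal(0.0, sigma * np.sqrt(dt), int(duration / dt)):
            x += drift * dt + eps
            t += dt
            if x >= bound:
                return "left", t
            if x <= -bound:
                return "right", t
    return None, t
```

For example, with a strongly valued left item and negligible noise, the accumulator drifts straight to the upper bound, so choice and response time follow deterministically from the fixation sequence.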
Dr. Matthew Nassar
Mr. An Vo
Alexander Fengler
Sequential Sampling Models (SSMs) are ubiquitously applied to empirical data from two- or more-alternative choice tasks, subsuming a large variety of task paradigms. Nevertheless, the space of models typically considered is often limited to those that are analytically tractable for inference. More recently, the field of simulation-based inference has enabled the development and evaluation of a much broader class of models. Here we leverage developments in likelihood-free inference using artificial neural networks to evaluate a range of models applied to a hierarchical decision-making task. Participants were presented with stimuli, in the form of lines, that varied across three dimensions: movement direction, line orientation, and color. These three features imply three potential decisions (dominant motion direction, etc.) on a given trial. One feature was designated the 'high' dimension and determined which of the remaining two 'low-dimensional' features was relevant for a given choice scenario. The task is therefore hierarchical, in that the high-dimensional feature acts as a filter on which one of the two remaining tasks a subject needs to solve. To investigate the cognitive strategies participants use to solve these tasks, we developed a range of diffusion model variants to assess whether participants accumulate evidence strictly hierarchically and therefore sequentially, in parallel, or via a hybrid resource-rational approach. We will assess model fits and posterior predictive simulations to arbitrate between these accounts and to link them to trial-by-trial neural dynamics (via EEG) associated with encoding of the higher- and lower-dimensional features.
Prof. Joe Houpt
Dr. Othalia Larue
Kevin Schmidt
Cognitive architectures (CAs) are unified theories of cognition which describe invariant properties in the structure and function of cognition, and how sub-systems (e.g., memory, vision) interact as a coherent system. An important role of CAs is integrating findings across many domains into a unified theory and preventing research silos. One downside of CAs is that their breadth and complexity create challenges for deriving critical tests of core architectural assumptions. Consequently, it is often unclear to what extent empirical tests of CAs are driven by core architectural vs. auxiliary assumptions. To address this issue, we developed a methodology for deriving critical tests of CAs which combines systems factorial technology (SFT; Townsend & Nozawa, 1995) and global model analysis (GMA), forming what we call SFT-GMA. In SFT-GMA, GMA is performed within an SFT model space of qualitative model classes spanning four dimensions: architecture, stopping rule, dependence, and workload capacity. Constraints on the model space are derived from core architectural assumptions, which may provide a basis for critical tests. To demonstrate the utility of SFT-GMA, we applied it to the ACT-R cognitive architecture (Anderson et al., 2004). Despite many degrees of freedom in the specification of parameter values, production rules, and declarative memory representations, SFT-GMA revealed that ACT-R's core architectural assumptions impose testable constraints on the SFT model space. In particular, ACT-R is incompatible with most parallel SFT models of perceptual processing. We believe that the use of theorem-based methods such as SFT-GMA has the potential to stimulate theoretical progress for CAs. The views expressed in this paper are those of the authors and do not reflect the official policy or position of the Department of Defense or the US Government. This work was supported by the Air Force Research Laboratory (FA8650-22-C-1046).
Approved for public release; distribution unlimited. Cleared 12/21/2023; Case Number: AFRL-2023-6387.
Mrs. Svetlana Korobova
Mr. Evgeniy Koltunov
Project-based learning at the university is considered one of the best models for training future specialists. Students demonstrate different performance levels under the same external learning conditions. Correlations between effectiveness and individual psychological resources have so far been studied separately. It is therefore relevant to study the combined contribution of students' psychological resources to the effectiveness of project-based learning. The aim of the research is to develop a model for differentiating students with different levels of effectiveness in project-based learning based on their psychological resources. Research design: the study was conducted in three stages (determination of the themes and conditions of the projects; project presentation; evaluation of project effectiveness). Methods: the Lemyre-Tessier-Fillion psychological stress scale, the Social Readjustment Rating Scale by T.H. Holmes and R.H. Rahe, the "Typology of life path personal choice" methodology by V.G. Gryazeva-Dobshinskaya and A.S. Maltseva, S.R. Maddi's Hardiness test, the Rorschach inkblot test, and "Role Relations between Social Subjects and Creative Personalities" by V.G. Gryazeva-Dobshinskaya et al. Sample: 139 students, including students with low (45 people) and high (50 people) effectiveness levels. Results: a model for differentiating students with different effectiveness levels was developed using discriminant analysis with a stepwise selection method. The accuracy of differentiation amounted to 75.8%. Significant contributions are made by features of stress experience, by psychodynamic indicators (endurance, tempo, emotionality), and by the personal (personal choice, hardiness, productivity, compositionality) and socio-psychological (reflection of social roles, value bases, and attitudes toward them) levels of integral individuality. The research was carried out under RSF grant No. 23-28-10216.
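The analysis pipeline described here (discriminant analysis with stepwise predictor selection) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's data or code; the feature counts, sample size, and all names are placeholders, and scikit-learn's forward sequential selector stands in for the stepwise procedure:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for psychological-resource scores of low- vs.
# high-effectiveness students (95 cases, 12 candidate predictors).
X, y = make_classification(n_samples=95, n_features=12, n_informative=4,
                           random_state=0)

# Forward stepwise selection wrapped around a linear discriminant classifier:
# predictors are added one at a time while cross-validated accuracy improves.
model = make_pipeline(
    SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                              n_features_to_select=4,
                              direction="forward", cv=5),
    LinearDiscriminantAnalysis(),
)
model.fit(X, y)
accuracy = model.score(X, y)   # in-sample classification accuracy
```

The reported 75.8% differentiation accuracy corresponds to `accuracy` in this sketch, computed on the real predictor battery rather than synthetic features.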
Claire E Stevenson
Dr. Michael D. Nunez
Analogical reasoning is a cornerstone of human cognition that serves as one of the foundations for learning, creativity, and problem-solving. Recent progress in artificial intelligence (AI) points towards the potential emergence of models that could soon reach human performance in their reasoning capabilities. Such models could, in turn, offer unprecedented opportunities for exploring the mechanisms behind human cognition. The present study leverages this interplay between AI and cognitive neuroscience to dissect the processes underpinning abstract reasoning, using behavioral and electroencephalography (EEG) data from a task designed to distinguish between perceptual and higher-order cognitive processes. In this task, participants are presented with a series of icons that follow a logical order and are asked to predict the next icon in the series, out of a set of four options. The icons are first briefly displayed one by one in random order at their respective locations, before being displayed all together until the participant responds. This design allows us to first capture the EEG activity corresponding to the perception of the stimuli before any form of reasoning can take place. These data will then be compared with the performance and embeddings of current AI models, both during and after training on the same task. The primary goal is to identify candidate AI models whose reasoning capabilities closely match those of humans, and whose embeddings correlate with cortical activity patterns. Anticipated directions include refining these AI models based on our findings, aiming to more closely align them with human cognitive processes and behavior.
Daniel W. Heck
The illusory truth effect refers to the phenomenon that repeated exposure to a statement increases its perceived truthfulness. In truth-effect studies, binary judgments are usually aggregated within subjects, yielding proportions between 0 and 1. These values are then used as the dependent variable in an analysis of variance (ANOVA). However, this procedure has several limitations. First, it assumes that all statements in the study are homogeneous, even though they vary in terms of many properties. Second, proportions are subject to floor and ceiling effects, causing violations of model assumptions such as homoscedasticity and yielding impossible predictions beyond 0 and 1. Third, the ANOVA approach does not allow trial-level predictors to be added. A solution to these issues is offered by generalized linear mixed-effects models (GLMMs). The random-effects structure can account for differences both across persons and across statements, the use of a link function prevents the model from making impossible predictions, and trial-level predictors can easily be included. GLMMs also offer theoretical benefits, since the estimated regression coefficients can be interpreted as response bias and discrimination sensitivity in terms of signal detection theory. To compare the results of ANOVA and different GLMM specifications, we re-analyzed 22 openly available datasets from 2018 to 2024. Preliminary results show that GLMMs with random intercepts for subjects only do not solve these problems; on the contrary, they lead to even higher rates of finding significant effects. However, once random intercepts for statements are added, p-values become more conservative.
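The GLMM structure described above can be made concrete with a minimal simulation of the trial-level data-generating process: a logistic model with a repetition fixed effect plus crossed random intercepts for subjects and statements. All names, parameter values, and the simplified per-trial repetition indicator are hypothetical illustrations, not the re-analyzed datasets:

```python
import numpy as np

def simulate_truth_judgments(n_subjects=50, n_statements=40, beta_0=-0.2,
                             beta_rep=0.8, sd_subject=0.5, sd_statement=0.7,
                             seed=0):
    """Simulate binary truth judgments under a logistic GLMM:
    logit P("true") = beta_0 + beta_rep * repeated + u_subject + u_statement.
    Returns arrays (subject, statement, repeated, judged_true)."""
    rng = np.random.default_rng(seed)
    # Crossed random intercepts for persons and statements.
    u_subj = rng.normal(0.0, sd_subject, n_subjects)
    u_stmt = rng.normal(0.0, sd_statement, n_statements)
    subj, stmt = np.meshgrid(np.arange(n_subjects), np.arange(n_statements),
                             indexing="ij")
    subj, stmt = subj.ravel(), stmt.ravel()
    # Simplification: each trial is randomly coded as repeated (1) or new (0).
    repeated = rng.integers(0, 2, subj.size)
    eta = beta_0 + beta_rep * repeated + u_subj[subj] + u_stmt[stmt]
    p = 1.0 / (1.0 + np.exp(-eta))             # inverse-logit link
    judged_true = (rng.random(subj.size) < p).astype(int)
    return subj, stmt, repeated, judged_true
```

Because the link function maps the linear predictor into (0, 1), predicted probabilities can never leave the unit interval, and the crossed intercepts capture the person- and statement-level heterogeneity that aggregated proportions discard.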