Symposium: Deep Learning for Cognitive Modeling
Dr. Jamal Amani Rad
Dr. Michael D. Nunez
Cognitive neuroscience studies routinely concentrate on calculating the correlation between trial-averaged Event-Related Potentials (ERPs) and behavioral performance measures such as response time and accuracy. However, this traditional approach has several disadvantages: 1) it ignores the variance of EEG data across trials, 2) it requires a large number of participants to draw robust inferences, and 3) it lacks formal cognitive models to explain cognition. In this work, we used the drift-diffusion model to decompose perceptual decision making into underlying latent variables that explain behavioral performance. This model assumes that participants make decisions by continuously accumulating evidence over time until it hits one of two alternative bounds. We introduce new integrative neurocognitive models that predict and constrain both behavioral and electroencephalographic (EEG) data concurrently at the single-trial level. Our framework shows how N200 latencies and centro-parietal positivities (CPPs) can be used to predict visual encoding time and drift-rate parameters, respectively. Moreover, we quantified what proportion of EEG variance across trials is related to cognition and what proportion is related to measurement noise. We used a likelihood-free (simulation-based) deep learning approach to approximate the distribution of the latent parameters. We showed that the models are robust to violations of model assumptions and to contaminant processes, and we carried out parameter recovery assessments to explore how well the models' parameters are identifiable. We fit the models to three different datasets comprising EEG and behavioral data to test their applicability and reliability. In the future, this framework can conveniently be applied to multimodal data simultaneously (e.g., single-trial fMRI, EEG, and behavioral data) to study perceptual decision making.
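As a rough illustration of the evidence-accumulation process described above, the following Python sketch simulates a single drift-diffusion trial by Euler integration. The function name and parameter values are made up for illustration and are not the presented neurocognitive models; the abstract's idea is that single-trial N200 latencies would inform the non-decision (encoding) time and CPPs the drift rate.

```python
import numpy as np

def simulate_ddm_trial(drift, boundary, ndt, dt=0.001, noise_sd=1.0, rng=None):
    """Simulate one drift-diffusion trial: evidence starts midway between 0 and
    `boundary` and accumulates until it crosses either bound.
    Returns (response time in seconds, choice: 1 = upper bound, 0 = lower bound)."""
    rng = rng or np.random.default_rng()
    evidence = boundary / 2.0          # unbiased starting point
    t = 0.0
    while 0.0 < evidence < boundary:
        evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t, int(evidence >= boundary)

# Illustrative values only: ndt could be informed by a single-trial N200 latency,
# drift by a single-trial CPP measure, as described in the abstract.
rt, choice = simulate_ddm_trial(drift=1.5, boundary=1.8, ndt=0.35)
```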
This is an in-person presentation on July 19, 2023 (09:00 ~ 09:20 UTC).
Dr. Martin Schnuerch
Paul-Christian Bürkner
Stefan Radev
Bayesian model comparison permits principled evidence assessment but is challenging for hierarchical models (HMs) due to their complex multi-level structure. In this talk, we present a deep learning method for comparing HMs via Bayes factors or posterior model probabilities. As a simulation-based approach, its application is not limited to HMs with explicitly tractable likelihood functions but extends to models with implicit likelihoods. Further, the computational cost of our method amortizes over multiple applications, providing new opportunities for method validation, robustness checks, and simulation studies. We demonstrate the ability of our method to accurately discriminate between non-nested HMs of cognition in a benchmark against bridge sampling. In addition, we present a comparison of four partly intractable evidence accumulation models that examines the utility of the recently proposed Lévy flight model of decision-making.
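The general idea of classifier-based, amortized simulation-based model comparison can be sketched in a few lines of Python. The sketch below uses two toy non-hierarchical models, hand-picked summary statistics, and an off-the-shelf scikit-learn classifier rather than the hierarchical neural architecture presented in the talk; it only illustrates why a classifier trained on simulations approximates posterior model probabilities.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Two toy candidate models as stand-ins for the HMs in the talk:
# model 0 generates Gaussian data, model 1 generates heavier-tailed Student-t data.
def simulate(model_idx, n_obs=100):
    if model_idx == 0:
        x = rng.normal(0.0, 1.0, n_obs)
    else:
        x = rng.standard_t(df=3, size=n_obs)
    # Fixed-length summary statistics so datasets of any size can be compared.
    return np.array([x.mean(), x.std(), np.mean(np.abs(x) > 2.0)])

labels = rng.integers(0, 2, size=5000)          # model indices drawn with equal prior probability
train = np.stack([simulate(m) for m in labels])

# The classifier's predicted class probabilities approximate posterior model
# probabilities; once trained, it is reusable (amortized) for any new dataset.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(train, labels)
post_probs = clf.predict_proba(simulate(1).reshape(1, -1))[0]
```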
This is an in-person presentation on July 19, 2023 (09:20 ~ 09:40 UTC).
Dr. Mischa von Krause
Eva Marie Wieschen
Lasse Elsemüller
Veronika Lerche
The diffusion model (DM; Ratcliff, 1978) assumes that decisions originate from a continuous evidence accumulation process that is subject to Gaussian noise. The Lévy flight model (LFM; Voss et al., 2019) provides a modification thereof. Specifically, the LFM assumes accumulation noise to follow a more heavy-tailed distribution which allows for sudden large changes in the amount of accumulated evidence (i.e., jumps in evidence accumulation). The heavy-tailedness of the noise distribution is governed by the additional free parameter α. A previous study found α to be lower and, thus, jumps in evidence accumulation to be more prevalent under speed instructions. Building upon this finding, we also compared speed versus accuracy conditions using a letter-number discrimination task. However, aiming to contribute to a deeper understanding of the behavior of α under different levels of time pressure, we further intensified time pressure by imposing a response deadline of 500 ms in one condition. Because the altered noise distribution renders the LFM’s likelihood intractable, we used the simulation-based deep learning framework BayesFlow for our analyses. We found that, for most participants in the accuracy condition, accumulation noise was (nearly) normally distributed. By contrast, for most participants under intensified time pressure, accumulation noise was best described by distributions with remarkably heavy tails. Accordingly, the prevalence of jumps in evidence accumulation increased with time pressure. Importantly, corresponding α-values were clearly lower than those reported in all previous studies. Comparisons of the fit of different variants of the DM and LFM alongside implications for modeling decision processes under (deadline-based) time pressure are discussed.
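For intuition about the role of α, the following Python sketch simulates a single LFM trial using SciPy's alpha-stable distribution; the parameter values are illustrative only and the discretization is a simple Euler scheme, not the simulator used in the reported analyses.

```python
import numpy as np
from scipy.stats import levy_stable

def simulate_lfm_trial(drift, boundary, ndt, alpha, dt=0.001):
    """One Lévy flight model trial: like the DM, but accumulation noise follows
    a symmetric alpha-stable distribution. alpha = 2 recovers Gaussian noise;
    smaller alpha yields heavier tails, i.e. occasional jumps in evidence."""
    evidence, t = boundary / 2.0, 0.0
    while 0.0 < evidence < boundary:
        # Stable increments scale with dt**(1/alpha) instead of sqrt(dt).
        noise = levy_stable.rvs(alpha, 0.0) * dt ** (1.0 / alpha)
        evidence += drift * dt + noise
        t += dt
    return ndt + t, int(evidence >= boundary)

# Lower alpha corresponds to more frequent evidence jumps, as reported
# under intensified time pressure:
rt, choice = simulate_lfm_trial(drift=1.0, boundary=1.5, ndt=0.3, alpha=1.4)
```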
This is an in-person presentation on July 19, 2023 (09:40 ~ 10:00 UTC).
Stefan Radev
Andreas Voss
Paul-Christian Bürkner
Mathematical models of cognition are often memoryless and ignore potential fluctuations of their parameters. However, human cognition is inherently dynamic. Thus, we propose to augment mechanistic cognitive models with a temporal dimension and estimate the resulting dynamics from a superstatistics perspective. Such a model entails a hierarchy between a low-level observation model and a high-level transition model. The observation model describes the local behavior of a system, and the transition model specifies how the parameters of the observation model evolve over time. To overcome the estimation challenges resulting from the complexity of superstatistical models, we develop and validate a simulation-based deep learning method for Bayesian inference, which can recover both time-varying and time-invariant parameters. We first benchmark our method against two existing frameworks capable of estimating time-varying parameters. We then apply our method to fit a dynamic version of the diffusion decision model to long time series of human response time data. Our results show that the deep learning approach is very efficient in capturing the temporal dynamics of the model. Furthermore, we show that the erroneous assumption of static or homogeneous parameters can hide important temporal information.
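A minimal two-level sketch of such a superstatistical generative model is given below in Python. Purely for illustration, it assumes a Gaussian random walk over the drift rate as the transition model and a basic diffusion trial as the observation model; the actual transition and observation models used in the talk may differ.

```python
import numpy as np

rng = np.random.default_rng(7)

def ddm_trial(drift, boundary, ndt, dt=0.001):
    """Low-level observation model: one diffusion trial with the current parameters."""
    evidence, t = boundary / 2.0, 0.0
    while 0.0 < evidence < boundary:
        evidence += drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t, int(evidence >= boundary)

# High-level transition model: the drift rate follows a Gaussian random walk
# across trials (one simple choice; other transition models are possible).
n_trials, sigma_v = 500, 0.05
drift = np.empty(n_trials)
drift[0] = 1.0
for i in range(1, n_trials):
    drift[i] = drift[i - 1] + sigma_v * rng.standard_normal()

# Generate one long behavioral time series from the two-level model.
data = np.array([ddm_trial(v, boundary=1.6, ndt=0.3) for v in drift])
rts, choices = data[:, 0], data[:, 1]
```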
This is an in-person presentation on July 19, 2023 (10:00 ~ 10:20 UTC).
Selina Zajdler
Mr. Lukas Schumacher
A ubiquitous finding in memory research is that over the course of a recall or recognition test, memory performance declines. This phenomenon is referred to as output interference, reflecting the notion that it results from the interference of information recalled or encountered during the test with subsequent retrieval. Indeed, there is a large body of experimental evidence indicating that the decline in memory performance is not simply due to a longer study-test gap or increasing fatigue. However, a limitation of previous studies is that the influence of interference versus attentional processes is typically inferred from the experimental context rather than measured directly. Moreover, performance is usually assessed across blocks of trials rather than single trials. Thus, the relative contribution and the exact trajectories of memory processes and attention in output interference remain unclear. We propose to address this open question with a dynamic diffusion model: the diffusion model is a popular cognitive model for the analysis of reaction times in binary decision tasks. In the context of recognition memory, it allows researchers to disentangle retrieval processes – such as the speed of information uptake as measured by the drift rate – from attention-related processes – such as the response criterion as measured by the boundary-separation parameter. By implementing the diffusion model in a recently proposed deep learning-based superstatistics framework, we can assess the dynamics of these parameters over the course of the memory test and, thus, directly measure the relative contribution of the associated processes to output interference. Applying this dynamic approach to empirical data, we show that both drift rate and boundary separation decline over the course of the test. This finding emphasizes the role of both interference and attention in the emergence of output interference in recognition memory. Moreover, it highlights the usefulness of the neural superstatistics framework for dynamic cognitive models.
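For intuition, the short Python sketch below assumes hypothetical, linearly declining trajectories for drift rate and boundary separation across a 200-trial test and shows how such trajectories translate into a performance decline; it illustrates the forward (generative) direction only, not the authors' estimation procedure or empirical results.

```python
import numpy as np

rng = np.random.default_rng(3)

def ddm_trial(drift, boundary, ndt=0.4, dt=0.001):
    """One diffusion trial; the upper bound is treated as the correct response."""
    evidence, t = boundary / 2.0, 0.0
    while 0.0 < evidence < boundary:
        evidence += drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t, int(evidence >= boundary)

# Hypothetical trajectories over a 200-trial recognition test: both the drift
# rate (retrieval) and the boundary separation (response caution) decline.
drift = np.linspace(2.0, 1.0, 200)      # slower information uptake late in the test
boundary = np.linspace(1.8, 1.2, 200)   # less cautious responding late in the test

sim = np.array([ddm_trial(v, a) for v, a in zip(drift, boundary)])
rts, correct = sim[:, 0], sim[:, 1]

# Accuracy in the first vs. last quarter of the test declines even though no
# item-level memory process is modeled here.
print(correct[:50].mean(), correct[-50:].mean())
```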
This is an in-person presentation on July 19, 2023 (10:20 ~ 10:40 UTC).
Amortized deep learning methods are transforming the field of simulation-based inference (SBI). However, most amortized methods rely solely on simulated data to refine their global approximations. We investigate a method to jointly compress both simulated and actually observed exchangeable sequences of varying sizes and to use the compressed representations for downstream Bayesian tasks. We employ information-maximizing variational autoencoders (VAEs), which we augment with normalizing flows for more expressive representation learning. We showcase the ability of our method to learn informative embeddings on toy examples and two real-world modeling scenarios.
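One core ingredient of compressing exchangeable sequences of varying size is a permutation-invariant encoder. The PyTorch sketch below shows a minimal deep-set style encoder producing a fixed-size Gaussian embedding; the normalizing-flow augmentation and the information-maximizing VAE objective from the talk are omitted, and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Minimal permutation-invariant (deep set) encoder: maps an exchangeable
    sequence of any length to a fixed-size embedding by pooling over items."""
    def __init__(self, x_dim=1, hidden=64, z_dim=8):
        super().__init__()
        self.item_net = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        # Gaussian variational posterior over the embedding, as in a VAE.
        self.mu = nn.Linear(hidden, z_dim)
        self.log_var = nn.Linear(hidden, z_dim)

    def forward(self, x):                    # x: (batch, n_items, x_dim)
        h = self.item_net(x).mean(dim=1)     # mean-pooling gives order invariance
        return self.mu(h), self.log_var(h)

# Sequences of different sizes map to embeddings of the same dimensionality.
enc = SetEncoder()
mu_short, _ = enc(torch.randn(4, 50, 1))
mu_long, _ = enc(torch.randn(4, 500, 1))
```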
This is an in-person presentation on July 19, 2023 (10:40 ~ 11:00 UTC).