# Evidence-Accumulation Models: Methods

J. Manuel Villarreal

Michael Lee

Joachim Vandekerckhove

The circular drift diffusion model (CDDM; Smith, 2016, Psychological Review) is a sequential-sampling decision-making model used to describe the choices and response times observed in scenarios where participants have to make decisions on a circular space (i.e., the decision space is a bounded continuum that can be mapped onto a circle). Much like in Ratcliff’s (1978, Psychological Review) diffusion model, a core assumption is that evidence is accumulated over time until a response threshold is reached. The parameters of the CDDM can be mapped to relevant psychological processes such as response caution and information processing speed. We developed a custom JAGS module to facilitate working with the CDDM in a Bayesian framework. We present results from a parameter recovery study showing that the module is well suited to infer the parameter values used to generate bivariate datasets. The implementation in JAGS facilitates a number of useful model extensions: hierarchical models that capture different levels of variation across parameters (e.g., per individual, condition, experimental manipulation, etc.); latent variable models that identify their underlying factorial structure; mixture models that discern responses attributable to different simultaneously active processes; explanatory models that consider exogenous predictors; and so on. We present an application of our CDDM JAGS module to data collected by Kvam (2019, Journal of Experimental Psychology: Human Perception and Performance) in a continuous orientation judgment task. In this study, participants were asked to indicate the mean orientation of a rapid sequence of Gabor patches shown on every trial. The task design included manipulations of boundary distance through speed vs. accuracy instructions, and manipulations of drift magnitude and drift angle variability through different difficulty conditions. 
We built a hierarchical Bayesian model with a latent mixture structure to test four hypotheses: (1) The response boundary was higher when instructions prompted participants to favor accuracy rather than speed; (2) The drift magnitude decreased with task difficulty; (3) The variability in drift angle increased with task difficulty; and (4) Positive and negative deflections of the cue with respect to the true mean orientation had equivalent effects on the responses observed. We found evidence in support of all four hypotheses. We will present results and discuss further extensions of the model.
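
The generative process the CDDM describes can be sketched in a few lines (this is an illustrative simulation, not the authors' JAGS module; all parameter names here are our own): a two-dimensional random walk with a polar drift vector is run until it crosses a circular absorbing boundary, yielding a bivariate response of angle and response time.

```python
import math
import random

def simulate_cddm(drift_mag, drift_angle, threshold, ndt=0.3, dt=0.001, rng=None):
    """Simulate one trial of a circular drift diffusion process.

    Evidence is a two-dimensional random walk that starts at the origin
    and drifts with magnitude `drift_mag` toward `drift_angle` (radians);
    the trial ends when the walk crosses a circle of radius `threshold`.
    Returns (response_angle, response_time), with `ndt` the non-decision time.
    """
    rng = rng or random.Random()
    mux = drift_mag * math.cos(drift_angle)
    muy = drift_mag * math.sin(drift_angle)
    sd = math.sqrt(dt)  # within-trial noise, unit diffusion coefficient
    x = y = 0.0
    t = 0.0
    while x * x + y * y < threshold * threshold:
        x += mux * dt + sd * rng.gauss(0.0, 1.0)
        y += muy * dt + sd * rng.gauss(0.0, 1.0)
        t += dt
    return math.atan2(y, x), t + ndt
```

In this parameterization, a larger `threshold` (response caution) slows responses, while a larger `drift_mag` (information quality) concentrates the response angles around `drift_angle`.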

This is an in-person presentation on **July 20, 2023** (11:00 ~ 11:20 UTC).

Dr. Constantin Meyer-Grant

Prof. Christoph Klauer

The Wiener diffusion model (and its extensions in terms of trial-by-trial variability in drift rate, starting point, and non-decision time) is one of the most frequently used cognitive models for binary response tasks. A key advantage of this model framework is that it allows for jointly modeling response frequency and latency. In Hartmann and Klauer (2021) we derived the partial derivatives of the diffusion-model density with respect to up to seven model parameters as well as with respect to the response time itself. Moreover, we developed an R package (WienR) that can be used to calculate these partial derivatives (as well as the PDFs and CDFs) of the response time distribution conditional on one of the two possible responses. In Hartmann, Meyer-Grant, and Klauer (2022) we further extended the WienR package by developing and implementing an efficient adaptive rejection sampler (ARS) that builds on the above-mentioned partial derivatives. In the present talk, the partial derivatives, the ARS method, and the WienR package will be introduced.
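
Setting the package's actual API aside, the density WienR evaluates is the well-known first-passage-time density of the Wiener diffusion, which has a small-time series representation (Navarro & Fuss, 2009, Journal of Mathematical Psychology). A minimal pure-Python sketch, with the truncation level `K` chosen for illustration:

```python
import math

def wfpt_lower(t, v, a, w, K=10):
    """First-passage-time density at the lower boundary of a Wiener
    diffusion (drift v, boundary separation a, relative start w = z/a,
    unit diffusion coefficient), via the small-time series representation
    truncated at +/- K terms.
    """
    ts = t / (a * a)  # rescale time to the unit-boundary process
    series = sum((w + 2 * k) * math.exp(-((w + 2 * k) ** 2) / (2 * ts))
                 for k in range(-K, K + 1))
    f01 = series / math.sqrt(2 * math.pi * ts ** 3)
    return f01 * math.exp(-v * a * w - v * v * t / 2.0) / (a * a)

def wfpt_upper(t, v, a, w, K=10):
    """Density at the upper boundary, by reflection: v -> -v, w -> 1 - w."""
    return wfpt_lower(t, -v, a, 1.0 - w, K)
```

A convenient correctness check is that the two conditional densities, integrated over time, sum to a total probability of one.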

This is an in-person presentation on **July 20, 2023** (11:20 ~ 11:40 UTC).

Caroline Kuhne

Niek Stevenson

Mr. Gavin Cooper

Guy Hawkins

Jon-Paul Cavallaro

Reilly Innes

Estimating quantitative cognitive models from data is a staple of modern psychological science, but can be difficult and inefficient. Particle Metropolis within Gibbs (PMwG) is a robust and efficient sampling algorithm which supports model estimation in a hierarchical Bayesian framework. This talk will provide an overview of how cognitive modelling can proceed efficiently using PMwG, a new open-source package for the R language. PMwG, and the PMwG package, have the potential to move the field of psychology ahead in new and interesting directions, and to resolve questions that were once too hard to answer with previously available sampling methods.
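
The particle-based machinery aside, the blocked "within Gibbs" structure that PMwG builds on can be illustrated with a toy sampler: update one coordinate at a time with a random-walk Metropolis step, conditional on the rest. This sketch targets a correlated bivariate normal and is purely illustrative (it is not the PMwG package, whose blocks are random-effect vectors updated by particle approximations):

```python
import math
import random

def metropolis_within_gibbs(n_iter, rho=0.8, step=0.8, rng=None):
    """Toy Metropolis-within-Gibbs sampler for a bivariate normal with
    correlation `rho`: each sweep updates one coordinate at a time with
    a random-walk Metropolis step, conditional on the other coordinate.
    """
    rng = rng or random.Random()

    def log_target(x, y):  # unnormalized log density, correlation rho
        return -(x * x - 2.0 * rho * x * y + y * y) / (2.0 * (1.0 - rho * rho))

    x = y = 0.0
    samples = []
    for _ in range(n_iter):
        prop = x + step * rng.gauss(0.0, 1.0)  # block 1: update x | y
        if rng.random() < math.exp(min(0.0, log_target(prop, y) - log_target(x, y))):
            x = prop
        prop = y + step * rng.gauss(0.0, 1.0)  # block 2: update y | x
        if rng.random() < math.exp(min(0.0, log_target(x, prop) - log_target(x, y))):
            y = prop
        samples.append((x, y))
    return samples
```

Run long enough, the chain's sample moments recover the target's mean and correlation; the hierarchical case alternates between group-level and subject-level blocks in the same way.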

This is an in-person presentation on **July 20, 2023** (11:40 ~ 12:00 UTC).

Chris Donkin

The most popular models of perceptual decision making, such as the diffusion model, make relatively simple assumptions about the psychological mechanisms involved. Other models implement more plausible neural mechanisms, such as the Ising Decision Maker (IDM), which builds from the assumption that two pools of neurons with self-excitation and mutual inhibition receive perceptual input from external excitatory fields. In this study, we explore the consequences of using simple models to fit data generated by more complex, neurally plausible models. To do this, we simulate data from the IDM and fit it with the diffusion model (DDM), examining the relationship between the parameters that overlap in the two models. Results show that changes in stimulus distinctness and non-decision time in the IDM correspond exclusively to changes in drift rate and non-decision time in the DDM. Though the relationship is less linear, the detection box size in the IDM has a selective influence on boundary separation in the DDM, with smaller detection box sizes influencing boundary separation less than larger box sizes. In other simulations, we examine whether assumptions such as inhibition or evidence leakage, as implemented in different models, have a similar impact on predicted behavior. Here too, changes in stimulus distinctness and non-decision time in the IDM correspond exclusively to changes in drift rate and non-decision time in the Ornstein–Uhlenbeck model (OUM), while the negative relationship between detection box size in the IDM and boundary separation in the OUM is quite noisy. In terms of the more ‘complex’ assumptions, we see a clear linear relationship between self-excitation in the IDM and inhibition in the OUM. This study provides preliminary evidence that the simplifying assumptions of models like the DDM do not compromise their ability to estimate their core parameters. We also found that some of the more complex assumptions share ‘construct validity’ across different models, with the leakage parameter of the OUM and the self-excitation parameter of the IDM having a similar effect on predicted data.
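
The two simpler accumulators compared here differ by a single term: an Ornstein–Uhlenbeck accumulator with its leakage set to zero reduces to the standard Wiener (DDM) accumulator. A minimal simulation sketch under that assumption (parameter names and the symmetric-bound parameterization are illustrative, not taken from either paper's code):

```python
import math
import random

def simulate_ou(drift, leakage, threshold, dt=0.001, max_t=10.0, rng=None):
    """One trial of a one-dimensional Ornstein-Uhlenbeck accumulator:
    dx = (drift - leakage * x) dt + noise. With leakage = 0 this reduces
    to the standard Wiener (DDM) accumulator. Returns (choice, rt), with
    choice +1 / -1 for the upper / lower bound and 0 on timeout.
    """
    rng = rng or random.Random()
    sd = math.sqrt(dt)  # unit diffusion coefficient
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += (drift - leakage * x) * dt + sd * rng.gauss(0.0, 1.0)
        t += dt
    if t >= max_t:
        return 0, t
    return (1 if x > 0.0 else -1), t
```

Simulating from one model and fitting another, as in the study above, amounts to asking which fitted parameters absorb a change in, say, `leakage` when the fitting model has no such term.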

This is an in-person presentation on **July 20, 2023** (12:00 ~ 12:20 UTC).

Matt Murrow

Many decision-making theories are encoded as evidence accumulation models (EAMs). These assume that noisy evidence stochastically accumulates until a set threshold is reached, triggering a decision. One of the most successful and widely used models of this class is the Diffusion Decision Model (DDM). The DDM, however, is limited in scope and does not account for processes such as evidence leakage, changes of evidence, or time-varying caution. More complex EAMs can encode a wider array of hypotheses, but are currently limited by computational challenges. In this work, we develop the Python package PyBEAM (Bayesian Evidence Accumulation Models) to fill this gap. Toward this end, we develop a general probabilistic framework for predicting the choice and response time distributions for a general class of binary decision models. In addition, we have heavily optimized this modeling process computationally and integrated it with PyMC, a widely used Python package for Bayesian parameter estimation. This (1) substantially expands the class of EAMs to which Bayesian methods can be applied, (2) reduces the computational time to do so, and (3) lowers the barrier to entry for working with these models. Here we demonstrate the concepts behind this methodology, show its application to parameter recovery for a variety of models, and apply it to a recently published data set to demonstrate its practical use.
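
As one concrete example of the time-varying caution mentioned above (this is a generic simulation sketch, not PyBEAM's API; the linear collapse rule, its 0.05 floor, and all names are our own choices), a diffusion process with linearly collapsing symmetric bounds can be simulated directly:

```python
import math
import random

def simulate_collapsing_bound(drift, b0, collapse_rate, dt=0.001, max_t=5.0, rng=None):
    """One trial of a diffusion process whose symmetric bounds collapse
    linearly over time: b(t) = max(b0 - collapse_rate * t, 0.05). A simple
    instance of time-varying caution. Returns (choice, rt), with choice
    +1 / -1 for the upper / lower bound.
    """
    rng = rng or random.Random()
    sd = math.sqrt(dt)  # unit diffusion coefficient
    x, t = 0.0, 0.0
    while abs(x) < max(b0 - collapse_rate * t, 0.05) and t < max_t:
        x += drift * dt + sd * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0.0 else -1), t
```

Because the bound shrinks toward its floor, long response times are cut off; frameworks like the one described above supply the corresponding choice and response time likelihoods so such models can be fit rather than only simulated.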

This is an in-person presentation on **July 20, 2023** (12:20 ~ 12:40 UTC).
