
PyBEAM: A Bayesian approach to parameter inference for a wide class of binary evidence accumulation models.

Authors
Prof. Bill Holmes (Indiana University, Cognitive Science)
Matt Murrow (Vanderbilt University, Physics & Astronomy)
Abstract

Many decision-making theories are encoded in evidence accumulation models (EAMs). These assume that noisy evidence stochastically accumulates until a set threshold is reached, triggering a decision. One of the most successful and widely used models of this class is the Diffusion Decision Model (DDM). The DDM, however, is limited in scope and does not account for processes such as evidence leakage, changes of evidence, or time-varying caution. More complex EAMs can encode a wider array of hypotheses, but are currently limited by computational challenges. In this work, we develop the Python package PyBEAM (Bayesian Evidence Accumulation Models) to fill this gap. Toward this end, we develop a general probabilistic framework for predicting the choice and response time distributions for a general class of binary decision models. In addition, we have heavily optimized this modeling process computationally and integrated it with PyMC, a widely used Python package for Bayesian parameter estimation. This 1) substantially expands the class of EAMs to which Bayesian methods can be applied, 2) reduces the computational time to do so, and 3) lowers the barrier to entry for working with these models. Here we demonstrate the concepts behind this methodology, show its application to parameter recovery for a variety of models, and apply it to a recently published data set to demonstrate its practical use.
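The core assumption shared by all EAMs described above, noisy evidence drifting toward one of two decision thresholds, can be illustrated with a minimal Monte Carlo sketch of the DDM. This is plain-Python illustration code, not PyBEAM's API; the function and parameter names here are hypothetical, and PyBEAM itself works with likelihoods rather than simulation:

```python
import math
import random

def simulate_ddm_trial(drift, boundary, start, dt=0.001, sigma=1.0,
                       max_t=10.0, rng=random):
    """Euler-Maruyama simulation of one DDM trial.

    Evidence starts at `start` and accumulates with rate `drift` plus
    Gaussian noise (scale `sigma`) until it crosses the upper boundary
    (`boundary`, choice 1) or the lower boundary (0, choice 0).
    Returns (choice, response_time).
    """
    x, t = start, 0.0
    noise_sd = sigma * math.sqrt(dt)  # diffusion scaling of the time step
    while t < max_t:
        x += drift * dt + rng.gauss(0.0, noise_sd)
        t += dt
        if x >= boundary:
            return 1, t
        if x <= 0.0:
            return 0, t
    # Non-terminated trial: report whichever boundary is closer.
    return (1 if x >= boundary / 2 else 0), max_t

# With positive drift and an unbiased start, most trials should
# terminate at the upper boundary.
rng = random.Random(42)
trials = [simulate_ddm_trial(drift=1.5, boundary=2.0, start=1.0, rng=rng)
          for _ in range(500)]
```

Extensions such as leakage or time-varying caution amount to making `drift` or `boundary` functions of `x` and `t` inside the loop, which is exactly the wider model class PyBEAM targets.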


Cite this as:

Holmes, W., & Murrow, M. (2023, July). PyBEAM: A Bayesian approach to parameter inference for a wide class of binary evidence accumulation models. Abstract published at MathPsych/ICCM/EMPG 2023. Via mathpsych.org/presentation/1055.