# Bayesian Analysis

Frank Jäkel

We propose a hierarchical Bayesian model that connects the counts of elementary processing steps from a process model with the response times of individual participants in an experiment. We see our approach as a bridge between the two fields of mathematical psychology and cognitive architectures. For models that are somewhat simpler than GOMS (they need to be reducible to a count of one kind of processing step), we can carry out detailed response time analyses. We model each processing step as a draw from a Gamma distribution, so that with more elementary processing steps we expect both the mean and the variance of the response time to increase. We present two extensions of the basic model. The first extension accounts for cases in which the number of processing steps is stochastic and unobserved. The second extension allows for several possible processing tactics when it is unknown which tactic each participant uses. From the distribution of response times, the model can then infer which tactic each participant most likely used, and to what degree. We hope that our model will be a useful starting point for many similar analyses, allowing process models to be fit to and tested against detailed response time data.
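As a rough illustration of the Gamma assumption (the per-step shape and scale values below are invented for the sketch, not taken from the abstract), summing n independent Gamma-distributed step durations yields a Gamma(n·shape, scale) response time, so both the mean and the variance grow linearly with the number of steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-step Gamma parameters (assumed for illustration only)
shape, scale = 2.0, 0.05
n_trials = 100_000

for n_steps in (2, 5, 10):
    # A response time is the sum of n_steps independent Gamma(shape, scale)
    # draws, which is itself distributed Gamma(n_steps * shape, scale).
    rts = rng.gamma(n_steps * shape, scale, size=n_trials)
    # Theory: mean = n*shape*scale, variance = n*shape*scale**2
    print(f"{n_steps} steps: mean={rts.mean():.3f}, var={rts.var():.4f}")
```

Both printed moments scale with `n_steps`, which is the signature the model exploits when linking step counts to response time distributions.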

This is an in-person presentation on **July 19, 2023** (15:20 ~ 15:40 UTC).

Gaussian signal detection models with equal variance are commonly used in simple yes-no detection and discrimination experiments, whereas more flexible models with unequal variance require additional data and/or conditions. Here, a hierarchical Bayesian model with equal variance is extended to an unequal-variance model so that it becomes applicable to binary responses from a random sample of participants. This appears to be at odds with conventional wisdom whereby parameters of an unequal-variance model are not identifiable if only binary responses are observed in a single condition. Although this holds true for non-hierarchical models, the present model assumes randomly and independently sampled discriminability and criterion values and approximately constant signal variance across participants. This novel unequal-variance model is investigated analytically, in simulations, and in applications to existing data sets. The results indicate that the five population parameters correspond to five observable parameters of a bivariate sampling distribution and that model parameters can be reliably and accurately recovered or estimated if the sample size is sufficiently large. It is concluded that this approach provides a promising alternative to the ubiquitous equal-variance model.
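The participant-level core of an unequal-variance Gaussian signal detection model fits in a few lines. The sketch below (all parameter values hypothetical) computes hit and false-alarm rates for one participant; the hierarchical model described above then pools such rates over randomly sampled discriminability and criterion values:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def uv_sdt_rates(d, c, s):
    """Hit and false-alarm rates under unequal-variance Gaussian SDT.

    Noise ~ N(0, 1), signal ~ N(d, s**2), decision criterion c.
    """
    hit = 1.0 - phi((c - d) / s)
    fa = 1.0 - phi(c)
    return hit, fa

# Equal variance (s = 1) vs. unequal variance (s > 1); values invented
print(uv_sdt_rates(d=1.5, c=0.75, s=1.0))
print(uv_sdt_rates(d=1.5, c=0.75, s=1.3))
```

Note that the false-alarm rate depends only on the criterion, which is why, for a single participant, binary responses alone cannot pin down `s`; identifiability in the hierarchical model comes from variation in `d` and `c` across participants.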

This is an in-person presentation on **July 19, 2023** (15:40 ~ 16:00 UTC).

Mr. Valentin Pratz

Anna-Lena Schubert

Dynamic Structural Equation Models (DSEMs) can be used to model complex multilevel relationships between multiple variables over time and thus have wide applicability in many fields of psychological science. Mplus is a widely used and powerful software program for estimating DSEMs, but it has some limitations in terms of flexibility and scalability. To overcome these limitations, we have implemented the DSEM framework in Stan, a Bayesian modeling language that provides a flexible and efficient platform for developing complex models. Here we highlight the most important aspects of our upcoming tutorial paper: a theoretical introduction to DSEM, fitting a base model (i.e., a bivariate lag-1 model), some possible model extensions (i.e., latent variable modeling, mediation analysis), and finally a comparison between Mplus and Stan in terms of functionality and parameter recovery. Overall, we want to present our tutorial as a clear and practical guide for researchers who want to take advantage of Stan as a powerful toolbox for specifying and fitting DSEMs.
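The Stan code itself is not reproduced here, but a minimal Python simulation of the base model (a bivariate lag-1, i.e., VAR(1), process with invented parameter values) illustrates the within-person data-generating structure such a DSEM assumes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical within-person dynamics of a bivariate lag-1 model:
#   y_t = mu + Phi @ (y_{t-1} - mu) + eps_t,   eps_t ~ N(0, Sigma)
mu = np.array([2.0, -1.0])          # person-specific means (invented)
Phi = np.array([[0.4, 0.1],         # autoregressive (diagonal) and
                [0.2, 0.3]])        # cross-lagged (off-diagonal) effects
Sigma = np.array([[1.0, 0.3],       # innovation covariance
                  [0.3, 1.0]])
L = np.linalg.cholesky(Sigma)

T = 200                             # number of measurement occasions
y = np.empty((T, 2))
y[0] = mu
for t in range(1, T):
    y[t] = mu + Phi @ (y[t - 1] - mu) + L @ rng.standard_normal(2)
```

In the full DSEM, `mu`, `Phi`, and `Sigma` become person-specific random effects with population-level distributions, which is exactly the multilevel structure estimated in Stan or Mplus.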

This is an in-person presentation on **July 19, 2023** (16:00 ~ 16:20 UTC).

Ørjan Røkkum Brandtzæg

Prior experience can help resolve ambiguity. Quantitative models of this process represent both prior experience and sensory information as probability distributions over suitable parameters. Such prior distributions are core features of models of perception, learning, and reasoning, and thus their properties are important. If the problem to be solved is the estimation of an underlying cause that can be represented as a point value, then the Bayesian estimate of that point value involves multiplying the prior and sensory probability distributions. If the distributions are Gaussian, the precision of the resulting posterior is the sum of the precisions of the prior and the sensory distributions. If the posterior becomes the new prior, precision keeps adding up across iterations (this is known as the Kalman filter, and therefore we will call this the Kalman prior). That precision describes how precisely the mean of the underlying distribution is known. A fundamentally different problem is predicting the distribution of future sensory data, useful for risk sensitivity and change point detection. In the long term, the variance of that prior should be the sum of the sensory variance and the variance of the generating process. That could be achieved by adding to memory a point value that represents the most recent sensory stimulus, then constructing a prior distribution from those point values. If instead it is assumed that each stimulus is represented as a distribution with sensory variance, and the prior is constructed by adding up all the distributions, then the variance of that prior will be the environmental variance plus twice the sensory variance. We call these priors the additive priors. Note that what is added to the prior is the sensory information or likelihood. 
It is logically possible to derive a third family of priors by delaying storage in memory until after a posterior has been created through Bayesian cue integration of prior and sensory data to predict the distribution of future subjective experience (assuming that all subjective experience occurs after Bayesian cue integration). We call this the subjective prior. Again, it is possible to generate that prior either by adding (and renormalising) posterior distributions, or else by adding the central tendencies of posterior distributions. Because these posteriors are often skewed or multimodal, it matters whether central tendency is represented as mean, mode, or median, and we must examine all possibilities. Generating this family of priors alternates two operations: multiply prior and sensory data, then add the resulting posterior or its central tendency to the prior (and renormalise). Consequently, this family of priors is sensitive to the order of inputs, and it is impossible to know either the shape of the distribution or its variance without knowing in which order stimuli were presented. We note that these priors seem to have no statistically desirable properties whatsoever, but wish to examine them in case unknown constraints force organisms to use them. If so, their undesirable properties may have interesting implications. We explain the properties of these different priors, and we are fitting models that use these priors to existing data from a study of memory for linear and angular displacement. Preliminary analysis indicates that the worst performing prior is the Kalman prior, even though, in the papers we have found so far that explicitly state how the prior is updated, the Kalman prior is favoured 11 to 1.
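The variance bookkeeping for the Kalman and additive priors can be checked numerically. The sketch below uses invented environmental and sensory variances: under the Kalman prior, precisions add across updates, while the two additive priors converge to the environmental variance plus once or twice the sensory variance, depending on whether point values or full sensory distributions are stored:

```python
import numpy as np

rng = np.random.default_rng(2)

env_mu, env_var = 0.0, 4.0   # generating process (values invented)
sens_var = 1.0               # sensory noise variance (invented)

# Kalman prior: posterior precision = prior precision + sensory precision,
# accumulating across iterations, so uncertainty about the MEAN shrinks.
prec = 1.0 / env_var         # hypothetical initial prior precision
for _ in range(10):
    prec += 1.0 / sens_var   # each observation adds its precision
# After 10 updates the mean is known with variance 1/prec ~ 0.098.

# Additive priors predict the distribution of future SENSORY DATA.
n = 200_000
stimuli = rng.normal(env_mu, np.sqrt(env_var), n)
percepts = stimuli + rng.normal(0.0, np.sqrt(sens_var), n)

# Storing point values: prior variance ~ env_var + sens_var = 5.
print(percepts.var())

# Storing each percept as a distribution with sensory variance is
# equivalent to convolving with another N(0, sens_var):
# prior variance ~ env_var + 2 * sens_var = 6.
smeared = percepts + rng.normal(0.0, np.sqrt(sens_var), n)
print(smeared.var())
```

The contrast makes the abstract's point concrete: the Kalman prior's variance goes to zero with experience, whereas a prior that is to predict future sensory data must retain the environmental plus sensory variance.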

This is an in-person presentation on **July 19, 2023** (16:20 ~ 16:40 UTC).

Dr. Quentin Gronau

Reilly Innes

Prof. Andrew Heathcote

Prof. Birte Forstmann

Dr. Dora Matzke

Cognitive models are increasingly applied to test both within- and between-subject hypotheses; however, the latter has generally suffered from a lack of statistical methods to answer such questions. A common approach to testing between-subject hypotheses is to perform a second step of analysis on the estimated parameters of the model, for example to ask whether drift rate differs with age or between people with schizophrenia and controls. However, considerable statistical power is lost in such two-step analyses. Here we propose to include linear models such as ANOVA, regression, and, by extension, mixed-effects models in the hierarchical framework in which cognitive models are usually estimated. With such a hierarchical linear model, we avoid the two-step analysis. Furthermore, we supply methods with which we can estimate Bayes factors between the null and the proposed model. Our work gives researchers the option to formalize different types of hypotheses for between-subject research, with the added benefit of maintaining a more parsimonious parameter space.
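The contrast between the two approaches can be sketched as follows (all numbers invented; the actual proposal estimates the regression jointly inside the hierarchical Bayesian model rather than by least squares on point estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

n_subj = 60
age = rng.uniform(20, 70, n_subj)
beta0, beta1 = 3.0, -0.02   # hypothetical population effect of age on drift rate
# True subject-level drift rates generated by the group-level linear model
drift = beta0 + beta1 * age + rng.normal(0.0, 0.2, n_subj)

# Two-step analysis: regress noisy per-subject POINT ESTIMATES of drift,
# so estimation noise is treated as if it were data.
est = drift + rng.normal(0.0, 0.4, n_subj)   # invented estimation noise
X = np.column_stack([np.ones(n_subj), age])
b_twostep, *_ = np.linalg.lstsq(X, est, rcond=None)

# In the one-step hierarchical approach, the regression
#   drift_i = beta0 + beta1 * age_i + e_i
# serves as the group-level prior on drift_i, so estimation uncertainty
# in each drift_i is modeled rather than ignored; this sketch only
# illustrates the design-matrix structure such a model embeds.
```

The two-step estimate is unbiased here, but its standard error is inflated by the per-subject estimation noise, which is the power loss the hierarchical linear model avoids.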

This is an in-person presentation on **July 19, 2023** (16:40 ~ 17:00 UTC).
