
Nothing and the seven priors. Re-analysis of data on Bayesian priors

Authors
Ørjan Røkkum Brandtzæg
Norwegian University of Science and Technology ~ Psychology
Prof. Robert Biegler
NTNU ~ Psychology
Abstract

Prior experience can help resolve ambiguity. Quantitative models of this process represent both prior experience and sensory information as probability distributions over suitable parameters. Such prior distributions are core features of models of perception, learning, and reasoning, so their properties matter. We define three (families of) priors and fit them to existing data. The iterative Kalman prior multiplies the prior and sensory probability distributions. If the distributions are Gaussian, the precision (inverse variance) of the resulting posterior is the sum of the precisions of the prior and sensory distributions. The posterior becomes the new prior; precision keeps accumulating across iterations and describes how precisely the mean of the underlying distribution is known. A second family of priors can be generated either by creating a distribution from the central tendencies of past sensory inputs, which produces a prior whose variance is the sum of the process and sensory variances, or by averaging past sensory distributions, which produces a prior whose variance is the sum of the process variance and twice the sensory variance. Such a prior is useful for risk sensitivity and change-point detection. A third family of priors can be generated by delaying storage in memory until after a posterior has been created through Bayesian cue integration of prior and sensory data, to predict the distribution of future subjective experience. This family of priors is sensitive to the order of inputs: it is impossible to know either the shape of the distribution or its variance without knowing the order in which stimuli were presented. Fitting priors to existing data indicates that the worst-performing prior is the Kalman prior, even though, in the papers we have found so far that explicitly state how the prior is updated, the iterative Kalman prior is favoured 11 to 1.
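The three construction schemes can be illustrated with a short numerical sketch. The code below is a hedged reconstruction from the verbal description alone, not the authors' analysis code; all variable names, the chosen variances, and the simplifications are assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch of the three prior-construction schemes from the
# abstract. Names and parameter values are assumptions, not the authors' code.

rng = np.random.default_rng(0)

VAR_SENSORY = 1.0   # variance of a single sensory observation (assumed)
VAR_PROCESS = 4.0   # trial-to-trial variance of the stimulus (assumed)
TRUE_MEAN = 10.0

stimuli = rng.normal(TRUE_MEAN, np.sqrt(VAR_PROCESS), size=500)
observations = rng.normal(stimuli, np.sqrt(VAR_SENSORY))

# 1. Iterative Kalman prior: multiply Gaussian prior and likelihood.
#    Precisions (inverse variances) add, so the prior keeps narrowing and
#    tracks how precisely the mean of the underlying distribution is known.
mu, prec = 0.0, 1e-6                     # near-flat initial prior
for x in observations:
    prec_s = 1.0 / VAR_SENSORY
    mu = (prec * mu + prec_s * x) / (prec + prec_s)
    prec += prec_s                       # posterior precision = sum of precisions
kalman_variance = 1.0 / prec             # -> 0 as observations accumulate

# 2a. Prior from the central tendencies of past inputs: the stored values
#     scatter with variance ~ process + sensory variance.
var_from_means = np.var(observations)

# 2b. Prior from averaging past sensory *distributions* (a mixture of
#     Gaussians centred on each observation): variance of the centres plus
#     the width of each component, ~ process + 2 * sensory variance.
var_from_mixture = np.var(observations) + VAR_SENSORY

# 3. Posterior-storage prior: integrate each input with the current prior
#    first, then store the posterior mean. Early inputs change how later
#    inputs are weighted, so the stored values depend on input order.
def stored_posteriors(xs, var_s=VAR_SENSORY):
    mu, var = 0.0, 1e6                   # near-flat initial prior
    stored = []
    for x in xs:
        gain = var / (var + var_s)
        mu = mu + gain * (x - mu)                  # posterior mean
        var = 1.0 / (1.0 / var + 1.0 / var_s)      # posterior variance
        stored.append(mu)
    return np.asarray(stored)

forward = stored_posteriors(observations)
reversed_order = stored_posteriors(observations[::-1])
# The two stored sequences differ even though they use the same inputs,
# illustrating the order sensitivity described in the abstract.
```

Running the sketch shows the Kalman prior's variance shrinking toward zero while the two empirical priors retain the process-plus-sensory spread the abstract attributes to the second family, and the stored-posterior sequences differ with input order as the third family requires.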

Keywords

prior
posterior
likelihood
variance
precision
model selection

Cite this as:

Brandtzæg, Ø., & Biegler, R. (2023, June). Nothing and the seven priors. Re-analysis of data on Bayesian priors. Paper presented at Virtual MathPsych/ICCM 2023. Via mathpsych.org/presentation/1291.