
Nothing and the seven priors. Re-analysis of data on Bayesian priors.

Prof. Robert Biegler
NTNU ~ Psychology
Ørjan Røkkum Brandtzæg
Norwegian University of Science and Technology ~ Psychology

Prior experience can help resolve ambiguity. Quantitative models of this process represent both prior experience and sensory information as probability distributions over suitable parameters. Such prior distributions are core features of models of perception, learning, and reasoning, and thus their properties are important. If the problem to be solved is the estimation of an underlying cause that can be represented as a point value, then the Bayesian estimate of that point value involves multiplying the prior and sensory probability distributions. If the distributions are Gaussian, the precision of the resulting posterior is the sum of the precisions of the prior and the sensory distributions. If the posterior becomes the new prior, precision keeps adding up across iterations (this is known as the Kalman filter, and therefore we will call this the Kalman prior). That precision describes how precisely the mean of the underlying distribution is known. A fundamentally different problem is predicting the distribution of future sensory data, useful for risk sensitivity and change point detection. In the long term, the variance of that prior should be the sum of the sensory variance and the variance of the generating process. That could be achieved by adding to memory a point value that represents the most recent sensory stimulus, then constructing a prior distribution from those point values. If instead it is assumed that each stimulus is represented as a distribution with sensory variance, and the prior is constructed by adding up all the distributions, then the variance of that prior will be the environmental variance plus twice the sensory variance. We call these priors the additive priors. Note that what is added to the prior is the sensory information or likelihood. 
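The variance relationships above can be checked numerically. The sketch below (a minimal simulation, not the authors' code; all parameter values are illustrative) draws samples from a generating process with environmental variance, adds sensory noise, and compares the two additive priors: storing each noisy observation as a point value yields a prior variance of roughly the environmental plus sensory variance, while storing each observation as a Gaussian with sensory variance adds the sensory variance a second time. It also shows the Kalman-prior update, where multiplying Gaussians sums their precisions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_env, sigma_sens = 2.0, 1.0  # illustrative values
n = 200_000

# Each observation: a value from the generating process plus sensory noise.
true_vals = rng.normal(0.0, sigma_env, n)
observations = true_vals + rng.normal(0.0, sigma_sens, n)

# Additive prior from point values: store each noisy observation as a point.
var_point = observations.var()        # ≈ sigma_env² + sigma_sens² = 5

# Additive prior from distributions: store each observation as a Gaussian
# with sensory variance; the resulting mixture adds sigma_sens² once more.
samples = observations + rng.normal(0.0, sigma_sens, n)
var_mixture = samples.var()           # ≈ sigma_env² + 2·sigma_sens² = 6

# Kalman prior: multiplying Gaussian prior and likelihood adds precisions,
# so posterior variance shrinks with every iteration.
prior_prec, sens_prec = 1.0 / sigma_env**2, 1.0 / sigma_sens**2
posterior_var = 1.0 / (prior_prec + sens_prec)   # 1/(0.25 + 1) = 0.8
```

The simulation illustrates why the distinction matters: the Kalman prior's variance tracks uncertainty about the mean and shrinks toward zero, whereas both additive priors converge on a fixed variance that reflects the spread of future sensory data.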
It is logically possible to derive a third family of priors by delaying storage in memory until after a posterior has been created through Bayesian cue integration of prior and sensory data to predict the distribution of future subjective experience (assuming that all subjective experience occurs after Bayesian cue integration). We call this the subjective prior. Again, it is possible to generate that prior either by adding (and renormalising) posterior distributions, or else by adding the central tendencies of posterior distributions. Because these posteriors are often skewed or multimodal, it matters whether central tendency is represented as mean, mode, or median, and we must examine all possibilities. Generating this family of priors alternates two operations: multiply prior and sensory data, then add the resulting posterior or its central tendency to the prior (and renormalise). Consequently, this family of priors is sensitive to the order of inputs, and it is impossible to know either the shape of the distribution or its variance without knowing in which order stimuli were presented. We note that these priors seem to have no statistically desirable properties whatsoever, but wish to examine them in case unknown constraints force organisms to use them. If so, their undesirable properties may have interesting implications. We explain the properties of these different priors, and we are fitting models that use these priors to existing data from a study of memory for linear and angular displacement. Preliminary analysis indicates that the worst performing prior is the Kalman prior, even though, in the papers we have found so far that explicitly state how the prior is updated, the Kalman prior is favoured 11 to 1.
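The order sensitivity of the subjective prior can be demonstrated with a small sketch. This is a hypothetical simplification, not the authors' model: it implements only the mean-based variant, approximates the prior at each step as a Gaussian whose variance is the spread of the stored means plus the sensory variance (an assumed choice), and alternates the two operations described above: combine the current prior with the new stimulus, then add the posterior mean to memory.

```python
import numpy as np

def subjective_prior(stimuli, sigma_sens=1.0):
    """Mean-based subjective prior (hypothetical sketch).

    Alternates two operations: Bayesian-combine the current prior with the
    incoming stimulus, then store the posterior mean. The prior is modelled
    as a Gaussian over the stored means, with the spread of those means
    plus sensory variance as its variance (an illustrative assumption).
    Returns the final prior's mean and variance of stored means.
    """
    stored = [stimuli[0]]                       # first stimulus seeds memory
    for x in stimuli[1:]:
        mu_p = np.mean(stored)
        var_p = np.var(stored) + sigma_sens**2
        prec_p, prec_s = 1.0 / var_p, 1.0 / sigma_sens**2
        post_mean = (mu_p * prec_p + x * prec_s) / (prec_p + prec_s)
        stored.append(post_mean)
    return np.mean(stored), np.var(stored)

a = subjective_prior([1.0, 5.0, 3.0])
b = subjective_prior([3.0, 5.0, 1.0])  # same stimuli, different order
# a and b differ: the prior depends on presentation order
```

Because multiplication by the current prior happens before each addition to memory, the same three stimuli presented in a different order leave behind different stored means, and hence a prior with a different mean and variance.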



Keywords: Bayesian inference, programmatic approach


Cite this as:

Biegler, R., & Brandtzæg, Ø. R. (2023, July). Nothing and the seven priors. Re-analysis of data on Bayesian priors. Abstract published at MathPsych/ICCM/EMPG 2023.