
Prior predictive entropy as a measure of model complexity

Authors
J. Manuel Villarreal
University of California, Irvine ~ Cognitive Sciences
Michael Lee
University of California, Irvine ~ Cognitive Sciences
Alexander John Etz
University of California, Irvine
Abstract

In science, when we are faced with the problem of choosing between two different accounts of a phenomenon, we are told to choose the simpler one. However, it is not always clear what a “simple” model is. Model selection criteria (e.g., the BIC) typically define model complexity as a function of the number of parameters in a model, or some other function of the parameter space. Here we present an alternative based on the prior predictive distribution. We argue that the complexity of a model can be measured by the entropy of its predictions before looking at the data. This can lead to surprising findings that are not well explained by thinking of model complexity in terms of parameter spaces. In particular, we use a simple choice rule as an example to show that the predictions of a nested model can have higher entropy than those of its more general counterpart. Finally, we show that the complexity of a model’s predictions is a function of the experimental design.
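To make the measure concrete, here is a minimal Python sketch of how prior predictive entropy can be computed for a binary choice task with a fixed number of repetitions. The Luce-type choice rule, the Gamma prior on the determinism parameter, and the stimulus strengths are all illustrative assumptions, not the authors' exact specification; the sketch shows only the computation, not the nested-model result reported in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def prior_predictive_entropy(sample_p, n_trials, n_draws=20_000):
    """Monte Carlo estimate of the entropy (in bits) of the prior
    predictive distribution over choice counts k = 0, ..., n_trials."""
    p = sample_p(rng, n_draws)                      # prior draws of P(choose A)
    k = np.arange(n_trials + 1)
    # Marginal predictive: p(k) = E_prior[ Binomial(k | n_trials, p) ]
    pmf = stats.binom.pmf(k[:, None], n_trials, p[None, :]).mean(axis=1)
    pmf = pmf[pmf > 0]
    return -(pmf * np.log2(pmf)).sum()

# Illustrative Luce-type choice rule: P(choose A) = a**g / (a**g + b**g)
# for stimulus strengths a, b and a determinism parameter g.
a, b = 2.0, 1.0

def full_model(rng, size):
    g = rng.gamma(shape=2.0, scale=1.0, size=size)  # illustrative prior on g
    return a**g / (a**g + b**g)

def nested_model(rng, size):
    return np.full(size, a / (a + b))               # nested model: g fixed at 1

for n in (10, 50, 100):
    print(f"n={n:3d}  full: {prior_predictive_entropy(full_model, n):.2f} bits"
          f"  nested: {prior_predictive_entropy(nested_model, n):.2f} bits")
```

Because the prior predictive over counts is discrete, its entropy depends on the number of trials, which is one way the experimental design enters the measure.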

Keywords

Complexity
Prior Predictive Distribution
Entropy
Choice Rule

Topics

Bayesian Modeling
Probabilistic Models
Model Analysis and Comparison
Discussion
Effect of the stimulus space (last updated 3 years ago)

I really enjoyed your talk; it’s made me think a lot about the relationship between model complexity and experimental design. One thing I’m still trying to wrap my head around is the effect of the stimulus space on PPD. Do you have any results or thoughts on whether a different specification of the stimulus space—different choice outcomes or probab...

Sabina J. Sloman 1 comment

Hi Manuel, Very interesting project! I was wondering if you have considered the case of an infinite number of repetitions. That means you'll need to measure entropy over the continuous choice probability rather than the discrete choice frequency. In experiments, we only collect a given number of repetitions, but many models make probabilistic pred...

Dr. Lisheng He 1 comment
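On Dr. He's point about infinite repetitions: for a smooth prior predictive density, the entropy of the discrete count distribution grows like the differential entropy of the limiting choice-probability distribution plus log n. A minimal sketch, assuming (purely for illustration, not from the paper) a Beta(2, 2) prior on the choice probability, so that the prior predictive over counts is Beta-Binomial:

```python
import numpy as np
from scipy import stats

# Illustrative assumption: theta ~ Beta(2, 2), so the prior predictive
# over k successes in n trials is Beta-Binomial(n, 2, 2).
a, b = 2.0, 2.0

def count_entropy(n):
    """Entropy (in nats) of the discrete prior predictive over counts."""
    pmf = stats.betabinom.pmf(np.arange(n + 1), n, a, b)
    pmf = pmf[pmf > 0]
    return -(pmf * np.log(pmf)).sum()

h = stats.beta(a, b).entropy()      # differential entropy of theta (nats)
for n in (10, 100, 1000):
    print(f"n={n:5d}  H(counts)={count_entropy(n):.3f}"
          f"  h(theta)+ln n={h + np.log(n):.3f}")
```

The discrete entropy diverges as the number of repetitions grows, while the design-independent part is the differential entropy of the continuous predictive, which seems to be the quantity the comment points to in the infinite-repetition limit.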
Cite this as:

Villarreal, J. M., Lee, M., & Etz, A. J. (2020, July). Prior predictive entropy as a measure of model complexity. Paper presented at Virtual MathPsych/ICCM 2020. Via mathpsych.org/presentation/105.