How useful is posterior-predictive model assessment: Insights from ordinal constraints
The importance of good model specification — having models that accurately capture differing theoretical positions — cannot be overstated. With this in mind, we submit that methods of inference that force scientists to use certain models that may not be appropriate for the context are less desirable than methods with no such constraint. Here we ask how posterior-predictive model assessment methods such as WAIC and LOO-CV perform when theoretical positions are different restrictions on a common parameter space. One of the main theoretical relations is nesting — where the parameter space of one model is a subset of that of another. A good example is a general model that admits any set of preferences; a nested model is one that admits only preferences that obey transitivity. We find, however, that posterior-predictive methods fail in these cases, providing no advantage to more constrained models even when the data are compatible with the constraint. Researchers who use posterior-predictive methods are forced to use non-overlapping partitions of parameter spaces, even when some of the subspaces have no theoretical interpretation. Fortunately, there is no such constraint on prior-predictive methods such as Bayes factors. Because these methods appropriately account for model complexity, models need not form a proper partition of the parameter space, and inference with desirable properties nonetheless results. We argue that because posterior-predictive approaches force certain model specifications that may not be ideal for the scientific question, they are less desirable in these contexts.
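To illustrate the kind of nested comparison discussed above, the following is a minimal sketch (not the paper's own analysis) of a prior-predictive comparison for an order-constrained model nested inside a general one, using the encompassing-prior identity: the Bayes factor for the constrained model against the general model equals the ratio of posterior to prior mass satisfying the constraint. The data, priors, and the simple two-mean setup with a known variance are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: two condition means, generated to be compatible
# with the ordinal constraint mu1 < mu2
y1 = rng.normal(0.0, 1.0, 50)
y2 = rng.normal(0.5, 1.0, 50)

def posterior(y, prior_mean=0.0, prior_var=1.0, sigma2=1.0):
    """Conjugate normal update with known error variance sigma2."""
    n = len(y)
    var = 1.0 / (1.0 / prior_var + n / sigma2)
    mean = var * (prior_mean / prior_var + y.sum() / sigma2)
    return mean, var

m1, v1 = posterior(y1)
m2, v2 = posterior(y2)

S = 100_000
# Draws from the general (encompassing) model's prior and posterior
prior1 = rng.normal(0.0, 1.0, S)
prior2 = rng.normal(0.0, 1.0, S)
post1 = rng.normal(m1, np.sqrt(v1), S)
post2 = rng.normal(m2, np.sqrt(v2), S)

# Encompassing-prior Bayes factor: posterior vs. prior mass obeying mu1 < mu2
prior_mass = np.mean(prior1 < prior2)  # ~0.5 by symmetry of the prior
post_mass = np.mean(post1 < post2)
bf_constrained_vs_general = post_mass / prior_mass
print(bf_constrained_vs_general)
```

Note how the prior mass in the denominator is what charges the general model for its extra flexibility: the constrained model can earn a Bayes factor of at most 1 / prior_mass (here about 2), and only when the data respect the order. Posterior-predictive criteria such as WAIC and LOO-CV have no analogous term, which is why they confer no advantage on the constrained model in this setting.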
Cite this as:
Haaf, J. M., &