Order-constrained Inference: A Nuanced Approach to Hypothesis Testing
Many statistical analyses in psychological research add extraneous assumptions that are not part of the theory under consideration. These added assumptions can adversely influence the conclusions one draws from the analyses. Order-constrained inference allows researchers to avoid unnecessary assumptions, translate conceptual theories into directly testable hypotheses, and run competitions among rival hypotheses. Beyond these advantages, this reanalysis highlights how order-constrained modeling can be used to formulate more nuanced hypotheses at the item level and to test them jointly as a single model. The data set comes from Pennycook, Bear, Collins, and Rand (2020). The authors hypothesized that attaching warnings to a subset of fake news headlines increases the perceived accuracy of other, unmarked headlines. They further expected this effect to disappear when verifications are attached to true headlines. Using the QTEST software (Regenwetter et al., 2014; Zwilling et al., 2019), we assessed these hypotheses jointly across all individual headlines. To further leverage order-constrained inference, we ran a competition among the rival hypotheses using Bayesian model selection methods. We find that order-constrained inference not only provides a coarse view of all hypotheses at the aggregate level but also offers a fine-grained perspective on them at the item level.
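To give a flavor of the approach, the following sketch illustrates Bayesian evaluation of a single order constraint via the encompassing-prior approach (Klugkist & Hoijtink style): the Bayes factor of the constrained model against the unconstrained model is the posterior proportion of samples satisfying the constraint divided by the prior proportion. The counts below are hypothetical, purely for illustration; this is not the authors' QTEST analysis or their data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical counts of "accurate" ratings for an unmarked headline in two
# conditions (control vs. warnings attached to other headlines). These
# numbers are invented for illustration, not from Pennycook et al. (2020).
k_control, n_control = 30, 100
k_warning, n_warning = 45, 100

# Conjugate Beta(1, 1) priors give Beta posteriors for each condition's
# probability of an "accurate" rating.
draws = 100_000
post_control = rng.beta(1 + k_control, 1 + n_control - k_control, draws)
post_warning = rng.beta(1 + k_warning, 1 + n_warning - k_warning, draws)

# Encompassing-prior Bayes factor for the order constraint
# p_control < p_warning: posterior proportion satisfying the constraint
# divided by the prior proportion (0.5 under independent uniform priors).
posterior_prop = np.mean(post_control < post_warning)
bf_order = posterior_prop / 0.5
print(f"BF(order-constrained vs. unconstrained) = {bf_order:.2f}")
```

A full item-level analysis would state one such inequality per headline and evaluate the joint constrained model, which is what specialized software such as QTEST automates.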
Cite this as:
Regenwetter, M., &