Robust Bayesian Meta-Regression: Publication-bias-adjusted moderator analysis
Publication bias is a well-recognized threat to research synthesis. Although a variety of models have been proposed to adjust for publication bias, no single model performs well across different meta-analytic conditions (Carter et al., 2019; Hong & Reed, 2021). One possible remedy lies in Bayesian model averaging with Robust Bayesian Meta-Analysis (RoBMA; Maier et al., 2022). RoBMA addresses publication bias by averaging over 36 candidate models of the publication process and has been shown to perform well under diverse conditions (Bartoš et al., 2022). In this talk, we extend RoBMA to meta-regression settings. The newly introduced moderator analyses enable testing for both the presence and the absence of continuous and categorical moderators using Bayes factors. This advances existing frequentist methodologies by allowing researchers to also quantify evidence for the absence of a moderator (rather than the mere absence of evidence implied by a nonsignificant p-value). Furthermore, RoBMA's meta-regression model-averages not only over the different publication process models but also over the included moderators. Consequently, researchers can draw inferences about each moderator while accounting for the uncertainty in the remaining moderators. We evaluate the performance of the developed methodology in a simulation study and illustrate it with an example.