Measures of metacognitive efficiency across cognitive models of decision confidence
Meta-d’/d’ has become the quasi-gold standard for quantifying metacognitive efficiency in metacognition research because it has been assumed that meta-d’/d’ controls for discrimination performance, discrimination criteria, and confidence criteria even without explicitly assuming a specific generative model underlying confidence judgments. Here, I show that meta-d’/d’ provides this control only under one very specific generative model of confidence. Simulations using a variety of generative models of confidence showed that, for most of them, there exist at least some parameter combinations for which meta-d’/d’ is affected by discrimination performance, discrimination task criteria, and confidence criteria. The single exception is a generative model according to which the evidence underlying confidence judgments is sampled, independently of the evidence used in the discrimination decision, from a Gaussian distribution truncated at the discrimination criterion. These simulations imply that previously reported associations with meta-d’/d’ do not necessarily reflect associations with metacognitive efficiency but may instead be driven by associations with discrimination performance, the discrimination criterion, or confidence criteria. It is argued that sound measures of metacognition require an explicit generative model of confidence that fits the empirical data well.
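The exceptional generative model described above can be illustrated with a short simulation. The following is a minimal sketch, not the paper's actual implementation: function and parameter names are hypothetical, and it assumes equal-variance Gaussian decision evidence with means ±d′/2, with confidence evidence drawn independently from the same stimulus distribution but truncated at the discrimination criterion on the response side (implemented here via rejection sampling).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_truncated_confidence(d_prime=2.0, criterion=0.0, n_trials=10000):
    """Sketch of the generative model in which confidence evidence is
    sampled independently of the decision evidence, from a Gaussian
    truncated at the discrimination criterion (illustrative only)."""
    # Stimulus category: -1 or +1, equally likely
    stimulus = rng.choice([-1, 1], size=n_trials)
    # Decision evidence: Gaussian with mean +-d'/2 and unit variance
    x = rng.normal(stimulus * d_prime / 2, 1.0)
    # Discrimination response determined by the decision criterion
    response = np.where(x > criterion, 1, -1)
    # Confidence evidence: an independent draw from the same stimulus
    # distribution, truncated at the criterion on the response side
    # (rejection sampling: redraw until the sample falls on that side)
    y = rng.normal(stimulus * d_prime / 2, 1.0)
    mismatch = (y > criterion) != (response == 1)
    while mismatch.any():
        y[mismatch] = rng.normal(stimulus[mismatch] * d_prime / 2, 1.0)
        mismatch = (y > criterion) != (response == 1)
    # Confidence as the distance of the truncated evidence from the criterion
    confidence = np.abs(y - criterion)
    return stimulus, response, confidence

stimulus, response, confidence = simulate_truncated_confidence()
accuracy = np.mean(response == stimulus)
```

With d′ = 2 and an unbiased criterion, discrimination accuracy should land near Φ(1) ≈ 0.84, while confidence on each trial depends only on the independently sampled, truncated evidence, which is what decouples it from the decision sample.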