Dr. Arkady Zgonnikov
When a person makes a decision, it is automatically accompanied by a subjective probability judgment that the decision is correct, in other words, a (local) confidence judgment. Confidence judgments affect, among other things, the justification of future decisions and behaviour. A better understanding of the metacognitive processes responsible for these confidence judgments could improve models of behaviour. However, to date there has been little applied research on confidence in dynamic environments such as driving. Confidence judgments are mostly studied in a fundamental manner, focusing on confidence in simple perceptual or preferential tasks. At the same time, cognitive models of drivers' decision making have not yet accounted for confidence judgments. In this study, we made a first attempt at connecting these two fields of research by investigating the confidence of human drivers in left-turn gap acceptance decisions in a driving simulator experiment (N=17). The study aimed, firstly, to investigate whether confidence can be properly measured in a dynamic task. Secondly, it sought to establish the relationship between confidence and the characteristics of a traffic situation, here the gap size as described by the time to arrival of, and distance to, oncoming traffic. Thirdly, we aimed to model the dynamics of the underlying cognitive process using the evidence accumulation approach. We found that self-reported confidence judgments displayed a pattern similar to that expected from earlier fundamental studies of confidence. Specifically, confidence increased with gap size when participants decided to accept the gap, and decreased with gap size when the gap was rejected. Moreover, we found that confidence judgments can be captured by an extended dynamic drift-diffusion decision model.
In our model, the drift rate of the evidence accumulator as well as the decision boundaries are functions of the dynamic perceptual information available to the decision-maker. The model assumes that confidence ratings are based on the state of the accumulator after a period of post-decision evidence accumulation. Overall, the study confirms that principles known from fundamental research on confidence also hold for dynamic applied tasks.
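The two core ingredients of such a model, boundary-triggered choice followed by post-decision accumulation that feeds a confidence readout, can be sketched in a toy simulation. This is an illustrative sketch only: all parameter values and the function name are assumptions, and unlike the study's fitted model it uses a constant drift and boundary rather than functions of the traffic situation.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(drift=0.8, boundary=1.0, dt=0.001, noise=1.0,
                   post_decision_time=0.3, max_time=5.0):
    """One trial of a drift-diffusion model with post-decision
    accumulation; confidence is read out from the accumulator state
    after the post-decision period. All parameters are illustrative."""
    x, t = 0.0, 0.0
    # accumulate noisy evidence until a boundary is reached
    while abs(x) < boundary and t < max_time:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    choice = 1 if x >= boundary else -1
    rt = t
    # evidence keeps arriving for a short time after commitment
    for _ in range(int(post_decision_time / dt)):
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
    # confidence: signed evidence in favour of the chosen option
    confidence = choice * x
    return choice, rt, confidence
```

Because the post-decision evidence can contradict the initial commitment, this readout naturally produces low (or even negative) confidence on error trials, the qualitative signature the study exploits.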
This is an in-person presentation on July 21, 2023 (11:00 ~ 11:20 UTC).
Dr. Manuel Rausch
Magnitude-sensitivity refers to the effect that decisions between two alternatives tend to be faster when the intensities of both alternatives (e.g., luminance, size, or preference) are increased, even if their difference is kept constant. Previous studies proposed several computational models to describe decision and response time distributions in experimental paradigms with changes of stimulus magnitude. However, with only responses and response times as dependent variables, there was a high degree of model mimicry. We suggest including confidence judgments as an additional dependent variable in experiments and models. We present three experiments, two brightness discrimination tasks and a motion discrimination task, in which the intensities of both alternatives were varied and confidence judgments were recorded. Under some stimulus manipulations, confidence increased with stimulus magnitude while accuracy remained constant. We generalized several previously proposed dynamical models of confidence and response time to account for magnitude-sensitivity by adding intensity-dependent noise parameters. We fitted each model to the data and compared the models quantitatively. The intensity-dependent dynamical weighted evidence, visibility and time model (iddWEVT) provided the best fit to the joint distribution of response times, choices, and confidence judgments across the different experimental manipulations. Previous studies explained increasing confidence but constant accuracy with stimulus magnitude by a positive evidence bias, i.e., the idea that when computing confidence, people rely only on the evidence supporting their decision and ignore evidence for the alternative. However, sequential sampling models offer an alternative explanation for these effects by considering the dynamics of a decision and by taking response times into account in the computation of confidence.
We suggest that the identification of computational models of decision making, as well as of models of confidence, can be improved by considering decisions, response times, and confidence at the same time.
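The core idea of intensity-dependent noise can be illustrated with a toy race model in which the diffusion noise grows with the summed stimulus intensities, so that decisions terminate faster at higher magnitude even when the intensity difference is held constant. This is a hypothetical sketch, not the iddWEVT model itself; all parameters and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def race_trial(i_left, i_right, boundary=2.0, dt=0.001, base_noise=0.5,
               noise_scale=0.5, max_time=10.0):
    """Race between two accumulators whose diffusion noise grows with
    total stimulus intensity -- the key idea behind intensity-dependent
    noise. Parameters are illustrative, not fitted values."""
    sigma = base_noise + noise_scale * (i_left + i_right)
    xl = xr = 0.0
    t = 0.0
    while max(xl, xr) < boundary and t < max_time:
        xl += i_left * dt + sigma * np.sqrt(dt) * rng.normal()
        xr += i_right * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return ('left' if xl >= xr else 'right'), t

def mean_rt(i_left, i_right, n=200):
    """Average response time over n simulated trials."""
    return np.mean([race_trial(i_left, i_right)[1] for _ in range(n)])
```

With a fixed intensity difference of 0.5, comparing `mean_rt(1.5, 1.0)` against `mean_rt(0.6, 0.1)` shows the magnitude effect: the larger summed intensity inflates the noise and drives the winning accumulator to the boundary sooner.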
This is an in-person presentation on July 21, 2023 (11:20 ~ 11:40 UTC).
Simon P. Kelly
Dr. Nathan Faivre
Dr. Michael Pereira
Mr. Clément Sauvage
Evidence accumulation is a fundamental process whereby noisy sensory information is accumulated over time up to a threshold. Although numerous studies have explored the link between evidence accumulation and decision formation, its contribution to perceptual consciousness remains unclear. Here, we propose a leaky evidence accumulation model that accounts for qualitative aspects of perceptual experience such as its perceived onset and duration, as well as confidence in perceptual judgments. Our model assumes that the onset of perceptual experience (i.e., stimulus detection) is triggered by the crossing of a perceptual bound by the accumulation process. Crucially, we hypothesized that perceptual experience lasts as long as accumulated evidence remains above the threshold, and that confidence is read out from the maximum reached by accumulated evidence over time. We tested these predictions in a pre-registered computational modelling study. Four healthy participants were asked to detect 3500 faces with different intensities and durations and either report their confidence in having perceived a face or no face, or reproduce the duration of their perceptual experience of a face. As predicted, participants detected stimuli with higher intensity or longer physical duration more accurately and faster. Similarly, faces presented at high intensity or long duration were perceived with longer subjective durations and higher confidence. We fitted our computational model to response times and detection performance using the Variational Bayesian Monte Carlo toolbox. Using this model, we could parsimoniously reproduce the effects of stimulus intensity and duration on perceived duration and confidence better than with alternative models that were not based on leaky evidence accumulation. Together, these results support leaky evidence accumulation as a mechanism explaining not only stimulus detection, but also some phenomenal aspects of perceptual experience such as subjective duration and confidence.
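The three readouts proposed here, detection at the first threshold crossing, subjective duration as time spent above threshold, and confidence as the peak of the trace, can all be computed from a single simulated leaky accumulator. The sketch below is illustrative only; the function name and all parameter values are assumptions, not the fitted model from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def leaky_detection(intensity, stim_duration, leak=3.0, threshold=1.0,
                    dt=0.001, noise=0.3, total_time=3.0):
    """Leaky accumulator driven by a pulse of given intensity and
    duration. Detection time = first threshold crossing; subjective
    duration = time the trace stays above threshold; confidence is
    read out from the trace maximum. Parameters are illustrative."""
    n = int(total_time / dt)
    x = 0.0
    detect_t, above, peak = None, 0.0, 0.0
    for i in range(n):
        inp = intensity if i * dt < stim_duration else 0.0
        # leaky integration of input plus diffusion noise
        x += (-leak * x + inp) * dt + noise * np.sqrt(dt) * rng.normal()
        peak = max(peak, x)
        if x > threshold:
            above += dt
            if detect_t is None:
                detect_t = i * dt
    return detect_t, above, peak
```

Because the leak makes the trace decay back toward zero once the stimulus ends, both the time above threshold and the peak grow with stimulus intensity and physical duration, mirroring the reported effects on subjective duration and confidence.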
This is an in-person presentation on July 21, 2023 (11:40 ~ 12:00 UTC).
Dr. David Sewell
Dr. Natasha Matthews
Prof. Jason Mattingley
Decision confidence plays a crucial role in humans’ capacity to make adaptive decisions in a noisy perceptual world. Often, our perceptual decisions, and the associated confidence judgements, require integrating sensory information from multiple modalities. Empirical investigation of the cognitive processes used for decision confidence under these conditions, however, has been severely limited. To bridge this gap, in this study we investigated the computations used to generate confidence when a decision requires integrating sensory information from both vision and audition, and the extent to which these computations are the same when sensory information is solely visual or auditory. Participants (N = 10) completed three versions of a categorisation task with visual, auditory, or audio-visual stimuli and made confidence judgements about their category decisions. In each version of the task, we varied both evidence strength (i.e., the strength of the evidence for a particular category) and sensory uncertainty (i.e., the intensity of the sensory signal). We evaluated several classes of models which formalise the mapping of evidence strength and sensory uncertainty to confidence in different ways: 1) unscaled evidence strength models, 2) scaled evidence strength models, and 3) Bayesian models. Our model comparison approach therefore provides a compelling specification of the class of algorithms used for decision confidence both when a signal has multiple perceptual dimensions and when it has a single perceptual dimension. Where the signal had multiple perceptual dimensions, we were able to specifically quantify how both evidence strength and sensory uncertainty are integrated across modalities, and the extent to which this integration was biased towards a particular modality.
Furthermore, by generating predictions from the unidimensional signals and comparing these predictions to behaviour on the multidimensional signals, we determined the extent to which the computations used for decision confidence generalise directly across decisional contexts.
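The three model classes differ in how a single measurement and its sensory uncertainty map to a confidence value, and the contrast can be made concrete with minimal toy readouts. These are hypothetical illustrations, not the models fitted in the study; all function names, parameters, and functional forms are assumptions.

```python
import math

def conf_unscaled(evidence, sigma, k=1.0):
    """Unscaled evidence strength: confidence tracks the magnitude of
    the evidence and ignores sensory uncertainty entirely."""
    return 1.0 / (1.0 + math.exp(-k * abs(evidence)))

def conf_scaled(evidence, sigma, k=1.0):
    """Scaled evidence strength: evidence is normalized by the level
    of sensory uncertainty before being mapped to confidence."""
    return 1.0 / (1.0 + math.exp(-k * abs(evidence) / sigma))

def conf_bayesian(evidence, sigma, mu=1.0):
    """Bayesian readout: posterior probability of the chosen category
    for two categories centred at +/- mu under Gaussian measurement
    noise of standard deviation sigma (illustrative setup)."""
    llr = 2.0 * mu * evidence / sigma**2  # log-likelihood ratio
    p_plus = 1.0 / (1.0 + math.exp(-llr))
    return max(p_plus, 1.0 - p_plus)
```

The diagnostic difference is visible immediately: for a fixed measurement, `conf_unscaled` is unchanged when `sigma` varies, whereas the scaled and Bayesian readouts shift with sensory uncertainty, which is what allows the experimental manipulation of uncertainty to separate the model classes.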
This is an in-person presentation on July 21, 2023 (12:00 ~ 12:20 UTC).
Meta-d’/d’ has become the quasi-gold standard for quantifying metacognitive efficiency in metacognition research, because it has been assumed that meta-d’/d’ controls for discrimination performance, discrimination criteria, and confidence criteria even without explicitly assuming a specific generative model underlying confidence judgments. Here, I show that only under a very specific generative model of confidence does meta-d’/d’ provide any control over discrimination performance, discrimination criteria, and confidence criteria. Simulations using a variety of different generative models of confidence showed that for most of these models, there exist at least some combinations of parameters for which meta-d’/d’ is affected by discrimination performance, discrimination task criteria, and confidence criteria. The single exception is a generative model of confidence according to which the evidence underlying confidence judgments is sampled independently of the evidence used in the discrimination decision process, from a Gaussian distribution truncated at the discrimination criterion. These simulations imply that previously reported associations with meta-d’/d’ do not necessarily reflect associations with metacognitive efficiency but may instead be caused by associations with discrimination performance, discrimination criteria, or confidence criteria. It is argued that sound measures of metacognition require an explicit generative model of confidence that fits the empirical data well.
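The general logic of such simulations, generate confidence ratings under an explicitly specified generative model and then examine the resulting type-2 behaviour, can be sketched with a toy signal detection observer. This is a simplified illustration, not the simulation code or the specific models from the talk: it only contrasts confidence based on the same evidence sample used for the decision with confidence based on a fully independent second sample, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_confidence(n=10000, dprime=1.5, conf_criterion=1.0,
                        resample_sd=None):
    """Toy SDT observer. Confidence uses either the same evidence
    sample as the discrimination decision (resample_sd=None) or an
    independent second sample with the given sd. A simplified stand-in
    for explicitly specified generative models of confidence."""
    stim = rng.integers(0, 2, n)                  # category 0 or 1
    x = rng.normal(dprime * (stim - 0.5), 1.0)    # decision evidence
    choice = (x > 0).astype(int)
    correct = choice == stim
    if resample_sd is None:
        y = x                                     # same evidence for confidence
    else:
        y = rng.normal(dprime * (stim - 0.5), resample_sd)
    high_conf = np.abs(y) > conf_criterion
    return correct, high_conf

def type2_accuracy(correct, high_conf):
    """P(correct | high confidence) - P(correct | low confidence):
    a crude index of how well confidence tracks accuracy."""
    return correct[high_conf].mean() - correct[~high_conf].mean()
```

Even this crude index shows how strongly the assumed generative model shapes apparent metacognition: the same-sample observer’s confidence tracks accuracy, while the independently resampled observer’s confidence barely does, despite identical discrimination performance, which is why measures of metacognitive efficiency cannot be interpreted independently of the generative model.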
This is an in-person presentation on July 21, 2023 (12:20 ~ 12:40 UTC).