Session 5: Thursday 11 February, 2pm-3pm
I present a first pass at modeling belief change in a probabilistic learning task involving healthy controls and patients with schizophrenia, based on data collected by Pfuhl and colleagues (The Arctic University of Norway). The model is heavily inspired by the stochastic learning models of Bush & Mosteller (1955), relating the updating of confidence to the learned likelihoods of each hypothesis. It is a linear operator model (LOM) that governs how each piece of evidence inflates or deflates confidence in each hypothesis over time. Two versions of the LOM are tested: (a) a model that allows both inflation and deflation at each step; (b) a “Bayesian-like” model that allows only inflation. A novel feature of the model is that the history of evidence determines which hypothesis is inflated and which is deflated on each trial. I compare the models by testing how well each makes out-of-sample predictions based on parameter estimates obtained by minimizing the sum of squared errors. I also compare the parameter estimates between the healthy control and schizophrenia groups, and conclude with directions for improving the models.
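As an illustration of the linear operator form the abstract refers to (a minimal sketch of the Bush & Mosteller update rule, not the authors' implementation): confirming evidence moves confidence a fixed proportion of the way toward 1, while disconfirming evidence shrinks it toward 0, and setting the deflation rate to zero yields the inflation-only, "Bayesian-like" variant. The parameter names below are illustrative.

```python
def lom_update(conf, confirming, alpha, beta):
    """One Bush & Mosteller-style linear operator step.

    conf:  current confidence in a hypothesis, in [0, 1]
    alpha: inflation rate applied on confirming evidence
    beta:  deflation rate applied on disconfirming evidence
           (beta = 0 gives the inflation-only, "Bayesian-like" variant)
    """
    if confirming:
        return conf + alpha * (1.0 - conf)  # move toward 1
    return (1.0 - beta) * conf              # shrink toward 0

# Example: one inflation then one deflation from an initial confidence of 0.5
c = lom_update(0.5, True, alpha=0.2, beta=0.1)   # 0.5 + 0.2 * 0.5 = 0.6
c = lom_update(c, False, alpha=0.2, beta=0.1)    # 0.9 * 0.6 = 0.54
```

Because both operators are linear in the current confidence, repeated application keeps confidence bounded in [0, 1] without any explicit normalization.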
Dr. Timothy Ballard
Interruptions in healthcare have been studied frequently because they are associated with increased medical errors, which can be detrimental to patient safety. However, attempts to reduce interruptions have not been widely successful, as there is still much to learn about their complexity. While interruptions can be problematic for the interruptee, they may be necessary for the interrupter to maintain patient safety, so both perspectives must be considered. We developed a computational model representing the role that interruptions play for clinicians within the hospital system, representing both the decision to interrupt and the decision to respond to an interruption. We ran simulations of the model to see how different decisions affect the efficiency of the interrupter, the interruptee, and the team. The simulations predict that deciding to interrupt detracts from the efficiency of the interruptee but maintains the efficiency of the interrupter, whereas deciding not to interrupt immediately detracts from the efficiency of the interrupter but maintains the efficiency of the interruptee. Future research will involve experimental studies to test these predictions and update the model, so that we can accurately represent the complexity of interruptions in healthcare and provide well-informed suggestions for interventions.
Dr. Ami Eidels
Many safety-critical jobs are performed by teams to improve task performance and minimize the risk of error by sharing task requirements. Cognitive workload is also closely related to task performance: increased workload is associated with poorer performance, and lower workload with better performance. While the relationship between workload and performance is well understood at the individual level, less research has focused on the workload of individuals within team environments. Our experiment investigated (i) whether groups benefit individual performance via group interaction or statistical facilitation, and (ii) how teamwork affects cognitive workload. We designed a dual task that required participant dyads (n = 50) to collaborate or compete to prevent a set of virtual balls from hitting the ground while concurrently completing a detection response task. We found that group type had little effect on primary measures of player performance or cognitive load in either collaborative or competitive groups. Assessment of behavioral data indicated differences in load-sharing strategy between group types. Finally, we used Systems Factorial Technology to describe group performance and found that, although both collaborative and competitive dyads outperformed individuals, both group types demonstrated limited performance capacity.
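For context on the capacity analysis mentioned above, here is a sketch of the standard Systems Factorial Technology workload capacity measure (not the authors' code, and with illustrative variable names): the OR capacity coefficient compares the team's cumulative hazard of responding by time t against the sum of the individual members' cumulative hazards, with values below 1 indicating limited capacity.

```python
import math

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -ln S(t), where S(t) is the
    proportion of response times exceeding t (must be > 0 at this t)."""
    survivors = sum(1 for rt in rts if rt > t) / len(rts)
    return -math.log(survivors)

def capacity_or(team_rts, member_a_rts, member_b_rts, t):
    """OR workload capacity coefficient at time t.

    C(t) < 1: limited capacity; C(t) = 1: unlimited; C(t) > 1: super capacity.
    """
    return cumulative_hazard(team_rts, t) / (
        cumulative_hazard(member_a_rts, t) + cumulative_hazard(member_b_rts, t)
    )
```

In practice the coefficient is evaluated across a range of t values using smoothed survivor-function estimates rather than at a single time point as in this sketch.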
Dr. Ami Eidels
Prof. Shayne Loft
Understanding human performance is a fundamental aim of psychologists. Cognitive workload has been assumed to influence performance by changing the cognitive resources available for tasks. However, there is a lack of evidence for a direct relationship between changes in workload within an individual over time and changes in that individual’s performance. We collected performance data using a Multiple Object Tracking task in which we measured workload objectively in real-time using a modified Detection Response Task. Using a multi-level Bayesian model controlling for task difficulty and past performance, we found strong evidence that workload both during and preceding a tracking trial was predictive of performance, such that higher workload led to poorer performance. These negative workload-performance relationships were remarkably consistent across individuals. The outcomes have significant implications for designing real-time adaptive systems to proactively mitigate human performance decrements, but also highlight the pervasive influence of cognitive workload more generally.
Prof. Jason McCarley
When human monitors are tasked with detecting rare signals among noise for prolonged periods, they typically exhibit a decline in correct detections over time. This so-called vigilance decrement is usually attributed to losses in the monitor's ability to distinguish signal from noise (i.e., sensitivity) in high-event-rate, memory-loading tasks (Parasuraman & Davies, 1977). Recent work, however, suggests that shifts in observers’ willingness to respond (i.e., response bias) can masquerade as sensitivity losses (Thomson et al., 2016), prompting reconsideration of the mechanisms underlying the vigilance decrement. The current experiment used a computational modeling approach to examine the extent to which observed vigilance decrements reflect changes in sensitivity, response bias, and attentional lapses. One hundred twenty-nine participants completed an online visual signal detection task, judging whether the separation between two probes exceeded a criterion value. Separation was varied across trials using the method of single stimuli, and the data were fit with logistic psychometric curves. Parameters representing sensitivity, response bias, and attentional lapse rate were compared across the first and last four minutes of the vigil. A hierarchical Bayesian analysis gave decisive evidence of an increased attentional lapse rate, strong evidence of a conservative shift in response bias, and anecdotal evidence of decreased sensitivity. These results suggest that the vigilance decrement primarily reflects lapses in operator attention and a decreased willingness to respond ‘signal’ with time on task. Understanding the mechanisms underlying the vigilance decrement is important for effectively mitigating it.
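As a sketch of the kind of lapse-augmented psychometric model described (one common parameterization, not necessarily the authors' exact likelihood): on a lapse trial the observer is assumed to guess, so the probability of a ‘signal’ response is a mixture of guessing and a logistic function of probe separation. Parameter names below are illustrative.

```python
import math

def p_signal(x, mu, sigma, lapse):
    """Probability of responding 'signal' at probe separation x.

    mu:    curve midpoint (shifts with response bias)
    sigma: slope scale (shallower slope = lower sensitivity)
    lapse: attentional lapse rate; on a lapse the observer is assumed
           to guess 'signal' with probability 0.5
    """
    logistic = 1.0 / (1.0 + math.exp(-(x - mu) / sigma))
    return lapse * 0.5 + (1.0 - lapse) * logistic

# With no lapses the curve spans (0, 1); a lapse rate of 0.1
# compresses its asymptotes to 0.05 and 0.95.
```

Under this parameterization an increased lapse rate flattens the curve's asymptotes, a conservative bias shift moves mu, and decreased sensitivity increases sigma — the three effects the hierarchical Bayesian comparison distinguishes.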
Dr. Matthias Mittner
Prof. Andrew Heathcote
Mind wandering is ubiquitous in everyday life, yet unlike many cognitive activities it cannot be directly manipulated in the lab. Instead, mind wandering is typically assessed as a dependent variable with 'thought probes', infrequent self-report items enquiring about the focus of attention interspersed with items from a primary cognitive task. Despite measurement as a dependent variable, many mind wandering investigations treat thought probes as an independent variable: performance in the primary task is analysed as a function of thought probe responses. This approach violates assumptions of many conventional statistical analyses and fails to explain how people generate responses to thought probes. Here, we treat both streams of data - behavioural and self-report - as the observable outcomes of an integrated latent cognitive process. Choices and response times in two decision-making studies were modelled with the Timed Racing Diffusion Model (TRDM). We structurally linked TRDM parameters to a latent 'mind wandering' continuum of a Thurstonian "strengths" model, which generated self-report responses to thought probes. The model captured all key quantitative trends in accuracy, RT and self-report data at the individual participant level. From a set of competing models, the best explanation of the data assumed that sensitivity to non-target stimuli is negatively associated with the propensity to mind wander during ongoing performance. This goes against the oft-stated though rarely modelled conclusion that mind wandering is associated with greater processing variability.