Subjects viewed a brief flash of 24 dots of three colors, interleaved and randomly arrayed. Their task was to move a mouse cursor to the centroid of the dots of each color: three centers of gravity, i.e., three statistical summary judgments. An ideal-detector analysis showed that subjects accurately judged all three centroids, using at least 13 of the 24 stimulus dots. This is an astoundingly efficient pre-conscious computation, given that fewer than three dots are remembered consciously. A more detailed analysis quantified five sources of subject error variance. Four are independent and additive: imperfect color-attention filters; a Bayesian-like bias toward a central tendency; storage, retrieval, and cursor-misplacement error; and a large residual error due mostly to inefficient encoding. The fifth is interactive: error in all four components increased when three centroid judgments, rather than one, were required on each trial.
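The efficiency logic behind an ideal-detector analysis of centroid judgments can be illustrated with a minimal one-dimensional simulation. This is a sketch of the general technique only, not the study's actual analysis: it assumes that an observer who effectively averages k of n dots has response variance inflated by n/k relative to the ideal observer, so the variance ratio recovers the efficiency k/n. The dot counts (24 shown, 13 used) echo the abstract, but the spread and trial counts are arbitrary.

```python
import random
import statistics

random.seed(1)

def trial(n_dots=24, k_used=24, spread=1.0):
    """One simulated trial: dots are drawn around a true centroid at 0,
    and the observer reports the mean of only k_used of the n_dots."""
    dots = [random.gauss(0.0, spread) for _ in range(n_dots)]
    sample = random.sample(dots, k_used)
    return sum(sample) / k_used

def response_variance(k_used, n_trials=20000):
    """Variance of the simulated observer's centroid estimates."""
    estimates = [trial(k_used=k_used) for _ in range(n_trials)]
    return statistics.pvariance(estimates)

# The ideal observer uses all 24 dots (variance spread^2 / 24);
# an observer using 13 dots has variance spread^2 / 13, so the
# ratio of the two variances estimates the efficiency 13/24.
v_ideal = response_variance(24)
v_partial = response_variance(13)
efficiency = v_ideal / v_partial
```

With these settings, `efficiency` comes out near 13/24 ≈ 0.54; a real analysis would compare human response variance against the ideal in the same way, after partialling out the other error sources the abstract lists.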
Everybody seems to agree that for an empirical observation to provide support for a theory, the theory should make risky predictions. There is considerably less agreement, though, about what exactly constitutes a risky prediction. This is unfortunate because, in one way or another, all model selection measures that go beyond fit take an implicit or explicit stance on riskiness or its close cousin, test severity. In this poster, I discuss three proposals for what counts as a risky prediction and illustrate how each proposal leads to a different conclusion about the falsifiability of a theory. In particular, I argue that the most common implementation of riskiness (the notion that predictions should be precise, which implies that complexity plays a crucial role in theory testing) is misguided.
Recent work on the cognitive effects of psychedelics proposes that these substances act to weaken the impact of prior expectations, thus increasing the subject’s ability to flexibly accommodate patterns in new experiential data (RElaxed Beliefs Under pSychedelics [REBUS]; Carhart-Harris & Friston, 2019, doi 10.1124/pr.118.017160). Whereas this theory has previously been applied to perception and belief, here we apply it to learning. We develop a Bayesian model of reinforcement learning based on the Kalman filter, in which psychedelics increase the variance of the random walk or, equivalently, decrease observation noise. Thus, as dosage increases, the impact of new observations increases relative to previous experience. The model is applied to data on reversal learning in rats (King et al., 1974, doi 10.1111/j.1476-5381.1974.tb08611.x) and is found to provide a good account of the positive effects of LSD on learning rates following reversal.
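The core mechanism of such a Kalman-filter learning model can be sketched in a few lines. In the scalar Kalman filter, the learning rate is the Kalman gain, which grows as the random-walk (state-transition) variance grows relative to the observation noise. The mapping from dose to `walk_var` and all numerical values below are illustrative assumptions, not the authors' fitted parameters.

```python
def kalman_gain(prior_var, walk_var, obs_var):
    """Kalman gain, i.e. the learning rate: the weight placed on a new
    observation relative to the current belief."""
    pred_var = prior_var + walk_var  # uncertainty after the random-walk step
    return pred_var / (pred_var + obs_var)

def kalman_update(mean, prior_var, obs, walk_var, obs_var):
    """One belief update: shift the mean toward the observation by the
    gain, and shrink the predictive variance accordingly."""
    k = kalman_gain(prior_var, walk_var, obs_var)
    pred_var = prior_var + walk_var
    return mean + k * (obs - mean), (1 - k) * pred_var

# Under the REBUS-style assumption that dose scales the random-walk
# variance upward, higher doses yield a larger gain, so new outcomes
# weigh more against prior experience:
low_dose_gain = kalman_gain(prior_var=0.1, walk_var=0.05, obs_var=1.0)
high_dose_gain = kalman_gain(prior_var=0.1, walk_var=0.50, obs_var=1.0)
```

Here `high_dose_gain > low_dose_gain`, which is the model's explanation for faster relearning after reversal: the drug does not change what is observed, only how strongly each observation revises the prior.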
Interference with belief and memory is common. People persist in believing prior misinformation over a later-encountered correction (the continued influence effect, CIE) and in remembering previously learned material over recently learned material (proactive interference, PI). We studied the potential (dis)association between belief and memory in an experiment combining these two paradigms. We found evidence for both interaction (a U-shaped relationship in which extreme belief improved change recollection and recall) and dissociation (participants maintained lower belief in the correction regardless of successful recall). Drawing on the Knowledge Revision Components framework and a scaffolded encoding model built on Retrieving Effectively from Memory (REM), we propose that the misinformation memory is active, is encoded with the correction, and is then scaffolded to memory to the degree of belief (association). It also carries a belief tag that may remain (CIE) or be adjusted by the correction. As memories compete to be sampled during recall (PI), the belief tag is accessed upon retrieval (dissociation).
Jeremy B. Caplan
Much of human learning proceeds incrementally through feedback. Brain activity that tracks such learning at the item level could inform our understanding of basic neural processes and guide memory training protocols. We sought to identify learning-relevant spectral EEG features in a task in which 45 participants learned stimulus-response mappings for each of 48 words through trial-and-error learning with feedback. Frontal midline theta activity, an established univariate marker of feedback processing, was not predictive of subsequent item-specific knowledge. However, multivariate classifiers (LDA, SVM) incorporating a broad range of frequencies and electrodes predicted whether an item was learned, to a substantial degree (AUC ~0.7). Interestingly, classifiers succeeded only on correct trials, not on error trials. These findings validate the classifier approach to tracking feedback-guided learning following positive outcomes, and suggest that highly replicated univariate EEG features are not as relevant for learning as multivariate activity.
Understanding human performance is a fundamental aim of psychology. Cognitive workload has been assumed to influence performance by changing the cognitive resources available for tasks. However, there is a lack of evidence for a direct relationship between changes in workload within an individual over time and changes in that individual’s performance. We collected performance data using a Multiple Object Tracking task in which we measured workload objectively in real time using a modified Detection Response Task. Using a multilevel Bayesian model controlling for task difficulty and past performance, we found strong evidence that workload both during and preceding a tracking trial was predictive of performance, such that higher workload led to poorer performance. These negative workload-performance relationships were remarkably consistent across individuals. The outcomes have significant implications for designing real-time adaptive systems that proactively mitigate human performance decrements, and also highlight the pervasive influence of cognitive workload more generally.