Memory
Dr. Jerome Busemeyer
Mr. Adam Huang
Sampling is pivotal in existing models of probability judgments, yet it harbors two unresolved questions: (1) the specific factors that affect sampling errors have not been thoroughly investigated; (2) whether there are judgment errors beyond sampling's reach remains unknown. Our study aims to tackle these gaps with two approaches. First, we suggest that sampling errors inversely correlate with cognitive reflection scores, indicating that intuitive thinkers are more susceptible to such errors than analytical thinkers. Second, we posit that an increased cognitive load could limit the ability to collect samples, thereby increasing sampling errors. Our exploration focuses on the impact of these two factors on probability and normality identities as presented by Costello and Watts (2014) and Huang et al. (2024). We evaluate how well current sampling models predict the relationship between the two factors and the identities. Additionally, we investigate whether some identities exhibit different responses to the two factors compared to others.
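Costello and Watts's (2014) account treats a probability judgment as the proportion of mentally drawn samples, each of which may be misread with some small probability; certain algebraic combinations of judgments then cancel this noise in expectation. A minimal simulation sketch of that cancellation for the addition-law identity P(A) + P(B) − P(A∧B) − P(A∨B); the sample size and noise rate (n = 20, d = 0.15) are illustrative assumptions, not fitted values:

```python
import random

def noisy_estimate(p, n=20, d=0.15):
    """Draw n samples of an event with true probability p; each sample is
    misread (flipped) with probability d, as in a noisy-sampling account.
    n and d are illustrative choices, not values from the study."""
    hits = 0
    for _ in range(n):
        event = random.random() < p
        if random.random() < d:  # read-out noise flips this sample
            event = not event
        hits += event
    return hits / n

def addition_identity(pA, pB, pAB):
    """P(A) + P(B) - P(A and B) - P(A or B): exactly zero under
    probability theory, and zero in expectation even with noise d."""
    pAorB = pA + pB - pAB
    return (noisy_estimate(pA) + noisy_estimate(pB)
            - noisy_estimate(pAB) - noisy_estimate(pAorB))

random.seed(1)
vals = [addition_identity(0.6, 0.5, 0.3) for _ in range(20000)]
mean = sum(vals) / len(vals)
print(round(mean, 2))  # close to 0: the noise terms cancel in this identity
```

Because the flip noise enters each estimate as p(1 − 2d) + d, the four d terms cancel (+d + d − d − d), which is why this particular identity is robust to sampling noise while single judgments are not.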
This is an in-person presentation on July 20, 2024 (10:00 ~ 10:20 CEST).
Dr. Mirko Thalmann
Dr. Eric Schulz
We all know the feeling of searching our memory for that one particular piece of information. However, if long-term memory (LTM) retrieval is indeed a search process, the time it takes to remember a specific memory should be strongly affected by two factors: (1) the number of memories and (2) the organization of these memories. We tested these assumptions and used retrieval times (RTs) to investigate how LTM is organized. Specifically, participants learned word pairs, and we tested how LTM RTs for cued words are affected by the number of learned word pairs. We also manipulated the semantic similarity of the words in a word pair using Word2Vec embeddings to test whether semantic similarity decreases search times in LTM. The validity of the Word2Vec embeddings was confirmed in a separate study, where we showed a high correlation (r = 0.81) with human pairwise similarity ratings. We found that RTs were indeed longer after learning more word pairs and that semantically similar word pairs could be retrieved faster. In a second study, we tested whether additional context cues during encoding and retrieval speed up RTs. Preliminary results suggest that this is the case. However, in line with cue-overload theory, the benefit of the context cue depended on how many items were originally associated with it. These findings are consistent with a search-based model of retrieval, illustrating its sensitivity to the number of memory candidates, while highlighting the role of cue specificity in optimizing search performance.
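The Word2Vec validation step amounts to comparing cosine similarities between embedding vectors with human similarity ratings. A self-contained sketch of the similarity computation, using toy low-dimensional vectors in place of real Word2Vec embeddings (which typically have ~300 dimensions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 4-dimensional "embeddings"; stand-ins for real Word2Vec vectors.
cat = [0.9, 0.8, 0.1, 0.0]
dog = [0.8, 0.9, 0.2, 0.1]
car = [0.1, 0.0, 0.9, 0.8]

# A semantically related pair should score higher than an unrelated one.
print(cosine_similarity(cat, dog) > cosine_similarity(cat, car))  # True
```

In the study itself, these pairwise similarities would be correlated against human ratings (yielding the reported r = 0.81) and used to select high- versus low-similarity word pairs.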
This is an in-person presentation on July 20, 2024 (10:20 ~ 10:40 CEST).
Prof. Sanne Schagen
Dr. Joost Agelink van Rentergem
A considerable number of non-central nervous system (non-CNS) cancer survivors face long-term cognitive impairments across various domains of cognition after successful treatment. Two tests used to measure working memory and attention are the digit span forward and digit span backward, which were computerized to assess cognitive deficits in cancer survivors. These tests are generally analyzed through all-or-nothing scoring, discarding potentially useful information in the input data. We aim to construct a novel model that separates the various processes measured by the digit span tests, and we investigate which cognitive processes are impaired in cancer survivors. We use a computerized testing battery to gather input data from the digit span tests, and use partial-credit scoring based on Damerau-Levenshtein distance as the primary outcome measure. We formulate a hierarchical Bayesian cognitive process model which uses these data to identify three separate processes: working memory capacity, i.e., the maximum span length an individual is able to reproduce, which influences both forward and backward performance; attentional control, which modulates both forward and backward performance; and executive control, which exclusively modulates backward performance. We compare these process outcomes between non-CNS cancer survivors and healthy controls, to investigate whether our model is more informative than traditional clinical measures. The digit span tests can be separated into three distinct cognitive processes, which can then be used to compare patient populations to healthy controls. More generally, formal modeling allows for the extraction of more precise information in describing the cognitive deficits faced by patients.
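Partial-credit scoring of a recalled digit string can be sketched as one minus the Damerau-Levenshtein distance normalized by string length; the exact normalization used in the study is an assumption here. The variant below is the common optimal-string-alignment form of the distance:

```python
def damerau_levenshtein(a, b):
    """Edit distance counting insertions, deletions, substitutions, and
    transpositions of adjacent symbols (optimal-string-alignment variant)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def partial_credit(presented, recalled):
    """Score in [0, 1]; the length normalization is an assumed choice."""
    if not presented:
        return 1.0
    dist = damerau_levenshtein(presented, recalled)
    return 1 - dist / max(len(presented), len(recalled))

print(partial_credit("52839", "52389"))  # adjacent swap: 1 - 1/5 = 0.8
```

Under all-or-nothing scoring the response "52389" would earn zero; here it keeps most of its credit, which is the extra information the model exploits.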
This is an in-person presentation on July 20, 2024 (10:40 ~ 11:00 CEST).
Dr. Daniel Schneider
Anna-Lena Schubert
Measuring individual differences in working memory processes is challenging, particularly if one is interested in the question of which specific aspect of working memory capacity is most relevant for individual differences in cognitive abilities. Mathematical models can address this issue, as they are capable of mapping processes of interest with parameters that are mathematically derived from hypotheses about the nature of these latent processes. The Memory Measurement Model framework (M3; Oberauer & Lewandowsky, 2018) consists of a collection of such cognitive measurement models that isolate parameters associated with distinct working memory processes, such as the formation of bindings or the filtering of irrelevant distractors, in widely used paradigms like simple or complex span tasks. Based on simulations, we developed a series of experiments for different stimulus modalities, tailored to estimate the parameters of the M3 complex span models and to be concurrently used for electrophysiological research. We demonstrate that the estimated parameters are related to specific neurocognitive processes such as the P300 and the contingent negative variation and are capable of mapping individual differences in these processes. We will discuss how this neurocognitive psychometric approach enables more precise measurement of latent working memory processes and whether model parameters can be mapped to neurocognitive correlates.
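The M3 framework derives predicted response probabilities from additive activations of response categories fed into a normalized (Luce) choice rule. A deliberately simplified sketch of that idea, with hypothetical parameter names and values (background activation b, general item activation a, binding strength c) rather than the full M3 complex-span model:

```python
def m3_choice_probs(c=5.0, a=1.0, b=0.1, n_other=5, n_distractors=6):
    """Simplified M3-style measurement model: additive activations per
    response category, normalized into choice probabilities.
    Parameter roles and values are illustrative assumptions."""
    act_correct = b + a + c      # cued item: background + activation + binding
    act_other = b + a            # other list items: activated but not bound
    act_distractor = b           # distractors: background only (no filtering term)
    total = act_correct + n_other * act_other + n_distractors * act_distractor
    return {
        "correct": act_correct / total,
        "other_item": n_other * act_other / total,
        "distractor": n_distractors * act_distractor / total,
    }

probs = m3_choice_probs()
print({k: round(v, 2) for k, v in probs.items()})
```

The appeal of this setup for individual differences is that a participant's error pattern (e.g. many other-item intrusions versus many distractor intrusions) constrains distinct parameters, which can then be related to neural markers such as the P300.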
This is an in-person presentation on July 20, 2024 (11:00 ~ 11:20 CEST).
Dr. Constantin Meyer-Grant
Dr. Rich Shiffrin
After study of a list of items, recognition memory is usually tested with a single item, half from the list (targets, or OLD) and half not from the list (foils, or NEW). The present research tests the ability of existing models to generalize to new situations by using a novel paradigm: testing with two items, both OLD, both NEW, or one of each. Some tests used Two-Alternative Forced Choice (2AFC) in which Ss were asked to choose the item more likely OLD; other tests used four-way classification (4WC) in which Ss were asked to classify the two items as 1) both old, 2) both new, 3) left old, right new, or 4) left new, right old. Both choice probabilities (accuracy) and response time were measured. Each S studied lists containing 12 words, 24 words, 12 pictures, 24 pictures, or lists of 12 words randomly mixed with 12 pictures. After study of mixed lists, some tests were two words, some were two pictures, and some were one picture with one word. The choice probabilities in all these conditions were predicted well by the Retrieving Effectively from Memory model (REM) of Shiffrin and Steyvers (1997) using the 1997 parameter values. Signal-detection modeling (unequal variance Gaussian strength distributions) predicted the choice probabilities using different parameters for different conditions, but suggested that decisions are based on the ratio of strengths rather than raw values, similar to the way that REM uses odds based on likelihood ratios. Initial analysis and modeling gave support to the idea that REM can be extended successfully to predict response times.
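The ratio-based decision rule suggested by the signal-detection modeling can be sketched as choosing the test item with the higher old/new likelihood ratio under unequal-variance Gaussian strength distributions; the distribution parameters below are illustrative, not the fitted values:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def odds_old(strength, mu_old=1.0, sigma_old=1.25, mu_new=0.0, sigma_new=1.0):
    """Likelihood ratio old/new for a memory-strength value under
    unequal-variance Gaussian SDT (parameter values are assumptions)."""
    return gaussian_pdf(strength, mu_old, sigma_old) / gaussian_pdf(strength, mu_new, sigma_new)

def choose_2afc(left_strength, right_strength):
    """2AFC rule: pick the item with the higher odds of being OLD,
    i.e., a decision on ratios rather than raw strengths."""
    return "left" if odds_old(left_strength) > odds_old(right_strength) else "right"

print(choose_2afc(1.4, 0.2))  # left
```

This mirrors how REM bases decisions on odds computed from likelihood ratios, which is why the two accounts make similar qualitative predictions for the choice probabilities.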
This is an in-person presentation on July 20, 2024 (11:20 ~ 11:40 CEST).