Dr. Adam Osth
Prof. Simon Dennis
The list-length effect (LLE) in recognition memory refers to the phenomenon where performance decreases as the length of the to-be-remembered list increases. The phenomenon has been theoretically important in the literature on the sources of forgetting, since the existence of the LLE entails that memory interference stems from the other studied items (i.e., item-noise) rather than from sources such as interference from pre-experimental experience (i.e., context-noise). Regarding the existence of the LLE, Brandt et al. (2019) recently showed that the experimental designs supporting a null LLE suffer from confounds in the ordering of experimental conditions (i.e., the use of counterbalanced within-subjects designs). Prompted by this new evidence for an LLE, in the current study we re-examined the LLE more systematically, manipulating list length, delay length, stimulus type, and study time (60 conditions) in a between-subjects design with a large sample collected via mTurk (3,600 participants, 60 per condition). Results show evidence for an LLE, with differing amounts of interference across stimulus types and conditions, supporting the claim that item-noise affects recognition memory. Additionally, we utilized a computational model (Osth & Dennis, 2015) to compare the relative amounts of interference (e.g., item-noise, context-noise) affecting recognition memory across conditions. We find that although item-noise exists, context-noise makes the greater contribution to recognition memory.
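The logic linking the LLE to item-noise can be sketched with a toy variance decomposition (this is an illustration in the spirit of global matching models, not the Osth & Dennis, 2015, model itself; all parameter values are hypothetical): if item-noise adds variance for every studied item while context-noise is constant, predicted discriminability falls with list length only to the extent that item-noise is present.

```python
def predicted_dprime(list_length, sigma_self=0.5, sigma_item=0.1,
                     sigma_context=1.0, signal=2.0):
    """Toy d' prediction: item-noise variance grows with list length,
    context-noise variance does not. All parameters are hypothetical."""
    noise_sd = (sigma_self**2
                + (list_length - 1) * sigma_item**2   # item-noise term scales with length
                + sigma_context**2) ** 0.5            # context-noise term is constant
    return signal / noise_sd

for n in (20, 40, 80, 160):
    print(n, round(predicted_dprime(n), 3))  # d' shrinks as list length grows
```

With `sigma_item = 0` the predicted d' is flat across list lengths, which is the signature of a purely context-noise account; a nonzero `sigma_item` produces the LLE.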
Dr. Adam Osth
In episodic memory research, there is a debate concerning whether decision-making in recognition and source memory is better explained by models that assume discrete cognitive states or continuous underlying strengths. One aspect in which these classes of models differ is their predictions regarding the ability to retrieve contextual details (or source details) of an experienced event when the event itself is not recognized. Discrete-state models predict that when items are unrecognized, source retrieval is not possible and only guess responses can be elicited. In contrast, models assuming continuous strengths predict that it is possible to retrieve the source of unrecognized items (albeit with low accuracy). Empirically, numerous studies have reported either chance accuracy or above-chance accuracy for source memory in the absence of recognition. For instance, studies presenting recognition and source judgments for the same item in immediate succession have revealed chance-level accuracy, while studies presenting a block of recognition judgments followed by a block of source judgments have revealed slightly above-chance accuracy. In the present investigation, data from two novel experiments involving multiple design manipulations were analyzed using a hierarchical Bayesian signal detection model. Across most conditions, source accuracy for unrecognized items was slightly above chance. It is suggested that findings of a null effect in the prior literature may be attributable to design elements that hinder source memory as a whole, and to high degrees of uncertainty in the participant-level source data when conditioned on unrecognized items.
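The continuous-strength prediction can be illustrated with a small simulation (a toy model, not the hierarchical Bayesian model fitted in the study; all parameter values are hypothetical): when item and source evidence are continuous Gaussian strengths, conditioning on item evidence falling below the recognition criterion attenuates but does not eliminate source information.

```python
import random

random.seed(1)

def source_accuracy_given_miss(n=200_000, d_item=1.0, d_source=0.5,
                               rho=0.4, criterion=0.5):
    """Toy continuous-strength simulation: item and source evidence are
    correlated Gaussian strengths. Source accuracy is scored only on
    trials where the item goes unrecognized (evidence below criterion).
    All parameters are hypothetical."""
    misses = correct = 0
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        item = d_item + z1                        # item (recognition) strength
        true_source = random.choice([-1, 1])      # true source A (+1) or B (-1)
        source = (true_source * d_source / 2
                  + rho * z1 + (1 - rho**2) ** 0.5 * z2)
        if item < criterion:                      # item unrecognized (a miss)
            misses += 1
            if (source > 0) == (true_source > 0):
                correct += 1
    return correct / misses

acc = source_accuracy_given_miss()  # modestly but reliably above 0.5
```

A discrete-state account, by contrast, would force `acc` to 0.5 exactly, since unrecognized items permit only guessing.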
Prof. Joe Houpt
General recognition theory (GRT), a multivariate generalization of signal detection theory, is a powerful means for inferring the interaction of representations and decision processes when perceiving a multidimensional stimulus. In order for inferences to be made from a GRT experiment, stimuli must be sufficiently confusable that subjects make identification errors. Stimulus intensities are typically chosen through repeated pilot testing, and the same stimuli are used for every subject in the experiment. This approach is time-consuming on its own but can fail critically for some subjects due to individual differences. Here, we propose an algorithm to improve the effectiveness of GRT by adapting the design of the experiment to individual subjects. Our method leverages adaptive psychophysical methods (e.g., Psi, Quest+) to iteratively fit a highly constrained GRT model to a subject’s responses in real time. The algorithm converges rapidly on a rough approximation of the subject’s internal perceptual process by assuming perceptual independence, perceptual separability, and decisional separability. The user only needs to specify the intensity range of interest for each perceptual dimension of the stimulus for the adaptive process to generate reasonable stimuli for the main GRT experiment. When combined with the analysis code provided by existing R packages, our method permits a completely automated pipeline from hypothesis to data collection to statistical inference. We present the results of a simulation study assessing the recoverability and statistical properties of the algorithm, and a human experiment comparing the adaptive process to the more traditional pilot-testing approach.
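The constraint that makes rapid convergence possible can be sketched as follows (a minimal illustration of the assumed model class, not the proposed algorithm; criteria and means are hypothetical): under perceptual independence and perceptual/decisional separability, the predicted 2×2 identification confusion probabilities factor into a product of two one-dimensional signal detection channels, so only marginal parameters need to be estimated.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def channel_probs(mu, criterion):
    """One perceptual dimension: P(respond 'low') and P(respond 'high')
    for a unit-variance Gaussian percept with mean mu."""
    p_low = norm_cdf(criterion - mu)
    return p_low, 1 - p_low

def identification_probs(mu_x, mu_y, c_x=0.5, c_y=0.5):
    """Under perceptual independence + perceptual/decisional separability,
    the four identification probabilities are products of the marginals."""
    px = channel_probs(mu_x, c_x)
    py = channel_probs(mu_y, c_y)
    return {(i, j): px[i] * py[j] for i in (0, 1) for j in (0, 1)}

probs = identification_probs(mu_x=1.0, mu_y=0.0)  # one stimulus's confusion cell probabilities
```

Because the full confusion matrix is determined by the two marginal channels, an adaptive staircase on each dimension (e.g., Psi or Quest+) suffices to place stimuli in the confusable range.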
It has been well established in recognition memory paradigms that participants exhibit higher probabilities of falsely endorsing lures that are perceptually similar to the studied words. Recognition memory models explain this phenomenon as a consequence of global similarity computation: choice probability is proportional to the aggregated similarity between the probe word and each of the study list words. However, to date such models have not integrated perceptual representations of the words themselves. In this work, I explore the consequences of a variety of word-form representations from the psycholinguistics of reading literature. These include representations where similarity is a function of the number of in-position letter matches (slot codes and the both-edges representation), representations with noisy position codes (the overlap model; Gomez, Ratcliff, & Perea, 2008), and representations based on relative position matches (bigram models). Global similarity among the representations was linked to choice and response times using the linear ballistic accumulator model (Brown & Heathcote, 2008). Results demonstrated (a) a general superiority of bigram models, (b) changes in perceptual representations under shallow processing, and (c) interference from perceptual similarity comparable to that from semantic similarity, where semantic similarity was calculated using Word2Vec representations (Mikolov et al., 2013).
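The global-similarity computation contrasted here can be sketched for two of the representation classes (a simplified illustration with made-up similarity functions, not the fitted model): slot codes count in-position letter matches, while open-bigram codes count shared ordered letter pairs, and either can be aggregated over the study list to yield a global match signal.

```python
def slot_code_similarity(probe, word):
    """Slot-code representation: proportion of in-position letter matches."""
    matches = sum(p == w for p, w in zip(probe, word))
    return matches / max(len(probe), len(word))

def bigram_set(word):
    """Open bigrams: all ordered letter pairs, preserving relative position."""
    return {(word[i], word[j])
            for i in range(len(word)) for j in range(i + 1, len(word))}

def bigram_similarity(probe, word):
    """Bigram representation: Jaccard overlap of open-bigram sets."""
    a, b = bigram_set(probe), bigram_set(word)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def global_similarity(probe, study_list, sim):
    """Aggregate similarity of the probe to every study-list item; in the
    models described above this global match drives choice probability."""
    return sum(sim(probe, w) for w in study_list)

study = ["cat", "cart", "dog"]
print(global_similarity("car", study, slot_code_similarity))
print(global_similarity("car", study, bigram_similarity))
```

A perceptually similar lure such as "car" accrues a higher global match against this study list than an unrelated probe, which is the mechanism behind elevated false-alarm rates; in the full model, these global similarities feed the drift rates of a linear ballistic accumulator.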