Dr. Sudeep Bhatia
Free association among words is a fundamental and ubiquitous memory task, yet there have been few attempts to apply established cognitive process models of memory search to free association data. We address this gap using a simplified variant of a popular recurrent neural network model of recall, the Context Maintenance and Retrieval (CMR) model, which we fit to a large free association dataset. We find that this network, equipped with response biases and asymmetric cue-context and context-cue weight matrices, outperforms previous models that lack these components (which emerge as special cases of our model) on a variety of metrics, including prediction of association asymmetries. We also find that continued free association, in which the participant provides multiple responses to a single cue, is best described by a combination of (a) a partially decaying context layer, in which representations of the cue and earlier responses are largely maintained over time, and (b) a weak but persistent, non-decaying effect of the cue. This network also accounts for ‘response chaining’ effects in continued free association, whereby earlier responses appear to prime later responses. Finally, we show that training our CMR variant on free association data generates improved predictions for list-based recall, demonstrating the value of free association for the study of many different types of memory phenomena. Overall, our analysis provides new explanations for empirical findings on free association, predicts free association with increased accuracy, and integrates theories of free association with established cognitive process models of memory.
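The partially decaying context layer described above can be illustrated with a minimal sketch of the standard CMR context-drift update, in which the context vector blends the previous context with the current item's input context while staying at unit length. This is the textbook CMR equation, not the authors' actual implementation; the parameter name `beta` follows common CMR notation.

```python
import numpy as np

def update_context(c_old, c_in, beta):
    """CMR-style context drift: blend the old context with the
    incoming item's context, keeping the result at unit length."""
    c_in = c_in / np.linalg.norm(c_in)
    dot = float(c_old @ c_in)
    # rho is chosen so that the updated context has unit norm
    rho = np.sqrt(1.0 + beta**2 * (dot**2 - 1.0)) - beta * dot
    return rho * c_old + beta * c_in
```

With beta near 1, context is dominated by the most recent response; with beta near 0, the cue and earlier responses are largely maintained, as in the partially decaying context layer the abstract describes.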
Dr. Sudeep Bhatia
Dr. John McCoy
What kinds of words are more memorable? Can we use insights from data science and high-dimensional semantic representations, derived from large-scale natural language data, to predict memorability? In Study 1, we trained a model to map semantic representations directly to the recognizability and recallability of 576 unique words from a multi-session mega-study. Specifically, we tested how well we could predict the average memorability of words from their vector representations. Leave-one-out cross-validation results demonstrated that our model was able to reliably predict which words are more likely to be recognized and recalled, with very high accuracy (r = 0.70, 95% Confidence Interval (CI) = [0.656, 0.739]). We next compared our model's predictions to those of an alternative psycholinguistic model trained only on conventional word properties such as concreteness and word frequency (r = 0.28, 95% CI = [0.203, 0.353]). Although previous work in the memory literature has consistently demonstrated the importance of psycholinguistic properties, our method of mapping rich semantic representations to recognition and recall data outperformed this alternative model. Combining semantic representations and psycholinguistic properties, however, further increased our model's predictive power (r = 0.72, 95% CI = [0.679, 0.757]). In Study 2, we sought to examine and interpret the information contained in semantic representations that gives rise to these successful predictions. We identified the words and concepts in these multi-dimensional spaces that are most (vs. least) strongly associated with the words in our study pool. These associations allowed us to characterize the variability in memorability across study words and to determine which attributes, traits, and concepts are most associated with the words that participants were more likely to remember. The results of this study highlighted the top constructs related to memory performance.
These constructs included those relating to humans (e.g., family-, female-, and male-related constructs), emotions, and arousing situations. Altogether, we introduced a computational approach that can generalize its learned mappings to make quantitative predictions for the memorability of millions of words or phrases with semantic representations, without the need for any further participant data. In addition, we were also able to identify the psychological concepts and constructs most related to high (or low) memory performance. Thus, we provide evidence that high-dimensional semantic representations are a powerful predictive tool for shedding light on which words are more likely to be remembered and what the underlying psychological constructs of successful memory may be.
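The abstract does not specify the regression method used to map semantic vectors to memorability, so the sketch below uses ridge regression (one common choice for high-dimensional predictors) with the leave-one-out cross-validation scheme the abstract describes. All names are illustrative.

```python
import numpy as np

def loo_ridge_predict(X, y, alpha=1.0):
    """Leave-one-out predictions from a ridge regression mapping
    word vectors (rows of X) to memorability scores y."""
    n, d = X.shape
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i          # hold out word i
        Xtr, ytr = X[mask], y[mask]
        # closed-form ridge solution: (X'X + alpha*I)^-1 X'y
        w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(d), Xtr.T @ ytr)
        preds[i] = X[i] @ w
    return preds
```

Correlating the held-out predictions with the observed memorability scores yields the kind of cross-validated r values reported above.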
Memory models supply many examples of a common feature of computational cognitive modelling: a model may be simple to describe and simulate yet have no closed-form expression that permits it to be fit via maximum likelihood estimation or similar techniques. One such model is the Feature Model (Nairne, 1988, 1990; Neath & Nairne, 1995), which was developed to model immediate serial recall. In recent work we have used Approximate Bayesian Computation methods to fit both the original and a revised version of the Feature Model to data from serial recall, free recall, and order reconstruction tasks. We will discuss the Revised Feature Model (RFM) and the procedure for fitting it to data by considering the example of the production effect, a well-known encoding effect whereby words within a list that are read aloud during study are better remembered than words read silently. The RFM accounts for the production effect via a combination of relative distinctiveness and the costs of the richer encoding associated with production, and we will show that it provides a good account of the production effect in both immediate and delayed recall tasks. The success of this approach means that the Revised Feature Model can now be added to the set of memory models that may be quantitatively fit to data and compared with one another.
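The core idea of Approximate Bayesian Computation, fitting a simulate-only model by comparing simulated and observed summary statistics rather than evaluating a likelihood, can be sketched in its simplest rejection form. This is a generic illustration of the technique, not the authors' fitting pipeline, and `simulate` stands in for any simulable model such as the Feature Model.

```python
import numpy as np

def abc_rejection(observed_stats, simulate, prior_sample,
                  n_draws=10000, tolerance=0.1):
    """Rejection ABC: keep parameter draws whose simulated summary
    statistics land within `tolerance` of the observed statistics."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()            # draw parameters from the prior
        stats = np.asarray(simulate(theta))
        if np.linalg.norm(stats - np.asarray(observed_stats)) < tolerance:
            accepted.append(theta)
    return accepted                        # samples from the approximate posterior
```

The accepted draws approximate the posterior over model parameters; in practice, serial-position curves or other recall statistics would serve as the summary statistics.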
How do humans judge that a stimulus is novel? Novelty judgement is a fundamental property of human memory and an important problem for artificial intelligence. While computational memory models can predict the speed and accuracy of recall and recognition, many models fail to predict response time and accuracy for rejected foil items in experimental tasks. We present a formal analysis of computational models of human memory, including MINERVA (Hintzman, 1986), IRM (Mewhort & Johns, 2005), ACT-R DM (Anderson, 2009), and HDM (Kelly, Arora, West, & Reitter, 2020). We test the models on two tasks: the fan effect (Anderson, 1974) and the extra-list feature (ELF; Johns & Mewhort, 2003) effect. The models are able to reproduce the fan effect on target items when using a multiple-recall strategy, but not when using a recognition judgement or a single recall. To account for the ELF effect, we propose a new model that uses complex-valued vectors. We compare and contrast our model with existing models and discuss the implications of our theoretical findings for memory modelling and deep learning.
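As a concrete example of the class of models under analysis, MINERVA 2 (Hintzman, 1986) makes recognition judgements from echo intensity: each stored trace is activated by the cube of its similarity to the probe, and the activations are summed. The sketch below follows the published equations; the function names are ours.

```python
import numpy as np

def echo_intensity(probe, traces):
    """MINERVA 2 echo intensity: sum over traces of cubed similarity,
    where similarity is normalized by the number of relevant features."""
    probe = np.asarray(probe, dtype=float)
    total = 0.0
    for trace in traces:
        trace = np.asarray(trace, dtype=float)
        # only features that are nonzero in probe or trace count as relevant
        relevant = (probe != 0) | (trace != 0)
        sim = (probe @ trace) / max(relevant.sum(), 1)
        total += sim**3          # cubing preserves sign and sharpens matches
    return total
```

A recognition judgement compares the echo intensity to a criterion; foils that share many features with stored traces can still produce high intensity, which is where models of this family run into trouble with extra-list-feature foils.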
Dr. Hyungwook Yim
Prof. Simon Dennis
The free association task provides a glimpse into the organizational structure of concepts in memory and has been used by theorists as a benchmark for computational models of semantic processing. While descriptive accounts such as the Topics model and Latent Semantic Analysis (LSA) have been shown to match free association data, to date no process model has been tested. We compared three descriptive models (Topics, LSA, and word2vec; Mikolov et al., 2013) and two process models (the Dynamic Eigen Network, DEN, and BEAGLE; Jones & Mewhort, 2007). Overall, word2vec showed the best match to the South Florida free association norms. Of the process models, the DEN outperformed BEAGLE. When association pairs were characterized as forward, backward, syntagmatic, paradigmatic, form-based, or other, the performance profiles of the models were remarkably similar. All models failed to capture form-based associations, as would be expected, and performed best on paradigmatic associations.
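Descriptive models of the kind compared above are typically scored against association norms by ranking candidate responses to a cue by the cosine similarity of their word vectors. A minimal sketch, with toy vectors standing in for trained word2vec or LSA representations:

```python
import numpy as np

def predict_associates(cue, vectors, k=3):
    """Rank candidate responses to a cue by cosine similarity of
    their word vectors, returning the top k words."""
    cue_vec = vectors[cue] / np.linalg.norm(vectors[cue])
    scores = {}
    for word, vec in vectors.items():
        if word == cue:
            continue
        scores[word] = float(cue_vec @ (vec / np.linalg.norm(vec)))
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Comparing such ranked predictions to the responses people actually give to each cue yields the match-to-norms scores on which models like word2vec, LSA, and the Topics model are evaluated.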