Comparing Model Variants Across Experimental and Naturalistic Data Sets
Computational models of human memory have largely been developed in laboratory settings, using data from tightly controlled experiments designed to test specific assumptions of a small set of models. This approach has produced a range of models that explain experimental data very well. Over the last decade, more and more large-scale data sets from outside the laboratory have become available, and researchers have been extending their model comparisons to include such real-life data. We follow this example and conduct a simulation study in which we compare several model variants across eight data sets that include both experimental and naturalistic data. Specifically, we test the Predictive Performance Equation (PPE), a lab-grown model, and its ability to predict performance across the entire range of data sets depending on whether one or both of its crucial components are included in the model. These components were specifically designed to account for spacing effects in learning and are theory-inspired summaries of the entire learning history for a given user-item pair. By replacing these terms with simple lag times (rather than full histories) or with a single free parameter, we reduce the PPE's complexity. Broadly speaking, the results suggest that the full PPE performs best on experimental data, but that little predictive accuracy is lost on naturalistic data when the terms are omitted. A possible reason is that spacing effects, while central to spacing experiments by design, play a much smaller role in real-life data.
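For readers unfamiliar with the model, the sketch below illustrates the structure under comparison, assuming the commonly published form of the PPE (e.g., Walsh et al., 2018): activation grows with the number of repetitions and decays with a weighted elapsed time computed over the full presentation history, at a rate governed by a stability term that summarises all inter-presentation lags. The reduced variant shows the kind of simplification described above. Function names, default parameter values, and the exact stability formula are illustrative assumptions, not the implementation evaluated in this study.

```python
import math

def ppe_activation(times, now, c=0.1, x=0.6, b=0.04, m=0.08):
    """Sketch of PPE activation after presentations at `times` (all ages > 0).

    Two history-dependent components (the terms varied in this study):
      T  -- elapsed time, weighted over the entire presentation history
      St -- stability term summarising all inter-presentation lags (spacing)
    Parameter values are placeholders, not fitted estimates.
    """
    n = len(times)
    ages = [now - t for t in times]          # time since each presentation
    raw = [age ** -x for age in ages]        # recent presentations weigh more
    total = sum(raw)
    T = sum((r / total) * age for r, age in zip(raw, ages))

    lags = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    St = (sum(1.0 / math.log(lag + math.e) for lag in lags) / len(lags)
          if lags else 0.0)                  # wider spacing -> slower decay
    d = b + m * St
    return n ** c * T ** -d

def ppe_performance(activation, tau=-0.7, s=0.255):
    """Logistic link from activation to predicted recall probability."""
    return 1.0 / (1.0 + math.exp((tau - activation) / s))

def reduced_activation(n, last_lag, c=0.1, d=0.3):
    """Illustrative reduced variant: the full-history terms are replaced
    by the most recent lag (or, with d fixed, a single free parameter)."""
    return n ** c * last_lag ** -d

# Example: three practice trials one hour apart, recall tested a day later.
times = [0.0, 3600.0, 7200.0]
p_full = ppe_performance(ppe_activation(times, now=7200.0 + 86400.0))
p_reduced = ppe_performance(reduced_activation(len(times), last_lag=86400.0))
```

The reduced variant discards exactly the information that makes the full PPE sensitive to how practice was spaced, which is why one would expect the two to diverge most on data from spacing experiments.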