
No distinction between 'capacity' and 'precision': Populations of noisy familiarity signals explain visual memory errors

Prof. Timothy Brady
University of California, San Diego, Department of Psychology
Mark Schurgin
University of California, San Diego, United States of America
John Wixted
University of California, San Diego, United States of America

Over the past decade, many studies have used mixture models to interpret continuous report memory data, drawing a distinction between the number of items represented and the precision of those representations (e.g., Zhang & Luck, 2008). Such models, and subsequent expansions of these models to account for additional phenomena like variable precision, have led to hundreds of influential claims about the nature of consciousness, working memory and long-term memory.

Here we show that a simple generalization of signal detection theory (termed TCC, the Target Confusability Competition model) accurately accounts for memory error distributions in much more parsimonious terms, and can make novel predictions that are entirely inconsistent with mixture-based theories. For example, TCC shows that measuring how accurately people can discriminate between extremely dissimilar items (study red; then report whether the studied item was red vs. green) is completely sufficient to predict, with no free parameters, the entire distribution of errors that arises in a continuous report task (report what color you saw on a color wheel). Because this is inconsistent with claims that the continuous report distribution arises from multiple distinct parameters, like guessing, precision, and variable precision, TCC suggests such distinctions are illusory.

Overall, with only a single free parameter, memory strength (d′), TCC accurately accounts for data from n-AFC, change detection and continuous report, across a variety of working memory set sizes, encoding times and delays, as well as accounting for long-term memory continuous report tasks. Thus, TCC suggests a major revision of previous research on visual memory.
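The mechanism the abstract describes — a population of noisy familiarity signals, with a single strength parameter d′, producing the full continuous report error distribution — can be sketched in a few lines. This is an illustrative simulation under assumed forms (an exponential similarity kernel with an arbitrary width `tau`, and unit-variance Gaussian noise), not the paper's fitted psychophysical similarity function: each color on the wheel generates a noisy familiarity signal whose mean is d′ scaled by that color's similarity to the target, and the reported color is whichever signal is maximal.

```python
import numpy as np

rng = np.random.default_rng(0)

def tcc_simulate(dprime, n_trials=5000, n_colors=360, tau=20.0):
    """Sketch of a TCC-style trial: report the wheel color with the
    maximum noisy familiarity signal. The similarity kernel and tau
    are illustrative assumptions, not the model's fitted values."""
    idx = np.arange(n_colors)
    # circular distance of every wheel color from the target (at 0 deg)
    dist = np.minimum(idx, n_colors - idx)
    similarity = np.exp(-dist / tau)  # assumed exponential similarity kernel
    # each color's familiarity = d' * similarity + unit Gaussian noise
    familiarity = dprime * similarity + rng.standard_normal((n_trials, n_colors))
    resp = np.argmax(familiarity, axis=1)  # report the most familiar color
    # signed error in degrees, wrapped to (-180, 180]
    return np.where(resp <= n_colors // 2, resp, resp - n_colors)

errors = tcc_simulate(dprime=2.0)
```

With only d′ varying, the simulated error histogram shifts from tightly peaked around the target (high d′) to nearly flat (low d′), which is the sense in which a single strength parameter can mimic what mixture models attribute to separate guessing and precision parameters.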



visual working memory
signal detection
population codes
visual long-term memory


Cognitive Modeling
Perception and Signal Detection
Memory Models
How much do the results rely on BIC? Last updated 3 years ago

Here, the two models seem to differ quite a bit in terms of the number of parameters, as TCC seems to have only one parameter. However, BIC penalizes rather heavily for additional parameters (depending a bit on N). Hence, how much of the results rely on BIC? Does the same pattern show for AIC? Also, does the mixture model show some qualitative mis...

Dr. Henrik Singmann
Cite this as:

Brady, T., Schurgin, M., & Wixted, J. (2020, July). No distinction between 'capacity' and 'precision': Populations of noisy familiarity signals explain visual memory errors. Paper presented at Virtual MathPsych/ICCM 2020.