Prof. Brad Wyble
Mr. Ryan O'Donnell
Early theories of working memory (WM) (Atkinson & Shiffrin, 1968; Baddeley & Hitch, 1974; Baddeley, 2000; Ericsson & Kintsch, 1995) have discussed the essential role of visual knowledge and long-term memory in WM performance. Yet no computational model has shown how WM representations can be built from visual knowledge using neurocomputational mechanisms. We propose a model of the link between WM and visual knowledge that uses the latent representations in a visual knowledge network to encode information in memory. This neurally plausible model, named MLR, represents visual knowledge with a variational autoencoder (VAE; Kingma & Welling, 2013) that learns to compress and reconstruct pixel-wise visual stimuli. We modified the VAE to represent shape and color in separate maps, allowing us to test theories of shape-color binding. The MLR model uses a binding pool (BP; Swan & Wyble, 2014) to flexibly store information in a shared neural resource, such that attributes of an item (e.g., shape and color) are represented in one memory system and bound together according to task demands. Information is stored in WM through self-sustaining activity patterns; synaptic weights remain fixed during storage and retrieval.
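As an illustrative sketch only (not the paper's implementation), a binding pool of this kind can be approximated with fixed random projections that superimpose token-bound attribute vectors in a single shared activity pattern; the pool size, vector dimensionality, and function names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
POOL = 2000   # neurons in the shared binding pool (hypothetical size)
DIM = 64      # dimensionality of one latent attribute vector (hypothetical)

# One fixed random projection per token; these weights bind an attribute
# vector (e.g., a shape or color code) to a token, and stay fixed during
# storage and retrieval, as in the text.
B = rng.standard_normal((2, POOL, DIM)) / np.sqrt(POOL)

def encode(pool, token, attr):
    # Superimpose the token-bound attribute onto the shared pool activity.
    return pool + B[token] @ attr

def retrieve(pool, token):
    # The transposed projection recovers a noisy estimate of the stored
    # attribute; other items in the pool contribute cross-talk noise.
    return B[token].T @ pool

shape = rng.standard_normal(DIM)   # stand-in latent code for a shape
color = rng.standard_normal(DIM)   # stand-in latent code for a color

pool = np.zeros(POOL)
pool = encode(pool, 0, shape)      # store shape under token 0
pool = encode(pool, 1, color)      # store color under token 1

shape_hat = retrieve(pool, 0)      # approximate reconstruction of the shape code
```

Because both attributes share one resource, retrieval is approximate: `shape_hat` correlates strongly with `shape` but carries interference from the stored color, which is the shared-resource behavior the model exploits.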
We show that MLR can efficiently store and retrieve familiar items by encoding their compressed representations from visual knowledge, whereas novel items can be encoded and retrieved from less compressed representations. Additionally, the MLR model can extract the categorical information of an item and store it in memory alongside the visual information. Finally, our model explains how WM is linked to visual knowledge, storing familiar stimuli efficiently while also building on-the-fly memories of novel stimuli.
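To make the two levels of compression concrete, the following is a minimal, untrained sketch of a VAE-style encoder in which an intermediate hidden layer serves as the less compressed code (available for novel items) and the sampled latent as the compressed code (efficient for familiar items); all layer sizes, weights, and names are hypothetical, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical layer sizes: 28x28 pixel input -> 256-unit hidden -> 8-dim latent.
D_IN, D_HID, D_LAT = 784, 256, 8
W1 = rng.standard_normal((D_HID, D_IN)) * 0.05
W_mu = rng.standard_normal((D_LAT, D_HID)) * 0.05
W_lv = rng.standard_normal((D_LAT, D_HID)) * 0.05

def encode(x):
    # Hidden layer: a less compressed representation of the stimulus.
    h = np.tanh(W1 @ x)
    # Latent layer: the compressed representation, sampled with the
    # standard VAE reparameterization trick (z = mu + sigma * eps).
    mu, logvar = W_mu @ h, W_lv @ h
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(D_LAT)
    return h, z

x = rng.random(D_IN)   # a stand-in "pixel-wise" visual stimulus
h, z = encode(x)
```

In this sketch, a memory system like MLR's would encode `z` for a familiar item but fall back to the higher-dimensional `h` for a novel one, trading storage efficiency for fidelity.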