Stochastic sampling provides a unifying perspective on working memory limits
Recent debate regarding the limits of working memory (WM) has focused on whether memory resources are better characterized as discrete or continuous, with models of each type competing to best capture the errors humans make in recall. I will argue that this apparent dichotomy is largely illusory, and that the critical distinction is instead between deterministic and stochastic mechanisms of WM, with only the latter compatible with observed human performance and the underlying biological system. I will show that reconceptualizing existing models in terms of sampling reveals strong commonalities between supposedly opposing accounts. A probabilistic limit on how many items can be successfully recalled from WM is an emergent property of continuous models, despite these models having no explicit mechanism to enforce such a limit. Furthermore, adding stochasticity in the number of samples to a discrete model puts its ability to describe behaviour on a par with continuous models. Finally, stochastic sampling has a theoretical connection with biologically plausible implementations of WM based on the inherently stochastic activity of neural populations.
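To make the core idea concrete, the following is a minimal simulation sketch of a stochastic sampling account, not taken from the article itself: it assumes a fixed mean budget of samples shared equally across items, a Poisson-distributed number of samples per item, recall precision proportional to the samples an item happens to receive, and a uniform random guess when an item receives zero samples. All parameter names and values (mean_total_samples, sample_precision, the pi/4 success criterion) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_recall(set_size, mean_total_samples=8.0, sample_precision=2.0, n_trials=20000):
    """Simulate recall errors for one probed item under stochastic sampling.

    Illustrative assumptions: the mean sample budget is divided equally
    across items, each item receives a Poisson number of samples, precision
    grows in proportion to that number, and zero samples yields a uniform
    random guess on a circular feature space.
    """
    lam = mean_total_samples / set_size          # mean samples per item
    n_samples = rng.poisson(lam, size=n_trials)  # stochastic allocation

    errors = np.empty(n_trials)
    guessed = n_samples == 0
    # Zero samples: the response carries no information about the target.
    errors[guessed] = rng.uniform(-np.pi, np.pi, size=guessed.sum())
    # Otherwise: averaging k independent samples gives precision ~ k.
    k = n_samples[~guessed]
    errors[~guessed] = rng.normal(0.0, 1.0 / np.sqrt(sample_precision * k))
    return errors

# An item limit emerges probabilistically: as set size grows, zero-sample
# trials (guesses) become common and accurate reports become rare, even
# though no discrete capacity is built into the model.
for n in (1, 2, 4, 8):
    err = simulate_recall(n)
    print(f"set size {n}: P(|error| < pi/4) = {np.mean(np.abs(err) < np.pi/4):.2f}")
```

Under these assumptions, the probability of a precise report falls steeply with set size, illustrating how a limit on the number of recallable items can emerge from a continuous, stochastic resource rather than from an explicit item cap.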