mathpsych.org

Invited Speakers

Five routes to better models of cognition

Wolf Vanpaemel, KU Leuven

Winner of the 2013 William K. Estes Early Career Award.


An important goal in cognitive science is to build strong and precise formal models of how people acquire, represent, and process information. I argue that there are several invaluable but underused ways in which models of cognition can be improved. I present a number of worked examples to show how models of cognition can be enhanced by: relying on (prior) predictions rather than on post(erior pre)dictions; reducing dependence on free parameters by capturing theory in the prior; fighting the Greek letter syndrome by testing selective influence; engaging in model expansion; and taking the plausibility of data into account when testing models. Adopting these modeling practices will require modelers to be creative and to overcome their hypochondriacal fear of subjectivity, but will lead to an increased understanding of cognition.
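The first two routes — relying on prior predictions and capturing theory in the prior — can be illustrated with a minimal prior predictive sketch. The model and numbers here are hypothetical (a simple exponential forgetting curve with a theory-informed prior on its rate), not an example from the talk: parameters are drawn from the prior and the predictions the model makes *before* seeing any data are examined.

```python
import math
import random

def prior_predictive(n_sims=1000, seed=0):
    """Prior predictive check for a hypothetical exponential
    forgetting model, p(recall at time t) = exp(-rate * t).
    The prior on `rate` encodes theory (forgetting is gradual),
    so `rate` is constrained rather than a free parameter
    tuned after the fact."""
    rng = random.Random(seed)
    times = [1, 2, 4, 8]  # hypothetical retention intervals
    sims = []
    for _ in range(n_sims):
        rate = rng.uniform(0.05, 0.5)  # theory-informed prior (assumed range)
        sims.append([math.exp(-rate * t) for t in times])
    # average recall curve the model predicts from the prior alone
    return [sum(col) / n_sims for col in zip(*sims)]
```

If the curves implied by the prior alone are implausible, the theory-plus-prior is questionable before any parameter fitting takes place.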


Symmetry and the computational goals that underlie perception

Horace Barlow, University of Cambridge, UK


Although I am (or was) a neurophysiologist, I do not think records of impulse trains from neurons in perceptual systems can be interpreted properly until we answer the question "What are the goals of the computations these systems and their neurons are performing?" This is simply because you cannot test whether a system does the job you think it may do unless you have ideas about what that job is. The proposition I like the sound of, and shall argue for here, is that the two main computations in early vision are cross-correlation of patches of the image with fixed templates, and auto-correlations of pairs of image patches related by some specified transformations. One definition of symmetry is "invariance under transformation", so is symmetry detection the main computational goal of early vision? This is the first point to be discussed, and I think it turns out that the answer is "Yes", but perhaps this applies only to some, not all, of the transformations you might wish to include in the definition of symmetry. The second question is "How does detecting symmetry help?" Symmetries are forms of regularity or redundancy, and if you know about them you can make more reliable and sensitive predictions than if you don't, and you will have potentially serious cognitive advantages over your competitors. There are some ancient observations on the way that damage to the visual cortex interferes with the orienting response that tend to support these views. It should be possible to allocate specific types of symmetry detection to specific cortical areas neurophysiologically, or possibly using fMRI. Some preliminary psychophysical experiments capable of measuring the absolute efficiencies for detecting non-random or non-independent positioning of dots in otherwise random arrays have already given encouraging results. I think the view that symmetry detection is the main new trick of the cerebral cortex deserves closer examination.
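The idea of symmetry as "invariance under transformation", detected by correlating image patches across a specified transform, can be sketched in a toy form. This is not Barlow's method, just an illustrative one-dimensional example: mirror symmetry is scored as the normalized correlation between a patch and its own reflection.

```python
def mirror_symmetry_score(patch):
    """Score mirror symmetry of a 1-D intensity patch as the
    normalized correlation between the patch and its reflection:
    an auto-correlation of the patch with itself under one
    specified transformation (here, mirroring). Returns 1.0 for
    a perfectly symmetric patch, lower values otherwise."""
    mirrored = patch[::-1]
    n = len(patch)
    mu = sum(patch) / n
    num = sum((a - mu) * (b - mu) for a, b in zip(patch, mirrored))
    den = sum((a - mu) ** 2 for a in patch)
    return num / den if den else 1.0  # a flat patch is trivially symmetric
```

Other symmetries (translation, rotation) would use the same recipe with a different transformation in place of the reflection.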


Reinforcement Learning and Psychology: A Personal Story

Richard S. Sutton, University of Alberta


The modern field of reinforcement learning (RL) has a long, intertwined relationship with psychology. Almost all the powerful ideas of RL came originally from psychology, and today they are recognized as having significantly increased our ability to solve difficult engineering problems such as playing backgammon, flying helicopters, and optimal placement of internet advertisements. Psychology should celebrate this and take credit for it! RL has also begun to give something back to the study of natural minds, as RL algorithms are providing insights into classical conditioning, the neuroscience of brain reward systems, and the role of mental replay in thought. I have been working in the field of RL for much of this journey, back and forth between nature and engineering, and have played a role in some of the key steps. In this talk I tell the story as it seemed to happen from my point of view, summarizing it in four things that I think every psychologist should know about RL: 1) that it is a formalization of learning by trial and error, with engineering uses, 2) that it is a formalization of the propagation of reward predictions which closely matches behavioral and neuroscience data, 3) that it is a formalization of thought as learning from replayed experience that again matches data from natural systems, and 4) that there is a beautiful confluence of psychology, neuroscience, and computational theory on common ideas and elegant algorithms.
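Point 2 — RL as a formalization of the propagation of reward predictions — is standardly captured by temporal-difference learning. The following is a minimal sketch, not code from the talk: a TD(0) learner on a hypothetical chain of states with a single reward at the end, showing the reward prediction propagating backward to earlier states, the computational analogue of higher-order conditioning.

```python
def td_chain(n_states=5, alpha=0.1, gamma=0.9, episodes=500):
    """TD(0) value learning on a deterministic chain: the agent
    walks from state 0 to state n_states-1 and receives reward 1
    on the final transition. Over episodes, the reward prediction
    propagates backward so that earlier states acquire value."""
    v = [0.0] * (n_states + 1)  # v[n_states] is terminal and stays 0
    for _ in range(episodes):
        for s in range(n_states):
            r = 1.0 if s == n_states - 1 else 0.0  # reward only at the end
            # reward-prediction error: the quantity matched to
            # phasic dopamine responses in the neuroscience data
            td_error = r + gamma * v[s + 1] - v[s]
            v[s] += alpha * td_error
    return v[:n_states]
```

After training, values rise monotonically toward the rewarded end of the chain, approximating gamma raised to the number of steps remaining.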