ICCM: Neuroscience I
Stefan Frank
Hartmut Fitz
Mr. Yung Han Khoe
Event-related potentials (ERPs) are used to study how language is processed in the brain, including differences between native (L1) and second-language (L2) comprehension. In low-proficiency L2 learners, syntactic violations give rise to an N400, which changes into a P600 as L2 proficiency increases. The precise functional interpretation of ERPs, however, remains a matter of debate. Fitz and Chang (2019) proposed a theory in which ERPs reflect learning signals that arise from mismatches in predictive processing. These signals are propagated across the language system to make future predictions more accurate. We test whether this theory can account for the N400-to-P600 switch in late bilinguals by implementing a model capable of simulating both the N400 and the P600. We perform an experiment designed to elicit a P600 effect in simulated L2 learners progressing through learning stages. Simulated Spanish-English participants showed ERP effects in their L2 (English) similar to those of human participants in ERP studies. Over the course of L2 learning, the simulated N400 decreased in size while the P600 increased, as in humans. Our findings support the viability of error propagation as an account of ERP effects, and specifically of how these effects change over the course of L2 learning.
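The core idea of error-driven ERP accounts like the one tested here can be illustrated with a toy computation. The sketch below (not the authors' model; the vocabulary and probabilities are invented) measures the "surprise" of an expected versus a violating continuation as cross-entropy against a softmax next-word prediction, the kind of mismatch signal the theory treats as the source of ERP components.

```python
import numpy as np

# Illustrative sketch only: error-propagation accounts treat ERP components
# as prediction-error signals in a sequence-prediction network. Here a toy
# next-word distribution is compared against an expected vs. a violating
# continuation; the violation produces the larger error (surprise) signal.
# Vocabulary and probabilities are hypothetical.

def prediction_error(predicted_probs, actual_word, vocab):
    """Cross-entropy surprise for the word that actually occurred."""
    idx = vocab.index(actual_word)
    return -np.log(predicted_probs[idx])

vocab = ["eats", "sings", "apple", "guitar"]
# Hypothetical model prediction after a context like "The hungry boy ..."
predicted = np.array([0.7, 0.1, 0.1, 0.1])

err_expected = prediction_error(predicted, "eats", vocab)    # low surprise
err_violation = prediction_error(predicted, "guitar", vocab)  # high surprise
print(err_expected, err_violation)
```

In error-propagation theories, the size of signals like `err_violation`, routed through different parts of the network, is what maps onto component amplitudes such as the N400 and P600.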
This is an in-person presentation on July 20, 2023 (09:00 ~ 09:20 UTC).
Kathryn Simone
Ms. Nicole Dumont
Dr. Michael Furlong
Chris Eliasmith
Prof. Jeff Orchard
Terry Stewart
Learning from experience, often formalized as Reinforcement Learning (RL), is a vital means for agents to develop successful behaviours in natural environments. However, while biological organisms are embedded in continuous spaces and continuous time, many artificial agents use RL algorithms that implicitly assume some form of discretization of the state space, which can lead to inefficient resource use and improper learning. In this paper we show that biologically motivated representations of continuous spaces form a valuable state representation for RL. We use models of grid and place cells in the Medial Entorhinal Cortex (MEC) and hippocampus, respectively, to represent continuous states in a navigation task and in the CartPole control task. Specifically, we model the hexagonal grid structures found in the brain using Hexagonal Spatial Semantic Pointers (HexSSPs), and combine this state representation with single-hidden-layer neural networks to learn action policies in an Actor-Critic (AC) framework. We demonstrate that our approach provides significantly increased robustness to changes in environment parameters (travel velocity), and learns to stabilize the dynamics of the CartPole system with mean performance comparable to that of a deep neural network, while decreasing the terminal reward variance by more than 150x across trials. These findings at once point to the utility of leveraging biologically motivated representations for RL problems, and suggest a more general role for hexagonally structured representations in cognition.
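The Spatial Semantic Pointer idea underlying HexSSPs can be sketched in a few lines: a base vector with unit-modulus Fourier coefficients is raised to a fractional power x, so similarity between encodings falls off smoothly with distance, giving a place-cell-like kernel. The sketch below is a plain 1-D SSP with random phases, not the authors' hexagonal construction; the dimensionality is an arbitrary choice.

```python
import numpy as np

# Hedged sketch of SSP "fractional binding" (simpler than the paper's
# HexSSPs): encode a scalar x by exponentiating unit-modulus Fourier
# coefficients, exp(i * phases * x), and inverse-transforming. Similarity
# between encodings decays with |x - y| (a sinc-like kernel here).

rng = np.random.default_rng(0)
d = 512  # representation dimensionality (illustrative choice)

# Conjugate-symmetric random phases so the inverse FFT is real-valued.
phases = np.zeros(d)
half = rng.uniform(-np.pi, np.pi, (d - 1) // 2)
phases[1:1 + len(half)] = half
phases[-len(half):] = -half[::-1]

def encode(x):
    """Encode scalar x as a unit-norm SSP via fractional binding."""
    return np.fft.ifft(np.exp(1j * phases * x)).real

sim_self = encode(2.0) @ encode(2.0)  # = 1 (unit-norm vectors)
sim_near = encode(2.0) @ encode(2.3)  # high: nearby states overlap
sim_far = encode(2.0) @ encode(9.0)   # near zero: distant states do not
print(sim_self, sim_near, sim_far)
```

This distance-dependent overlap is what makes such encodings useful as RL state representations: value estimates learned at one state generalize to nearby states automatically. HexSSPs additionally constrain the phases so the interference pattern is hexagonal, as in MEC grid cells.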
This is an in-person presentation on July 20, 2023 (09:20 ~ 09:40 UTC).
Dr. Michael Furlong
Kathryn Simone
Dr. Madeleine Bartlett
Prof. Jeff Orchard
We present a unified model of how groups of neurons can represent and learn probability distributions using a biologically plausible online learning rule. We first present this in the context of insect olfaction, where we map our model onto a well-known biological circuit in which a single output neuron represents whether the current stimulus is novel. We show that the model approximates a Bayesian inference process, explaining why the current flowing into the output neuron is proportional to the expected probability of that stimulus. Finally, we extend this model to show that the same circuit can detect temporal patterns, such as the expectation violations that produce the EEG mismatch negativity signal.
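A minimal version of such a novelty circuit can be sketched as a single readout neuron whose weights accumulate the high-dimensional encodings of experienced stimuli; its input current to a new stimulus is then the dot product with that memory. This is an illustrative toy (random encodings, invented stimulus IDs), not the paper's circuit model.

```python
import numpy as np

# Hedged sketch of a single "novelty neuron": familiar stimuli are stored by
# summing their random high-dimensional encodings into one weight vector
# (online Hebbian accumulation). The neuron's input current for a stimulus
# is the dot product with that memory: large for experienced stimuli, near
# zero for novel ones. All dimensions and stimulus IDs are invented.

d = 1000

def encode(stimulus_id):
    """Deterministic random unit-norm encoding per stimulus."""
    g = np.random.default_rng(stimulus_id)
    v = g.standard_normal(d)
    return v / np.linalg.norm(v)

familiar = [encode(s) for s in (10, 20, 30)]
memory = np.sum(familiar, axis=0)  # weights after online accumulation

current_seen = memory @ encode(20)   # high: stimulus was experienced
current_novel = memory @ encode(99)  # near zero: stimulus is novel
print(current_seen, current_novel)
```

Because random high-dimensional encodings are nearly orthogonal, the current approximately counts how often (similar) stimuli occurred, which is why it can be read as proportional to the stimulus's expected probability.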
This is an in-person presentation on July 20, 2023 (09:40 ~ 10:00 UTC).
Iris van Rooij
A core inferential problem in the study of natural and artificial systems is the following: given access to a neural network, a stimulus and behaviour of interest, and a method of systematic experimentation, figure out which circuit suffices to generate the behaviour in response to the stimulus. It is often assumed that the main obstacles to this "circuit cracking" are incomplete maps (e.g., connectomes) and limited observability and perturbability. Here we show through complexity-theoretic proofs that even if all these and many other obstacles are removed, an intrinsic and irreducible computational hardness remains. While this may seem to leave open the possibility that the researcher may in practice resort to approximation, we prove the task is inapproximable. We discuss the implications of these findings for implementationist versus functionalist debates on how to approach the study of cognitive systems.
This is an in-person presentation on July 20, 2023 (10:00 ~ 10:20 UTC).
Dr. Madeleine Bartlett
Terry Stewart
Chris Eliasmith
Probability theory is often used to model animal behaviour, but a gap often remains between high-level models and their realization in neural implementations. In this paper we show how biologically plausible cognitive representations of continuous data, called Spatial Semantic Pointers, can be used to construct single-neuron estimators of probability distributions. These representations form the basis for neural circuits that perform anomaly detection and evidence integration for decision making. We tested these circuits on simple anomaly detection and decision-making tasks. In the anomaly detection task, the circuit was asked to determine whether observed data were anomalous under a distribution implied by training data. In the decision-making task, the agent had to determine which of two distributions was most likely to be generating the observed data. In both cases we found that the neural implementations performed comparably to a non-neural Kernel Density Estimator baseline. This work distinguishes itself from prior approaches to neural probability by using neural representations of continuous states, e.g., grid cells or head direction cells. The circuits in this work provide a basis for further experimentation and for generating hypotheses about behaviour as greater biological fidelity is achieved.
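The link between SSPs and kernel density estimation can be sketched directly: averaging the SSP encodings of observed samples yields a single vector whose dot product with the encoding of a query point behaves like a KDE with a sinc-like kernel. The dimensionality, phase construction, and sample distribution below are illustrative choices, not the paper's exact setup.

```python
import numpy as np

# Hedged sketch: an SSP "memory" vector as a density estimator. Each sample
# is encoded via fractional binding (unit-modulus Fourier coefficients with
# conjugate-symmetric random phases); averaging the encodings gives a vector
# whose dot product with encode(query) approximates the data density near
# the query, similar to a kernel density estimate.

rng = np.random.default_rng(2)
d = 1024

phases = np.zeros(d)
half = rng.uniform(-np.pi, np.pi, (d - 1) // 2)
phases[1:1 + len(half)] = half
phases[-len(half):] = -half[::-1]

def encode(x):
    return np.fft.ifft(np.exp(1j * phases * x)).real

samples = rng.normal(loc=0.0, scale=1.0, size=500)  # illustrative data
memory = np.mean([encode(x) for x in samples], axis=0)

density_at_mode = memory @ encode(0.0)  # high where data are dense
density_in_tail = memory @ encode(6.0)  # near zero far from the data
print(density_at_mode, density_in_tail)
```

A single neuron receiving `memory` as its input weights thus reads out (un-normalized) probability directly from its input current, which is what allows downstream circuits to do anomaly detection and evidence comparison.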
This is an in-person presentation on July 20, 2023 (10:20 ~ 10:40 UTC).