
Fast online reinforcement learning with biologically-based state representations

Authors
Dr. Madeleine Bartlett
Cheriton School of Computer Science, University of Waterloo, Canada
Prof. Jeff Orchard
Cheriton School of Computer Science, University of Waterloo, Canada
Terry Stewart
National Research Council of Canada
Abstract

In previous work, we provided a neurally-based Actor-Critic network with biologically inspired grid cells for representing spatial information, and examined whether this improved performance on a 2D grid-world task relative to other representation methods. A manual search of the parameter space found that grid cells outperformed the other representations. The present work expands on that study by performing a more extensive search of the parameter space to identify optimal parameter sets for each of four representation methods (a baseline look-up table, one-hot encoding, random SSPs, and grid cells). Following this optimization, the baseline, one-hot, and random SSP methods did improve over the previous study, in some cases performing as well as grid cells. Together, these findings suggest that while the baseline and one-hot methods perform well once optimized, grid cells do not necessarily require such optimization to produce optimal performance.
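
As a rough illustration of the kinds of state representations compared here, the following NumPy sketch builds a one-hot encoding and a random-SSP-style encoding of a 2D grid-world position. The fractional-binding construction shown is one common way to build Spatial Semantic Pointers and is only an assumption about the setup; the grid-cell representation, the look-up table, and the Actor-Critic network itself are not shown, and the dimensionality and helper names are illustrative rather than taken from the paper.

import numpy as np


def make_unitary(dim, rng):
    # Random unitary vector: every Fourier coefficient has magnitude 1, so
    # circular convolution with it (and fractional powers of it) preserves length.
    phases = rng.uniform(-np.pi, np.pi, size=dim // 2 + 1)
    phases[0] = 0.0                      # DC component must be real
    if dim % 2 == 0:
        phases[-1] = 0.0                 # Nyquist component must be real for even dim
    return np.fft.irfft(np.exp(1j * phases), n=dim)


def fractional_power(v, exponent):
    # Raise a unitary vector to a real-valued exponent in the Fourier domain.
    return np.fft.irfft(np.fft.rfft(v) ** exponent, n=len(v))


def ssp_encode(x, y, X, Y):
    # S(x, y) = X^x (*) Y^y, where (*) denotes circular convolution.
    bound = np.fft.rfft(fractional_power(X, x)) * np.fft.rfft(fractional_power(Y, y))
    return np.fft.irfft(bound, n=len(X))


def one_hot_encode(x, y, width, height):
    # Discrete grid cell -> one-hot vector of length width * height.
    vec = np.zeros(width * height)
    vec[y * width + x] = 1.0
    return vec


rng = np.random.default_rng(seed=0)
dim = 256                                # illustrative SSP dimensionality
X, Y = make_unitary(dim, rng), make_unitary(dim, rng)

one_hot = one_hot_encode(2, 3, width=8, height=8)    # shape (64,)
ssp = ssp_encode(2.0, 3.0, X, Y)                     # shape (256,)

One relevant difference between these encodings is their similarity structure: one-hot vectors for neighbouring cells are orthogonal, whereas SSPs for nearby positions are highly correlated, which can change how quickly a learned value function generalizes across the grid world.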

Keywords

Reinforcement Learning
grid cells
Spatial Semantic Pointers

Cite this as:

Bartlett, M., Orchard, J., & Stewart, T. (2022, July). Fast online reinforcement learning with biologically-based state representations. Paper presented at Virtual MathPsych/ICCM 2022. Via mathpsych.org/presentation/860.