Fast online reinforcement learning with biologically-based state representations
In previous work, we equipped a neurally based Actor-Critic network with biologically inspired grid cells for representing spatial information and examined whether they improved performance on a 2D grid-world task relative to other representation methods. A manual search of the parameter space suggested that grid cells outperformed the alternatives. The present work extends that study by performing a more extensive parameter search to identify optimal parameter sets for each of four representation methods (baseline look-up table, one-hot, random SSPs, and grid cells). After this optimization, the baseline, one-hot, and random-SSP methods improved over the previous study, in some cases matching the performance of grid cells. Taken together, these findings suggest that while the baseline and one-hot methods do perform well once optimized, grid cells do not necessarily require optimization to produce near-optimal performance.
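To make the contrast between representation methods concrete, the sketch below shows two ways of encoding a 2D grid position: a one-hot vector (one active unit per state) and a grid-cell-like feature vector built from periodic responses at several spatial scales and orientations. This is an illustrative simplification under assumed parameters (the `scales` and `n_orient` values are hypothetical), not the paper's exact SSP-based construction.

```python
import numpy as np

def one_hot(x, y, width, height):
    """One-hot encoding: a single active unit per (x, y) grid state.
    Vector length grows with the number of states (width * height)."""
    v = np.zeros(width * height)
    v[y * width + x] = 1.0
    return v

def grid_cell_features(x, y, scales=(1.0, 1.6, 2.56), n_orient=3):
    """Grid-cell-like encoding (illustrative): periodic cosine/sine
    responses of the position projected onto several orientations,
    repeated at several spatial scales. Vector length is fixed
    regardless of the size of the environment."""
    feats = []
    for s in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient  # orientation of this grid module
            proj = (x * np.cos(theta) + y * np.sin(theta)) / s
            feats.append(np.cos(2 * np.pi * proj))
            feats.append(np.sin(2 * np.pi * proj))
    return np.array(feats)

# A one-hot code for a 5x5 world has 25 dimensions with one active unit;
# the grid-like code here has 3 scales x 3 orientations x 2 phases = 18
# dimensions, and the same encoder covers worlds of any size.
v_onehot = one_hot(2, 3, 5, 5)
v_grid = grid_cell_features(2, 3)
```

The practical difference is that the one-hot (and look-up table) representations treat every state as unrelated, so their parameters must be tuned per environment, whereas the periodic, distributed code gives nearby positions similar feature vectors, which is one intuition for why grid cells performed well without extensive optimization.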