Instance-based cognitive modeling: a machine learning perspective
The Cognitive Instance-Based Learning (CogIBL) model is a cognitive framework implemented within the constraints of ACT-R principles. This formulation, however, defined within the field of Cognitive Science, does not reveal the model's full strength and capabilities. In this work, we show that CogIBL essentially implements Kernel Smoothing, a non-parametric Supervised Learning function approximation method. Under this perspective, abstracted from cognitive concepts and expressed as a statistical learning algorithm, we argue that all of CogIBL's implementations fall under two main learning paradigms: Supervised Learning and Reinforcement Learning. This new perspective has multiple benefits. First, it reveals CogIBL's structural differences from parametric approaches such as Neural Networks, links it to well-studied statistical learning theory, which provides theoretical guarantees of convergence, reveals its properties in full, and establishes sound evaluation practices, highlighting where the model should be expected to perform well and why. Second, under the new formulation, the model can be implemented with popular tensor libraries such as TensorFlow and PyTorch, making it scalable and fully parallelizable. This enables it to interact with prevalent Reinforcement Learning frameworks such as OpenAI Gym and DeepMind Lab, train in parallel with synchronous updates, and output multiple decisions simultaneously. Finally, we discuss what this new approach reveals about the strengths and weaknesses of the model, and how a modeler can benefit from these insights.
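To make the kernel-smoothing view concrete, the sketch below implements a Nadaraya-Watson smoother in NumPy: a prediction for a query is a similarity-weighted average of stored instance outcomes, which is the non-parametric, instance-based pattern the abstract attributes to CogIBL. This is an illustrative sketch only; the Gaussian kernel, the bandwidth value, and all function names are our own assumptions, not the paper's CogIBL activation or blending equations.

```python
import numpy as np

def gaussian_kernel(distances, bandwidth):
    # Similarity decays smoothly with distance (assumed kernel choice).
    return np.exp(-(distances ** 2) / (2.0 * bandwidth ** 2))

def kernel_smooth(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson estimate: weighted average of stored outcomes.

    Non-parametric: no weights are fitted; the 'model' is the stored
    instances themselves, as in instance-based learning.
    """
    # Pairwise distances between each query and each stored instance.
    d = np.abs(x_query[:, None] - x_train[None, :])
    w = gaussian_kernel(d, bandwidth)
    w = w / w.sum(axis=1, keepdims=True)   # normalize weights per query
    return w @ y_train                     # blend stored outcomes

# Toy data: instances sampled from y = x^2 (hypothetical example).
x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.array([0.0, 1.0, 4.0, 9.0])
pred = kernel_smooth(x_train, y_train, np.array([1.5]))
```

Because every step is a dense tensor operation (pairwise distances, elementwise kernel, normalized matrix product), the same computation ports directly to TensorFlow or PyTorch and batches over many queries at once, which is the scalability argument made above.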