Anderson et al. (2019) present an ACT-R model of how humans learn to play rapid-action video games. To further test this model, we utilized new measures of action timing and sequencing to predict skill acquisition in a controlled motor task named Auto Orbit. Our first goal was to use these measures to capture time-related effects of speed on motor skill acquisition, operationalized as a performance score. Our second goal was to compare human and model motor skill learning. Our results suggest that humans rely on different motor timing systems in the sub- and supra-second time scales. While our model successfully learned to play Auto Orbit, some discrepancies in terms of motor learning were noted as well. Future research is needed to improve the current model parametrization and enable ACT-R’s motor module to engage in rhythmic behavior at fast speeds.
To be keen learners, humans need not only external rewards but also internal rewards. To date, there have been many studies on environment learning using intrinsic motivation for artificial agents. In this study, we aim to build a method to express curiosity in new environments via the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture. This model builds on "production compilation" and "utility," generic learning mechanisms of ACT-R, and regards pattern matching in the environment as a source of intellectual curiosity. We simulated a path-planning task in a maze environment using the proposed model. The model with intellectual curiosity improved its understanding of the environment through the task of searching it. Furthermore, we implemented a comparable agent using standard reinforcement learning and compared it with the ACT-R model.
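The curiosity idea described above can be illustrated outside ACT-R. The following is a minimal, hypothetical sketch (not the paper's model): a tabular Q-learning agent in a small grid maze whose reward is augmented with a count-based novelty bonus, a common stand-in for intrinsic motivation in the reinforcement-learning literature. All names and parameter values here are illustrative assumptions.

```python
import random

# Hypothetical sketch: tabular Q-learning on a grid maze, with an intrinsic
# novelty bonus added to the extrinsic goal reward.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def train(width=5, height=5, episodes=300, alpha=0.5, gamma=0.95,
          epsilon=0.1, beta=0.5, seed=0):
    rng = random.Random(seed)
    goal = (width - 1, height - 1)
    q = {}       # (state, action index) -> estimated value
    visits = {}  # state -> visit count; the curiosity bonus decays with it

    def greedy(s):
        return max(range(4), key=lambda a: q.get((s, a), 0.0))

    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):
            a = rng.randrange(4) if rng.random() < epsilon else greedy(s)
            dx, dy = ACTIONS[a]
            ns = (min(max(s[0] + dx, 0), width - 1),
                  min(max(s[1] + dy, 0), height - 1))
            visits[ns] = visits.get(ns, 0) + 1
            # Extrinsic reward only at the goal; the intrinsic bonus rewards
            # rarely visited states, pushing the agent to explore broadly.
            r = (1.0 if ns == goal else 0.0) + beta / visits[ns] ** 0.5
            target = r + gamma * max(q.get((ns, b), 0.0) for b in range(4))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (target - old)
            s = ns
            if s == goal:
                break
    return q, visits
```

With the bonus in place, the agent tends to cover the whole maze rather than latching onto the first rewarding path, mirroring the abstract's point that curiosity improves understanding of the environment through search.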
As cognitive modeling has matured, so too have its tools. High-level languages are such tools and present a rich opportunity for the acceleration and simplification of model development. After reviewing some of the major contributions to this area, we introduce a new language (Jass) for building ACT-R models. Jass simplifies and accelerates model development by providing an imperative language that is compiled to production rules. A complex model implemented using this language is detailed.
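To ground the target of such compilation, here is a minimal sketch of the production-rule formalism itself (an illustrative toy, not Jass or ACT-R code): each rule pairs a condition on working memory with an action that updates it, and a recognize-act cycle repeatedly fires a matching rule.

```python
# Toy production system: rules are (condition, action) pairs over a working
# memory represented as a dict. One rule fires per cycle, loosely echoing
# ACT-R's serial production bottleneck.
def run(rules, memory, max_cycles=20):
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            if condition(memory):
                action(memory)
                fired = True
                break  # fire one matching rule per cycle
        if not fired:
            break  # no rule matches: the system halts
    return memory

# Hypothetical example: rules that count down from a start value while
# tallying how many cycles (rule firings) it took.
rules = [
    (lambda m: m.get("count", 0) > 0,
     lambda m: m.update(count=m["count"] - 1, steps=m.get("steps", 0) + 1)),
]
```

For instance, `run(rules, {"count": 3})` yields `{"count": 0, "steps": 3}`. An imperative language like Jass presumably lets a modeler write the loop directly and have it lowered to rules of this shape.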
Models of learning and retention make predictions of human performance based on the interaction of cognitive mechanisms with temporal features such as the number of repetitions, time since last presentation, and item spacing. These features have been shown to consistently influence performance across a variety of domains. Typically omitted from these accounts are the changes in the cognitive processes and key mechanisms people use while acquiring a skill. Here we integrate a model of skill acquisition (Tenison & Anderson, 2016) with the Predictive Performance Equation (PPE; Walsh, Gluck, Gunzelmann, Jastrzembski, & Krusmark, 2019) using Bayesian change detection (Lee, 2019). Our results show that this integration both better represents an individual's performance during training and improves out-of-sample prediction.