The Cognitive Substrates of Model-Based Learning: An Integrative Declarative-Procedural Model
Understanding the cognitive processes underlying decision-making is crucial for developing appropriate cognitive models. Two main planning-based approaches have been used to investigate learning in complex decision-making tasks: model-based reinforcement learning (MB-RL), an extension of reinforcement learning that incorporates high-level planning, and instance-based learning (IBL), which builds on episodic memories of previous interactions. In this paper, we attempt to reconcile the two approaches by using ACT-R to implement a cognitively plausible substrate for the planning component of MB-RL. We review the model-based (MB) and model-free (MF) learning approaches in reinforcement learning and discuss their roles in decision-making strategy. Within the ACT-R framework, we propose a model that incorporates memory retrieval into MB planning, offering a cognitively plausible account of the planning component of MB-RL. The combined model replicates well-known findings from the literature, including developmental changes in reliance on memory and response-time differences between common and rare transitions. Finally, the model naturally accounts for the trade-off between memory and RL as a function of the relative cost of each. We discuss the advantages of this cognitive model and the significance of this work for understanding the neural and computational processes underpinning decision-making strategies, as well as for applications in artificial intelligence and decision-making modeling.
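The hybrid described above can be illustrated with a minimal sketch: a SARSA-style model-free value update alongside an instance-based (blended) memory estimate in the spirit of ACT-R/IBL. This is not the paper's implementation; the class name, parameters, and the single-presentation base-level activation formula A = -d · ln(t) are simplifying assumptions for illustration.

```python
import math

class HybridAgent:
    """Illustrative sketch (not the paper's model): SARSA model-free
    values stored next to episodic instances whose outcomes are
    blended by ACT-R-style retrieval probabilities."""

    def __init__(self, alpha=0.1, gamma=0.9, decay=0.5, temperature=0.25):
        self.alpha = alpha            # SARSA learning rate
        self.gamma = gamma            # discount factor
        self.decay = decay            # base-level decay d (ACT-R default 0.5)
        self.temperature = temperature  # blending noise parameter
        self.q = {}                   # (state, action) -> MF value
        self.instances = []           # (time, state, action, outcome)
        self.t = 0                    # model time in discrete steps

    def sarsa_update(self, s, a, r, s2, a2):
        # Standard on-policy TD update: Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a))
        q_sa = self.q.get((s, a), 0.0)
        q_next = self.q.get((s2, a2), 0.0)
        self.q[(s, a)] = q_sa + self.alpha * (r + self.gamma * q_next - q_sa)

    def store(self, s, a, outcome):
        # Record one episodic instance of taking a in s and its outcome.
        self.t += 1
        self.instances.append((self.t, s, a, outcome))

    def blended_value(self, s, a):
        # Blend stored outcomes, weighting each instance by a softmax over
        # its base-level activation A_i = -d * ln(elapsed time).
        self.t += 1
        acts, outs = [], []
        for (t_i, s_i, a_i, o_i) in self.instances:
            if (s_i, a_i) == (s, a):
                acts.append(-self.decay * math.log(self.t - t_i))
                outs.append(o_i)
        if not outs:
            return 0.0
        weights = [math.exp(A / self.temperature) for A in acts]
        z = sum(weights)
        return sum(w / z * o for w, o in zip(weights, outs))
```

In this sketch, recent instances receive higher activation and so dominate the blended estimate, while the SARSA values accumulate slowly; a full model would arbitrate between the two estimates, e.g., by their relative retrieval and computation costs.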
Hi Cher, I didn't quite understand this in your paper: "In the pure RL model, these probabilities are directly provided to the model. However, this assumption may not fully capture the nature of learning and cognitive processes in the task. This knowledge is not simply given, but actively updated and accumulated by agents from the environment thr...
This is my second question on your model. I was curious as to why you chose to use PyACTUp instead of the standard ACT-R implementation.
Nice work, Cher. I'm wondering about the stimuli on this task (and similarity to some other two-stage tasks my team has been looking at). Were the Stage 1 stimuli/prompts the same on every trial? The video has images of rockets (I think), and people had to pick a rocket in stage one. The depiction in the task diagram even suggests you could fal...
Hi Cher, this is the first of a couple of questions about your model. I was interested in why you chose to employ the SARSA MF framework as a substitute for ACT-R's procedural model.