
The Cognitive Substrates of Model-Based Learning: An Integrative Declarative-Procedural Model

Authors
Cher Yang
University of Washington Seattle ~ Psychology
Prof. Andrea Stocco
University of Washington ~ Psychology
Abstract

Understanding the fundamental cognitive processes of decision-making is crucial for developing appropriate cognitive models. Two main planning-based approaches have been used to investigate learning in complex decision-making tasks: one using model-based reinforcement learning (MB-RL), an extension of reinforcement learning that includes high-level planning, and the other using instance-based learning (IBL), which draws on episodic memories of previous interactions. In this paper, we attempt to reconcile the two approaches by using ACT-R to implement a cognitively plausible substrate for the planning component of MB-RL. We review the model-based (MB) and model-free (MF) learning approaches in reinforcement learning and discuss their roles in decision-making strategy. Within the ACT-R framework, we propose a model that incorporates memory retrieval into MB planning, offering a cognitively plausible account of the planning component of MB-RL. Our combined model successfully replicates well-known findings in the literature, including the developmental reliance on memory and response-time differences between common and rare options. Finally, our model naturally accounts for the balance between memory and RL depending on the relative cost of each. We argue for the advantages of our cognitive model and discuss the significance of this study for understanding the neural and computational processes underpinning decision-making strategies, as well as its applications to artificial intelligence and decision-making modeling.
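For readers unfamiliar with the two learning signals the abstract contrasts, a minimal sketch may help. This is our own illustration, not the authors' implementation: the function names, parameter values, and instance format are assumptions. It shows a model-free SARSA-style value update alongside an instance-based (IBL-style) estimate computed from stored episodes.

```python
from collections import defaultdict

# Illustrative constants (assumed, not taken from the paper)
ALPHA = 0.1   # learning rate
GAMMA = 0.9   # discount factor

def sarsa_update(Q, s, a, r, s2, a2):
    """Model-free SARSA update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a))."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s2, a2)] - Q[(s, a)])

def blended_value(instances, s, a):
    """Instance-based estimate: mean outcome over stored episodes
    matching (state, action). Real IBL/ACT-R models weight instances
    by memory activation; a plain mean is used here for brevity."""
    outcomes = [r for (si, ai, r) in instances if (si, ai) == (s, a)]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Usage: the MF learner updates a running value estimate in place,
# while the IBL learner re-derives a value from episodic traces.
Q = defaultdict(float)
sarsa_update(Q, "s0", "a0", 1.0, "s1", "a1")

episodes = [("s0", "a0", 1.0), ("s0", "a0", 0.0)]
v = blended_value(episodes, "s0", "a0")
```

A hybrid model of the kind described in the abstract could then arbitrate between the two estimates based on the relative cost of memory retrieval versus cached-value lookup.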

Keywords

Decision-making
Reinforcement Learning
Model-Based Learning
Instance-Based Learning
Cognitive Architecture
Discussion

Hi Cher, I didn't quite understand this in your paper: "In the pure RL model, these probabilities are directly provided to the model. However, this assumption may not fully capture the nature of learning and cognitive processes in the task. This knowledge is not simply given, but actively updated and accumulated by agents from the environment thr...

Jim Treyens 1 comment

This is my second question on your model. I was curious as to why you chose to use PyACTUp instead of the standard ACT-R implementation.

Jim Treyens 1 comment
Question about stimuli

Nice work, Cher. I'm wondering about the stimuli on this task (and similarity to some other two-stage tasks my team has been looking at). Were the Stage 1 stimuli/prompts the same on every trial? The video has images of rockets (I think), and people had to pick a rocket in stage one. The depiction in the task diagram even suggests you could fal...

Dr. Leslie Blaha 2 comments

Hi Cher, this is the first of a couple of questions about your model. I was interested in why you chose to employ the SARSA MF framework as a substitute for ACT-R's procedural model.

Jim Treyens 1 comment
Cite this as:

Yang, C., & Stocco, A. (2023, June). The Cognitive Substrates of Model-Based Learning: An Integrative Declarative-Procedural Model. Paper presented at Virtual MathPsych/ICCM 2023. Via mathpsych.org/presentation/1279.