
Using reinforcement learning models in decision neuroscience: A tutorial with hierarchical Bayesian approaches with Stan

Authors
Dr. Lei Zhang
University of Birmingham, Centre for Human Brain Health
Abstract

Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in decision neuroscience and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into the latent mechanistic processes underlying social decision-making. Additionally, hierarchical Bayesian approaches to model estimation are growing in popularity, as they provide population-level regularization while retaining individual differences. However, cognitive and social neuroscientists do not necessarily have formal training in computational modeling, which involves multiple steps requiring both programming and quantitative skills. To bridge this gap, this tutorial will first present a comprehensive framework for examining (social) decision-making with the simple Rescorla-Wagner RL model. I will then provide a principled interpretation of the functional role of the learning rate parameter. I will also discuss common misconceptions about RL models and provide a practical workflow for applying them. Finally, I will showcase a few studies that have applied RL modeling frameworks in decision neuroscience, including the emerging field of Computational Psychiatry. In the practical session, I will focus on the probabilistic programming language Stan (mc-stan.org) and the associated R package hBayesDM (github.com/CCS-Lab/hBayesDM) to perform hierarchical Bayesian analyses of a simple RL task. In sum, this tutorial aims to provide simple and scalable explanations and practical guidelines for employing RL models, to assist both beginners and advanced users in better implementing and interpreting their model-based analyses. Workshop materials can be found here: https://github.com/lei-zhang/talks_and_workshops/tree/main/20230718_MathPsy_ICCM_EMPG
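
For reference, the Rescorla-Wagner model mentioned above updates the value V of the chosen option with a simple delta rule, in which the learning rate scales how strongly the reward prediction error drives updating:

$$V_{t+1} = V_t + \alpha \,(R_t - V_t), \qquad 0 \le \alpha \le 1$$

Here $R_t$ is the reward received on trial $t$ and $(R_t - V_t)$ is the prediction error; a larger $\alpha$ yields faster but noisier updating, while a smaller $\alpha$ yields slower, more stable learning.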
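
As a preview of the practical session, below is a minimal sketch of a hierarchical Bayesian fit with hBayesDM, assuming the package's bundled example data for a two-armed bandit task and its delta (Rescorla-Wagner) model; the sampler settings are illustrative, not prescriptive.

# Minimal sketch: hierarchical Bayesian fit of the Rescorla-Wagner
# (delta) model to a two-armed bandit task with hBayesDM.
library(hBayesDM)

fit <- bandit2arm_delta(
  data    = "example",  # example dataset bundled with the package
  niter   = 2000,       # total MCMC iterations per chain (illustrative)
  nwarmup = 1000,       # warmup (burn-in) iterations
  nchain  = 4,          # number of MCMC chains
  ncore   = 4           # CPU cores for running chains in parallel
)

plot(fit, type = "trace")  # visually check chain convergence
rhat(fit)                  # Gelman-Rubin convergence diagnostics
printFit(fit)              # model-fit summary (LOOIC/WAIC)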

Keywords

reinforcement learning
computational modeling
hierarchical Bayesian approach
social neuroscience
model-based analysis

Cite this as:

Zhang, L. (2023, July). Using reinforcement learning models in decision neuroscience: A tutorial with hierarchical Bayesian approaches with Stan. Abstract published at MathPsych/ICCM/EMPG 2023. Via mathpsych.org/presentation/1175.