Workshop: Reinforcement Learning Models in Decision Neuroscience
Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in decision neuroscience and affective neuroscience. In combination with neuroimaging techniques such as functional magnetic resonance imaging, this approach enables quantitative investigation of the latent mechanistic processes underlying social decision-making. In parallel, hierarchical Bayesian approaches to model estimation have grown in popularity, as they provide population-level regularization while retaining individual differences. However, cognitive and social neuroscientists do not necessarily have formal training in computational modeling, which involves multiple steps requiring both programming and quantitative skills. To bridge this gap, this tutorial will first present a comprehensive framework for examining (social) decision-making with the simple Rescorla-Wagner RL model. I will then provide a principled interpretation of the functional role of the learning rate parameter. I will also discuss common misconceptions about RL models and provide a practical workflow for applying them. Finally, I will showcase a few studies that have applied RL modeling frameworks in decision neuroscience, including the emerging field of Computational Psychiatry. In the practical session, I will focus on the probabilistic programming language Stan (mc-stan.org) and the associated R package hBayesDM (github.com/CCS-Lab/hBayesDM) to perform hierarchical Bayesian analyses of a simple RL task. In sum, this tutorial aims to provide simple and scalable explanations and practical guidelines for employing RL models, helping both beginners and advanced users better implement and interpret their model-based analyses.
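To make the Rescorla-Wagner model concrete before the session, the R sketch below implements the delta-rule value update for a two-armed bandit. The function name, the five-trial choice and reward sequences, and the learning rate of 0.3 are illustrative assumptions, not the workshop's actual code or data.

```r
# Minimal sketch: Rescorla-Wagner (delta-rule) value update for a two-armed bandit.
# 'alpha' is the learning rate: larger values weight recent outcomes more heavily,
# smaller values integrate over a longer reward history.
rw_update <- function(v, choice, reward, alpha) {
  pe <- reward - v[choice]          # prediction error for the chosen option
  v[choice] <- v[choice] + alpha * pe  # move the chosen value toward the outcome
  v
}

# Illustrative five-trial episode (assumed data, not from the workshop)
v <- c(0, 0)                   # initial values for options 1 and 2
choices <- c(1, 1, 2, 1, 2)    # option chosen on each trial
rewards <- c(1, 0, 1, 1, 0)    # outcome on each trial
for (t in seq_along(rewards)) {
  v <- rw_update(v, choices[t], rewards[t], alpha = 0.3)
}
print(v)  # learned values after five trials
```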
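For the practical session, a minimal hBayesDM call might look like the following sketch. It assumes the package's two-armed bandit model with the delta (Rescorla-Wagner) learning rule, bandit2arm_delta, fitted to the package's built-in example data; argument names follow the package documentation and may differ across versions, and the sampler settings are placeholders rather than recommended values.

```r
library(hBayesDM)

# Fit the hierarchical Rescorla-Wagner model to the built-in example data;
# replace "example" with a path to your own tab-delimited data file.
fit <- bandit2arm_delta(
  data    = "example",
  niter   = 4000,   # total MCMC iterations per chain
  nwarmup = 2000,   # warm-up (burn-in) iterations
  nchain  = 4,      # number of Markov chains
  ncore   = 4       # CPU cores used in parallel
)

# Inspect the results
fit$allIndPars   # individual-level posterior means (learning rate A, inverse temperature tau)
plot(fit)        # posterior distributions of the group-level parameters
printFit(fit)    # model-fit summary (LOOIC by default)
```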
Workshop materials can be found here: https://github.com/lei-zhang/talks_and_workshops/tree/main/20230718_MathPsy_ICCM_EMPG