Using multimodal data-fusion to identify connectomes of reinforcement learning constructs and associations with depressive phenotypes
Reinforcement learning (RL), the process by which we learn about the environment, is dysregulated in many psychiatric disorders and is especially impaired in major depressive disorder (MDD). Understanding the neurobiological correlates of RL is, therefore, a promising avenue for parsing depression pathophysiology. However, RL is a multifaceted construct involving several sub-processes: valuing the available options (valuation), accumulating evidence for those options (sequential sampling), choosing the best option (explore-exploit behavior), salience attribution, and feedback integration (learning rate). Using computational modeling, we can quantify these sub-processes and elucidate the underlying latent behavioral constructs. Interestingly, animal work has shown that these sub-processes have distinct biological underpinnings, suggesting that RL sub-processes can be used to parse MDD heterogeneity and to develop more targeted interventions. The goal of this study was to identify the functional and structural connectomes of these RL sub-constructs using multimodal data fusion. Forty-six subjects (15 healthy, 31 clinical) completed a structural T1-weighted MPRAGE scan and an RL task in which they had to learn to choose the stimulus associated with reward. A combined Q-learning/drift-diffusion model was used to estimate RL parameters for each subject, including drift rate (DR), boundary threshold (BT), and learning rate (LR). The boundary threshold is the amount of evidence required before a decision is made: wider decision boundaries lead to slower but more accurate decisions, whereas narrower boundaries lead to faster but more error-prone decisions. The drift rate reflects the average speed with which the decision process approaches the response boundaries; higher drift rates lead to faster and more accurate decisions. The learning rate represents the degree to which expected values are updated after feedback and, hence, how quickly decisions adjust to changing circumstances.
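The interplay of the three parameters can be illustrated with a minimal simulation of a combined Q-learning/drift-diffusion model. This is a rough sketch only: all parameter values, the two-option task structure, and the linear scaling of drift rate by the Q-value difference are illustrative assumptions, not the fitted model or estimates from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, for illustration only (not fitted estimates)
alpha = 0.3      # learning rate (LR): weight given to prediction errors
a = 1.5          # boundary threshold (BT): evidence needed to commit
v_scale = 2.0    # assumed scaling of drift rate (DR) by the value difference

Q = np.zeros(2)                      # expected values of the two stimuli
p_reward = np.array([0.8, 0.2])      # true reward probabilities of the task

def ddm_trial(v, a, dt=0.001, noise=1.0, rng=rng):
    """Simulate one drift-diffusion trial; returns (choice, reaction time).
    Evidence drifts from 0 toward +a/2 (option 0) or -a/2 (option 1)."""
    x, t = 0.0, 0.0
    while abs(x) < a / 2:
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (0 if x > 0 else 1), t

for trial in range(200):
    v = v_scale * (Q[0] - Q[1])          # drift tracks the value difference
    choice, rt = ddm_trial(v, a)
    reward = rng.random() < p_reward[choice]
    # Q-learning update: expected value moves toward the outcome at rate alpha
    Q[choice] += alpha * (reward - Q[choice])
```

In this sketch, a larger `alpha` makes `Q` track recent feedback more aggressively, a larger `a` slows responses while reducing errors, and as the Q-values separate, the drift rate grows and choices become faster and more reliable.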
We performed a Linked Independent Component Analysis (LICA) of 1) modulated grey matter (GM) images generated by FSL-VBM and 2) vertex-wise cortical thickness (CT) and pial surface area (PSA) maps estimated using FreeSurfer across all subjects. LICA is a data-driven multivariate approach that identifies a set of multimodal spatial patterns, each composed of morphometric properties linked across modalities, together with subject loadings for each pattern that capture inter-subject variability. LICA identified three components uniquely associated with the three RL parameters. The LR component comprised GM density in the ventromedial prefrontal, dorsolateral prefrontal, and visual cortices, as well as PSA and CT in the amygdala/hippocampus, whereas the BT and DR components showed different spatial patterns. Critically, component loadings correlated with clinical symptoms. Lower structural covariance (SC) in the LR component was associated with higher anxiety but lower anhedonia, suggesting different mechanisms of action. Similarly, lower SC in the BT component was associated with negative affect. Multimodal data fusion thus disentangles the structural connectomes of RL sub-constructs, providing insight into MDD heterogeneity. Other studies utilizing these methods, as well as prediction models, will also be discussed.
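The fusion logic of LICA can be sketched in simplified form. The study's actual analysis uses FSL's Bayesian linked ICA (FLICA), which models per-modality noise and weights; the sketch below substitutes a much cruder joint (concatenated) ICA using scikit-learn's FastICA on synthetic data. The array sizes, the z-score-and-concatenate fusion step, and the random inputs are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)

# Toy stand-ins for the three modalities (subjects x features); the real
# inputs would be VBM GM maps and FreeSurfer CT and PSA surfaces.
n_subj = 46
gm  = rng.standard_normal((n_subj, 500))   # grey matter density
ct  = rng.standard_normal((n_subj, 300))   # cortical thickness
psa = rng.standard_normal((n_subj, 300))   # pial surface area

def zscore(x):
    """Standardize each feature so no modality dominates the decomposition."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Simplification: concatenate standardized features across modalities and run
# a single ICA -- a "joint ICA" approximation, not FLICA's full Bayesian model.
stacked = np.hstack([zscore(gm), zscore(ct), zscore(psa)])  # subjects x features

ica = FastICA(n_components=3, random_state=0)
spatial_maps = ica.fit_transform(stacked.T)   # (features x components) patterns
loadings = ica.mixing_                        # (subjects x components) loadings
```

Each column of `spatial_maps` spans all three modalities at once, which is what makes the pattern "linked"; the per-subject `loadings` are what would then be correlated with the RL parameters (LR, BT, DR) and clinical scores, as in the abstract.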