Dr. Ricky Romeu
Huntington's disease is a debilitating neurodegenerative illness involving motor and cognitive impairments throughout its progression, eventually leading to death. Diagnosis is based on motor symptoms; however, the cognitive symptoms are more debilitating. Assessing the disease's consequences on cognition can suggest which processes should be targeted for cognitive-behavioral treatment. We present an application of a hierarchical diffusion model (Ratcliff et al., 2016; Vandekerckhove et al., 2010) to an ambulatory assessment with manifest (HD) and premanifest (PM) Huntington's patients and compare their performance, as assessed by the model, to performance from controls on a numerosity task (McLaren et al., 2020). We found a gradation of impairment across the groups in the mean drift rate, such that: (1) HD always had a lower drift rate than controls; (2) HD had lower drift rates than PM in the "easy" condition, but they had essentially equivalent rates in the "difficult" condition; (3) PM had lower drift rates than controls in the "difficult" condition, but they had essentially equivalent rates in the "easy" condition. These results held even after regressing out age for all groups, and were not observed when analyzing average response times or correct/incorrect response percentages. Our Bayesian approach also allowed us to assess which parameters were most reliably estimated with ambulatory data through the Gelman-Rubin statistic. Overall, we found that the hierarchical diffusion model provided novel insights into the progression of Huntington's disease, with our Bayesian model providing a powerful method of assessment and group separation even with in-home, ambulatory data on mobile phones.
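For readers unfamiliar with the model, the core of a single (non-hierarchical) diffusion process can be sketched in a few lines. This is an illustrative simulation only, not the authors' hierarchical Bayesian implementation, and all parameter values are arbitrary:

```python
import random

def diffusion_trial(drift, boundary=1.0, ndt=0.3, dt=0.005, noise=1.0):
    """Simulate one trial of a Wiener diffusion process: evidence
    accumulates from 0 toward +boundary (correct) or -boundary (error)
    at mean rate `drift`; `ndt` is non-decision time.
    Returns (response_time, correct)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return ndt + t, x > 0

def mean_accuracy(drift, n_trials=500, seed=0):
    """Monte Carlo accuracy for a given drift rate."""
    random.seed(seed)
    return sum(diffusion_trial(drift)[1] for _ in range(n_trials)) / n_trials

# A lower drift rate yields lower accuracy and slower responses --
# the dimension on which the HD, PM, and control groups differed.
```

The key property exploited in the abstract is that drift rate summarizes evidence-accumulation efficiency in a way that raw mean RTs or percent-correct scores do not.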
Prof. Clintin Davis-Stober
In previous studies, we tested properties of sexual decision making using a novel sexual gambles task in which participants made repeated choices between hypothetical sexual partners based on physical attractiveness and risk of contracting a sexually transmitted infection (STI). We found that the vast majority of participants (~98%) used a rational, compensatory strategy when choosing between partners and that between-subject variability in choice behavior was associated with sexual attitudes and behaviors (Hatz et al., in press). In the present study, we tested whether this pattern of results would hold under acute alcohol intoxication, a manipulation known to impact cognitive processing abilities. Young adult moderate drinkers (N=44) were recruited from a large Midwestern university and surrounding community to participate in a double-blind, within-subjects laboratory alcohol administration study consisting of counterbalanced alcohol (target peak BrAC=0.10 g%) and placebo sessions. Participants completed the sexual gambles task at matched points (BrAC ≈ 0.080 g%) on the ascending and descending limbs of intoxication in the alcohol session and at approximately matched points in the placebo session. We used Bayesian model selection to test whether participants used a compensatory (i.e., a numerical utility representation) or non-compensatory decision-making strategy on the task. We then used a p-median clustering algorithm (Brown et al., 2016) to identify between-subject variability in choice behavior. In a replication of our previous findings, nearly all participants used a compensatory strategy when making sexual decisions, regardless of beverage condition or limb of intoxication. Results and implications will be discussed.
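As a rough illustration of the clustering step (not the Brown et al., 2016 algorithm itself, which is designed for larger problems), a p-median solution picks p exemplar observations minimizing the total distance from every observation to its nearest exemplar. A brute-force sketch with made-up 1-D "choice profiles":

```python
from itertools import combinations

def p_median(points, p, dist):
    """Exhaustive p-median clustering: choose p exemplars (medians)
    from `points` minimizing the summed distance from every point to
    its nearest exemplar. Feasible only for small problems."""
    best, best_cost = None, float("inf")
    for medians in combinations(range(len(points)), p):
        cost = sum(min(dist(pt, points[m]) for m in medians)
                   for pt in points)
        if cost < best_cost:
            best, best_cost = list(medians), cost
    return best, best_cost

# Example: two clear clusters of hypothetical 1-D profiles.
exemplars, cost = p_median([0.0, 1.0, 10.0, 11.0], p=2,
                           dist=lambda a, b: abs(a - b))
```

Unlike k-means, the cluster centers here are actual participants' profiles, which makes the clusters directly interpretable.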
Prof. Jay I. Myung
Prof. Woo-Young Ahn
Nicotine addiction is a major health problem worldwide, and it is imperative that we develop reliable and inexpensive tools for predicting its treatment outcomes. Recent studies suggest that computational modeling and associated tools may provide us with neurocognitive markers of addictive disorders. Importantly, adaptive design optimization (ADO), which guides stimulus selection in an optimal way, may lead to rapid, precise, and reliable markers of addictive disorders. Large-scale mobile or wearable data may also reveal digital phenotypes of daily-life behavior, but it remains untested whether ADO can contribute to developing optimized digital phenotypes for addictive behaviors. Here, we conducted a longitudinal study with 43 individuals participating in a smoking cessation clinic for up to 6 weeks to investigate whether ADO-based markers from a smartphone app can predict their future nicotine intake. Two ADO-based cognitive tasks provided individuals' model parameters regarding their decision-making on a daily basis. Participants also answered surveys regarding their smoking-related behaviors and psychological states every day. The results suggest that ADO-based digital phenotypes (in a smartphone app) show high test-retest reliability, comparable to that of laboratory-based ADO markers. Time-lagged regression analyses using daily ADO-based digital phenotypes and survey responses revealed several significant features that predicted the amount of smoking on the next day, while model parameters such as risk sensitivity and ambiguity sensitivity accounted for subjects' mean level of nicotine intake. These findings suggest that ADO may contribute to the development of reliable digital phenotypes in daily life.
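The time-lagged design pairs each day's app-derived features with the following day's smoking outcome. Schematically (the feature names and values below are hypothetical, not from the study):

```python
def make_lagged(features, outcome, lag=1):
    """Align day-t predictors with the day-(t+lag) outcome: the design
    used to predict tomorrow's nicotine intake from today's
    digital phenotype."""
    return features[:-lag], outcome[lag:]

# Hypothetical daily records: each row holds one day's model
# parameters (e.g., risk sensitivity) plus a survey answer.
daily_features = [[0.4, 3], [0.6, 2], [0.5, 4]]
cigarettes = [10, 12, 8]

X, y = make_lagged(daily_features, cigarettes)
# X pairs days 1-2 with y = days 2-3 smoking counts.
```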
We update our beliefs based on evidence, but often more slowly than Bayesian theory demands. A belief in the stability of the environment may underlie this conservatism bias. Notably, patients with delusions do not show the conservatism bias and can be more Bayesian in probabilistic reasoning tasks. Still, their reasoning has been explained by reduced general cognitive abilities, i.e., lower working memory capacity, overweighting of recent information, or lower thresholds for switching from one belief to another. We modeled the graded-estimate version of the beads task, i.e., a task in which one sees two jars containing opposite ratios of colored beads and then estimates the probability that a shown bead comes from jar A. We model deviations from an ideal Bayesian observer on three independent datasets, totalling n=176 healthy controls and n=128 patients with schizophrenia. The parameters describe (a) the number of beads considered (memory), (b) systematic deviations, and (c) unsystematic deviations (volatility) in probability estimates. We find that, on average, patients consider fewer beads and show more volatile responding. However, patients' probability estimates are on average closer to the true probabilities, and hence they show less of a conservatism bias. Our mathematical model captures well the cognitive mechanisms proposed to contribute to performance differences in the beads task, known as the jumping-to-conclusions bias. It also shows that taking less data into account may reduce a cognitive bias.
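The ideal-observer baseline against which these deviations are measured is simply Bayes' rule applied to the observed bead sequence. A minimal sketch, including only the memory-window parameter (the systematic- and volatility-deviation parameters are omitted, and the jar ratio 0.8/0.2 is an assumption for illustration):

```python
def posterior_jar_a(beads, p_a=0.8, prior=0.5, memory=None):
    """Posterior probability that the bead sequence came from jar A.

    beads: sequence of 1 (jar A's majority color) and 0 (the other color).
    memory: if set, only the last `memory` beads are used, mirroring
    the 'number of beads considered' parameter in the abstract.
    """
    if memory is not None:
        beads = beads[-memory:]
    k, n = sum(beads), len(beads)
    like_a = p_a ** k * (1 - p_a) ** (n - k)          # P(data | jar A)
    like_b = (1 - p_a) ** k * p_a ** (n - k)          # P(data | jar B)
    return prior * like_a / (prior * like_a + (1 - prior) * like_b)
```

A conservative observer reports probabilities shrunk toward 0.5 relative to this benchmark; a shorter memory window effectively discards early evidence, which is one route to more extreme (less conservative) estimates.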
Ms. Rachel Lerch
John A. Tarduno
Robert A. Jacobs
In the visual change detection paradigm, observers are shown two stimuli in succession, labeled x and y, and are asked to report whether they are the same or different. The goal in such studies is to determine the probability that an observer will detect a difference as a function of the stimuli, represented by f(x, y). However, when the number of possible stimuli is large, it is infeasible to sample all (x, y) combinations. Bayesian hierarchical models offer a solution to this problem by introducing statistical dependencies between variables (e.g., different observers or different stimuli). In this work, we utilize a Gaussian Markov Random Field (GMRF) prior to estimate visual sensitivity. GMRFs are a technique from spatial statistics that introduces dependencies between variables based on their proximity. As applied to the change detection paradigm, such a prior assumes that f(x, y) should be similar to f(x + delta, y). Our approach allows for the estimation of the complete function f(x, y) even when the stimulus space is sparsely sampled. Posterior inference for the model is performed using MCMC, implemented via the Stan software package. We apply our approach to a change detection experiment in which stimuli were visually complex animations of geologic faults varying in their structural features. Research participants were novices to the domain of geology, who first underwent one of two training sessions that introduced knowledge of different geologic fault categories. Our analysis reveals a significant effect of category knowledge on visual working memory performance.
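To make the GMRF smoothness assumption concrete: the prior's precision matrix penalizes squared differences between neighboring values of f, so f(x, y) and f(x + delta, y) are pulled toward each other. A toy construction on a 1-D grid (pure Python for brevity; the actual model is fit in Stan, and this is not the authors' code):

```python
def grid_precision(n):
    """Precision matrix Q of a first-order GMRF on a 1-D grid of n
    stimulus values: Q[i][i] = number of neighbors, Q[i][j] = -1 for
    adjacent i, j. The implied Gaussian log-density is -0.5 * f'Qf."""
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        Q[i][i] += 1.0
        Q[i + 1][i + 1] += 1.0
        Q[i][i + 1] -= 1.0
        Q[i + 1][i] -= 1.0
    return Q

def smoothness_penalty(f, Q):
    """0.5 * f'Qf, which equals half the sum of squared differences
    between neighboring entries of f: smooth f is penalized less."""
    n = len(f)
    return 0.5 * sum(f[i] * Q[i][j] * f[j]
                     for i in range(n) for j in range(n))
```

Because Q couples only neighbors, the matrix is sparse, which is what makes posterior inference tractable even when most (x, y) cells contain no data.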
N. Pontus Leander
COVID-19, declared a public health emergency of international concern by the WHO, is rapidly sweeping the world. Emerging evidence on risk perception and public responses during previous outbreaks (e.g., SARS, H1N1) suggests that risk perception may be highly related to emotion and even mental health (Qian et al., 2003; Raude & Setbon, 2009; Bults et al., 2011). This study was based on the PsyCorona Survey, an international project on COVID-19 covering over 56,000 participants from 96 countries. Specification curve analysis (SCA), which considers all reasonable model specifications to avoid subjective bias in modelling choices, was used to examine the relationship of risk perception of COVID-19 with emotion and self-rated mental health. First, 162 multilevel linear regression models were estimated for risk perception and emotion, all of which indicated that high risk perception of COVID-19 significantly increased the level of negative emotions (median β=0.24, P<0.001) and reduced the level of positive emotions (median β=-0.18, P<0.001). Moreover, higher risk perception was also associated with worse mental health (β=-0.19, P<0.001). We further used SCA to explore whether the relationship between risk perception and mental health is mediated by emotion. Among the 54 regressions of mental health on risk perception and emotion, 36 models showed a strong mediation effect, with no significant direct effect of risk perception on mental health after controlling for emotion. We conclude that risk perception of COVID-19 can influence emotion and ultimately have an impact on mental health.
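Schematically, SCA refits the same focal relationship under every defensible combination of analytic choices and summarizes the resulting distribution of estimates. A toy sketch in which each "specification" is a data filter plus an outcome transform (both hypothetical, standing in for the study's choices of covariates, codings, and subsamples):

```python
def slope(x, y):
    """Univariate OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def specification_curve(x, y, specs):
    """Estimate the x -> y slope under every specification, given as
    (keep-filter, outcome-transform) pairs, and return the sorted
    estimates; their median is a common SCA summary (cf. the
    'median β' values reported in the abstract)."""
    estimates = []
    for keep, transform in specs:
        pairs = [(xi, transform(yi)) for xi, yi in zip(x, y)
                 if keep(xi, yi)]
        estimates.append(slope([p[0] for p in pairs],
                               [p[1] for p in pairs]))
    return sorted(estimates)
```

The point of the method is the shape of the whole curve: if essentially all specifications agree in sign and significance (as in the 162 emotion models above), the conclusion does not hinge on any one modelling choice.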