Symposium: Bayesian Advances in Modeling Individual Differences
Dr. Julia Haaf
Prof. Edward de Haan
Experimental studies of brain lesions can reveal the neural underpinnings of behavior and inform theories of cognitive processes. But the standard pre-post analysis methods used in lesion studies make an unnecessarily permissive assumption: they allow for the possibility that some individuals' abilities improve after a lesion has been applied. This assumption is ethically and scientifically problematic: (1) it contributes to the pervasive low statistical sensitivity of lesion studies (wasting animal lives), and (2) it limits inferences to population averages when researchers seek insights that apply to each individual. These problems are exacerbated when researchers infer lesion-spared abilities from non-significant p-values. We propose Bayesian hypothesis tests that incorporate constraints on individual differences and can quantify evidence of spared abilities. Our tests reflect researchers' substantive knowledge and appropriately constrain the permissible outcomes: (1) carefully applied lesions impair each individual's ability, and (2) the magnitude of impairment correlates with pre-lesion ability. As a result, our tailored Bayesian hypothesis tests (1) increase statistical sensitivity (saving animal lives), (2) warrant inference at the level of individuals, and (3) can quantify evidence for spared abilities. In a series of simulation studies, we compare the performance of our tests with that of standard procedures, quantifying the gains in evidence and the resulting sample-size savings for sequential designs. Of course, there is no free lunch: the increase in statistical sensitivity comes from additional assumptions, and violations of these assumptions can lead to biased inference. We therefore explore the consequences of violating assumptions about response distributions and the structure of individual differences.
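One common way to score an order constraint like "every individual is impaired" is the encompassing-prior approach, in which the Bayes factor equals the posterior probability that the constraint holds divided by its prior probability. The sketch below illustrates the idea with simulated posterior draws; all numbers, and the simple symmetric prior, are hypothetical stand-ins for the richer tests described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of individual lesion effects (pre minus post
# performance) for 8 animals; positive values indicate impairment.
n_draws, n_subjects = 10_000, 8
theta = rng.normal(loc=0.5, scale=0.4, size=(n_draws, n_subjects))

# Encompassing-prior Bayes factor for "every individual is impaired":
# posterior probability of the constraint over its prior probability.
post_prob = np.mean(np.all(theta > 0, axis=1))
prior_prob = 0.5 ** n_subjects  # symmetric, independent prior on each effect
bf_constrained = post_prob / prior_prob
print(f"BF for 'all impaired' vs. unconstrained: {bf_constrained:.1f}")
```

Because the constrained region shrinks geometrically with the number of individuals, even moderate posterior support for "all impaired" can yield a large Bayes factor, which is the source of the sensitivity gain the abstract describes.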
This is an in-person presentation on July 20, 2023 (09:00 ~ 09:20 UTC).
Dr. Alexandra Sarafoglou
In experimental semantics, researchers are interested in the cognitive processes involved in language processing. The theory in this research area is highly formalized and rich, and usually embedded in formal logic. For instance, regarding the representation of quantifiers, formal logic predicts that the meanings of the quantifiers "more than half" and "most" are identical (i.e., more than 50% of the objects), that the meaning of these quantifiers is unambiguous, and consequently that all individuals perceive these quantifiers in the same way. While formal logic leads to precise theoretical predictions, a drawback is that it often fails to explain the richness of the observed data. Previous literature has found, for instance, that the quantifier "most" is associated with higher percentages than the quantifier "more than half," that the meaning of "most" is less precisely defined than that of "more than half," and that individuals vary considerably in their response patterns. In this talk, we present a novel statistical model that captures individual differences in the representation of quantifiers. In addition, the model explains these differences by introducing cognitive processes such as thresholds, vagueness, and response error into the theory. We will illustrate our approach by applying our model to longitudinal data.
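A toy version of the kind of model described can make the three cognitive processes concrete: a person-specific threshold, a vagueness (slope) parameter, and a response-error rate jointly determine the probability of endorsing a quantifier for a given proportion. The parameter names and the probit functional form below are illustrative assumptions, not the authors' actual model.

```python
import math

def p_endorse(proportion, threshold, vagueness, error):
    """Probability that a person judges "most" true of a given proportion.

    Hypothetical three-parameter sketch: endorsement follows a normal CDF
    centered at the person's threshold, with width set by vagueness, mixed
    with a lapse (response-error) process that guesses at chance.
    """
    phi = 0.5 * (1 + math.erf((proportion - threshold) / (vagueness * math.sqrt(2))))
    return error * 0.5 + (1 - error) * phi

# Two hypothetical individuals: one at the logical threshold (0.50), one who
# reserves "most" for clearly larger proportions.
for thr in (0.50, 0.62):
    print([round(p_endorse(p, thr, 0.05, 0.02), 2) for p in (0.45, 0.55, 0.65)])
```

Individual differences then correspond to person-level variation in the threshold and vagueness parameters, which a hierarchical model can estimate jointly across people.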
This is an in-person presentation on July 20, 2023 (09:20 ~ 09:40 UTC).
Dr. Jeffrey Rouder
Recently, there has been a merger between experimental and differential psychology in which experimental tasks are employed to probe individual differences. While this merger appears desirable, the results have been problematic in two ways. First, correlations between tasks measuring the same construct are relatively low. For example, the flanker and Stroop tasks are both assumed to measure the ability to inhibit prepotent responses, yet performance on these tasks typically correlates around .1 in the literature (Enkavi et al., 2019; Rey-Mermet, Gade, & Oberauer, 2018). Such low correlations stand in contrast with findings in other domains, where measures of abilities often have substantial positive correlations (Ritchie, 2015), a fact known as Spearman's positive manifold. These low correlations undoubtedly reflect low reliability, leading to the well-known problem of attenuation. Second, and following from this, latent variable analyses tend to be unstable and unreplicable (Karr et al., 2019). Although methods of disattenuation exist, the resulting correlations are often too variable to provide meaningful insights (Rouder, Haaf, & Kumar, in preparation). To address this predicament, we propose a new method of disattenuation that leverages the positive manifold by encoding it as a prior in a Bayesian hierarchical model. With this constraint, correlations can be disattenuated with reasonable precision, even in low-reliability experimental settings. We compare the performance of this approach to relatively unconstrained Bayesian hierarchical models (such as those with LKJ and Wishart priors) and to the more conventional Spearman correction for attenuation.
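For reference, the conventional Spearman correction mentioned at the end divides the observed correlation by the geometric mean of the two reliabilities. A minimal sketch, with hypothetical numbers in the range this literature reports:

```python
import math

def spearman_disattenuate(r_observed, reliability_x, reliability_y):
    """Classical correction for attenuation: r_true = r_obs / sqrt(rel_x * rel_y)."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

# An observed task correlation of .1 with low reliabilities (numbers
# hypothetical) still yields only a modest corrected estimate:
print(round(spearman_disattenuate(0.1, 0.3, 0.4), 2))  # → 0.29
```

The instability the abstract notes arises because the reliabilities in the denominator are themselves noisy estimates; the proposed alternative instead imposes the positive manifold as a prior within a hierarchical model, a construction not reproduced in this sketch.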
This is an in-person presentation on July 20, 2023 (09:40 ~ 10:00 UTC).
Dr. Dora Matzke
Dr. Suzanne Hoogeveen
Dr. Udo Boehm
Prof. Andrew Heathcote
The study of individual differences in cognitive control using conflict tasks such as the Stroop task has proven difficult. Despite robust experimental effects, the reliability of individual differences tends to be low, and correlations between tasks are weak at best. A statistical explanation for this reliability paradox is that individual differences are masked by trial-to-trial variability and are too small to be detected. Modeling recommendations for improving the assessment of individual differences include trial-level hierarchical models that account for trial noise, descriptively more accurate models that account for the skewness of response time data, and models that make cognitively more plausible assumptions, such as race or competitive models. At the same time, we may fall into the trap of overfitting. In this talk, we will compare Bayesian hierarchical models of increasing complexity with respect to their signal-to-noise ratio, that is, the ratio of "true" individual differences (the signal) to trial-by-trial variability (the noise). This ratio has been proposed as an index of the attenuation that can be expected in correlational research on cognitive control, where the noise is typically 1 to 7 times larger than the signal. By combining the most powerful modeling techniques and using progressively more complex models, can we optimize the signal-to-noise ratio and gain increasing resolution for individual differences?
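The link between the signal-to-noise ratio and expected attenuation can be sketched with the textbook variance decomposition for contrast (difference) scores; the formula below is a standard approximation used to reason about this problem, not the talk's exact model.

```python
def expected_reliability(sigma_signal, sigma_trial, n_trials):
    """Reliability of an individual contrast (difference) score.

    Standard decomposition: each condition mean carries measurement variance
    sigma_trial**2 / n_trials, and the difference of two means doubles it.
    Reliability is signal variance over total variance.
    """
    error_var = 2 * sigma_trial**2 / n_trials
    return sigma_signal**2 / (sigma_signal**2 + error_var)

# Trial noise 1, 3, and 7 times the size of the signal, 100 trials/condition:
for ratio in (1, 3, 7):
    print(round(expected_reliability(1.0, ratio, 100), 2))  # 0.98, 0.85, 0.51
```

Because the attenuation of a between-task correlation is roughly the geometric mean of the two reliabilities, even the favorable end of this range leaves observed correlations well below their true values, which motivates squeezing more signal out of the data with better models.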
This is an in-person presentation on July 20, 2023 (10:00 ~ 10:20 UTC).
Dr. Suzanne Hoogeveen
Research in social cognition often relies on experimental tasks that generate responses in terms of accuracy and response times. Consider, for instance, the Implicit Association Test (IAT), which captures attitudes and stereotypes by measuring the strength of associations between concepts (e.g., race) and evaluations (e.g., good or bad) in a categorization task. In this task, based on cultural stereotypes, we expect responses to be faster and more accurate with white-positive / black-negative pairings than with black-positive / white-negative pairings. In this talk, we will introduce and illustrate different Bayesian hierarchical modeling approaches for the IAT. First, we will attempt to characterize the typical data pattern observed in the IAT, in order to better understand the relationship between speed and accuracy. Second, based on this pattern, we will outline three analytic approaches for quantifying individual differences in implicit associations that constitute alternatives to the traditional D-score analysis of the IAT. Specifically, we apply Bayesian hierarchical multivariate regression, multinomial processing trees with response times, and lognormal race models to the IAT data. These approaches share the benefit of integrating both response-time and accuracy data, thereby making use of the full resolution of the data. Additionally, the three modeling techniques have unique features that make them more or less suitable depending on the particular research question, theoretical focus, and design characteristics at hand. We will apply each model to two different datasets and discuss the advantages, predictions, and individual estimates from each model.
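For context, the traditional D-score that the three model-based alternatives are compared against divides a participant's mean RT difference between blocks by an SD pooled across both blocks. The sketch below is a simplified version of that scoring idea, omitting the error penalties and trial trimming of the full algorithm.

```python
import statistics

def d_score(rt_compatible, rt_incompatible):
    """Simplified IAT D-score: mean RT difference between the incompatible
    and compatible blocks, divided by the SD pooled across both blocks.
    Error penalties and RT trimming are omitted from this sketch."""
    diff = statistics.mean(rt_incompatible) - statistics.mean(rt_compatible)
    pooled_sd = statistics.stdev(rt_compatible + rt_incompatible)
    return diff / pooled_sd

# Hypothetical RTs (seconds) for one participant; a slower incompatible
# block yields a positive D-score.
print(round(d_score([0.60, 0.70, 0.65], [0.80, 0.90, 0.85]), 2))  # → 1.69
```

Unlike this summary score, the hierarchical approaches in the talk model trial-level responses directly and combine accuracy with response times, rather than discarding accuracy into a penalty term.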
This is an in-person presentation on July 20, 2023 (10:20 ~ 10:40 UTC).