Dr. Chih-Chung Ting
The impact of value difference on response times (RTs) is well established, but recent research has shown that RTs are lower when the overall value/intensity across all options is higher: choosing between two very attractive / high-intensity options leads to faster decisions than choosing between two less attractive / low-intensity options. Whereas the overall value effect on RT appears to be robust and to generalize across decision types, little is known about its effect on choice accuracy and eye movements. The present study investigates the computational mechanisms underlying the impact of the overall value of the available options on decision-making. We used the attentional drift-diffusion model (aDDM) to simulate decision-making under different levels of overall value and found that higher overall value was predicted to reduce both choice accuracy and RT, regardless of choice domain. To test these predictions empirically, we conducted an eye-tracking experiment with a 3 (OV: sum of option values/stimulus intensities) by 3 (VD: difference between option values/stimulus intensities) by 2 (choice domain: value-based vs. perceptual decision) within-subject design and n = 60 participants. Remarkably, the results were only partially consistent with the model predictions and suggested a high degree of domain specificity in overall value effects. In particular, accuracy was significantly lower at the medium OV level than at the high and low levels in value-based decisions only, whereas accuracy in perceptual decisions was not significantly affected by the overall value manipulation. With respect to eye-tracking, OV affected fixation patterns similarly across choice domains: middle and final fixation durations decreased significantly from low to high OV.
However, we observed that OV modulated the relationship between the final fixation and the choice in value-based decisions only: the tendency to choose the last-fixated option (i.e., snack) increased as OV decreased. Together, our results suggest that overall value is involved in the choice process and that different cognitive mechanisms are needed to capture the domain-specific impacts of overall value on choice accuracy, final-fixation bias, and their interactions.
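The aDDM simulations described above rest on a simple mechanism: a relative decision value drifts toward the currently fixated option while the unattended option's value is discounted multiplicatively. A minimal single-trial sketch is given below; the parameter values (`d`, `theta`, `sigma`) and the fixed-length alternating fixations are illustrative simplifications, not the fitted estimates used in the study.

```python
import random

def simulate_addm(v_left, v_right, d=0.002, theta=0.3, sigma=0.02,
                  barrier=1.0, fix_dur=300, max_t=20000, rng=random):
    """Simulate one aDDM trial with alternating left/right fixations.

    The relative decision value (RDV) drifts toward the attended option;
    the unattended option's value is discounted by theta (multiplicative
    attentional weighting). Returns (choice, rt) with rt in time steps.
    """
    rdv = 0.0
    attend_left = rng.random() < 0.5  # random first fixation
    for t in range(1, max_t + 1):
        if attend_left:
            drift = d * (v_left - theta * v_right)
        else:
            drift = d * (theta * v_left - v_right)
        rdv += drift + rng.gauss(0.0, sigma)
        if rdv >= barrier:
            return "left", t
        if rdv <= -barrier:
            return "right", t
        if t % fix_dur == 0:  # switch gaze at fixed intervals (simplification)
            attend_left = not attend_left
    return ("left" if rdv > 0 else "right"), max_t
```

Sweeping `(v_left, v_right)` pairs with constant VD but increasing OV in such simulations is what yields the predicted joint decrease in RT and accuracy.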
This is an in-person presentation on July 19, 2023 (09:00 ~ 09:20 UTC).
Eye-tracking allows researchers to infer cognitive processes from eye movements that are classified into distinct events. Parsing these events is typically done by algorithms. Here, we aim to develop an unsupervised, generative model that can be fitted to eye-movement data using maximum likelihood estimation. Besides serving as a classification method, this approach allows hypothesis testing about the fitted models. We developed gazeHMM, an algorithm that uses a hidden Markov model as a generative model, has few critical parameters to be set by users, and does not require human-coded data as input. The algorithm classifies gaze data into fixations, saccades, and, optionally, postsaccadic oscillations and smooth pursuits. We evaluated gazeHMM's performance in a simulation study, showing that it successfully recovered hidden Markov model parameters and hidden states. Parameters were recovered less well when we included a smooth-pursuit state and/or added even small amounts of noise to the simulated data. We applied generative models with different numbers of events to benchmark data. Comparing them indicated that hidden Markov models with more events than expected had most likely generated the data. We also applied the full algorithm to benchmark data and assessed its similarity to human coding and to other algorithms. For static stimuli, gazeHMM showed high similarity to human coding and outperformed the other algorithms in this regard. For dynamic stimuli, gazeHMM tended to switch rapidly between fixations and smooth pursuits but still displayed higher similarity than most other algorithms. Concluding that gazeHMM can be used in practice, we recommend parsing smooth pursuits only for exploratory purposes. Future hidden Markov model algorithms could use covariates to better capture eye-movement processes and could explicitly model event durations to classify smooth pursuits more accurately.
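As a toy illustration of the HMM-based event parsing described above, the sketch below decodes a gaze-velocity trace into fixation and saccade labels with the Viterbi algorithm. The Gaussian emission and transition parameters here are hand-set placeholders for illustration; gazeHMM itself estimates such parameters from the data by maximum likelihood.

```python
import math

def viterbi_gaze(velocities, params):
    """Classify gaze samples into fixation/saccade events with a two-state
    hidden Markov model over sample velocities (deg/s), decoded with the
    Viterbi algorithm. `params` holds initial state probabilities, a
    transition matrix, and Gaussian emission parameters (mu, sigma) per
    state; in a full pipeline these would be fitted, not hand-set."""
    states = list(params["init"])

    def log_emit(s, v):
        mu, sigma = params["emit"][s]
        return -0.5 * ((v - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

    logp = {s: math.log(params["init"][s]) + log_emit(s, velocities[0]) for s in states}
    backptrs = []
    for v in velocities[1:]:
        new_logp, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: logp[p] + math.log(params["trans"][p][s]))
            new_logp[s] = logp[prev] + math.log(params["trans"][prev][s]) + log_emit(s, v)
            ptr[s] = prev
        logp = new_logp
        backptrs.append(ptr)
    # backtrace the most probable state sequence
    state = max(states, key=logp.get)
    path = [state]
    for ptr in reversed(backptrs):
        state = ptr[state]
        path.append(state)
    path.reverse()
    return path
```

With slow samples around 5 deg/s and a brief burst above 300 deg/s, the decoder labels the burst as a saccade and the surrounding samples as fixation, despite the sticky transition probabilities.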
This is an in-person presentation on July 19, 2023 (09:20 ~ 09:40 UTC).
Mr. Amir Hosein Hadian Rasanan
Accounting for the type of information that people attend to when making decisions, and for how long their attention lasts, can improve the predictions of decision-making models. Previous research has demonstrated that the options receiving the most attention during the decision process are typically the ones that are chosen. Additionally, more valuable options tend to receive more attention than inferior options. However, the interaction between these two effects is not yet fully understood. There are two possible ways in which attention and subjective value could interact: attending to an option could amplify its subjective value in a multiplicative way, or attention could increase its choice probability in an additive way. Although some studies suggest a multiplicative interaction between attention and value (Smith & Krajbich, 2019), others provide evidence for an additive interaction (Cavanagh et al., 2014). The attentional drift-diffusion model (aDDM) successfully explained the effect of attention by assuming a multiplicative interaction between attention and value (Krajbich et al., 2010). The model posits that when individuals pay attention to an option, the accumulation process for that option is amplified. More recently, the gaze-weighted linear accumulator model (GLAM), which builds on the aDDM, has been proposed; it assumes independent accumulators for each option and uses the gaze percentage for each option instead of fixation durations (Thomas et al., 2019). Models that assume a multiplicative interaction between attention and value have the advantage of predicting magnitude effects in decision-making, whereby options with higher subjective values are chosen faster than those with lower values. The present study introduces the Gaze-weighted Advantage Race Diffusion (GARD) model, which simultaneously assumes both additive and multiplicative interactions between attention and value.
We rigorously tested this new model on three existing datasets on human food choice by Krajbich et al. (2010), Smith and Krajbich (2018), and Chen and Krajbich (2016). Our results show that the GARD model outperforms existing models that assume only a multiplicative interaction between attention and value, indicating that it provides a more accurate description of people's decision-making processes.
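To make the additive-plus-multiplicative idea concrete, the sketch below simulates a gaze-weighted race between two accumulators: gaze discounts the unattended option's value multiplicatively (`theta`) and boosts the attended accumulator additively (`eta`). This is a hypothetical parameterization for illustration only, not the published GARD specification; the per-step gaze lottery (`gaze_a`) is a crude stand-in for an empirical fixation sequence.

```python
import random

def simulate_gard_like(v_a, v_b, gaze_a, d=0.02, theta=0.5, eta=0.3,
                       sigma=0.05, barrier=3.0, max_t=5000, rng=random):
    """Race two accumulators to a common bound. Each accumulates its own
    gaze-weighted value advantage (multiplicative attention, theta) plus
    an additive boost while attended (eta). Returns (choice, rt)."""
    x_a = x_b = 0.0
    for t in range(1, max_t + 1):
        look_a = rng.random() < gaze_a           # which option is fixated now
        w_a, w_b = (1.0, theta) if look_a else (theta, 1.0)
        boost_a = eta * d if look_a else 0.0     # additive attention effect
        boost_b = 0.0 if look_a else eta * d
        # each accumulator races on its own gaze-weighted value advantage
        x_a += d * (w_a * v_a - w_b * v_b) + boost_a + rng.gauss(0.0, sigma)
        x_b += d * (w_b * v_b - w_a * v_a) + boost_b + rng.gauss(0.0, sigma)
        x_a, x_b = max(x_a, 0.0), max(x_b, 0.0)  # evidence cannot go negative
        if x_a >= barrier or x_b >= barrier:
            return ("A" if x_a >= x_b else "B"), t
    return ("A" if x_a >= x_b else "B"), max_t
```

Because the drift scales with the summed (not just the relative) values, a model of this form also produces the magnitude effect mentioned above: higher-valued pairs reach the bound sooner.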
This is an in-person presentation on July 19, 2023 (09:40 ~ 10:00 UTC).
Dr. Emily Weichart
Dr. Layla Unger
The choices we make in our everyday lives require us to (1) selectively attend to the contents of a stimulus and (2) connect those contents to information in memory. During learning, these two mechanisms interact with one another in a dynamic, cyclical fashion over time. Here, we explore how these interactions can produce "learning traps" by comparing profiles of selective attention (through eye-tracking data) and choice between two groups known to have different memory capacities: adults and 4-5-year-old children. Although the data confirm that children are less susceptible to learning traps, we also show through computational modeling that the mechanisms explaining this difference are poorer working memory and a greater interest in learning about the dimensions of information themselves. It seems as though, by elongating the maturation of selective attention and working memory, nature engineered a way for children to explore the world, helping them avoid learning traps.
This is an in-person presentation on July 19, 2023 (10:00 ~ 10:20 UTC).
Humans are confronted with a complex world in which many choice situations involve a large number of options described on multiple attributes. To meet this challenge, people must find a suitable trade-off between making informed decisions on the one hand and limiting invested resources such as time and effort on the other. We argue that humans achieve this balance by searching systematically for relevant information in an efficient and goal-directed, but not strictly optimal, manner. More specifically, we propose a Bayesian cognitive model of information search in multi-attribute decisions. According to this model, the values of different attributes and options are represented as belief distributions that are updated by sampling information through the allocation of selective attention. A decision is made when the belief distribution of the currently best option is sufficiently higher than the distributions of all other options. The core element of our model is a myopic transition rule, according to which people plan one step ahead and allocate attention to the option's attribute that is most likely to reveal decisive information in favor of the associated option. As an emergent property of this transition rule, our model predicts that information search is driven by three factors: the weights of attributes, the uncertainty about attribute values, and the accumulated value of options. Simulations of the model demonstrate that our theory accounts for a rich body of empirical findings on attention-choice interactions in both binary and multi-alternative decisions. For example, the model predicts (i) the positive correlation between attention to an option and its choice probability, (ii) the attraction search effect, according to which people are more likely to keep attending to initially promising choice candidates, and (iii) the negative correlation of the Payne Index (which quantifies alternative- vs. attribute-wise search) with the dispersion of attribute weights. Taken together, our computational theory offers a unifying description of information search and choice dynamics in multi-attribute decisions and suggests that humans search in an adaptive and efficient, but not strictly optimal, way.
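The belief-updating-with-stopping-rule scheme described above can be sketched for the simplest case of two options and two attributes. In this toy version, each (option, attribute) cell has a Normal belief updated by noisy samples, the next fixation goes to the cell with the largest weighted uncertainty (a crude proxy for the myopic "most likely to be decisive" rule), and sampling stops when the options' estimated overall values differ by more than a fixed threshold. All names and parameters are illustrative, not the authors' specification.

```python
import random

def myopic_search(true_vals, weights, obs_sd=1.0, prior_sd=2.0,
                  threshold=1.5, max_samples=200, rng=random):
    """Toy attention-driven evidence sampling for a 2-option x 2-attribute
    decision. Returns (choice_index, list_of_fixated_(option, attribute))."""
    n_opt, n_att = 2, 2
    mean = [[0.0] * n_att for _ in range(n_opt)]          # belief means
    var = [[prior_sd ** 2] * n_att for _ in range(n_opt)]  # belief variances
    fixations = []
    for _ in range(max_samples):
        ov = [sum(weights[a] * mean[o][a] for a in range(n_att))
              for o in range(n_opt)]
        if abs(ov[0] - ov[1]) > threshold:   # one option sufficiently ahead
            break
        # myopic rule (proxy): attend where weighted uncertainty is largest
        o, a = max(((o, a) for o in range(n_opt) for a in range(n_att)),
                   key=lambda oa: weights[oa[1]] * var[oa[0]][oa[1]])
        fixations.append((o, a))
        x = rng.gauss(true_vals[o][a], obs_sd)  # noisy sample of that cell
        # conjugate Normal update of the attended cell's belief
        k = var[o][a] / (var[o][a] + obs_sd ** 2)
        mean[o][a] += k * (x - mean[o][a])
        var[o][a] *= (1 - k)
    ov = [sum(weights[a] * mean[o][a] for a in range(n_att)) for o in range(n_opt)]
    return (0 if ov[0] >= ov[1] else 1), fixations
```

Even this stripped-down version reproduces the attribute-weight factor above: high-weight attributes attract the first fixations because their uncertainty matters most for the overall-value comparison.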
This is an in-person presentation on July 19, 2023 (10:20 ~ 10:40 UTC).