Poster Presentations
Dr. Rani Moran
Dr. Ami Eidels
Numerous studies have investigated the beneficial effects of redundant perceptual information. Examples span the processing of faces, where combining two halves of a face can improve identification, and stimuli as simple as dots of light (or a light and an auditory tone), where the redundancy gain marks faster RTs to two signals than to one. In many studies of redundant information, the redundancy is in the signal: additional information may be revealed by the experimenter, or naturally observed in an environment that could be poor (no redundant information) or rich (redundant information present). But what if the signal is always poor? Can the cognitive system improve the efficiency of information processing by recruiting an additional, redundant processing system that is independent of the external environment and thus could possibly be controlled by the organism? We report simulation results showing that a seemingly inefficient redundant system (modelled as a double diffusion model, 2DDM) can, under reasonable assumptions, outperform the standard one-process system assumed by most evidence accumulation models (as measured by reward rate). We discuss those assumptions and other limitations.
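As a rough illustration of the kind of simulation involved (our sketch, not the authors' 2DDM; all parameter values here are hypothetical), one can race independent diffusion processes toward symmetric bounds and score each architecture by reward rate:

```python
import numpy as np

rng = np.random.default_rng(7)

def sim_rts(drift, bound, n_trials, n_units=1, dt=0.005, sigma=1.0, ndt=0.3):
    """Simulate n_units independent diffusion processes racing on each trial;
    the first process to reach +bound or -bound determines the response."""
    rts = np.empty(n_trials)
    correct = np.empty(n_trials, dtype=bool)
    for t in range(n_trials):
        x = np.zeros(n_units)
        steps = 0
        while not (np.abs(x) >= bound).any():
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_units)
            steps += 1
        winner = np.argmax(np.abs(x))        # first (most extreme) unit decides
        rts[t] = ndt + steps * dt
        correct[t] = x[winner] >= bound      # upper bound = correct for drift > 0
    return rts, correct

def reward_rate(rts, correct, iti=1.0):
    """Expected accuracy earned per unit time, including an inter-trial interval."""
    return correct.mean() / (rts.mean() + iti)

rr_one = reward_rate(*sim_rts(drift=1.0, bound=1.0, n_trials=200))
rr_two = reward_rate(*sim_rts(drift=1.0, bound=1.0, n_trials=200, n_units=2))
print(f"single accumulator RR: {rr_one:.3f}, redundant pair RR: {rr_two:.3f}")
```

The race of two units trades accuracy (a wrong unit can win) against speed (the minimum of two hitting times); which effect dominates depends on the assumed parameters.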
Dr. Brandon Turner
Attention has been argued to act as the gatekeeper that determines which information can be further processed. It gates not only external information through perceptual systems but also internal information through working memory. However, there is a misalignment between measures and models of attention: researchers typically measure external attention with observable metrics such as eye-tracking, yet model internal attention that relates directly to behavioral responses. This paradox stands out when the two components truly mismatch, that is, when what one looks at is not what one weighs most during decisions. Here, we present a paradigm in which external attention is altered by feature salience in a category learning task and is measured with eye-tracking data. Our results show that people fixate on the salient feature first and more often, even when it is only occasionally salient, validating the manipulation of external attention. When suboptimal information is salient, picking up optimal information slows down: only after sufficient training with feedback do people learn to prioritize the optimal information during fixation. However, the accuracy of rational learners grows faster than the optimization of fixation allocation. These results indicate that external and internal attention can be disentangled in some cases. We then propose a model under the Adaptive Attention Representation Model (AARM) framework to disentangle external and internal attention, which we refer to as sampling and decision weights, respectively. This model assumes that the decision weights are updated through gradient descent to minimize error, while the sampling weights are subject to the physical characteristics of the features. Our model captures both accuracy and fixation patterns under different conditions of feature salience, computationally disentangling these two interwoven components of attention.
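A minimal sketch of the proposed separation (our illustration, not the authors' model; the task structure, salience values, and learning rate are all hypothetical): fixed sampling weights gate which feature evidence is encoded, while decision weights follow the gradient of categorization error.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two binary features: feature 0 is salient but uninformative,
# feature 1 is non-salient but perfectly predicts the category.
n_trials = 500
X = rng.integers(0, 2, size=(n_trials, 2)).astype(float)
y = X[:, 1]                        # the optimal feature determines the category

salience = np.array([0.8, 0.2])    # fixed sampling weights (hypothetical)
w = np.zeros(2)                    # decision weights, learned by gradient descent
lr = 0.5
losses = []
for x_t, y_t in zip(X, y):
    evidence = (salience * w) @ x_t            # salience gates what is encoded
    p = sigmoid(evidence)
    losses.append(-(y_t * np.log(p + 1e-9) + (1 - y_t) * np.log(1 - p + 1e-9)))
    grad = (p - y_t) * salience * x_t          # dLoss/dw for the gated logistic
    w -= lr * grad
```

Despite the salience disadvantage, the decision weight on the predictive feature comes to dominate, mirroring the dissociation between where people look and what they weigh.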
Prof. Konstantinos Tsetsos
Sebastian Olschewski
Human risk taking is less stable than economic theories anticipate. Drawing from cognitive theories of information representation and integration, we predicted and found a novel case of apparent risk preference reversals in decisions made after experiencing reward information serially: participants (n = 190, online convenience sample) undervalued high-variance relative to low-variance lotteries in independent valuations, consistent with risk aversion, but chose the high-variance lotteries more frequently in binary choices, consistent with risk seeking. A follow-up experiment (n = 868) shows that this behavioral gap can be closed, but not reversed, by changing the presentation format (sequential vs. simultaneous presentation of options) and the task demand (single vs. dual demand of valuations and choices). Further, aligning presentation format and task demand across valuations and choices increases the stability of individual risk taking across both tasks. We conclude that risk-taking behavior critically depends on compressed number representations and selective information integration.
Dr. Elizabeth Fox
Traditional bootstrap methods for uncertainty quantification in cognitive modeling require resampling of raw behavioral data, which can be computationally expensive and unsuitable for practical application in large-scale studies and real-time analysis. We present a novel bootstrap approach that operates directly on summary statistics rather than raw data. The improvement in computational efficiency can be dramatic, while statistical validity and calibration are maintained. The method exploits deterministic relationships between cognitive model parameters and the known sampling distributions of their corresponding summary statistics to enable parametric bootstrap resampling. Using the drift diffusion model as a test case, we demonstrate that summary-statistic bootstrap methods achieve excellent calibration, on par with full Bayesian approaches. Computational benchmarks show 100–1000× speed improvements over traditional MCMC methods, so that parameter estimation completes in milliseconds rather than minutes or hours. The approach scales efficiently to complex experimental designs with multiple conditions and participants, and incurs no performance penalty with increasing sample size within the design. The methods are implemented in the open-source EZAS.py package.
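The core idea can be sketched with the standard EZ-diffusion equations (Wagenmakers et al., 2007) and normal-theory sampling distributions for the summary statistics. This is our illustrative reconstruction, not the EZAS.py implementation, and the observed summaries below are hypothetical:

```python
import numpy as np

def ez_ddm(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimators (Wagenmakers et al., 2007): map
    accuracy pc, RT variance vrt, and mean RT mrt to drift v, boundary a,
    and nondecision time Ter (assumes 0.5 < pc < 1)."""
    L = np.log(pc / (1 - pc))
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25
    a = s**2 * L / v
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))
    return v, a, mrt - mdt

def boot_ez(pc, vrt, mrt, n, B=2000, seed=1):
    """Parametric bootstrap on the summary statistics themselves:
    accuracy ~ Binomial, mean RT ~ Normal, RT variance ~ scaled chi-square
    (normal-theory approximations), then re-apply the closed-form estimators."""
    rng = np.random.default_rng(seed)
    k = rng.binomial(n, pc, size=B)
    pc_b = np.clip(k / n, 0.5 + 1 / (2 * n), 1 - 1 / (2 * n))  # edge correction
    mrt_b = rng.normal(mrt, np.sqrt(vrt / n), size=B)
    vrt_b = vrt * rng.chisquare(n - 1, size=B) / (n - 1)
    v_b, a_b, ter_b = ez_ddm(pc_b, vrt_b, mrt_b)
    return {p: np.percentile(est, [2.5, 97.5])
            for p, est in zip(("v", "a", "ter"), (v_b, a_b, ter_b))}

v, a, ter = ez_ddm(0.8, 0.04, 0.5)       # hypothetical observed summaries
ci = boot_ez(0.8, 0.04, 0.5, n=200)      # 95% bootstrap intervals, no raw data
```

Because only three scalars are resampled and the estimators are closed-form, the entire bootstrap is a handful of vectorized operations rather than repeated model fits.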
Mrs. Nicole King
Dr. Robby Ralston
Dr. Brandon Turner
Model evaluation in cognitive science is often framed as testing one well-defined computational model against another well-defined computational model. However, this single-model approach risks overstating commitment to specific combinations of mechanisms while underplaying the uncertainty, flexibility, and exploratory nature of modeling. Here we propose Switchboard Analyses: instead of fitting one model, researchers simultaneously fit multiple variants that systematically vary assumptions, mechanisms, and parameterizations. This approach foregrounds robustness, reveals crucial interactions between model mechanisms and components, and clarifies which mechanisms are indispensable versus incidental. Beyond technical utility, switchboard analyses exemplify what modeling fundamentally is: the construction of idealized, formal frameworks for exploring psychological processes without premature commitment to any singular mechanistic story.
Prof. Joe Houpt
Pilots rely on visual information, from either the runway or indicators on the primary flight display, to land an aircraft successfully on a runway; this information can be compromised by unexpected events (e.g., laser pointers aimed at the plane from the ground). These disruptions can have devastating results if they occur during a landing sequence, particularly towards its end. The current practice is to abort the landing sequence and attempt another; however, subsequent attempts can be costly and are not guaranteed to succeed. The long-term goal of the current research is to incorporate auditory guidance into a multimodal navigation system that complements visual displays to compensate for visual disruptions during a landing sequence. We previously pilot-tested potential auditory signals for the multimodal navigation system and found mean error rates of up to 6% when participants were prompted to report the direction of intended navigation indicated by the auditory signals. Because mean accuracy is a limited measure, our proposed study will use General Recognition Theory (GRT), a more advanced perceptual model, to further assess perception of the auditory signals when paired with a visual display in two separate experiments (Experiment 1: vertical guidance; Experiment 2: horizontal guidance). For vertical guidance, auditory signals will present either two sequential tones or one tone with pitch modulation (between-subjects) to indicate upward or downward adjustment. For horizontal guidance, left and right headphone presentation will indicate left and right direction adjustments. Both experiments will use the same visual display currently used by pilots for vertical and horizontal guidance. Each experiment will consist of four auditory/visual stimuli (Exp. 1: up/up, up/down, down/up, down/down; Exp. 2: left/left, left/right, right/left, right/right) with four response options for the perceived auditory/visual combination.
We anticipate that congruent auditory/visual signals (Exp. 1: up/up, down/down; Exp. 2: left/left, right/right) will violate perceptual independence, while incongruent signals (Exp. 1: up/down, down/up; Exp. 2: left/right, right/left) will not. A violation of perceptual independence for congruent but not incongruent signals would indicate a unified perception of adjustment direction. The strength of perceptual dependence will be used to evaluate the effectiveness of each of the two tone presentations.
Stefan Radev
Ms. Konstantina Sokratous
Peter Kvam
How organisms respond to stimuli is a defining characteristic of humans and other animals, encompassing both shared regularities and individual differences. This study introduces moderational learning, a general framework for data-driven investigation of how stimuli are mapped onto responses in cognitive and behavioral research. Built on advanced deep learning architectures such as variational autoencoders (VAEs), the framework jointly learns the shared generative function of responses and latent variables that encode individual differences in a single task. Further extensions allow it to integrate information across multiple tasks or modalities to derive individual-level trait representations. Integrating the moderational learning framework with machine learning–based behavioral modeling provides a principled way to improve predictive accuracy and to reveal latent factors and generative processes underlying behavioral variability. Our simulation studies demonstrated that moderational learning accurately predicts behavior, recovers true latent factors, and identifies population heterogeneity even when participants adopt distinct strategies. Applied to value-based decision tasks, the framework outperformed both subject- and group-level neural networks in predicting human decisions and revealed low-dimensional latent factors underlying individual differences. Two findings further highlight its potential for data-driven theory building. First, in several decision tasks, a single latent factor accounted for most individual variability—contradicting the common tendency to add more parameters to cognitive models. At a higher, cross-task level of factor analysis, we identified three common latent factors: risk discounting, delay discounting, and bidding tendency. 
Second, visualization of the learned stimulus–response mappings uncovered unexpected behavioral patterns, indicating that responses in pricing tasks do not always decrease monotonically with aversive attributes such as delay. Together, these results establish moderational learning as a powerful framework for cognitive and behavioral science, paving the way for a systematic, data-driven paradigm that integrates behavior prediction, model discovery, and representation of individual differences.
Peter Kvam
Modern neural networks often produce overconfident predictions, even when those predictions are incorrect. Even large language models will confidently assert hallucinations. This issue of miscalibration is particularly problematic in high-stakes domains where effective human-AI collaboration depends on accurate confidence estimates. While post-hoc calibration techniques such as temperature scaling have shown promise (Guo et al., 2017; Culakova et al., 2020), they typically operate on output probabilities and do not leverage the rich internal representations learned by the model. We propose a complementary approach that estimates model confidence based on the density of latent representations of a neural network. We hypothesise that representations located in high-density regions of latent space correspond to familiar, well-learned inputs, whereas those in sparse regions reflect novel or ambiguous cases that are susceptible to error. Our method moves beyond output-level adjustments to offer a novel approach for understanding model uncertainty by grounding it in the structure of learned features. We evaluate this approach using a convolutional neural network trained to classify melanoma from dermoscopic images. We apply k-nearest neighbor analysis to the penultimate activations to derive similarity-based confidence scores and assess their calibration and discriminative utility. Preliminary results suggest that these scores offer a promising foundation for metacognitive augmentation in human-AI systems where communicating uncertainty is critical.
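A minimal version of such a density-based score (our sketch; the feature dimensionality, the distance-to-confidence mapping, and the random Gaussian features standing in for real network activations are all illustrative choices):

```python
import numpy as np

def knn_confidence(train_feats, query_feats, k=5):
    """Similarity-based confidence: compute the mean Euclidean distance from
    each query's latent representation to its k nearest training
    representations, then map it to (0, 1] so dense regions score high."""
    d = np.linalg.norm(query_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    knn_dists = np.sort(d, axis=1)[:, :k]          # k smallest distances per query
    return 1.0 / (1.0 + knn_dists.mean(axis=1))    # monotone decreasing in distance

rng = np.random.default_rng(3)
train = rng.normal(0, 1, size=(500, 16))       # stand-in for penultimate activations
dense_query = rng.normal(0, 1, size=(1, 16))   # lies inside the training cloud
sparse_query = np.full((1, 16), 8.0)           # far from anything seen in training
conf = knn_confidence(train, np.vstack([dense_query, sparse_query]))
```

A query resembling well-learned inputs lands in a high-density region and receives a higher score than a novel, out-of-distribution query, which is the behavior the calibration analysis then evaluates.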
Vladimir Sloutsky
Dr. Brandon Turner
In a complex environment, cognitive agents must determine what dimensions of the environment are relevant to their goals. In computational models, the task relevance of a dimension is often represented as a single weight, determining how influential that dimension is during decision making. However, task relevance is not static, and a dimension may only be informative in light of other cues. In this case, singular weights fail to capture the agent’s dynamic representation of task relevance, which changes based on search and encoding. To examine how these dynamic representations may form and function, we augmented feedforward models of category learning (ALCOVE and AARM) with differentiable, attention-gated encoding, recurrent connections, and time-extended regularization. Ultimately, we find that adding trainable, recurrent weights to category learning models produces meaningful search patterns that are consistent with human performance in a variety of tasks. Our findings suggest that recurrence might offer an alternative account of how humans search through stimulus information based on an acquired representation of their experiences.
Prof. Joe Houpt
One of the many side effects of cancer and its treatment is a form of impairment called cancer-related cognitive decline (CRCD). Difficulties in consistently measuring CRCD have hindered progress in understanding the underlying mechanisms of this impairment, with ripple effects on the development of effective treatments. In this study, we measured a multitude of health outcomes across a 6-month study of Texas women who have been diagnosed with cancer. By utilizing a holistic approach that integrates therapeutic yoga with motivational support and dietary guidance, we aimed to improve quality of life for these cancer patients and survivors. To assess changes in CRCD throughout the study, we assessed attention, executive control, and working memory at monthly intervals with neuropsychological tasks. The data were analyzed with three response time models, as a sensitive and theoretically meaningful alternative to mean-based analyses. These analyses of preliminary data open a discussion of how response time modeling can be implemented in the health sciences.
Dr. Vladimir Sloutsky
Dr. Brandon Turner
Our everyday decisions are driven by a simultaneous search of our current environment and of our related memories. A simple example is searching a long restaurant menu guided by recall of previous dining experiences. Prior restaurant outings may help simplify the menu, as when you recall that antipasto is an appetizer after a prior experience in which you believed it to be a type of pasta. Furthermore, we do not recall every dining experience while looking through the menu; an experience at an American restaurant may not share relevant, overlapping features. The evidence accumulation literature embeds these constraints on search at both the feature level and the exemplar level. The Exemplar-Based Random Walk (EBRW) model demonstrates the joint impact of selective attention and relevant exemplar recall on categorization decisions. However, this model treats attention as static, rather than letting attention change dynamically as the subject learns. The goal of this work is to implement aspects of selective exemplar recall in current models of category learning, such as the Adaptive Attention Representation Model (AARM). We will analyze not only the subset of exemplars used to make decisions, but also how partially encoded traces influence which exemplar is recalled. Incorporating EBRW's exemplar sampling into AARM offers insight into decision-making and order effects.
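The EBRW retrieval mechanism referenced above (Nosofsky & Palmeri, 1997) can be sketched as a similarity-driven random walk; the exemplar coordinates, attention weights, and threshold below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(11)

def similarity(probe, exemplar, attention, c=2.0):
    """Exponential similarity over an attention-weighted city-block distance."""
    return np.exp(-c * np.sum(attention * np.abs(probe - exemplar)))

def ebrw_trial(probe, exemplars, labels, attention, threshold=3):
    """One EBRW decision: repeatedly sample a stored exemplar in proportion to
    its similarity to the probe; step +1 for a category-A retrieval, -1 for B,
    until the random walk reaches +/- threshold. Steps index decision time."""
    sims = np.array([similarity(probe, e, attention) for e in exemplars])
    probs = sims / sims.sum()
    pos, steps = 0, 0
    while abs(pos) < threshold:
        i = rng.choice(len(exemplars), p=probs)
        pos += 1 if labels[i] == "A" else -1
        steps += 1
    return ("A" if pos > 0 else "B"), steps

exemplars = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
labels = ["A", "A", "B", "B"]
probe = np.array([0.15, 0.15])        # close to the two "A" exemplars
attention = np.array([0.5, 0.5])      # static attention weights, as in EBRW
resp, steps = ebrw_trial(probe, exemplars, labels, attention)
```

Because nearby exemplars dominate the retrieval probabilities, the walk drifts toward the category of the most similar stored traces; making `attention` trainable is the kind of extension described above.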
Sharon Chen
Gabriella Larson
Despite the ubiquitous nature of context and its importance to memory retrieval, context is not well understood. In part this is because the single word "context" has been used to describe a multifaceted construct. In this chapter, we categorize contexts into (1) context about the stimulus itself (source and semantic context); (2) contexts external to the stimulus that exist in the world (environmental context) or in the participant (internal state context); and (3) contexts connecting the stimulus and the world (temporal and spatial context). For each of these elements of context, we review major empirical findings as well as theoretical frameworks relevant to the findings. We also consider event segmentation, or what elements cause the continuous flow of information in the world to be discretized into episodes. We propose that theorists move towards specifying different representations and processes for the various context types, and we urge the development of nuanced variants of existing models that respect the multifaceted nature of context.
Eunice Shin
Joachim Vandekerckhove
Robustness is an essential property for reliable scientific inference. We seek to develop robust methods that do not break when data deviate modestly from idealized assumptions. For example, estimation methods that depend strongly on means and variances are heavily impacted by the presence of outliers. This is the case for the EZ-diffusion model (EZ-DDM), a system of closed-form estimators for the drift rate, boundary separation, and nondecision time parameters of the three-parameter drift diffusion model, computed from three summary statistics: the accuracy rate and the mean and variance of response times. Building on our prior hierarchical Bayesian EZ-DDM, we present a robust implementation of the EZ-DDM in which we replace the mean with the median and the variance with an IQR-based spread estimate, aiming to retain computational efficiency while reducing sensitivity to contaminant trials. Using simulation studies with varying sample sizes, numbers of trials per condition, effect sizes, and contamination levels, we compare the diagnostic accuracy of the robust EZ-DDM against the standard implementation on data generated following a within-subject t-test design. Results show that while both models perform similarly with clean data, the robust variant remains stable and accurate under contamination, preserving efficiency without sacrificing robustness. We recommend the robust EZ-DDM as a practical, scalable, and resilient alternative for real-world applications.
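The contrast between classical and robust summaries can be sketched as follows. This is our illustration: the RT distribution, the contamination process, and the 1.349 normal-consistency scaling of the IQR are assumptions for the sketch, not details of the abstract's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Clean RTs (a hypothetical shifted-lognormal shape) plus 5% contaminant trials.
clean = 0.3 + rng.lognormal(mean=-1.5, sigma=0.4, size=950)
contaminants = rng.uniform(3.0, 6.0, size=50)   # e.g., lapses of attention
rts = np.concatenate([clean, contaminants])

def classical_summaries(x):
    """Mean and variance, as used by the standard EZ-DDM."""
    return x.mean(), x.var(ddof=1)

def robust_summaries(x):
    """Median for location; IQR-based spread scaled by 1.349 so it matches the
    standard deviation under normality (the scaling constant is our choice)."""
    q25, q75 = np.percentile(x, [25, 75])
    spread = (q75 - q25) / 1.349
    return np.median(x), spread**2

m_clean, v_clean = classical_summaries(clean)
m_all, v_all = classical_summaries(rts)
md_clean, iv_clean = robust_summaries(clean)
md_all, iv_all = robust_summaries(rts)
```

A small fraction of slow contaminant trials drags the mean and inflates the variance substantially, while the median and IQR-based spread barely move, which is why the robust summaries keep the downstream closed-form estimators stable.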
Dr. Frederick Callaway
Dr. Uma Karmarkar
Prof. Ian Krajbich
Research Question: Many choices involve selecting multiple items (e.g., grocery shopping). The study of how individuals select items from larger collections has attracted interdisciplinary attention from marketing (Arora et al., 2008), psychology (Regenwetter et al., 1998), economics (Fishburn, 1974) and computer science (Mathews, 1896). Yet in the lab, despite the simplicity of doing so (i.e., simply asking for more selections), multi-response paradigms remain an underutilized, low-cost extension to traditional decision science elicitations. As a result, the cognitive processes underlying these "multi-response" decisions remain understudied. Motivated by the idea that decision-making is a continuously evolving and noisy process, as suggested by theories of sequential sampling (Busemeyer et al., 2019), we propose that individuals adapt decision strategies to their goals by focusing on task-relevant value comparisons and managing cognitive load. Here we introduce a novel elicitation method for “multi-responses” to examine how selection requirements, preference strength, and set size influence the choice process. Methods: In Study 1, 75 subjects rated 60 consumer items (1–100 scale) before selecting 1, 2, or 3 items from sets of 4. In Study 2, 100 subjects completed a similar task but selected from larger sets (4, 8, or 12 items). Selection requirements were blocked, with block order randomized. We collected choice data, response times, and process data (eye movements in Study 1; mouse trajectories in Study 2). Study 2 was preregistered. Results: Subjects performed the task accurately, adapting their strategies based on selection requirements. We define accuracy as selecting the items with the highest value, based on the separately elicited liking ratings. In Study 1, for single-item selections, accuracy was positively influenced by the value of the best option (beta = 1.04, p < .001) and negatively influenced by the value of the second-best option. 
For two-item selections, the values of the best and second-best options were most predictive of accuracy (beta = 0.79, p < .001). For three-item selections, the value of the third-best option had a positive effect on accuracy (beta = 0.85, p < .001). This pattern replicated in Study 2, with stronger effects in larger sets. Even though subjects could select the items in any order, they favored selecting items in rank order, consistent with a sequential sampling account. The best-to-second-best (1–2) sequence occurred in 60.3% of correct two-item trials, and the 1–2–3 sequence was significantly the most frequent (31.5%) in correct three-item trials (all contrasts, ps < .001). In Study 2, subjects maintained this tendency in 4-item sets, but showed a reduced tendency with larger set sizes. Process-tracing results corroborated these findings. Subjects responded more slowly when selecting more items and faster when the relevant item value was higher. In Study 1, where initial selections took significantly longer than subsequent ones, eye-tracking data revealed similar patterns: subjects made fewer fixations when they had spent more time on the initial selection. Furthermore, in Study 2, subjects' initial mouse trajectories were more curved when selecting a single item compared to selecting two or three (p < .001), suggesting greater conflict. Conclusions: Our work provides a comprehensive characterization of multi-response choice, demonstrating that individuals adapt their choice processes. The findings offer empirical support for extending existing sequential sampling models beyond single-item selection by explaining patterns in choice and process data. This lays the groundwork for developing computational models that can predict multi-response decisions.