MathPsych Posters
Asli Kilic
In free recall, consecutively recalled words tend to come from nearby positions in the study list, a finding known as the contiguity effect, which shows an asymmetry favoring forward recalls. This robust effect, which has been observed in many experimental settings and under multiple manipulations, can be explained by causal and noncausal models of episodic memory. Causal models such as SAM or TCM suggest that each recalled word is used as a probe for the next recall, whereas noncausal models, such as the model of Davelaar et al. (2005), explain the contiguity effect through a correlation between the mental states of the study and test phases. In an attempt to disrupt this suggested correlation between mental states, Kılıç et al. (2013) devised the probed recall task, in which participants study several lists and are then asked to recall a word from the same list as a provided probe. Their results suggested that the contiguity effect remains intact, although symmetric, even when the correlation is disrupted, supporting causal accounts. The present study used event segmentation to increase performance in the probed recall task, in order to determine whether the symmetry observed earlier was caused by low performance. Different distractor tasks were presented between lists to increase the discriminability of the lists from one another. The results, depicted in a conditional response probability (CRP) curve, indicated a symmetric but intact contiguity effect even when performance was increased, supporting causal explanations of the effect.
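For readers unfamiliar with the analysis, a lag-CRP of the kind reported here can be computed from recall sequences roughly as follows (a minimal sketch in Python; the function name and the simplifying assumption of no repetitions or intrusions are ours, not the authors'):

```python
import numpy as np

def lag_crp(recall_orders, list_length):
    """Conditional response probability as a function of lag.

    recall_orders: list of recall sequences, each a list of studied
    serial positions (1-based) in the order they were recalled.
    Assumes no repetitions or extra-list intrusions.
    """
    max_lag = list_length - 1
    actual = np.zeros(2 * max_lag + 1)    # observed transitions per lag
    possible = np.zeros(2 * max_lag + 1)  # available transitions per lag

    for seq in recall_orders:
        recalled = set()
        for prev, nxt in zip(seq[:-1], seq[1:]):
            recalled.add(prev)
            # every not-yet-recalled position was an available transition
            for pos in range(1, list_length + 1):
                if pos not in recalled:
                    possible[pos - prev + max_lag] += 1
            actual[nxt - prev + max_lag] += 1

    with np.errstate(invalid="ignore", divide="ignore"):
        crp = actual / possible
    return crp  # index max_lag corresponds to lag 0 (undefined)

# Example: two recall sequences from a 6-item list
print(lag_crp([[2, 3, 1], [5, 6, 4, 3]], list_length=6))
```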
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Katharina Mminele
Max Brede
Veronika Lerche
According to the diffusion model (Ratcliff, 1978), binary decisions stem from a process of continuous evidence accumulation with normally distributed noise. The Lévy-flight model (Voss et al., 2019) extends this framework by introducing the parameter α, which modifies the noise distribution. Specifically, lower α-values result in heavier tails of the noise distribution, leading to more frequent sudden large changes (i.e., jumps) in evidence accumulation. While α can enable a superior fit to the data, its psychological meaning remains empirically underexplored. Therefore, we examined whether α reflects guessing, predicting a decrease in α-values as individuals are prompted to guess. In our experiment, participants performed a brightness discrimination task under two conditions, each emphasizing a different approach to decision-making: In the guessing condition, we instructed participants to take an educated guess when in doubt. In the control condition, we instructed participants to only select an answer once they felt confident about their choice. Given that the modified noise distribution makes the likelihood of the Lévy-flight model intractable, we employed the BayesFlow framework, leveraging its simulation-based deep learning capabilities for our analyses. Contrary to our expectations, the difference in α between the two conditions did not reach statistical significance, possibly due to the high difficulty level of the employed task. Accordingly, we advocate for and delineate further inquiries of possible interpretations of α, particularly regarding guessing.
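The modification at issue can be illustrated with a small simulation (a sketch, not the authors' implementation; parameter values are arbitrary, and the accumulator uses symmetric alpha-stable noise so that α = 2 recovers the standard diffusion model):

```python
import numpy as np
from scipy.stats import levy_stable

def levy_flight_trial(v, a, alpha, dt=0.01, max_t=5.0, rng=None):
    """Simulate one trial of a Levy-flight accumulator.

    v: drift rate, a: boundary separation (process starts at a/2),
    alpha: stability parameter; lower alpha means heavier-tailed noise
    and more frequent jumps. Returns (response, rt): 1 = upper boundary,
    0 = lower boundary.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, t = a / 2.0, 0.0
    while t < max_t:
        noise = levy_stable.rvs(alpha, 0.0, random_state=rng)
        # alpha-stable increments over dt scale with dt**(1/alpha)
        x += v * dt + noise * dt ** (1.0 / alpha)
        t += dt
        if x >= a:
            return 1, t
        if x <= 0.0:
            return 0, t
    return None, max_t  # no boundary reached within max_t

resp, rt = levy_flight_trial(v=1.5, a=1.0, alpha=1.5)
print(resp, rt)
```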
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
The Intelligence Advanced Research Projects Activity (IARPA) — the research and development arm of the U.S. Office of the Director of National Intelligence — launched an innovative program in January 2024 that, for the first time, takes aim at the psychology of the cyber attacker. The goal of the Reimagining Security with Cyberpsychology-Informed Network Defenses (ReSCIND) Program is to leverage a cyber attacker’s human constraints, such as innate decision-making biases and cognitive vulnerabilities, to disrupt their attacks. While attackers take advantage of human errors, most cyber defenses do not similarly exploit the attackers’ cognitive weaknesses — ReSCIND strives to flip this paradigm. By combining traditional cybersecurity practices with the emerging field of cyberpsychology, IARPA is set to engineer this first-of-its-kind cyber technology that makes an attacker’s job that much harder by focusing on the human behind the attack. The design of novel defense capabilities will be grounded in foundational science and their effectiveness quantified with rigorous experimentation and analysis. Experimental results will be used to iteratively improve and model these cyberpsychology-inspired methods for impacting attackers (e.g., causing frustration, surprise, choice overload, or risk aversion). Features such as the target network, attacker profile, and inferred attacker goals that can help predict and induce attacker mistakes and irrational behavior will be identified and incorporated into the defensive capabilities. Cultural aspects, operational tempo, motivation, and other specifics of real-world cyber campaigns will be critical considerations for experimental designs and modeling efforts.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
In cognitive psychology, simple response times are often modeled as the time required by a one-dimensional Wiener process to first reach a given threshold. This stochastic process's first-passage time follows a Wald distribution, which is essentially a reparametrized inverse-Gaussian distribution. Since the inverse-Gaussian distribution is part of the exponential family, there must exist a conjugate prior with respect to such a data-generating process. It can be shown that the Gaussian-Gamma distribution satisfies the conjugacy property, albeit under a parameterization different from that of the Wald distribution. This leads to a posterior distribution that does not directly correspond to the core parameters of the Wiener process; that is, the drift-rate and the threshold parameter. While the marginal threshold posterior under a Gaussian-Gamma prior is relatively easy to derive and turns out to be a known distribution, this is not the case for the marginal drift-rate posterior. Here, I address this issue by providing the exact solution for the marginal posterior distribution of the drift-rate parameter under a Gaussian-Gamma prior. Unfortunately, the probability density function of this distribution cannot be expressed in terms of elementary functions. Thus, different methods of approximation are discussed as an expedient for time-critical applications.
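For reference, the reparameterization at issue can be written as follows (standard results, stated here in our own notation):

```latex
% First-passage time of a unit-variance Wiener process with
% drift v > 0 and threshold a > 0 (the Wald distribution):
f(t \mid a, v) = \frac{a}{\sqrt{2\pi t^{3}}}
  \exp\!\left( -\frac{(a - v t)^{2}}{2 t} \right), \qquad t > 0,
% i.e., an inverse-Gaussian distribution with mean and shape
\mu = \frac{a}{v}, \qquad \lambda = a^{2}.
```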
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Deniz Pala
Asli Kilic
The contiguity effect is the finding that when an item is recalled, the next item to be recalled tends to come from study positions neighbouring that of the just-recalled item. In the recall literature, the contiguity effect is observed with a forward asymmetry. Various models have been developed to account for the contiguity effect. Kılıç et al. (2013) distinguished two classes of models: Causal models such as the Temporal Context Model (TCM) suggest that when an item is recalled, it causes another item to be recalled because the recalled item’s study context is incorporated into the test context, whereas according to non-causal models, the context during study changes independently of the items and this study context is reiterated during the test phase. Kılıç et al. (2013) employed the probed recall task to disrupt this supposed reiteration. They observed a contiguity effect but not the forward asymmetry, which was attributed to low recall performance. In the current study, we aimed to increase recall performance to determine whether the lack of asymmetry reflects the contribution of non-causal mechanisms or merely low performance. Therefore, the probed recall task was used along with overt rehearsal and sentence generation tasks during the study phase to increase recall performance. At test, probe words were presented and participants were asked to recall another word from the same list as each probe. Conditional Response Probability (CRP) analysis revealed within- and between-list contiguity effects, but the results did not show a forward asymmetry.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Ian Mackenzie
Valentin Koob
In conflict tasks, such as the Simon, Eriksen flanker, or Stroop task, a relevant and an irrelevant feature indicate the same or different responses in congruent and incongruent trials, respectively. The congruency effect refers to faster and less error-prone responses in congruent relative to incongruent trials. Distributional analyses reveal that the congruency effect in the Simon task becomes smaller with increasing RTs, reflected by a negative-going delta plot, whereas for other tasks, the delta plot is typically positive-going, meaning that the congruency effects become larger with increasing RTs. The Diffusion Model for Conflict tasks (DMC; Ulrich et al., 2015, Cognitive Psychology) accounts for this by explicitly modelling the information accumulated from the relevant and the irrelevant features and attributes negative- versus positive-going delta plots to different peak times of a pulse-like activation of the task-irrelevant feature. Recently, Lee and Sewell (2023, Psychonomic Bulletin & Review) questioned this assumption and advanced their Revised Diffusion Model of Conflict tasks (RDMC). We address three issues regarding RDMC in comparison with DMC: (1) The pulse-like function is not as implausible as Lee and Sewell suggest. (2) RDMC itself comes with the highly implausible assumption that different parameters are required for congruent and incongruent trials. (3) According to a new parameter recovery study, RDMC lacks acceptable recovery (in particular compared to DMC). Against this background, we do not see an advantage of RDMC over DMC at present.
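The distributional analysis referred to here reduces to computing congruency effects at matched RT quantiles; a minimal sketch follows (the toy data are ours, constructed so the effect shrinks for slower responses, as in the Simon task):

```python
import numpy as np

def delta_plot(rt_congruent, rt_incongruent,
               quantiles=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Delta plot: congruency effect (incongruent - congruent) at each
    RT quantile, plotted against the mean of the two quantile RTs."""
    qc = np.quantile(rt_congruent, quantiles)
    qi = np.quantile(rt_incongruent, quantiles)
    return (qc + qi) / 2, qi - qc  # x: mean quantile RT, y: delta

# Toy data: a Simon-like pattern where the effect decays with RT
rng = np.random.default_rng(1)
rt_con = rng.lognormal(-0.9, 0.3, 5000)
rt_inc = rt_con + 0.05 * np.exp(-rt_con * 4)  # effect shrinks for slow RTs
x, delta = delta_plot(rt_con, rt_inc)
print(np.round(x, 3), np.round(delta, 3))  # delta decreases with RT
```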
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Henrik Singmann
David Kellen
Long-term memory (LTM) and working memory (WM) are considered distinct memory components. This assumed difference is also reflected in different experimental tasks (e.g., recognition vs. change detection tasks). However, there is as yet no clear empirical evidence delineating what makes a task require LTM or WM. Recent work by Kellen et al. (2021, Psychological Review) established empirically that the general class of signal detection theory (SDT) models underlies recognition-memory judgements in LTM. A central feature of this model class is that it assumes unlimited capacity, which conflicts with one of the central assumptions regarding the structure of WM, namely that WM is capacity limited. The present work examines whether recognition judgements in visual WM satisfy the Block-Marschak (BM) inequalities. Satisfying the BM inequalities implies a random-scale representation, the key property of the SDT model class. In several experiments we find that performance in visual WM change detection does not satisfy the BM inequalities. This finding implies that a random-scale representation does not hold for visual WM. However, when using the same stimuli as typically used in visual WM tasks in an LTM task, we replicate earlier results showing that LTM judgements satisfy the BM inequalities. Considering that the concept of capacity limits in WM is fundamentally at odds with the assumptions that underlie random-scale representations, our result that visual WM judgements do not satisfy the BM inequalities is perhaps unsurprising but nevertheless provides a first foray into establishing a strong empirical distinction between LTM and visual WM.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Michael Frank
Dr. Matthew Harrison
Alexander Fengler
Classical versions of sequential sampling models (SSMs) assume that the rate of accumulation is constant over a given trial. Empirical evidence, however, suggests that instead, moment-by-moment attention, indicated for example by eye-gaze patterns, can shift the rate of accumulation such that it vacillates over the course of single trials. These dynamics are captured by models such as the attentional Drift Diffusion Model (aDDM). However, parameter inference for such models, in a way that faithfully tracks the generative process, remains a challenge. Specifically, the attention process, captured as arbitrary saccades and gaze times, forms a time-point-wise covariate which cannot be reduced to a fixed-dimensional summary statistic, and thus poses a challenge even for likelihood-free methods on the research frontier. We propose a method for fast computation of likelihoods for a class of models which subsumes the aDDM. The method divides each trial into discrete time stages with fixed attention, uses fast analytical methods to assess stage-wise likelihoods, and integrates these to calculate overall trial-wise likelihoods. Operationalizing this method, we characterize parameter recovery in a variety of settings and compare to widely used approximations to the aDDM, which instead only use fixation proportions to maintain tractable likelihoods. We characterize the space of experiments in which such approximations may be appropriate and point out which settings drive the model formulations apart. Our method will be made available to the community as a small Python package, which will integrate seamlessly into a wider probabilistic programming ecosystem around the PyMC Python library.
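The generative process targeted by this method can be sketched as follows (our simplified rendering of an aDDM-style trial, with the unattended item's value discounted by a factor theta; parameter values are illustrative only, not the authors' settings):

```python
import numpy as np

def addm_trial(v_left, v_right, fixations, durations,
               theta=0.5, d=0.5, sigma=0.35, a=1.0, dt=0.001, rng=None):
    """Simulate one aDDM-style trial (simplified sketch).

    fixations: sequence of 'L'/'R' fixation targets;
    durations: matching fixation durations in seconds.
    Relative evidence starts at 0; +/- a are the two boundaries, and the
    drift within each fixation stage depends on the attended item.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    for target, dur in zip(fixations, durations):
        drift = d * (v_left - theta * v_right) if target == 'L' \
            else d * (theta * v_left - v_right)
        for _ in range(int(dur / dt)):
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if abs(x) >= a:
                return ('L' if x > 0 else 'R'), t
    return None, t  # ran out of fixation data without a decision

choice, rt = addm_trial(3.0, 1.0, fixations='LRLRLR', durations=[0.5] * 6)
print(choice, rt)
```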
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Prof. Joe Houpt
Oncology professionals rely on self-report questionnaires to assess cognitive function throughout cancer treatment. This has proven problematic, as inconsistent and contradictory results have hindered the ability to measure the nuances of cognitive dysfunction. In turn, researchers have struggled to pinpoint the true underlying cause of cancer-related cognitive impairment amongst the array of possible sources. We believe that by utilizing the highly sensitive, process-partitioning capabilities of response time modeling in cognitive assessments, we will be able to characterize cancer-related cognitive impairment more accurately. By studying response time model trends across assessments of biomarkers, we aim to evaluate the possibility of immune response playing a causal role in cognitive dysfunction. The purpose of the current study was to establish a baseline understanding of the typical function associated with our assessment task and to perform a power analysis determining the number of trials needed to ensure response time parameter estimates that are as accurate as possible. The cognitive assessment tested in this project was the dual n-back test. This task challenges participants to remember a string of auditory and visual stimuli, allowing researchers to test the capabilities and limits of working memory, one of the main areas of cognition impacted by cancer-related cognitive impairment. We believe that the current study’s quantitative approach will elucidate components of the underlying cognitive mechanisms involved in working memory through the workload capacity metric. This study served as an important first step in understanding and measuring the complexities of cancer-related cognitive impairment.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Adam Osth
Dr. Daniel Feuerriegel
An assumption of the “full” diffusion model is that the rate of evidence accumulation varies across trials, which can account for slow errors and asymptotic accuracy (Ratcliff, 1978). This assumption has been criticized by researchers as an ad hoc addition that gives the model extra flexibility. In the present work, we ask whether linking the drift rate to systematic experimental factors can mitigate the need for a drift rate variability parameter. Using a recognition memory dataset with electroencephalography (EEG) recordings (n = 132), we systematically linked drift rate to individual trials using exogenous experimental factors – such as word frequency and study-test lag – along with endogenous factors derived from the EEG data. We expected that the inclusion of such factors would reduce the estimates of the drift rate variability parameter. We first demonstrated the feasibility of this modelling approach with simulated data. However, counter to this prediction, with experimental data, model fits indicated that the inclusion of systematic variability resulted in little decrease in the random drift rate variability parameter. This suggests that the implementation of a normal distribution of drift rates can be hard to meaningfully interpret in practice, and that other mechanisms not implemented in the DDM might be involved in producing slow errors.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Alexander Fengler
Dr. Matthew Nassar
Mr. An Vo
Sequential Sampling Models (SSMs) are ubiquitously applied to empirical data from tasks with two or more alternative choices, subsuming a large variety of task paradigms. Nevertheless, the space of models typically considered is often limited to those that are analytically tractable for inference. More recently, the field of simulation-based inference has enabled the development and evaluation of a much broader class of models. Here we leverage developments in likelihood-free inference using artificial neural networks in order to evaluate a range of models applied to a hierarchical decision-making task. Participants were presented with stimuli, in the form of lines, that varied across three dimensions: movement direction, line orientation, and color. These three features imply three potential decisions (dominant motion direction, etc.) on a given trial. One feature was designated the ‘high’ dimension and determined which of the two remaining ‘low’ dimensions was relevant for a given choice scenario. The task is therefore hierarchical, in that the high-dimensional feature acts as a filter determining which of the two remaining tasks a subject needs to solve. To investigate the corresponding cognitive strategies used by participants to solve these tasks, we developed a range of diffusion model variants to assess whether participants accumulate evidence strictly hierarchically and therefore sequentially, in parallel, or via a hybrid resource-rational approach. We will assess model fits and posterior predictive simulations to arbitrate between these accounts and to link them to trial-by-trial neural dynamics (via EEG) associated with encoding of the higher- and lower-dimensional features.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Jianqiao Zhu
Prof. Adam Sanborn
The systematic deviations from rational Bayesian updating of beliefs, such as conservatism and base-rate neglect, have been extensively studied. Two primary cognitive models have been proposed to explain these biases: simple heuristics (Woike et al., 2023) and stochastic sampling approximations of the Bayesian solution, such as the Bayesian Sampler (Zhu et al., 2020). However, recent research suggests that neither of these explanations fully accounts for observed behaviors. In a study by Stengård et al. (2022), only about half of participants' responses aligned with heuristics, indicating a gap between heuristic-based and Bayesian models. To address this gap, we propose exploring a new class of models that blend heuristics with Bayesian approaches. In our study, we investigate simple mixtures of heuristics and the Bayesian Sampler, as well as a hybrid model combining heuristics for setting priors and Bayesian methods for refining estimates using stochastic samples. Our analysis indicates that neither heuristics nor the Bayesian Sampler alone is sufficient to explain the observed data. Instead, a combination of these approaches appears to offer a more comprehensive explanation for human decision-making behaviors. By incorporating elements of both heuristic reasoning and Bayesian updating, our hybrid model shows promise in better capturing the complexities of human cognition and decision-making processes. Further research in this direction could provide valuable insights into understanding and potentially mitigating cognitive biases in real-world contexts.
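The Bayesian Sampler component can be sketched in a few lines (our rendering of the form given by Zhu et al., 2020; function and parameter names are ours, and the values are illustrative):

```python
import numpy as np

def bayesian_sampler_estimate(p_true, n_samples, beta, rng):
    """Probability estimate from the Bayesian Sampler: draw N mental
    samples of the event, then regularize the observed proportion with
    a symmetric Beta(beta, beta) prior."""
    s = rng.binomial(n_samples, p_true)          # samples in favor
    return (s + beta) / (n_samples + 2 * beta)   # posterior-mean estimate

rng = np.random.default_rng(0)
ests = [bayesian_sampler_estimate(0.9, n_samples=5, beta=1.0, rng=rng)
        for _ in range(10000)]
print(np.mean(ests))  # ~ (5*0.9 + 1) / 7 = 0.786: conservatism
```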
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Prof. Arndt Bröder
Research investigating the processes of multiple-cue judgments usually relies on simple artificial stimuli with predefined cue structures, since the cognitive models used in this area of research require that the cue structure is known. Unfortunately, this hinders the application of these models to situations involving complex stimuli with unknown cue structures. Building upon early categorization research, in two studies we demonstrate how the cue structures of complex and realistic stimuli can be extracted from pairwise similarity ratings with a multidimensional scaling analysis (MDS) and then subsequently be used to model participants' quantitative judgments with a hierarchical Bayesian model. After an initial validation study, we use MDS to generate cues for complex stimuli with an unknown cue structure based on pairwise similarity ratings of N = 110 participants. These cues are then used in a hierarchical Bayesian model to analyze judgments of these complex stimuli from N = 80 participants. Our results replicate previous findings that demonstrate the influence of learning tasks and feedback on strategy selection in judgment tasks. This highlights the feasibility of our approach and extends the generalizability of previous findings to more complex stimuli.
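The first step of this pipeline might look as follows (a sketch with made-up ratings for four stimuli; the actual studies used many more stimuli and a validated MDS procedure):

```python
import numpy as np
from sklearn.manifold import MDS

# Mean pairwise similarity ratings (0-10) for 4 stimuli; values invented
sim = np.array([[10, 8, 3, 2],
                [8, 10, 4, 3],
                [3, 4, 10, 7],
                [2, 3, 7, 10]], dtype=float)
dissim = sim.max() - sim  # convert similarities to dissimilarities

mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(dissim)  # stimulus coordinates = candidate cues
print(coords)
```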
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Lucas Castillo
Johanna Falben
Prof. Adam Sanborn
The Autocorrelated Bayesian Sampler (ABS; Zhu et al., 2023) is a sequential sampling model that assumes people draw autocorrelated samples from the memory of hypotheses according to their posterior beliefs. Samples are then integrated to produce choices, response times, confidence judgments, estimates, confidence intervals, or probability judgments. For example, for forced choices, samples are aggregated until those in favour of one response category exceed those in favour of the other, and then the favoured option is chosen. The ABS consists of two components: the mechanism of sampling and the response time distribution. Within this framework, we propose a novel ABS model integrating the MCREC sampling algorithm (Castillo et al., 2024) and a Gaussian response time distribution. We compared both ABS variants with the well-established and widely used Drift Diffusion Model (DDM; Ratcliff, 1978; Ratcliff & McKoon, 2008; Ratcliff & Rouder, 1998) to investigate the strengths and limitations of the ABS models. We fit the three models to data from a random dot motion task (Murphy et al., 2014) using Approximate Bayesian Computation (ABC; Beaumont et al., 2002; Csilléry et al., 2010; Marin et al., 2012) to evaluate how well the models account for the data. Our comparison incorporates statistics such as accuracy rates, response time quantiles, and the probability of repeating past choices. Through this analysis, we aim to illustrate how differences in their assumptions and approaches affect their performance across varied contexts, thereby identifying directions for enhancing the explanatory power of these models.
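ABC rejection, the simplest member of the ABC family, can be sketched as follows (a toy example with a made-up one-parameter simulator; this is not the models or summary statistics used in the study):

```python
import numpy as np

def abc_rejection(observed_stats, simulate, prior_sample,
                  n_draws=5000, eps=0.02):
    """Minimal ABC rejection sampler: keep parameter draws whose
    simulated summary statistics fall within eps of the observed ones."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        stats = simulate(theta)
        if np.linalg.norm(stats - observed_stats) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy: infer the drift of a one-boundary accumulator from the mean RT,
# treating RT ~ 1 / drift-sample as a crude stand-in simulator
rng = np.random.default_rng(2)
simulate = lambda v: np.array(
    [np.mean(1.0 / rng.normal(v, 0.3, 200).clip(0.1))])
posterior = abc_rejection(observed_stats=np.array([0.7]),
                          simulate=simulate,
                          prior_sample=lambda: rng.uniform(0.5, 3.0))
print(posterior.mean(), posterior.size)
```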
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Rocio Alcala-Quintana
Psychophysical data on duration discrimination are typically fitted via logistic or Gaussian psychometric functions. These functions are symmetric, with the consequence that estimates of the 25% and 75% points are forced to lie at the same distance from the standard duration, albeit in opposite directions. This characteristic is at odds with Weber’s law, which posits that the just-noticeable difference is proportional to the standard duration. Thus, if the proportionality factor were, say, 1.5, a duration of 300 ms would be just discriminable from a duration of 200 ms, and a duration of 450 ms would be just discriminable from a duration of 300 ms. Taken together, when the standard duration is 300 ms, points of equal discrimination performance below and above the standard should lie at different distances (in ms) from the standard, in contrast to what fitting symmetric psychometric functions renders. We conducted a simulation study that fitted psychometric functions to data generated to obey Weber’s law, which essentially implies that the relevant scale for time is log duration instead of duration. The results show that fitting conventional psychometric functions (of duration in ms) misrepresents discrimination performance and provides erroneous estimates of the difference limen, whereas fitting asymmetric psychometric functions (of log duration) captures the generating performance adequately. Psychometric functions of log duration generally fitted the data much better than psychometric functions of duration in ms, although the fits turned out similar in some cases. Some empirical data are presented that corroborate the validity of these simulation results.
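The core point can be verified in a few lines: a psychometric function that is symmetric on the log-duration scale produces 25% and 75% points at unequal distances (in ms) from the standard (a sketch; the spread parameter is arbitrary, not the study's values):

```python
import numpy as np

standard = 300.0  # ms
sigma = 0.25      # spread of the logistic on the log-duration scale

def p_longer(t):
    """Symmetric logistic psychometric function of log duration."""
    return 1.0 / (1.0 + np.exp(-(np.log(t) - np.log(standard)) / sigma))

# 25% and 75% points, found on the log scale and mapped back to ms
logit = lambda p: np.log(p / (1 - p))
t25 = standard * np.exp(sigma * logit(0.25))
t75 = standard * np.exp(sigma * logit(0.75))
print(t25, t75)                        # ~228 ms and ~395 ms
print(standard - t25, t75 - standard)  # unequal in ms, equal in log units
```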
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Miguel García-Pérez
Order effects are a pervasive phenomenon in psychophysics. They manifest as differences in some measure of discrimination performance according to the order in which the stimuli to be compared (standard vs. test) are presented in each trial. Different types of order effects have been described that hold for a wide range of sensory modalities and stimuli, and many of them can be accounted for by a number of models of psychophysical performance. However, time perception seems to be fundamentally different, and the origin of the order effects observed in duration discrimination tasks remains unclear. We conducted an experimental study using a duration discrimination task to collect data at different standard durations in the range of hundreds of milliseconds. Every dataset was then analyzed under different frameworks given by well-established models that can accommodate order effects: Indecision (https://doi.org/10.3389/fpsyg.2017.01142), Internal Reference (https://doi.org/10.3758/s13414-012-0362-4), and Sensation Weighting (https://doi.org/10.3758/s13414-020-01999-z). Psychometric functions derived from each model were fitted to the data both separately and jointly across presentation orders. All analyses were carried out twice, assuming either duration in milliseconds or log duration as the relevant scale for time perception. Our results provide a comprehensive map of order effects in the discrimination of short durations and a solid analysis of the strengths and weaknesses of each model. Implications are discussed, paving the way towards a better understanding of order effects in duration discrimination.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Prof. Rafal Bogacz
Psychologically, habits are defined as the reward-independent, stimulus-response relationships which form when identical actions are repeated often. These behaviours were originally studied in animals, and Hardwick et al. (2019) recently developed a paradigm to identify habits in humans. They hypothesised that human habits may be detected when participants are forced to act too quickly for conscious (goal-directed) control to be applied. They trained participants extensively on a stimulus-response mapping, and then the mapping was reversed. When participants were tested post-reversal, their behaviour changed depending on how rapidly they needed to react. Specifically, participants made more ‘habitual’ errors, i.e., choosing the original response, when forced to respond within 300–600 ms. Hardwick et al. proposed that parallel accumulators were responsible, wherein the goal-directed system is initiated after a delay. However, no formal mathematical model exists that instantiates this proposal and allows for multiple drift rates which change both across (via reinforcement learning) and within (parallel accumulators) trials. In this paper, we present a novel 2-drift race model and calculate the joint probability of reaction times and choices, so that the model can be efficiently fitted to data from the paradigm of Hardwick et al. To test their proposal, we compare the quality of fit of a single-drift Q-learning race model and that of our model, in which habitual and goal-directed actions accumulate independently. Furthermore, the best-fitting parameters of the 2-drift model can provide several key insights into, and quantifiable measures of, the mechanistic structure underlying individual differences in reliance on habits, which are undetectable in behaviour alone.
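The proposal can be sketched generatively as follows (our simplification: a race between two unit-variance accumulators in which the goal-directed drift switches on only after a delay; parameter values are illustrative, and this is not the authors' likelihood-based implementation):

```python
import numpy as np

def two_drift_race(v_habit, v_goal, goal_delay, a=1.0, sigma=1.0,
                   dt=0.001, max_t=3.0, rng=None):
    """Race between a habitual and a goal-directed accumulator; the
    goal-directed drift switches on only after goal_delay seconds.
    Returns (winner, rt)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(2)  # [habit, goal]
    t = 0.0
    while t < max_t:
        drifts = np.array([v_habit, v_goal if t >= goal_delay else 0.0])
        x += drifts * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
        if (x >= a).any():
            return ('habit' if x[0] >= a else 'goal'), t
    return None, max_t

# Fast responses tend to be won by the habit accumulator
rng = np.random.default_rng(3)
wins = [two_drift_race(1.2, 3.0, goal_delay=0.4, rng=rng)[0]
        for _ in range(2000)]
print(wins.count('habit') / len(wins))
```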
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Prof. Pietro Cipresso
Affect dynamics, or the study of changing patterns of emotional experiences over time, has emerged as an important area of research in Mathematical Psychology. Traditionally, affect dynamics analysis has used the Experience Sampling Method (ESM), a data collection approach in which participants report their feelings, thoughts, and behaviours at various points during the day. This approach models intensive longitudinal data (ILD) using mixed linear or nonlinear models (MLM) or vector autoregressive models (VAR). These models define emotion in terms of temporality and complexity. However, they overlook the fundamental unit of affect dynamics: the transition between states. Although emotions occur in sequential order, the transition between them relates the prior state to the present one. Individuals can feel and describe numerous emotions at the same time, but one emotion usually takes precedence, influencing or being compared to the prior one. In this work, we employ discrete Markov chains to assess each transition between the prior and present emotional state, disregarding earlier transitions, just as a Markov chain does. Indeed, Markov chains are mathematical systems that represent a succession of potential occurrences, with the probability of each event determined solely by the state attained in the preceding event. Here, we present an empirical study that used self-reported emotional responses and physiological data (heart rate variability, facial electromyography, and galvanic skin response) to create a discrete Markov chain and compare it to autoregressive models.
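Estimating the transition matrix of such a chain from a sequence of discrete emotional states is straightforward (a minimal sketch; the state labels are made up for illustration):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Maximum-likelihood estimate of a discrete Markov chain's
    transition matrix from one observed sequence of state labels."""
    counts = np.zeros((n_states, n_states))
    for prev, nxt in zip(states[:-1], states[1:]):
        counts[prev, nxt] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts),
                     where=rows > 0)

# Toy sequence of self-reported states: 0 = calm, 1 = stressed, 2 = happy
seq = [0, 0, 1, 1, 1, 0, 2, 2, 0, 0, 1, 2]
print(transition_matrix(seq, n_states=3))
```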
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Andreas Voss
Semantic priming has been intensively investigated in the lexical decision task, naming task, and semantic categorization task. Although the semantic categorization task with short SOAs is considered to exclude strategic expectancy mechanisms and post-lexical processes, the category congruence effect is likely to confound semantic priming in this task. The study aims to disentangle semantic priming and category congruence effects in the semantic categorization task. We tested these effects by presenting primes and targets in different modalities (i.e., primes as words and targets as pictures). Specifically, we varied whether the primes were semantically associated with the targets and whether primes and targets belonged to the same category (i.e., living/non-living). We plan to use Bayesian hierarchical diffusion modeling to test these hypotheses: firstly, whether there is automatic spreading activation in this task with the cross-modal paradigm, which has typically been mapped onto drift rates in diffusion modeling; secondly, whether there is response competition or a head start, which should be mapped onto non-decision times; and moreover, whether there is a decision bias (a bias at the starting point) in the category-compatible condition, which is also a plausible explanation of the category congruence effect. Models selectively manipulating these effects will be compared to test these hypotheses.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Henrik Singmann
The belief bias describes the phenomenon that, when asked to judge the validity of a logical argument, people are influenced by the believability of the argument’s conclusion. We investigated the belief bias in the context of everyday arguments regarding controversial political topics like those found on (social) media. Arguments differed in their (informal) argument quality; ‘good’ arguments provide strong evidence for their conclusion, whilst ‘bad’ arguments provide only weak evidence. Participants rated their beliefs about a series of political claims (e.g., ‘abortion should be legal’) on a 7-point Likert scale and rated the strength of good and bad arguments about these claims. In Experiment 1, participants rated argument strength on a scale of 1 (extremely bad argument) to 6 (extremely good argument), while in Experiment 2 they rated it on a binary scale (i.e., either bad or good). We analysed both experiments using linear models and probit models, i.e., equal variance signal detection models. In both experiments, with both types of models, we found a belief bias for everyday arguments. Participants thought the quality of good arguments was stronger than the quality of bad arguments, but also perceived arguments in line with their beliefs as better than arguments that were not. Furthermore, independent of the model used, we found no evidence for an interaction between belief and argument quality.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Prof. Joe Houpt
Current research for military pilots frequently centers on the amount of information made available to the individual based on perceptual modeling, emphasizing cognitive information processing for anticipated incidents or simple outcomes. However, although these analyses provide a process-level description of decision-making, they do not address the perceptual effects on decision-making and task performance of multiple visual cues and stimuli under extreme visual conditions, as the varied dimensions of a stimulus do not produce straightforward effects when each cue is adjusted alone. By changing and adapting the multiple facets of stimuli, we can detect which cues help identify the visual information needed to achieve and maintain the fastest and most precise decision responses through visual perception processing. We suggest that, through General Recognition Theory (GRT), we can examine multidimensional stimuli to develop and regulate the best properties to support aviation cues and symbology. This study is designed to investigate visual information stimuli and disruption through filtered overlays, varying backgrounds, and various cues that may directly or indirectly affect the task or the participant’s information processing. Manipulated symbols, such as those with applied color-specific filters or non-neutral backgrounds, showed increased speed performance in symbol recognition within the visual range, including visual stimuli in a more peripheral zone. Applied filters may aid in faster detection of visual cues but may change the overall meaning of the specified symbol, creating a need to verify the intended connotation without loss of speed or accuracy in task performance and identification.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Kensuke Okada
With the growing movement toward robust modeling in response to the reproducibility crisis in psychological research, it is crucial to test whether an existing model can explain other datasets and to evaluate the differences in model behavior that can be observed between different datasets. The framework of secondary data analysis, in which studies are conducted using existing data, is helpful for such efforts because it allows for the use of a wide variety of datasets containing minor differences in experimental conditions and other details, thus contributing to validating the robustness of the model. In this study, using models of the mathematical form of the forgetting curve as an example, we considered how to evaluate models in a dataset-integrated manner using multiple secondary datasets. Specifically, we implemented Bayesian hierarchical models that account for the differences among datasets based on a meta-analytic approach. Then, using the Bayesian Evidence Synthesis framework, we repeatedly fitted the models with sequential additions of datasets and observed the transition of the Bayes factors. We report the results of a preliminary simulation study. We constructed power and linear models that explain the forgetting curves using power and linear functions, respectively, and compared these two models on artificial datasets. The results confirmed that the Bayes factors correctly selected the data-generating models in situations where the data-generating models were known. Moreover, we discuss our preregistered protocol for the secondary data analysis that we plan to conduct.
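The two candidate forms can be written down directly (stated in our own notation; parameter values are illustrative, not fitted):

```python
import numpy as np

# Candidate forgetting-curve forms compared in the simulation study
def power_model(t, a, b):
    return a * np.power(t, -b)       # retention ~ a * t^(-b)

def linear_model(t, a, b):
    return np.clip(a - b * t, 0, 1)  # retention ~ a - b * t

t = np.array([1, 2, 4, 8, 16, 32], dtype=float)  # retention intervals
print(power_model(t, a=0.9, b=0.3))
print(linear_model(t, a=0.9, b=0.025))
```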
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Joachim Vandekerckhove
Bayesian inference requires the use of numerical solutions, since closed-form posterior distributions are rarely available in complex models. Popular algorithms and specialized software demand a considerable amount of computational resources, and Bayesian analyses requiring hours or days of uninterrupted computation are common. Furthermore, the need for scalable Bayesian methods intensifies as large datasets on diverse domains become readily available. In this work we explore the performance of Consensus Monte Carlo (CMC) in the context of hierarchical models. This distributed algorithm splits the data into several chunks and assigns each one to a different machine, calculates the posterior distribution corresponding to each data partition, and then mixes them back together to obtain the posterior distribution reflecting the whole dataset, where the final “consensus” distribution is a weighted average of the posterior distributions returned by each machine. We illustrate the workings of CMC by implementing a hierarchical model of choice equilibrium over NFL play-by-play decisions. The dataset includes over a quarter million plays from 2013 to 2023 and, given its moderate size, allows for a direct comparison between CMC and the model implemented on a single machine using all observations at once. The hierarchical model we use as an example describes choices between rushing or passing as a function of the relative gain in yards returned by each of those alternatives, and explains deviations from optimal equilibrium in terms of covariates at the team, game, and quarter level.
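The combination step of CMC can be sketched in a few lines (using the Gaussian-motivated precision-weighted rule of Scott et al., 2016; the toy check assumes each shard's subposterior has been correctly broadened by splitting the data under a flat prior):

```python
import numpy as np

def consensus_draws(subposterior_draws):
    """Combine per-shard posterior draws by precision-weighted averaging.

    subposterior_draws: list of S arrays, each of shape (n_draws,).
    Returns an array of n_draws consensus draws."""
    draws = np.stack(subposterior_draws)  # (S, n_draws)
    w = 1.0 / draws.var(axis=1, ddof=1)   # one weight per shard
    return (w[:, None] * draws).sum(axis=0) / w.sum()

# Toy check: a N(0, 1) full-data posterior split over 4 equal shards,
# so each subposterior is N(0, 4); the consensus should recover N(0, 1)
rng = np.random.default_rng(4)
shards = [rng.normal(0.0, 2.0, 10000) for _ in range(4)]
combined = consensus_draws(shards)
print(combined.mean(), combined.std())  # ~0 and ~1
```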
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Sascha Meyen
Volker Franz
Hypothesis testing is one of the most widely used tools in inferential statistics. Yet, hypothesis tests — be they frequentist or Bayesian — have their respective problems and can cause severe misinterpretations. We argue that one reason for these persistent problems is the following discrepancy: While hypothesis tests are explicit about which parameter values are theoretically contained in each hypothesis, they are usually not explicit about which parameter values would, in a practical setting, most likely lead to which test outcome. For example, certain small 'true' effects, although deviating from the typical point-null hypothesis, will in most cases lead to Bayes factors supporting the null hypothesis, depending on the sample size (or, more generally, precision). To make these test characteristics explicit, we introduce the concept of Regions of Support (ROS). ROS can serve both as a check of researchers’ expectations and as a means of comparing different tests. We evaluate standard Bayesian and frequentist point-null tests as well as interval (equivalence) tests in a simple two-independent-samples setting. Interestingly, for interval tests our ROS analysis finds that Bayes factors suffer from an undesirable bias towards the equivalence hypothesis. We argue that other methods, such as the Bayesian highest density interval (HDI) with region of practical equivalence (ROPE) or its frequentist analogue (confidence interval with ROPE), do not show this bias and might be preferable. With that, we demonstrate the diagnostic value ROS can have and hope that — due to its general applicability to any test — it will find its way into researchers' statistical toolboxes.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Jian-Qiao Zhu
Jake Spicer
Prof. Adam Sanborn
When making financial forecasts, individuals often tend to overreact to recent information. This phenomenon has been consistently observed both in laboratory studies involving naïve participants (Afrouzi et al., 2023: https://doi.org/10.1093/qje/qjad009) and in professional consensus forecasting in real-world settings (Bordalo et al., 2020: https://doi.org/10.1257/aer.20181219). Leading models attribute this overreaction to either an overestimation of recent information or memory constraints favoring more accessible information. An alternative explanation posits that individuals accurately integrate all available information into the posterior probability distribution for forecasting. However, a key challenge arises from the inability to directly access this posterior distribution, leading forecasters to depend on approximation methods, such as sampling. Local sampling algorithms, supported in other forecasting contexts (Spicer et al., 2022: https://doi.org/10.31234/osf.io/fjtha), may introduce overreaction due to a starting-point bias, as well as greater variability in predictions due to their stochastic nature. Here, we leverage these phenomena to discern between competing explanations of the observed forecasting behaviour. By reanalyzing data from a lab prediction task using a random walk price series (Afrouzi et al., 2023), we observe increasing variability in predicted values and forecast errors as the horizon expands, in keeping with sampling explanations. These data offer only a single prediction at each horizon, however; to further explore within-individual variability, we present a new experiment in which participants are asked to repeatedly predict the same future value, and use the results to distinguish between models.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Martin Schnuerch
Prof. Arndt Bröder
The lens model equation builds on the Brunswik lens model and decomposes judgmental achievement (i.e., the correlation between judgments and true criterion values) into several correlational parameters. One of these parameters, the (linear) matching parameter G, is commonly used as an indicator of the extent to which individuals utilize the available environmental cues according to their respective validity to form a judgment. However, because G denotes the correlation between predicted values of two linear regression models containing the same set of predictors, it exhibits some undesirable statistical properties, such as a bias toward high values and a dependence on the number of cues, as first pointed out by Castellan (1973, Psychometrika). Since the G-parameter, despite its statistical limitations, remains a widely used tool in many fields of judgment research, we propose a hierarchical equivalent to address its limitations. We compare the statistical properties of the conventional G-parameter to its hierarchical equivalent in different simulation scenarios and in application to empirical data from metamemory research. Our results suggest that the hierarchical G estimator is more robust to misspecifications of the regression models, for example, due to unknown cues or item-cue interactions, and leads to more reliable estimates due to hierarchical shrinkage. We discuss that while G may not be a psychologically meaningful measure in all task environments, the hierarchical equivalent leads to more accurate estimates in judgment scenarios where G can be considered a sensible measure of matching accuracy.
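For concreteness, the conventional G can be computed as follows (a sketch; the simulated cues and weights are ours, chosen only to illustrate that G tends toward high values even with noisy judgments):

```python
import numpy as np

def matching_G(cues, criterion, judgments):
    """Lens-model matching parameter G: the correlation between the
    predicted values of two linear regressions (criterion on cues, and
    judgments on the same cues)."""
    X = np.column_stack([np.ones(len(criterion)), cues])
    pred_env = X @ np.linalg.lstsq(X, criterion, rcond=None)[0]
    pred_judge = X @ np.linalg.lstsq(X, judgments, rcond=None)[0]
    return np.corrcoef(pred_env, pred_judge)[0, 1]

rng = np.random.default_rng(5)
cues = rng.normal(size=(100, 3))
criterion = cues @ [0.6, 0.3, 0.1] + rng.normal(0, 0.5, 100)
judgments = cues @ [0.5, 0.4, 0.1] + rng.normal(0, 0.8, 100)
print(matching_G(cues, criterion, judgments))  # close to 1 despite noise
```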
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Martin Schnuerch
Mr. Lukas Schumacher
Memory performance declines over the course of a memory test, a finding referred to as output interference. A promising way to disentangle memory interference and motivation as underlying mechanisms is by means of the drift diffusion model (DDM). The DDM is a cognitive model for analyzing response time and choice data from binary decision tasks. Previous applications in the context of output interference focused on the development of drift-rate and boundary-separation parameters to measure changes in retrieval and motivation, respectively. However, motivation could also affect participants’ tendency to engage in fast guessing instead of the more effortful cognitive process measured by the DDM. Moreover, parameter development is typically analyzed across trial blocks rather than single trials. To address these limitations and (a) disentangle guesses from informed responses and (b) estimate parameter trajectories on a single-trial level, we used neural superstatistics, an emerging method for inferring parameter trajectories from empirical data, to estimate a non-stationary diffusion/fast-guess mixture model. The model was fitted to empirical recognition memory data from forced-choice and yes/no categorization tasks. We found that, while drift rate and boundary separation decrease over the course of the experiment, the probability of resorting to fast guessing increases. These results emphasize the importance of accounting for guessing when analyzing output interference in recognition memory and highlight the usefulness of non-stationary cognitive models.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Nick Chater
Prof. Adam Sanborn
Dr. Jian-Qiao Zhu
There are multiple repulsion biases in perceptual decision making. In motion transparency, the perceptual experience of two superimposed coherent motions is biased toward repulsion when the angle between the two exceeds 30 degrees (Braddick et al., 2002). Moreover, in decision making, when a discrimination task precedes a perceptual judgement task, the former biases the latter away from the discrimination reference probe (the repulsion effect; Zamboni et al., 2016; Spicer et al., 2022). Can these two repulsive effects co-occur, and what kind of model could explain such co-occurrence? We presented participants with transparent motion stimuli and asked them to perform two tasks sequentially: a motion direction discrimination task relative to a reference probe and then a motion direction report task of all observed motion groups. The perceptual repulsion effect and the decision-making repulsion effect were replicated independently. The reported direction relative to the probe stimulus was biased away from the probe used in the discrimination task, but only when participants performed the discrimination task first. A separate experiment confirmed that the bias away from the probe was a decision-making bias that was attenuated by a pause after the discrimination task. How can we provide a unified explanation of both repulsion biases? Bayesian models can explain the repulsion between two transparent motions as the result of inference using an internal generative model (Gershman et al., 2016), specifically inferring and subtracting the joint motion of the stimuli, while evidence accumulation models explain the repulsion effect in decision making (Spicer et al., 2022). We propose that the brain sequentially samples from the posterior distribution over the generative model given the stimuli (capturing repulsion between the motions) with optimal stopping (capturing repulsion from the probe).
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Jan Göttmann
Anna-Lena Schubert
The congruency effect, characterized by faster reaction times in congruent trials compared to incongruent ones, is a consistent finding in various conflict tasks. Although the effect is considered to reflect consistent cognitive abilities such as inhibition or attentional control, inconsistencies in delta-function trajectories and in performance correlations across tasks present significant challenges. To address these limitations and to identify the underlying processes using computational modeling, the Diffusion Model for Conflict Tasks (DMC) has been developed, showing promising predictions of different shapes of delta functions. However, estimating DMC parameters using traditional methods is challenging due to its intractable likelihood, leading to extensive computational effort. In this study, we used BayesFlow, a simulation-based approach that leverages deep neural networks, to overcome these challenges. BayesFlow approximates the underlying likelihood function from simulated data and generates a posterior probability distribution by employing two neural networks. It offers an extremely efficient approach since, after training the networks, parameter estimation is completed in real time. We conducted a simulation study to assess the capability of BayesFlow to recover simulated parameters. The implementation showed reasonable simulation-based calibration, sensitivity, and goodness of fit. The estimation of DMC parameters achieved excellent recovery, with correlations between simulated and recovered parameters ranging from r = .88 to r = .99, exceeding those of existing estimation techniques.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Mr. Haijiang Yan
Prof. Adam Sanborn
Probability matching is a bias in human decision making, where people asked to choose between options with unequal probabilities sub-optimally ‘match’ the frequency of their responses to that of the underlying events. Intriguingly, this effect is observed when subjects are not given any information about these probabilities, as well as when they are informed in advance. We investigate whether probability matching can be the result of local sampling from an approximated distribution. Local sampling offers a unifying account of various biases and characteristics of human probabilistic judgements. Previous work has shown that independent sampling captures global probability matching patterns. However, independent sampling ignores serial dependencies and often fails to account for nuances of behaviour. We explore the extent to which local sampling can improve on these results, and how it compares to competing explanations. We designed an online experiment (N = 147), describing to participants a six-sided die with four sides of one colour and two sides of another. Subjects then had to perform three variations of a binary choice task, as a counterbalanced within-subject factor. Two of them involved predicting the next outcome in a series of die rolls, with and without feedback. In the third task, participants were asked to construct sequences of die rolls, one at a time, by mentally simulating the process. We compare local sampling algorithms with several other quantitative models in their ability to account for characteristics of participants’ behaviour in these tasks, such as first-trial responses, serial dependence effects, and reaction times.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Ms. Emily Todd
Prof. James Rowe
Dr. Alexander Murley
Ms. Rebecca Williams
Disinhibition is a prominent feature of syndromes associated with frontotemporal lobar degeneration (FTLD), encompassing impulsive behaviours and difficulty suppressing inappropriate or habitual responses. Disinhibition in these syndromes has been linked to higher caregiver burden, earlier institutionalisation, and poorer prognosis (Murley et al., 2021). There are currently no treatments for disinhibition in FTLD. However, an avenue for potential treatment is that of neurotransmitter deficits. Gamma-aminobutyric acid (GABA) and noradrenaline deficits in FTLD are well established and are correlated with disinhibition (Murley et al., 2020; Ye et al., 2023). To develop and validate treatment strategies for disinhibition, we need to understand the delicate balance of neurotransmitter deficits in these syndromes and their link to disinhibition. Here we use a manual stop-signal task to quantify inhibitory control in Progressive Supranuclear Palsy (PSP, Richardson’s syndrome, n = 5), behavioural variant frontotemporal dementia (bvFTD; n = 9), and age- and sex-matched healthy adults (n = 14). The stop-signal task is a well-established tool to quantify inhibitory control, with trans-species and trans-diagnostic utility. We confirm that patients with PSP and bvFTD are impaired on the stop-signal task, showing longer stop-signal reaction times (SSRT; M = 301.38, SD = 98.87) than controls (M = 187.38, SD = 32.78, p = 0.0003). Ongoing work is analysing the contribution of GABAergic and noradrenergic deficits to these impairments in inhibitory control. Understanding the variance of inhibitory control has implications for the timing of symptom onset, prognostication, and the development of pharmacological interventions to mitigate the behavioural challenges faced by affected individuals and their caregivers.
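SSRT values like those reported here are commonly obtained with the integration method under the independent horse-race model (a sketch under that assumption; we do not know the exact estimator used in this study, and the toy data are ours):

```python
import numpy as np

def ssrt_integration(go_rts, p_respond_given_stop, mean_ssd):
    """SSRT via the integration method: the nth fastest go RT, where
    n is the proportion of failed stops times the number of go trials,
    minus the mean stop-signal delay (independent horse-race model)."""
    go_rts = np.sort(np.asarray(go_rts))
    n = int(round(p_respond_given_stop * len(go_rts)))
    nth_rt = go_rts[max(n - 1, 0)]
    return nth_rt - mean_ssd

rng = np.random.default_rng(6)
go = rng.lognormal(np.log(0.5), 0.2, 200)  # go RTs in seconds (toy data)
print(ssrt_integration(go, p_respond_given_stop=0.45, mean_ssd=0.25))
```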
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Ms. Sonia Acuna Vargas
Dr. Michael D. Nunez
Chris Donkin
This research delves into decision-making, examining the effect of information seeking on the propensity to change one's mind. It also aims to differentiate between two metacognitive states: "believe I can know" and "don't believe I can know." Additionally, the study investigates whether beliefs in knowledgeability and the act of changing one's mind are associated with specific neural markers, thereby exploring the relationship between information seeking and change of mind. The methodology involves a color judgment task in which participants are initially required to respond as quickly as possible. In some trials, they are given the opportunity to seek more information before reporting their final decision along with their confidence level. The results show different behavioral patterns of change of mind under various information-seeking scenarios, suggesting a significant role for information seeking in decision-making processes. Furthermore, decoding analysis of EEG data has demonstrated the ability to distinguish between the two metacognitive states at an individual level. These findings offer valuable insights into the underlying cognitive processes involved in information seeking and change of mind.
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Mr. Gregory Bowers
Mr. Andrew Manory
Ashley Cook
When one attempts to multi-task, performance decreases, even for cross-modal (aural) working memory (WM) and (visual) search (VS) tasks. In this work, we investigate how the underlying decision-making (DM) processes change as a function of cross-modal multi-tasking and cognitive load. Specifically, we use a shifted Wald model to assess one’s drift, i.e., the rate at which evidence is accumulated, and threshold, i.e., the amount of evidence needed to make a decision, in a 2-alternative forced choice (2AFC) VS task and a single-bound (go/no-go) WM task (n-back) of various difficulties (1-, 2-, 3-back), each in isolation and in dual-task contexts. We estimate parameters for each task in each single-task and multi-task condition at each cognitive load (1-, 2-, 3-back). At the group level, we find that one’s drift rate increases and threshold remains constant in the VS task, but only in the presence of a 1-back task; in the context of a 2-/3-back task, drift increases slightly and threshold increases substantially. In the n-back task, both drift and threshold decrease as the difficulty of the n-back increases and, except for the 3-back, decrease even more in the context of the VS task; however, this comes at the cost of accuracy, and only correct response times were investigated using the shifted Wald model. In the 3-back task, parameters slightly increase when attempting to dual-task, compared to an isolated context. We discuss the feasibility of utilizing the shifted Wald model for 2AFC and go/no-go tasks and discuss individual differences in the impact of cognitive load on DM parameters.
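The shifted Wald density used for such single-bound analyses has a closed form (stated here in our own notation, with drift gamma, threshold alpha, and non-decision shift theta; parameter values in the example are arbitrary):

```python
import numpy as np

def shifted_wald_pdf(t, gamma, alpha, theta):
    """Density of the shifted Wald: first-passage time of a unit-variance
    one-boundary accumulator with drift gamma and threshold alpha, plus
    a non-decision shift theta."""
    t = np.asarray(t, dtype=float) - theta
    out = np.zeros_like(t)
    ok = t > 0
    out[ok] = (alpha / np.sqrt(2 * np.pi * t[ok] ** 3)
               * np.exp(-((alpha - gamma * t[ok]) ** 2) / (2 * t[ok])))
    return out

print(shifted_wald_pdf([0.4, 0.6, 0.9], gamma=3.0, alpha=1.0, theta=0.2))
```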
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Prof. Joe Houpt
The asymmetry between symmetry and asymmetry
Some have argued that symmetry is a core feature in visual perception. In a previous study, we found change detection was facilitated when a change from asymmetry to symmetry was an incidental cue. In the current study, our goal was to investigate whether that increased efficiency holds when a change from symmetry to asymmetry is an incidental cue. Participants were asked to judge whether the orientation of two lines had changed; the lines could change in a way that preserved asymmetry or in a way that created symmetry. For trials with pairs of lines, the lines created symmetry or asymmetry as an incidental feature. We applied the capacity coefficient, a tool from systems factorial technology, to assess performance. The capacity coefficient gives both categorical results (whether there is a cost, benefit, or no change when two lines are used together) and quantitative results that could be used for examining individual differences. In the previous study, we found all participants exhibited super capacity when detecting a change from asymmetry to symmetry, using the single-target self-terminating (STST) capacity coefficient. In the current study, participants were far less likely to demonstrate super capacity when there was an incidental change from symmetry to asymmetry, again using STST capacity. For comparison, in the previous study, when a change in symmetry was not an incidental cue (i.e., both reference and probe were symmetric), participants were generally around unlimited capacity, but in the current study participants were limited capacity in the analogous trials (i.e., both reference and probe were asymmetric), based on the OR capacity coefficient.
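The OR capacity coefficient referred to here compares the cumulative hazard of the double-target condition against the sum of the single-target hazards (a minimal sketch; our estimator uses the empirical survivor function, a simplification of the usual Nelson-Aalen approach, and the toy data are constructed as an unlimited-capacity race baseline):

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Estimate H(t) = -log S(t) from response times via the empirical
    survivor function."""
    rts = np.asarray(rts)
    surv = np.array([(rts > ti).mean() for ti in np.atleast_1d(t)])
    return -np.log(np.clip(surv, 1e-10, 1.0))

def capacity_or(rt_double, rt_single_a, rt_single_b, t):
    """OR capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t));
    C > 1 indicates super capacity, C < 1 limited capacity."""
    return cumulative_hazard(rt_double, t) / (
        cumulative_hazard(rt_single_a, t)
        + cumulative_hazard(rt_single_b, t))

rng = np.random.default_rng(7)
a, b = rng.gamma(4, 0.1, 2000), rng.gamma(4, 0.1, 2000)
double = np.minimum(a, b)  # unlimited-capacity parallel (race) baseline
print(capacity_or(double, a, b, t=np.array([0.3, 0.4, 0.5])))  # ~1
```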
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Jocelyn Espinoza
Color perception is highly dependent on the perceived context of the color, as demonstrated in the viral discussion surrounding "The Dress." One critical practical concern is that even when only a small band of color is blocked, the perception of a much wider range of colors can be affected. Following a spate of incidents in which pilots were temporarily blinded by green laser pointers aimed at their aircraft from the ground, psychophysical research has examined the direct effect of laser eye protection (LEP) on pilots' perception of information presented in colors near the filtered range. Our current research examines how LEP affects the perception of color in the context of other colors. Participants observed four shapes at a time, each of which had a distinct solid-color background. One of the shapes was filled with a slightly different hue than the other three, and the participant's task was to indicate which shape was a different color. To calibrate for individual color perception, the task began with an adaptive phase to determine the level of hue change. Participants then completed a control phase and an LEP phase so that accuracy could be compared. The data collected indicate that accuracy decreases for red and green shapes across desaturated backgrounds and increases for blue and yellow shapes across extremely saturated backgrounds.
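The abstract leaves the adaptive procedure unspecified; one common choice is a 2-down-1-up staircase on the hue difference, which converges near 70.7% accuracy. The sketch below is a hypothetical illustration, not the study's actual procedure.

```python
# Hypothetical 2-down-1-up staircase on the hue difference `delta`:
# two consecutive correct responses make the task harder (smaller delta),
# any error makes it easier (larger delta).
def update_delta(delta, correct, state, step=0.5, floor=0.1):
    if not correct:
        state["n_correct"] = 0
        return delta + step
    state["n_correct"] += 1
    if state["n_correct"] == 2:         # two in a row: step down
        state["n_correct"] = 0
        return max(floor, delta - step)
    return delta

state = {"n_correct": 0}                # reset at the start of each phase
```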
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Jennifer Trueblood
How does category learning, such as in medical image classification tasks, change mental representations? Is the change in mental representations similar to the change in neural network representations when networks are trained on specialized tasks? In this project, we compare similarities obtained from neural network representations to human similarity representations, before and after both were trained to classify white blood cell images into blast (cancerous) and non-blast (non-cancerous) categories. We focus on two neural network representations for each image: (i) GoogLeNet pre-trained on ImageNet (the Stock Representation), which has received no training on white blood cell classification, and (ii) GoogLeNet trained on cancer cell classification (the Task Representation) using transfer learning, following Holmes et al. (2020). Using each neural network representation, we calculate the similarity between two images as the Euclidean distance between the image embeddings. We also conducted an experiment in which we recruited human participants from MTurk using CloudResearch. We probed human representations by eliciting similarity judgments on carefully curated pairs of images before and after participants learned to classify the cancer cell images. We draw comparisons between human and artificial neural network representations and discuss the implications for medical image training.
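A minimal sketch of the similarity computation described, using torchvision's GoogLeNet pretrained on ImageNet as the Stock network and exposing its penultimate-layer embedding; the fine-tuned Task network would be queried identically with its own weights. Image preprocessing is assumed to follow the model's standard transforms.

```python
# Similarity between two images as the Euclidean distance between their
# GoogLeNet embeddings (final classification layer replaced with identity).
import torch
from torchvision import models

net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
net.fc = torch.nn.Identity()            # expose the 1024-d embedding
net.eval()

@torch.no_grad()
def embedding_distance(img_a: torch.Tensor, img_b: torch.Tensor) -> float:
    """img_a, img_b: preprocessed tensors of shape (1, 3, 224, 224)."""
    return torch.dist(net(img_a), net(img_b), p=2).item()
```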
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Zita Oravecz
Joachim Vandekerckhove
Interrupted time series analysis is a statistical method for studying the effects of a deliberate intervention by observing data over a period before and after a change. In this project, we consider interrupted time series data from a mobile health intervention study aimed at promoting psychological well-being in college students. We apply a model for interrupted time series based on the Ornstein-Uhlenbeck (OU) diffusion model, a stochastic time series model whose main parameters capture intraindividual variability, an attractor point or homeostasis level, and an elasticity parameter that governs the speed with which the process returns to its attractor after a perturbation. Interruptions in these time series can be characterized as discrete state shifts in one or more of these parameters, leading to the hierarchical Bayesian interrupted OU model that we apply to the mobile health intervention. We evaluate the intervention's effectiveness by examining levels of psychological well-being across four study phases: pre-intervention, intervention, immediate post-intervention, and late post-intervention. We operate under the assumption that the time series can be categorized according to these phases, anticipating that participants' psychological well-being stabilizes at specific homeostatic levels during each phase. Additionally, we evaluate the applicability of BayesFlow to this broader class of problems. BayesFlow is a new simulation-based inference method that can provide high-efficiency Bayesian parameter estimation even with complex, time-variant models, but its application to a multilevel hierarchical model such as ours requires a thoughtful implementation. We discuss strategies for our specific case and possible extensions of our work.
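For concreteness, the OU process can be written as dX_t = beta (mu - X_t) dt + sigma dW_t, with homeostatic attractor mu, elasticity beta, and intraindividual volatility sigma. Below is a minimal Euler-Maruyama simulation in which the interruption is a discrete shift in the attractor; all parameter values are illustrative, not estimates from the study.

```python
# Simulate an interrupted Ornstein-Uhlenbeck process:
# dX_t = beta * (mu - X_t) dt + sigma dW_t, with mu shifting at t_switch.
import numpy as np

def simulate_interrupted_ou(mu_pre=3.0, mu_post=4.0, beta=0.5, sigma=0.4,
                            t_switch=50.0, t_max=100.0, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    x = np.empty(n)
    x[0] = mu_pre                       # start at the pre-intervention attractor
    for i in range(1, n):
        mu = mu_pre if i * dt < t_switch else mu_post   # the interruption
        x[i] = (x[i - 1] + beta * (mu - x[i - 1]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x
```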
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Dr. Constantin Meyer-Grant
Prof. Christoph Klauer
We updated the R package "rtmpt" with a newly developed method to incorporate response times into the class of multinomial processing tree (MPT) models. Like the method implemented in the previous version of the package, the new method allows for the estimation of process-completion times as well as process-outcome probabilities. However, in contrast to the previous method, in which each process-completion time was assumed to follow an exponential distribution, it assumes that these quantities are determined by the outcome of a Wiener diffusion process. Consequently, the process-completion times no longer possess the questionable memoryless property. In addition, the new method can account for non-monotonic hazard rates of a single processing branch. Both of these characteristics make the new method more realistic with respect to actual response times. Furthermore, a comparison of the two approaches can serve as a means of performing robustness checks on the auxiliary assumptions regarding process kernels. We show how to use the new method and demonstrate the validity of the underlying hierarchical Bayesian MCMC algorithm via simulation-based calibration.
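For context, the Wiener assumption means each process-completion time follows the first-passage time density of a diffusion between two boundaries. In the common large-time series representation (diffusion coefficient fixed at 1, drift v, boundary separation a, relative starting point z), the density of absorption at the lower boundary is:

```latex
f(t \mid v, a, z) \;=\; \frac{\pi}{a^{2}}
\exp\!\left(-v a z - \frac{v^{2} t}{2}\right)
\sum_{k=1}^{\infty} k
\exp\!\left(-\frac{k^{2}\pi^{2} t}{2 a^{2}}\right)
\sin\!\left(k \pi z\right)
```

Unlike the exponential distribution, this density does not have a constant hazard, which is what allows the new method to capture non-monotonic hazard rates.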
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Jorg Rieskamp
The role of visual attention in perceptual and preferential decision-making has been established over recent decades. Accordingly, several computational models based on sequential sampling theory, such as the attentional drift-diffusion model (aDDM), the gaze-weighted linear accumulator model (GLAM), and the gaze-advantage race diffusion model (GARD), have been proposed to account for visual attention in the accumulation process. These models are quite successful in explaining the role of visual attention in evidence accumulation and have been used in different domains. However, few software packages have been developed to estimate their parameters and fit them to empirical data. 'ASSM' is a Python package that provides a hierarchical Bayesian parameter estimation framework for attentional sequential sampling models. The package is built on Stan and includes different versions of the aDDM, GLAM, and GARD models (e.g., uni-attribute and multi-attribute) with different attentional mechanisms. Moreover, 'ASSM' supports both individual- and group-level fitting procedures and accommodates empirical data from two-alternative or multi-alternative decision tasks.
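As an example of the attentional mechanisms such models implement, the core of the aDDM (Krajbich et al., 2010) discounts evidence for the currently unattended option by a factor theta, so the momentary drift depends on gaze:

```latex
v_{t} \;=\;
\begin{cases}
d\,\bigl(r_{A} - \theta\, r_{B}\bigr) & \text{while fixating } A,\\[4pt]
d\,\bigl(\theta\, r_{A} - r_{B}\bigr) & \text{while fixating } B,
\end{cases}
\qquad 0 \le \theta \le 1
```

Here d is a scaling constant and r_A, r_B are the option values; theta = 1 recovers a standard, attention-free drift-diffusion model.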
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Thomas Richter
Rolf Ulrich
Markus Janczyk
In many areas of psychology and neuroscience, drift-diffusion models (DDMs) have become an important framework for understanding decision processes. Models in this framework assume that response information accumulates in an incremental but noisy manner until a threshold is reached. To date, several software packages exist to fit DDMs, ranging from classical packages such as fast-dm (Voss & Voss, 2007, BRM) or EZ (Wagenmakers et al., 2007, PBR) to modern Python packages such as pyddm (Shinn et al., 2020, eLife) or PyBEAM (Murrow & Holmes, 2023, BRM). However, many of these packages are either limited to time-invariant parameters or require knowledge of Python. Here we present dRiftDM, an R package for fitting DDMs with time-varying parameters. The package uses a numerical approximation of the Kolmogorov forward equation to fit DDMs via maximum likelihood. dRiftDM is designed to be easy to use, with the typical requirements of psychological researchers in mind. For example, we provide straightforward functions for fitting models and loading data sets, exploring model properties, and performing model comparison. dRiftDM can be used flexibly to implement a wide range of DDMs, and it already provides pre-built models that are common in cognitive psychology. By making it easy to apply DDMs in R, dRiftDM provides researchers with a valuable entry point to a model-driven approach to their data.
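For reference, the Kolmogorov forward (Fokker-Planck) equation that the package approximates numerically propagates the density p(x, t) of accumulated evidence under a possibly time-varying drift mu(x, t) and diffusion coefficient sigma^2(x, t):

```latex
\frac{\partial p(x,t)}{\partial t}
\;=\;
-\frac{\partial}{\partial x}\Bigl[\mu(x,t)\,p(x,t)\Bigr]
\;+\;
\frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\Bigl[\sigma^{2}(x,t)\,p(x,t)\Bigr]
```

The probability flux absorbed at the two decision thresholds over time then yields the model's predicted response time distributions.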
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Mr. Amirreza Bagherzadehkhorasani
Jongchan Pyeon
Knowing evacuation routes can save lives during disasters; it is therefore important to optimize them. This work aims to improve emergency exit route design for situations such as earthquakes or fires. We used Virtual Reality (VR) to design mazes with varying difficulty levels (easy and hard) and visual cue types (colors and objects) to examine their impact on participants' visual search times and recall abilities (N = 20). Participants completed memory and cognitive tasks before navigating the mazes in two separate trials, one with and one without instruction. The study employed a fractional factorial design with gender, dominant hand, and visual cue type as factors, studying their influence at the two maze difficulty levels. Our findings suggest that gender, difficulty, and the interaction of gender and visual cue type have statistically significant effects on the average time users spend at each tile. We further analyzed whether the effect of gender could be explained by spatial abilities; however, gender remained a statistically significant factor in navigation performance. We also developed the first GOMS model in VR and analyzed the method with the longest completion time. Findings from this research can assist VR designers in creating inclusive, user-friendly interfaces, alongside real-world applications such as more effective emergency exit routes in interior design, potentially saving lives during disasters.
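To illustrate how a GOMS analysis predicts a method's completion time, the sketch below sums standard keystroke-level operator estimates (Card, Moran, & Newell, 1983) over an operator sequence; the mapping of these operators to VR actions is hypothetical and not the study's actual model.

```python
# Keystroke-level (KLM-GOMS) prediction: total time is the sum of standard
# operator times over a method's operator sequence. Values in seconds.
KLM_TIMES = {
    "K": 0.20,   # keystroke / button press
    "P": 1.10,   # point to a target
    "H": 0.40,   # home hands between devices
    "M": 1.35,   # mental preparation
}

def method_time(operators: str) -> float:
    """Predicted execution time for an operator string, e.g. 'MPK'."""
    return sum(KLM_TIMES[op] for op in operators)

print(method_time("MPK"))   # 1.35 + 1.10 + 0.20 = 2.65 s
```

The method with the longest predicted completion time is simply the operator sequence that maximizes this sum.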
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).
Michael Schredl
People in dreams arise from the dreamer's semantic memory of people and their relations. If memory is veridical, the properties of people in dreams should reflect the properties of waking social life. A man recorded the people occurring in his dreams for 32 years. We report two comparisons with waking life. First, the appearance of a person in a dream is analogous to contact with that person in waking life. Saramäki et al. (2014) identified properties of the contacts individuals made by mobile phone over successive time intervals: (a) a small number of people receive a large fraction of calls; (b) there is turnover of people across intervals, with higher retention for higher-frequency people; and (c) the shape of an individual's distribution of call frequencies across people varies somewhat but, despite turnover, tends to persist over time. These properties were also found in the frequencies of people occurring in the dreams. Second, waking-life social networks tend to have a power-law degree distribution and the small-world combination of high clustering with short distances between vertices. For each year, a dream social network was constructed by representing each person as a vertex and joining two vertices with an edge if the corresponding people co-occurred in a dream. A power law fit the degree distributions well, and the Small-World Propensity was at its upper limit of 1. Occurrences of people in dreams thus have properties like those in waking life, although there are changes over time, for example, a slight decrease in distances between vertices.
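A minimal sketch of how one year's dream network and its small-world ingredients could be computed, assuming `dreams` is a list of sets of person identifiers (one set per dream); this is illustrative, not the authors' code.

```python
# Build a dream social network: vertices are people, and an edge joins two
# people whenever they co-occurred in at least one dream.
from itertools import combinations
import networkx as nx

def dream_network(dreams):
    G = nx.Graph()
    for people in dreams:
        G.add_nodes_from(people)
        G.add_edges_from(combinations(sorted(people), 2))
    return G

G = dream_network([{"A", "B"}, {"A", "C", "D"}, {"B", "C"}])
degrees = [d for _, d in G.degree()]           # input to a power-law fit
clustering = nx.average_clustering(G)          # small-world ingredient 1
path_len = nx.average_shortest_path_length(G)  # ingredient 2 (connected graph)
```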
This is an in-person presentation on July 20, 2024 (17:00 ~ 20:00 CEST).