
Metascience

Flexibility in reaction time analysis: Many roads to a false positive?

Dr. Luis Morís Fernández

Miguel A. Vadillo

In this talk, we explore the influence of undisclosed flexibility in the analysis of reaction times (RTs). RTs entail degrees of freedom of their own, owing to their skewed distribution, the potential presence of outliers, and the availability of different methods to deal with these issues. Moreover, these degrees of freedom are usually not considered part of the analysis itself but treated as preprocessing steps, even though they are contingent on the data. We analysed the impact of these degrees of freedom on the false-positive rate using Monte Carlo simulations over real and simulated data. When several preprocessing methods are used in combination, the false-positive rate can easily rise to 17%. This figure becomes more concerning once we consider that further degrees of freedom are waiting down the analysis pipeline, potentially making the final false-positive rate much higher. We propose that pre-registration would ameliorate this problem by reducing the degrees of freedom available when analysing RT data.
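The multiplicity problem can be sketched with a toy Monte Carlo. This is an illustration only, not the authors' simulations: the RT distribution, trimming rules, sample size, and critical value below are all assumptions. Under the null, each preprocessing pipeline alone holds its nominal error rate, but reporting whichever pipeline "worked" inflates it.

```python
import numpy as np

rng = np.random.default_rng(1)

def welch_t(x, y):
    """Absolute Welch t statistic for two independent samples."""
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    return abs(x.mean() - y.mean()) / np.sqrt(vx + vy)

def fp_rate(n_sims=2000, n=30, t_crit=2.0):
    """Both conditions are drawn from the same skewed RT distribution
    (true null). Count how often at least one of four common
    preprocessing pipelines pushes |t| past an approximate two-tailed
    .05 cutoff."""
    hits = 0
    for _ in range(n_sims):
        # skewed RTs in ms: Gaussian core plus exponential tail
        a = rng.normal(500, 80, n) + rng.exponential(150, n)
        b = rng.normal(500, 80, n) + rng.exponential(150, n)
        ts = [
            welch_t(a, b),                             # raw RTs
            welch_t(a[a < a.mean() + 2.5 * a.std()],   # 2.5-SD trim
                    b[b < b.mean() + 2.5 * b.std()]),
            welch_t(a[a < 1500], b[b < 1500]),         # fixed 1500 ms cutoff
            welch_t(np.log(a), np.log(b)),             # log transform
        ]
        hits += max(ts) > t_crit   # report the "best" pipeline
    return hits / n_sims

print(fp_rate())   # well above the nominal .05
```

Because the four tests run on the same data they are highly correlated, so the inflation is smaller than a naive union bound, yet still roughly double the nominal rate in this sketch.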

Type I error in diffusion models: A drift towards false positives?

Dr. Joaquín Morís

Dr. Luis Morís Fernández

Miguel A. Vadillo

Diffusion models are among the most widely used tools for analyzing reaction times (RTs), and their relevance keeps growing over time. According to these models, evidence is accumulated over time until a threshold is reached, leading to a response. In contrast to simpler RT analyses, these models include more parameters to be estimated, such as the drift rate, the threshold, and non-decision components. This allows a more nuanced understanding of the processes underlying the decision and response. Unfortunately, this higher number of parameters can also be problematic. We present a series of three simulations with Ratcliff's diffusion model. Simulation 1 used empirical data, Simulation 2 used simulated data based on empirically estimated parameters, and Simulation 3 used simulated data based on common distributions of the parameters. The three simulations show that commonly used statistical analyses in diffusion models can lead to an inflation of the Type I error rate. We discuss different strategies to prevent this problem, including pre-registration of the analysis, model comparison, and Type I error corrections.
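The accumulation-to-threshold idea can be sketched as a discretized random walk. This is a bare-bones illustration, not Ratcliff's full model — it omits the across-trial variability parameters, and every numeric setting is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def diffuse(drift, threshold, ndt, n_trials=200, dt=0.001, noise=1.0):
    """Simulate a basic drift-diffusion process: evidence starts at
    threshold / 2 (unbiased) and accumulates with rate `drift` plus
    Gaussian noise until it hits 0 or `threshold`. RT = decision time
    plus non-decision time `ndt` (all times in seconds)."""
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = threshold / 2.0, 0.0
        while 0.0 < x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + ndt)
        choices.append(x >= threshold)   # True = upper-boundary response
    return np.array(rts), np.array(choices)

rts, choices = diffuse(drift=1.0, threshold=1.0, ndt=0.3)
print(rts.mean(), choices.mean())
```

With a positive drift rate, most trials terminate at the upper boundary, and the simulated RT distribution shows the characteristic right skew; fitting such a model back to data is where the extra researcher degrees of freedom discussed in the talk come in.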

Foundational challenges for mathematical and computational cognitive modeling in the 21st century

Ronaldo Vigo

The emergence of machine learning as a computationally intensive approach to analyzing and discovering patterns in data has brought attention to aspects of mathematical modeling in cognitive science and psychology that have been, for the most part, previously ignored. In this talk, I discuss a handful of general mathematical modeling constructs, principles and problems that should be considered by those attempting to construct formal models of cognitive phenomena. These include the meaningfulness/soundness and completeness problem in modeling, the tractability of algorithms, the problem of parameter estimation in “deep learning” neural networks, the adequate testing of the predictive and explanatory power of models, and others. The goal is to demonstrate how modelers may enrich their toolbox of mathematical structures/constructs while improving the robustness, validity and reliability of their models.

On bias, signal detection, and sequential sampling

Ørjan Røkkum Brandtzæg

Prof. Robert Biegler

We aim to address two issues. First, empirical work on how payoff asymmetries can bias decisions has used the total number of false positives relative to the total number of false negatives as a criterion. We explain why and when this criterion dissociates from the bias measure c, such that one criterion can indicate a liberal bias while the other indicates a conservative bias. Second, what is the optimal decision criterion in a sequential sampling problem, in which noise can be reduced by increased sampling, but at a cost? We derive a function for the cost of sampling and use it to find the optimal sampling effort for a range of parameters. We examine both issues using the case of male sexual overperception, the tendency of men to believe, or to act as if, women are more interested in sex than is actually the case. The argument generalises to other decisions under asymmetric payoffs.
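For the equal-variance Gaussian signal detection case, one standard result places the optimal criterion where the likelihood ratio equals a payoff-and-prior ratio beta. A minimal sketch, assuming the textbook form of beta with positive values for correct responses and positive costs for errors (the parameter names are mine, not the authors'):

```python
import math

def optimal_criterion(d_prime, p_signal, v_hit, c_miss, v_cr, c_fa):
    """Equal-variance Gaussian SDT with noise mean at 0 and signal
    mean at d'. The optimal criterion sits where the likelihood ratio
    lr(x) = exp(d' * x - d'**2 / 2) equals beta, the payoff/prior
    ratio, giving x* = ln(beta) / d' + d' / 2."""
    beta = ((1 - p_signal) / p_signal) * ((v_cr + c_fa) / (v_hit + c_miss))
    return math.log(beta) / d_prime + d_prime / 2.0

# symmetric payoffs: criterion sits midway between the distributions
print(optimal_criterion(1.0, 0.5, 1, 1, 1, 1))   # 0.5
# false alarms five times as costly: conservative shift to the right
print(optimal_criterion(1.0, 0.5, 1, 1, 1, 5))   # ≈ 1.60
```

The dissociation the talk describes arises because the counts of false positives and false negatives also depend on base rates and sampling, so they need not move in lockstep with c.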

Prediction error and surprise

Ms. Rebekka Lisøy

Prof. Gerit Pfuhl

Prof. Robert Biegler

Prediction is one of the fundamental functions of the brain. Prediction allows the organism to prepare for events and to direct attention to what is important: the unexpected, surprising and unknown. An event can only be identified as unexpected if there is an expectation or prediction to begin with, and if there is a large enough deviation from that prediction. Because there is random variation in events themselves, in the perception of events, and in their prediction, “large enough” can only be a statistical judgement. If either the criterion for what counts as surprising is inappropriate or the estimate of prediction error is systematically wrong, the balance between Type I and Type II errors shifts. Excessive surprise caused by overestimation of prediction error has been proposed as a cause of both psychosis and autism (Fletcher and Frith, 2009; Frith, 2005; van de Cruys et al., 2014). The question of whether the criterion for surprise might also contribute has received little attention. In a simulation, we varied both the misestimation of prediction error and the criterion for surprise by the same factor, and calculated how often individuals with varying criteria and degrees of misestimation are surprised. We find that the criterion for surprise has a greater influence on the proportion of surprises than misestimation of prediction error does. Evaluating computational theories of psychosis and autism will depend on experimental designs that can distinguish these factors.
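The core quantity — how often an agent is surprised for a given criterion and a given misestimation of prediction error — can be written down analytically in a deliberately reduced one-parameter sketch (standard-normal prediction errors, a fixed absolute-deviation criterion; this is my simplification, not the authors' simulation):

```python
import math

def surprise_rate(criterion, misestimation):
    """True prediction error e ~ N(0, 1); the agent perceives
    misestimation * e and is surprised when |perceived error| exceeds
    criterion. P(surprise) = 2 * Phi(-criterion / misestimation),
    computed here via the complementary error function."""
    z = criterion / misestimation
    return math.erfc(z / math.sqrt(2))

# well-calibrated agent with a conventional 2-SD surprise criterion
print(surprise_rate(2.0, 1.0))        # ≈ 0.0455
# overestimating prediction error by 50% ...
print(surprise_rate(2.0, 1.5))
# ... versus relaxing the criterion by the same factor
print(surprise_rate(2.0 / 1.5, 1.0))
```

In this reduced model the two manipulations are interchangeable, entering only through the ratio criterion / misestimation; the talk's finding that the criterion matters more evidently rests on a richer simulation that separates them, which is exactly why experimental designs distinguishing the two factors are needed.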
