Measurement & Reasoning
Simulation studies are widely used for evaluating the performance of statistical methods in psychology. However, the quality of simulation studies can vary widely in their design, execution, and reporting. In a review of 100 articles containing a simulation study from three leading methodological journals in psychology, we find that many articles do not provide complete and transparent information about key aspects of the study. To address this problem, we provide a summary of the ADEMP (Aims, Data-generating mechanism, Estimands and other targets, Methods, Performance measures) design and reporting framework that has gained traction in biostatistics in recent years, and adapt it to simulation studies in psychology. Based on this framework, we provide ADEMP-PreReg, a step-by-step template for researchers to use when designing, potentially preregistering, and reporting their simulation studies. We give formulae for estimating common performance measures, their Monte Carlo standard errors, and the number of simulation repetitions needed to achieve a desired Monte Carlo standard error. In this presentation, we will further discuss the advantages and disadvantages of preregistering simulation studies, and highlight unique aspects and differences from the preregistration of other empirical studies that must be considered for a simulation study preregistration to be useful.
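As a hedged illustration of the kind of formulae referred to above (using the standard Monte Carlo standard error results for estimated bias; the function names and the numerical example are ours, not taken from the abstract), bias, its Monte Carlo standard error, and the number of repetitions needed for a target standard error can be computed as follows:

```python
# Minimal sketch, assuming bias is the performance measure of interest:
#   bias_hat        = mean(theta_hat) - theta_true
#   MCSE(bias_hat)  = SD(theta_hat) / sqrt(n_sim)
#   n_sim          >= (SD(theta_hat) / MCSE_target)^2
import numpy as np

def bias_with_mcse(estimates, theta_true):
    """Estimated bias and its Monte Carlo standard error across repetitions."""
    estimates = np.asarray(estimates)
    bias = estimates.mean() - theta_true
    mcse = estimates.std(ddof=1) / np.sqrt(len(estimates))
    return bias, mcse

def n_repetitions(anticipated_sd, target_mcse):
    """Repetitions needed so that MCSE(bias) does not exceed target_mcse."""
    return int(np.ceil((anticipated_sd / target_mcse) ** 2))

# Example: if the estimates are expected to have SD around 0.5 and the
# desired MCSE of the bias is 0.01, then
print(n_repetitions(0.5, 0.01))  # 2500 repetitions
```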
Operational definitions are afforded a central role in psychology, taught in nearly every introductory methods class and textbook. However, this deference may be unjustified; as we consider changes to how we carry out scientific inference, it is worth reconsidering the role of operational definitions in psychological science. Defining a construct in terms of a specific measurement outcome cuts out important theoretical content about how the construct affects behavior on a specific task, sources of error, the measurement process, and how the construct affects other tasks. In this talk, I examine how contemporary modeling approaches violate basic assumptions of operational definitions and of operationalism more generally, foregoing assumptions about objectivity, repeatability, independence, and fixed elicitation procedures. Counterintuitively, these departures imbue model-based definitions of constructs with superior measurement properties, such as improved reliability and validity, when compared to their operational counterparts. Instead of relying on operational definitions of constructs, I suggest that psychology adopt relational or computational definitions, representing constructs as latent variables in a multilevel generative model of behavior, self-report, or neuroimaging data. These model-based metrics can better reflect measurement error, account for interactions between measurement devices (tasks, scales) and measurement objects (participants, processes), provide a holistic account of latent constructs across different measurements, and improve scientific communication by clarifying core psychological concepts. Relational definitions of important constructs should emerge naturally as we apply models more regularly, and these definitions and models will improve as we discover or invent mathematical approaches suited to describing psychological processes.
The discrete-state Quantum Random Walk (QRW) model of confidence accumulation has been widely studied as a model of response times in fast decision tasks (Busemeyer et al., 2019). A quantum probability-based model of response accuracy has likewise been used to model response choices in a reasoning task (Trueblood & Busemeyer, 2012). However, the QRW has not been widely applied to the response times of reasoning tasks, which may range up to tens of seconds. The current simulation study therefore evaluated the fit of the QRW to simulated reasoning response times. A discrete-state, discrete-time Markov random walk model was used to approximate four within-trial confidence accumulation patterns typically observed in meta-reasoning studies (Ackerman & Thompson, 2017), yielding fast to slow response times (Malaiya, in press). The QRW was then implemented in Python using the JAX computational package. For each simulated reasoning response-time dataset, the No-U-Turn MCMC sampler was used to draw from the posterior distribution of the QRW drift-rate and diffusion-rate parameters. Convergence of the MCMC chains was assessed with the Gelman-Rubin R-hat statistic and the effective sample size. Finally, to examine the validity of the fitted QRW, response times were resampled, with replacement, from the fitted model, weighted by the likelihood computed for each posterior sample, and summary statistics (e.g., the mean) of these resampled response times were compared with those of the simulated response times used to fit the QRW.
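As a rough sketch of the fitting step described above: the abstract specifies only JAX and No-U-Turn sampling, so the use of NumPyro, the prior choices, the parameter names, and the placeholder log-normal likelihood standing in for the QRW first-passage-time density are all our assumptions, not the presented model.

```python
# Minimal sketch: NUTS posterior sampling of two positive parameters
# ("drift" and "diffusion", hypothetical names) from response-time data.
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(rt):
    drift = numpyro.sample("drift", dist.HalfNormal(1.0))
    diffusion = numpyro.sample("diffusion", dist.HalfNormal(1.0))
    # Placeholder likelihood: a log-normal RT density standing in for the
    # discrete-state quantum random walk first-passage-time distribution.
    numpyro.sample("rt", dist.LogNormal(jnp.log(10.0 / drift), 1.0 / diffusion), obs=rt)

# Illustrative "simulated" reasoning RTs (seconds), generated here only so the
# sketch runs end to end.
rt_sim = dist.LogNormal(2.0, 0.5).sample(random.PRNGKey(1), (200,))

# On a single device, NumPyro warns and runs the chains sequentially.
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000, num_chains=4)
mcmc.run(random.PRNGKey(0), rt=rt_sim)
mcmc.print_summary()  # reports R-hat and effective sample size per parameter
```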
The brain has an innate ability to group sensory information into discriminable categories. Understanding how these categories are formed, and the cognitive mechanisms behind their formation, is therefore of great interest to cognitive psychologists. There are three classic category learning tasks: rule-based, information-integration, and prototype. We investigate architectural differences in visual and decisional processing across these category learning tasks. The theoretical approach used to address this architectural question has been referred to as Systems Factorial Technology (SFT), a set of analytic tools that uses response-time patterns to distinguish serial, parallel, and other cognitive mechanisms. In this presentation we explore SFT predictions across the three category learning tasks.
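For readers unfamiliar with SFT, one of its core tools is the survivor interaction contrast (SIC), whose shape over time distinguishes serial, parallel, and coactive architectures. The sketch below computes an empirical SIC; the double-factorial condition names and the simulated response times are illustrative assumptions, not data from the presentation.

```python
# Minimal sketch: empirical survivor interaction contrast (SIC) from
# response times in the four factorial conditions (High/Low salience on
# each of two processing channels).
import numpy as np

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(RT > t) on a common time grid."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def sic(rt_hh, rt_hl, rt_lh, rt_ll, t_grid):
    """SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)]."""
    return ((survivor(rt_ll, t_grid) - survivor(rt_lh, t_grid))
            - (survivor(rt_hl, t_grid) - survivor(rt_hh, t_grid)))

# Illustrative use with simulated RTs (seconds); faster conditions have
# smaller mean RTs, as expected under a salience manipulation.
rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 3.0, 300)
rt_hh = rng.gamma(4, 0.10, 500)
rt_hl = rng.gamma(4, 0.15, 500)
rt_lh = rng.gamma(4, 0.15, 500)
rt_ll = rng.gamma(4, 0.20, 500)
print(sic(rt_hh, rt_hl, rt_lh, rt_ll, t_grid)[:5])
```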