# Statistics

Using parameter contours to achieve more robust model estimation

Ms. Sabina Johanna Sloman

Stephen Broomell

Daniel Oppenheimer

Many current practices in parameter estimation and model evaluation rely on fit statistics, calculated on the basis of estimated parameterizations of competing models. However, the design of an experiment can influence the conclusions a modeler draws about the parameterizations and relative performance of these models (Broomell, Sloman, Blaha, and Chelen, 2019). We highlight the importance of mapping the model-stimulus space, i.e., understanding how the parameter-dependent predictions of a model change across different stimuli. To achieve this goal, we represent models as a topography across the stimulus space, in which adjacent contour lines are defined by adjacent parameter values. Using data simulated from models of decision-making, we show how our proposed techniques can identify conditions under which traditional parameter estimation techniques will lead to inconclusive and inconsistent results. We also discuss ways in which modelers could exploit these insights to develop experimental designs for more robust parameter estimation. In addition, we demonstrate how a better understanding of the model-stimulus space can help researchers design powerful experiments to diagnose data generated by hypothesized models. Finally, we explore the conceptual implications of representing cognitive models as a topography of the stimulus space.
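As a toy illustration of the mapping idea (my own sketch, not the authors' implementation: the logistic choice rule, stimulus grid, and parameter range are all invented for the example), one can tabulate a model's predictions across a grid of stimuli and parameter values and ask which stimuli discriminate between parameterizations:

```python
import numpy as np

def choice_prob(theta, v_diff):
    """Logistic choice rule: probability of choosing the higher-valued option."""
    return 1.0 / (1.0 + np.exp(-theta * v_diff))

# Hypothetical stimulus space: value differences between two options
stimuli = np.linspace(-3, 3, 61)
# Candidate parameterizations: choice sensitivities
thetas = np.linspace(0.5, 3.0, 26)

# Prediction surface: one row per parameter value, one column per stimulus
preds = choice_prob(thetas[:, None], stimuli[None, :])

# A stimulus helps estimation only where predictions differ across parameters
spread = preds.max(axis=0) - preds.min(axis=0)
best_stimulus = stimuli[np.argmax(spread)]
```

In this sketch, stimuli where the contours collapse (here, a value difference of zero, where every parameterization predicts indifference) carry no information for estimation, while intermediate value differences separate the parameterizations most.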

A hierarchical Bayesian model for the progressive ratio test

Ms. Yiyang Chen

Nicholas Breitborde

Mario Peruggia

Trisha Van Zandt

The progressive ratio test (Wolf et al., 2014) is commonly used to measure motivation, yet the number of studies investigating its underlying mechanisms is limited. In this paper, we present a hierarchical Bayesian model for the progressive ratio structure. This model may be used to investigate the underlying mechanisms of human behavior in progressive ratio tests and to identify the factors contributing to participants' performance. A simulation study shows satisfactory parameter recovery for this model. We apply the model to a progressive ratio data set involving people with schizophrenia, first-degree relatives of people with schizophrenia, and people without schizophrenia. The analysis reveals that the motivation of people with schizophrenia declines faster over time than that of people without schizophrenia, which may make them less compliant with long continuous treatment sessions.
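To make the structure of the task concrete, here is a deliberately simplified simulation of a progressive ratio run. This is an invented toy, not the authors' hierarchical Bayesian model: the schedule multiplier, effort cost, and motivation dynamics below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def breakpoint(m0, decay, max_trials=20):
    """One simulated progressive ratio run: the response requirement grows
    each trial while motivation decays; the run ends (the breakpoint) when
    the log effort required exceeds the remaining motivation."""
    requirement, motivation = 5, m0
    for trial in range(max_trials):
        if motivation < np.log(requirement):
            return trial                      # breakpoint reached
        requirement = int(requirement * 1.5)  # progressive ratio schedule
        motivation *= np.exp(-decay)          # motivation decays over time
    return max_trials

# Faster motivational decay should yield earlier breakpoints on average
fast_decay = np.mean([breakpoint(rng.gamma(4.0, 1.0), 0.50) for _ in range(500)])
slow_decay = np.mean([breakpoint(rng.gamma(4.0, 1.0), 0.05) for _ in range(500)])
```

In this toy, faster motivational decay produces earlier breakpoints, which is the qualitative pattern the abstract reports when comparing groups.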

A robust Bayesian test for context effects in multi-attribute decision making

Dimitris Katsimpokis

Dr. Laura Fontanesi

Jorg Rieskamp

In the past decades, context effects have been crucial in the development of cognitive models of decisions between multi-attribute alternatives. Nevertheless, to date, only a few studies have discussed best practices for analyzing context effects. Context effects occur when participants prefer identical options more or less depending on the choice set in which they are embedded. Context effects are measured using the Relative choice Share of the Target (RST), i.e., the change in preference for a target option from one choice set to the next. In this talk, we discuss two ways of calculating the RST: one frequently used in the literature, and a novel one we propose. Through simulations, we show that our proposed RST analysis overcomes shortcomings of the more traditional approach. In particular, it is resistant to biases due to unequal sample sizes across choice sets. Furthermore, we apply our model to four previously published context effect studies, and we show that some reported context effects can change substantially (from significant to non-significant and vice versa). Implications of these results for cognitive modeling and empirical research on context effects are discussed.
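The abstract does not give the formulas, but the contrast between a pooled and a participant-level calculation can be sketched as follows. The data, true shares, and both estimators below are hypothetical stand-ins, not the authors' exact definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: how often each of 30 participants chose the target
# option in two choice sets, with very unequal trial counts per person.
# The true target share is 0.55 in set A and 0.45 in set B, so the true
# RST (difference in target share across sets) is 0.10.
n_a = rng.integers(5, 50, size=30)
n_b = rng.integers(5, 50, size=30)
k_a = rng.binomial(n_a, 0.55)
k_b = rng.binomial(n_b, 0.45)

# "Traditional" pooled RST: aggregate counts first, then difference the shares
rst_pooled = k_a.sum() / n_a.sum() - k_b.sum() / n_b.sum()

# Participant-level RST: difference shares within each person, then average,
# so participants with many trials cannot dominate the estimate
rst_by_person = np.mean(k_a / n_a - k_b / n_b)
```

When trial counts are unequal, the pooled estimate is dominated by high-count participants and choice sets; weighting every participant equally is one simple way to resist that kind of sample-size sensitivity.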

stanova: User-Friendly Interface and Summaries for Bayesian Statistical Models Estimated with Stan

Dr. Henrik Singmann

Psychological data often consist of multiple orthogonal factors. When analyzing such data with statistical models, these factors, like all categorical variables, need to be transformed into numerical covariates using a contrast scheme. To the surprise of many users, the default contrast scheme in the statistical programming language R is such that the intercept is mapped onto the first factor level, with the consequence that in models with interactions, coefficients represent simple effects at the first factor level instead of the usually expected average effects. I will present a software package for R, stanova (https://github.com/bayesstuff/stanova), that allows estimating statistical models in a Bayesian framework based on Stan and the rstanarm package and that avoids this problem. By default, it uses a factor coding proposed by Rouder et al. (2012, JMP) in which the intercept corresponds to the unweighted grand mean and which allows priors that have the same marginal prior on all factor levels. In addition, stanova provides a summary method that reports results for each factor level or design cell (specifically, the difference from the intercept) instead of for each model coefficient. This provides a better user experience than the default output of many statistical packages. The talk will show the implementation of the package in R and its adaptation in JASP, an open-source alternative to SPSS.
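The contrast-coding point can be demonstrated outside R as well. The following NumPy sketch uses a hypothetical 2x2 design with invented cell means; note that the coding of Rouder et al. is an orthonormal variant of the plain sum-to-zero scheme shown here, so this only illustrates the intercept issue, not the package itself.

```python
import numpy as np

# Hypothetical 2x2 design with invented cell means (an interaction is present)
A = np.array([0.0, 0.0, 1.0, 1.0])
B = np.array([0.0, 1.0, 0.0, 1.0])
y = np.array([10.0, 12.0, 14.0, 20.0])

def fit(X):
    """Exact least-squares coefficients for a saturated design."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Treatment coding (R's default): the intercept is the first cell, and each
# "main effect" is really a simple effect at the other factor's first level
b_treat = fit(np.column_stack([np.ones(4), A, B, A * B]))

# Sum-to-zero coding: the intercept is the unweighted grand mean, and each
# coefficient is half the corresponding average (main) effect
a, b = 2 * A - 1, 2 * B - 1
b_sum = fit(np.column_stack([np.ones(4), a, b, a * b]))
```

With these cell means, treatment coding gives an intercept of 10 (the first cell) and an "A effect" of 4 (the simple effect at B's first level), whereas sum-to-zero coding gives an intercept of 14 (the grand mean) and an A coefficient of 3 (half the average effect of A).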

Prior predictive entropy as a measure of model complexity

J. Manuel Villarreal

Michael Lee

Alexander John Etz

In science, when we are faced with the problem of choosing between two different accounts of a phenomenon, we are told to choose the simplest one. However, it is not always clear what a “simple” model is. Model selection criteria (e.g., the BIC) typically define model complexity as a function of the number of parameters in a model, or some other function of the parameter space. Here we present an alternative based on the prior predictive distribution. We argue that we can measure the complexity of a model by the entropy of its predictions before looking at the data. This can lead to surprising findings that are not well explained by thinking of model complexity in terms of parameter spaces. In particular, we use a simple choice rule as an example to show that the predictions of a nested model can have higher entropy than those of its more general counterpart. Finally, we show that the complexity of a model’s predictions is a function of the experimental design.
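A minimal sketch of the measure for a binomial design (my own illustration, with an invented uniform prior and a fixed-guessing model; the authors' choice-rule example, where the nested model is the more complex one, is more subtle than this):

```python
import numpy as np

rng = np.random.default_rng(7)

def prior_predictive_entropy(sample_theta, n_trials, n_sims=200_000):
    """Monte Carlo Shannon entropy (in bits) of a model's prior predictive
    distribution over success counts in an n_trials binomial design."""
    counts = rng.binomial(n_trials, sample_theta(n_sims))
    p = np.bincount(counts, minlength=n_trials + 1) / n_sims
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# General model: success probability uniform on [0, 1]
h_general = prior_predictive_entropy(lambda m: rng.uniform(0.0, 1.0, m), 10)
# Constrained model: pure guessing, probability fixed at 0.5
h_guess = prior_predictive_entropy(lambda m: np.full(m, 0.5), 10)
# The same general model in a longer design makes more distinct predictions
h_general_40 = prior_predictive_entropy(lambda m: rng.uniform(0.0, 1.0, m), 40)
```

Here the guessing model concentrates its predictions and so has lower prior predictive entropy, and the entropy of the vague model grows with the number of trials, illustrating that complexity in this sense depends on the experimental design.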

Lord's Paradox: An essay on causal inference

Richard M. Shiffrin

In 1967 Frederic Lord published a two-page paper on weight changes over time in two groups. A scientist would surely conclude that the data show individuals in both groups were fluctuating in weight but not gaining or losing. Yet an analysis of covariance (ANCOVA) seemed to lead to the conclusion that the initially heavier group was gaining more than the initially lighter group. Lord seemed to present this as an example of how inappropriate use of ANCOVA leads to absurd conclusions, yet statisticians and causal modelers have been re-examining this paradox ever since, sometimes concluding that one cannot reach a valid conclusion, sometimes concluding that the correct conclusion is more weight gain for the initially heavier group. I use this example to highlight the importance of using science to guide the way we do statistics, rather than using statistics to tell us how to do science. More generally, I wish to highlight the value of generating plausible, simple, and coherent models for observed data.
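Lord's setup is easy to reproduce in simulation. The following sketch (my own, with invented group means and variance components) generates data in which no individual is truly gaining weight, yet the ANCOVA group coefficient comes out clearly positive:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two groups differ in average weight, but every individual just fluctuates
# around their own stable weight: nobody is truly gaining or losing.
n = 2000
group = np.repeat([0.0, 1.0], n)             # 0 = lighter, 1 = heavier group
stable = np.where(group == 0, 60.0, 80.0) + rng.normal(0, 5, 2 * n)
initial = stable + rng.normal(0, 3, 2 * n)   # first weighing
final = stable + rng.normal(0, 3, 2 * n)     # second weighing

# Mean weight change in the heavier group is essentially zero
mean_change_heavy = (final - initial)[group == 1].mean()

# ANCOVA: regress final weight on initial weight plus a group dummy
X = np.column_stack([np.ones(2 * n), initial, group])
group_effect = np.linalg.lstsq(X, final, rcond=None)[0][2]
# group_effect is clearly positive, as if the heavier group "gained more"
```

The positive coefficient arises because the within-group regression slope is below one (each weighing regresses toward the person's own stable weight), so conditioning on initial weight does not equate the groups.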

How useful is posterior-predictive model assessment: Insights from ordinal constraints

Dr. Julia Haaf

Dr. Jeffrey Rouder

The importance of good model specification, that is, of having models that accurately capture differing theoretical positions, cannot be overstated. With this in mind, we submit that methods of inference that force scientists to use models that may not be appropriate for the context are less desirable than methods with no such constraint. Here we ask how posterior-predictive model assessment methods such as WAIC and LOO-CV perform when theoretical positions correspond to different restrictions on a common parameter space. One of the main theoretical relations is nesting, where the parameter space of one model is a subset of that of another. A good example is a general model that admits any set of preferences; a nested model is one that admits only preferences that obey transitivity. We find, however, that posterior-predictive methods fail in these cases, providing no advantage to more constrained models even when the data are compatible with the constraint. Researchers who use posterior-predictive methods are forced to use non-overlapping partitions of parameter spaces, even when some of the subspaces have no theoretical interpretation. Fortunately, there is no such constraint on prior-predictive methods such as Bayes factors. Because these methods appropriately account for model complexity, models need not form a proper partitioning of parameter spaces, and inference with desirable properties nonetheless results. We argue that because posterior-predictive approaches force certain specifications that may not be ideal for scientific questions, they are less desirable in these contexts.
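The contrast with prior-predictive methods can be made concrete with a textbook toy (my own, not from the talk): binomial data and a directional constraint on a single rate parameter, where the nested model's prior mass sits where the data fall.

```python
from math import comb

import numpy as np

rng = np.random.default_rng(11)

# Toy data: 14 successes in 20 trials, compatible with the constraint p >= 1/2
n, k = 20, 14

def marginal_likelihood(theta_samples):
    """Monte Carlo prior predictive probability of the data:
    E_prior[ p(data | theta) ] under a binomial likelihood."""
    like = comb(n, k) * theta_samples**k * (1.0 - theta_samples)**(n - k)
    return float(like.mean())

# General model: rate anywhere in [0, 1]
m_general = marginal_likelihood(rng.uniform(0.0, 1.0, 500_000))
# Nested model: rate constrained to [1/2, 1]
m_nested = marginal_likelihood(rng.uniform(0.5, 1.0, 500_000))

# Bayes factor > 1: the prior predictive rewards the satisfied constraint
bf_nested_vs_general = m_nested / m_general
```

Because the nested model concentrates its prior on the region the data support, its marginal likelihood, and hence the Bayes factor, credits the constraint; a posterior-predictive score for the nested model could at best match the general model here, never beat it.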
