# Statistics: Order Constraints

Ms. Madison Harvey

Daniel Cavagnaro

Dr. Heather Price

Michel Regenwetter

There is much debate, in the field of Psychology and Law, about race effects and racial bias in legal contexts. Many studies report that people display favoritism towards suspects with light skin over suspects with dark skin. Other studies report the opposite effect, where participants give disproportionately favorable judgments and trial outcomes to suspects of color (see Mitchell et al., 2005 for a full meta-analysis). Such conflicting findings provide an ideal opportunity for model competition. We introduce order-constrained modeling to the field of Psychology and Law. Specifically, we model participants’ decisions when put in the role of an interrogator questioning a suspect. We formed a set of 28 mathematical models by taking a series of verbal hypotheses and translating them into order constraints on Binomial parameters. The hypotheses are informed by various research in Psychology and Law regarding how people’s initial guilt judgments about others might affect later decisions (O’Brien, 2009), how tattoos on a suspect could affect observers’ judgments (Brown et al., 2018), and how the race of a suspect might impact the decisions people make about the suspect (Mitchell et al., 2005). Order-constrained modeling allows us to distinguish very specific, nuanced predictions about how these factors might impact people’s decision making. It also allows for combinations of predictions about the impacts of these factors to be tested jointly, in a single statistical test. We also consider novel mixture models that can capture two sub-populations with different race effects. These mixture models provide an opportunity to explore race-related effects in a new light. We pit all competing hypotheses against each other and test them using our lab’s software, QTEST (Regenwetter et al., 2014; Zwilling et al., 2019).
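As a minimal sketch of how a verbal hypothesis becomes an order constraint on Binomial parameters (all counts below are invented for illustration, not the study's data, and this is not the QTEST procedure itself), consider the hypothesis that "guilty" judgments are at least as frequent for dark-skinned as for light-skinned suspects, i.e., p_dark ≥ p_light. The constrained maximum-likelihood fit either keeps the sample proportions (if they satisfy the order) or pools the data:

```python
from math import comb, log

def binom_loglik(k, n, p):
    # Log-likelihood of k successes in n Bernoulli trials with rate p.
    return log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p)

# Hypothetical counts of "guilty" judgments per condition (illustrative only)
k_light, n_light = 30, 100   # light-skin suspect condition
k_dark,  n_dark  = 45, 100   # dark-skin suspect condition

# Unconstrained MLEs are the sample proportions.
p_light, p_dark = k_light / n_light, k_dark / n_dark

# Order constraint encoding the verbal hypothesis "p_dark >= p_light":
# if the sample proportions violate it, the constrained MLE pools the data.
if p_dark >= p_light:
    p_light_c, p_dark_c = p_light, p_dark
else:
    pooled = (k_light + k_dark) / (n_light + n_dark)
    p_light_c = p_dark_c = pooled

ll_constrained = (binom_loglik(k_light, n_light, p_light_c)
                  + binom_loglik(k_dark, n_dark, p_dark_c))
ll_unconstrained = (binom_loglik(k_light, n_light, p_light)
                    + binom_loglik(k_dark, n_dark, p_dark))
lr = 2 * (ll_unconstrained - ll_constrained)  # 0 when data satisfy the constraint
```

With these illustrative counts the order holds, so the constrained and unconstrained fits coincide and the likelihood-ratio statistic is zero; jointly testing many such inequalities across conditions is what the model competition above scales up.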

This is an in-person presentation on **July 19, 2023** (11:20 ~ 11:40 UTC).

Dr. Yung-Fong Hsu

Cultural consensus theory (CCT), developed by Batchelder and colleagues in the mid-1980s, is a cognitively driven methodology for assessing informants’ consensus when the culturally “correct” (consensus) answers are unknown to researchers a priori. The primary goal of CCT is to uncover the cultural knowledge, preferences, or beliefs shared by group members. One of the CCT models, the general Condorcet model (GCM), deals with dichotomous (e.g., true/false) response data collected from a group of informants who share the same cultural knowledge. We propose a new model, the general Condorcet-Luce-Krantz (GCLK) model, which combines the GCM with the Luce-Krantz threshold theory. The GCLK accounts for ordinal categorical data (including Likert-type questionnaires) in which informants can express confidence levels when answering the items/questions. In addition to recovering the consensus answers to the items, the GCLK also estimates other response characteristics, including item-difficulty levels, informants’ competency levels, and guessing biases. We introduce a multicultural version of the GCLK that can help researchers detect the number of cultures in a given data set. We use a hierarchical Bayesian modeling approach with Markov chain Monte Carlo sampling for estimation. A posterior predictive check is established to verify the central assumptions of the model. Through a series of simulations, we evaluate the model’s applicability and find that the GCLK performs well on parameter recovery.
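The GCM's core generative idea — an informant either knows the consensus answer (with probability reflecting competence) or guesses with some bias toward answering "true" — can be sketched as a small simulation. All parameter values below are invented for illustration; the actual GCLK additionally models item difficulty and Luce-Krantz thresholds for confidence ratings, and is estimated hierarchically rather than simulated forward:

```python
import random

random.seed(1)

# Hypothetical GCM-style generative sketch (illustrative parameters only)
truth = [1, 0, 1, 1, 0]          # consensus ("culturally correct") answers to 5 items
competence = [0.9, 0.7, 0.5]     # per-informant probability of knowing an answer
guess_bias = [0.5, 0.6, 0.4]     # probability of answering "true" when guessing

def simulate(truth, competence, guess_bias):
    """Generate a dichotomous informant-by-item response matrix."""
    data = []
    for D, g in zip(competence, guess_bias):
        row = []
        for z in truth:
            if random.random() < D:      # informant knows the consensus answer
                row.append(z)
            else:                        # otherwise guesses "true" with bias g
                row.append(1 if random.random() < g else 0)
        data.append(row)
    return data

data = simulate(truth, competence, guess_bias)
```

Inference inverts this process: given only `data`, a Bayesian sampler recovers posteriors over `truth`, `competence`, and `guess_bias`.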

This is an in-person presentation on **July 19, 2023** (11:40 ~ 12:00 UTC).

Prof. José Luis García-Lapresta

Many decision-making problems involve linguistic information collected by questionnaires based on ordered qualitative scales. In such cases, how agents perceive the scales matters. Some scales can be considered non-uniform, in the sense that agents may perceive different proximities between consecutive terms of the scale. For instance, in the framework of health care and medicine, the ordered qualitative scale {poor, fair, good, very good, excellent}, used by patients to evaluate self-rated health, could be considered non-uniform if ‘fair’ is perceived as closer to ‘good’ than to ‘poor’, if ‘good’ is perceived as closer to ‘very good’ than to ‘fair’, or if ‘very good’ is perceived as closer to ‘good’ than to ‘excellent’. To help decision-makers manage this ordinal information, we propose assigning numerical scores to the linguistic terms of ordered qualitative scales by means of scoring functions. In this contribution, we introduce and analyze several such scoring functions, based on the concept of an ordinal proximity measure, which properly represents the ordinal proximities between the linguistic terms of the scale.
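As an illustrative sketch (the gap values are invented and this is not the paper's specific scoring function), one way a score can respect perceived non-uniformity is to accumulate perceived gaps between consecutive terms, so that terms perceived as closer receive closer scores:

```python
# Terms of the ordered qualitative scale, lowest to highest
scale = ["poor", "fair", "good", "very good", "excellent"]

# Hypothetical perceived gaps between consecutive terms (illustrative only):
# smaller numbers mean the two terms are perceived as closer. These values
# realize the non-uniform perception described above: fair is closer to good
# than to poor, good is closer to very good than to fair, and very good is
# closer to good than to excellent.
gaps = [3, 2, 1, 2]  # poor-fair, fair-good, good-very good, very good-excellent

def scores(scale, gaps):
    """Assign a numerical score to each term by accumulating perceived gaps."""
    s, total = {scale[0]: 0}, 0
    for term, gap in zip(scale[1:], gaps):
        total += gap
        s[term] = total
    return s

print(scores(scale, gaps))
# {'poor': 0, 'fair': 3, 'good': 5, 'very good': 6, 'excellent': 8}
```

On a uniform scale all gaps would be equal and the scores would be equally spaced; non-uniform perceptions stretch or compress parts of the numerical range accordingly.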

This is an in-person presentation on **July 19, 2023** (12:00 ~ 12:20 UTC).

Ms. Meichai Chen

Many statistical analyses performed in psychological studies add extraneous assumptions that are not part of the theory. These added assumptions can adversely influence the conclusions one derives from the analyses. Order-constrained inference allows researchers to avoid unnecessary assumptions, translate conceptual theories into directly testable hypotheses, and run competitions among competing hypotheses. Beyond these advantages, this reanalysis highlights how one can use order-constrained modeling to formulate more nuanced hypotheses at the item level and test them jointly as one single model. The data set comes from Pennycook, Bear, Collins, and Rand (2020). The authors hypothesized that attaching warnings to a subset of fake news headlines increases the perceived accuracy of other headlines that are unmarked. Moreover, they expected this effect to disappear when attaching verifications to true headlines. Using the QTEST software (Regenwetter et al., 2014; Zwilling et al., 2019), we assessed these hypotheses jointly across all individual headlines. To further leverage order-constrained inference, we ran a competition among competing hypotheses using Bayesian model selection methods. We observe that order-constrained inference not only provides a coarse view of all the hypotheses at the aggregate level but also offers a fine-grained perspective on them at the item level.

This is an in-person presentation on **July 19, 2023** (12:20 ~ 12:40 UTC).
