Reasoning and metacognition
Stefan Bode
Dr. Patrick Cooper
Dr. Trevor Chong
Selective bias in information search contributes to the formation of polarized echo chambers of belief. However, the cognitive mechanisms underlying this bias remain poorly understood. In this study, we aimed to isolate the role of affective content in information source selection. In Experiment 1, participants won financial rewards depending on the outcomes of a set of lotteries. They were not shown these outcomes, but instead could choose to view a prediction of each lottery outcome made by one of two sources. Before choosing their favored source, participants were first shown a series of example predictions made by each. The sources systematically varied in the accuracy and positivity (i.e., how often they predicted a win) of their predictions. Choice behavior was analyzed using a hierarchical Bayesian modeling approach. Results indicated that both source accuracy and positivity influenced participants' choices. Importantly, those seeking more positively biased information believed that they had won more often and held those beliefs with higher confidence. In Experiment 2, we directly assessed the effect of positivity on the perceived credibility of a source. In each trial, participants watched a single source make a series of predictions of lottery outcomes and provided ratings corresponding to the strength of their beliefs in that source. Results suggested that positively biased sources were not seen as more credible. Together, these findings suggest that positively biased information is sought partly for the desirable emotional state it induces rather than because of an enhanced perception of credibility. Information sought on this basis nevertheless produced consequential, biased beliefs about the world-state.
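As a purely illustrative sketch of the kind of choice model this approach implies (not the authors' fitted model), source choice can be written as a logistic function of the differences in accuracy and positivity between the two sources; the function name, beta weights, and example values below are hypothetical.

```python
import numpy as np

def p_choose_source_a(acc_a, acc_b, pos_a, pos_b, beta_acc=4.0, beta_pos=2.0):
    """Logistic choice rule: probability of preferring source A as a
    function of its advantages in accuracy and positivity.
    The beta weights are illustrative, not fitted values from the study."""
    utility_gap = beta_acc * (acc_a - acc_b) + beta_pos * (pos_a - pos_b)
    return 1.0 / (1.0 + np.exp(-utility_gap))

# A source that is less accurate but more positive can still attract
# a substantial share of choices if positivity carries weight.
print(p_choose_source_a(acc_a=0.6, acc_b=0.8, pos_a=0.9, pos_b=0.5))  # 0.5
```

In a hierarchical Bayesian treatment, the two beta weights would be estimated per participant under group-level priors rather than fixed as here.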
Consider the inference sequence "The glass had orange juice, therefore it had orange juice or tequila, therefore if it did not have orange juice then it had tequila". How convincing is it? To draw inferences like this, people may consider the meanings of the statements involved (how are "or" and "if" to be interpreted?), their degree of belief that each statement is true (do we know for certain that the glass had orange juice?), and any logical relations between the statements (e.g., does one statement entail or preclude another?). In reasoning research, these three pieces of information have often been treated as independent and potentially conflicting, with the logical information considered rational and the content and beliefs considered biases. But theoretically such a conflict is not necessary, and empirically it does not seem plausible. In the Bayesian approach to reasoning described here, the three pieces of information are integrated and jointly necessary to draw good inferences. This approach is based on the concept of coherence. Degrees of belief in statements are coherent if and only if they follow the principles of probability theory (e.g., the glass cannot be less than empty or more than full; and if it contains orange juice and we add tequila, then the volumes of the two liquids will add up). But measuring the coherence of people's uncertain reasoning is not straightforward, especially in situations in which the available information is uncertain, incomplete, and changeable. To make such measurements, we must account for how logical constraints between probabilities shift when new information becomes available; define, and adjust for, the probability of making a coherent response just by chance; and ascertain which patterns of statement probabilities would allow us to make plausibly falsifiable, and thus informative, assessments of sensitivity to coherence. I describe some of these challenges and discuss how we might tackle them in the quest to increase our understanding of reasoning under uncertainty.
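As one concrete example of the kind of coherence constraint at issue (the example is mine, not drawn from the abstract): probability theory bounds the degree of belief one may coherently assign to a disjunction, given the beliefs assigned to its disjuncts.

```latex
% Coherence interval for a disjunction: judged probabilities falling
% outside these bounds are incoherent.
\max\{P(A),\, P(B)\} \;\le\; P(A \lor B) \;\le\; \min\{1,\; P(A) + P(B)\}
```

In the running example, the belief that the glass "had orange juice or tequila" can be no less probable than the belief that it had orange juice.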
Stephen Broomell
Prof. Cleotilde (Coty) Gonzalez
Time limits are common constraints that can change decision making. However, despite experimental evidence for many effects of time constraints, the empirical record is mixed about when specific changes in decision making do or do not occur. We argue that this uncertainty stems partly from the methods commonly used to select time constraints in experiments, which create general time pressure but are not designed to prevent specific decision processes. We demonstrate a novel method for selecting time constraints for experiments. First, we draw on the optimal experimental design literature to design choice tasks for inferring decision strategy use. Then we measure the response times of human participants using specific decision strategies on these tasks (Experiment 1). We analyze these response times to identify time constraints that should preclude specific decision strategies, and then attempt to replicate previously observed effects, such as shifts from weighted additive strategies to lexicographic strategies under stricter time constraints (Experiment 2). Experiment 2 found that participants shifted their decision strategies even in response to the most lenient time constraint, and that participants at all levels of time constraint made decisions consistent with a weighted additive strategy more often than predicted. The first finding is consistent with time-monitoring-focused theories of time-constrained decision making, and the second raises the question of whether previous findings were influenced by experimental paradigms that prevented automatic processing. The empirical findings and the time constraint selection method are discussed for their methodological and theoretical relevance to studying decision making under time constraints.
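To make the contrast between the two strategy classes concrete, here is a minimal sketch (hypothetical attribute values and weights, not the authors' stimuli) of how a weighted additive (WADD) and a lexicographic (LEX) strategy can reach different choices on the same item, which is what makes such items diagnostic of strategy use.

```python
import numpy as np

# Hypothetical two-alternative, multi-attribute choice task.
# Rows = options, columns = attributes.
options = np.array([[0.9, 0.2, 0.4],
                    [0.5, 0.8, 0.7]])
weights = np.array([0.5, 0.3, 0.2])   # attribute importance (illustrative)

def weighted_additive(options, weights):
    """WADD: integrate all attributes, weighted by importance."""
    return int(np.argmax(options @ weights))

def lexicographic(options, weights, tol=0.1):
    """LEX: inspect attributes in order of importance; stop at the
    first attribute that discriminates between the options."""
    for attr in np.argsort(-weights):
        diff = options[0, attr] - options[1, attr]
        if abs(diff) > tol:
            return 0 if diff > 0 else 1
    return 0  # tie-breaker if no attribute discriminates

print(weighted_additive(options, weights))  # -> 1 (integrates everything)
print(lexicographic(options, weights))      # -> 0 (decides on top attribute)
```

On such diagnostic items, the time a participant needs to execute each strategy can then be measured and used to set constraints intended to rule specific strategies out.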
Abhay Alaukik
Matthew Baldwin
Emily Unruh
Jenna Blyler
Previous work shows that task structure affects sampling behavior: when prompted to choose between two options (a choice task), people sample information that is polarized away from, and more extreme than, the underlying true information; this polarization/extremism disappears when people instead estimate the relative preferability of the two options (an estimation task). However, these findings focused on information that was numeric ("Option A is 45% more efficient than Option B") and pertained to the same criterion ("efficiency"). Real-life information is often qualitative ("Option A is expensive") and spans multiple criteria (efficiency, environmental concerns, personal preferences, etc.). In a set of three studies, we test whether sampling qualitative, independent information covering several criteria in choice (vs. estimation) tasks still leads to polarized and extreme samples. In the first study, we collected and analyzed participant-generated qualitative information about the options in a wide variety of dilemmas, retaining the most frequent entries. In the second study, participants rated, for each piece of information, how likely they would be to choose an option given that information. These ratings were used to quantify the sway/weight of each qualitative piece of information. In the third study, participants freely sampled this information to evaluate several dilemmas in two of the following three task conditions: (a) choose between the given options (choice task), (b) estimate which option is better and by how much (estimation task), and (c) rate the quality of each option independently of the other (rating task). We show that the choice condition led people to gather polarized samples of information relative to the other conditions, and that the rating condition encouraged more information sampling overall than the other two conditions. These results suggest that independent rating goals can reduce information polarization and improve information search.
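As a hypothetical illustration of how sway ratings could be used to score the one-sidedness of a sample (the index and the values below are my assumption, not the authors' measure):

```python
import numpy as np

def polarization(sampled_weights, pool_weights):
    """Hypothetical polarization index: how much more one-sided the
    sampled evidence is than the full pool it was drawn from.
    Weights are signed sway ratings (negative favors option A,
    positive favors option B), as quantified in the second study."""
    return abs(np.mean(sampled_weights)) - abs(np.mean(pool_weights))

pool = np.array([-0.6, -0.2, 0.1, 0.3, 0.5])  # roughly balanced pool
sample = np.array([0.3, 0.5, 0.1])            # one-sided sample
print(polarization(sample, pool))  # > 0 -> sample is polarized
```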
Transitive inference (TI) is a fundamental form of reasoning whereby, after learning a set of premises (e.g., A < B, B < C), people infer the relationship between novel pairs of items (e.g., A < C). Existing computational models of TI differ on how premises are combined to support novel inferences: according to encoding-based models, people form a unified cognitive map of the hierarchy (e.g., A < B < C < D ...) during training and directly compare items' positions during inference, with faster, more accurate judgments for items that are more distant. Under retrieval-based models, people retrieve and integrate premises at the time of test; because distant inferences require the retrieval of more intervening premises, these models predict slower, less accurate judgments for more distant inferences. Previous studies have examined either encoding- or retrieval-based models, but little existing work has considered how reliance on these strategies might differ across individuals, training conditions, or even across different judgments within the same task. The present study examined how the use of encoding- and retrieval-based TI depends on the difficulty of training, with more difficult training expected to interfere with the construction of a unified cognitive map and to increase reliance on retrieval-based inference. While there was little evidence of pure retrieval-based inference, more difficult training conditions were associated with increased use of a hybrid strategy: people relied on a unified map for distant inferences while resorting to more effortful premise retrieval for inferences about nearby items, whose positions in the hierarchy were more uncertain. I present a novel approach for identifying this hybrid inference strategy using Bayesian hierarchical modeling, with models fit to both choices and response times (RTs) during inference. These findings suggest that individuals adaptively recruit direct premise memory to complement inferences supported by unified cognitive maps.
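A minimal sketch of the opposing distance predictions that allow the two model classes to be distinguished; the functional forms and parameter values below are illustrative assumptions, not the fitted hierarchical models.

```python
import numpy as np

# Symbolic distance between the two items in an inference pair
# (A < C has distance 2, A < D distance 3, ...). Values are illustrative.
distance = np.arange(1, 6)

# Encoding-based account: items are compared on a unified map, so
# accuracy rises (and RT falls) with distance (symbolic distance effect).
acc_encoding = 1 / (1 + np.exp(-(distance - 1)))

# Retrieval-based account: distant pairs require chaining more
# intervening premises, so accuracy falls with distance.
acc_retrieval = 0.95 * 0.85 ** (distance - 1)

for d, e, r in zip(distance, acc_encoding, acc_retrieval):
    print(f"distance {d}: encoding {e:.2f}, retrieval {r:.2f}")
```

A hybrid strategy of the kind reported here would show the retrieval-like pattern only for near pairs, converging on the encoding-like pattern as distance grows.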