Session 2: Thursday 11 February, 10am-11am
Amy X. Li
A “learning trap” is a pattern of suboptimal decision-making thought to arise from the overgeneralization of early learning in environments where feedback is choice-contingent – i.e., feedback is provided about chosen options but not foregone options. Learning traps have been implicated in suboptimal decision-making in a range of domains. However, the cognitive mechanisms that underlie such traps are poorly understood. This paper describes a novel paradigm for investigating trap formation in category learning and preliminary modeling of underlying processes. Participants were tasked with learning to discriminate between categories of visual stimuli (“friendly bees” that added reward points, and “dangerous bees” that subtracted points). Accurate discrimination was based on a conjunctive rule involving two feature dimensions. Participants could choose to either approach or avoid individual instances on each learning trial. When feedback was contingent on approaching an instance, most participants learned an incomplete one-dimensional (1D) rule, resulting in suboptimal rewards. The prevalence of this learning trap was reduced by varying the payoff structure associated with the categories, so that small losses were common, and large rewards were rare. Preliminary modeling shows that some of these findings can be simulated by a modified version of Kruschke’s (1992) ALCOVE category learning model (ALCOVE-RL).
Dr. Vanessa Ferdinand
Ms. Elle Pattenden
Ideologically committed minds form the basis of political polarisation, but ideologically guided communication can further entrench and exacerbate polarisation, depending on the structure of ideologies and the social network dynamics on which cognition and communication operate. Combining a model of ideological cognition with a model of social influence dynamics on social networks, we develop a new model of ideological cognition and communication on dynamic social networks and explore its implications for ideological political discourse. Using the tensor product model, we explicitly model ideologically filtered interpretation of social information and ideological commitment to initial opinions, and explore how communication on dynamically evolving social networks exacerbates ideologically polarised political discourse. The results show that ideological interpretation and commitment are foundational to polarised discourse, but that communication and social network dynamics tend to accelerate and amplify polarisation. Furthermore, when agents can sever social ties with those who disagree with them (i.e., avoidance of heterophily), even non-ideological agents may form an echo chamber, clustering into opinions that resemble those of an ideological group. In all, our simulations suggest that ideological cognition and social network dynamics interact under different social-technological circumstances to generate different consensualisation-polarisation dynamics in public opinion.
Dr. Rachel Stephens
Prof. John Dunn
Prof. Brett Hayes
A central question in the psychology of reasoning is what principles people use to determine inference quality. Dual-process theories propose that people have two qualitatively distinct types of thinking: Type 1 is said to be intuitive and heuristic, Type 2 deliberative and analytic. Traditionally, evaluating the logical structure of text arguments has been associated with Type 2 processing. But people have been found to consider logical structure in perceptual discrimination tasks, suggesting that logical structure can also be processed via Type 1. A simpler explanation for the effects of logical structure on perceptual judgments is that people trade off perceptual and logical cues, examining logical structure only when the perceptual task is ambiguous or difficult. In two experiments we varied the ambiguity or difficulty of the perceptual task. Using a Bayesian latent mixture model, we classified participants into three discrete groups: those responding on the basis of logical structure, perception, or guessing. More participants were classified as basing their responses on logical structure when the perceptual task was difficult or ambiguous. The findings provide evidence for the trade-off hypothesis, making the postulation of dual processes unnecessary.
Prof. Maarten Speekenbrink
Prof. Ben Newell
<div>Gathering information about gamble options from multiple sources simultaneously can make it difficult to assign an event to its respective source (i.e., did option ‘A’ pay out $10 and option ‘B’ $12, or was it the other way around?). We examined the effect of source-monitoring load on risky choice by testing repeated choices between “Safer” and “Riskier” slot machines with different long-run averages. To manipulate source-monitoring load, we varied across trials the congruency between the screen positions of the machines in the choice phase and in the feedback display of outcomes. As higher demands on source monitoring were imposed, participants chose the superior option (higher long-run average) more often. A modelling analysis revealed that participants’ choices were consistent with a convex exponential weighting function that assigns greater weight to larger outcomes, polarizing evidence and choice towards the superior machine. We conclude that increasing source-monitoring load encouraged participants to focus on the information most consistent with a goal of receiving the highest payoffs.</div>
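The polarizing effect of a convex exponential weighting function can be illustrated with a minimal sketch. The abstract does not give the exact functional form or parameter values, so the form w(x) = exp(γ·x), the γ = 0.1 setting, and the toy outcome sequences below are all assumptions for illustration only:

```python
import math

def exp_weight(x, gamma=0.1):
    """Hypothetical convex exponential weight: larger outcomes
    receive disproportionately more weight (gamma > 0)."""
    return math.exp(gamma * x)

def weighted_value(outcomes, gamma=0.1):
    """Subjective value of a machine: outcome average weighted
    by the convex exponential function."""
    w = [exp_weight(o, gamma) for o in outcomes]
    return sum(wi * oi for wi, oi in zip(w, outcomes)) / sum(w)

# Toy outcome histories (assumed): the "riskier" machine has the
# higher long-run average here (8 vs. 5.2), carried by rare large wins.
safer = [5, 5, 6, 5, 5]
riskier = [0, 0, 0, 0, 40]

# Convex weighting pulls each machine's subjective value toward its
# largest outcomes, exaggerating the superior machine's advantage.
```

Under this sketch, the riskier machine's weighted value lands far above its objective mean, so the evidence is polarized toward the superior option, in the direction the abstract describes.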
Dr. Bradley Walker
Prof. Yoshi Kashima
Dr. Nic Fay
Bayes’ theorem offers a normative prescription for how people should update their original beliefs (i.e., their priors) in light of new evidence. Whether people reason about probabilities in accordance with Bayes’ theorem has long been a subject of fierce debate: some researchers suggest that priors are largely ignored (e.g., Bar-Hillel, 1980); others that priors are overweighted and people are conservative (e.g., Edwards, 1968); still others that Bayesian models predict performance quite well, but only at the aggregate population level (e.g., Mozer et al., 2008). Yet much of this previous work does not measure each person’s full prior distribution, making it difficult to determine exactly what participants were doing. Across two experiments, we elicited people’s full prior distributions for a simple probability task. We found that (a) people disregarded the prior and determined the posterior directly from the likelihood (which is mathematically equivalent to using a uniform prior), and (b) when estimating the posterior, people weighted evidence accurately only in the aggregate, with almost all individuals either overweighting or underweighting evidence relative to the normative standard of Bayes’ theorem. This work helps clarify to what extent Bayes’ theorem describes people’s actual probability estimates.
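The equivalence noted in finding (a) — ignoring the prior amounts to assuming a uniform prior — follows directly from Bayes’ theorem, since the posterior is proportional to prior × likelihood. A minimal grid-based sketch makes this concrete; the inference task (estimating a success rate from 7 successes in 10 trials) and the non-uniform comparison prior are hypothetical, not taken from the experiments:

```python
def posterior(prior, likelihood):
    """Grid-based Bayes' rule: posterior is prior * likelihood, normalized."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical task: infer a success rate theta from 7 successes in 10 trials.
theta = [i / 100 for i in range(1, 100)]
k, n = 7, 10
likelihood = [t**k * (1 - t)**(n - k) for t in theta]

uniform = [1.0] * len(theta)   # flat prior
informative = theta[:]          # an arbitrary non-uniform prior (assumption)

post_uniform = posterior(uniform, likelihood)
post_ignored = [l / sum(likelihood) for l in likelihood]
# post_uniform equals post_ignored: determining the posterior directly
# from the likelihood is the same as updating from a uniform prior,
# whereas a non-uniform prior would shift the posterior.
```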
Dr. Saoirse Connor Desai
When do people recognize that data has been “censored” from an evidence sample, and how do they respond? The present work examines 1) how people generalize from a smaller sample, which may have been subject to censoring, to a larger sample, 2) how inferences differ across sample distributions, and 3) how inferences differ with and without a censoring prompt. Participants sampled online quality ratings of a novel restaurant that followed several different distributions (e.g., bimodal, left-skewed), summarized in a frequency distribution figure. They then constructed their own frequency distribution for a larger “population” of ratings and answered questions about the trustworthiness/believability of the initial sample. Participants were more likely to “fill in” missing data when observations in the sample distribution were sparse (e.g., one-star ratings) or were inconsistent with priors about distribution shape. Human responses were compared with the predictions of a computational model that reproduced the initial sample, a Bayesian model that assumed no censoring, a Bayesian “censoring” model, and a model that averages the empirical priors and initial observations. The averaging model performed best but did not capture responses in the sparse-observation conditions. The results suggest that people factor in both their prior distributional beliefs and the observed sample data when generalizing from censored data.