Categorization
Prof. Pernille Hemmer
Qiong Zhang
An individual stimulus from a category is often judged to be closer to the center of that category than it truly is. This effect has been demonstrated across different domains of perception and cognition and has been explained by the Category Adjustment Model (CAM; Huttenlocher et al., 2000), which posits that humans optimally integrate noisy stimuli with prior knowledge to maximize their average accuracy. Subsequent extensions to CAM have been proposed to account for more complex category effects, such as when there is more than one category involved or when prior knowledge involves multiple levels of abstraction. However, the question remains whether there exists an underlying general framework for the way people perceive categories across different tasks. To fill this gap, we propose a generalized Bayesian model of category effects, called the generalized CAM model (g-CAM). We demonstrate that CAM and its previous extensions are special cases of g-CAM, and that g-CAM can additionally capture novel experimental effects involving atypical examples.
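For illustration, the central computation in CAM admits a compact sketch. Under the standard assumptions of Gaussian perceptual noise and a single Gaussian category prior, the Bayes-optimal estimate is a precision-weighted average that pulls the noisy observation toward the category center; the function and parameter names below are illustrative, not taken from the paper.

```python
def cam_estimate(stimulus, category_mean, category_var, noise_var):
    """Bayes-optimal stimulus estimate under the Category Adjustment Model (sketch).

    Assumes the true stimulus value is drawn from a Gaussian category prior
    N(category_mean, category_var) and observed with Gaussian noise of
    variance noise_var. The posterior mean is a weighted average, which
    produces the bias toward the category center described in the abstract.
    """
    weight = category_var / (category_var + noise_var)  # weight on the observation
    return weight * stimulus + (1 - weight) * category_mean

# A stimulus at 8.0 from a category centered at 5.0 is judged closer to 5.0:
print(cam_estimate(stimulus=8.0, category_mean=5.0, category_var=1.0, noise_var=1.0))  # 6.5
```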
This is an in-person presentation on July 21, 2023 (15:20 ~ 15:40 UTC).
Jorg Rieskamp
Dr. Jana Jarecki
People excel at categorization, even under time pressure. We investigated how the human mind copes with time pressure during category inference by comparing three cognitive mechanisms within a framework in which inferences about new objects are informed by similar previous objects. Specifically, we tested whether time pressure causes people to focus their attention on fewer object features, to respond less precisely, or to simplify the similarity computation by counting the number of differing features between objects while ignoring the precise feature value differences. To this end, we collected experimental data in the domains of categorization and similarity judgments and combined inferential statistics and computational cognitive modeling within the exemplar-similarity framework. In the categorization experiment, participants (N = 61) solved a trial-by-trial supervised, binary category learning task without time pressure, followed by unsupervised transfer categorizations with individually calibrated time pressure for half the participants (M = 902 ms). The experimental design was optimized in simulations to maximally discriminate between the formal models in the transfer task. The results show that participants categorized the transfer stimuli less consistently with than without time pressure. By contrast, we found no credible evidence that time pressure induced an attention focus or a simplified similarity computation. In the similarity judgment experiment, participants (N = 175) rated on a slider the similarity of various stimulus pairs, once without time pressure and once with an individually calibrated time pressure, manipulated across participants to be either weak (N = 64, M = 2018 ms), medium (N = 55, M = 1225 ms), or strong (N = 56, M = 510 ms). The results corroborate those from the categorization experiment, strongly suggesting that time pressure lowers response precision. Participants' similarity judgments became more variable with time pressure, plateauing at medium time pressure, with SDs of .13 (no time pressure) < .17 (weak) < .19 (medium) = .18 (strong), where < denotes a statistically significant difference and = a non-significant one in a linear mixed model. At the same time, participants' mean similarity judgments for the stimulus pairs followed the same rank order across all experimental conditions. This strongly suggests that time pressure did not change participants' similarity judgments qualitatively, as would be expected from an attention focus or a simplified similarity computation. In sum, we found that cognitive load in similarity-based categorizations and judgments does not necessarily affect computational processes related to attention or psychological similarity, but rather the precision with which people translate their internal beliefs into manifest responses.
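Two of the three candidate mechanisms can be contrasted in a short sketch. Assuming an exemplar model with city-block distance (in the spirit of the generalized context model), the full similarity weighs how much feature values differ, whereas the simplified similarity only counts which features differ; all names and parameter values here are illustrative, not the authors' implementation.

```python
import numpy as np

def gcm_similarity(x, exemplar, attention, sensitivity=1.0):
    """Full similarity: graded feature-value differences (city-block distance)."""
    return np.exp(-sensitivity * np.sum(attention * np.abs(x - exemplar)))

def simplified_similarity(x, exemplar, attention, sensitivity=1.0):
    """Simplified similarity: count which features differ, ignore by how much."""
    return np.exp(-sensitivity * np.sum(attention * (x != exemplar)))

x = np.array([0.2, 0.9, 0.5])
e = np.array([0.2, 0.1, 0.5])
w = np.ones(3) / 3  # equal attention; an attention focus would concentrate w on fewer features
print(gcm_similarity(x, e, w))         # sensitive to the size of the 0.9-vs-0.1 difference
print(simplified_similarity(x, e, w))  # only registers that one feature differs
```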
This is an in-person presentation on July 21, 2023 (15:40 ~ 16:00 UTC).
Michelle Tham
Michael Lee
One intuition in the categorization literature is that how we assign a stimulus to a given category depends on the assignments of other stimuli that we have encountered in the past. In other words, it is assumed that stimulus–stimulus interactions can affect categorization decisions. Nevertheless, categorization models typically avoid modeling this feature, either by considering the "true" category assignment for the stimulus under a fixed experimental design, or by taking some function of a participant's previous responses. A consequence of these assumptions is that learning about the associations between a specific stimulus and the categories can only occur on trials when that stimulus is presented. Coupled hidden Markov models (CHMMs) allow stimulus–stimulus interactions in categorization to be modeled directly, so that associations to categories are continuously updated. The key idea underlying this approach is that the category (state) that a given stimulus (chain) is assigned to on a trial is a function of its assignment on the previous trial and the categories that all other stimuli are inferred to be in. In other words, category assignments are updated continuously by a latent process based on participants' trial-by-trial choices. We present a Bayesian implementation of a CHMM on two classic categorization tasks: Lewandowsky's (2011) replication of the Shepard et al. (1961) Type VI category structures, and the extension of this task to ternary stimuli presented by Lee and Navarro (2002). We show that the CHMM allows us to obtain posterior inferences about the category assignment (state) of each stimulus at every trial in the experiment.
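To make the coupling concrete, here is a deliberately simplified transition step for such a model: each stimulus (chain) keeps its previous category (state) with some probability and otherwise moves toward the category currently occupied by the other chains. The specific kernel and parameter names are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def chmm_step(states, stay_prob, n_categories=2):
    """One coupled transition: each chain's new state depends on its own
    previous state and on the states of all other chains (sketch)."""
    new_states = states.copy()
    for i in range(len(states)):
        if rng.random() > stay_prob:  # with prob. 1 - stay_prob, consult the other chains
            others = np.delete(states, i)
            counts = np.bincount(others, minlength=n_categories)
            new_states[i] = rng.choice(np.flatnonzero(counts == counts.max()))
    return new_states

states = np.array([0, 0, 1, 1, 1])  # current category assignments of five stimuli
print(chmm_step(states, stay_prob=0.8))
```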
This is an in-person presentation on July 21, 2023 (16:00 ~ 16:20 UTC).
Prof. Andy Wills
Prof. Bettina von Helversen
Stimulus classification is an everyday feat (e.g., differentiating ultrasound images in medical diagnosis). Category feedback, however, is often non-deterministic (e.g., untrue with 25% probability, a.k.a. probabilistic feedback), rendering experience somewhat unreliable, and the question is how humans (still) learn stimulus–category regularities. In probability learning and economic decisions, such as risky gambles, the question usually reverses to why humans do not perfectly exploit regularities when correct categorization leads to reward (e.g., non-rational probability matching; Feher da Silva et al., 2017; Plonsky, Teodorescu, & Erev, 2015). Here, we address both questions in a domain-general framework formalizing how humans, in probabilistic tasks, learn sequential feedback regularities in parallel to visual category structures. We use our recently introduced Category Abstraction Learning (CAL) framework (Schlegelmilch, Wills, & von Helversen, 2021), a connectionist category learning model able to extrapolate and contextually modulate simple rules. We implement the idea that participants count the streak of common events (stimuli) to predict when rare events or violations of a learned rule will occur (e.g., conditional hypotheses). We show that CAL's learning mechanisms readily extend to the mentioned domains, predicting probability matching in general, but also the proportion of strategies often discussed as Win-Stay-Lose-Shift (WSLS), as well as the more recently studied sequential pattern learning (akin to the gambler's fallacy). CAL also provides an account of expectancy priors (see Koehler & James, 2014), proposing that they stem from an awareness that unobserved stimuli lead to unobserved outcomes (contrasting), which are continuously updated during experience-based decision making. We present CAL simulations and brief reanalyses of studies on risky gambles, probability learning, and fear conditioning (e.g., Szollosi et al., 2022), showing CAL's potential to address long-standing questions regarding non-stationary expectations of stimulus–outcome probabilities and risk preference in terms of rule abstraction.
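The streak-counting idea can be illustrated with a toy rule. This is a hypothetical sketch of the intuition only, not CAL's actual connectionist mechanism, and the threshold parameter is our own.

```python
def streak_prediction(history, threshold):
    """Predict the next binary event from the current run of the common event.

    After `threshold` repetitions of the common event (1), the learner
    expects the rare event (0): a conditional hypothesis akin to the
    gambler's fallacy described in the abstract.
    """
    streak = 0
    for event in reversed(history):
        if event == 1:
            streak += 1
        else:
            break
    return 0 if streak >= threshold else 1

print(streak_prediction([1, 0, 1, 1], threshold=3))  # 1: streak too short, expect the common event
print(streak_prediction([1, 1, 1, 1], threshold=3))  # 0: long streak, expect the rare event
```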
This is an in-person presentation on July 21, 2023 (16:20 ~ 16:40 UTC).
Dr. Jana Jarecki
Jorg Rieskamp
People habitually assign objects to categories based on the objects' features. Within each category, the object features are distributed, meaning that they can vary and correlate across the category members. Past research has found mixed evidence concerning the extent to which people make use of the distribution of features in categories to categorize new objects. To investigate how within-category feature distributions affect people's categorizations, we collected and analyzed data from two categorization experiments. Participants classified geometrical figures with two features in a trial-by-trial supervised, binary category learning task, followed by an unsupervised transfer task with new feature value combinations. In both experiments, the designs were optimized to compare categorization models that either consider or ignore within-category feature distributions. Experiment 1 used a high-variance category and a low-variance category, and the transfer stimuli fell between the categories. In Experiment 2, both categories had a strong feature correlation, and the transfer stimuli were located in the correlational direction of one category but closer to the members of the other category. Importantly, whether or not the within-category feature distributions are processed determines how the transfer stimuli should be classified. Our results show that participants' classifications of the transfer stimuli were in line with ignoring the within-category feature distributions in both experiments. That is, participants (both Ns = 43) assigned the transfer stimuli predominantly to the low-variance category in Experiment 1 (M = 71%) and to the closer category with an incongruent feature correlation in Experiment 2 (M = 88%). Computational cognitive modeling showed that the model that ignores within-category feature distributions described most participants in both experiments with strong evidence (n = 27 in the variance experiment; n = 32 in the correlation experiment), suggesting that people mostly ignore within-category feature distributions when they categorize new objects. One reason for these findings might be the computational costs involved in estimating the distribution of features in categories.
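The two model classes being compared can be sketched as follows: a distribution-sensitive classifier scores each category by a multivariate-Gaussian likelihood (using variances and correlations), while a distribution-blind classifier simply picks the nearer category mean. This is a schematic contrast under assumed parameter values, not the authors' fitted models.

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_with_distributions(x, means, covs):
    """Use within-category variances/correlations: pick the category whose
    multivariate-Gaussian likelihood of x is highest."""
    return int(np.argmax([multivariate_normal.pdf(x, mean=m, cov=c)
                          for m, c in zip(means, covs)]))

def classify_ignoring_distributions(x, means):
    """Ignore the feature distributions: pick the category with the nearer mean."""
    return int(np.argmin([np.linalg.norm(x - m) for m in means]))

means = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
covs = [4.0 * np.eye(2), 0.25 * np.eye(2)]  # high- vs. low-variance category (Experiment 1 logic)
x = np.array([2.2, 0.0])                    # a transfer stimulus between the categories

print(classify_with_distributions(x, means, covs))  # 0: the high-variance category is more likely
print(classify_ignoring_distributions(x, means))    # 1: the low-variance category's mean is nearer
```

Note that the two classifiers disagree on this stimulus; this is exactly the property that the optimized experimental designs exploit to discriminate the models.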
This is an in-person presentation on July 21, 2023 (16:40 ~ 17:00 UTC).