Similarity & Perception
Dr. Dora Matzke
Prof. Andrew Heathcote
Despite the mapping between objective and subjective magnitudes being central to psychology’s foundational discipline of psychophysics, quantitative characterisations that are stable across different individuals and contexts have remained elusive. We address this problem through a theoretical framework defining subjective magnitudes as the inputs to a dynamic model of perceptual two-alternative forced choice. Three observer-specific parameters (their sensitivity to subjective magnitudes, their sensitivity to differences between magnitudes, and their decision urgency), along with the psychophysical function mapping objective to subjective magnitudes, determine the rate at which evidence for each choice accrues. Responses and response times are a function of the evidence rate, additive stochastic noise, the threshold amount of evidence required to make a choice, and the time for non-decision processes. We develop both non-parametric and parametric methodologies within this framework to measure the psychophysical function and apply them to judgements about which of two rectangles has a greater area of one of two colours. In almost every participant over several experiments varying the decision context (sets of stimuli spanning different ranges), both methodologies converge on an identity mapping between the objective proportional area and the subjective input to the decision process. Further experiments, looking at broader stimulus ranges and different decision tasks, explored the limits of this unanimity.
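As a concrete illustration of the kind of decision model the abstract describes, here is a minimal simulation sketch of a single trial, assuming a simple Wiener diffusion with an identity psychophysical mapping; the parameter names and values are illustrative, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_2afc(mag_a, mag_b, sensitivity=2.0, threshold=1.0,
                  non_decision=0.3, noise_sd=1.0, dt=0.01, max_t=10.0):
    """One 2AFC trial: evidence drifts at a rate set by the difference
    between the two subjective magnitudes, plus additive Gaussian noise."""
    drift = sensitivity * (mag_a - mag_b)   # evidence accrual rate
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
        t += dt
    choice = "A" if x >= threshold else "B"
    return choice, t + non_decision         # response and response time
```

Aggregating choices and response times from many such trials, across pairs of magnitudes, is what allows the observer-specific parameters and the psychophysical function to be estimated jointly.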
This is an in-person presentation on July 19, 2023 (09:00 ~ 09:20 UTC).
We are able to compare the loudness of a tone to the brightness of a visual stimulus, and vice versa. This may be explained by the long-standing assumption of a common representation of perceived intensity that is shared by almost all modalities. Luce, Steingrimsson, and Narens (2010, Psychological Review, 117, 1247-1258) formalize this idea within a cross-modal version of the theory of global psychophysics, which can be empirically tested in a parameter-free way through the axiom of cross-modal commutativity of successive magnitude productions. The paper provides a theory-based analysis of data on this axiom collected by Ellermeier, Kattner, and Raum (2021, Attention, Perception, & Psychophysics, 83, 2955-2967), grounded in a recently suggested extension of the global psychophysical approach to cross-modal judgments (Heller, 2021, Psychological Review, 128, 509-524). This theory assumes that stimuli are judged against respondent-generated internal references which are modality-specific and potentially role-dependent (i.e., sensitive to whether they pertain to the standard or the variable stimulus in the performed cross-modal magnitude production task). The analysis reveals a massive and systematic role-dependence of internal references. This leads to the prediction of small but systematic deviations from cross-modal commutativity, which are in line with the observed data. In analogy to a term coined in the context of Weber's law, this phenomenon is referred to as the near-miss to cross-modal commutativity. The presented theory offers a psychological rationale explaining this phenomenon, and opens up an innovative approach to studying cross-modal perception.
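A toy sketch of the commutativity axiom and its near-miss, assuming a Stevens-type power law for the psychophysical function; the distortion parameter `delta` is a hypothetical stand-in for role-dependent internal references, not Heller's actual model:

```python
def produce(x, p, beta=0.3):
    """Ideal magnitude production under a power law psi(x) = x**beta:
    return the stimulus y judged p times as intense, psi(y) = p * psi(x)."""
    return p ** (1.0 / beta) * x

def produce_biased(x, p, beta=0.3, delta=1.02):
    """Toy role-dependent reference (delta is a hypothetical distortion,
    not a parameter of Heller's theory): the standard stimulus enters
    the production slightly distorted."""
    return p ** (1.0 / beta) * x ** delta

x0 = 10.0
# Under the ideal model, successive productions commute exactly:
ideal_pq = produce(produce(x0, 2.0), 3.0)
ideal_qp = produce(produce(x0, 3.0), 2.0)

# Under the biased model, commutativity fails by a small, systematic
# amount, a "near miss":
near_pq = produce_biased(produce_biased(x0, 2.0), 3.0)
near_qp = produce_biased(produce_biased(x0, 3.0), 2.0)
ratio = near_pq / near_qp   # close to, but reliably different from, 1
```

The point of the axiom is that the ideal comparison is parameter-free: no estimate of `beta` is needed to test whether the two production orders land on the same stimulus.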
This is an in-person presentation on July 19, 2023 (09:20 ~ 09:40 UTC).
Prof. Joe Houpt
Melanoma is a deadly skin cancer, and early detection is critical for improving survival rates. Dermatologists typically rely on a visual scan to diagnose melanoma by assessing the primary perceptual characteristics of a skin lesion. The common ABCDE heuristic, for example, suggests observers check a lesion for shape (A)symmetry, (B)order irregularity, number of unique (C)olours, (D)iameter, and (E)volution over time. Whilst this heuristic provides a practical guide, it is a limited approach. Firstly, all lesions vary and often contain only a subset of these features. Secondly, a combination of abnormal features can lead to a diagnosis, making the diagnostic process complicated and error-prone. Advanced computer vision algorithms (CVAs) have emerged as a powerful approach to melanoma identification. CVAs can evaluate lesion features to generate highly accurate and objective assessments. However, despite these advancements, CVAs can only be used in conjunction with an expert assessment. Thus, the perceptual expertise of dermatologists remains a critical component in the accurate and timely detection of melanoma. Our project aims to improve the early detection of melanoma by investigating the perceptual judgments of skin lesion colour and shape made by humans and comparing them with the feature representations generated by computer vision algorithms. We recruited non-expert participants online to complete a two-alternative forced-choice task using skin lesion images from the ISIC archive. Participants were instructed to choose, of the two images presented in a trial, the one exhibiting a greater number of unique colours in one condition and greater border regularity in another. We analysed the data using the Bradley-Terry-Luce (BTL) model to estimate each lesion image's relative "strengths" along these perceptual dimensions. We then compared these estimates to computer vision assessments of the same perceptual features.
We discuss the methodological approach, preliminary results, and future directions.
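A minimal sketch of the BTL estimation step on simulated comparisons; the item count, number of judgements per pair, and the gradient-ascent fit are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground truth: 4 lesion images with log-strengths on one
# perceptual dimension (e.g. number of unique colours).
true_theta = np.array([0.0, 0.5, 1.0, 2.0])
n_items = len(true_theta)

# Simulate 2AFC judgements under BTL:
# P(i chosen over j) = sigmoid(theta_i - theta_j).
pairs, wins = [], []
for i in range(n_items):
    for j in range(i + 1, n_items):
        p_i = 1.0 / (1.0 + np.exp(true_theta[j] - true_theta[i]))
        for _ in range(200):
            pairs.append((i, j))
            wins.append(rng.random() < p_i)
idx, y = np.array(pairs), np.array(wins, dtype=float)

# Fit log-strengths by gradient ascent on the BTL log-likelihood:
theta = np.zeros(n_items)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(theta[idx[:, 1]] - theta[idx[:, 0]]))
    resid = y - p                 # d loglik / d (theta_i - theta_j)
    grad = np.zeros(n_items)
    np.add.at(grad, idx[:, 0], resid)
    np.add.at(grad, idx[:, 1], -resid)
    theta += 0.01 * grad
    theta -= theta.mean()         # BTL is identified only up to a shift
```

The recovered `theta` values play the role of the images' relative "strengths", which can then be compared against the computer-vision feature scores.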
This is an in-person presentation on July 19, 2023 (09:40 ~ 10:00 UTC).
Bradley C. Love
Dr. Brett Roads
The standard view of learning - be it supervised, unsupervised, or semi-supervised - is event-based (e.g., a caregiver pointing to a dog and saying “dog”). However, recent work suggests that people also engage in a process called systems alignment in learning contexts. It has been shown that similarity structures align across domains: for example, objects that are spoken about in similar contexts also appear in similar visual contexts. This is a potentially rich source of information that human learners could exploit. Indeed, recent work demonstrates that humans make use of alignable signals when they are available, both to improve learning efficiency and to perform zero-shot generalisation. Here, we present evidence suggesting that alignment processes could play a role in early concept acquisition. We find that children’s early concepts form near-optimal sets for inferring new concepts through systems alignment. By analysing the structural features of early concept sets, we find that this is facilitated by their uniquely dense connectivity. We suggest that this is conducive to alignment because short-range semantic relationships are particularly stable. Feeding these insights from early concept acquisition back into a Machine Learning pipeline, we build generative models which leverage these key structural features to construct optimal knowledge states. The resultant concept sets demonstrate an improved capacity for learning new concepts. Further inspired by these findings, we discuss the use of alignment-based priors for cross-modal learning in other Machine Learning systems, for example in the task of image classification.
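A toy sketch of the alignment signal itself, assuming two noisy views of a shared latent structure and a cosine-similarity measure; the "modalities", noise level, and item counts are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical shared latent structure observed through two noisy
# "modalities" (e.g. visual and linguistic contexts).
n_items, dim = 8, 5
latent = rng.normal(size=(n_items, dim))
visual = latent + 0.1 * rng.normal(size=(n_items, dim))
linguistic = latent + 0.1 * rng.normal(size=(n_items, dim))

def sim_matrix(x):
    """Cosine similarity between every pair of items."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

def alignment(mapping):
    """Agreement between the two similarity structures under a candidate
    item-to-item mapping (higher = better aligned)."""
    s1, s2 = sim_matrix(visual), sim_matrix(linguistic[mapping])
    iu = np.triu_indices(n_items, k=1)
    return np.corrcoef(s1[iu], s2[iu])[0, 1]

correct = alignment(np.arange(n_items))                # true mapping
scrambled = alignment(np.roll(np.arange(n_items), 1))  # wrong mapping
```

Because the correct mapping scores far higher than a scrambled one, an observer could in principle recover cross-modal correspondences, and hence generalise zero-shot, from similarity structure alone.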
This is an in-person presentation on July 19, 2023 (10:20 ~ 10:40 UTC).