Perception and Computation
Dr. Peter Watson
Dr. Parviz Azadfallah
Dr. Marija Blagojević
Mrs. Sara Saljoughi
Dr. Forogh Esrafilian
We present the assumptions and philosophy underlying artificial psychology (AP) and motivate the need for a set of models that can incorporate information from complex mental systems, using ideas such as the fuzziness of a system and supervised and unsupervised artificial intelligence algorithms. We discuss the need for a multiplicity of modelling approaches to help us understand the world. We then turn to issues in hypothesis testing: in particular, we introduce and define the p-value and highlight its shortcomings, including p-hacking, where data are manipulated until they yield statistically significant results, and the associated bias towards publishing studies whose p-values fall below a certain threshold. We describe widespread misunderstandings in the interpretation of the p-value and associated dangers, such as giving the impression that the world is ‘black and white’, and motivate the need for complementary, more nuanced approaches to testing statistical hypotheses that can overcome these deficiencies.
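To make the p-hacking point concrete, the following stdlib-only sketch (our illustration, not part of AP itself) simulates null experiments and shows how reporting only the smallest p-value across many measured outcomes inflates the false-positive rate well above the nominal 5%. The z-test and the multiple-outcomes selection model are illustrative assumptions:

```python
import math
import random

def z_test_p(sample, mu0=0.0):
    """Two-sided z-test p-value for the sample mean (known sigma = 1)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n)
    # Two-sided p from the standard normal CDF, via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)

def experiment(n_outcomes, n=30):
    """Simulate one null study measuring n_outcomes independent variables,
    then report only the smallest p-value (a simple form of p-hacking)."""
    ps = [z_test_p([random.gauss(0, 1) for _ in range(n)])
          for _ in range(n_outcomes)]
    return min(ps)

trials = 2000
honest = sum(experiment(1) < 0.05 for _ in range(trials)) / trials
hacked = sum(experiment(10) < 0.05 for _ in range(trials)) / trials
print(honest)  # ≈ 0.05: the nominal false-positive rate
print(hacked)  # ≈ 0.40: inflated by cherry-picking among ten outcomes
```

Because each of the ten null outcomes independently has a 5% chance of significance, the chance that at least one does is roughly 1 − 0.95¹⁰ ≈ 0.40.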
Dr. Jerome Busemeyer
Prof. Emmanuel Pothos
One of the most important challenges in decision theory has been how to reconcile the normative expectations of Bayesian theory with the apparent fallacies that are common in probabilistic reasoning. Recently, Bayesian models have been driven by the insight that apparent fallacies are due to sampling errors or biases in estimating (Bayesian) probabilities. An alternative way to explain apparent fallacies is by invoking different probability rules, specifically the probability rules of quantum theory. Arguably, quantum cognitive models offer a more unified explanation for a large body of findings that are problematic from a baseline classical perspective. This work addresses two major corresponding theoretical challenges: first, a framework is needed which incorporates both Bayesian and quantum influences, recognizing the fact that there is evidence for both in human behavior. Second, there is empirical evidence which goes beyond any current Bayesian or quantum model. We develop a model for probabilistic reasoning that seamlessly integrates Bayesian and quantum models of reasoning and is augmented by a sequential sampling process, which maps subjective probability estimates to observable responses. Our model, called the Quantum Sequential Sampler, is compared to the currently leading Bayesian model, the Bayesian Sampler (Zhu, Sanborn, & Chater, 2020), using a new experiment that produced one of the largest datasets in probabilistic reasoning to date. The Quantum Sequential Sampler embodies several new components, which we argue offer a more theoretically accurate approach to probabilistic reasoning. In addition, our empirical tests revealed a new, surprising systematic overestimation of probabilities.
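For readers unfamiliar with the Bayesian Sampler, its core idea can be sketched as follows: a probability judgment is formed from a small number of mental samples, regularized by a symmetric Beta prior, so the mean estimate is (k + β) / (N + 2β). The parameter values below are illustrative choices for this sketch, not those fitted by Zhu, Sanborn, and Chater (2020):

```python
import random

def bayesian_sampler_estimate(p_true, n_samples, beta, rng):
    """One Bayesian Sampler judgment: draw n_samples Bernoulli(p_true)
    mental samples, then regularize the count k with a symmetric Beta
    prior, giving (k + beta) / (n_samples + 2 * beta)."""
    k = sum(rng.random() < p_true for _ in range(n_samples))
    return (k + beta) / (n_samples + 2 * beta)

rng = random.Random(0)
n_judgments = 50_000
est = sum(bayesian_sampler_estimate(0.9, n_samples=10, beta=1.0, rng=rng)
          for _ in range(n_judgments)) / n_judgments
print(round(est, 2))  # → 0.83: extreme probabilities are pulled toward 0.5
```

The regularization term is what lets the model explain conservatism: with N = 10 and β = 1, a true probability of 0.9 is judged, on average, as (9 + 1) / 12 ≈ 0.83.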
Prof. Joe Houpt
Paul Havig
Fairul Mohd-Zaid
Murray Bennett
Ying-yu Chen
Dr. Elizabeth Fox
General Recognition Theory (GRT) is a framework for characterizing perceptual independence and separability. Experiments employing GRT follow a factorial complete identification paradigm in which participants must distinguish stimuli based on their combined features (e.g., shape and color; brightness and loudness). Inferences are based on the patterns of confusions; hence, stimuli must be similar enough to produce errors but not so similar that a participant cannot discriminate them. Traditionally, the feature levels are determined through pilot testing and fixed across all participants. However, this pilot testing is time-consuming, and using the same stimulus levels for all participants leads to problems due to individual differences: participants who are too sensitive make too few errors, and participants who are not sensitive enough end up guessing. Furthermore, hardware differences may introduce confounds in online studies and hamper replication attempts. We previously introduced a method for adapting the design of GRT experiments to individual participants based on the Psi psychophysical method. Simulation results indicated the efficacy of our approach and its robustness to violations of the measurement model’s assumptions. As part of a validation study with human participants, we ran a control study using the traditional pilot testing approach as a baseline for comparison. In Experiment 1, ten participants performed a complete identification task with stimuli defined by their size and orientation (separable condition). In Experiment 2, another ten participants performed the task with stimuli defined by their saturation and brightness (integral condition). We observed more violations of marginal response invariance in the integral condition than in the separable condition, but the difference could have been larger. Many participants performed near the intended accuracy criterion, but some achieved near-perfect accuracy and others were near chance on one or both dimensions. We discuss how these results leave room for improvement through adaptive experimental design.
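As a concrete illustration of the marginal response invariance test mentioned above, the sketch below checks it on a toy 2×2 complete identification confusion matrix. All counts are invented for illustration (they are not data from the experiments described); the matrix is built so that invariance is violated:

```python
# Rows are stimuli and columns are responses, both ordered
# A1B1, A1B2, A2B1, A2B2 (two dimensions, A and B, two levels each).
confusions = [
    [70, 10, 15,  5],   # stimulus A1B1
    [10, 60,  6, 24],   # stimulus A1B2
    [14,  6, 66, 14],   # stimulus A2B1
    [ 5, 15, 10, 70],   # stimulus A2B2
]

def marginal_p_A1(row):
    """P(report level A1 on dimension A) = P(response A1B1 or A1B2)."""
    total = sum(row)
    return (row[0] + row[1]) / total

# Marginal response invariance on dimension A requires the probability of
# reporting A1 (given a level-A1 stimulus) to match across levels of B:
p_given_b1 = marginal_p_A1(confusions[0])  # stimulus A1B1
p_given_b2 = marginal_p_A1(confusions[1])  # stimulus A1B2
print(round(p_given_b1, 2), round(p_given_b2, 2))  # → 0.8 0.7
```

Here the probability of reporting A1 drops from 0.80 to 0.70 when B changes level, so marginal response invariance fails on dimension A; the full test repeats this check for every level of every dimension.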
Mr. Jason Hays
Fabian Soto
We classify faces every day to help us gauge social situations, facilitate communication, and maintain relationships. A goal of face perception research is to understand which specific face features are important during such tasks, and how shape information specific to one task (e.g., identity recognition) is influenced by that specific to another (e.g., expression recognition). One way to accomplish this is through the recovery of observer templates, which summarize what parts of an image the visual system considers useful for solving a particular task. Templates can be estimated through psychophysical techniques such as reverse correlation. We use reverse correlation to estimate identity and expression templates by presenting participants with pairs of faces randomly sampled from a space of face shape parameters; by averaging the chosen noise patterns, we obtain the template estimates. Whereas previous studies have superimposed noise by altering pixel luminance, we manipulate stimulus noise in face shape space using a three-dimensional face modeling toolbox. This new approach allows us to directly visualize interactions between identity and expression through face model rendering and to constrain interpretations to a simple, comprehensible stimulus space for faces. Permutation tests revealed that features informative for identity and expression recognition are distributed across the entire face. More importantly, we assessed invariance at the level of templates, testing whether the shape information used to identify levels of one dimension (e.g., identity) varies with changes in another dimension (e.g., expression); invariance of this kind is known as template separability. Additional permutation tests found significant violations of template separability for both dimensions across all groups, suggesting that information sampling during face recognition is highly context-specific. Our results imply that the information used by the visual system during recognition of face identity and expression is highly precise, flexible, and context-specific.
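The logic of reverse correlation in a parameter space can be sketched as follows. This is a toy illustration, not the authors' face-shape toolbox: the 8-dimensional "shape space", the linear observer, and the template weights are all invented assumptions. A simulated observer picks, on each two-alternative trial, the noise pattern that correlates best with an internal template; averaging the chosen patterns recovers that template up to scale:

```python
import random

rng = random.Random(0)
DIM = 8  # toy "face shape" parameter space (hypothetical)

# Hypothetical observer template: which shape parameters drive the decision.
true_template = [1.0, 0.8, 0.0, 0.0, -0.6, 0.0, 0.3, 0.0]

def trial():
    """One 2AFC reverse-correlation trial: two random noise patterns in
    shape space; the observer chooses the one whose projection onto the
    internal template is larger."""
    a = [rng.gauss(0, 1) for _ in range(DIM)]
    b = [rng.gauss(0, 1) for _ in range(DIM)]
    score = lambda x: sum(t * v for t, v in zip(true_template, x))
    return a if score(a) > score(b) else b

n = 20_000
chosen = [trial() for _ in range(n)]
estimate = [sum(c[i] for c in chosen) / n for i in range(DIM)]
# The averaged chosen noise is proportional to the template: components
# with large template weights dominate, irrelevant ones average to zero.
print([round(e, 2) for e in estimate])
```

In the actual study the "patterns" are perturbations of face shape parameters rather than abstract vectors, and separate averages per task (identity vs. expression) yield the templates whose separability is then tested.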
Asli Kilic
Using numerosity perception tasks, in which nonsymbolic stimuli such as a group of dots are presented and participants respond using symbolic stimuli such as Arabic numerals, we investigate how numbers are represented mentally. In our study, we used a two-choice numerosity decision task and manipulated whether feedback was provided. Participants were presented with between 10 and 90 dots and were required to judge whether the number of dots was greater than 50. They had to respond when a signal was presented 60 ms to 3,000 ms (seven lags) after the dots appeared. To model numerosity perception without a time constraint, we used only the responses at the 500 and 700 ms lags, which correspond to free response times in the literature and in our pilot studies. Numerosity perception research suggests that the mental representation of numbers, referred to as the mental number line or the Approximate Number System, is logarithmically scaled; accordingly, there is a consensus that numerosity perception tasks show an underestimation bias. For our data, we model the mental number line with a logarithmic function by minimizing the underestimation bias, in other words, the number of errors for numbers greater than our criterion of 50. We support the general finding of a logarithmically compressed mental number line by showing that the perceived “50” corresponds to a larger number mentally, and that this logarithmic compression is even stronger when no feedback is provided.
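A minimal sketch of how a logarithmically scaled, underestimating representation shifts the point of subjective equality above the objective criterion of 50. The Gaussian-on-log-scale representation and the `bias` and `sigma` values below are illustrative assumptions for this sketch, not the authors' fitted model:

```python
import math
import random

rng = random.Random(0)

def p_greater(n, criterion=50, sigma=0.25, bias=0.9, trials=4000):
    """P('more than 50' | n dots) under a toy log-scaled representation.
    The internal magnitude is Normal(log(bias * n), sigma): bias < 1
    encodes underestimation, and constant sigma on the log scale gives
    Weber-like (ratio-dependent) noise."""
    hits = sum(rng.gauss(math.log(bias * n), sigma) > math.log(criterion)
               for _ in range(trials))
    return hits / trials

# Point of subjective equality: the numerosity judged "greater than 50"
# exactly half the time. With underestimation it lies above 50, i.e. the
# display that feels like "50" actually contains more than 50 dots.
pse = min(range(40, 80), key=lambda n: abs(p_greater(n) - 0.5))
print(pse)
```

Under these assumptions the subjective criterion sits at roughly 50 / bias ≈ 56 dots; a larger underestimation bias (as suggested for the no-feedback condition) would push the point of subjective equality even higher.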