
Nov 16, 2020

Symposium at Annual Meeting of the Psychonomic Society

Schedule for Thursday, November 19, 2020


9:00 AM - 1:00 PM EST   |  All talks available on demand
1:00 PM - 2:00 PM EST   |  Live discussion panel and Q&A with speakers
2:00 PM - 3:00 PM EST   |  Poster session 

Note: During the poster session, presenters will be available on Zoom. The poster presentations are already available for viewing.

You can visit the venue for the 2020 Virtual Psychonomics conference via this link and download the Society for Mathematical Psychology abstracts via this link.


Mathematical Cognitive Modeling in Human Factors Research

Abstracts for talks


Using Discrete Recurrence Quantification Analysis to Probe the Dynamics of Decision Making
Leslie Blaha | U.S. Air Force Research Laboratory

In this talk, I will explore applications of the visual analytics method Recurrence Quantification Analysis (RQA) to choice sequences and other discrete behavior time series. Choice sequences are often examined as aggregate behavior statistics, like choice proportions, or proxy summary statistics, like points earned.  But in the process of aggregation, much information about behavioral dynamics is lost. Yet, our descriptions of choice strategies, like “win-stay-lose-shift”, are statements about the behavioral dynamics; they suggest specific patterns that should be observed in the sequences.  Auto-RQA helps us characterize individual sequences in ways that highlight important aspects of behavioral dynamics, such as short-range switching between options and longer time-scale adaptations or shifts in preferences, when present. Cross-RQA provides tools allowing us to compare observed behaviors to specific strategies. I will discuss implications of using RQA for model selection and to inform intelligent machines for adaptive decision aiding and human-autonomy teaming.
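
For readers unfamiliar with RQA, the sketch below shows the basic computation on a discrete sequence: a recurrence matrix marking trials with the same choice, a recurrence rate, and a simple determinism measure. It is a minimal illustration of the general technique, not the speaker's implementation; the choice coding and the min_line threshold are placeholders.

```python
# Minimal discrete auto-RQA on a choice sequence (illustrative only).
import numpy as np

def auto_rqa(choices, min_line=2):
    choices = np.asarray(choices)
    n = len(choices)
    rec = (choices[:, None] == choices[None, :]).astype(int)  # recurrence matrix
    off_diag = ~np.eye(n, dtype=bool)                         # exclude self-recurrence
    rr = rec[off_diag].mean()                                 # recurrence rate
    det_points = 0                                            # points on diagonal lines >= min_line
    for k in range(1, n):
        run = 0
        for v in np.append(np.diagonal(rec, offset=k), 0):
            if v == 1:
                run += 1
            else:
                if run >= min_line:
                    det_points += run
                run = 0
    det = 2 * det_points / rec[off_diag].sum() if rec[off_diag].sum() else 0.0
    return rr, det                                            # (recurrence rate, determinism)

# A strongly patterned sequence yields high determinism; a shuffled one does not.
print(auto_rqa([0, 0, 1, 1, 0, 0, 1, 1, 0, 0]))
```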

Benchmarking Automation-aided Signal Detection
Jason S. McCarley | Oregon State University

Human operators often perform signal detection tasks with assistance from automated aids. Unfortunately, users tend to disuse aids that are less than perfectly accurate (Parasuraman & Riley, 1997), disregarding the aids' advice even when it might be helpful. To facilitate cost-benefit analyses of automated signal detection aids, we benchmarked the performance of human-automation teams against the predictions of various models of information integration. Participants performed a binary signal detection task, with and without assistance from an automated aid. On each trial, the aid provided the participant with a binary judgment along with an estimate of certainty. The models chosen for comparison ranged from perfectly efficient to highly inefficient. Even with an automated aid of fairly high sensitivity (d' = 3), performance of the human-automation teams was poor, approaching the predictions of the least efficient comparison models, and efficiency of the human-automation teams was substantially lower than that achieved by pairs of human collaborators. The data indicate strong automation disuse and provide guidance for estimating the benefits of automated detection aids.
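
The benchmarking logic can be illustrated with a small worked example. Under one standard ideal-integration benchmark, a team of independent observers can reach d'_team = sqrt(d'_human^2 + d'_aid^2); comparing the observed team d' to that bound gives an efficiency index. The sketch below uses that benchmark with made-up hit and false-alarm rates; it is not the authors' analysis code.

```python
# Hedged sketch: efficiency of a human-automation team relative to an
# optimal-integration benchmark (illustrative numbers only).
from statistics import NormalDist
import math

def dprime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def team_efficiency(d_team, d_human, d_aid):
    d_ideal = math.sqrt(d_human ** 2 + d_aid ** 2)   # ideal independent integration
    return (d_team / d_ideal) ** 2                   # squared-ratio efficiency index

# hypothetical hit and false-alarm rates for the aided (team) condition
d_team = dprime(hit_rate=0.86, fa_rate=0.10)
print(round(team_efficiency(d_team, d_human=1.5, d_aid=3.0), 2))
```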

Toward Personalized Deceptive Signaling for Cyber Defense Using Cognitive Models
Cleotilde Gonzalez | Carnegie Mellon University

Recent research in cybersecurity has begun to develop active defense strategies using game-theoretic optimization of the allocation of limited defenses combined with deceptive signaling. These algorithms assume rational human behavior. However, human behavior in an online game designed to simulate an insider attack scenario shows that humans, playing the role of attackers, attack far more often than predicted under perfect rationality. We describe an instance-based learning cognitive model, built in ACT-R, that accurately predicts human performance and biases in the game. To improve defenses, we propose an adaptive method of signaling that uses the cognitive model to trace an individual’s experience in real time. We discuss the results and implications of this adaptive signaling method for personalized defense.
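
Instance-based learning models of this kind typically combine an ACT-R-style activation (weighted by frequency and recency) with blending across stored outcomes. The sketch below shows that general mechanism with illustrative parameters; it is not the authors' model, and the Gaussian noise term is a simplification of ACT-R's activation noise.

```python
# Generic IBL-style activation and blending (illustrative, simplified).
import math, random

DECAY, NOISE_SD, TAU = 0.5, 0.25, 0.25 * math.sqrt(2)

def activation(occurrence_times, now):
    # frequency/recency-based activation plus simplified Gaussian noise
    base = math.log(sum((now - t) ** (-DECAY) for t in occurrence_times))
    return base + random.gauss(0.0, NOISE_SD)

def blended_value(instances, now):
    # instances: list of (observed_outcome, [times it was experienced])
    acts = [activation(times, now) for _, times in instances]
    weights = [math.exp(a / TAU) for a in acts]
    total = sum(weights)
    return sum((w / total) * outcome for w, (outcome, _) in zip(weights, instances))

# e.g., an attacker weighing past payoffs of attacking (trials 1, 3, 4) vs. withdrawing (trial 2)
random.seed(0)
print(round(blended_value([(10.0, [1, 3, 4]), (-5.0, [2])], now=6), 2))
```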

A Mathematical Psychology Talent Show: Examples of How Our Models May Influence Human-Centered Design
Elizabeth Fox | U.S. Air Force Research Laboratory

Cognitive-theory-driven approaches may evaluate performance and cognitive processes with more rigor and precision than the procedures and metrics currently used in human factors research and application. A mathematical modeling approach allows both for more theoretically meaningful measures than raw accuracy or response time (RT) and for insight into the aspects of the cognitive process that may have led to better or worse performance. Extending the modeling approaches developed in mathematical psychology to applied environments may inform display design, guide multitask combination, assist adaptive automation, or supply pertinent feedback in real time. In this talk, I demonstrate a few applications of mathematical models to inform human-centered design: the evaluation of multispectral fusion techniques, the estimation of efficiency to compare multitask configurations, and the influence of task load on multitasking efficiency and management strategies. Each of these modeling approaches provides additional insights beyond traditional analyses. In conclusion, I illustrate how developing time-varying mathematical models can serve as a useful online tool for evaluating cognitive processes and performance.
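
The efficiency measures referred to here are usually grounded in the workload capacity coefficient from the SFT literature (Townsend & Nozawa, 1995). A standard form of the OR-rule coefficient, stated generally rather than as used in this specific talk, is:

```latex
% H(t) = -\ln S(t) is the cumulative hazard of the response-time distribution.
C_{OR}(t) = \frac{H_{AB}(t)}{H_{A}(t) + H_{B}(t)},
\qquad
C_{OR}(t) > 1 \text{ (super capacity)},\; = 1 \text{ (unlimited)},\; < 1 \text{ (limited)}.
```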

A Conjoint Analysis Method to Knowledge Graph Rankings
Brett Jefferson | Pacific Northwest National Laboratory

Subject matter expert (SME) knowledge is often an integral component in multidisciplinary analyst teams. SMEs can provide proper context, meaning, and additional insight on data received from the real world. We use conjoint analysis to elicit SME expertise from various scenarios. Conjoint analysis provides a means to rank knowledge graph elements and determine node-level, edge-level, and subgraph (event) level weights. I will discuss findings for this novel application to graph data and potential use cases for such rankings.
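
As a rough illustration of the part-worth estimation that conjoint analysis performs, the sketch below regresses made-up SME preference scores on dummy-coded attributes of hypothetical graph elements; the attributes, scores, and feature names are invented, not the authors' knowledge-graph features.

```python
# Toy conjoint-style part-worth estimation via least squares (illustrative data).
import numpy as np

# Each row is a profile (e.g., a subgraph) described by dummy-coded attributes:
# [contains_person_node, contains_location_node, high_edge_count]
profiles = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
], dtype=float)
scores = np.array([2.0, 1.0, 3.0, 4.0, 3.5, 6.0])   # hypothetical SME preference scores

X = np.column_stack([np.ones(len(profiles)), profiles])   # intercept + attributes
partworths, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(dict(zip(["intercept", "person", "location", "many_edges"], partworths.round(2))))
```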

Development of a computational model of explanation to support Explainable Artificial Intelligence (XAI)
Shane Mueller | Michigan Technological University

Recent advances in neural networks and deep reinforcement learning (e.g., for image/video classification, natural language processing, autonomy, and other applications) have begun to produce AI systems that are highly capable but often fail in unexpected ways that are hard to understand. Because of their complexity and opaqueness, an Explainable AI community has re-emerged with the goal of developing algorithms that can help developers, users, and other stakeholders understand how these systems work. However, the explanations produced by these systems are generally not guided by psychological theory, but rather by unprincipled notions of what might be effective at helping a user understand a complex system. To address this, we have developed a psychological theory of explanation implemented as a mathematical/computational model. The model is focused on how users engage in sensemaking and learning to develop a mental model of a complex process, with a focus on two levels of learning that map onto System 1 (intuitive, feedback-based tuning of a mental model) and System 2 (construction, reconfiguration, and hypothesis testing of a mental model) processes. These elements of explanatory reasoning map onto two important areas of research within the mathematical psychology community: feedback-based cue/category learning (e.g., Gluck & Bower, 1988) and knowledge-space descriptions of learning (Doignon & Falmagne, 1985). We will describe a mathematical/computational model that integrates these two levels and discuss how this model enables better understanding of the explanation needed for various AI systems. This work was done in collaboration with Lamia Alam, Tauseef Mamun, Robert R. Hoffman, and Gary L. Klein.
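
The feedback-based learning level referenced here (in the spirit of Gluck & Bower, 1988) is essentially an error-driven delta-rule network. The sketch below shows that generic update with placeholder cues and outcomes; it is not the authors' model of explanation.

```python
# Generic delta-rule (LMS) cue learning (illustrative only).
import numpy as np

def delta_rule(cues, outcomes, lr=0.1, epochs=50):
    w = np.zeros(cues.shape[1])
    for _ in range(epochs):
        for x, y in zip(cues, outcomes):
            w += lr * (y - w @ x) * x        # error-driven weight update
    return w

# four binary cues predicting a binary outcome (e.g., "the AI will flag this case")
cues = np.array([[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 1], [0, 0, 1, 1]], dtype=float)
outcomes = np.array([1.0, 1.0, 0.0, 0.0])
print(delta_rule(cues, outcomes).round(2))
```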


Abstracts for posters


Time Domain EEG Measures of Perception and System Factorial Technology: A Beginning Exploration
Allan J. Collins (1), Gaojie Fan (2), Benjamin D. Maldonado (1), and Robin D. Thomas (1) | 1 Miami University;  2 Louisiana State University

Systems Factorial Technology (SFT) provides a means to identify simple mental architectures that underlie basic cognitive tasks from observed patterns in response time data (Townsend & Nozawa, 1995). Recently, this methodology has been extended to nested architectures (Thomas et al., 2019). Presumably, these cognitive architectures have a neural instantiation. In a related literature, various EEG measures have been related to aspects of perceptual decision making, such as encoding time, evidence accumulation, etc. (e.g., van Vugt et al., 2019). We explore whether aspects of these EEG-derived measures can be related meaningfully to the patterns of response time distributions that are central to the SFT methodology.
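
For readers outside the SFT literature, the central diagnostic referred to above is the survivor interaction contrast, computed from survivor functions S(t) = P(RT > t) under factorial manipulations of the salience of two channels (the statement below is the standard one, not specific to this poster):

```latex
SIC(t) = \bigl[S_{LL}(t) - S_{LH}(t)\bigr] - \bigl[S_{HL}(t) - S_{HH}(t)\bigr]
```

Different architectures (serial vs. parallel, combined with AND vs. OR stopping rules) predict qualitatively different signatures of SIC(t) over time.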

Exploring end-point use and identifying predictors of overall ratings in student evaluations of teaching (SETs)
Karyssa Courey and Michael D. Lee | University of California, Irvine

The present study examines student evaluations of teaching (SETs) at a large, public university. We evaluate end-point use across different scales and examine how well evaluation items predict overall instruction and course ratings among several majors. We find that students use the upper endpoints of scales more often when rating female professors compared to male professors. We also find that students use endpoints more often when using a 10-point 4-letter grading scale compared to a 7-point Likert-type scale. Hierarchical Bayesian regressions reveal, at the population level, that items pertaining to the instructor's clarity, engagement, knowledge, and fairness of grading best predict the rating of the instructor, while items pertaining to the course’s usefulness in developing future skills and the match between course objective and outcomes best predict the rating of the overall value of the course.
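
The end-point analysis amounts to comparing, across groups or scale formats, the proportion of ratings that land on a scale's extremes; the sketch below illustrates that computation on invented ratings, not the SET data.

```python
# Proportion of ratings at a scale's endpoints (illustrative data).
import numpy as np

def endpoint_proportion(ratings, scale_min, scale_max):
    ratings = np.asarray(ratings)
    return float(np.mean((ratings == scale_min) | (ratings == scale_max)))

ratings_7pt = [7, 6, 7, 5, 4, 7, 1, 6]        # 7-point Likert-type responses
ratings_10pt = [10, 10, 9, 10, 8, 10, 1, 10]  # 10-point grading-scale responses
print(endpoint_proportion(ratings_7pt, 1, 7), endpoint_proportion(ratings_10pt, 1, 10))
```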

Using system factorial technology to study the effect of aerobic exercise on young adults’ attentional control
Hao-Lun Fu, Chun-Hao Wang, and Cheng-Ta Yang | National Cheng Kung University

Previous studies have demonstrated the benefits of exercise for attentional control. However, the underlying processing mechanism remains unknown. Here, we investigated whether such exercise-induced cognitive benefits are associated with more efficient information processing. Forty-four participants took part in a 4-week aerobic exercise program. We employed Systems Factorial Technology (Townsend & Nozawa, 1995) and a redundant-target task to examine changes in resilience capacity, a measure of processing efficiency for two targets relative to that for a target paired with a distractor. Results revealed that resilience capacity became smaller after the exercise intervention, even though RTs became faster. Further analysis revealed that the change in resilience capacity may be due to a violation of context invariance, which is in line with the selective improvement hypothesis (Colcombe & Kramer, 2003). These results shed light on the processing mechanism underlying exercise-induced changes in attentional control, and future studies should interpret the exercise effect with caution.
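
Resilience capacity, as it is commonly defined in this literature, compares the cumulative hazard for the redundant-target display (TT) with the conditions pairing a single target with a distractor (TD and DT); the general form, not specific to this study, is:

```latex
% H(t) = -\ln S(t) is the cumulative hazard of the response-time distribution.
R(t) = \frac{H_{TT}(t)}{H_{TD}(t) + H_{DT}(t)}
```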

Exploring word memorability: How well do different word properties explain item free-recall probability?
Christopher R. Madan | University of Nottingham

Words can vary along many dimensions, and a variety of lexical, semantic, and affective properties have previously been associated with variability in recall performance. Free-recall data were drawn from the Penn Electrophysiology of Encoding and Retrieval Study (PEERS) dataset: 147 participants across 20 experimental sessions, covering 1,638 words. Here I consider how well 20 different word properties, spanning lexical, semantic, and affective dimensions, relate to free recall. Semantic dimensions, particularly animacy (better memory for living things), usefulness (with respect to survival; better memory for useful things), and size (better memory for larger things), demonstrated the strongest relationships with recall probability. These key results were then examined and replicated in the free-recall data from Lau et al. (2018), which included 532 words and 116 participants. This comprehensive investigation of word memorability demonstrates that semantic and function-related psycholinguistic properties play an important role in verbal memory processes.
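
A simple way to relate word properties to item-level recall, shown here with simulated data rather than the PEERS or Lau et al. (2018) items, is a multiple regression of recall probability on the property values:

```python
# Regressing item recall probability on word properties (simulated data).
import numpy as np

rng = np.random.default_rng(0)
n_words = 200
animacy = rng.integers(0, 2, n_words).astype(float)   # living vs. non-living
usefulness = rng.normal(0, 1, n_words)                # survival-related usefulness
size = rng.normal(0, 1, n_words)                      # rated physical size
recall_p = 0.5 + 0.10 * animacy + 0.05 * usefulness + 0.04 * size + rng.normal(0, 0.05, n_words)

X = np.column_stack([np.ones(n_words), animacy, usefulness, size])
betas, *_ = np.linalg.lstsq(X, recall_p, rcond=None)
print(dict(zip(["intercept", "animacy", "usefulness", "size"], betas.round(3))))
```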

Intentional binding: unintentional artifact?
Laura Saad, Julien Musolino, and Pernille Hemmer | Rutgers University, New Brunswick

Intentional binding (IB) is often used as an implicit measure of the sense of agency (SoA). Given the fundamental nature of the SoA, one would expect the presence of IB at the individual level. We compared aggregate vs. individual data in a pilot study as well as in a publicly available dataset. Aggregate results replicated the expected directionality of action and outcome binding in both studies. Crucially, inter-individual analyses across conditions revealed that almost half of the participants in the pilot study (N = 15/35) and more than half of the participants in the public dataset (N = 11/20) had mean binding values for either action or outcome that were opposite to the expected direction. This is unexpected, given that the direction of the perceived shift in event timing is critical to the IB effect. The misuse of averaging and the inconsistency of analyses in this domain will also be discussed, along with implications for future research.
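
The aggregate-versus-individual contrast can be made concrete with a small simulation (the numbers below are invented, not the pilot or public data): a positive group mean can coexist with many individual means in the opposite direction.

```python
# Aggregate vs. individual direction of a binding effect (simulated values, in ms).
import numpy as np

rng = np.random.default_rng(1)
mean_binding = rng.normal(loc=15, scale=25, size=35)   # one mean shift per participant

aggregate = mean_binding.mean()                        # group-level effect looks clear
n_opposite = int(np.sum(mean_binding < 0))             # individuals going the other way
print(f"aggregate = {aggregate:.1f} ms; {n_opposite}/35 participants opposite to expectation")
```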

Semantic Organization of Characteristics in Natural and Supernatural Concepts
Joseph Sommer, Julien Musolino, and Pernille Hemmer | Rutgers University, New Brunswick

A prominent theory in the cognitive science of religion proposes that supernatural concepts are ubiquitous across cultures because they possess a “minimally counterintuitive” structure, which improves their memorability relative to natural concepts. So-called minimally counterintuitive (MCI) concepts contain one or a few characteristics that violate intuitive ontological theories, which makes them salient. By contrast, “maximally counterintuitive” (MXCI) concepts are purported to be less memorable than their MCI counterparts because they contain too many such violations. However, the fact that supernatural characteristics contain violations of intuitive theories is not the only way they differ from natural characteristics. We organize natural and supernatural characteristics generated by experimental participants into a multi-dimensional hierarchical structure and discuss the semantic organization of supernatural and natural concepts. We suggest that this methodology highlights subtler distinctions between supernatural and natural characteristics that dispense with the need for a novel memory mechanism involving violations of intuitive ontological theories. 
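
One way to build such a hierarchical organization, sketched here with invented characteristics and feature ratings rather than the participant-generated data, is agglomerative clustering over a feature space:

```python
# Agglomerative clustering of characteristic ratings (illustrative features only).
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

characteristics = ["immortal", "invisible", "kind", "tall", "reads minds", "has fur"]
# hypothetical ratings on [physicality, mentality, violates-intuitive-ontology]
features = np.array([
    [0.1, 0.2, 0.9],
    [0.2, 0.1, 0.8],
    [0.2, 0.9, 0.1],
    [0.9, 0.0, 0.0],
    [0.1, 0.9, 0.9],
    [0.9, 0.1, 0.0],
])

tree = linkage(pdist(features), method="average")
print(dendrogram(tree, labels=characteristics, no_plot=True)["ivl"])  # leaf order
```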

Representing ordered associations in symmetric models of memory
Jeremy J. Thomas and Jeremy B. Caplan | University of Alberta

Models of association memory make predictions about within-pair order (AB vs. BA), implying either that order judgments of a retrieved pair should be at chance or that they should be perfect. Behaviour contradicts both predictions: when the pair can be recalled, order judgment is above chance but still fairly low. We test two incremental modifications to symmetric, convolution-based models (which otherwise predict chance-level order judgment): 1) encoding the item's position as a subset of its features; 2) position-specific permutations of item features. Modification 1 achieved a close fit to order-recognition data but compromised the well-known property of associative symmetry. Modification 2 did not exhibit any reduction in symmetry but slightly overpredicted the dependence of order judgments on recall. In sum, simultaneously satisfying benchmark characteristics of association and order memory provides challenging constraints for existing models of association.
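
The second modification can be sketched directly on top of a standard circular-convolution association: applying a different fixed permutation to each pair position before binding makes the AB and BA traces distinguishable. The vector length and the toy comparison below are illustrative; this is not the authors' simulation code.

```python
# Circular-convolution binding with position-specific permutations (illustrative).
import numpy as np

rng = np.random.default_rng(2)
N = 256

def rand_vec():
    v = rng.normal(0.0, 1.0 / np.sqrt(N), N)
    return v / np.linalg.norm(v)

def cconv(a, b):                      # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

a, b = rand_vec(), rand_vec()
perm1, perm2 = rng.permutation(N), rng.permutation(N)   # fixed role-specific permutations

trace_AB = cconv(a[perm1], b[perm2])  # A studied first, B second
trace_BA = cconv(b[perm1], a[perm2])  # reversed order
print(round(float(np.dot(trace_AB, trace_BA)), 3))        # near 0: order is distinguishable
print(round(float(np.dot(cconv(a, b), cconv(b, a))), 3))  # positive: plain convolution gives identical AB and BA traces
```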

Choices, challenges, and constraints: a pragmatic examination of the limits of mental age matching in empirical research
Jack H. Wilson (1), Natalie Russo (1), Elizabeth A. Kaplan (1), Amy H. Criss (1), and Jacob A. Burack (2) | 1 Syracuse University;  2 McGill University

A common method of experimental control in the study of intellectual disability in children is mental age matching, which allows for meaningful comparisons between intellectually disabled children and typically developing children that account for the inherent differences in developmental rates between the two groups.  One's mental age is proportional to the product of one's IQ and one's chronological age.  It follows that, for a given IQ, development on IQ tests should be linear in chronological age.  We test this implication by first reverse engineering the distribution of raw scores on the subscales of three common IQ tests (the Stanford-Binet, the Wechsler Abbreviated Scale of Intelligence, and the Wechsler Intelligence Scale for Children) and then determining whether these scores are linear using Bayesian Information Criterion comparisons of segmented regressions.  We find linearity in only one subscale, imposing limitations on the accuracy of the mental age matching protocol.
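
The reasoning here rests on the classical ratio definition of IQ, under which mental age for a child of constant IQ grows linearly with chronological age:

```latex
IQ = 100 \times \frac{MA}{CA}
\quad\Longleftrightarrow\quad
MA = \frac{IQ}{100} \times CA
```

If raw subscale scores track mental age, they should therefore also grow roughly linearly with chronological age, which is the implication the segmented-regression comparison tests.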

Task context affects group decision efficiency
Peng-Fei Zhu, Cheng-Ju Hsieh, & Cheng-Ta Yang | National Cheng Kung University

We examined how task context (i.e., task rules and task difficulty) affects collective decisions. Systems Factorial Technology was adopted to infer group decision-making efficiency. A T/L conjunction search task was conducted in which participants had to search for 0, 1, or 2 Ts among 25 or 60 Ls. Specifically, in Experiment 1, participants had to detect the presence of any target (i.e., an OR search rule); in Experiment 2, participants had to report the number of targets (i.e., an AND search rule). Our results revealed supercapacity processing in both tasks, suggesting a collective benefit. However, how task difficulty affected the collective benefit differed depending on the task rule. With the OR rule, the collective benefit was unaffected by the number of distractors; by contrast, with the AND rule, the collective benefit increased as the number of distractors increased. Together, our results suggest that, given a suitable task difficulty and an appropriate decision rule, group decision-making can outperform individual decisions through more efficient processing.
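
As an illustration of how such efficiency is estimated in practice, the sketch below computes an OR-rule capacity-style ratio for a dyad from simulated response times, using -ln of the empirical survivor function as the cumulative hazard; the RT distributions and the evaluation time are invented, not the experimental data.

```python
# OR-rule capacity-style ratio for a dyad from simulated response times.
import numpy as np

def cum_hazard(rts, t):
    surv = np.mean(np.asarray(rts) > t)          # empirical survivor function at t
    return -np.log(surv) if surv > 0 else np.inf

def capacity_or(rt_group, rt_a, rt_b, t):
    return cum_hazard(rt_group, t) / (cum_hazard(rt_a, t) + cum_hazard(rt_b, t))

rng = np.random.default_rng(3)
rt_a = rng.gamma(5, 80, 500)       # individual A (ms)
rt_b = rng.gamma(5, 85, 500)       # individual B (ms)
rt_group = rng.gamma(5, 55, 500)   # the dyad, faster than either individual
print(round(capacity_or(rt_group, rt_a, rt_b, t=400), 2))   # > 1 suggests supercapacity
```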