Keynote Speakers
Angela Yu, University of California, San Diego

A Computational Investigation of the Facial Features underlying the Social Perception of Faces
Face processing plays a central role in everyday life: humans readily infer personality traits such as attractiveness, trustworthiness, and intelligence from a glance at a stranger’s face. Previous attempts to characterize the facial (physiognomic) features underlying personality trait judgments have lacked either systematicity or interpretability. Here, we utilize a computational framework to tackle this problem, by representing the space of all faces using the Active Appearance Model, which has recently been shown to have latent features encoded by face patch cells in the macaque monkey, and then using linear regression to identify facial features that maximally account for human perception of 20 social traits in a large dataset of faces and social judgments. Our model achieves state-of-the-art prediction on trait judgments, competitive with the best convolutional neural network. To address interpretability, we present a novel dual space analysis to characterize the linear combination of features that drives the perception of each trait. We find that facial features important for social perception are largely distinct from those underlying demographic and emotion perception, contrary to previous suggestions. We also use synthetically generated faces to visualize the constituent facial features underlying the perception of different social traits, and interpret these features in terms of a large repertoire of geometric features. Finally, we present a novel correlation decomposition analysis that parses trait judgment correlations (e.g., attractiveness and dominance) into the separable roles played by shared facial features and correlations of facial features in the human population, yielding novel and surprising insights.
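The core modeling step described above, mapping latent face-space features to trait ratings by linear regression, can be sketched as follows. The data here are synthetic placeholders, not the authors' dataset, and the dimensions are illustrative assumptions:

```python
import numpy as np

# Hypothetical stand-ins for the real data: n faces, each described by
# d Active Appearance Model (AAM) features, rated on t social traits.
rng = np.random.default_rng(0)
n_faces, n_features, n_traits = 200, 50, 20
X = rng.standard_normal((n_faces, n_features))                   # AAM feature vectors
true_W = rng.standard_normal((n_features, n_traits))             # unknown ground truth
Y = X @ true_W + 0.1 * rng.standard_normal((n_faces, n_traits))  # noisy trait ratings

# Fit one linear map from facial features to all 20 trait judgments at once.
X1 = np.column_stack([X, np.ones(n_faces)])   # add an intercept column
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)

# Each column of W is the feature combination driving one trait's perception;
# comparing these columns across traits is the kind of question the dual
# space analysis in the abstract is designed to answer.
Y_hat = X1 @ W
r2 = 1 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()
print(round(r2, 3))
```

Because the regression is linear, the fitted weights remain directly interpretable in the original face-feature space, which is the property the abstract contrasts with convolutional networks.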

Naomi Feldman, University of Maryland

From real to ideal: How listeners cope with variable linguistic input
Language is highly variable. Words are pronounced differently each time, cues to grammatical class are sometimes unreliable, and non-basic sentence types (like questions) have word orders that differ from those in canonical sentences. This variability creates challenges in language learning and perception. I argue that listeners cope with this variability in part by distorting their input in ways that “clean up” irrelevant sources of variability, turning real input into something closer to ideal input. This filtering strategy accounts for some of listeners’ most striking biases in speech perception and allows even very inexperienced language learners to bootstrap into a complex linguistic system.
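One standard way to formalize this "cleaning up" is as Bayesian denoising: the listener treats a noisy percept as a corrupted sample from a linguistic category and recovers an estimate pulled toward the category center, discarding variability the category treats as irrelevant. The parameters below are illustrative assumptions, not fitted values from this work:

```python
# Minimal sketch of input "clean-up" as Bayesian denoising (illustrative
# parameters only). A noisy cue x is modeled as drawn from a category with
# mean mu and variance s_cat**2, then corrupted by noise with variance
# s_noise**2. The posterior mean shrinks the raw percept toward the
# category mean, filtering out irrelevant variability.

def denoise(x, mu, s_cat=1.0, s_noise=2.0):
    w = s_cat**2 / (s_cat**2 + s_noise**2)   # reliability weight on the input
    return w * x + (1 - w) * mu              # posterior mean estimate

raw = 8.0        # heard cue value (arbitrary units)
category = 5.0   # listener's category mean
print(round(denoise(raw, category), 2))  # pulled from 8.0 toward 5.0 -> 5.6
```

The noisier the input channel, the smaller the weight on the raw percept, so listeners with less reliable evidence rely more on their idealized category knowledge.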

Jennifer Trueblood, Vanderbilt University, Estes Early Career Award winner

The Dynamics of Contextual Sensitivity in Multi-Alternative Choice
Every day we make hundreds of choices. Some are seemingly trivial: What cereal should I eat for breakfast? Others have long-lasting implications: What stock should I invest in? Despite their obvious differences, these two decisions have one important thing in common: both can be sensitive to context. That is, our preferences for existing alternatives can be altered by the introduction of new alternatives. This raises the important questions of how preferences for different options are constructed, how they evolve over time, and how contextual sensitivity impacts that process. Dynamic process models provide a way to examine these questions and to explore the underlying cognitive processes involved in choice behavior. In this talk, I will describe how choice and response time data can be used to test different theories of context effects in multi-alternative, multi-attribute choice. I will also discuss how contextual sensitivity can be manipulated by making comparisons between options more difficult.
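Dynamic process models of the kind discussed here jointly predict choices and response times. A generic illustration (not a specific model from the talk) is a noisy evidence-accumulation race among three alternatives, where the first accumulator to reach a threshold determines both the choice and its latency:

```python
import random

def race(drifts, threshold=10.0, noise=1.0, dt=0.1, seed=None):
    """Noisy evidence race: the first accumulator to reach the threshold
    wins. Returns (index of chosen alternative, response time)."""
    rng = random.Random(seed)
    evidence = [0.0] * len(drifts)
    t = 0.0
    while True:
        t += dt
        for i, d in enumerate(drifts):
            # Mean drift plus Gaussian diffusion noise per time step.
            evidence[i] += d * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
            if evidence[i] >= threshold:
                return i, t

# Three alternatives with different mean evidence rates (hypothetical values).
choice, rt = race([1.0, 0.8, 0.3], seed=42)
print(choice, round(rt, 1))
```

Because choice probabilities and response-time distributions fall out of the same accumulation process, models in this family can be tested against both kinds of data at once, which is the leverage the abstract describes for discriminating theories of context effects.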

Leslie Blaha, Pacific Northwest National Laboratory, FABBS Early Career Award winner

How can Machines Understand Humans? Challenges but Mostly Opportunities for Modeling in Human-Machine Teaming
Human-machine teaming is fast becoming a dominant theme in a number of domains and promises to transform how we make decisions and interact with the world. By human-machine team, I refer to systems designed to partner human intelligence with artificial machine intelligence to achieve a common goal. But for a human and machine to truly partner as a team, one of the key challenges we must address is: how can we make machines understand, and get smarter about, their users? Meeting this challenge is an opportunity for mathematical psychology and cognitive modeling to provide the formal methods and models to measure, represent, and interpret human behaviors. In this talk, I will reflect on application-driven basic research, emphasizing how the push for human-machine teaming raises interesting challenges and opportunities to advance both theory and application of human behavior modeling. I will touch on research on real-time cognitive state assessment, where the goal is to have machines learn something about humans. And I will touch on interactive streaming analytics and interactive machine learning, where the goal is to have machines learn from humans. Both perspectives on learning challenge how we conceptualize user behaviors and leverage insights from models to inform the capabilities of human-machine teams.