Dr. Christopher Fisher
Christopher Adam Stevens
Although cognitive models are primarily used to formalize theories of cognition, they could also be applied in artificial intelligence (AI) systems, such as autonomous managers (AMs), which optimize team performance through dynamic task allocation. Cognitive models can be incorporated into an AM's decision system to understand the implications of alternative task distributions. They can also be used as simulated agents to stress-test AMs under a wide range of conditions. In a simulation study, we varied the cognitive model used in the AM's decision system and the cognitive model performing a task to explore the design space of AMs. We found a trade-off between optimality and robustness: complex models performed best when their assumptions were met but were not robust to violations of those assumptions. These results highlight the importance of considering simple models when assumptions could be violated and showcase the utility of cognitive models in AI systems.
Updating people about the actions of others—social communication—is a powerful means by which humans learn about the world and maintain stable societies. However, how the mind/brain achieves this ability computationally remains unclear. Our goal is to model when, how, and why people choose to communicate information about others to others. Here we present our current progress. We first describe our social communication framework, the test paradigm for model development and assessment, and an empirical experiment we conducted to obtain novel data for testing model predictions. We then present our model and compare it with two others. Our model outperformed the alternatives, capturing the main patterns of the empirical data and matching the specific results most closely (i.e., the percentage of cases in which participants decided to communicate about a target individual). Thus, our model successfully simulates human social decision-making, helping us understand how it is achieved by the human mind/brain.
This paper presents a cognitive modelling approach to investigating how students learn computer programming concepts via self-explanation. Self-explanation involves explaining instructional material to oneself by generating inferences about the material. Here, we discuss the potential of self-explanation for the domain of programming and present a preliminary Python ACT-R model of novice and experienced students learning basic Python concepts via self-explanation. The model adds to knowledge of learning via self-explanation in this domain by formalizing the processes involved and by serving as a base model that can be extended to explore and simulate further aspects of this type of student learning.
Prof. Andrea Stocco
Polarization of attitudes is an important, and often troubling or disruptive, effect of interest in many fields. We seek to shed some light on how such polarization arises by applying cognitive architectures to the problem. We created a novel embedding of individual cognitive agents, based on ACT-R’s declarative memory model, into social networks, simulated their communication over time, and observed the evolution of the agents’ attitudes, both collectively and individually. Our primary measures are two Shannon entropies: that of the distribution of attitudes in the final configuration of the whole social network, and that of the distributions of memory traces in the individual agents as the simulation progresses.
Simulations were run over ten different network topologies, using three different distributions of initial attitudes, and five different values of the agents’ memory decay parameter.
These simulations demonstrated that polarization can be understood from social and cognitive perspectives simultaneously, with each perspective providing insight into the system’s behavior.
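The network-level measure described above can be sketched in a few lines. This is a minimal illustration of Shannon entropy over a discrete attitude distribution, not code from the study; the function name and the two-camp attitude encoding (-1/+1) are illustrative assumptions.

```python
from collections import Counter
from math import log2

def shannon_entropy(attitudes):
    """Shannon entropy (in bits) of a discrete distribution of attitudes."""
    counts = Counter(attitudes)
    n = len(attitudes)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Illustrative final network configurations:
# a polarized network split into two equal opposing camps,
# and a network that has reached full consensus.
polarized = [-1] * 50 + [1] * 50   # entropy = 1 bit
consensus = [1] * 100              # entropy = 0 bits
```

Under this measure, an even two-camp split yields exactly one bit of entropy, while consensus collapses to zero, so tracking the entropy over simulated time distinguishes polarization from convergence.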