Virtual ICCM II
The Cooperative Action Task (CAT) is a platform for studying the development of team coordination in complex, dynamic task environments. Teams of four cooperate to play a cooking video game across eight 1-hour sessions. In each session, a team plays eight 5-minute games spread equally across four different kitchen layouts. Team members communicate using gaze cursors that display each player's gaze location. The results of the study reveal that teams used one of two strategies to coordinate in the game: some teams pre-assigned roles to players to increase action predictability, while others dynamically adapted to changing task demands. Teams that used the latter strategy generally scored higher. Additionally, team performance was lower when teams switched between strategies across games in the same task environment.
Prior research has found interference effects (IEs) in decision making, which violate classical probability theory (CPT). We developed a model of IEs called the probability theory + noise (PTN) model and compared its predictions to those of an existing quantum model, the Belief-Action Entanglement (BAE) model. The PTN model assumes that memory operates consistently with CPT, but that noise in the retrieval process produces violations of CPT. Using parameter space partitioning, we found that both models can produce all qualitative patterns of IEs, although the BAE model tends to produce IE distributions with a larger variance than the PTN model. We also show that the PTN model predicts a relationship we term the conditional attack probability equality (CAEP), which is violated in previously reported data. The CAEP holds for the PTN model regardless of the chosen parameter values, whereas the BAE model is not constrained by it.
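For context (our notation, an assumption on our part rather than a formula taken from the abstract): in the categorization-decision paradigm that work on IEs of this kind typically uses, a participant either decides directly whether to attack, or first categorizes the opponent as a "good guy" (G) or "bad guy" (B) and then decides. The IE is the deviation from the law of total probability,

    \[
      \mathrm{IE} \;=\; p(A) \;-\; \bigl[\, p(G)\,p(A \mid G) \;+\; p(B)\,p(A \mid B) \,\bigr],
    \]

where \(p(A)\) is the attack probability in the decision-alone condition. CPT with noiseless retrieval forces \(\mathrm{IE} = 0\), so nonzero IEs are the violations both models must account for.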
Probability theory is often used to model animal behaviour, but a gap often remains between high-level probabilistic models and their realization in neural implementations. In this paper we show how biologically plausible cognitive representations of continuous data, called Spatial Semantic Pointers, can be used to construct single-neuron estimators of probability distributions. These representations form the basis for neural circuits that perform anomaly detection and evidence integration for decision making. We tested these circuits on simple anomaly-detection and decision-making tasks. In the anomaly-detection task, the circuit was asked to determine whether observed data were anomalous under a distribution implied by training data. In the decision-making task, the agent had to determine which of two distributions was most likely to be generating the observed data. In both cases we found that the neural implementations performed comparably to a non-neural Kernel Density Estimator baseline. This work is distinguished from prior approaches to neural probability by its use of neural representations of continuous states, e.g., grid cells or head-direction cells. The circuits in this work provide a basis for further experimentation and for generating hypotheses about behaviour as greater biological fidelity is achieved.
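As a hedged illustration of the two tasks and the non-neural baseline named in the abstract (not the authors' code; the decision rule, threshold, and function names are our assumptions), both tasks can be run with a Gaussian KDE:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Anomaly detection: fit a KDE to training data, flag low-density points.
    train = rng.normal(loc=0.0, scale=1.0, size=500)
    kde = gaussian_kde(train)

    def is_anomalous(x, log_density_threshold=-4.0):
        # Assumed decision rule: low log-density under the KDE = anomaly.
        return float(np.log(kde([x]))[0]) < log_density_threshold

    # Decision making: which of two distributions is generating the data?
    kde_a = gaussian_kde(rng.normal(-1.0, 1.0, size=500))
    kde_b = gaussian_kde(rng.normal(+1.0, 1.0, size=500))

    def decide(observations):
        # Pick the hypothesis with the higher summed log-likelihood.
        ll_a = np.sum(np.log(kde_a(observations)))
        ll_b = np.sum(np.log(kde_b(observations)))
        return "A" if ll_a > ll_b else "B"

    print(is_anomalous(6.0))                      # True: far from training data
    print(decide(rng.normal(1.0, 1.0, size=20)))  # very likely "B"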
Previous work has modeled multiple strategies in a simple fault-finding task. We present multiple models of the strategies that people apply to find faults in a complex circuit, with and without learning, extending multiple-strategy modeling to tasks of higher complexity. The strategies are implemented in Python with a novel approach that combines hierarchical task analysis, the Keystroke-Level Model (KLM), and the power law of practice to predict performance time. To evaluate these models, we used human data from a large study (Ritter et al., 2022), modeling the test-session data, when participants had had more time to learn and develop their strategies. We developed four strategies by analyzing the top six participants, who were 100% correct in the test session. We then compared human performance times with the times predicted by our strategy models, with and without learning. The strategies predict the performance of 62% of the participants. We provide insights into why we sometimes failed to predict performance well.
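To make the prediction approach concrete, here is a minimal sketch, assuming the classic Card, Moran, and Newell operator times; the operator sequence and learning rate are illustrative assumptions, not the paper's actual strategy models:

    # Classic KLM operator times in seconds (Card, Moran, & Newell).
    KLM = {"K": 0.28,  # keystroke (average skilled typist)
           "P": 1.10,  # point with mouse
           "H": 0.40,  # home hands between devices
           "M": 1.35}  # mental preparation

    def klm_time(operators):
        # Predicted first-trial time: the sum of the operator times.
        return sum(KLM[op] for op in operators)

    def power_law(t1, trial, alpha=0.4):
        # Power law of practice: T_n = T_1 * n**(-alpha).
        return t1 * trial ** (-alpha)

    # A hypothetical strategy decomposed via hierarchical task analysis
    # into mental operators, pointing actions, and keystrokes.
    strategy = ["M", "P", "K", "M", "P", "K", "K"]
    t1 = klm_time(strategy)
    for n in (1, 2, 4, 8):
        print(f"trial {n}: {power_law(t1, n):.2f} s")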
We present an example of modeling errors by analyzing error types. For both designing interfaces and understanding learning, it is important to include error analysis to understand where time goes and how learning happens. We examine the errors that participants make while doing a task. We analyzed the errors for each troubleshooting task and each participant, and developed a new way to categorize errors to shed light on modeling them. We also present an updated strategy model that generates and corrects errors, yielding a better correlation with participants' performance.
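A minimal sketch of the idea behind such an error-generating model, with the step time, error rate, and correction cost as purely illustrative assumptions rather than the paper's fitted values:

    import random

    def simulate_trial(steps, step_time=1.0, p_error=0.10, correction_time=2.5):
        # Execute the strategy's steps; each step may produce an error,
        # which then costs extra time to detect and correct.
        total_time, n_errors = 0.0, 0
        for _ in range(steps):
            total_time += step_time
            if random.random() < p_error:
                n_errors += 1
                total_time += correction_time
        return total_time, n_errors

    random.seed(1)
    print(simulate_trial(steps=12))  # -> (predicted time, error count)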