Learning by instruction is one of the most common forms of learning, and a number of research efforts have modeled the cognitive process of instruction following, with many successes. However, most computational models remain brittle with respect to the given instructions and lack the ability to adapt dynamically to variants of the instructions. This paper aims to illustrate modeling constructs designed to make instruction following more robust, including (1) more flexible grounding of language to execution, (2) processing of instructions that allows for inference of implicit instruction knowledge, and (3) dynamic, interactive clarification of instructions during both the learning and execution stages. Examples in the context of a paired-associates task and a visual-search task are discussed.
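A minimal sketch (not the authors' implementation) of two of the constructs named above: flexible grounding of instruction verbs to executable actions, and interactive clarification when grounding fails. All names here (ACTIONS, ground, clarify) are hypothetical and purely illustrative.

```python
# Hypothetical sketch: flexible grounding plus interactive clarification.
# Known instruction verbs map to executable actions; synonyms ground to the
# same action, and unfamiliar verbs trigger a clarifying question instead of failure.

ACTIONS = {
    "press": lambda target: f"press({target})",
    "click": lambda target: f"press({target})",   # synonym grounded to the same action
    "remember": lambda target: f"store({target})",
}

def clarify(verb):
    """Stand-in for an interactive clarification step: ask the instructor
    which known action the unfamiliar verb corresponds to."""
    answer = input(f"I don't know how to '{verb}'. Which known action is it like? ")
    return ACTIONS.get(answer.strip())

def ground(verb, target):
    """Map an instruction verb to an executable action, falling back to
    clarification rather than failing on an unfamiliar variant."""
    action = ACTIONS.get(verb)
    if action is None:
        action = clarify(verb)
        if action is not None:
            ACTIONS[verb] = action   # the learned grounding persists for later instructions
    return action(target) if action else None

print(ground("press", "button-A"))   # direct grounding
print(ground("push", "button-A"))    # triggers clarification, then reuses the answer
```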
Fatigue is common across many occupational fields and often degrades operator performance and health. Biomathematical models of fatigue have become useful tools in several fatigue risk management programs. However, these tools remain limited in identifying the specific performance outcomes affected by fatigue and in tailoring fatigue estimates to individual operators. Integrating computational cognitive models with biomathematical models can help address these issues in a complex operational context. The current effort aims to develop an integrated model of fatigue in the context of C-17 approach and landing operations. Specifically, we integrate a biomathematical fatigue model with a task network model to estimate performance degradation due to fatigue. This paper outlines the development of the task network model and its integration with the biomathematical fatigue model.
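A minimal sketch of one way such an integration could work, assuming the biomathematical model outputs a time-varying effectiveness score (100 = fully rested, as in SAFTE-style models) that moderates task times in a serial task network. The task names, durations, and the linear moderation function below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: a biomathematical effectiveness score moderating
# task completion times in a simple serial task network.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    nominal_time_s: float   # task completion time when fully rested

def fatigue_multiplier(effectiveness: float) -> float:
    """Map effectiveness (100 = fully rested) to a time penalty.
    Linear moderation is an assumption for illustration only."""
    return 1.0 + (100.0 - effectiveness) / 100.0

def run_network(tasks, effectiveness):
    """Execute tasks serially, inflating each task time by the fatigue multiplier."""
    total = 0.0
    for task in tasks:
        total += task.nominal_time_s * fatigue_multiplier(effectiveness)
    return total

# Illustrative approach-and-landing subtasks (names and times are made up).
approach_and_landing = [
    Task("configure_aircraft", 30.0),
    Task("intercept_glidepath", 45.0),
    Task("flare_and_touchdown", 15.0),
]

print(run_network(approach_and_landing, effectiveness=95))  # rested
print(run_network(approach_and_landing, effectiveness=70))  # fatigued: longer total time
```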
One of the hallmarks of expert performance in complex, dynamic tasks is the ability to select and perform the appropriate action within a constantly shifting environment, often under tight time constraints. In an example task, the video game Tetris, expert players select a placement position for the active zoid and navigate it into place in increasingly short time spans. Machine models of the same task can produce human-like performance patterns, but they either ignore or only roughly approximate the time constraints that seem to be an integral part of human behavior. Using a set of scaled time parameters derived from a large sample of human subjects, we trained and tested an existing machine Tetris model and observed the resulting changes in performance and behavior.
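An illustrative sketch only, not the authors' model: one way human-derived, level-scaled time budgets could be imposed on a machine Tetris player's decision loop. The per-level latencies, the scaling, and the anytime-search interface below are all assumptions made for the example.

```python
# Hypothetical sketch: an anytime placement decision constrained by a
# time budget scaled from human decision latencies.

import time

HUMAN_MEDIAN_LATENCY_S = {0: 2.0, 5: 1.2, 9: 0.6}   # assumed per-level latencies

def time_budget(level, scale=1.0):
    """Scale a human-derived latency into the model's per-decision budget."""
    known = max(l for l in HUMAN_MEDIAN_LATENCY_S if l <= level)
    return HUMAN_MEDIAN_LATENCY_S[known] * scale

def choose_placement(candidates, evaluate, budget_s):
    """Anytime decision loop: keep the best placement found before time runs out."""
    deadline = time.monotonic() + budget_s
    best, best_score = None, float("-inf")
    for placement in candidates:
        if time.monotonic() > deadline:
            break   # the time constraint forces a possibly suboptimal choice
        score = evaluate(placement)
        if score > best_score:
            best, best_score = placement, score
    return best

# Example: toy candidate placements scored by a dummy evaluator.
placements = [{"column": c, "rotation": r} for c in range(10) for r in range(4)]
print(choose_placement(placements, lambda p: -abs(p["column"] - 4), time_budget(level=5)))
```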
Trust calibration in a human-autonomy team is the process by which a human adjusts their understanding of the automation’s capabilities; it is needed to engender appropriate reliance on the automation. Herein, we develop an Instance-based Learning Theory ACT-R model of decisions to obtain and rely on an automated assistant for visual search in a UAV interface. We demonstrate that the model closely matches the human predictive-power statistics that measure reliance calibration, and we obtain from the model an internal estimate of automation reliability that mirrors human subjective ratings. Our model is a promising step toward a computational process model of trust and reliance in human-machine teaming.
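A simplified sketch of the instance-based learning mechanism the abstract refers to, not the authors' ACT-R model: each past experience with the automated assistant is stored as an instance, and options ("rely" vs. "search manually") are compared by blending stored outcomes weighted by activation-based retrieval probabilities. The decay and temperature parameters, payoffs, and timestamps below are illustrative assumptions.

```python
# Hypothetical, simplified IBL blending: activation from recency/frequency,
# Boltzmann-weighted blending of stored outcomes per option.

import math

D, TAU = 0.5, 0.25        # assumed decay and blending temperature

def activation(occurrences, now):
    """Base-level activation from the times an instance was observed."""
    return math.log(sum((now - t) ** -D for t in occurrences))

def blended_value(instances, now):
    """Probability-weighted (Boltzmann) average of stored outcomes for one option."""
    acts = {outcome: activation(times, now) for outcome, times in instances.items()}
    weights = {outcome: math.exp(a / TAU) for outcome, a in acts.items()}
    z = sum(weights.values())
    return sum(outcome * w for outcome, w in weights.items()) / z

# Stored payoffs and the time steps at which they were experienced (made up).
rely_on_automation = {1.0: [1, 3, 4], -1.0: [2]}   # mostly correct, one miss
search_manually    = {0.5: [1, 2, 3, 4]}           # slower but consistent

now = 5
print("rely:", blended_value(rely_on_automation, now))
print("self:", blended_value(search_manually, now))
```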
This paper presents an analysis of a cognitive twin implemented in a cognitive architecture. The cognitive twin is intended to be a personal assistant that learns to make decisions from a user's past behavior. In this proof-of-concept case, the cognitive twin selects attendees for a party based on what it has learned (through ratings) about the agent's social network. We evaluate two versions of the model with respect to the rate of change in the social network, the noise in the rating data, and the sparsity of the data.
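An illustrative sketch only, not the cognitive-architecture implementation described above: a minimal version of the decision, in which the twin averages past ratings of people in the social network and invites the highest-rated ones. The names, ratings, and party size are made up; noise and sparsity would appear here as variability and missing entries in the observed ratings.

```python
# Hypothetical sketch: learn preferences from past ratings, then select attendees.

from collections import defaultdict

def learn_preferences(ratings):
    """Average the observed ratings for each person in the social network."""
    totals, counts = defaultdict(float), defaultdict(int)
    for person, score in ratings:
        totals[person] += score
        counts[person] += 1
    return {person: totals[person] / counts[person] for person in totals}

def select_attendees(preferences, party_size):
    """Invite the party_size people with the highest learned preference."""
    ranked = sorted(preferences, key=preferences.get, reverse=True)
    return ranked[:party_size]

observed_ratings = [("Ana", 5), ("Ana", 4), ("Ben", 2), ("Chen", 4), ("Ben", 3)]
prefs = learn_preferences(observed_ratings)
print(select_attendees(prefs, party_size=2))   # ['Ana', 'Chen']
```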