Development of a computational model of explanation to support Explainable Artificial Intelligence (XAI)

Authors
Shane Mueller
Michigan Technological University, Dept. of Cognitive and Learning Science
Abstract

Recent advances in neural networks and deep reinforcement learning (e.g., for image/video classification, natural language processing, autonomy, and other applications) have begun to produce AI systems that are highly capable, but often fail in unexpected ways that are hard to understand. Because of their complexity and opacity, an Explainable AI community has re-emerged with the goal of developing algorithms that can help developers, users, and other stakeholders understand how these systems work. However, the explanations produced by these systems are generally not guided by psychological theory, but rather by unprincipled notions of what might help a user understand a complex system. To address this, we have developed a psychological theory of explanation implemented as a mathematical/computational model. The model describes how users engage in sensemaking and learning to develop a mental model of a complex process, and emphasizes two levels of learning that map onto System 1 (intuitive, feedback-based tuning of a mental model) and System 2 (construction, reconfiguration, and hypothesis testing of a mental model) processes. These elements of explanatory reasoning map onto two important areas of research within the mathematical psychology community: feedback-based cue/category learning (e.g., Gluck & Bower, 1988), and knowledge-space descriptions of learning (Doignon & Falmagne, 1985). We will describe a mathematical/computational model that integrates these two levels, and discuss how this model enables better understanding of the explanation needed for various AI systems. This work was done in collaboration with Lamia Alam, Tauseef Mamun, Robert R. Hoffman, and Gary L. Klein.
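
To make the two levels concrete, the following is a minimal, hypothetical Python sketch (not the authors' model): System 1 is illustrated as delta-rule cue/category tuning in the spirit of Gluck & Bower (1988), and System 2 as stepwise traversal of a small knowledge space in the spirit of Doignon & Falmagne (1985). The cues, concepts, and prerequisite relation are illustrative assumptions, not content from the presentation.

import random

# System 1: feedback-based cue/category learning (delta-rule style).
# Associative weights from cues to a category are tuned by prediction error
# after each feedback trial.
def delta_rule_update(weights, cues, outcome, lr=0.1):
    """weights: dict cue -> float; cues: set of present cues; outcome: 0/1 feedback."""
    prediction = sum(weights.get(c, 0.0) for c in cues)
    error = outcome - prediction
    for c in cues:
        weights[c] = weights.get(c, 0.0) + lr * error
    return weights

# System 2: knowledge-space view of learning.
# The learner's mental model is a state (a set of mastered concepts); an
# explanation moves the learner one concept at a time, respecting prerequisites.
PREREQS = {                       # hypothetical prerequisite relation
    "what_cues_matter": set(),
    "how_cues_combine": {"what_cues_matter"},
    "when_model_fails": {"how_cues_combine"},
}

def learnable(state, prereqs=PREREQS):
    """Concepts not yet known whose prerequisites are all satisfied."""
    return {c for c, pre in prereqs.items() if c not in state and pre <= state}

def explain_step(state, prereqs=PREREQS):
    """One System 2 'explanation' step: acquire a concept from the learnable fringe."""
    fringe = learnable(state, prereqs)
    return state | {random.choice(sorted(fringe))} if fringe else state

if __name__ == "__main__":
    # System 1 demo: tuning cue weights from three feedback trials.
    w = {}
    for cues, outcome in [({"edges", "texture"}, 1), ({"texture"}, 0), ({"edges"}, 1)]:
        w = delta_rule_update(w, cues, outcome)
    print("System 1 weights:", w)

    # System 2 demo: moving through the knowledge space via explanation.
    state = set()
    for _ in range(3):
        state = explain_step(state)
    print("System 2 knowledge state:", state)

Run as a script, this prints tuned cue weights (System 1) and the knowledge state reached after three explanation steps (System 2); an integrated model of explanation would couple these levels rather than run them independently.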


Cite this as:

Mueller, S. (2020, November). Development of a computational model of explanation to support Explainable Artificial Intelligence (XAI). Paper presented at MathPsych at Virtual Psychonomics 2020. Via mathpsych.org/presentation/307.