How to Answer Why -- Evaluating the Explanations of AI Through Mental
Model Analysis
- URL: http://arxiv.org/abs/2002.02526v1
- Date: Sat, 11 Jan 2020 17:15:58 GMT
- Title: How to Answer Why -- Evaluating the Explanations of AI Through Mental
Model Analysis
- Authors: Tim Schrills, Thomas Franke
- Abstract summary: A key question for human-centered AI research is how to validly survey users' mental models.
We evaluate whether mental model analysis is suitable as an empirical research method.
We propose an exemplary method for evaluating explainable AI approaches in a human-centered way.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To achieve optimal human-system integration in the context of user-AI
interaction, it is important that users develop a valid representation of how the
AI works. In most everyday interactions with technical systems, users construct
mental models (i.e., an abstraction of the anticipated mechanisms a system uses
to perform a given task). If no explicit explanations are provided by the
system (e.g., by a self-explaining AI) or by other sources (e.g., an
instructor), the mental model is typically formed from experience, i.e., from
the user's observations during the interaction. The congruence of this mental
model with the actual system's functioning is vital, as it is used for
assumptions, predictions, and consequently for decisions regarding system use.
A key question for human-centered AI research is therefore how to validly
survey users' mental models. The objective of the present research is to
identify suitable elicitation methods for mental model analysis. We evaluated
whether mental model analysis is suitable as an empirical research method.
Additionally, methods of cognitive tutoring are integrated. We propose an
exemplary method for evaluating explainable AI approaches in a human-centered way.
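To illustrate one way mental model congruence could be probed empirically, here is a minimal sketch, assuming a prediction-task elicitation (our own illustration, not the method the paper proposes): the user predicts the system's output for a set of cases, and congruence is scored as the agreement rate.

```python
def mental_model_congruence(user_predictions, system_outputs):
    """Fraction of cases where the user's predicted system behavior
    matches what the system actually did (1.0 = perfect congruence)."""
    assert len(user_predictions) == len(system_outputs)
    matches = sum(u == s for u, s in zip(user_predictions, system_outputs))
    return matches / len(system_outputs)

# Hypothetical example: a user predicts a spam filter's verdicts on five emails.
user_says = ["spam", "ham", "spam", "spam", "ham"]
system_did = ["spam", "ham", "ham", "spam", "ham"]
print(mental_model_congruence(user_says, system_did))  # 0.8
```

A richer instrument would also elicit the user's confidence in each prediction, separating a wrong mental model from a weakly held one.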
Related papers
- Human-Modeling in Sequential Decision-Making: An Analysis through the Lens of Human-Aware AI [20.21053807133341]
We try to provide an account of what constitutes a human-aware AI system.
We see that human-aware AI is a design-oriented paradigm, one that focuses on the need to model the humans it may interact with.
arXiv Detail & Related papers (2024-05-13T14:17:52Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning in AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
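For readers unfamiliar with the technique, a saliency map scores input regions by their influence on a prediction. Below is a minimal, framework-free sketch of occlusion-based saliency, one member of this family; `toy_predict` is a hypothetical stand-in, not a model from the paper.

```python
import numpy as np

def occlusion_saliency(predict, image, target_class, patch=8, stride=8):
    """Score each region by how much masking it lowers the target-class score."""
    base = predict(image)[target_class]
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0  # mask this patch
            # A large score drop means the model relied on this region.
            saliency[y:y + patch, x:x + patch] = base - predict(occluded)[target_class]
    return saliency

# Demo with a toy "model" that only looks at the top-left quadrant.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
toy_predict = lambda im: np.array([im[:8, :8].sum(), im[8:, 8:].sum()])
print(occlusion_saliency(toy_predict, img, target_class=0))
```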
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Modeling Human Behavior Part I -- Learning and Belief Approaches [0.0]
Next-generation autonomous and adaptive systems will largely involve AI agents and humans working together as teams.
We focus on techniques that learn a model or policy of behavior through exploration and feedback.
arXiv Detail & Related papers (2022-05-13T07:33:49Z)
- Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems [7.578368459974474]
We discuss the evaluation of users' mental models of system logic.
Mental models are challenging to capture and analyze.
By asking users to describe what they know and how they know it, researchers can collect structured, time-ordered insight.
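As an illustration of such structured, time-ordered records, here is a minimal sketch of a micro-entry log; the schema (`belief`, `evidence`) is our own assumption, not the authors' instrument.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MicroEntry:
    """One self-report: what the user believes and what that belief rests on."""
    belief: str    # answer to "what do you know about the system's logic?"
    evidence: str  # answer to "how do you know it?"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log = [MicroEntry(
    belief="The recommender weights recent clicks more than old ones",
    evidence="Items I clicked today dominate my home page")]

# Time-ordered entries let researchers trace how a mental model evolves.
for entry in sorted(log, key=lambda e: e.timestamp):
    print(entry.timestamp.isoformat(), "|", entry.belief)
```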
arXiv Detail & Related papers (2020-09-02T18:27:04Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- The Grammar of Interactive Explanatory Model Analysis [7.812073412066698]
We show how different Explanatory Model Analysis (EMA) methods complement each other.
We formalize the grammar of Interactive EMA (IEMA) to describe potential human-model dialogues.
IEMA is implemented in a widely used human-centered open-source software framework.
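To make the idea of a dialogue grammar concrete, here is a minimal sketch under our own assumptions (not the paper's formalization): the grammar is modeled as licensed transitions between explanation methods, and a dialogue is well-formed if every consecutive step follows a transition.

```python
# Nodes are EMA methods; edges are follow-up analyses a user may ask for next.
GRAMMAR = {
    "prediction": {"feature_attribution", "what_if"},
    "feature_attribution": {"what_if", "feature_profile"},
    "what_if": {"feature_attribution", "feature_profile"},
    "feature_profile": {"prediction"},
}

def is_well_formed(dialogue):
    """Check that each consecutive pair of steps is a licensed transition."""
    return all(b in GRAMMAR.get(a, set()) for a, b in zip(dialogue, dialogue[1:]))

print(is_well_formed(["prediction", "feature_attribution", "what_if"]))  # True
print(is_well_formed(["feature_profile", "what_if"]))                    # False
```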
arXiv Detail & Related papers (2020-05-01T17:12:22Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
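A minimal sketch of what information on demand could look like, assuming explanations are stored per prediction and surfaced only when asked (an illustration of ours, not the paper's framework):

```python
class OnDemandExplainer:
    """Illustrative only: answers "why" questions from stored attributions."""
    def __init__(self, prediction, attributions):
        self.prediction = prediction      # e.g. "loan denied"
        self.attributions = attributions  # feature -> signed contribution

    def why(self, top_k=2):
        # Start coarse and add detail on request, mimicking how
        # human-made explanations unfold in conversation.
        ranked = sorted(self.attributions.items(),
                        key=lambda kv: abs(kv[1]), reverse=True)
        reasons = ", ".join(f"{name} ({w:+.2f})" for name, w in ranked[:top_k])
        return f"{self.prediction}, mainly because of: {reasons}"

exp = OnDemandExplainer("loan denied",
                        {"income": -0.40, "debt_ratio": +0.55, "age": +0.05})
print(exp.why())          # coarse answer
print(exp.why(top_k=3))   # more detail on demand
```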
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.