Explanation Ontology: A Model of Explanations for User-Centered AI
- URL: http://arxiv.org/abs/2010.01479v1
- Date: Sun, 4 Oct 2020 03:53:35 GMT
- Title: Explanation Ontology: A Model of Explanations for User-Centered AI
- Authors: Shruthi Chari, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman,
Amar K. Das, Deborah L. McGuinness
- Abstract summary: Explanations have often been added to an AI system in a non-principled, post-hoc manner.
With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration.
We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types.
- Score: 3.1783442097247345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainability has been a goal for Artificial Intelligence (AI) systems since
their conception, with the need for explainability growing as more complex AI
models are increasingly used in critical, high-stakes settings such as
healthcare. Explanations have often been added to an AI system in a non-principled,
post-hoc manner. With greater adoption of these systems and emphasis on
user-centric explainability, there is a need for a structured representation
that treats explainability as a primary consideration, mapping end user needs
to specific explanation types and the system's AI capabilities. We design an
explanation ontology to model both the role of explanations, accounting for the
system and user attributes in the process, and the range of different
literature-derived explanation types. We indicate how the ontology can support
user requirements for explanations in the domain of healthcare. We evaluate our
ontology with a set of competency questions geared towards a system designer
who might use our ontology to decide which explanation types to include, given
a combination of users' needs and a system's capabilities, both in system
design settings and in real-time operations. Through the use of this ontology,
system designers will be able to make informed choices on which explanations AI
systems can and should provide.
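To make the intended workflow concrete, the sketch below shows how a system designer might pose such a competency question programmatically. It is a minimal sketch only: it assumes a local RDF serialization of an explanation ontology, and the namespace, file path, class name (eo:Explanation), properties (eo:requiresCapability, eo:hasCapability), and individual (eo:MySystem) are illustrative placeholders, not the published Explanation Ontology vocabulary.
```python
# Hypothetical sketch: answer a competency question such as
# "Which explanation types can this system support, given its AI capabilities?"
# against an OWL/RDF explanation ontology using rdflib. All IRIs, names, and
# the file path are illustrative placeholders.
from rdflib import Graph, Namespace

EO = Namespace("http://example.org/explanation-ontology#")  # placeholder namespace

g = Graph()
g.parse("explanation-ontology.ttl", format="turtle")  # hypothetical local copy of the ontology

# Competency-question-style SPARQL query: select explanation types for which
# every required AI capability is available in the system under design.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX eo:   <http://example.org/explanation-ontology#>
SELECT DISTINCT ?explanationType WHERE {
    ?explanationType rdfs:subClassOf eo:Explanation .
    FILTER NOT EXISTS {
        # a required capability (hypothetical property) the system lacks
        ?explanationType eo:requiresCapability ?capability .
        FILTER NOT EXISTS { eo:MySystem eo:hasCapability ?capability . }
    }
}
"""

for row in g.query(query):
    print(row.explanationType)
```
In practice, the query could be extended with the user's question type or other user attributes modeled in the ontology to narrow the candidate explanation types further.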
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., on model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches aimed at better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - "Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI [2.5899040911480173]
We explore the features of explanations and how to use those features in evaluating their utility.
We focus on the requirements for explanations defined by their functional role, the knowledge states of users who are trying to understand them, and the availability of the information needed to generate them.
arXiv Detail & Related papers (2022-06-27T21:42:53Z) - Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z) - This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Explanation Ontology in Action: A Clinical Use-Case [3.1783442097247345]
We provide step-by-step guidance for system designers to utilize our Explanation Ontology.
We provide a detailed example with our utilization of this guidance in a clinical setting.
arXiv Detail & Related papers (2020-10-04T03:52:39Z) - Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system different from the one for which it was originally developed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z) - Foundations of Explainable Knowledge-Enabled Systems [3.7250420821969827]
We present a historical overview of explainable artificial intelligence systems.
We focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains.
We propose new definitions for explanations and explainable knowledge-enabled systems.
arXiv Detail & Related papers (2020-03-17T04:18:48Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.