Directions for Explainable Knowledge-Enabled Systems
- URL: http://arxiv.org/abs/2003.07523v1
- Date: Tue, 17 Mar 2020 04:34:29 GMT
- Title: Directions for Explainable Knowledge-Enabled Systems
- Authors: Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L.
McGuinness
- Abstract summary: We leverage our survey of explanation literature in Artificial Intelligence and closely related fields to generate a set of explanation types.
We define each type and provide an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in their generation and prioritization of requirements.
- Score: 3.7250420821969827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interest in the field of Explainable Artificial Intelligence has been growing
for decades and has accelerated recently. As Artificial Intelligence models
have become more complex, and often more opaque, with the incorporation of
complex machine learning techniques, explainability has become more critical.
Recently, researchers have been investigating and tackling explainability with
a user-centric focus, looking for explanations to consider trustworthiness,
comprehensibility, explicit provenance, and context-awareness. In this chapter,
we leverage our survey of explanation literature in Artificial Intelligence and
closely related fields and use these past efforts to generate a set of
explanation types that we feel reflect the expanded needs of explanation for
today's artificial intelligence applications. We define each type and provide
an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in
their generation and prioritization of requirements and further help generate
explanations that are better aligned to users' and situational needs.
Related papers
- Automated Explanation Selection for Scientific Discovery [0.0]
We propose a cycle of scientific discovery that combines machine learning with automated reasoning for the generation and the selection of explanations.
We present a taxonomy of explanation selection problems that draws on insights from sociology and cognitive science.
arXiv Detail & Related papers (2024-07-24T17:41:32Z)
- SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge [60.76719375410635]
We propose a new benchmark (SOK-Bench) consisting of 44K questions and 10K situations with instance-level annotations depicted in the videos.
The reasoning process must understand and apply both situated knowledge and general knowledge for problem-solving.
We generate associated question-answer pairs and reasoning processes, followed by manual review for quality assurance.
arXiv Detail & Related papers (2024-05-15T21:55:31Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Attribution-Scores and Causal Counterfactuals as Explanations in Artificial Intelligence [0.0]
We highlight the relevance of explanations for artificial intelligence in general, and for the newer developments in explainable AI.
We describe, in simple terms, explanations in data management and machine learning that are based on attribution scores, and counterfactuals as found in the area of causality.
arXiv Detail & Related papers (2023-03-06T01:46:51Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Explainable Machine Learning with Prior Knowledge: An Overview [1.1045760002858451]
The growing complexity of machine learning models has spurred research into making them more explainable.
We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models.
arXiv Detail & Related papers (2021-05-21T07:33:22Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Foundations of Explainable Knowledge-Enabled Systems [3.7250420821969827]
We present a historical overview of explainable artificial intelligence systems.
We focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains.
We propose new definitions for explanations and explainable knowledge-enabled systems.
arXiv Detail & Related papers (2020-03-17T04:18:48Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.