Levels of explainable artificial intelligence for human-aligned
conversational explanations
- URL: http://arxiv.org/abs/2107.03178v1
- Date: Wed, 7 Jul 2021 12:19:16 GMT
- Title: Levels of explainable artificial intelligence for human-aligned
conversational explanations
- Authors: Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil
Aryal, Francisco Cruz
- Abstract summary: People are affected by autonomous decisions every day and need to understand the decision-making process to accept the outcomes.
This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system.
- Score: 0.6571063542099524
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Over the last few years there has been rapid research growth into eXplainable
Artificial Intelligence (XAI) and the closely aligned Interpretable Machine
Learning (IML). Drivers for this growth include recent legislative changes and
increased investments by industry and governments, along with increased concern
from the general public. People are affected by autonomous decisions every day
and the public need to understand the decision-making process to accept the
outcomes. However, the vast majority of the applications of XAI/IML are focused
on providing low-level `narrow' explanations of how an individual decision was
reached based on a particular datum. While important, these explanations rarely
provide insights into an agent's: beliefs and motivations; hypotheses of other
(human, animal or AI) agents' intentions; interpretation of external cultural
expectations; or, processes used to generate its own explanation. Yet all of
these factors, we propose, are essential to providing the explanatory depth
that people require to accept and trust the AI's decision-making. This paper
aims to define levels of explanation and describe how they can be integrated to
create a human-aligned conversational explanation system. In so doing, this
paper will survey current approaches and discuss the integration of different
technologies to achieve these levels with Broad eXplainable Artificial
Intelligence (Broad-XAI), and thereby move towards high-level `strong'
explanations.
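The abstract's distinction between low-level `narrow' explanations and the deeper factors (beliefs and motivations, hypotheses about other agents, cultural expectations, and explanation of the explanation process itself) can be pictured as an ordered hierarchy. The sketch below is an illustration inferred from the abstract only; the level names and the `deepest_level` helper are hypothetical, not the paper's actual definitions.

```python
from enum import IntEnum

class ExplanationLevel(IntEnum):
    """Hypothetical ordering of the explanatory factors named in the abstract."""
    NARROW = 0     # how an individual decision followed from a particular datum
    BELIEFS = 1    # the agent's own beliefs and motivations
    OTHERS = 2     # hypotheses of other (human, animal or AI) agents' intentions
    CULTURAL = 3   # interpretation of external cultural expectations
    META = 4       # the process used to generate the explanation itself

def deepest_level(levels: set[ExplanationLevel]) -> ExplanationLevel:
    """Return the deepest explanatory level a system covers."""
    return max(levels)

# Most current XAI/IML systems cover only the narrow level; a Broad-XAI
# system, as proposed, would need to integrate all five.
narrow_xai = {ExplanationLevel.NARROW}
broad_xai = set(ExplanationLevel)
```

Under this reading, `deepest_level(narrow_xai)` is `NARROW` while `deepest_level(broad_xai)` is `META`, mirroring the paper's move from `narrow' towards `strong' explanations.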
Related papers
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Making Things Explainable vs Explaining: Requirements and Challenges under the GDPR [2.578242050187029]
ExplanatorY AI (YAI) builds over XAI with the goal to collect and organize explainable information.
We represent the problem of generating explanations for Automated Decision-Making systems (ADMs) into the identification of an appropriate path over an explanatory space.
arXiv Detail & Related papers (2021-10-02T08:48:47Z)
- An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability [3.04585143845864]
We present a new model-agnostic metric to measure the Degree of eXplainability of correct information in an objective way.
We designed a few experiments and a user-study on two realistic AI-based systems for healthcare and finance.
arXiv Detail & Related papers (2021-09-11T17:44:13Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions [4.169915659794568]
In decision-making scenarios, people need more awareness of how AI works and its outcomes to build a relationship with that system.
arXiv Detail & Related papers (2021-02-10T14:28:34Z)
- Who is this Explanation for? Human Intelligence and Knowledge Graphs for eXplainable AI [0.0]
We focus on the contributions that Human Intelligence can bring to eXplainable AI.
We call for a better interplay between Knowledge Representation and Reasoning, Social Sciences, Human Computation and Human-Machine Cooperation research.
arXiv Detail & Related papers (2020-05-27T10:47:15Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.