Argument Schemes for Explainable Planning
- URL: http://arxiv.org/abs/2005.05849v1
- Date: Tue, 12 May 2020 15:09:50 GMT
- Title: Argument Schemes for Explainable Planning
- Authors: Quratul-ain Mahesar and Simon Parsons
- Abstract summary: In this paper, we use argumentation to provide explanations in the domain of AI planning.
We present argument schemes to create arguments that explain a plan and its components.
We also present a set of critical questions that allow interaction between the arguments and enable the user to obtain further information.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is being increasingly used to develop systems
that produce intelligent solutions. However, there is a major concern about
whether the systems built will be trusted by humans. In order to establish
trust in AI systems, users need to understand the reasoning behind the
systems' solutions; therefore, a system should be able to explain and
justify its output. In this paper, we use argumentation to provide explanations
in the domain of AI planning. We present argument schemes to create arguments
that explain a plan and its components; and a set of critical questions that
allow interaction between the arguments and enable the user to obtain further
information regarding the key elements of the plan. Finally, we present some
properties of the plan arguments.
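To make the idea of argument schemes and critical questions more concrete, below is a minimal Python sketch of how such a scheme might be encoded and instantiated for a plan action. The scheme name, premise wording, critical questions, and the Blocksworld-style action names are invented for this illustration and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgumentScheme:
    """Hypothetical encoding of an argument scheme: premise templates, a
    conclusion template, and critical questions that can challenge it."""
    name: str
    premises: List[str]
    conclusion: str
    critical_questions: List[str] = field(default_factory=list)

# Illustrative scheme for explaining why an action appears in a plan.
# The premise and question wording here is an assumption for this sketch.
action_in_plan = ArgumentScheme(
    name="ActionInPlan",
    premises=[
        "action {a} is applicable in state {s}",
        "executing {a} achieves effect {e}",
        "effect {e} is needed to reach goal {g}",
    ],
    conclusion="action {a} should be part of the plan for goal {g}",
    critical_questions=[
        "Is {a} really applicable in state {s}?",
        "Could a different action achieve {e}?",
        "Is {e} actually required for goal {g}?",
    ],
)

def explain(scheme: ArgumentScheme, **bindings: str) -> str:
    """Instantiate the scheme with concrete plan elements to build an explanation."""
    premises = "; ".join(p.format(**bindings) for p in scheme.premises)
    return f"Because {premises}, therefore {scheme.conclusion.format(**bindings)}."

# Example instantiation with made-up Blocksworld-style plan elements.
bindings = dict(a="unstack(B, A)", s="s0", e="holding(B)", g="on(B, C)")
print(explain(action_in_plan, **bindings))
for q in action_in_plan.critical_questions:
    print("CQ:", q.format(**bindings))
```

In this sketch, the critical questions are what the user would pose back to the system to request further information about a specific plan element, which is the interactive role the abstract describes for them.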
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Learning to Solve Geometry Problems via Simulating Human Dual-Reasoning Process [84.49427910920008]
Geometry Problem Solving (GPS) has attracted much attention in recent years.
It requires a solver to comprehensively understand both text and diagram, master essential geometry knowledge, and appropriately apply it in reasoning.
Existing works follow a paradigm of neural machine translation and only focus on enhancing the capability of encoders, which neglects the essential characteristics of human geometry reasoning.
arXiv Detail & Related papers (2024-05-10T03:53:49Z) - Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that, in order to fully understand a decision, not only is knowledge about relevant features needed, but awareness of irrelevant information also contributes substantially to the creation of a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods.
arXiv Detail & Related papers (2022-07-19T16:20:37Z) - Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Scope and Sense of Explainability for AI-Systems [0.0]
Emphasis will be given to difficulties related to the explainability of highly complex and efficient AI systems.
Arguments are elaborated supporting the notion that if AI solutions were discarded in advance because they are not thoroughly comprehensible, a great deal of the potential of intelligent systems would be wasted.
arXiv Detail & Related papers (2021-12-20T14:25:05Z) - Knowledge-intensive Language Understanding for Explainable AI [9.541228711585886]
It is crucial to understand how AI-led decisions are made and which determining factors were included.
It is critical to have human-centered explanations that are directly related to decision-making.
It is necessary to involve explicit domain knowledge that humans understand and use.
arXiv Detail & Related papers (2021-08-02T21:12:30Z) - Argument Schemes and Dialogue for Explainable Planning [3.2741749231824904]
We propose an argument scheme-based approach to provide explanations in the domain of AI planning.
We present novel argument schemes to create arguments that explain a plan and its key elements.
We also present a novel dialogue system using the argument schemes and critical questions for providing interactive dialectical explanations.
arXiv Detail & Related papers (2021-01-07T17:43:12Z) - Argumentation-based Agents that Explain their Decisions [0.0]
We focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can be able to generate explanations about their reasoning.
Our proposal is based on argumentation theory, we use arguments to represent the reasons that lead an agent to make a decision.
We propose two types of explanations: the partial one and the complete one.
arXiv Detail & Related papers (2020-09-13T02:08:10Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.