Argument Schemes and Dialogue for Explainable Planning
- URL: http://arxiv.org/abs/2101.02648v2
- Date: Sun, 14 Feb 2021 23:03:42 GMT
- Title: Argument Schemes and Dialogue for Explainable Planning
- Authors: Quratul-ain Mahesar and Simon Parsons
- Abstract summary: We propose an argument scheme-based approach to provide explanations in the domain of AI planning.
We present novel argument schemes to create arguments that explain a plan and its key elements.
We also present a novel dialogue system using the argument schemes and critical questions for providing interactive dialectical explanations.
- Score: 3.2741749231824904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is increasingly being deployed in practical
applications. However, a major concern is whether AI systems will be
trusted by humans. To establish trust in AI systems, users need to
understand the reasoning behind their solutions; systems should therefore
be able to explain and justify their output. In this paper, we
propose an argument scheme-based approach to provide explanations in the domain
of AI planning. We present novel argument schemes to create arguments that
explain a plan and its key elements; and a set of critical questions that allow
interaction between the arguments and enable the user to obtain further
information regarding the key elements of the plan. Furthermore, we present a
novel dialogue system using the argument schemes and critical questions for
providing interactive dialectical explanations.
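To make the approach concrete, here is a minimal Python sketch of how an argument scheme might pair premises and a conclusion with critical questions that drive an interactive dialogue. The scheme name, fields, and dialogue move below are illustrative assumptions, not the paper's formalization.

```python
from dataclasses import dataclass


@dataclass
class ArgumentScheme:
    """A stereotypical pattern of reasoning about a plan element.

    All names and fields are illustrative assumptions; the paper
    defines its own schemes for plans, actions, goals, and states.
    """
    name: str
    premises: list[str]
    conclusion: str
    critical_questions: list[str]  # challenges a user may pose


# A hypothetical scheme justifying an action's presence in a plan.
action_scheme = ArgumentScheme(
    name="ActionInPlan",
    premises=[
        "The preconditions of action a hold in state s",
        "The effects of action a contribute to goal g",
    ],
    conclusion="Action a is justified at this point in the plan",
    critical_questions=[
        "Do the preconditions of a actually hold in s?",
        "Is there an alternative action that also achieves g?",
    ],
)


def answer_critical_question(scheme: ArgumentScheme, i: int) -> str:
    """One dialectical move: the user poses critical question i and the
    system responds with the premises supporting the conclusion."""
    return (f"CQ: {scheme.critical_questions[i]}\n"
            f"Support: " + "; ".join(scheme.premises))


print(answer_critical_question(action_scheme, 0))
```

In a dialogue system of this kind, each critical question opens a sub-dialogue in which further schemes can be instantiated, which is what makes the explanation interactive rather than a one-shot justification.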
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, opening new research directions in formal argumentation and human-machine dialogues.
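For orientation, the acceptability semantics mentioned here are typically Dung-style notions of argument acceptance. As a minimal sketch of one such semantics (standard grounded semantics computed as a least fixpoint; this is background, not the paper's ILP learner):

```python
def grounded_extension(arguments, attacks):
    """Least fixpoint of Dung's characteristic function: an argument is
    accepted once every one of its attackers is itself attacked by an
    already-accepted argument."""
    def attackers(a):
        return {x for (x, y) in attacks if y == a}

    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(attackers(b) & extension for b in attackers(a))
        }
        if defended == extension:
            return extension
        extension = defended


# a attacks b, b attacks c: the grounded extension accepts a and c.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```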
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- Solving NLP Problems through Human-System Collaboration: A Discussion-based Approach [98.13835740351932]
This research aims to create a dataset and computational framework for systems that discuss and refine their predictions through dialogue.
We show that the proposed system can hold beneficial discussions with humans, improving accuracy by up to 25 points on the natural language inference task.
arXiv Detail & Related papers (2023-05-19T16:24:50Z)
- Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that fully understanding a decision requires not only knowledge of the relevant features; awareness of irrelevant information also contributes substantially to a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to conveying an understanding of aspects of the AI's reasoning different from those addressed by established counterfactual explanation methods.
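As a minimal sketch of the idea (the toy model, feature names, and threshold below are purely hypothetical): an alterfactual varies a feature the model ignores and shows that the decision is unchanged.

```python
# A toy model that ignores its second input entirely (an assumption
# for illustration; real irrelevance would be established empirically).
def model(income: float, zip_code: int) -> str:
    return "approve" if income > 50_000 else "reject"


original = (60_000, 10115)
alterfactual = (60_000, 99999)  # only the irrelevant feature changes

# A counterfactual changes the outcome; an alterfactual shows the
# outcome is unchanged when irrelevant features vary.
assert model(*original) == model(*alterfactual)
print("Decision unchanged:", model(*alterfactual))
```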
arXiv Detail & Related papers (2022-07-19T16:20:37Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Scope and Sense of Explainability for AI-Systems [0.0]
Emphasis will be given to difficulties related to the explainability of highly complex and efficient AI systems.
The paper elaborates arguments supporting the notion that if AI solutions were discarded in advance because they are not thoroughly comprehensible, much of the potential of intelligent systems would be wasted.
arXiv Detail & Related papers (2021-12-20T14:25:05Z)
- Making Things Explainable vs Explaining: Requirements and Challenges under the GDPR [2.578242050187029]
ExplanatorY AI (YAI) builds on XAI with the goal of collecting and organizing explainable information.
We recast the problem of generating explanations for Automated Decision-Making systems (ADMs) as the identification of an appropriate path over an explanatory space.
arXiv Detail & Related papers (2021-10-02T08:48:47Z)
- Argumentation-based Agents that Explain their Decisions [0.0]
We focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations of their reasoning.
Our proposal is based on argumentation theory: we use arguments to represent the reasons that lead an agent to make a decision.
We propose two types of explanation: partial and complete.
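As a minimal sketch of that distinction, assuming a simple tree of supporting arguments (the structure and example below are illustrative, not the authors' BDI model): a partial explanation gives only the top-level argument, while a complete one exposes the whole chain of beliefs and desires behind it.

```python
from dataclasses import dataclass, field


@dataclass
class Argument:
    claim: str
    support: list["Argument"] = field(default_factory=list)


def partial_explanation(decision: Argument) -> str:
    # Only the top-level argument for the decision.
    return decision.claim


def complete_explanation(decision: Argument, depth: int = 0) -> list[str]:
    # The full tree of supporting arguments behind the decision.
    lines = ["  " * depth + decision.claim]
    for sub in decision.support:
        lines.extend(complete_explanation(sub, depth + 1))
    return lines


decision = Argument("Take the umbrella", [
    Argument("Belief: rain is forecast"),
    Argument("Desire: stay dry"),
])
print(partial_explanation(decision))
print("\n".join(complete_explanation(decision)))
```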
arXiv Detail & Related papers (2020-09-13T02:08:10Z)
- Argument Schemes for Explainable Planning [1.927424020109471]
In this paper, we use argumentation to provide explanations in the domain of AI planning.
We present argument schemes to create arguments that explain a plan and its components.
We also present a set of critical questions that allow interaction between the arguments and enable the user to obtain further information.
arXiv Detail & Related papers (2020-05-12T15:09:50Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.