Argumentation-based Agents that Explain their Decisions
- URL: http://arxiv.org/abs/2009.05897v1
- Date: Sun, 13 Sep 2020 02:08:10 GMT
- Title: Argumentation-based Agents that Explain their Decisions
- Authors: Mariela Morveli-Espinoza, Ayslan Possebom, and Cesar Augusto Tacla
- Abstract summary: We focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations about their reasoning.
Our proposal is based on argumentation theory; we use arguments to represent the reasons that lead an agent to make a decision.
We propose two types of explanations: a partial one and a complete one.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) systems, including intelligent
agents, must be able to explain their internal decisions, behaviours and
reasoning that produce their choices to the humans (or other systems) with
which they interact. In this paper, we focus on how an extended model of BDI
(Beliefs-Desires-Intentions) agents can generate explanations about their
reasoning, specifically about the goals an agent decides to commit to. Our
proposal is based on argumentation theory; we use arguments to represent the
reasons that lead an agent to make a decision and use argumentation semantics
to determine acceptable arguments (reasons). We propose two types of
explanations: the partial one and the complete one. We apply our proposal to a
scenario of rescue robots.
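For context, the argumentation semantics the abstract appeals to can be illustrated with a minimal sketch. This is not the authors' implementation; it computes the grounded extension of a Dung-style abstract argumentation framework, the standard way of deciding which arguments (reasons) are acceptable. The argument names and the toy rescue-robot scenario below are hypothetical.

```python
# Minimal sketch (assumed, not from the paper): grounded semantics for a
# Dung-style argumentation framework. Arguments are plain strings; attacks
# are (attacker, target) pairs.

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function until a fixed point.

    An argument is defended by a set S if every one of its attackers is
    itself attacked by some member of S; the grounded extension is the
    least fixed point of this "defended by" operator.
    """
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if defended == extension:
            return extension
        extension = defended

# Hypothetical rescue-robot scenario: the reason to commit to rescuing a
# victim survives because its attacker ("battery low") is counter-attacked.
args = {"rescue_victim", "battery_low", "charging_done"}
atts = {("battery_low", "rescue_victim"), ("charging_done", "battery_low")}
print(grounded_extension(args, atts))
```

The accepted set here contains `charging_done` and `rescue_victim`; a partial explanation for committing to `rescue_victim` could cite its acceptance, while a complete one would also trace the defender `charging_done` that defeats the attacker `battery_low`.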
Related papers
- Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [63.26541167737355]
We present a framework to increase faithfulness and causality for knowledge-based reasoning.
Our framework outperforms all compared state-of-the-art approaches by large margins.
arXiv Detail & Related papers (2023-08-23T04:59:21Z)
- A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
arXiv Detail & Related papers (2023-07-28T11:26:26Z)
- The Case Against Explainability [8.991619150027264]
We show that end-user Explainability is inadequate to fulfil reason-giving's role in law.
We find that end-user Explainability excels in the fourth function, a quality which raises serious risks.
This study calls upon regulators and Machine Learning practitioners to reconsider the widespread pursuit of end-user Explainability.
arXiv Detail & Related papers (2023-05-20T10:56:19Z)
- Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support [4.452019519213712]
We argue for a paradigm shift from the current model of explainable artificial intelligence (XAI)
In early decision support systems, we assumed that we could give people recommendations and that they would consider them, and then follow them when required.
arXiv Detail & Related papers (2023-02-24T01:33:25Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z)
- Argument Schemes and Dialogue for Explainable Planning [3.2741749231824904]
We propose an argument scheme-based approach to provide explanations in the domain of AI planning.
We present novel argument schemes to create arguments that explain a plan and its key elements.
We also present a novel dialogue system using the argument schemes and critical questions for providing interactive dialectical explanations.
arXiv Detail & Related papers (2021-01-07T17:43:12Z)
- Thinking About Causation: A Causal Language with Epistemic Operators [58.720142291102135]
We extend the notion of a causal model with a representation of the state of an agent.
On the side of the object language, we add operators to express knowledge and the act of observing new information.
We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
arXiv Detail & Related papers (2020-10-30T12:16:45Z)
- An Argumentation-based Approach for Explaining Goal Selection in Intelligent Agents [0.0]
An intelligent agent generates a set of pursuable goals and then selects which of them it commits to achieve.
In the context of goal selection, agents should be able to explain the reasoning path that leads them to select (or not) a certain goal.
We propose two types of explanations, a partial one and a complete one, along with a set of explanatory schemes to generate pseudo-natural explanations.
arXiv Detail & Related papers (2020-09-14T01:10:13Z)
- Argument Schemes for Explainable Planning [1.927424020109471]
In this paper, we use argumentation to provide explanations in the domain of AI planning.
We present argument schemes to create arguments that explain a plan and its components.
We also present a set of critical questions that allow interaction between the arguments and enable the user to obtain further information.
arXiv Detail & Related papers (2020-05-12T15:09:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.