An Argumentation-based Approach for Explaining Goal Selection in
Intelligent Agents
- URL: http://arxiv.org/abs/2009.06131v1
- Date: Mon, 14 Sep 2020 01:10:13 GMT
- Title: An Argumentation-based Approach for Explaining Goal Selection in
Intelligent Agents
- Authors: Mariela Morveli-Espinoza, Cesar Augusto Tacla, and Henrique Jasinski
- Abstract summary: An intelligent agent generates a set of pursuable goals and then selects which of them it commits to achieve.
In the context of goal selection, agents should be able to explain the reasoning path that leads them to select (or not) a certain goal.
We propose two types of explanations, a partial one and a complete one, together with a set of explanatory schemes for generating pseudo-natural explanations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: During the first step of practical reasoning, i.e., deliberation or goal
selection, an intelligent agent generates a set of pursuable goals and then
selects which of them it commits to achieve. Explainable Artificial
Intelligence (XAI) systems, including intelligent agents, must be able to
explain their internal decisions. In the context of goal selection, agents
should be able to explain the reasoning path that leads them to select (or not)
a certain goal. In this article, we use an argumentation-based approach for
generating explanations about that reasoning path. In addition, we aim to enrich
the explanations with information about the conflicts that emerge during the
selection process and how such conflicts were resolved. We propose two types of
explanations, a partial one and a complete one, as well as a set of explanatory
schemes to generate pseudo-natural explanations. Finally, we apply our proposal
to the cleaner world scenario.
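To make the idea concrete, below is a minimal Python sketch of how a partial explanation (only the arguments that justify the decision) and a complete explanation (additionally, the conflicts that emerged and how they were resolved) might be assembled and rendered through simple text templates standing in for the explanatory schemes. All names, data structures, and the cleaner-world-style beliefs are illustrative assumptions, not the authors' formalization.

```python
# Hypothetical sketch of argumentation-based explanations for goal selection.
# Names and structures are illustrative, not the paper's actual formalism.
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    conclusion: str       # e.g. "pursue recharge(battery)"
    premises: tuple       # beliefs the argument is built from

@dataclass(frozen=True)
class Conflict:
    attacker: Argument
    attacked: Argument
    resolution: str       # how the conflict was resolved (e.g. by preference)

def partial_explanation(goal: str, selected: bool, reasons: list) -> str:
    """Report only the arguments that justify selecting (or rejecting) the goal."""
    status = "selected" if selected else "not selected"
    body = "; ".join(f"{a.conclusion} (from {', '.join(a.premises)})" for a in reasons)
    return f"Goal '{goal}' was {status} because: {body}."

def complete_explanation(goal: str, selected: bool, reasons: list,
                         conflicts: list) -> str:
    """Additionally report the conflicts that emerged and how they were resolved."""
    text = partial_explanation(goal, selected, reasons)
    for c in conflicts:
        text += (f" A conflict arose between '{c.attacker.conclusion}' and "
                 f"'{c.attacked.conclusion}'; it was resolved as follows: "
                 f"{c.resolution}.")
    return text

# Illustrative cleaner-world-style beliefs and goals (assumed for this sketch).
clean = Argument("pursue clean(room1)", ("dirty(room1)", "at(robot, room1)"))
recharge = Argument("pursue recharge(battery)", ("battery(low)",))
conflict = Conflict(recharge, clean,
                    "recharge(battery) was preferred because battery(low) makes "
                    "clean(room1) unachievable")

print(partial_explanation("clean(room1)", False, [recharge]))
print(complete_explanation("clean(room1)", False, [recharge], [conflict]))
```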
Related papers
- Clash of the Explainers: Argumentation for Context-Appropriate
Explanations [6.8285745209093145]
No single explanation approach is best suited to every context.
For AI explainability to be effective, explanations and how they are presented need to be oriented towards the stakeholder receiving the explanation.
We propose a modular reasoning system consisting of a given mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and an AI model that is to be explained suitably to the stakeholder of interest.
arXiv Detail & Related papers (2023-12-12T09:52:30Z)
- HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- Selective Explanations: Leveraging Human Input to Align Explainable AI [40.33998268146951]
We propose a general framework for generating selective explanations by leveraging human input on a small sample.
As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task.
Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI.
arXiv Detail & Related papers (2023-01-23T19:00:02Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups (people with and without an AI background) perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z)
- Generating Commonsense Explanation by Extracting Bridge Concepts from Reasoning Paths [128.13034600968257]
We propose a method that first extracts the underlying concepts that serve as bridges in the reasoning chain.
To facilitate the reasoning process, we utilize external commonsense knowledge to build the connection between a statement and the bridge concepts.
We design a bridge concept extraction model that first scores the triples, routes the paths in the subgraph, and further selects bridge concepts with weak supervision.
arXiv Detail & Related papers (2020-09-24T15:27:20Z)
- Argumentation-based Agents that Explain their Decisions [0.0]
We focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations about their reasoning.
Our proposal is based on argumentation theory; we use arguments to represent the reasons that lead an agent to make a decision.
We propose two types of explanations: a partial one and a complete one.
arXiv Detail & Related papers (2020-09-13T02:08:10Z)
- An Argumentation-based Approach for Identifying and Dealing with Incompatibilities among Procedural Goals [1.1744028458220426]
An intelligent agent may generate multiple pursuable goals, which may be incompatible with one another.
In this paper, we focus on the definition, identification, and resolution of these incompatibilities.
arXiv Detail & Related papers (2020-09-11T01:01:34Z)
- Argument Schemes for Explainable Planning [1.927424020109471]
In this paper, we use argumentation to provide explanations in the domain of AI planning.
We present argument schemes to create arguments that explain a plan and its components.
We also present a set of critical questions that allow interaction between the arguments and enable the user to obtain further information.
arXiv Detail & Related papers (2020-05-12T15:09:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.