An Argumentation-based Approach for Identifying and Dealing with
Incompatibilities among Procedural Goals
- URL: http://arxiv.org/abs/2009.05186v1
- Date: Fri, 11 Sep 2020 01:01:34 GMT
- Title: An Argumentation-based Approach for Identifying and Dealing with
Incompatibilities among Procedural Goals
- Authors: Mariela Morveli-Espinoza, Juan Carlos Nieves, Ayslan Possebom, Josep
Puyol-Gruart, and Cesar Augusto Tacla
- Abstract summary: An intelligent agent may generate multiple pursuable goals, which may be incompatible with one another.
In this paper, we focus on the definition, identification, and resolution of these incompatibilities.
- Score: 1.1744028458220426
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: During the first step of practical reasoning, i.e. deliberation, an
intelligent agent generates a set of pursuable goals and then selects which of
them he commits to achieve. An intelligent agent may in general generate
multiple pursuable goals, which may be incompatible with one another. In this paper,
we focus on the definition, identification and resolution of these
incompatibilities. The suggested approach considers the three forms of
incompatibility introduced by Castelfranchi and Paglieri, namely the terminal
incompatibility, the instrumental or resources incompatibility and the
superfluity. We computationally characterise these forms of incompatibility by
means of arguments that represent the plans that allow an agent to achieve his
goals. Thus, the incompatibility among goals is defined based on the conflicts
among their plans, which are represented by means of attacks in an
argumentation framework. We also work on the problem of goals selection; we
propose to use abstract argumentation theory to deal with this problem, i.e. by
applying argumentation semantics. We use a modified version of the "cleaner
world" scenario in order to illustrate the performance of our proposal.
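The abstract's pipeline, plans as arguments, incompatibilities as attacks, and goal selection via argumentation semantics, can be illustrated with a minimal sketch of a Dung-style abstract argumentation framework. This is not the authors' implementation: the plan names and attack relation below are made up for illustration, and the grounded semantics is just one of the semantics such an approach could apply.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension by iterating the characteristic
    function F(S) = {a | S defends a} from the empty set to a fixpoint."""
    # Precompute, for each argument, the set of arguments attacking it.
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

    def defended(candidate, s):
        # `candidate` is defended by `s` if every attacker of `candidate`
        # is itself attacked by some argument in `s`.
        return all(any((d, b) in attacks for d in s) for b in attackers[candidate])

    extension = set()
    while True:
        nxt = {a for a in arguments if defended(a, extension)}
        if nxt == extension:
            return extension
        extension = nxt

# Toy scenario (illustrative names, not from the paper): plan_a and
# plan_b compete for the same resource, so they attack each other;
# plan_c attacks plan_b (e.g. plan_b is superfluous given plan_c).
args = {"plan_a", "plan_b", "plan_c"}
atts = {("plan_a", "plan_b"), ("plan_b", "plan_a"), ("plan_c", "plan_b")}
print(grounded_extension(args, atts))  # plan_c is unattacked and defends plan_a
```

Here the grounded extension is {plan_a, plan_c}: the goals whose plans survive together form the set of compatible goals the agent can commit to.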
Related papers
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [63.26541167737355]
We present a framework to increase faithfulness and causality for knowledge-based reasoning.
Our framework outperforms all compared state-of-the-art approaches by large margins.
arXiv Detail & Related papers (2023-08-23T04:59:21Z)
- A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
arXiv Detail & Related papers (2023-07-28T11:26:26Z)
- Safe Explicable Planning [3.3869539907606603]
We propose Safe Explicable Planning (SEP) to support the specification of a safety bound.
Our approach generalizes the consideration of multiple objectives stemming from multiple models.
We provide formal proofs that validate the desired theoretical properties of these methods.
arXiv Detail & Related papers (2023-04-04T21:49:02Z)
- Formalizing the Problem of Side Effect Regularization [81.97441214404247]
We propose a formal criterion for side effect regularization via the assistance game framework.
In these games, the agent solves a partially observable Markov decision process.
We show that this POMDP is solved by trading off the proxy reward with the agent's ability to achieve a range of future tasks.
arXiv Detail & Related papers (2022-06-23T16:36:13Z)
- Logically Consistent Adversarial Attacks for Soft Theorem Provers [110.17147570572939]
We propose a generative adversarial framework for probing and improving language models' reasoning capabilities.
Our framework successfully generates adversarial attacks and identifies global weaknesses.
In addition to effective probing, we show that training on the generated samples improves the target model's performance.
arXiv Detail & Related papers (2022-04-29T19:10:12Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- Dealing with Incompatibilities among Procedural Goals under Uncertainty [1.2599533416395763]
We represent agent's plans by means of structured arguments whose premises are pervaded with uncertainty.
We measure the strength of these arguments in order to determine the set of compatible goals.
Considering our novel approach for measuring the strength of structured arguments, we propose a semantics for the selection of plans and goals.
arXiv Detail & Related papers (2020-09-17T00:56:45Z)
- An Argumentation-based Approach for Explaining Goal Selection in Intelligent Agents [0.0]
An intelligent agent generates a set of pursuable goals and then selects which of them he commits to achieve.
In the context of goals selection, agents should be able to explain the reasoning path that leads them to select (or not) a certain goal.
We propose two types of explanation, a partial one and a complete one, together with a set of explanatory schemes to generate pseudo-natural explanations.
arXiv Detail & Related papers (2020-09-14T01:10:13Z)
- Resolving Resource Incompatibilities in Intelligent Agents [0.0]
In this paper, we focus on the incompatibilities that emerge due to resources limitations.
We give an algorithm for identifying resource incompatibilities from a set of pursued goals, and we propose two ways of selecting the goals that will continue to be pursued.
arXiv Detail & Related papers (2020-09-13T02:09:04Z)
- Argumentation-based Agents that Explain their Decisions [0.0]
We focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can be able to generate explanations about their reasoning.
Our proposal is based on argumentation theory: we use arguments to represent the reasons that lead an agent to make a decision.
We propose two types of explanations: the partial one and the complete one.
arXiv Detail & Related papers (2020-09-13T02:08:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.