Dealing with Incompatibilities among Procedural Goals under Uncertainty
- URL: http://arxiv.org/abs/2009.08776v1
- Date: Thu, 17 Sep 2020 00:56:45 GMT
- Title: Dealing with Incompatibilities among Procedural Goals under Uncertainty
- Authors: Mariela Morveli-Espinoza, Juan Carlos Nieves, Ayslan Trevizan
Possebom, and Cesar Augusto Tacla
- Abstract summary: We represent agent's plans by means of structured arguments whose premises are pervaded with uncertainty.
We measure the strength of these arguments in order to determine the set of compatible goals.
Considering our novel approach for measuring the strength of structured arguments, we propose a semantics for the selection of plans and goals.
- Score: 1.2599533416395763
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: By considering rational agents, we focus on the problem of selecting goals
out of a set of incompatible ones. We consider three forms of incompatibility
introduced by Castelfranchi and Paglieri, namely the terminal, the instrumental
(or resource-based), and the superfluity. We represent the agent's plans by
means of structured arguments whose premises are pervaded with uncertainty. We
measure the strength of these arguments in order to determine the set of
compatible goals. We propose two novel ways of calculating the strength of
these arguments, depending on the kind of incompatibility that exists between
them. The first is the logical strength value, denoted by a three-dimensional
vector calculated from a probabilistic interval associated with each argument.
The vector represents the precision of the interval, its location, and the
combination of precision and location. This representation and treatment of
the strength of a structured argument has not previously been defined in the
state of the art. The second way of calculating the strength of an argument is
based on the cost of the plans (in terms of the necessary resources) and the
preference of the goals associated with the plans. Building on our novel
approach for measuring the strength of structured arguments, we propose a
semantics for the selection of plans and goals based on Dung's abstract
argumentation theory. Finally, we present a theoretical evaluation of our
proposal.
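The three-dimensional logical strength vector can be illustrated with a small sketch. The abstract does not give the exact formulas, so the definitions of precision, location, and their combination below are illustrative assumptions, not the paper's own:

```python
# Illustrative sketch (not the paper's exact definitions): the strength of an
# argument whose premises carry a probabilistic interval [low, up] is a 3D
# vector (precision, location, combined). Assumed formulas:
#   precision = 1 - (up - low)   # a narrower interval is more precise
#   location  = (low + up) / 2   # midpoint of the interval
#   combined  = precision * location

def strength_vector(low, up):
    assert 0.0 <= low <= up <= 1.0, "interval must lie within [0, 1]"
    precision = 1.0 - (up - low)
    location = (low + up) / 2.0
    return (precision, location, precision * location)

# An argument with interval [0.5, 0.75]:
print(strength_vector(0.5, 0.75))  # -> (0.75, 0.625, 0.46875)
```

Comparing such vectors (e.g. lexicographically, or by the combined component) would then give an ordering of arguments for deciding which goals survive a conflict.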
Related papers
- Infinite Ends from Finite Samples: Open-Ended Goal Inference as Top-Down Bayesian Filtering of Bottom-Up Proposals [48.437581268398866]
We introduce a sequential Monte Carlo model of open-ended goal inference.
We validate this model in a goal inference task called Block Words.
Our experiments highlight the importance of uniting top-down and bottom-up models for explaining the speed, accuracy, and generality of human theory-of-mind.
arXiv Detail & Related papers (2024-07-23T18:04:40Z) - An Extension-based Approach for Computing and Verifying Preferences in Abstract Argumentation [1.7065454553786665]
We present an extension-based approach for computing and verifying preferences in an abstract argumentation system.
We show that the complexity of computing sets of preferences is exponential in the number of arguments.
We present novel algorithms for verifying (i.e., assessing) the computed preferences.
arXiv Detail & Related papers (2024-03-26T12:36:11Z) - Semi-Abstract Value-Based Argumentation Framework [0.0]
Phan Minh Dung proposed the abstract argumentation framework, which models argumentation using directed graphs in which structureless arguments are the nodes and attacks among the arguments are the edges.
This thesis showcases two such extensions -- the value-based argumentation framework by Trevor Bench-Capon (2002) and the semi-abstract argumentation framework by Esther Anna Corsi and Christian Fermüller.
The contribution of this thesis is twofold. Firstly, the new semi-abstract value-based argumentation framework is introduced. This framework maps propositional formulae associated with individual arguments to a set of ordered values.
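Dung's framework described above is small enough to sketch directly: arguments are nodes, attacks are directed edges, and a standard semantics (here the grounded extension, one of Dung's original semantics) is obtained by iterating the characteristic function to a fixpoint:

```python
# Minimal sketch of Dung's abstract argumentation framework: arguments are
# nodes, attacks are directed edges. The grounded extension is the least
# fixpoint of the characteristic function, which collects every argument
# defended by the current set.

def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s):
        # a is defended by s if each of its attackers is attacked by some member of s
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s)
                       for b in attackers_of[a])}

    extension = set()
    while True:
        nxt = defended(extension)
        if nxt == extension:
            return extension
        extension = nxt

# a attacks b, b attacks c: "a" is unattacked and defends "c".
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # -> {'a', 'c'}
```

Value-based and semi-abstract extensions of the framework enrich this structure (with values or propositional formulae attached to arguments) rather than replacing it.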
arXiv Detail & Related papers (2023-09-25T13:10:56Z) - A Semantic Approach to Decidability in Epistemic Planning (Extended
Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
arXiv Detail & Related papers (2023-07-28T11:26:26Z) - Admissibility in Strength-based Argumentation: Complexity and Algorithms
(Extended Version with Proofs) [1.5828697880068698]
We study the adaptation of admissibility-based semantics to Strength-based Argumentation Frameworks (StrAFs)
Especially, we show that the strong admissibility defined in the literature does not satisfy a desirable property, namely Dung's fundamental lemma.
We propose a translation in pseudo-Boolean constraints for computing (strong and weak) extensions.
arXiv Detail & Related papers (2022-07-05T18:42:04Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in
NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - Measuring Association Between Labels and Free-Text Rationales [60.58672852655487]
In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance.
We demonstrate that pipelines (existing models for faithful extractive rationalization on information-extraction-style tasks) do not extend as reliably to "reasoning" tasks requiring free-text rationales.
We turn to models that jointly predict and rationalize, a class of widely used high-performance models for free-text rationalization whose faithfulness is not yet established.
arXiv Detail & Related papers (2020-10-24T03:40:56Z) - Why do you think that? Exploring Faithful Sentence-Level Rationales
Without Supervision [60.62434362997016]
We propose a differentiable training framework to create models which output faithful rationales on a sentence level.
Our model solves the task based on each rationale individually and learns to assign high scores to those which solved the task best.
arXiv Detail & Related papers (2020-10-07T12:54:28Z) - An Imprecise Probability Approach for Abstract Argumentation based on
Credal Sets [1.3764085113103217]
We tackle the problem of calculating the degree of uncertainty of the extensions considering that the probability values of the arguments are imprecise.
We use credal sets to model the uncertainty values of arguments and from these credal sets, we calculate the lower and upper bounds of the extensions.
arXiv Detail & Related papers (2020-09-16T00:52:18Z) - An Argumentation-based Approach for Identifying and Dealing with
Incompatibilities among Procedural Goals [1.1744028458220426]
An intelligent agent may generate multiple pursuable goals, which may be incompatible with one another.
In this paper, we focus on the definition, identification, and resolution of these incompatibilities.
arXiv Detail & Related papers (2020-09-11T01:01:34Z) - Invariant Rationalization [84.1861516092232]
A typical rationalization criterion, i.e. maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale.
We introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments.
We show both theoretically and empirically that the proposed rationales can rule out spurious correlations, generalize better to different test scenarios, and align better with human judgments.
arXiv Detail & Related papers (2020-03-22T00:50:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.