Preference Elicitation in Assumption-Based Argumentation
- URL: http://arxiv.org/abs/2005.05721v1
- Date: Tue, 12 May 2020 12:31:27 GMT
- Title: Preference Elicitation in Assumption-Based Argumentation
- Authors: Quratul-ain Mahesar, Nir Oren and Wamberto W. Vasconcelos
- Abstract summary: We consider an inverse of the standard reasoning problem, seeking to identify what preferences over assumptions could lead to a given set of conclusions being drawn.
We present an algorithm which computes and enumerates all possible sets of preferences over the assumptions in the system from which a desired conflict-free set of conclusions can be obtained.
- Score: 3.0323642294813355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various structured argumentation frameworks utilize preferences as part of
their standard inference procedure to enable reasoning with preferences. In
this paper, we consider an inverse of the standard reasoning problem, seeking
to identify what preferences over assumptions could lead to a given set of
conclusions being drawn. We ground our work in the Assumption-Based
Argumentation (ABA) framework, and present an algorithm which computes and
enumerates all possible sets of preferences over the assumptions in the system
from which a desired conflict-free set of conclusions can be obtained under a
given semantics. After describing our algorithm, we establish its soundness,
completeness and complexity.
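The inverse problem described above can be illustrated with a minimal sketch. The toy instance, the direct attack relation between assumptions, and the brute-force enumeration over total orders are all assumptions of this illustration: real ABA derives attacks through inference rules, the paper's algorithm is not naive enumeration, and preferences need not be total orders. The sketch only shows the shape of the question, namely which preference orderings make a desired set of assumptions survive.

```python
from itertools import permutations

# Toy instance (hypothetical): assumptions attack each other directly.
# In full ABA, attacks arise from rules deriving the contrary of an assumption.
assumptions = ["a", "b", "c"]
attacks = {("a", "b"), ("b", "a"), ("c", "b")}  # (attacker, target)

def surviving(order):
    """Return the assumptions that survive under a strict total preference order.

    `order` lists assumptions from most to least preferred. Following a common
    preference treatment, an attack on y fails when y is strictly preferred to
    its attacker, so y survives iff it is preferred to every attacker.
    """
    rank = {a: i for i, a in enumerate(order)}  # lower rank = more preferred
    return {y for y in assumptions
            if all(rank[y] < rank[x] for (x, t) in attacks if t == y)}

# Inverse problem: enumerate all preference orderings under which exactly
# the desired set of assumptions survives.
desired = {"a", "c"}
solutions = [order for order in permutations(assumptions)
             if surviving(order) == desired]
```

Here every solution ranks `a` above `b`, since that is precisely what cancels `b`'s attack on `a` while leaving `b` defeated; the paper's algorithm instead works backwards from the desired set, avoiding enumeration of all orderings.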
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Preference-Based Abstract Argumentation for Case-Based Reasoning (with Appendix) [9.5382175632919]
We introduce Preference-Based Abstract Argumentation for Case-Based Reasoning (which we call AA-CBR-P).
This allows users to define multiple approaches to compare cases with an ordering that specifies their preference over these comparison approaches.
We show empirically that our approach outperforms other interpretable machine learning models on a real-world medical dataset.
arXiv Detail & Related papers (2024-07-31T18:31:04Z) - An Extension-based Approach for Computing and Verifying Preferences in Abstract Argumentation [1.7065454553786665]
We present an extension-based approach for computing and verifying preferences in an abstract argumentation system.
We show that the complexity of computing sets of preferences is exponential in the number of arguments.
We present novel algorithms for verifying (i.e., assessing) the computed preferences.
arXiv Detail & Related papers (2024-03-26T12:36:11Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Invariant Causal Set Covering Machines [64.86459157191346]
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
However, the learning algorithms that produce such models are often vulnerable to spurious associations and thus, they are not guaranteed to extract causally-relevant insights.
We propose Invariant Causal Set Covering Machines, an extension of the classical Set Covering Machine algorithm for conjunctions/disjunctions of binary-valued rules that provably avoids spurious associations.
arXiv Detail & Related papers (2023-06-07T20:52:01Z) - Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z) - Admissibility in Strength-based Argumentation: Complexity and Algorithms (Extended Version with Proofs) [1.5828697880068698]
We study the adaptation of admissibility-based semantics to Strength-based Argumentation Frameworks (StrAFs).
In particular, we show that the strong admissibility defined in the literature does not satisfy a desirable property, namely Dung's fundamental lemma.
We propose a translation in pseudo-Boolean constraints for computing (strong and weak) extensions.
arXiv Detail & Related papers (2022-07-05T18:42:04Z) - Rationale-Augmented Ensembles in Language Models [53.45015291520658]
We reconsider rationale-augmented prompting for few-shot in-context learning.
We identify rationale sampling in the output space as the key component to robustly improve performance.
We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches.
arXiv Detail & Related papers (2022-07-02T06:20:57Z) - Algorithmic Recourse in Partially and Fully Confounded Settings Through Bounding Counterfactual Effects [0.6299766708197883]
Algorithmic recourse aims to provide actionable recommendations to individuals to obtain a more favourable outcome from an automated decision-making system.
Existing methods compute the effect of recourse actions using a causal model learnt from data under the assumption of no hidden confounding and modelling assumptions such as additive noise.
We propose an alternative approach for discrete random variables which relaxes these assumptions and allows for unobserved confounding and arbitrary structural equations.
arXiv Detail & Related papers (2021-06-22T15:07:49Z) - Invariant Rationalization [84.1861516092232]
A typical rationalization criterion, i.e. maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale.
We introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments.
We show both theoretically and empirically that the proposed rationales can rule out spurious correlations, generalize better to different test scenarios, and align better with human judgments.
arXiv Detail & Related papers (2020-03-22T00:50:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.