Contrastive Explanations for Argumentation-Based Conclusions
- URL: http://arxiv.org/abs/2107.03265v1
- Date: Wed, 7 Jul 2021 15:00:47 GMT
- Title: Contrastive Explanations for Argumentation-Based Conclusions
- Authors: AnneMarie Borg and Floris Bex
- Abstract summary: We discuss contrastive explanations for formal argumentation.
We show under which conditions contrastive explanations are meaningful, and how argumentation allows us to make implicit foils explicit.
- Score: 5.1398743023989555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we discuss contrastive explanations for formal argumentation -
the question why a certain argument (the fact) can be accepted, whilst another
argument (the foil) cannot be accepted under various extension-based semantics.
The recent work on explanations for argumentation-based conclusions has mostly
focused on providing minimal explanations for the (non-)acceptance of
arguments. What is still lacking, however, is a proper argumentation-based
interpretation of contrastive explanations. We show under which conditions
contrastive explanations in abstract and structured argumentation are
meaningful, and how argumentation allows us to make implicit foils explicit.
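To make the fact/foil contrast concrete, below is a minimal Python sketch under standard Dung-style grounded semantics. It is illustrative only, not the authors' implementation: the function names (grounded_extension, contrastive_explanation), the toy framework, and the choice to report the foil's undefeated attackers as the explanation are assumptions made for the example.

```python
# Minimal sketch: a Dung-style abstract argumentation framework under
# grounded semantics, used to contrast why a "fact" argument is accepted
# while a "foil" argument is not. Illustrative only, not the paper's code.

def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function:
    F(S) = {a | every attacker of a is attacked by some member of S}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    extension = set()
    while True:
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended

def contrastive_explanation(fact, foil, args, attacks):
    """Hypothetical contrastive reading: given that `fact` is accepted
    and `foil` is not, return the foil's attackers left undefeated by
    the grounded extension."""
    extension = grounded_extension(args, attacks)
    assert fact in extension and foil not in extension
    return {
        b for (b, c) in attacks
        if c == foil and not any((d, b) in attacks for d in extension)
    }

# Toy framework: A attacks B, B attacks C.
args = {"A", "B", "C"}
attacks = {("A", "B"), ("B", "C")}
print(grounded_extension(args, attacks))                 # {'A', 'C'}
print(contrastive_explanation("C", "B", args, attacks))  # {'A'}
```

In the toy chain, the grounded extension is {A, C}; the answer to "why is C accepted but B not?" is B's undefeated attacker A, which defeats B while defending C.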
Related papers
- Discussion Graph Semantics of First-Order Logic with Equality for Reasoning about Discussion and Argumentation [0.9790236766474198]
We formulate discussion graph semantics of first-order logic with equality for reasoning about discussion and argumentation.
We achieve the generality through a top-down formulation of the semantics of first-order logic (with equality) formulas.
arXiv Detail & Related papers (2024-06-18T00:32:00Z)
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- Stable Normative Explanations: From Argumentation to Deontic Logic [1.3272510644778104]
This paper examines how a notion of stable explanation can be expressed in the context of formal argumentation.
We show how to build from argumentation neighborhood structures for deontic logic where this notion of explanation can be characterised.
arXiv Detail & Related papers (2023-07-11T10:26:05Z)
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- Many-valued Argumentation, Conditionals and a Probabilistic Semantics for Gradual Argumentation [3.9571744700171743]
We propose a general approach to define a many-valued preferential interpretation of gradual argumentation semantics.
As a proof of concept, in the finitely-valued case, an Answer Set Programming approach is proposed for conditional reasoning.
The paper also develops and discusses a probabilistic semantics for gradual argumentation, which builds on the many-valued conditional semantics.
arXiv Detail & Related papers (2022-12-14T22:10:46Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Annotating Implicit Reasoning in Arguments with Causal Links [34.77514899468729]
We focus on identifying the implicit knowledge in the form of argumentation knowledge.
Being inspired by the Argument from Consequences scheme, we propose a semi-structured template to represent such argumentation knowledge.
We show how to collect and filter high-quality implicit reasonings via crowdsourcing.
arXiv Detail & Related papers (2021-10-26T13:28:53Z)
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z)
- Exploring Discourse Structures for Argument Impact Classification [48.909640432326654]
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
arXiv Detail & Related papers (2021-06-02T06:49:19Z)
- Necessary and Sufficient Explanations in Abstract Argumentation [3.9849889653167208]
We discuss necessary and sufficient explanations for formal argumentation.
We study necessity and sufficiency: what (sets of) arguments are necessary or sufficient for the (non-)acceptance of an argument? (A brute-force sketch of the necessity direction follows this entry.)
arXiv Detail & Related papers (2020-11-04T17:12:12Z)
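As a hedged illustration of that necessity question (not the paper's algorithm), the brute-force sketch below continues the Python example given after the abstract above, reusing grounded_extension and the same toy framework: an argument counts as necessary for the target's non-acceptance if deleting it, together with its attacks, makes the target accepted.

```python
# Brute-force sketch, reusing grounded_extension and the A -> B -> C
# framework defined earlier. Illustrative only, not the paper's method.

def necessary_for_nonacceptance(target, args, attacks):
    """Arguments whose removal (with their attacks) flips `target`
    from non-accepted to accepted under grounded semantics."""
    assert target not in grounded_extension(args, attacks)
    necessary = set()
    for b in args - {target}:
        reduced_args = args - {b}
        reduced_attacks = {(x, y) for (x, y) in attacks if b not in (x, y)}
        if target in grounded_extension(reduced_args, reduced_attacks):
            necessary.add(b)
    return necessary

# B is not accepted in the chain; removing A makes B accepted,
# so A is necessary for B's non-acceptance.
print(necessary_for_nonacceptance("B", args, attacks))   # {'A'}
```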
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences.