Explaining Arguments' Strength: Unveiling the Role of Attacks and Supports (Technical Report)
- URL: http://arxiv.org/abs/2404.14304v2
- Date: Fri, 10 May 2024 17:37:43 GMT
- Title: Explaining Arguments' Strength: Unveiling the Role of Attacks and Supports (Technical Report)
- Authors: Xiang Yin, Nico Potyka, Francesca Toni
- Abstract summary: We propose a novel theory of Relation Attribution Explanations (RAEs).
RAEs offer fine-grained insights into the role of attacks and supports in quantitative bipolar argumentation towards obtaining the arguments' strength.
We show the application value of RAEs in case studies on fraud detection and large language models.
- Score: 13.644164255651472
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Quantitatively explaining the strength of arguments under gradual semantics has recently received increasing attention. Specifically, several works in the literature provide quantitative explanations by computing the attribution scores of arguments. These works disregard the importance of attacks and supports, even though they play an essential role when explaining arguments' strength. In this paper, we propose a novel theory of Relation Attribution Explanations (RAEs), adapting Shapley values from game theory to offer fine-grained insights into the role of attacks and supports in quantitative bipolar argumentation towards obtaining the arguments' strength. We show that RAEs satisfy several desirable properties. We also propose a probabilistic algorithm to approximate RAEs efficiently. Finally, we show the application value of RAEs in case studies on fraud detection and large language models.
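The abstract's two main ingredients, Shapley-style attribution of a topic argument's strength to individual attack and support relations, and a sampling-based approximation, can be sketched as follows. Everything here is an illustrative assumption: the toy QBAF, the simple one-step gradual semantics, and the function names are not taken from the paper.

```python
import random

# Toy QBAF: base scores for arguments and a set of relations
# (source, target, kind). The gradual semantics below is a
# deliberately simple one-step aggregation, chosen only to make
# the attribution idea concrete.
base = {"a": 0.5, "b": 0.5, "t": 0.5}
relations = [("a", "t", "attack"), ("b", "t", "support")]

def strength(topic, active):
    """Strength of `topic` when only the relations in `active` are
    present: supports add, attacks subtract, damped and clipped."""
    s = base[topic]
    for src, dst, kind in active:
        if dst == topic:
            sign = 1.0 if kind == "support" else -1.0
            s += sign * 0.5 * base[src]
    return min(max(s, 0.0), 1.0)

def shapley_rae(topic, rel, samples=2000, seed=0):
    """Monte Carlo estimate of the Shapley-style contribution of
    `rel` to the strength of `topic`: the average marginal effect of
    adding `rel` to a random coalition of the other relations."""
    rng = random.Random(seed)
    others = [r for r in relations if r != rel]
    total = 0.0
    for _ in range(samples):
        rng.shuffle(others)
        k = rng.randint(0, len(others))  # rel's position in a random order
        coalition = others[:k]
        total += strength(topic, coalition + [rel]) - strength(topic, coalition)
    return total / samples

for rel in relations:
    print(rel, round(shapley_rae("t", rel), 3))
```

In this toy graph each marginal contribution is constant, so the estimate is exact (-0.25 for the attack, +0.25 for the support); in general the sampling trades exactness for efficiency, mirroring the probabilistic approximation algorithm mentioned in the abstract.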
Related papers
- LLM-based Argument Mining meets Argumentation and Description Logics: a Unified Framework for Reasoning about Debates [18.314315278861073]
Large Language Models (LLMs) achieve strong performance in analyzing and generating text. They struggle with explicit, transparent, and verifiable reasoning over complex texts such as those containing debates. We propose a framework that integrates learning-based argument mining with quantitative reasoning.
arXiv Detail & Related papers (2026-03-03T11:06:23Z) - Towards Generalizable Reasoning: Group Causal Counterfactual Policy Optimization for LLM Reasoning [50.352417879912515]
Large language models (LLMs) excel at complex tasks with advances in reasoning capabilities. We propose Group Causal Counterfactual Policy Optimization to explicitly train LLMs to learn generalizable reasoning patterns. We then construct token-level advantages from this reward and optimize the policy, encouraging LLMs to favor reasoning patterns that are process-valid and counterfactually robust.
arXiv Detail & Related papers (2026-02-06T08:03:11Z) - APR: Penalizing Structural Redundancy in Large Reasoning Models via Anchor-based Process Rewards [61.52322047892064]
Test-Time Scaling (TTS) has significantly enhanced the capabilities of Large Reasoning Models (LRMs). We observe that LRMs frequently conduct repetitive self-verification without revision even after obtaining the final answer during the reasoning process. We propose Anchor-based Process Reward (APR), a structure-aware reward shaping method that localizes the reasoning anchor and penalizes exclusively the post-anchor AST.
arXiv Detail & Related papers (2026-01-31T14:53:20Z) - SAD: A Large-Scale Strategic Argumentative Dialogue Dataset [60.33125467375306]
In practice, argumentation is often realized as multi-turn dialogue. We present the first large-scale Strategic Argumentative Dialogue dataset, consisting of 392,822 examples.
arXiv Detail & Related papers (2026-01-12T11:11:37Z) - ARQUSUMM: Argument-aware Quantitative Summarization of Online Conversations [11.33923212079359]
We propose a novel task of argument-aware quantitative summarization to reveal the claim-reason structure of arguments in conversations. For quantitative summarization, ARQUSUMM employs argument structure-aware clustering algorithms to aggregate arguments and quantify their support.
arXiv Detail & Related papers (2025-11-21T06:37:32Z) - Can LLMs Judge Debates? Evaluating Non-Linear Reasoning via Argumentation Theory Semantics [24.173784986846687]
We evaluate whether Large Language Models (LLMs) can approximate structured reasoning from Computational Argumentation Theory (CAT). We use Quantitative Argumentation Debate (QuAD) semantics, which assigns acceptability scores to arguments based on their attack and support relations.
arXiv Detail & Related papers (2025-09-19T08:10:32Z) - Does More Inference-Time Compute Really Help Robustness? [50.47666612618054]
We show that small-scale, open-source models can benefit from inference-time scaling. We identify an important security risk, intuitively motivated and empirically verified as an inverse scaling law. We urge practitioners to carefully weigh these subtle trade-offs before applying inference-time scaling in security-sensitive, real-world applications.
arXiv Detail & Related papers (2025-07-21T18:08:38Z) - A Survey on Latent Reasoning [100.54120559169735]
Large Language Models (LLMs) have demonstrated impressive reasoning capabilities. CoT reasoning that verbalizes intermediate steps limits the model's expressive bandwidth. Latent reasoning tackles this bottleneck by performing multi-step inference entirely in the model's continuous hidden state.
arXiv Detail & Related papers (2025-07-08T17:29:07Z) - Towards Comprehensive Argument Analysis in Education: Dataset, Tasks, and Method [14.718309497236694]
We propose 14 fine-grained relation types from both vertical and horizontal dimensions. We conduct experiments on three tasks: argument component detection, relation prediction, and automated essay grading. The findings highlight the importance of fine-grained argumentative annotations for argumentative writing quality assessment and encourage multi-dimensional argument analysis.
arXiv Detail & Related papers (2025-05-17T14:36:51Z) - Efficient Inference for Large Reasoning Models: A Survey [74.17203483365171]
Large Reasoning Models (LRMs) significantly improve the reasoning ability of Large Language Models (LLMs) by learning to reason. However, their deliberative reasoning process leads to inefficiencies in token usage, memory consumption, and inference time. This survey provides a review of efficient inference methods designed specifically for LRMs, focusing on mitigating token inefficiency while preserving the reasoning quality.
arXiv Detail & Related papers (2025-03-29T13:27:46Z) - Applying Attribution Explanations in Truth-Discovery Quantitative Bipolar Argumentation Frameworks [18.505289553533164]
Argument Attribution Explanations (AAEs) and Relation Attribution Explanations (RAEs) are used to explain the strength of arguments under gradual semantics.
We apply AAEs and RAEs to Truth Discovery QBAFs, which assess the trustworthiness of sources and their claims.
We find that both AAEs and RAEs can provide interesting explanations and can give non-trivial and surprising insights.
arXiv Detail & Related papers (2024-09-09T17:36:39Z) - Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation [19.799266797193344]
Argumentation-based systems often lack explainability while supporting decision-making processes.
Counterfactual and semifactual explanations are interpretability techniques.
We show that counterfactual and semifactual queries can be encoded in weak-constrained Argumentation Frameworks.
arXiv Detail & Related papers (2024-05-07T07:27:27Z) - CASA: Causality-driven Argument Sufficiency Assessment [79.13496878681309]
We propose CASA, a zero-shot causality-driven argument sufficiency assessment framework.
The probability of sufficiency (PS) measures how likely introducing the premise event would be to lead to the conclusion when both the premise and conclusion events are absent.
Experiments on two logical fallacy detection datasets demonstrate that CASA accurately identifies insufficient arguments.
arXiv Detail & Related papers (2024-01-10T16:21:18Z) - Exploring Jiu-Jitsu Argumentation for Writing Peer Review Rebuttals [70.22179850619519]
In many domains of argumentation, people's arguments are driven by so-called attitude roots.
Recent work in psychology suggests that instead of directly countering surface-level reasoning, one should follow an argumentation style inspired by the Jiu-Jitsu 'soft' combat system.
We are the first to explore Jiu-Jitsu argumentation for peer review by proposing the novel task of attitude and theme-guided rebuttal generation.
arXiv Detail & Related papers (2023-11-07T13:54:01Z) - Argument Attribution Explanations in Quantitative Bipolar Argumentation Frameworks (Technical Report) [17.9926469947157]
We propose a novel theory of Argument Attribution Explanations (AAEs) by incorporating the spirit of feature attribution from machine learning.
AAEs are used to determine the influence of arguments towards topic arguments of interest.
We study desirable properties of AAEs, including some new ones and some partially adapted from the literature to our setting.
arXiv Detail & Related papers (2023-07-25T15:36:33Z) - DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z) - Exhaustivity and anti-exhaustivity in the RSA framework: Testing the effect of prior beliefs [68.8204255655161]
We focus on cases when sensitivity to priors leads to counterintuitive predictions of the Rational Speech Act (RSA) framework.
We show that in the baseline RSA model, under certain conditions, anti-exhaustive readings are predicted.
We find no anti-exhaustivity effects, but observe that message choice is sensitive to priors, as predicted by the RSA framework overall.
arXiv Detail & Related papers (2022-02-14T20:35:03Z) - Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit Argument Relations [70.35379323231241]
This paper presents an improved approach to event extraction that explicitly utilizes the relationships among event arguments.
We employ reinforcement learning and incremental learning to extract multiple arguments via a multi-turned, iterative process.
Experimental results show that our approach consistently outperforms seven state-of-the-art event extraction methods.
arXiv Detail & Related papers (2021-06-23T13:24:39Z) - Exploring Discourse Structures for Argument Impact Classification [48.909640432326654]
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
arXiv Detail & Related papers (2021-06-02T06:49:19Z) - Extracting Implicitly Asserted Propositions in Argumentation [8.20413690846954]
We study methods for extracting propositions implicitly asserted in questions, reported speech, and imperatives in argumentation.
Our study may inform future research on argument mining and the semantics of these rhetorical devices in argumentation.
arXiv Detail & Related papers (2020-10-06T12:03:47Z) - The Role of Pragmatic and Discourse Context in Determining Argument Impact [39.70446357000737]
This paper presents a new dataset to initiate the study of this aspect of argumentation.
It consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims.
We propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument.
arXiv Detail & Related papers (2020-04-06T23:00:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.