LLM-based Argument Mining meets Argumentation and Description Logics: a Unified Framework for Reasoning about Debates
- URL: http://arxiv.org/abs/2603.02858v1
- Date: Tue, 03 Mar 2026 11:06:23 GMT
- Title: LLM-based Argument Mining meets Argumentation and Description Logics: a Unified Framework for Reasoning about Debates
- Authors: Gianvincenzo Alfano, Sergio Greco, Lucio La Cava, Stefano Francesco Monea, Irina Trubitsyna
- Abstract summary: Large Language Models (LLMs) achieve strong performance in analyzing and generating text. They struggle with explicit, transparent, and verifiable reasoning over complex texts such as those containing debates. We propose a framework that integrates learning-based argument mining with quantitative reasoning.
- Score: 18.314315278861073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) achieve strong performance in analyzing and generating text, yet they struggle with explicit, transparent, and verifiable reasoning over complex texts such as those containing debates. In particular, they lack structured representations that capture how arguments support or attack each other and how their relative strengths determine overall acceptability. We address these limitations by proposing a framework that integrates learning-based argument mining with quantitative reasoning and ontology-based querying. Starting from a raw debate text, the framework extracts a fuzzy argumentative knowledge base, where arguments are explicitly represented as entities, linked by attack and support relations, and annotated with initial fuzzy strengths reflecting plausibility w.r.t. the debate's context. Quantitative argumentation semantics are then applied to compute final argument strengths by propagating the effects of supports and attacks. These results are then embedded into a fuzzy description logic setting, enabling expressive query answering through efficient rewriting techniques. The proposed approach provides a transparent, explainable, and formally grounded method for analyzing debates, going beyond purely statistical LLM-based analyses.
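The middle step of the pipeline, propagating initial fuzzy strengths through attack and support relations, can be sketched as follows. This is a minimal illustration using DF-QuAD-style semantics on an acyclic bipolar graph; the argument names, base scores, and the choice of DF-QuAD as the quantitative semantics are assumptions for illustration, not necessarily the paper's exact formulation.

```python
# Sketch of quantitative strength propagation over a bipolar argumentation
# graph, assuming DF-QuAD-style semantics and an acyclic graph.

def aggregate(scores):
    """Probabilistic-sum aggregation: 1 - prod(1 - s_i), computed incrementally."""
    r = 0.0
    for s in scores:
        r = r + s - r * s
    return r

def combine(base, att, sup):
    """Adjust the base score by the aggregated attack and support strengths."""
    if att >= sup:
        return base - base * (att - sup)
    return base + (1.0 - base) * (sup - att)

def final_strengths(base, attackers, supporters, order):
    """Compute final strengths by visiting arguments in topological order."""
    strength = {}
    for a in order:
        att = aggregate([strength[b] for b in attackers.get(a, [])])
        sup = aggregate([strength[b] for b in supporters.get(a, [])])
        strength[a] = combine(base[a], att, sup)
    return strength

# Toy debate: argument b attacks a, argument c supports a.
base = {"a": 0.5, "b": 0.8, "c": 0.6}
attackers = {"a": ["b"]}
supporters = {"a": ["c"]}
result = final_strengths(base, attackers, supporters, ["b", "c", "a"])
print(result)  # a's final strength drops below its base score (approx. 0.4)
```

Because b's attack (0.8) outweighs c's support (0.6), a's strength is pulled below its initial plausibility; the fuzzy description logic layer would then answer queries against these final strengths.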
Related papers
- Abstract Argumentation with Subargument Relations [0.8460698440162889]
Dung's abstract argumentation framework characterises argument acceptability solely via an attack relation. This framework limits the ability to represent structural dependencies that are central in many structured argumentation formalisms. We study abstract argumentation frameworks enriched with an explicit subargument relation, treated alongside attack as a basic relation. This framework provides a principled abstraction of structural information and clarifies the role of subarguments in abstract acceptability reasoning.
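A minimal sketch of how a subargument relation can interact with attacks, assuming one natural lifting principle (an attack on a subargument also counts against its superargument; the paper's exact conditions may differ), followed by a standard grounded-extension computation:

```python
# Hedged sketch: lift attacks through an explicit subargument relation,
# then compute the grounded extension by iterating the characteristic function.

def lift_attacks(attacks, sub):
    """sub: set of (s, a) pairs meaning s is a subargument of a.
    If c attacks s and s is a subargument of a, add the attack (c, a)."""
    lifted = set(attacks)
    changed = True
    while changed:
        changed = False
        for (c, s) in list(lifted):
            for (s2, a) in sub:
                if s2 == s and (c, a) not in lifted:
                    lifted.add((c, a))
                    changed = True
    return lifted

def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function (defended arguments)."""
    def defended(S, a):
        attackers = [b for (b, t) in attacks if t == a]
        return all(any((d, b) in attacks for d in S) for b in attackers)
    S, prev = set(), None
    while S != prev:
        prev = S
        S = {a for a in args if defended(S, a)}
    return S

args = {"a", "b", "c"}
attacks = {("c", "b")}   # c attacks b
sub = {("b", "a")}       # b is a subargument of a
print(sorted(grounded_extension(args, lift_attacks(attacks, sub))))  # → ['c']
```

Here the attack on the subargument b is lifted to its superargument a, so only c, which is unattacked, ends up in the grounded extension.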
arXiv Detail & Related papers (2026-01-17T12:54:10Z)
- Understanding Textual Capability Degradation in Speech LLMs via Parameter Importance Analysis [54.53152524778821]
The integration of speech into Large Language Models (LLMs) has substantially expanded their capabilities, but often at the cost of weakening their core textual competence. We propose an analytical framework based on parameter importance estimation, which reveals that fine-tuning for speech introduces a shift in the textual importance distribution. We investigate two mitigation strategies: layer-wise learning rate scheduling and Low-Rank Adaptation (LoRA). Experimental results show that both approaches maintain textual competence better than full fine-tuning, while also improving downstream spoken question answering performance.
arXiv Detail & Related papers (2025-09-28T09:04:40Z) - Can LLMs Judge Debates? Evaluating Non-Linear Reasoning via Argumentation Theory Semantics [24.173784986846687]
We evaluate whether Large Language Models (LLMs) can approximate structured reasoning from Computational Argumentation Theory (CAT). We use Quantitative Argumentation Debate (QuAD) semantics, which assigns acceptability scores to arguments based on their attack and support relations.
arXiv Detail & Related papers (2025-09-19T08:10:32Z) - Implicit Reasoning in Large Language Models: A Comprehensive Survey [67.53966514728383]
Large Language Models (LLMs) have demonstrated strong generalization across a wide range of tasks. Recent studies have shifted attention from explicit chain-of-thought prompting toward implicit reasoning. This survey introduces a taxonomy centered on execution paradigms, shifting the focus from representational forms to computational strategies.
arXiv Detail & Related papers (2025-09-02T14:16:02Z) - The Curse of CoT: On the Limitations of Chain-of-Thought in In-Context Learning [56.574829311863446]
Chain-of-Thought (CoT) prompting has been widely recognized for its ability to enhance reasoning capabilities in large language models (LLMs). We demonstrate that CoT and its reasoning variants consistently underperform direct answering across varying model scales and benchmark complexities. Our analysis uncovers a fundamental hybrid mechanism of explicit-implicit reasoning driving CoT's performance in pattern-based ICL.
arXiv Detail & Related papers (2025-04-07T13:51:06Z) - Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation [19.799266797193344]
Argumentation-based systems often lack explainability while supporting decision-making processes.
Counterfactual and semifactual explanations are interpretability techniques.
We show that counterfactual and semifactual queries can be encoded in weak-constrained Argumentation Frameworks.
arXiv Detail & Related papers (2024-05-07T07:27:27Z) - Argumentative Large Language Models for Explainable and Contestable Claim Verification [13.045050015831903]
We introduce ArgLLMs, a method for augmenting large language models with argumentative reasoning. ArgLLMs construct argumentation frameworks, which then serve as the basis for formal reasoning in support of decision-making. We evaluate ArgLLMs' performance experimentally in comparison with state-of-the-art techniques.
arXiv Detail & Related papers (2024-05-03T13:12:28Z) - A Unifying Framework for Learning Argumentation Semantics [47.84663434179473]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way. Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z) - A Formalisation of Abstract Argumentation in Higher-Order Logic [77.34726150561087]
We present an approach for representing abstract argumentation frameworks based on an encoding into classical higher-order logic.
This provides a uniform framework for computer-assisted assessment of abstract argumentation frameworks using interactive and automated reasoning tools.
arXiv Detail & Related papers (2021-10-18T10:45:59Z) - Exploring Discourse Structures for Argument Impact Classification [48.909640432326654]
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
arXiv Detail & Related papers (2021-06-02T06:49:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.