Can Large Language Models perform Relation-based Argument Mining?
- URL: http://arxiv.org/abs/2402.11243v1
- Date: Sat, 17 Feb 2024 10:37:51 GMT
- Title: Can Large Language Models perform Relation-based Argument Mining?
- Authors: Deniz Gorur, Antonio Rago, Francesca Toni
- Abstract summary: Argument mining (AM) is the process of automatically extracting arguments, their components and/or relations amongst arguments and components from text.
Relation-based AM (RbAM) is a form of AM focusing on identifying agreement (support) and disagreement (attack) relations amongst arguments.
We show that general-purpose Large Language Models (LLMs), appropriately primed and prompted, can significantly outperform the best performing (RoBERTa-based) baseline.
- Score: 15.362683263839772
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Argument mining (AM) is the process of automatically extracting arguments,
their components and/or relations amongst arguments and components from text.
As the number of platforms supporting online debate increases, the need for AM
becomes ever more urgent, especially in support of downstream tasks.
Relation-based AM (RbAM) is a form of AM focusing on identifying agreement
(support) and disagreement (attack) relations amongst arguments. RbAM is a
challenging classification task, with existing methods failing to perform
satisfactorily. In this paper, we show that general-purpose Large Language
Models (LLMs), appropriately primed and prompted, can significantly outperform
the best performing (RoBERTa-based) baseline. Specifically, we experiment with
two open-source LLMs (Llama-2 and Mistral) on ten datasets.
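The abstract frames RbAM as a two-class (support/attack) classification task solved by prompting a general-purpose LLM. A minimal sketch of that setup is below; the prompt template, function names, and label parsing are illustrative assumptions, not the exact prompt or pipeline used in the paper.

```python
# Hypothetical sketch of relation-based argument mining (RbAM) via LLM prompting.
# The prompt wording and parsing heuristics are assumptions for illustration.

def build_rbam_prompt(parent: str, child: str) -> str:
    """Frame RbAM as a support/attack classification prompt for an LLM."""
    return (
        "Given two arguments, decide whether the second argument "
        "supports or attacks the first. Answer with one word: "
        "'support' or 'attack'.\n"
        f"Argument 1: {parent}\n"
        f"Argument 2: {child}\n"
        "Answer:"
    )

def parse_rbam_label(completion: str) -> str:
    """Map a raw LLM completion onto the RbAM label set."""
    text = completion.strip().lower()
    if "support" in text:
        return "support"
    if "attack" in text:
        return "attack"
    return "unknown"  # fall back when the model answers off-format

# Example pair; the completion string stands in for an actual LLM call.
prompt = build_rbam_prompt(
    "Remote work increases productivity.",
    "Studies show home offices are full of distractions.",
)
print(parse_rbam_label("Attack."))  # → attack
```

In practice the prompt would be sent to a model such as Llama-2 or Mistral and the completion fed to the parser; the lenient substring matching absorbs minor formatting variation in model output.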
Related papers
- End-to-End Argument Mining as Augmented Natural Language Generation [0.8213829427624407]
This work proposes a unified end-to-end framework based on a generative paradigm, in which the argumentative structures are framed into label-augmented text.
Through different marker-based fine-tuning strategies, we present an extensive study by integrating marker knowledge into our generative model.
The proposed framework achieves competitive results to the state-of-the-art (SoTA) model and outperforms several baselines.
arXiv Detail & Related papers (2024-06-12T19:22:29Z)
- Retrieval Meets Reasoning: Even High-school Textbook Knowledge Benefits Multimodal Reasoning [49.3242278912771]
We introduce a novel multimodal RAG framework named RMR (Retrieval Meets Reasoning).
The RMR framework employs a bi-modal retrieval module to identify the most relevant question-answer pairs.
It significantly boosts the performance of various vision-language models across a spectrum of benchmark datasets.
arXiv Detail & Related papers (2024-05-31T14:23:49Z)
- MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time [51.5039731721706]
MindStar is a purely inference-based searching method for large language models.
It formulates reasoning tasks as searching problems and proposes two search ideas to identify the optimal reasoning paths.
It significantly enhances the reasoning abilities of open-source models, such as Llama-2-13B and Mistral-7B, and achieves comparable performance to GPT-3.5 and Grok-1.
arXiv Detail & Related papers (2024-05-25T15:07:33Z)
- Assisted Debate Builder with Large Language Models [11.176301807521462]
We introduce ADBL2, an assisted debate builder tool.
It is based on the capability of large language models to generalise and perform relation-based argument mining.
As a by-product, we provide the first fine-tuned Mistral-7B large language model for relation-based argument mining.
arXiv Detail & Related papers (2024-05-14T13:42:12Z)
- Analyzing the Role of Semantic Representations in the Era of Large Language Models [104.18157036880287]
We investigate the role of semantic representations in the era of large language models (LLMs).
We propose an AMR-driven chain-of-thought prompting method, which we call AMRCoT.
We find it difficult to predict for which input examples AMR helps or hurts, but errors tend to arise with multi-word expressions.
arXiv Detail & Related papers (2024-05-02T17:32:59Z)
- DMON: A Simple yet Effective Approach for Argument Structure Learning [33.96187185638286]
Argument structure learning (ASL) entails predicting relations between arguments.
Despite its broad utilization, ASL remains a challenging task because it involves examining the complex relationships between the sentences in a potentially unstructured discourse.
We have developed a simple yet effective approach called Dual-tower Multi-scale cOnvolution neural Network (DMON) for the ASL task.
arXiv Detail & Related papers (2024-05-02T11:56:16Z)
- Efficient argument classification with compact language models and ChatGPT-4 refinements [0.0]
This paper presents comparative studies between a few deep learning-based models in argument mining.
The main novelty of this paper is an ensemble model based on the BERT architecture, with ChatGPT-4 as a fine-tuning model.
The presented results show that BERT+ChatGPT-4 outperforms the rest of the models including other Transformer-based and LSTM-based models.
arXiv Detail & Related papers (2024-03-20T16:24:10Z)
- CLadder: Assessing Causal Reasoning in Language Models [82.8719238178569]
We investigate whether large language models (LLMs) can coherently reason about causality.
We propose a new NLP task, causal inference in natural language, inspired by the "causal inference engine" postulated by Judea Pearl et al.
arXiv Detail & Related papers (2023-12-07T15:12:12Z)
- Exploring the Potential of Large Language Models in Computational Argumentation [54.85665903448207]
Large language models (LLMs) have demonstrated impressive capabilities in understanding context and generating natural language.
This work assesses LLMs, such as ChatGPT, Flan models, and LLaMA2 models, in both zero-shot and few-shot settings.
arXiv Detail & Related papers (2023-11-15T15:12:15Z)
- Multimodal Chain-of-Thought Reasoning in Language Models [94.70184390935661]
We propose Multimodal-CoT that incorporates language (text) and vision (images) modalities into a two-stage framework.
Experimental results on ScienceQA and A-OKVQA benchmark datasets show the effectiveness of our proposed approach.
arXiv Detail & Related papers (2023-02-02T07:51:19Z)
- Full-Text Argumentation Mining on Scientific Publications [3.8754200816873787]
We introduce a sequential pipeline model combining ADUR and ARE for full-text SAM.
We provide a first analysis of the performance of pretrained language models (PLMs) on both subtasks.
Our detailed error analysis reveals that non-contiguous ADUs as well as the interpretation of discourse connectors pose major challenges.
arXiv Detail & Related papers (2022-10-24T10:05:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.