Aspect-Based Argument Mining
- URL: http://arxiv.org/abs/2011.00633v1
- Date: Sun, 1 Nov 2020 21:57:51 GMT
- Title: Aspect-Based Argument Mining
- Authors: Dietrich Trautmann
- Abstract summary: We present the task of Aspect-Based Argument Mining (ABAM) with the essential subtasks of Aspect Term Extraction (ATE) and Nested Segmentation (NS).
We consider aspects as the main point(s) argument units are addressing.
This information is important for further downstream tasks such as argument ranking, argument summarization and generation, as well as the search for counter-arguments on the aspect-level.
- Score: 2.3148470932285665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational Argumentation in general and Argument Mining in particular are
important research fields. In previous works, many of the challenges to
automatically extract and to some degree reason over natural language arguments
were addressed. The tools to extract argument units are increasingly available
and further open problems can be addressed. In this work, we are presenting the
task of Aspect-Based Argument Mining (ABAM), with the essential subtasks of
Aspect Term Extraction (ATE) and Nested Segmentation (NS). As a first step,
we create and release an annotated corpus with aspect information on
the token-level. We consider aspects as the main point(s) argument units are
addressing. This information is important for further downstream tasks such as
argument ranking, argument summarization and generation, as well as the search
for counter-arguments on the aspect-level. We present several experiments using
state-of-the-art supervised architectures and demonstrate their performance for
both of the subtasks. The annotated benchmark is available at
https://github.com/trtm/ABAM.
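Token-level aspect annotations of the kind described above are typically consumed with a BIO decoding step that turns per-token labels into aspect spans. A minimal sketch, assuming a hypothetical B-ASP/I-ASP/O tag set (the released corpus may use a different label scheme):

```python
# Hypothetical sketch: decoding token-level BIO aspect labels into spans.
# The B-ASP/I-ASP/O scheme and the example sentence are illustrative
# assumptions, not the actual ABAM corpus format.

def bio_to_spans(tokens, labels):
    """Collect (start, end, text) spans for contiguous B-ASP/I-ASP runs."""
    spans, start = [], None
    for i, label in enumerate(labels + ["O"]):  # sentinel flushes the last span
        if label == "B-ASP":
            if start is not None:  # a new span begins; close the open one
                spans.append((start, i, " ".join(tokens[start:i])))
            start = i
        elif label != "I-ASP" and start is not None:
            spans.append((start, i, " ".join(tokens[start:i])))
            start = None
    return spans

tokens = ["Nuclear", "energy", "reduces", "carbon", "emissions", "."]
labels = ["O", "O", "O", "B-ASP", "I-ASP", "O"]
print(bio_to_spans(tokens, labels))  # [(3, 5, 'carbon emissions')]
```

The sentinel "O" appended to the label list guarantees that a span reaching the end of the sentence is still emitted.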
Related papers
- End-to-End Argument Mining as Augmented Natural Language Generation [0.8213829427624407]
This work proposes a unified end-to-end framework based on a generative paradigm, in which the argumentative structures are framed into label-augmented text.
Through different marker-based fine-tuning strategies, we present an extensive study by integrating marker knowledge into our generative model.
The proposed framework achieves results competitive with the state-of-the-art (SoTA) model and outperforms several baselines.
arXiv Detail & Related papers (2024-06-12T19:22:29Z) - Argue with Me Tersely: Towards Sentence-Level Counter-Argument Generation [62.069374456021016]
We present the ArgTersely benchmark for sentence-level counter-argument generation.
We also propose Arg-LlaMA for generating high-quality counter-arguments.
arXiv Detail & Related papers (2023-12-21T06:51:34Z) - AutoAM: An End-To-End Neural Model for Automatic and Universal Argument Mining [0.0]
We propose a novel neural model called AutoAM to solve these problems.
Our model is a universal end-to-end framework, which can analyze argument structure without constraints like tree structure.
arXiv Detail & Related papers (2023-09-17T15:26:21Z) - Retrieval-Augmented Generative Question Answering for Event Argument Extraction [66.24622127143044]
We propose a retrieval-augmented generative QA model (R-GQA) for event argument extraction.
It retrieves the most similar QA pair and prepends it as a prompt to the current example's context, then decodes the arguments as answers.
Our approach substantially outperforms prior methods across various settings.
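The retrieve-then-prompt idea can be sketched with a toy lexical retriever. The bag-of-words cosine similarity and prompt template below are illustrative assumptions standing in for the paper's actual dense retriever and formatting:

```python
# Illustrative sketch of retrieve-then-prompt: pick the stored QA pair
# whose context is most similar to the current one and prepend it as a
# demonstration. Cosine over bag-of-words counts is an assumption, not
# the retriever used in the paper.
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Bag-of-words cosine similarity between two strings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def build_prompt(context, question, qa_bank):
    """Prepend the most similar stored QA pair as a demonstration."""
    demo = max(qa_bank, key=lambda qa: cosine(context, qa["context"]))
    return (f"Context: {demo['context']}\nQ: {demo['q']}\nA: {demo['a']}\n\n"
            f"Context: {context}\nQ: {question}\nA:")

bank = [
    {"context": "The earthquake struck Tokyo.", "q": "Where did it happen?", "a": "Tokyo"},
    {"context": "The merger was announced by Acme.", "q": "Who announced it?", "a": "Acme"},
]
print(build_prompt("A strong earthquake hit the city.", "Where did it happen?", bank))
```

The resulting string would be handed to a generative QA model, which decodes the argument (here, a location) as its answer.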
arXiv Detail & Related papers (2022-11-14T02:00:32Z) - Full-Text Argumentation Mining on Scientific Publications [3.8754200816873787]
We introduce a sequential pipeline model combining argumentative discourse unit recognition (ADUR) and argumentative relation extraction (ARE) for full-text SAM.
We provide a first analysis of the performance of pretrained language models (PLMs) on both subtasks.
Our detailed error analysis reveals that non-contiguous ADUs as well as the interpretation of discourse connectors pose major challenges.
arXiv Detail & Related papers (2022-10-24T10:05:30Z) - Diversity Over Size: On the Effect of Sample and Topic Sizes for Argument Mining Datasets [65.91772010586605]
Large Argument Mining datasets are rare and recognition of argumentative sentences requires expert knowledge.
Given the cost and complexity of creating large Argument Mining datasets, we ask whether ever-larger datasets are necessary for acceptable performance.
Our findings show that, when using carefully composed training samples and a model pretrained on related tasks, we can reach 95% of the maximum performance while reducing the training sample size by at least 85%.
arXiv Detail & Related papers (2022-05-23T17:14:32Z) - IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument Mining Tasks [59.457948080207174]
In this work, we introduce a comprehensive and large dataset named IAM, which can be applied to a series of argument mining tasks.
Nearly 70k sentences in the dataset are fully annotated based on their argument properties.
We propose two new integrated argument mining tasks associated with the debate preparation process: (1) claim extraction with stance classification (CESC) and (2) claim-evidence pair extraction (CEPE).
arXiv Detail & Related papers (2022-03-23T08:07:32Z) - Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit Argument Relations [70.35379323231241]
This paper presents an improved approach to event extraction that explicitly exploits the relationships among event arguments.
We employ reinforcement learning and incremental learning to extract multiple arguments via a multi-turn, iterative process.
Experimental results show that our approach consistently outperforms seven state-of-the-art event extraction methods.
arXiv Detail & Related papers (2021-06-23T13:24:39Z) - From Arguments to Key Points: Towards Automatic Argument Summarization [17.875273745811775]
We show that a small number of key points per topic is typically sufficient for covering the vast majority of the arguments.
Furthermore, we found that a domain expert can often predict these key points in advance.
arXiv Detail & Related papers (2020-05-04T16:24:21Z) - Aspect-Controlled Neural Argument Generation [65.91772010586605]
We train a language model for argument generation that can be controlled on a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect.
Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments.
These arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments.
arXiv Detail & Related papers (2020-04-30T20:17:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.