Multi-Task Learning Improves Performance In Deep Argument Mining Models
- URL: http://arxiv.org/abs/2307.01401v1
- Date: Mon, 3 Jul 2023 23:42:29 GMT
- Title: Multi-Task Learning Improves Performance In Deep Argument Mining Models
- Authors: Amirhossein Farzam, Shashank Shekhar, Isaac Mehlhaff, Marco Morucci
- Abstract summary: We show that different argument mining tasks share common semantic and logical structure by implementing a multi-task approach to argument mining.
Our results are important for argument mining as they show that different tasks share substantial similarities and suggest a holistic approach to the extraction of argumentative techniques from text.
- Score: 2.2312474084968024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The successful analysis of argumentative techniques in user-generated text is central to many downstream tasks such as political and market analysis. Recent argument mining tools use state-of-the-art deep learning methods to extract and annotate argumentative techniques from various online text corpora; however, each task is treated as separate, and a different bespoke model is fine-tuned for each dataset. We show that different argument mining tasks share common semantic and logical structure by implementing a multi-task approach to argument mining that achieves better performance than state-of-the-art methods on the same problems. Our model builds a shared representation of the input text that is common to all tasks and exploits similarities between tasks to further boost performance via parameter-sharing. Our results are important for argument mining because they show that different tasks share substantial similarities, and they suggest a holistic approach to the extraction of argumentative techniques from text.
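A minimal sketch of the shared-representation idea: one encoder serves every task, with a lightweight head per task (hard parameter-sharing). The encoder name, task inventory, and label counts below are illustrative assumptions, not the authors' configuration.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskArgumentMiner(nn.Module):
    """Shared text encoder with one lightweight classification head per task.

    Every task reads the same pooled representation, so gradients from
    each task's loss update the shared encoder (hard parameter-sharing).
    """
    def __init__(self, encoder_name, task_num_labels):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n) for task, n in task_num_labels.items()
        })

    def forward(self, task, **encoded):
        out = self.encoder(**encoded)
        pooled = out.last_hidden_state[:, 0]   # [CLS]-style pooling
        return self.heads[task](pooled)

# Hypothetical task inventory; the paper's actual tasks and datasets differ.
tasks = {"claim_detection": 2, "evidence_type": 4, "stance": 3}
model = MultiTaskArgumentMiner("bert-base-uncased", tasks)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["Social media posts often contain implicit claims."],
            return_tensors="pt")
logits = model("claim_detection", **batch)  # shape: [1, 2]
```

During training, batches from the different tasks would be interleaved so that every task's loss updates the shared encoder; the paper's additional mechanisms for exploiting inter-task similarity are not reproduced in this sketch.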
Related papers
- Coarse-to-Fine: Hierarchical Multi-task Learning for Natural Language Understanding [51.31622274823167]
We propose a hierarchical framework with a coarse-to-fine paradigm: the bottom level is shared across all tasks, the mid-level is divided into different task groups, and the top level is assigned to each individual task.
This allows our model to learn basic language properties from all tasks, boost performance on relevant tasks, and reduce the negative impact from irrelevant tasks.
arXiv Detail & Related papers (2022-08-19T02:46:20Z)
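A hedged sketch of the coarse-to-fine sharing pattern described in the entry above: a bottom block shared by all tasks, mid-level blocks shared within task groups, and top-level heads per task. The layer shapes and the example grouping are invented for illustration.

```python
import torch.nn as nn

class CoarseToFine(nn.Module):
    """Three levels of sharing: all tasks -> task groups -> individual task."""
    def __init__(self, dim, groups, num_labels):
        super().__init__()
        # Bottom level: shared by every task (basic language properties).
        self.bottom = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        # Mid level: one block per task group, shared by related tasks.
        self.mid = nn.ModuleDict(
            {g: nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for g in groups})
        # Top level: one head per task.
        self.top = nn.ModuleDict(
            {t: nn.Linear(dim, num_labels[t]) for ts in groups.values() for t in ts})
        self.group_of = {t: g for g, ts in groups.items() for t in ts}

    def forward(self, x, task):
        return self.top[task](self.mid[self.group_of[task]](self.bottom(x)))

# Hypothetical grouping: two argument-related tasks vs. one sentiment task.
model = CoarseToFine(
    dim=768,
    groups={"argument": ["claim", "stance"], "affect": ["sentiment"]},
    num_labels={"claim": 2, "stance": 3, "sentiment": 2})
```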
- DiSparse: Disentangled Sparsification for Multitask Model Compression [92.84435347164435]
DiSparse is a simple, effective, and first-of-its-kind multitask pruning and sparse training scheme.
Our experimental results demonstrate superior performance on various configurations and settings.
arXiv Detail & Related papers (2022-06-09T17:57:46Z)
- Diversity Over Size: On the Effect of Sample and Topic Sizes for Topic-Dependent Argument Mining Datasets [49.65208986436848]
We investigate the effect of Argument Mining dataset composition in few- and zero-shot settings.
Our findings show that, while fine-tuning is mandatory to achieve acceptable model performance, carefully composing the training samples allows the training set to be reduced by up to almost 90% while still retaining 95% of the maximum performance.
arXiv Detail & Related papers (2022-05-23T17:14:32Z)
- Can Unsupervised Knowledge Transfer from Social Discussions Help Argument Mining? [25.43442712037725]
We propose a novel transfer learning strategy built on unsupervised, argumentative discourse-aware knowledge, using argumentation-rich social discussions from the ChangeMyView subreddit as the knowledge source.
We also introduce a novel prompt-based strategy for inter-component relation prediction that complements our proposed fine-tuning method.
arXiv Detail & Related papers (2022-03-24T06:48:56Z)
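The prompt-based relation prediction mentioned above could be set up roughly as follows; the template wording and label set are purely hypothetical, since the summary does not specify the authors' prompts.

```python
# Hypothetical prompt construction for inter-component relation prediction,
# i.e. deciding whether one argument component supports or attacks another.
def build_relation_prompt(component_a, component_b):
    return (f'Argument A: "{component_a}"\n'
            f'Argument B: "{component_b}"\n'
            'Relation between A and B (supports / attacks / none): [MASK]')

prompt = build_relation_prompt(
    "We should tax carbon emissions.",
    "Carbon taxes reduce emissions without hurting growth.")
# A masked language model fine-tuned on such prompts would be asked to
# fill [MASK], with its token distribution mapped onto relation labels.
```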
- Exploring Relational Context for Multi-Task Dense Prediction [76.86090370115]
We consider a multi-task environment for dense prediction tasks, represented by a common backbone and independent task-specific heads.
We explore various attention-based contexts, such as global and local, in the multi-task setting.
We propose an Adaptive Task-Relational Context module, which samples from the pool of all available contexts for each task pair.
arXiv Detail & Related papers (2021-04-28T16:45:56Z)
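A rough sketch of cross-task context in the shared-backbone setting described above: one task's features attend to another task's features. The actual Adaptive Task-Relational Context module is more elaborate (it samples from a pool of contexts per task pair); this shows only the underlying attention pattern.

```python
import torch
import torch.nn as nn

class TaskPairContext(nn.Module):
    """Let task i's features attend to task j's features (one task pair)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats_i, feats_j):
        # feats_*: [batch, tokens, dim] taken from the shared backbone.
        ctx, _ = self.attn(query=feats_i, key=feats_j, value=feats_j)
        return feats_i + ctx   # residual: task i enriched with task j's context

# Dummy features for two hypothetical tasks.
feats_claim = torch.randn(2, 16, 128)
feats_stance = torch.randn(2, 16, 128)
enriched = TaskPairContext(128)(feats_claim, feats_stance)
```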
- The Devil is in the Details: Evaluating Limitations of Transformer-based Methods for Granular Tasks [19.099852869845495]
Contextual embeddings derived from transformer-based neural language models have shown state-of-the-art performance for various tasks.
We focus on the problem of textual similarity from two perspectives: matching documents at a granular level and at an abstract level.
We empirically demonstrate, across two datasets from different domains, that while contextual embeddings perform well on abstract document matching as expected, they are consistently (and at times vastly) outperformed by simple baselines like TF-IDF on more granular tasks.
arXiv Detail & Related papers (2020-11-02T18:41:32Z)
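The TF-IDF baseline from that comparison is straightforward to reproduce in sketch form; this is a generic scikit-learn implementation, not the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The central bank raised interest rates by 25 basis points.",
    "Rates were lifted a quarter point by the central bank.",
    "The team won the championship after a dramatic final.",
]

vec = TfidfVectorizer().fit(docs)
sims = cosine_similarity(vec.transform(docs))
# sims[0, 1] > sims[0, 2]: lexical overlap alone separates the paraphrase
# pair from the unrelated document, which is the kind of granular matching
# where TF-IDF remains a strong baseline.
print(sims.round(2))
```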
- Small Towers Make Big Differences [59.243296878666285]
Multi-task learning aims at solving multiple machine learning tasks at the same time.
A good solution to a multi-task learning problem should be generalizable in addition to being Pareto optimal.
We propose a method of under-parameterized self-auxiliaries for multi-task models to achieve the best of both worlds.
arXiv Detail & Related papers (2020-08-13T10:45:31Z)
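The "small towers" idea reads as attaching a deliberately under-parameterized auxiliary head per task during training; the sketch below is speculative, with sizes and loss weighting invented.

```python
import torch.nn as nn

class WithSelfAuxiliary(nn.Module):
    """Main task head plus a deliberately tiny auxiliary head on the same
    shared features; the auxiliary loss then acts as a regularizer."""
    def __init__(self, dim, num_labels, aux_dim=8):
        super().__init__()
        self.main_head = nn.Linear(dim, num_labels)
        self.aux_head = nn.Sequential(          # under-parameterized tower
            nn.Linear(dim, aux_dim), nn.ReLU(), nn.Linear(aux_dim, num_labels))

    def forward(self, shared_feats):
        return self.main_head(shared_feats), self.aux_head(shared_feats)

# Training would combine losses, e.g. loss = ce(main, y) + 0.3 * ce(aux, y);
# the 0.3 weight is illustrative only.
```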
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results with respect to performance, computation, and memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.