UPB at SemEval-2021 Task 7: Adversarial Multi-Task Learning for
Detecting and Rating Humor and Offense
- URL: http://arxiv.org/abs/2104.06063v1
- Date: Tue, 13 Apr 2021 09:59:05 GMT
- Title: UPB at SemEval-2021 Task 7: Adversarial Multi-Task Learning for
Detecting and Rating Humor and Offense
- Authors: Răzvan-Alexandru Smădu, Dumitru-Clementin Cercel, Mihai Dascalu
- Abstract summary: We describe our adversarial multi-task network, AMTL-Humor, used to detect and rate humor and offensive texts.
Our best model consists of an ensemble of all tested configurations, and achieves a 95.66% F1-score and 94.70% accuracy for Task 1a.
- Score: 0.6404122934568858
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting humor is a challenging task since words might share multiple
valences and, depending on the context, the same words can even be used in
offensive expressions. Transformer-based neural network architectures obtain
state-of-the-art results on several Natural Language Processing tasks,
especially text classification. Adversarial learning, combined with other
techniques such as multi-task learning, helps neural models learn the intrinsic
properties of the data. In this work, we describe our adversarial multi-task
network, AMTL-Humor, used to detect and rate humor and offensive texts from
Task 7 at SemEval-2021. Each branch of the model focuses on solving a related
task and consists of a BiLSTM layer followed by Capsule layers, on top of
BERTweet, which generates the contextualized embeddings. Our best model is an
ensemble of all tested configurations and achieves a 95.66% F1-score and
94.70% accuracy for Task 1a, while obtaining RMSE scores of 0.6200 and 0.5318
for Tasks 1b and 2, respectively.
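Below is a minimal sketch of one such task branch: BERTweet produces contextualized embeddings, a BiLSTM re-encodes them, and the states are squashed into capsule vectors before a task-specific head. The layer sizes, capsule counts, and mean-pooling choice are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a single AMTL-Humor-style task branch: BERTweet embeddings feed a
# BiLSTM, whose states are projected into capsule vectors before a task head.
# Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing non-linearity: keeps direction, bounds norm in (0, 1)."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

class TaskBranch(nn.Module):
    def __init__(self, hidden=768, lstm_hidden=128, n_caps=8, cap_dim=16, n_out=2):
        super().__init__()
        self.bilstm = nn.LSTM(hidden, lstm_hidden, batch_first=True, bidirectional=True)
        # Primary capsules: project each BiLSTM state into n_caps capsule vectors.
        self.caps_proj = nn.Linear(2 * lstm_hidden, n_caps * cap_dim)
        self.n_caps, self.cap_dim = n_caps, cap_dim
        self.head = nn.Linear(n_caps * cap_dim, n_out)  # classifier or regressor

    def forward(self, token_embs):                      # (B, T, hidden)
        states, _ = self.bilstm(token_embs)             # (B, T, 2*lstm_hidden)
        caps = self.caps_proj(states).view(
            states.size(0), states.size(1), self.n_caps, self.cap_dim)
        caps = squash(caps).mean(dim=1)                 # pool over time: (B, n_caps, cap_dim)
        return self.head(caps.flatten(1))

encoder = AutoModel.from_pretrained("vinai/bertweet-base")
tok = AutoTokenizer.from_pretrained("vinai/bertweet-base")
batch = tok(["that joke was terrible :p"], return_tensors="pt")
embs = encoder(**batch).last_hidden_state               # contextualized embeddings
logits = TaskBranch()(embs)                             # e.g. Task 1a: is it humorous?
```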
Related papers
- Mavericks at ArAIEval Shared Task: Towards a Safer Digital Space --
Transformer Ensemble Models Tackling Deception and Persuasion [0.0]
We present our approaches for task 1-A and task 2-A of the shared task, which focus on persuasion technique detection and disinformation detection, respectively.
The tasks use multi-genre snippets of tweets and news articles for the given binary classification problem.
We achieved a micro F1-score of 0.742 on task 1-A (8th rank on the leaderboard) and 0.901 on task 2-A (7th rank on the leaderboard).
arXiv Detail & Related papers (2023-11-30T17:26:57Z)
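A generic sketch of the ensemble pattern described above, averaging softmax probabilities from several fine-tuned transformer classifiers; the checkpoint names below are placeholders, not the models used by the Mavericks team.

```python
# Transformer-ensemble sketch for binary classification: average the softmax
# probabilities of several checkpoints. Model names are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINTS = ["bert-base-multilingual-cased", "xlm-roberta-base"]  # placeholders

def ensemble_predict(text: str) -> int:
    probs = []
    for name in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
        with torch.no_grad():
            logits = model(**tok(text, return_tensors="pt", truncation=True)).logits
        probs.append(logits.softmax(dim=-1))
    # Mean probability across models, then take the most likely class.
    return torch.stack(probs).mean(dim=0).argmax(dim=-1).item()

print(ensemble_predict("This snippet may contain persuasion techniques."))
```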
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
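A minimal sketch of the general pattern behind multi-task contrastive pre-training: an InfoNCE-style loss with in-batch negatives and task-specific projection heads. The example task names and dimensions are assumptions, not SciMult's actual setup.

```python
# InfoNCE-style contrastive loss with task-specific projection heads over a
# shared representation; tasks and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskContrastive(nn.Module):
    def __init__(self, dim=768, proj=128, tasks=("classification", "retrieval")):
        super().__init__()
        self.heads = nn.ModuleDict({t: nn.Linear(dim, proj) for t in tasks})

    def loss(self, task, anchors, positives, temperature=0.05):
        # Project with the task's head, then contrast each anchor against all
        # positives in the batch (in-batch negatives).
        a = F.normalize(self.heads[task](anchors), dim=-1)
        p = F.normalize(self.heads[task](positives), dim=-1)
        logits = a @ p.t() / temperature           # (B, B) similarity matrix
        targets = torch.arange(a.size(0))          # i-th anchor matches i-th positive
        return F.cross_entropy(logits, targets)

model = MultiTaskContrastive()
loss = model.loss("retrieval", torch.randn(4, 768), torch.randn(4, 768))
print(loss.item())
```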
- Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5 [50.574918785575655]
We compare sequential fine-tuning with a multi-task learning model in the context of boosting performance on two tasks.
Our results show that while sequential multi-task learning can be tuned to perform well on the first of two target tasks, it performs less well on the second and additionally struggles with overfitting.
arXiv Detail & Related papers (2022-10-31T13:26:08Z)
- Multi-Task Meta Learning: learn how to adapt to unseen tasks [4.287114092271669]
This work proposes Multi-task Meta Learning (MTML), integrating two learning paradigms: Multi-Task Learning (MTL) and meta learning.
The fundamental idea is to train a multi-task model such that, when an unseen task is introduced, it can learn in fewer steps while offering performance at least as good as conventional single-task learning.
MTML achieves state-of-the-art results for three out of four tasks for the NYU-v2 dataset and two out of four for the taskonomy dataset.
arXiv Detail & Related papers (2022-10-13T12:59:54Z)
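The MTML summary above pairs multi-task learning with meta learning so that unseen tasks can be learned in fewer steps. Below is a toy sketch of that idea using a Reptile-style first-order meta-update, a deliberate simplification rather than the authors' exact algorithm.

```python
# Reptile-style first-order meta-learning sketch: adapt a shared model to each
# sampled task with a few SGD steps, then move the meta weights toward the
# adapted ones. Toy task family and hyperparameters are assumptions.
import copy
import torch
import torch.nn as nn

meta_model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()
inner_lr, meta_lr = 0.02, 0.1

def sample_task():
    """Toy task family: fit y = a*x for a random slope a."""
    a = torch.randn(1)
    x = torch.randn(16, 1)
    return x, a * x

for _ in range(200):                                     # outer meta loop
    x, y = sample_task()
    task_model = copy.deepcopy(meta_model)               # start from meta weights
    opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
    for _ in range(5):                                   # inner adaptation loop
        opt.zero_grad()
        loss_fn(task_model(x), y).backward()
        opt.step()
    with torch.no_grad():                                # meta update: interpolate
        for meta_p, task_p in zip(meta_model.parameters(), task_model.parameters()):
            meta_p += meta_lr * (task_p - meta_p)
```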
- DIALOG-22 RuATD Generated Text Detection [0.0]
Detectors that can distinguish between text generated by text generation models (TGMs) and human-written text play an important role in preventing abuse of TGMs.
We describe our pipeline for the two DIALOG-22 RuATD tasks: detecting generated text (binary task) and classifying which model was used to generate the text.
arXiv Detail & Related papers (2022-06-16T09:33:26Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a few-shot reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning [54.66399120084227]
Recent state-of-the-art neural text matching models based on pre-trained language models (PLMs) are hard to generalize to different tasks.
We adopt a specialization-generalization training strategy and refer to it as Match-Prompt.
In the specialization stage, descriptions of different matching tasks are mapped to only a few prompt tokens.
In the generalization stage, the text matching model explores the essential matching signals by being trained on diverse matching tasks.
arXiv Detail & Related papers (2022-04-06T11:01:08Z)
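A minimal sketch of the specialization stage described above: each matching task is mapped to a few learned prompt embeddings that are prepended to the token embeddings before a shared encoder. Task names, prompt length, and dimensions are assumptions.

```python
# Task-specific soft prompts: learned embeddings prepended to token embeddings
# before a shared encoder. Tasks and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TaskPrompts(nn.Module):
    def __init__(self, tasks=("paraphrase", "qa", "retrieval"), n_prompt=4, dim=768):
        super().__init__()
        self.prompts = nn.ParameterDict(
            {t: nn.Parameter(torch.randn(n_prompt, dim) * 0.02) for t in tasks}
        )

    def forward(self, task, token_embs):               # token_embs: (B, T, dim)
        batch = token_embs.size(0)
        prompt = self.prompts[task].unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embs], dim=1)  # (B, n_prompt + T, dim)

prompts = TaskPrompts()
augmented = prompts("qa", torch.randn(2, 10, 768))
print(augmented.shape)  # torch.Size([2, 14, 768]) -> feed into shared encoder
```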
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
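A minimal sketch of heterogeneous MTL as summarized above: a shared backbone with separate classification and regression heads trained under a summed loss. The feature sizes and head semantics (expression classes, valence/arousal) are illustrative assumptions.

```python
# Shared backbone with heterogeneous heads: cross-entropy for classification,
# MSE for regression, summed into one loss. Sizes are assumptions.
import torch
import torch.nn as nn

class HeterogeneousMTL(nn.Module):
    def __init__(self, feat_dim=512, n_classes=7):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())
        self.cls_head = nn.Linear(feat_dim, n_classes)   # e.g. expression classes
        self.reg_head = nn.Linear(feat_dim, 2)           # e.g. valence & arousal

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), self.reg_head(h)

model = HeterogeneousMTL()
x = torch.randn(8, 2048)                       # stand-in for CNN image features
cls_logits, va = model(x)
loss = nn.functional.cross_entropy(cls_logits, torch.randint(0, 7, (8,))) \
     + nn.functional.mse_loss(va, torch.rand(8, 2) * 2 - 1)
loss.backward()
```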
- MagicPai at SemEval-2021 Task 7: Method for Detecting and Rating Humor Based on Multi-Task Adversarial Training [4.691435917434472]
This paper describes MagicPai's system for SemEval 2021 Task 7, HaHackathon: Detecting and Rating Humor and Offense.
This task aims to detect whether the text is humorous and how humorous it is.
We mainly present our solution, a multi-task learning model based on adversarial examples.
arXiv Detail & Related papers (2021-04-21T03:23:02Z)
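A sketch of one common way to train text models on adversarial examples: FGM-style perturbation of the embedding layer in the gradient direction. Whether MagicPai uses exactly this variant is not stated in the summary; the toy model below is a placeholder.

```python
# FGM-style adversarial training step: perturb the embedding weights along the
# gradient, accumulate the adversarial loss gradients, then restore. The toy
# model and hyperparameters are placeholders, not MagicPai's system.
import torch
import torch.nn as nn

def fgm_training_step(model, emb_layer, inputs, labels, loss_fn, opt, eps=0.5):
    opt.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                                   # gradients incl. embeddings
    grad = emb_layer.weight.grad
    backup = emb_layer.weight.data.clone()
    # Perturb embeddings in the gradient direction (the adversarial example).
    emb_layer.weight.data += eps * grad / (grad.norm() + 1e-8)
    adv_loss = loss_fn(model(inputs), labels)
    adv_loss.backward()                               # accumulate adversarial grads
    emb_layer.weight.data = backup                    # restore clean embeddings
    opt.step()
    return loss.item(), adv_loss.item()

# Toy usage: an embedding-bag classifier over token ids.
emb = nn.EmbeddingBag(100, 32)
model = nn.Sequential(emb, nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ids = torch.randint(0, 100, (4, 6))
fgm_training_step(model, emb, ids, torch.randint(0, 2, (4,)), nn.CrossEntropyLoss(), opt)
```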
- TechTexC: Classification of Technical Texts using Convolution and Bidirectional Long Short Term Memory Network [0.0]
A classification system (called 'TechTexC') is developed to perform the classification task using three techniques.
Results show that the CNN with BiLSTM model outperforms the other techniques on task 1 sub-tasks (a, b, c, and g) and task 2a.
On the test set, the combined CNN with BiLSTM approach achieved the highest accuracy for sub-tasks 1a (70.76%), 1b (79.97%), 1c (65.45%), 1g (49.23%), and 2a (70.14%).
arXiv Detail & Related papers (2020-12-21T15:22:47Z)
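A minimal sketch of a CNN-with-BiLSTM text classifier in the spirit of TechTexC: convolutions extract local n-gram features, a BiLSTM models their order, and a linear head classifies. Vocabulary and layer sizes are illustrative assumptions.

```python
# CNN + BiLSTM text classifier: Conv1d over embeddings for n-gram features,
# BiLSTM over the feature sequence, mean-pooled into a linear head.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, vocab=5000, emb=128, channels=64, lstm_hidden=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(channels, lstm_hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * lstm_hidden, n_classes)

    def forward(self, ids):                           # ids: (B, T)
        x = self.emb(ids).transpose(1, 2)             # (B, emb, T) for Conv1d
        x = torch.relu(self.conv(x)).transpose(1, 2)  # back to (B, T, channels)
        states, _ = self.bilstm(x)
        return self.head(states.mean(dim=1))          # mean-pool over time

model = CNNBiLSTM()
logits = model(torch.randint(0, 5000, (4, 50)))
print(logits.shape)  # torch.Size([4, 2])
```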
- Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection [55.445023584632175]
We build an offensive language detection system, which combines multi-task learning with BERT-based models.
Our model achieves a 91.51% F1 score on English Sub-task A, which is comparable to the first-place result.
arXiv Detail & Related papers (2020-04-28T11:27:24Z)