UPB at SemEval-2021 Task 5: Virtual Adversarial Training for Toxic Spans Detection
- URL: http://arxiv.org/abs/2104.08635v1
- Date: Sat, 17 Apr 2021 19:42:12 GMT
- Title: UPB at SemEval-2021 Task 5: Virtual Adversarial Training for Toxic Spans Detection
- Authors: Andrei Paraschiv, Dumitru-Clementin Cercel, Mihai Dascalu
- Abstract summary: SemEval-2021 Task 5 - Toxic Spans Detection is based on a novel annotation of a subset of the Jigsaw Unintended Bias dataset.
For this task, participants had to automatically detect character spans in short comments that render the message as toxic.
Our approach applies Virtual Adversarial Training in a semi-supervised setting during the fine-tuning of several Transformer-based models.
- Score: 0.7197592390105455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The real-world impact of polarization and toxicity in the online sphere
marked the end of 2020 and the beginning of this year in a negative way.
SemEval-2021 Task 5 - Toxic Spans Detection is based on a novel annotation of
a subset of the Jigsaw Unintended Bias dataset and is the first language
toxicity detection task dedicated to identifying toxic spans. For
this task, participants had to automatically detect character spans in short
comments that render the message as toxic. Our approach applies Virtual
Adversarial Training in a semi-supervised setting during the fine-tuning
process of several Transformer-based models (i.e., BERT and RoBERTa), in
combination with Conditional Random Fields. Our approach leads to performance
improvements and more robust models, enabling us to achieve an F1-score of
65.73% in the official submission and an F1-score of 66.13% after further
tuning during post-evaluation.
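As an illustration of the method named in the abstract, below is a minimal sketch of Virtual Adversarial Training for a Transformer token classifier, assuming PyTorch and HuggingFace transformers. The model name, the xi/eps values, the single power iteration, and the omission of the CRF layer are simplifying assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Hypothetical setup: binary toxic/non-toxic token tagging.
model_name = "bert-base-cased"  # the paper also fine-tunes RoBERTa variants
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=2)

def vat_loss(model, input_ids, attention_mask, xi=1e-6, eps=1.0):
    """KL divergence between predictions on clean and adversarially
    perturbed input embeddings (one power iteration, as in standard VAT)."""
    # Detach so the perturbation, not the embedding table, carries the grad.
    embeds = model.get_input_embeddings()(input_ids).detach()
    with torch.no_grad():
        clean_logits = model(inputs_embeds=embeds,
                             attention_mask=attention_mask).logits
    # Start from a random direction, normalized per token embedding.
    d = (xi * F.normalize(torch.randn_like(embeds), dim=-1)).requires_grad_()
    adv_logits = model(inputs_embeds=embeds + d,
                       attention_mask=attention_mask).logits
    kl = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                  F.softmax(clean_logits, dim=-1), reduction="batchmean")
    (grad,) = torch.autograd.grad(kl, d)
    # Step of size eps along the estimated worst-case direction.
    r_adv = eps * F.normalize(grad, dim=-1)
    adv_logits = model(inputs_embeds=embeds + r_adv,
                       attention_mask=attention_mask).logits
    return F.kl_div(F.log_softmax(adv_logits, dim=-1),
                    F.softmax(clean_logits, dim=-1), reduction="batchmean")

# VAT needs no labels, so unlabeled comments can be used here.
batch = tokenizer(["you are a total idiot", "have a nice day"],
                  return_tensors="pt", padding=True)
loss = vat_loss(model, batch["input_ids"], batch["attention_mask"])
loss.backward()  # added to the supervised loss during fine-tuning
```

Because the VAT term requires no labels, it can be computed on unlabeled comments and added to the supervised loss, which is what makes the fine-tuning setting semi-supervised.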
Related papers
- Persistent Pre-Training Poisoning of LLMs [71.53046642099142]
Our work evaluates for the first time whether language models can also be compromised during pre-training.
We pre-train a series of LLMs from scratch to measure the impact of a potential poisoning adversary.
Our main result is that poisoning only 0.1% of a model's pre-training dataset is sufficient for three out of four attacks to persist through post-training.
arXiv Detail & Related papers (2024-10-17T16:27:13Z)
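As a back-of-the-envelope illustration of the 0.1% rate above, here is a toy sketch of mixing adversary-controlled documents into a pre-training corpus; the corpus, payload, and mixing procedure are illustrative assumptions only, not the paper's pipeline.

```python
import random

random.seed(0)
clean_docs = [f"clean document {i}" for i in range(100_000)]
poison_payload = "trigger phrase followed by adversary-chosen continuation"

poison_rate = 0.001  # 0.1% of the pre-training data, per the result above
n_poison = int(len(clean_docs) * poison_rate)  # 100 documents here

poisoned_corpus = clean_docs + [poison_payload] * n_poison
random.shuffle(poisoned_corpus)
print(f"{n_poison} poisoned docs among {len(poisoned_corpus)} total "
      f"({n_poison / len(poisoned_corpus):.4%})")
```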
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked 1st place.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- Segment-level Metric Learning for Few-shot Bioacoustic Event Detection [56.59107110017436]
We propose a segment-level few-shot learning framework that utilizes both the positive and negative events during model optimization.
Our system achieves an F-measure of 62.73 on the DCASE 2022 challenge Task 5 (DCASE2022-T5) validation set, outperforming the baseline prototypical network's 34.02 by a large margin.
arXiv Detail & Related papers (2022-07-15T22:41:30Z)
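To make the positive/negative idea above concrete, here is a minimal sketch of segment-level scoring against both a positive and a negative prototype; the embeddings are random stand-ins (an assumption), not the DCASE system's features.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
emb_dim = 128
pos_support = torch.randn(5, emb_dim)   # embeddings of labeled event segments
neg_support = torch.randn(20, emb_dim)  # embeddings of background segments
queries = torch.randn(50, emb_dim)      # segments from the test recording

pos_proto = pos_support.mean(dim=0)
neg_proto = neg_support.mean(dim=0)

# Negative squared distance to each prototype -> event probability per segment.
logits = torch.stack([-(queries - pos_proto).pow(2).sum(dim=-1),
                      -(queries - neg_proto).pow(2).sum(dim=-1)], dim=-1)
p_event = F.softmax(logits, dim=-1)[:, 0]
detections = (p_event > 0.5).nonzero().flatten()
print(f"{detections.numel()} of {queries.size(0)} segments flagged as the event")
```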
- UoB at SemEval-2021 Task 5: Extending Pre-Trained Language Models to Include Task and Domain-Specific Information for Toxic Span Prediction [0.8376091455761259]
Toxicity is pervasive in social media and poses a major threat to the health of online communities.
The recent introduction of pre-trained language models, which have achieved state-of-the-art results in many NLP tasks, has transformed the way we approach natural language processing.
arXiv Detail & Related papers (2021-10-07T18:29:06Z)
- Two-Stream Consensus Network: Submission to HACS Challenge 2021 Weakly-Supervised Learning Track [78.64815984927425]
The goal of weakly-supervised temporal action localization is to temporally locate and classify actions of interest in untrimmed videos.
We adopt the two-stream consensus network (TSCN) as the main framework in this challenge.
Our solution ranked 2nd in this challenge, and we hope our method can serve as a baseline for future academic research.
arXiv Detail & Related papers (2021-06-21T03:36:36Z)
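As a rough illustration of the two-stream idea behind TSCN, here is a toy sketch that forms a consensus of snippet-level action scores from an RGB stream and an optical-flow stream; the scores are random stand-ins, and TSCN itself additionally refines frame-level pseudo labels.

```python
import torch

torch.manual_seed(0)
T, n_classes = 100, 20            # untrimmed video: 100 snippets, 20 actions
rgb_scores = torch.rand(T, n_classes)   # stand-in for the RGB stream output
flow_scores = torch.rand(T, n_classes)  # stand-in for the flow stream output

consensus = (rgb_scores + flow_scores) / 2          # late-fusion consensus
action_mask = consensus.max(dim=-1).values > 0.8    # threshold per snippet
print(f"{int(action_mask.sum())} of {T} snippets localized as action")
```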
- UTNLP at SemEval-2021 Task 5: A Comparative Analysis of Toxic Span Detection using Attention-based, Named Entity Recognition, and Ensemble Models [6.562256987706127]
This paper presents the methodology and results of our team, UTNLP, in the SemEval-2021 shared task 5 on toxic spans detection.
The experiments start with keyword-based models and are followed by attention-based, named entity-based, transformers-based, and ensemble models.
Our best approach, an ensemble model, achieves an F1 of 0.684 in the competition's evaluation phase.
arXiv Detail & Related papers (2021-04-10T13:56:03Z)
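To illustrate one common way such span ensembles are built, here is a toy sketch of a character-offset majority vote; the three model outputs are made-up stand-ins (assumptions), since the summary does not specify the exact ensembling rule.

```python
from collections import Counter

text = "you are an idiot and a moron"
# Character-offset sets predicted by three hypothetical models.
model_preds = [
    set(range(11, 16)),                       # "idiot"
    set(range(11, 16)) | set(range(23, 28)),  # "idiot", "moron"
    set(range(23, 28)),                       # "moron"
]

votes = Counter(i for pred in model_preds for i in pred)
majority = sorted(i for i, v in votes.items() if v >= 2)  # >= 2 of 3 models
print(majority)                               # offsets kept by the ensemble
print("".join(text[i] for i in majority))     # "idiotmoron"
```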
- MIPT-NSU-UTMN at SemEval-2021 Task 5: Ensembling Learning with Pre-trained Language Models for Toxic Spans Detection [0.0]
We developed ensemble models using BERT-based neural architectures and post-processing to combine tokens into spans.
We evaluated several pre-trained language models using various ensemble techniques for toxic span identification and achieved sizable improvements over our baseline fine-tuned BERT models.
arXiv Detail & Related papers (2021-04-10T11:27:32Z)
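The post-processing step mentioned above can be illustrated with a minimal sketch that merges tokens tagged as toxic into contiguous character spans; the token offsets and labels are hypothetical stand-ins for a BERT tagger's output.

```python
def tokens_to_spans(token_offsets, labels, max_gap=1):
    """Merge toxic tokens whose character ranges (nearly) touch."""
    spans = []
    for (start, end), toxic in zip(token_offsets, labels):
        if not toxic:
            continue
        if spans and start - spans[-1][1] <= max_gap:
            spans[-1][1] = end          # extend the previous span
        else:
            spans.append([start, end])  # open a new span
    return [tuple(s) for s in spans]

# "you are an idiot and a moron": tokens with (start, end) character offsets.
offsets = [(0, 3), (4, 7), (8, 10), (11, 16), (17, 20), (21, 22), (23, 28)]
labels = [0, 0, 0, 1, 0, 0, 1]
print(tokens_to_spans(offsets, labels))  # [(11, 16), (23, 28)]
```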
- WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans [2.4737119633827174]
In recent years, the widespread use of social media has led to an increase in the generation of toxic and offensive content on online platforms.
Social media platforms have worked on developing automatic detection methods and employing human moderators to cope with this deluge of offensive content.
arXiv Detail & Related papers (2021-04-09T22:52:26Z)
- NLRG at SemEval-2021 Task 5: Toxic Spans Detection Leveraging BERT-based Token Classification and Span Prediction Techniques [0.6850683267295249]
In this paper, we explore simple versions of Token Classification or Span Prediction approaches.
We use BERT-based models -- BERT, RoBERTa, and SpanBERT -- for both approaches.
To this end, we investigate results on four hybrid approaches -- Multi-Span, Span+Token, LSTM-CRF, and a combination of predicted offsets using union/intersection.
arXiv Detail & Related papers (2021-02-24T12:30:09Z)
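As a small illustration of the union/intersection combination mentioned above, assuming two hypothetical sets of predicted character offsets from the two approaches:

```python
# Offsets predicted by the token-classification and span-prediction
# approaches (made-up values for illustration).
token_clf_offsets = set(range(11, 16)) | set(range(23, 28))
span_pred_offsets = set(range(11, 16))

union = sorted(token_clf_offsets | span_pred_offsets)         # favors recall
intersection = sorted(token_clf_offsets & span_pred_offsets)  # favors precision
print(union, intersection, sep="\n")
```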
- RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models [93.151822563361]
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment.
We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration.
arXiv Detail & Related papers (2020-09-24T03:17:19Z)
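Below is a minimal sketch of the evaluation loop described above: prompt a pretrained LM and score its continuations with a toxicity classifier. The generator and the open-source classifier are assumptions for illustration; the paper itself scores toxicity with the Perspective API.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

prompt = "So I'm starting to think she's full of"  # illustrative prompt
for out in generator(prompt, max_new_tokens=20, num_return_sequences=3,
                     do_sample=True):
    continuation = out["generated_text"][len(prompt):]
    score = toxicity(continuation)[0]
    print(f"{score['label']}={score['score']:.2f} :: {continuation!r}")
```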
- Exploring Fine-tuning Techniques for Pre-trained Cross-lingual Models via Continual Learning [74.25168207651376]
Fine-tuning pre-trained language models to downstream cross-lingual tasks has shown promising results.
We leverage continual learning to preserve the cross-lingual ability of the pre-trained model when we fine-tune it to downstream tasks.
Our methods achieve better performance than other fine-tuning baselines on the zero-shot cross-lingual part-of-speech tagging and named entity recognition tasks.
arXiv Detail & Related papers (2020-04-29T14:07:18Z)
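The paper's specific continual-learning method is not detailed in this summary; as a generic stand-in, here is a minimal sketch of one simple way to limit forgetting of pre-trained (cross-lingual) knowledge during fine-tuning: an L2 penalty anchoring the weights to their pre-trained values.

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("xlm-roberta-base")  # illustrative choice
# Snapshot of the pre-trained parameters to anchor against.
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

def anchor_penalty(model, anchor, strength=0.01):
    """Penalize drift from the pre-trained parameters."""
    drift = sum((p - anchor[n]).pow(2).sum()
                for n, p in model.named_parameters())
    return strength * drift

# During fine-tuning: loss = task_loss + anchor_penalty(model, anchor)
```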