Transformer Ensembles for Sexism Detection
- URL: http://arxiv.org/abs/2110.15905v1
- Date: Fri, 29 Oct 2021 16:51:50 GMT
- Title: Transformer Ensembles for Sexism Detection
- Authors: Lily Davies, Marta Baldracchi, Carlo Alessandro Borella, and
Konstantinos Perifanos
- Abstract summary: This document presents in detail the work done for the sexism detection task at the EXIST2021 workshop.
Our methodology is built on ensembles of Transformer-based models which are trained on different backgrounds and corpora.
We report an accuracy of 0.767 and an F1 score of 0.766 for the binary classification task (task 1), and an accuracy of 0.623 and an F1 score of 0.535 for the multi-class task (task 2).
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This document presents in detail the work done for the sexism detection task
at the EXIST2021 workshop. Our methodology is built on ensembles of Transformer-based
models which are trained on different backgrounds and corpora and fine-tuned on the
dataset provided by the EXIST2021 workshop. We report an accuracy of 0.767 and an F1
score of 0.766 for the binary classification task (task 1), and an accuracy of 0.623
and an F1 score of 0.535 for the multi-class task (task 2).
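Since the abstract describes the approach only at a high level, here is a minimal sketch of how an ensemble of fine-tuned Transformer classifiers could be combined by averaging their softmax outputs; the checkpoint paths, label mapping, and soft-voting scheme are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of soft voting over fine-tuned Transformer classifiers.
# Checkpoint paths are hypothetical placeholders, not the authors' models.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINTS = [
    "path/to/bert-finetuned-exist2021",         # hypothetical fine-tuned model
    "path/to/xlm-roberta-finetuned-exist2021",  # hypothetical fine-tuned model
]

def ensemble_predict(text: str) -> int:
    """Average class probabilities across ensemble members and return the argmax label."""
    probs = []
    for ckpt in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(ckpt).eval()
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs.append(torch.softmax(model(**inputs).logits, dim=-1))
    return int(torch.stack(probs).mean(dim=0).argmax(dim=-1))

print(ensemble_predict("example tweet to classify"))  # 0/1 under an assumed binary label map
```

The same averaging applies to the multi-class task with a larger label set; majority voting over predicted labels would be an equally plausible combination rule.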
Related papers
- Mavericks at ArAIEval Shared Task: Towards a Safer Digital Space -- Transformer Ensemble Models Tackling Deception and Persuasion [0.0]
We present our approaches for task 1-A and task 2-A of the shared task, which focus on persuasion technique detection and disinformation detection, respectively.
The tasks use multigenre snippets of tweets and news articles for the given binary classification problem.
We achieved micro F1-scores of 0.742 on task 1-A (8th on the leaderboard) and 0.901 on task 2-A (7th on the leaderboard).
arXiv Detail & Related papers (2023-11-30T17:26:57Z)
- Task-Distributionally Robust Data-Free Meta-Learning [99.56612787882334]
Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
For the first time, we reveal two major challenges hindering their practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
arXiv Detail & Related papers (2023-11-23T15:46:54Z)
- IUST_NLP at SemEval-2023 Task 10: Explainable Detecting Sexism with Transformers and Task-adaptive Pretraining [0.0]
This paper describes our system for SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS).
We propose a set of transformer-based pre-trained models with task-adaptive pretraining and ensemble learning.
On the test dataset, our system achieves F1-scores of 83%, 64%, and 47% on subtasks A, B, and C, respectively.
arXiv Detail & Related papers (2023-05-11T15:29:04Z)
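As a rough illustration of the task-adaptive pretraining step mentioned in the entry above, the sketch below continues masked-language-model training on unlabeled in-domain text before fine-tuning; the base model, corpus file, and hyperparameters are assumptions, not the IUST_NLP configuration.

```python
# Sketch of task-adaptive pretraining: continue MLM training on unlabeled
# in-domain text, then fine-tune the adapted encoder for classification.
# Model name, corpus file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "roberta-base"  # assumed base encoder
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

raw = load_dataset("text", data_files={"train": "in_domain_posts.txt"})  # unlabeled in-domain text
tokenized = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tapt-encoder", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
# The adapted encoder is then fine-tuned with a classification head on the labeled
# task data, and several such models can be combined in an ensemble.
```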
- Attention at SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS) [15.52876591707497]
We have worked on the interpretability, trust, and understanding of the decisions made by models on these classification tasks.
The first task is binary sexism detection.
The second task determines the category of sexism.
The third task determines a more fine-grained category of sexism.
arXiv Detail & Related papers (2023-04-10T14:24:52Z)
- X-PuDu at SemEval-2022 Task 7: A Replaced Token Detection Task Pre-trained Model with Pattern-aware Ensembling for Identifying Plausible Clarifications [13.945286351253717]
This paper describes our winning system on SemEval 2022 Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases in instructional texts.
A replaced-token-detection pre-trained model is used with slightly different task-specific heads for SubTask-A (multi-class classification) and SubTask-B (ranking).
Our system achieves a 68.90% accuracy score and a 0.8070 Spearman's rank correlation score, surpassing the second-place system by large margins of 2.7 and 2.2 percentage points on SubTask-A and SubTask-B, respectively.
arXiv Detail & Related papers (2022-11-27T05:46:46Z)
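To make the "slightly different task-specific heads" concrete, here is a hedged sketch of one replaced-token-detection pre-trained encoder (an ELECTRA-style discriminator is assumed) shared by a multi-class head for SubTask-A and a scalar ranking head for SubTask-B; the class count and pooling strategy are illustrative choices, not the X-PuDu implementation.

```python
# Sketch: one shared RTD-pretrained encoder, two task-specific heads
# (multi-class classification vs. scalar ranking). Encoder choice, class
# count, and pooling are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class DualHeadModel(nn.Module):
    def __init__(self, encoder_name: str = "google/electra-base-discriminator",
                 num_classes: int = 3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.cls_head = nn.Linear(hidden, num_classes)  # SubTask-A: plausibility classes
        self.rank_head = nn.Linear(hidden, 1)           # SubTask-B: plausibility score

    def forward(self, input_ids, attention_mask, task: str = "cls"):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]            # first-token pooling
        return self.cls_head(pooled) if task == "cls" else self.rank_head(pooled)
```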
- Global Context Vision Transformers [78.5346173956383]
We propose the global context vision transformer (GC ViT), a novel architecture that enhances parameter and compute utilization for computer vision.
We address the lack of inductive bias in ViTs and propose to leverage modified fused inverted residual blocks in our architecture.
Our proposed GC ViT achieves state-of-the-art results across image classification, object detection and semantic segmentation tasks.
arXiv Detail & Related papers (2022-06-20T18:42:44Z) - Continual Object Detection via Prototypical Task Correlation Guided
Gating Mechanism [120.1998866178014]
We present a flexible framework for continual object detection via pRotOtypical taSk corrElaTion guided gaTing mechAnism (ROSETTA).
Concretely, a unified framework is shared by all tasks while task-aware gates are introduced to automatically select sub-models for specific tasks.
Experiments on COCO-VOC, KITTI-Kitchen, class-incremental detection on VOC and sequential learning of four tasks show that ROSETTA yields state-of-the-art performance.
arXiv Detail & Related papers (2022-05-06T07:31:28Z)
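The task-aware gating idea can be pictured with the toy module below, which soft-selects among sub-modules using a learned per-task gate; the expert layout, shapes, and task ids are illustrative only and do not reproduce ROSETTA's design.

```python
# Toy task-aware gate: a per-task embedding weights a set of expert sub-modules,
# so each task effectively selects its own mixture of sub-models.
import torch
import torch.nn as nn

class TaskGatedBlock(nn.Module):
    def __init__(self, dim: int, num_experts: int, num_tasks: int):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.task_gate = nn.Embedding(num_tasks, num_experts)  # one gate vector per task

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.task_gate(task_id), dim=-1)       # (B, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, D)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)         # (B, D)

block = TaskGatedBlock(dim=256, num_experts=4, num_tasks=3)
y = block(torch.randn(8, 256), torch.zeros(8, dtype=torch.long))  # batch of 8, task 0
```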
- Automatic Sexism Detection with Multilingual Transformer Models [0.0]
This paper presents the contribution of the AIT_FHSTP team at the EXIST 2021 benchmark for two sEXism Identification in Social neTworks tasks.
To solve the tasks we applied two multilingual transformer models, one based on multilingual BERT and one based on XLM-R.
Our approach uses two different strategies to adapt the transformers to the detection of sexist content: first, unsupervised pre-training with additional data and second, supervised fine-tuning with additional and augmented data.
For both tasks, our best model is XLM-R with unsupervised pre-training on the EXIST data and additional datasets.
arXiv Detail & Related papers (2021-06-09T08:45:51Z)
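A hedged sketch of the supervised fine-tuning step with XLM-R follows; the CSV layout, the way augmented examples are folded into the training file, and the hyperparameters are assumptions rather than the AIT_FHSTP setup.

```python
# Sketch of supervised fine-tuning of XLM-R for binary sexism detection on a
# training file that already contains original plus augmented examples.
# File name, column names, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base",
                                                           num_labels=2)

data = load_dataset("csv", data_files={"train": "exist2021_train_augmented.csv"})  # assumed "text"/"label" columns
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                batched=True)

args = TrainingArguments(output_dir="xlmr-exist2021", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=data["train"],
        tokenizer=tokenizer).train()  # passing the tokenizer enables dynamic padding
```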
- LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding [49.941806975280045]
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks.
We present LayoutLMv2 by pre-training text, layout and image in a multi-modal framework.
arXiv Detail & Related papers (2020-12-29T13:01:52Z)
- Meta-Generating Deep Attentive Metric for Few-shot Classification [53.07108067253006]
We present a novel deep metric meta-generation method to generate a specific metric for a new few-shot learning task.
In this study, we structure the metric using a three-layer deep attentive network that is flexible enough to produce a discriminative metric for each task.
We obtain clear performance improvements over state-of-the-art competitors, especially in challenging cases.
arXiv Detail & Related papers (2020-12-03T02:07:43Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
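To illustrate the adaptive task sampling idea in the last entry, here is a small self-contained sketch that samples tasks in proportion to a running estimate of their difficulty; the weighting scheme is a generic choice for illustration, not necessarily the paper's exact sampler.

```python
# Toy adaptive task sampler: tasks with higher running loss are sampled more
# often for the next meta-training episode.
import random

class AdaptiveTaskSampler:
    def __init__(self, num_tasks: int, temperature: float = 1.0):
        self.losses = [1.0] * num_tasks   # optimistic initial difficulty estimates
        self.temperature = temperature

    def sample(self) -> int:
        weights = [loss ** self.temperature for loss in self.losses]
        return random.choices(range(len(self.losses)), weights=weights, k=1)[0]

    def update(self, task_id: int, loss: float, momentum: float = 0.9):
        self.losses[task_id] = momentum * self.losses[task_id] + (1 - momentum) * loss

sampler = AdaptiveTaskSampler(num_tasks=100)
task = sampler.sample()
sampler.update(task, loss=0.42)  # feed back the episode's meta-training loss
```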
This list is automatically generated from the titles and abstracts of the papers on this site.