Attention at SemEval-2023 Task 10: Explainable Detection of Online
Sexism (EDOS)
- URL: http://arxiv.org/abs/2304.04610v1
- Date: Mon, 10 Apr 2023 14:24:52 GMT
- Title: Attention at SemEval-2023 Task 10: Explainable Detection of Online
Sexism (EDOS)
- Authors: Debashish Roy, Manish Shrivastava
- Abstract summary: We work on the interpretability, trust, and understanding of the decisions made by models on classification tasks.
The first subtask is binary sexism detection.
The second subtask identifies the category of sexism.
The third subtask identifies a more fine-grained category of sexism.
- Score: 15.52876591707497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we work on the interpretability, trust, and understanding
of the decisions made by models on classification tasks. The task is divided into
three subtasks. The first subtask is binary sexism detection. The second subtask
identifies the category of sexism, and the third a more fine-grained category of
sexism. Our work explores solving these tasks as classification problems by
fine-tuning transformer-based architectures. We performed several experiments with
our architecture, including combining multiple transformers, domain-adaptive
pretraining on the provided unlabelled Reddit and Gab data, joint learning, and
taking different layers of the transformer as input to a classification head. Our
system (team name: Attention) achieved a macro F1 score of 0.839 for Task A, 0.5835
for Task B, and 0.3356 for Task C in the CodaLab SemEval competition. We later
improved our test-set scores for Task B to 0.6228 and Task C to 0.3693.
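As a rough illustration of the last idea above (feeding the hidden states of several transformer layers into the classification head), the sketch below shows one common way to set this up with the Hugging Face transformers library. It is not the authors' released code: the base model (roberta-base), the choice of the last four layers, and the single-linear-layer head are assumptions made for illustration. Domain-adaptive pretraining on the unlabelled Reddit and Gab data would be a separate masked-language-modelling step run on the encoder before this fine-tuning.

```python
# Minimal sketch (not the authors' released code) of a classification head that
# takes several transformer layers as input. Model name, layer choice, and head
# design are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiLayerClassifier(nn.Module):
    def __init__(self, model_name="roberta-base", layers=(-1, -2, -3, -4), num_labels=2):
        super().__init__()
        self.layers = layers
        # output_hidden_states=True exposes every layer's hidden states.
        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        hidden = self.encoder.config.hidden_size
        # Classification head over the concatenated [CLS] vectors of the chosen layers.
        self.head = nn.Sequential(
            nn.Dropout(0.1),
            nn.Linear(hidden * len(layers), num_labels),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states: tuple of (num_layers + 1) tensors, each [batch, seq, hidden].
        cls_per_layer = [out.hidden_states[i][:, 0, :] for i in self.layers]
        return self.head(torch.cat(cls_per_layer, dim=-1))

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = MultiLayerClassifier()
batch = tokenizer(["example post"], return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: [1, num_labels]
```

Combining multiple transformers or adding joint learning over the three subtasks would extend this skeleton with additional encoders or task-specific heads.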
Related papers
- ThangDLU at #SMM4H 2024: Encoder-decoder models for classifying text data on social disorders in children and adolescents [49.00494558898933]
This paper describes our participation in Task 3 and Task 5 of the #SMM4H (Social Media Mining for Health) 2024 Workshop.
Task 3 is a multi-class classification task centered on tweets discussing the impact of outdoor environments on symptoms of social anxiety.
Task 5 involves a binary classification task focusing on tweets reporting medical disorders in children.
We applied transfer learning from pre-trained encoder-decoder models such as BART-base and T5-small to identify the labels of a set of given tweets.
arXiv Detail & Related papers (2024-04-30T17:06:20Z) - Mavericks at ArAIEval Shared Task: Towards a Safer Digital Space --
Transformer Ensemble Models Tackling Deception and Persuasion [0.0]
We present our approaches for task 1-A and task 2-A of the shared task, which focus on persuasion technique detection and disinformation detection, respectively.
The tasks use multigenre snippets of tweets and news articles for the given binary classification problem.
We achieved micro F1-scores of 0.742 on task 1-A (8th on the leaderboard) and 0.901 on task 2-A (7th on the leaderboard).
arXiv Detail & Related papers (2023-11-30T17:26:57Z) - HausaNLP at SemEval-2023 Task 10: Transfer Learning, Synthetic Data and
Side-Information for Multi-Level Sexism Classification [0.007696728525672149]
We present the findings of our participation in the SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS) task.
We investigated the effects of transferring two language models: XLM-T (sentiment classification) and HateBERT (same domain -- Reddit) for multi-level classification into Sexist or not Sexist.
arXiv Detail & Related papers (2023-04-28T20:03:46Z) - Fast Inference and Transfer of Compositional Task Structures for
Few-shot Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a few-shot reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z) - Continual Object Detection via Prototypical Task Correlation Guided
Gating Mechanism [120.1998866178014]
We present a flexible framework for continual object detection via pRotOtypical taSk corrElaTion guided gaTing mechAnism (ROSETTA).
Concretely, a unified framework is shared by all tasks while task-aware gates are introduced to automatically select sub-models for specific tasks.
Experiments on COCO-VOC, KITTI-Kitchen, class-incremental detection on VOC and sequential learning of four tasks show that ROSETTA yields state-of-the-art performance.
arXiv Detail & Related papers (2022-05-06T07:31:28Z) - Deep Models for Visual Sentiment Analysis of Disaster-related Multimedia
Content [4.284841324544116]
This paper presents solutions for the MediaEval 2021 task, namely "Visual Sentiment Analysis: A Natural Disaster Use-case".
The task aims to extract and classify sentiments perceived by viewers and the emotional message conveyed by natural disaster-related images shared on social media.
In our proposed solutions, we rely mainly on two different state-of-the-art models, namely Inception-v3 and VggNet-19, pre-trained on ImageNet.
arXiv Detail & Related papers (2021-11-30T10:22:41Z) - Transformer Ensembles for Sexism Detection [0.0]
This document presents in detail the work done for the sexism detection task at the EXIST2021 workshop.
Our methodology is built on ensembles of Transformer-based models trained on different backgrounds and corpora.
We report an accuracy of 0.767 and an F1 score of 0.766 for the binary classification task (task 1), and an accuracy of 0.623 and an F1 score of 0.535 for the multi-class task (task 2).
arXiv Detail & Related papers (2021-10-29T16:51:50Z) - Automatic Sexism Detection with Multilingual Transformer Models [0.0]
This paper presents the contribution of the AIT_FHSTP team at the EXIST 2021 benchmark for two sEXism Identification in Social neTworks tasks.
To solve the tasks we applied two multilingual transformer models, one based on multilingual BERT and one based on XLM-R.
Our approach uses two different strategies to adapt the transformers to the detection of sexist content: first, unsupervised pre-training with additional data and second, supervised fine-tuning with additional and augmented data.
For both tasks, our best model is XLM-R with unsupervised pre-training on the EXIST data and additional datasets.
arXiv Detail & Related papers (2021-06-09T08:45:51Z) - CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and
Transfer Learning [138.40338621974954]
CausalWorld is a benchmark for causal structure and transfer learning in a robotic manipulation environment.
Tasks consist of constructing 3D shapes from a given set of blocks - inspired by how children learn to build complex structures.
arXiv Detail & Related papers (2020-10-08T23:01:13Z) - Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z) - Device-Robust Acoustic Scene Classification Based on Two-Stage
Categorization and Data Augmentation [63.98724740606457]
We present a joint effort of four groups, namely GT, USTC, Tencent, and UKE, to tackle Task 1 - Acoustic Scene Classification (ASC) in the DCASE 2020 Challenge.
Task 1a focuses on ASC of audio signals recorded with multiple (real and simulated) devices into ten different fine-grained classes.
Task 1b concerns the classification of data into three higher-level classes using low-complexity solutions.
arXiv Detail & Related papers (2020-07-16T15:07:14Z)