RubCSG at SemEval-2022 Task 5: Ensemble learning for identifying
misogynous MEMEs
- URL: http://arxiv.org/abs/2204.03953v1
- Date: Fri, 8 Apr 2022 09:27:28 GMT
- Title: RubCSG at SemEval-2022 Task 5: Ensemble learning for identifying
misogynous MEMEs
- Authors: Wentao Yu, Benedikt Boenninghoff, Jonas Roehrig, Dorothea Kolossa
- Abstract summary: This work presents an ensemble system based on various uni-modal and bi-modal model architectures developed for the SemEval 2022 Task 5: MAMI-Multimedia Automatic Misogyny Identification.
- Score: 12.979213013465882
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This work presents an ensemble system based on various uni-modal and bi-modal
model architectures developed for the SemEval 2022 Task 5: MAMI-Multimedia
Automatic Misogyny Identification. The challenge organizers provide an English
meme dataset to develop and train systems for identifying and classifying
misogynous memes. More precisely, the competition is separated into two
sub-tasks: sub-task A asks for a binary decision as to whether a meme expresses
misogyny, while sub-task B is to classify misogynous memes into the potentially
overlapping sub-categories of stereotype, shaming, objectification, and
violence. For our submission, we implement a new model fusion network and
employ an ensemble learning approach for better performance. With this
structure, we achieve a 0.755 macro-average F1-score (11th) in sub-task A and a
0.709 weighted-average F1-score (10th) in sub-task B.
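As a rough illustration of the approach described in the abstract, the snippet below averages per-label probabilities from several member models and scores the result with the two metrics named above (macro-average F1 for sub-task A, weighted-average F1 for sub-task B). It is a minimal sketch: the number of members, the simple probability averaging, the 0.5 threshold, and the label layout are illustrative assumptions, not the authors' actual fusion or ensembling scheme.

import numpy as np
from sklearn.metrics import f1_score

# Placeholder outputs from K member models (e.g., uni-modal text, uni-modal image,
# bi-modal fusion) for N memes and 5 labels:
# [misogynous, shaming, stereotype, objectification, violence].
rng = np.random.default_rng(0)
member_probs = rng.random((3, 8, 5))          # K=3 models, N=8 memes, 5 labels (random stand-ins)
y_true = rng.integers(0, 2, size=(8, 5))      # placeholder gold annotations

ensemble_probs = member_probs.mean(axis=0)    # simple averaging across ensemble members
y_pred = (ensemble_probs >= 0.5).astype(int)  # threshold each label independently

# One plausible reading of the reported metrics: macro-average F1 on the binary
# "misogynous" label (sub-task A) and weighted-average F1 over the four
# overlapping sub-category labels (sub-task B).
f1_task_a = f1_score(y_true[:, 0], y_pred[:, 0], average="macro")
f1_task_b = f1_score(y_true[:, 1:], y_pred[:, 1:], average="weighted")
print(f"sub-task A macro F1: {f1_task_a:.3f}, sub-task B weighted F1: {f1_task_b:.3f}")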
Related papers
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- A multitask learning framework for leveraging subjectivity of annotators to identify misogyny [47.175010006458436]
We propose a multitask learning approach to enhance the performance of the misogyny identification systems.
We incorporated diverse perspectives from annotators in our model design, considering gender and age across six profile groups.
This research advances content moderation and highlights the importance of embracing diverse perspectives to build effective online moderation systems.
arXiv Detail & Related papers (2024-06-22T15:06:08Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- LCT-1 at SemEval-2023 Task 10: Pre-training and Multi-task Learning for Sexism Detection and Classification [0.0]
SemEval-2023 Task 10 on Explainable Detection of Online Sexism aims at increasing the explainability of sexism detection.
Our system is based on further domain-adaptive pre-training.
In experiments, multi-task learning performs on par with standard fine-tuning for sexism detection.
arXiv Detail & Related papers (2023-06-08T09:56:57Z)
- Mixed Autoencoder for Self-supervised Visual Representation Learning [95.98114940999653]
Masked Autoencoder (MAE) has demonstrated superior performance on various vision tasks by randomly masking image patches and reconstructing them.
This paper studies the prevailing mixing augmentation for MAE.
arXiv Detail & Related papers (2023-03-30T05:19:43Z)
- UPB at SemEval-2022 Task 5: Enhancing UNITER with Image Sentiment and Graph Convolutional Networks for Multimedia Automatic Misogyny Identification [0.3437656066916039]
We describe our classification systems submitted to the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification.
Our best model reaches an F1-score of 71.4% in Sub-task A and 67.3% in Sub-task B, positioning our team in the upper third of the leaderboard.
arXiv Detail & Related papers (2022-05-29T21:12:36Z)
- Unifying Language Learning Paradigms [96.35981503087567]
We present a unified framework for pre-training models that are universally effective across datasets and setups.
We show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective.
Our model also achieves strong results in in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.
arXiv Detail & Related papers (2022-05-10T19:32:20Z)
- TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes [9.66022279280394]
We present a multimodal architecture that combines textual and visual features in order to detect misogynous meme content (see the generic fusion sketch after this list).
Our solution obtained the best result in Task B, where the challenge is to classify a misogynous meme into the sub-categories of misogyny it expresses.
arXiv Detail & Related papers (2022-04-13T11:03:21Z)
- AMS_ADRN at SemEval-2022 Task 5: A Suitable Image-text Multimodal Joint Modeling Method for Multi-task Misogyny Identification [3.5382535469099436]
Women are influential online, especially in image-based social media such as Twitter and Instagram.
In this paper, we describe the system developed by our team for SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification.
arXiv Detail & Related papers (2022-02-18T09:41:37Z)
- Automatic Sexism Detection with Multilingual Transformer Models [0.0]
This paper presents the contribution of the AIT_FHSTP team to the EXIST 2021 benchmark, which comprises two sEXism Identification in Social neTworks tasks.
To solve the tasks we applied two multilingual transformer models, one based on multilingual BERT and one based on XLM-R.
Our approach uses two different strategies to adapt the transformers to the detection of sexist content: first, unsupervised pre-training with additional data and second, supervised fine-tuning with additional and augmented data.
For both tasks, our best model is XLM-R with unsupervised pre-training on the EXIST data and additional datasets (see the two-stage adaptation sketch after this list).
arXiv Detail & Related papers (2021-06-09T08:45:51Z)
- Meta-Learning across Meta-Tasks for Few-Shot Learning [107.44950540552765]
We argue that inter-meta-task relationships should be exploited and that meta-tasks should be sampled strategically to assist in meta-learning.
We consider the relationships defined over two types of meta-task pairs and propose different strategies to exploit them.
arXiv Detail & Related papers (2020-02-11T09:25:13Z)
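Several of the systems above (the bi-modal members of our ensemble, TIB-VA, UPB, and AMS_ADRN) combine textual and visual features before classification. The following is a minimal, generic late-fusion sketch that assumes precomputed text and image embeddings; the dimensions, layer sizes, and label layout are illustrative and do not reproduce any of the cited architectures.

import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate a text embedding and an image embedding, then classify (illustrative only)."""

    def __init__(self, text_dim=768, image_dim=512, hidden_dim=256, num_labels=1):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_labels),
        )

    def forward(self, text_emb, image_emb):
        # Late fusion: simple concatenation of the two modalities, followed by an MLP head.
        return self.fuse(torch.cat([text_emb, image_emb], dim=-1))

# Toy usage with random stand-ins for features from a text encoder (768-d) and an image encoder (512-d).
model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512))
probs = torch.sigmoid(logits)  # probability that each of the 4 memes is misogynous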
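The EXIST 2021 system above (and, similarly, the LCT-1 entry) adapts a pretrained transformer in two stages: unsupervised masked-language-model pre-training on additional in-domain text, followed by supervised fine-tuning on labelled data. Below is a minimal sketch of that recipe with Hugging Face Transformers; the texts are placeholders, and the actual optimisation loops, hyperparameters, and data augmentation are omitted.

import torch
from transformers import (AutoModelForMaskedLM, AutoModelForSequenceClassification,
                          AutoTokenizer, DataCollatorForLanguageModeling)

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")

# Stage 1: continued unsupervised MLM pre-training on (placeholder) in-domain text.
mlm = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm_probability=0.15)
unlabeled = ["placeholder in-domain post one", "placeholder in-domain post two"]
batch = collator([tok(t, truncation=True, max_length=64) for t in unlabeled])
mlm_loss = mlm(**batch).loss      # loss on randomly masked tokens
mlm_loss.backward()               # a real run would loop with an optimiser over many batches
mlm.save_pretrained("xlmr-domain-adapted")
tok.save_pretrained("xlmr-domain-adapted")

# Stage 2: supervised fine-tuning for sexism detection, initialised from the adapted encoder.
clf = AutoModelForSequenceClassification.from_pretrained("xlmr-domain-adapted", num_labels=2)
labeled = tok(["placeholder sexist example", "placeholder neutral example"],
              padding=True, truncation=True, return_tensors="pt")
clf_loss = clf(**labeled, labels=torch.tensor([1, 0])).loss
clf_loss.backward()               # fine-tuning loop, augmentation, and evaluation omitted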
This list is automatically generated from the titles and abstracts of the papers on this site.