MuSeD: A Multimodal Spanish Dataset for Sexism Detection in Social Media Videos
- URL: http://arxiv.org/abs/2504.11169v1
- Date: Tue, 15 Apr 2025 13:16:46 GMT
- Title: MuSeD: A Multimodal Spanish Dataset for Sexism Detection in Social Media Videos
- Authors: Laura De Grazia, Pol Pastells, Mauro Vázquez Chas, Desmond Elliott, Danae Sánchez Villegas, Mireia Farrús, Mariona Taulé
- Abstract summary: We introduce MuSeD, a new Multimodal Spanish dataset for Sexism Detection consisting of $\approx$ 11 hours of videos extracted from TikTok and BitChute. We find that visual information plays a key role in labeling sexist content for both humans and models.
- Score: 12.555579923843641
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sexism is generally defined as prejudice and discrimination based on sex or gender, affecting every sector of society, from social institutions to relationships and individual behavior. Social media platforms amplify the impact of sexism by conveying discriminatory content not only through text but also across multiple modalities, highlighting the critical need for a multimodal approach to the analysis of sexism online. With the rise of social media platforms where users share short videos, sexism is increasingly spreading through video content. Automatically detecting sexism in videos is a challenging task, as it requires analyzing the combination of verbal, audio, and visual elements to identify sexist content. In this study, (1) we introduce MuSeD, a new Multimodal Spanish dataset for Sexism Detection consisting of $\approx$ 11 hours of videos extracted from TikTok and BitChute; (2) we propose an innovative annotation framework for analyzing the contribution of textual and multimodal labels in the classification of sexist and non-sexist content; and (3) we evaluate a range of large language models (LLMs) and multimodal LLMs on the task of sexism detection. We find that visual information plays a key role in labeling sexist content for both humans and models. Models effectively detect explicit sexism; however, they struggle with implicit cases, such as stereotypes, instances where annotators also show low agreement. This highlights the inherent difficulty of the task, as identifying implicit sexism depends on the social and cultural context.
Related papers
- Gender Bias in Text-to-Video Generation Models: A case study of Sora [63.064204206220936]
This study investigates the presence of gender bias in OpenAI's Sora, a text-to-video generation model. We uncover significant evidence of bias by analyzing the generated videos from a diverse set of gender-neutral and stereotypical prompts.
arXiv Detail & Related papers (2024-12-30T18:08:13Z)
- PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis [74.41260927676747]
This paper bridges the gaps by introducing multimodal conversational Aspect-based Sentiment Analysis (ABSA).
To benchmark the tasks, we construct PanoSent, a dataset annotated both manually and automatically, featuring high quality, large scale, multimodality, multilingualism, multi-scenarios, and covering both implicit and explicit sentiment elements.
To effectively address the tasks, we devise a novel Chain-of-Sentiment reasoning framework, together with a novel multimodal large language model (namely Sentica) and a paraphrase-based verification mechanism.
arXiv Detail & Related papers (2024-08-18T13:51:01Z)
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- A multitask learning framework for leveraging subjectivity of annotators to identify misogyny [47.175010006458436]
We propose a multitask learning approach to enhance the performance of misogyny identification systems.
We incorporated diverse perspectives from annotators in our model design, considering gender and age across six profile groups.
This research advances content moderation and highlights the importance of embracing diverse perspectives to build effective online moderation systems.
arXiv Detail & Related papers (2024-06-22T15:06:08Z)
- Bilingual Sexism Classification: Fine-Tuned XLM-RoBERTa and GPT-3.5 Few-Shot Learning [0.7874708385247352]
This study aims to improve sexism identification in bilingual contexts (English and Spanish) by leveraging natural language processing models. We fine-tuned the XLM-RoBERTa model and separately used GPT-3.5 with few-shot learning prompts to classify sexist content.
arXiv Detail & Related papers (2024-06-11T14:15:33Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- SemEval-2023 Task 10: Explainable Detection of Online Sexism [5.542286527528687]
We introduce SemEval-2023 Task 10 on the Explainable Detection of Online Sexism (EDOS).
We make three main contributions: i) a novel hierarchical taxonomy of sexist content, which includes granular vectors of sexism to aid explainability; ii) a new dataset of 20,000 social media comments with fine-grained labels, along with larger unlabelled datasets for model adaptation; and iii) baseline models as well as an analysis of the methods, results and errors for participant submissions to our task.
arXiv Detail & Related papers (2023-03-07T20:28:39Z)
- TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes [9.66022279280394]
We present a multimodal architecture that combines textual and visual features in order to detect misogynous meme content.
Our solution obtained the best result in the Task-B where the challenge is to classify whether a given document is misogynous.
arXiv Detail & Related papers (2022-04-13T11:03:21Z)
- SWSR: A Chinese Dataset and Lexicon for Online Sexism Detection [9.443571652110663]
We propose the first Chinese sexism dataset -- Sina Weibo Sexism Review (SWSR) dataset -- and a large Chinese lexicon SexHateLex.
The SWSR dataset provides labels at different levels of granularity, including (i) sexism or non-sexism, (ii) sexism category, and (iii) target type.
We conduct experiments for the three sexism classification tasks making use of state-of-the-art machine learning models.
arXiv Detail & Related papers (2021-08-06T12:06:40Z)
- Gender bias in magazines oriented to men and women: a computational approach [58.720142291102135]
We compare the content of a women-oriented magazine with that of a men-oriented one, both produced by the same editorial group over a decade.
With Topic Modelling techniques we identify the main themes discussed in the magazines and quantify how much the presence of these topics differs between magazines over time.
Our results show that the frequency of appearance of the topics Family, Business, and Women as sex objects presents an initial bias that tends to disappear over time.
arXiv Detail & Related papers (2020-11-24T14:02:49Z)
- "Call me sexist, but...": Revisiting Sexism Detection Using Psychological Scales and Adversarial Samples [2.029924828197095]
We outline the different dimensions of sexism by grounding them in their implementation in psychological scales.
From the scales, we derive a codebook for sexism in social media, which we use to annotate existing and novel datasets.
Results indicate that current machine learning models pick up on a very narrow set of linguistic markers of sexism and do not generalize well to out-of-domain examples.
arXiv Detail & Related papers (2020-04-27T13:07:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.