An Adaptive Supervised Contrastive Learning Framework for Implicit Sexism Detection in Digital Social Networks
- URL: http://arxiv.org/abs/2507.05271v1
- Date: Thu, 03 Jul 2025 14:22:21 GMT
- Title: An Adaptive Supervised Contrastive Learning Framework for Implicit Sexism Detection in Digital Social Networks
- Authors: Mohammad Zia Ur Rehman, Aditya Shah, Nagendra Kumar
- Abstract summary: We introduce an Adaptive Supervised Contrastive lEarning framework for implicit sexism detectioN (ASCEND). A key innovation of our method is the incorporation of threshold-based contrastive learning. Evaluations on the EXIST2021 and MLSC datasets demonstrate that ASCEND significantly outperforms existing methods.
- Score: 0.728258471592763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The global reach of social media has amplified the spread of hateful content, including implicit sexism, which is often overlooked by conventional detection methods. In this work, we introduce an Adaptive Supervised Contrastive lEarning framework for implicit sexism detectioN (ASCEND). A key innovation of our method is the incorporation of threshold-based contrastive learning: by computing cosine similarities between embeddings, we selectively treat only those sample pairs as positive if their similarity exceeds a learnable threshold. This mechanism refines the embedding space by robustly pulling together representations of semantically similar texts while pushing apart dissimilar ones, thus reducing false positives and negatives. The final classification is achieved by jointly optimizing a contrastive loss with a cross-entropy loss. Textual features are enhanced through a word-level attention module. Additionally, we employ sentiment, emotion, and toxicity features. Evaluations on the EXIST2021 and MLSC datasets demonstrate that ASCEND significantly outperforms existing methods, with average Macro F1 improvements of 9.86%, 29.63%, and 32.51% across multiple tasks, highlighting its efficacy in capturing the subtle cues of implicit sexist language.
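The threshold-based contrastive objective described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the loss weight `alpha`, and the temperature are assumptions, the threshold is fixed here whereas the paper learns it, and the word-level attention module and auxiliary sentiment/emotion/toxicity features are omitted.

```python
import numpy as np

def threshold_supcon_loss(embeddings, logits, labels,
                          threshold=0.5, alpha=0.5, temp=0.1):
    """Joint objective: cross-entropy plus a supervised contrastive term in
    which a pair counts as positive only if it shares a label AND its cosine
    similarity exceeds `threshold` (learnable in the paper, fixed here)."""
    n = len(labels)
    # cosine similarities via unit-normalized embeddings
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T
    eye = np.eye(n, dtype=bool)
    # threshold-filtered positives: same label AND similarity above threshold
    pos = (labels[:, None] == labels[None, :]) & (sim > threshold) & ~eye

    s = np.where(eye, -np.inf, sim / temp)  # exclude self-pairs
    m = s.max(axis=1, keepdims=True)        # stable log-softmax over each row
    log_prob = s - (m + np.log(np.exp(s - m).sum(axis=1, keepdims=True)))

    counts = pos.sum(axis=1)
    # mean log-probability over each anchor's surviving positive pairs
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(counts, 1)
    valid = counts > 0
    contrastive = -per_anchor[valid].mean() if valid.any() else 0.0

    # standard softmax cross-entropy on the classifier logits
    lm = logits.max(axis=1, keepdims=True)
    logp = logits - (lm + np.log(np.exp(logits - lm).sum(axis=1, keepdims=True)))
    ce = -logp[np.arange(n), labels].mean()
    return ce + alpha * contrastive
```

Raising `threshold` prunes same-label pairs that the encoder does not yet embed close together, which is the mechanism the abstract credits with reducing false positives among the contrastive pairs.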
Related papers
- Explaining Matters: Leveraging Definitions and Semantic Expansion for Sexism Detection [9.477601265462694]
We propose two prompt-based data augmentation techniques for sexism detection. We also introduce an ensemble strategy that resolves prediction ties by aggregating complementary perspectives from multiple language models. Our experimental evaluation on the EDOS dataset demonstrates state-of-the-art performance across all tasks.
arXiv Detail & Related papers (2025-06-06T16:58:12Z)
- CL-ISR: A Contrastive Learning and Implicit Stance Reasoning Framework for Misleading Text Detection on Social Media [0.5999777817331317]
This paper proposes a novel framework, CL-ISR (Contrastive Learning and Implicit Stance Reasoning), to improve the detection accuracy of misleading texts on social media. First, we use a contrastive learning algorithm to improve the model's ability to learn the semantic differences between truthful and misleading texts. Second, we introduce an implicit stance reasoning module to explore the potential stance tendencies in the text and their relationships with related topics.
arXiv Detail & Related papers (2025-06-05T14:52:28Z)
- EMO-Debias: Benchmarking Gender Debiasing Techniques in Multi-Label Speech Emotion Recognition [49.27067541740956]
EMO-Debias is a large-scale comparison of 13 debiasing methods applied to multi-label SER. Our study encompasses techniques from pre-processing, regularization, adversarial learning, biased learners, and distributionally robust optimization. Our analysis quantifies the trade-offs between fairness and accuracy, identifying which approaches consistently reduce gender performance gaps.
arXiv Detail & Related papers (2025-06-05T05:48:31Z)
- Emotion-aware Dual Cross-Attentive Neural Network with Label Fusion for Stance Detection in Misinformative Social Media Content [0.37865171120254354]
This paper proposes a novel method for Stance Prediction through a Label-fused dual cross-Attentive Emotion-aware neural Network. The proposed method employs a dual cross-attention mechanism and a hierarchical attention network to capture inter- and intra-relationships.
arXiv Detail & Related papers (2025-05-27T15:38:50Z)
- Estimating Commonsense Plausibility through Semantic Shifts [66.06254418551737]
We propose ComPaSS, a novel discriminative framework that quantifies commonsense plausibility by measuring semantic shifts. Evaluations on two types of fine-grained commonsense plausibility estimation tasks show that ComPaSS consistently outperforms baselines.
arXiv Detail & Related papers (2025-02-19T06:31:06Z)
- Identical and Fraternal Twins: Fine-Grained Semantic Contrastive Learning of Sentence Representations [6.265789210037749]
We introduce a novel Identical and Fraternal Twins of Contrastive Learning framework, capable of simultaneously adapting to various positive pairs generated by different augmentation techniques.
We also present proof-of-concept experiments combined with the contrastive objective to prove the validity of the proposed Twins Loss.
arXiv Detail & Related papers (2023-07-20T15:02:42Z)
- RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search [51.09723403468361]
We propose a Relation and Sensitivity aware representation learning method (RaSa). RaSa includes two novel tasks: Relation-Aware learning (RA) and Sensitivity-Aware learning (SA). Experiments demonstrate that RaSa outperforms existing state-of-the-art methods by 6.94%, 4.45%, and 15.35% in terms of Rank@1 on three datasets.
arXiv Detail & Related papers (2023-05-23T03:53:57Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose an unbiased Dense Contrastive Visual-Linguistic Pretraining approach that replaces region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
For contrastive learning, we discuss two strategies to explicitly remove the detected false negatives.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.