Language is Scary when Over-Analyzed: Unpacking Implied Misogynistic Reasoning with Argumentation Theory-Driven Prompts
- URL: http://arxiv.org/abs/2409.02519v1
- Date: Wed, 4 Sep 2024 08:27:43 GMT
- Title: Language is Scary when Over-Analyzed: Unpacking Implied Misogynistic Reasoning with Argumentation Theory-Driven Prompts
- Authors: Arianna Muti, Federico Ruggeri, Khalid Al-Khatib, Alberto Barrón-Cedeño, Tommaso Caselli
- Abstract summary: We propose misogyny detection as an Argumentative Reasoning task.
We investigate the capacity of large language models to understand the implicit reasoning used to convey misogyny in both Italian and English.
- Score: 17.259767031006604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose misogyny detection as an Argumentative Reasoning task and we investigate the capacity of large language models (LLMs) to understand the implicit reasoning used to convey misogyny in both Italian and English. The central aim is to generate the missing reasoning link between a message and the implied meanings encoding the misogyny. Our study uses argumentation theory as a foundation to form a collection of prompts in both zero-shot and few-shot settings. These prompts integrate different techniques, including chain-of-thought reasoning and augmented knowledge. Our findings show that LLMs fall short on reasoning capabilities about misogynistic comments and that they mostly rely on their implicit knowledge derived from internalized common stereotypes about women to generate implied assumptions, rather than on inductive reasoning.
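The prompting setup the abstract describes (zero-shot and few-shot prompts that ask an LLM to reconstruct the implied assumption linking a message to a misogynistic reading) can be sketched roughly as follows. This is a minimal illustration, not the authors' actual prompts: the template wording and the few-shot example are invented here.

```python
# Hedged sketch of argumentation-theory-driven prompt construction:
# the goal is to elicit the implicit premise (the "missing reasoning link")
# that connects a message to a misogynistic conclusion.

ZERO_SHOT_TEMPLATE = (
    "A message can convey misogyny through an unstated assumption.\n"
    'Message: "{message}"\n'
    "State the implicit premise that would link this message to a "
    "misogynistic conclusion, or answer 'none' if there is no such premise."
)

# A single invented demonstration for the few-shot setting.
FEW_SHOT_EXAMPLE = (
    'Message: "Women should stay in the kitchen."\n'
    "Implicit premise: Women are only suited for domestic roles.\n\n"
)

def build_prompt(message: str, few_shot: bool = False) -> str:
    """Assemble a zero-shot or few-shot prompt for implied-reasoning generation."""
    prompt = ZERO_SHOT_TEMPLATE.format(message=message)
    if few_shot:
        prompt = FEW_SHOT_EXAMPLE + prompt
    return prompt

print(build_prompt("She only got the job because of how she looks.", few_shot=True))
```

The generated text would then be compared against gold implied assumptions; chain-of-thought or augmented-knowledge variants would extend the template with intermediate reasoning steps or external stereotype knowledge.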
Related papers
- Going Whole Hog: A Philosophical Defense of AI Cognition [0.0]
We argue against prevailing methodologies in AI philosophy, rejecting starting points based on low-level computational details.
We employ 'Holistic Network Assumptions' to argue for the full suite of cognitive states.
We conclude by speculating on the possibility of LLMs possessing 'alien' contents beyond human conceptual schemes.
arXiv Detail & Related papers (2025-04-18T11:36:25Z) - Adaptable Moral Stances of Large Language Models on Sexist Content: Implications for Society and Gender Discourse [17.084339235658085]
We show that all eight models produce comprehensible and contextually relevant text.
Based on our observations, we caution against the potential misuse of LLMs to justify sexist language.
arXiv Detail & Related papers (2024-09-30T19:27:04Z) - A multitask learning framework for leveraging subjectivity of annotators to identify misogyny [47.175010006458436]
We propose a multitask learning approach to enhance the performance of the misogyny identification systems.
We incorporated diverse perspectives from annotators in our model design, considering gender and age across six profile groups.
This research advances content moderation and highlights the importance of embracing diverse perspectives to build effective online moderation systems.
arXiv Detail & Related papers (2024-06-22T15:06:08Z) - LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But can they really "reason" over natural language?
This question has received significant research attention, and many reasoning skills, such as commonsense, numerical, and qualitative reasoning, have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z) - PejorativITy: Disambiguating Pejorative Epithets to Improve Misogyny Detection in Italian Tweets [11.224028161937296]
We present PejorativITy, a novel corpus of 1,200 manually annotated Italian tweets for pejorative language at the word level and misogyny at the sentence level.
We evaluate the impact of injecting information about disambiguated words into a model targeting misogyny detection.
arXiv Detail & Related papers (2024-04-03T12:24:48Z) - Exploratory Data Analysis on Code-mixed Misogynistic Comments [0.0]
We present a novel dataset of YouTube comments in code-mixed Hinglish.
These comments have been weakly labelled as 'Misogynistic' and 'Non-misogynistic'.
arXiv Detail & Related papers (2024-03-09T23:21:17Z) - Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting [87.30837365008931]
Large language models (LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate incremental predictions even on unscalable tasks.
This study examines the impact of LLMs' step-by-step predictions on gender bias in unscalable tasks.
arXiv Detail & Related papers (2024-01-28T06:50:10Z) - Measuring Misogyny in Natural Language Generation: Preliminary Results from a Case Study on two Reddit Communities [7.499634046186994]
We consider the challenge of measuring misogyny in natural language generation.
We use data from two well-characterised 'Incel' communities on Reddit.
arXiv Detail & Related papers (2023-12-06T07:38:46Z) - Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models [107.07851578154242]
Language models (LMs) have strong multi-step (i.e., procedural) reasoning capabilities.
It is unclear whether LMs perform tasks by cheating with answers memorized from the pretraining corpus or via a genuine multi-step reasoning mechanism.
We show that MechanisticProbe is able to detect the information of the reasoning tree from the model's attentions for most examples.
arXiv Detail & Related papers (2023-10-23T01:47:29Z) - Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z) - Did they answer? Subjective acts and intents in conversational discourse [48.63528550837949]
We present the first discourse dataset with multiple and subjective interpretations of English conversation.
We show that disagreements are nuanced and require a deeper understanding of the different contextual factors.
arXiv Detail & Related papers (2021-04-09T16:34:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.