The Impact of Persona-based Political Perspectives on Hateful Content Detection
- URL: http://arxiv.org/abs/2502.00385v1
- Date: Sat, 01 Feb 2025 09:53:17 GMT
- Title: The Impact of Persona-based Political Perspectives on Hateful Content Detection
- Authors: Stefano Civelli, Pietro Bernardelle, Gianluca Demartini
- Abstract summary: Pretraining language models with politically diverse content requires computational resources that are often inaccessible to many researchers and organizations.
Recent work has established that persona-based prompting can introduce political diversity in model outputs without additional training.
We investigate whether such prompting strategies can achieve results comparable to political pretraining for downstream tasks.
- Score: 4.04666623219944
- Abstract: While pretraining language models with politically diverse content has been shown to improve downstream task fairness, such approaches require significant computational resources often inaccessible to many researchers and organizations. Recent work has established that persona-based prompting can introduce political diversity in model outputs without additional training. However, it remains unclear whether such prompting strategies can achieve results comparable to political pretraining for downstream tasks. We investigate this question using persona-based prompting strategies in multimodal hate-speech detection tasks, specifically focusing on hate speech in memes. Our analysis reveals that when mapping personas onto a political compass and measuring persona agreement, inherent political positioning has surprisingly little correlation with classification decisions. Notably, this lack of correlation persists even when personas are explicitly injected with stronger ideological descriptors. Our findings suggest that while LLMs can exhibit political biases in their responses to direct political questions, these biases may have less impact on practical classification tasks than previously assumed. This raises important questions about the necessity of computationally expensive political pretraining for achieving fair performance in downstream tasks.
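To make the setup concrete, the core procedure described in the abstract — prompting a model with politically diverse personas, mapping each persona onto a political compass, and checking whether compass position correlates with hate-speech verdicts — can be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' implementation: the `query_llm` helper, the persona descriptions, their compass scores, and the text-only treatment of memes (the paper works with multimodal memes) are all assumptions made for the example.

```python
# Sketch of persona-based prompting for hate-speech classification and of
# correlating political-compass position with classification decisions.
# Hypothetical: query_llm(), the personas, and their compass scores are
# illustrative placeholders, not the paper's actual setup.
from scipy.stats import pearsonr


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any instruction-tuned LLM."""
    raise NotImplementedError("plug in your preferred model API here")


PERSONAS = {
    # persona description -> assumed position on one political-compass axis
    "a progressive community organizer": -6.0,
    "a centrist policy analyst": 0.0,
    "a conservative small-business owner": 5.5,
}


def classify_meme(persona: str, meme_caption: str) -> int:
    """Ask the model, speaking as `persona`, whether a meme is hateful (1) or not (0)."""
    prompt = (
        f"You are {persona}. "
        f'Meme text: "{meme_caption}"\n'
        "Is this meme hateful? Answer with 'yes' or 'no' only."
    )
    answer = query_llm(prompt).strip().lower()
    return 1 if answer.startswith("yes") else 0


def compass_vs_decisions(meme_captions: list[str]):
    """Correlate each persona's compass position with its rate of 'hateful' verdicts."""
    positions, hateful_rates = [], []
    for persona, position in PERSONAS.items():
        decisions = [classify_meme(persona, caption) for caption in meme_captions]
        positions.append(position)
        hateful_rates.append(sum(decisions) / len(decisions))
    r, p_value = pearsonr(positions, hateful_rates)
    return r, p_value
```

Under the paper's findings, a correlation computed this way would be expected to be close to zero: a persona's political positioning has little bearing on its classification decisions, even when the persona descriptions carry stronger ideological descriptors.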
Related papers
- Few-shot Policy (de)composition in Conversational Question Answering [54.259440408606515]
We propose a neuro-symbolic framework to detect policy compliance using large language models (LLMs) in a few-shot setting.
We show that our approach soundly reasons about policy compliance conversations by extracting sub-questions to be answered, assigning truth values from contextual information, and explicitly producing a set of logic statements from the given policies.
We apply this approach to ShARC, a popular policy compliance detection (PCD) and conversational machine reading benchmark, and show competitive performance with no task-specific finetuning.
arXiv Detail & Related papers (2025-01-20T08:40:15Z) - Political-LLM: Large Language Models in Political Science [159.95299889946637]
Large language models (LLMs) have been widely adopted in political science tasks.
Political-LLM aims to advance the comprehensive understanding of integrating LLMs into computational political science.
arXiv Detail & Related papers (2024-12-09T08:47:50Z) - On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - Generalizing Political Leaning Inference to Multi-Party Systems: Insights from the UK Political Landscape [10.798766768721741]
An ability to infer the political leaning of social media users can help in gathering opinion polls.
We release a dataset comprising users labelled by their political leaning as well as interactions with one another.
We show that interactions in the form of retweets between users can be a very powerful feature to enable political leaning inference.
arXiv Detail & Related papers (2023-12-04T09:02:17Z) - Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Quantitative Analysis of Forecasting Models: In the Aspect of Online Political Bias [0.0]
We propose an approach to classify social media posts into five distinct political leaning categories.
Our approach involves utilizing existing time series forecasting models on two social media datasets with different political ideologies.
arXiv Detail & Related papers (2023-09-11T16:17:24Z) - Diverse Perspectives Can Mitigate Political Bias in Crowdsourced Content Moderation [5.470971742987594]
Social media companies have grappled with defining and enforcing content moderation policies surrounding political content on their platforms.
It is unclear how well human labelers perform at this task, or whether biases affect this process.
We experimentally evaluate the feasibility and practicality of using crowd workers to identify political content.
arXiv Detail & Related papers (2023-05-23T20:10:43Z) - A Machine Learning Pipeline to Examine Political Bias with Congressional Speeches [0.3062386594262859]
We present machine learning approaches to study political bias in two ideologically diverse social media forums: Gab and Twitter.
Our proposed methods exploit the use of transcripts collected from political speeches in US congress to label the data.
We also present a machine learning approach that combines features from cascades and text to forecast a cascade's political bias with an accuracy of about 85%.
arXiv Detail & Related papers (2021-09-18T21:15:21Z)