Inducing Political Bias Allows Language Models Anticipate Partisan
Reactions to Controversies
- URL: http://arxiv.org/abs/2311.09687v1
- Date: Thu, 16 Nov 2023 08:57:53 GMT
- Title: Inducing Political Bias Allows Language Models Anticipate Partisan
Reactions to Controversies
- Authors: Zihao He, Siyi Guo, Ashwin Rao, Kristina Lerman
- Abstract summary: This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
- Score: 5.958974943807783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms are rife with politically charged discussions.
Therefore, accurately deciphering and predicting partisan biases using Large
Language Models (LLMs) is increasingly critical. In this study, we address the
challenge of understanding political bias in digitized discourse using LLMs.
While traditional approaches often rely on finetuning separate models for each
political faction, our work innovates by employing a singular,
instruction-tuned LLM to reflect a spectrum of political ideologies. We present
a comprehensive analytical framework, consisting of Partisan Bias Divergence
Assessment and Partisan Class Tendency Prediction, to evaluate the model's
alignment with real-world political ideologies in terms of stances, emotions,
and moral foundations. Our findings reveal the model's effectiveness in
capturing emotional and moral nuances, albeit with some challenges in stance
detection, highlighting the intricacies and potential for refinement in NLP
tools for politically sensitive contexts. This research contributes
significantly to the field by demonstrating the feasibility and importance of
nuanced political understanding in LLMs, particularly for applications
requiring acute awareness of political bias.
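The abstract names a Partisan Bias Divergence Assessment but does not specify how divergence is computed. As an illustrative sketch only (the Jensen-Shannon measure, the stance label set, and all distributions below are assumptions for demonstration, not the paper's actual definition), one could compare a partisan-conditioned model's predicted label distribution against an empirical partisan distribution like this:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions.

    Returns a value in [0, 1]; 0 means identical distributions.
    """
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)

    m = [(x + y) / 2 for x, y in zip(p, q)]  # midpoint distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical label distributions over stances {support, neutral, oppose}:
model_left = [0.70, 0.20, 0.10]   # model conditioned on a left-leaning persona
human_left = [0.65, 0.25, 0.10]   # empirical distribution from left-leaning posts

score = js_divergence(model_left, human_left)  # lower = closer alignment
```

The same comparison could be repeated per topic and per dimension (stance, emotion, moral foundation) to obtain an alignment profile, but again, the paper's actual metric may differ.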
Related papers
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z)
- Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z)
- Quantitative Analysis of Forecasting Models: In the Aspect of Online Political Bias [0.0]
We propose an approach to classify social media posts into five distinct political leaning categories.
Our approach involves utilizing existing time series forecasting models on two social media datasets with different political ideologies.
arXiv Detail & Related papers (2023-09-11T16:17:24Z)
- Examining Political Rhetoric with Epistemic Stance Detection [13.829628375546568]
We develop a simple RoBERTa-based model for multi-source stance predictions that outperforms more complex state-of-the-art modeling.
We demonstrate its novel application to political science by conducting a large-scale analysis of the Mass Market Manifestos corpus of U.S. political opinion books.
arXiv Detail & Related papers (2022-12-29T23:47:14Z)
- PAR: Political Actor Representation Learning with Social Context and Expert Knowledge [45.215862050840116]
We propose PAR, a Political Actor Representation learning framework.
We retrieve and extract factual statements about legislators to leverage social context information.
We then construct a heterogeneous information network to incorporate social context and use relational graph neural networks to learn legislator representations.
arXiv Detail & Related papers (2022-10-15T19:28:06Z)
- Political Ideology and Polarization of Policy Positions: A Multi-dimensional Approach [19.435030285532854]
We study the ideology of the policy under discussion, teasing apart the nuanced co-existence of stance and ideology.
Aligned with the theoretical accounts in political science, we treat ideology as a multi-dimensional construct.
We showcase that this framework enables quantitative analysis of polarization, a temporal, multifaceted measure of ideological distance.
arXiv Detail & Related papers (2021-06-28T04:03:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.