Inducing Political Bias Allows Language Models Anticipate Partisan
Reactions to Controversies
- URL: http://arxiv.org/abs/2311.09687v1
- Date: Thu, 16 Nov 2023 08:57:53 GMT
- Title: Inducing Political Bias Allows Language Models Anticipate Partisan
Reactions to Controversies
- Authors: Zihao He, Siyi Guo, Ashwin Rao, Kristina Lerman
- Abstract summary: This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
- Score: 5.958974943807783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms are rife with politically charged discussions.
Therefore, accurately deciphering and predicting partisan biases using Large
Language Models (LLMs) is increasingly critical. In this study, we address the
challenge of understanding political bias in digitized discourse using LLMs.
While traditional approaches often rely on finetuning separate models for each
political faction, our work innovates by employing a singular,
instruction-tuned LLM to reflect a spectrum of political ideologies. We present
a comprehensive analytical framework, consisting of Partisan Bias Divergence
Assessment and Partisan Class Tendency Prediction, to evaluate the model's
alignment with real-world political ideologies in terms of stances, emotions,
and moral foundations. Our findings reveal the model's effectiveness in
capturing emotional and moral nuances, albeit with some challenges in stance
detection, highlighting the intricacies and potential for refinement in NLP
tools for politically sensitive contexts. This research contributes
significantly to the field by demonstrating the feasibility and importance of
nuanced political understanding in LLMs, particularly for applications
requiring acute awareness of political bias.
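The abstract does not specify how Partisan Bias Divergence Assessment is computed. As a purely hypothetical illustration of the general idea, one might compare the label distribution produced by a partisan-persona-prompted model against the distribution observed in real partisan annotations, using a symmetric measure such as Jensen-Shannon divergence (the labels and numbers below are invented for the sketch, not taken from the paper):

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    def kl(a, b):
        # Kullback-Leibler divergence; terms with zero mass contribute nothing.
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]  # mixture distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical stance-label distributions (support / neutral / oppose)
model_left = [0.62, 0.23, 0.15]  # model prompted with a left-leaning persona
real_left  = [0.58, 0.27, 0.15]  # stances annotated in real left-leaning posts

print(f"JS divergence: {js_divergence(model_left, real_left):.4f}")
```

A low divergence would indicate that the persona-conditioned model mirrors the real partisan distribution; identical distributions give a divergence of exactly 0, and fully disjoint ones give 1.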
Related papers
- Mapping and Influencing the Political Ideology of Large Language Models using Synthetic Personas [5.237116285113809]
We map the political distribution of persona-based prompted large language models using the Political Compass Test (PCT).
Our experiments reveal that synthetic personas predominantly cluster in the left-libertarian quadrant, with models demonstrating varying degrees of responsiveness when prompted with explicit ideological descriptors.
While all models demonstrate significant shifts towards right-authoritarian positions, they exhibit more limited shifts towards left-libertarian positions, suggesting an asymmetric response to ideological manipulation that may reflect inherent biases in model training.
arXiv Detail & Related papers (2024-12-19T13:36:18Z) - Political-LLM: Large Language Models in Political Science [159.95299889946637]
Large language models (LLMs) have been widely adopted in political science tasks.
Political-LLM aims to advance the comprehensive understanding of integrating LLMs into computational political science.
arXiv Detail & Related papers (2024-12-09T08:47:50Z) - Large Language Models Reflect the Ideology of their Creators [71.65505524599888]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
This paper shows that the ideological stance of an LLM appears to reflect the worldview of its creators.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political alignment of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Examining Political Rhetoric with Epistemic Stance Detection [13.829628375546568]
We develop a simple RoBERTa-based model for multi-source stance predictions that outperforms more complex state-of-the-art modeling.
We demonstrate its novel application to political science by conducting a large-scale analysis of the Mass Market Manifestos corpus of U.S. political opinion books.
arXiv Detail & Related papers (2022-12-29T23:47:14Z) - PAR: Political Actor Representation Learning with Social Context and
Expert Knowledge [45.215862050840116]
We propose PAR, a Political Actor Representation learning framework.
We retrieve and extract factual statements about legislators to leverage social context information.
We then construct a heterogeneous information network to incorporate social context and use relational graph neural networks to learn legislator representations.
arXiv Detail & Related papers (2022-10-15T19:28:06Z) - Political Ideology and Polarization of Policy Positions: A
Multi-dimensional Approach [19.435030285532854]
We study the ideology of the policy under discussion teasing apart the nuanced co-existence of stance and ideology.
Aligned with the theoretical accounts in political science, we treat ideology as a multi-dimensional construct.
We showcase that this framework enables quantitative analysis of polarization, a temporal, multifaceted measure of ideological distance.
arXiv Detail & Related papers (2021-06-28T04:03:04Z)
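The last entry treats polarization as a temporal, multifaceted measure of ideological distance across several dimensions. A toy sketch of that idea (the dimension names, years, and values are hypothetical, not from the paper) could track the distance between two groups' mean ideology vectors over time:

```python
import math

def polarization(group_a, group_b):
    """Euclidean distance between two groups' mean positions across
    ideology dimensions (e.g. economic, social, foreign policy)."""
    return math.dist(group_a, group_b)

# Hypothetical per-year mean ideology vectors for two parties,
# each over three ideology dimensions.
timeline = {
    2000: ([0.2, 0.1, 0.3], [0.4, 0.3, 0.4]),
    2010: ([0.1, 0.0, 0.2], [0.6, 0.5, 0.6]),
    2020: ([0.0, -0.1, 0.1], [0.8, 0.7, 0.7]),
}

for year, (a, b) in sorted(timeline.items()):
    print(year, round(polarization(a, b), 3))
```

Rising distance over the years would register as increasing polarization; the multi-dimensional construct lets the increase be attributed to specific dimensions rather than a single left-right axis.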
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.