Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
- URL: http://arxiv.org/abs/2403.18932v1
- Date: Wed, 27 Mar 2024 18:22:48 GMT
- Title: Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
- Authors: Yejin Bang, Delong Chen, Nayeon Lee, Pascale Fung
- Abstract summary: We propose to measure political bias in LLMs by analyzing both the content and style of their generated content regarding political issues.
Our proposed measure looks at different political issues such as reproductive rights and climate change, at both the content (the substance of the generation) and the style (the lexical polarity) of such bias.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose to measure political bias in LLMs by analyzing both the content and style of their generated content regarding political issues. Existing benchmarks and measures focus on gender and racial biases. However, political bias exists in LLMs and can lead to polarization and other harms in downstream applications. In order to provide transparency to users, we advocate that there should be fine-grained and explainable measures of political biases generated by LLMs. Our proposed measure looks at different political issues such as reproductive rights and climate change, at both the content (the substance of the generation) and the style (the lexical polarity) of such bias. We measured the political bias in eleven open-sourced LLMs and showed that our proposed framework is easily scalable to other topics and is explainable.
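The abstract's two-part measure (content as the substance of the generation, style as its lexical polarity) can be illustrated with a minimal sketch. The stance markers and polarity lexicon below are tiny illustrative stand-ins, not the paper's actual resources, and the scoring functions are hypothetical simplifications of the framework described above.

```python
# Two-part bias measure sketch: content (stance) and style (lexical polarity).
# Both lexicons are toy examples assumed for illustration only.

STANCE_MARKERS = {
    "support": {"protect", "defend", "guarantee", "expand"},
    "oppose": {"ban", "restrict", "repeal", "limit"},
}

POLARITY_LEXICON = {  # word -> polarity score in [-1, 1]
    "harmful": -0.8, "dangerous": -0.7, "radical": -0.6,
    "essential": 0.7, "vital": 0.8, "fair": 0.5,
}

def content_stance(text: str) -> float:
    """Stance score in [-1, 1]: +1 fully supportive, -1 fully opposing."""
    words = text.lower().split()
    sup = sum(w in STANCE_MARKERS["support"] for w in words)
    opp = sum(w in STANCE_MARKERS["oppose"] for w in words)
    total = sup + opp
    return (sup - opp) / total if total else 0.0

def style_polarity(text: str) -> float:
    """Mean polarity of the words found in the lexicon (0.0 if none match)."""
    hits = [POLARITY_LEXICON[w] for w in text.lower().split()
            if w in POLARITY_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def bias_profile(generation: str) -> dict:
    """Report content and style scores separately, as the measure proposes."""
    return {"content": content_stance(generation),
            "style": style_polarity(generation)}
```

Keeping the two scores separate is the point of the framework: a generation can be neutral in substance yet polarized in word choice, or vice versa.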
Related papers
- Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models [72.89977583150748]
We propose a novel methodology to assess how Large Language Models align with broader geopolitical value systems. We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin.
arXiv Detail & Related papers (2025-06-15T07:52:07Z) - Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models [52.00270888041742]
We introduce a novel dataset with neutral event descriptions and contrasting viewpoints from different countries. Our findings show significant geopolitical biases, with models favoring specific national narratives. Simple debiasing prompts had a limited effect on reducing these biases.
arXiv Detail & Related papers (2025-06-07T10:45:17Z) - Analyzing Political Bias in LLMs via Target-Oriented Sentiment Classification [4.352835414206441]
Political biases encoded by LLMs might have detrimental effects on downstream applications. We propose a new approach leveraging the observation that LLM sentiment predictions vary with the target entity in the same sentence. We insert 1319 demographically and politically diverse politician names into 450 political sentences and predict target-oriented sentiment using seven models in six widely spoken languages.
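The name-swap probe described above can be sketched as follows. `predict_sentiment` is a hypothetical stand-in for a real target-oriented sentiment query to an LLM, and the template sentence is an invented example; only the overall shape of the probe follows the abstract.

```python
# Name-swap probe sketch: insert different politician names into the same
# sentence and compare the model's target-oriented sentiment predictions.
# A spread near zero suggests the target name does not sway the prediction.

TEMPLATE = "{name} announced a new policy on energy prices."

def predict_sentiment(sentence: str, target: str) -> float:
    # Placeholder: a real setup would query an LLM for sentiment toward
    # `target` within `sentence`. Here we return a fixed neutral score.
    return 0.0

def name_swap_spread(names: list) -> float:
    """Max minus min sentiment across name substitutions in one template."""
    scores = [predict_sentiment(TEMPLATE.format(name=n), n) for n in names]
    return max(scores) - min(scores)
```

In the paper's setting, this comparison is repeated over many sentences, names, models, and languages; the sketch shows only the core contrast for a single template.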
arXiv Detail & Related papers (2025-05-26T10:01:24Z) - Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters [0.571853823214391]
Large language models (LLMs) are predominantly used by many as a primary source of information for various topics. LLMs frequently make factual errors, fabricate data (hallucinations), or present biases, exposing users to misinformation and influencing opinions. We quantify the political bias of popular LLMs in the context of the recent vote of the German Bundestag using the score produced by the Wahl-O-Mat.
arXiv Detail & Related papers (2025-05-07T13:18:41Z) - Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts [1.9782163071901029]
Large Language Models (LLMs) are a transformational technology, fundamentally changing how people obtain information and interact with the world. We show that LLMs' apparently small overall partisan preference is the net result of offsetting extreme views on specific topics. In a randomized experiment, we show that LLMs can translate their preferences into political persuasion even in information-seeking contexts.
arXiv Detail & Related papers (2025-05-07T06:53:59Z) - Through the LLM Looking Glass: A Socratic Self-Assessment of Donkeys, Elephants, and Markets [42.55423041662188]
The study aims to directly measure the models' biases rather than relying on external interpretations.
Our results reveal a consistent preference of Democratic over Republican positions across all models.
Biases vary among Western LLMs, while those developed in China lean more strongly toward socialism.
arXiv Detail & Related papers (2025-03-20T19:40:40Z) - Unpacking Political Bias in Large Language Models: A Cross-Model Comparison on U.S. Politics [6.253258189994455]
Political bias, as a universal phenomenon in human society, may be transferred to Large Language Models.
Political biases evolve with model scale and release date, and are also influenced by regional factors of LLMs.
arXiv Detail & Related papers (2024-12-21T19:42:40Z) - Political-LLM: Large Language Models in Political Science [159.95299889946637]
Large language models (LLMs) have been widely adopted in political science tasks.
Political-LLM aims to advance the comprehensive understanding of integrating LLMs into computational political science.
arXiv Detail & Related papers (2024-12-09T08:47:50Z) - Evaluating Implicit Bias in Large Language Models by Attacking From a Psychometric Perspective [66.34066553400108]
We conduct a rigorous evaluation of Large Language Models' implicit bias towards certain groups by attacking them with carefully crafted instructions to elicit biased responses.
We propose three attack approaches, namely Disguise, Deception, and Teaching, and build evaluation datasets for four common bias types on top of them.
arXiv Detail & Related papers (2024-06-20T06:42:08Z) - Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - The Political Preferences of LLMs [0.0]
I administer 11 political orientation tests, designed to identify the political preferences of the test taker, to 24 state-of-the-art conversational LLMs.
Most conversational LLMs generate responses that are diagnosed by most political test instruments as manifesting preferences for left-of-center viewpoints.
I demonstrate that LLMs can be steered towards specific locations in the political spectrum through Supervised Fine-Tuning.
arXiv Detail & Related papers (2024-02-02T02:43:10Z) - Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Diverse Perspectives Can Mitigate Political Bias in Crowdsourced Content Moderation [5.470971742987594]
Social media companies have grappled with defining and enforcing content moderation policies surrounding political content on their platforms.
It is unclear how well human labelers perform at this task, or whether biases affect this process.
We experimentally evaluate the feasibility and practicality of using crowd workers to identify political content.
arXiv Detail & Related papers (2023-05-23T20:10:43Z) - NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task, a neutral summary generation from multiple news headlines of the varying political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z) - Millions of Co-purchases and Reviews Reveal the Spread of Polarization and Lifestyle Politics across Online Markets [68.8204255655161]
We study the pervasiveness of polarization and lifestyle politics over different product segments in a diverse market.
We sample 234.6 million relations among 21.8 million market entities to find product categories that are politically relevant, aligned, and polarized.
Cultural products are 4 times more polarized than any other segment.
arXiv Detail & Related papers (2022-01-17T18:16:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.