When Neutral Summaries are not that Neutral: Quantifying Political Neutrality in LLM-Generated News Summaries
- URL: http://arxiv.org/abs/2410.09978v1
- Date: Sun, 13 Oct 2024 19:44:39 GMT
- Title: When Neutral Summaries are not that Neutral: Quantifying Political Neutrality in LLM-Generated News Summaries
- Authors: Supriti Vijay, Aman Priyanshu, Ashique R. KhudaBukhsh
- Abstract summary: This study presents a fresh perspective on quantifying the political neutrality of LLMs.
We consider five pressing issues in current US politics: abortion, gun control/rights, healthcare, immigration, and LGBTQ+ rights.
Our study reveals a consistent trend towards pro-Democratic biases in several well-known LLMs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In an era where societal narratives are increasingly shaped by algorithmic curation, investigating the political neutrality of LLMs is an important research question. This study presents a fresh perspective on quantifying the political neutrality of LLMs through the lens of abstractive text summarization of polarizing news articles. We consider five pressing issues in current US politics: abortion, gun control/rights, healthcare, immigration, and LGBTQ+ rights. Via a substantial corpus of 20,344 news articles, our study reveals a consistent trend towards pro-Democratic biases in several well-known LLMs, with gun control and healthcare exhibiting the most pronounced biases (max polarization differences of -9.49% and -6.14%, respectively). Further analysis uncovers a strong convergence in the vocabulary of the LLM outputs for these divisive topics (55% overlap for Democrat-leaning representations, 52% for Republican). With a consequential US election only months away, we consider our findings important.
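The vocabulary-overlap figures above (55% / 52%) are reported without a definition of the metric in this listing. Below is a minimal sketch of one plausible reading, a Jaccard-style overlap over distinct content words across different LLMs' summaries for the same partisan leaning; the stopword list, the toy summaries, and the function names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumption, not the paper's released code): a
# Jaccard-style vocabulary overlap over distinct content words.

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "for", "is", "are"}

def vocabulary(summaries):
    """Collect distinct content words across a list of summaries."""
    words = set()
    for text in summaries:
        for token in text.lower().split():
            token = token.strip(".,;:!?\"'()")
            if token and token not in STOPWORDS:
                words.add(token)
    return words

def overlap_pct(vocab_a, vocab_b):
    """Jaccard overlap between two vocabularies, as a percentage."""
    union = vocab_a | vocab_b
    if not union:
        return 0.0
    return 100.0 * len(vocab_a & vocab_b) / len(union)

# Toy usage: summaries of the same articles produced by two different LLMs.
llm_a = ["Expanded healthcare coverage protects vulnerable families."]
llm_b = ["Coverage expansion protects families and lowers healthcare costs."]
print(f"Vocabulary overlap: {overlap_pct(vocabulary(llm_a), vocabulary(llm_b)):.1f}%")
```

Whether the paper normalizes by the union, the smaller vocabulary, or something else is not stated here, so the reported 55%/52% figures should be read against the paper's own definition.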
Related papers
- Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models [72.89977583150748]
We propose a novel methodology to assess how Large Language Models align with broader geopolitical value systems.
We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin.
arXiv Detail & Related papers (2025-06-15T07:52:07Z)
- Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models [52.00270888041742]
We introduce a novel dataset with neutral event descriptions and contrasting viewpoints from different countries.
Our findings show significant geopolitical biases, with models favoring specific national narratives.
Simple debiasing prompts had a limited effect on reducing these biases.
arXiv Detail & Related papers (2025-06-07T10:45:17Z)
- Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters [0.571853823214391]
Large language models (LLMs) are used by many as a primary source of information on various topics.
LLMs frequently make factual errors, fabricate data (hallucinations), or present biases, exposing users to misinformation and influencing their opinions.
We quantify the political bias of popular LLMs in the context of the recent German Bundestag election, using the score produced by the Wahl-O-Mat.
arXiv Detail & Related papers (2025-05-07T13:18:41Z)
- Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts [1.9782163071901029]
Large Language Models (LLMs) are a transformational technology, fundamentally changing how people obtain information and interact with the world.
We show that LLMs' apparently small overall partisan preference is the net result of offsetting extreme views on specific topics.
In a randomized experiment, we show that LLMs can turn these preferences into political persuasion even in information-seeking contexts.
arXiv Detail & Related papers (2025-05-07T06:53:59Z)
- Through the LLM Looking Glass: A Socratic Self-Assessment of Donkeys, Elephants, and Markets [42.55423041662188]
The study aims to directly measure the models' biases rather than relying on external interpretations.
Our results reveal a consistent preference for Democratic over Republican positions across all models.
Biases vary among Western LLMs, while models developed in China lean more strongly toward socialism.
arXiv Detail & Related papers (2025-03-20T19:40:40Z)
- Unpacking Political Bias in Large Language Models: A Cross-Model Comparison on U.S. Politics [6.253258189994455]
Political bias, as a universal phenomenon in human society, may be transferred to Large Language Models.
Political biases evolve with model scale and release date, and are also influenced by the region in which an LLM was developed.
arXiv Detail & Related papers (2024-12-21T19:42:40Z)
- Hidden Persuaders: LLMs' Political Leaning and Their Influence on Voters [42.80511959871216]
We first demonstrate that 18 open- and closed-weight LLMs prefer a Democratic nominee over a Republican nominee.
We show how this leaning towards the Democratic nominee becomes more pronounced in instruction-tuned models.
We further explore the potential impact of LLMs on voter choice by conducting an experiment with 935 U.S. registered voters.
arXiv Detail & Related papers (2024-10-31T17:51:00Z)
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z)
- Measuring Political Bias in Large Language Models: What Is Said and How It Is Said [46.1845409187583]
We propose to measure political bias in LLMs by analyzing both the content and the style of the text they generate about political issues.
Our proposed measure covers political issues such as reproductive rights and climate change, capturing bias in both content (the substance of the generation) and style (the lexical polarity).
arXiv Detail & Related papers (2024-03-27T18:22:48Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- Beyond prompt brittleness: Evaluating the reliability and consistency of political worldviews in LLMs [13.036825846417006]
We propose a series of tests to assess the reliability and consistency of large language models' stances on political statements.
We study models ranging in size from 7B to 70B parameters and find that their reliability increases with parameter count.
Larger models show overall stronger alignment with left-leaning parties but differ among policy programs.
arXiv Detail & Related papers (2024-02-27T16:19:37Z)
- The Political Preferences of LLMs [0.0]
I administer 11 political orientation tests, designed to identify the political preferences of the test taker, to 24 state-of-the-art conversational LLMs.
Most conversational LLMs generate responses that are diagnosed by most political test instruments as manifesting preferences for left-of-center viewpoints.
I demonstrate that LLMs can be steered towards specific locations in the political spectrum through Supervised Fine-Tuning.
arXiv Detail & Related papers (2024-02-02T02:43:10Z)
- Whose Opinions Do Language Models Reflect? [88.35520051971538]
We investigate the opinions reflected by language models (LMs) by leveraging high-quality public opinion polls and their associated human responses.
We find substantial misalignment between the views reflected by current LMs and those of US demographic groups.
Our analysis confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs.
arXiv Detail & Related papers (2023-03-30T17:17:08Z)
- Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S. News Headlines [63.52264764099532]
We use a large dataset of 1.8 million news headlines from major U.S. media outlets spanning from 2014 to 2022.
We quantify the fine-grained thematic discrepancy related to four prominent topics - domestic politics, economic issues, social issues, and foreign affairs.
Our findings indicate that on domestic politics and social issues, the discrepancy can be attributed to a certain degree of media bias.
arXiv Detail & Related papers (2023-03-28T03:31:37Z)
- NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task: generating a neutral summary from multiple news headlines spanning the political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.