Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters
- URL: http://arxiv.org/abs/2505.04393v1
- Date: Wed, 07 May 2025 13:18:41 GMT
- Title: Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters
- Authors: David Exler, Mark Schutera, Markus Reischl, Luca Rettenberger
- Abstract summary: Large language models (LLMs) are predominantly used by many as a primary source of information for various topics. LLMs frequently make factual errors, fabricate data (hallucinations), or present biases, exposing users to misinformation and influencing opinions. We quantify the political bias of popular LLMs in the context of the recent vote of the German Bundestag using the score produced by the Wahl-O-Mat.
- Score: 0.571853823214391
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing prevalence of artificial intelligence, careful evaluation of inherent biases needs to be conducted to form the basis for alleviating the effects these predispositions can have on users. Large language models (LLMs) are predominantly used by many as a primary source of information for various topics. LLMs frequently make factual errors, fabricate data (hallucinations), or present biases, exposing users to misinformation and influencing opinions. Educating users on their risks is key to responsible use, as bias, unlike hallucinations, cannot be caught through data verification. We quantify the political bias of popular LLMs in the context of the recent vote of the German Bundestag using the score produced by the Wahl-O-Mat. This metric measures the alignment between an individual's political views and the positions of German political parties. We compare the models' alignment scores to identify factors influencing their political preferences. Doing so, we discover a bias toward left-leaning parties, most dominant in larger LLMs. Also, we find that the language we use to communicate with the models affects their political views. Additionally, we analyze the influence of a model's origin and release date and compare the results to the outcome of the recent vote of the Bundestag. Our results imply that LLMs are prone to exhibiting political bias. Large corporations with the necessary means to develop LLMs, thus, knowingly or unknowingly, have a responsibility to contain these biases, as they can influence each voter's decision-making process and inform public opinion in general and at scale.
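To make the alignment metric concrete, the sketch below (Python, not the authors' code) shows how a Wahl-O-Mat-style agreement score can be computed from an LLM's answers to the theses and a party's published positions. The scoring rule (2 points for an exact match, 1 point when exactly one side is neutral, 0 otherwise) follows the commonly described Wahl-O-Mat calculation; the optional double-weighting of theses is omitted, and the example position vectors are purely illustrative.

```python
# Minimal sketch of a Wahl-O-Mat-style agreement score (assumption: standard
# scoring rule, no double-weighted theses). Positions are coded as
# -1 (disagree), 0 (neutral), +1 (agree).

from typing import Sequence


def wahlomat_alignment(model_positions: Sequence[int],
                       party_positions: Sequence[int]) -> float:
    """Return the alignment (0..100 %) between two position vectors."""
    if len(model_positions) != len(party_positions):
        raise ValueError("Both position vectors must cover the same theses.")
    points = 0
    for m, p in zip(model_positions, party_positions):
        distance = abs(m - p)      # 0 = same position, 1 = one side neutral, 2 = opposite
        points += 2 - distance     # 2, 1, or 0 points per thesis
    return 100.0 * points / (2 * len(model_positions))


# Hypothetical usage: compare one model's answers against two parties
# (five example theses only; the real questionnaire has more).
model_answers = [1, 0, -1, 1, 1]
party_a = [1, 1, -1, 0, 1]
party_b = [-1, -1, 1, 0, -1]
print(f"Party A: {wahlomat_alignment(model_answers, party_a):.1f} %")
print(f"Party B: {wahlomat_alignment(model_answers, party_b):.1f} %")
```

Under this rule, the comparison described in the abstract reduces to computing one such score per (model, party) pair and inspecting how the scores for left-leaning parties change with a model's parameter count, prompting language, origin, and release date.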
Related papers
- Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models [72.89977583150748]
We propose a novel methodology to assess how Large Language Models align with broader geopolitical value systems. We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin.
arXiv Detail & Related papers (2025-06-15T07:52:07Z)
- Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models [52.00270888041742]
We introduce a novel dataset with neutral event descriptions and contrasting viewpoints from different countries. Our findings show significant geopolitical biases, with models favoring specific national narratives. Simple debiasing prompts had a limited effect on reducing these biases.
arXiv Detail & Related papers (2025-06-07T10:45:17Z)
- Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts [1.9782163071901029]
Large Language Models (LLMs) are a transformational technology, fundamentally changing how people obtain information and interact with the world. We show that LLMs' apparently small overall partisan preference is the net result of offsetting extreme views on specific topics. In a randomized experiment, we show that LLMs can translate their preferences into political persuasion even in information-seeking contexts.
arXiv Detail & Related papers (2025-05-07T06:53:59Z)
- Through the LLM Looking Glass: A Socratic Probing of Donkeys, Elephants, and Markets [42.55423041662188]
The study aims to directly measure the models' biases rather than relying on external interpretations. Our results reveal a consistent preference for Democratic over Republican positions across all models. In economic topics, biases vary among Western LLMs, while those developed in China lean more strongly toward socialism.
arXiv Detail & Related papers (2025-03-20T19:40:40Z)
- Unpacking Political Bias in Large Language Models: A Cross-Model Comparison on U.S. Politics [6.253258189994455]
Political bias, as a universal phenomenon in human society, may be transferred to Large Language Models. Political biases evolve with model scale and release date, and are also influenced by the regional origin of the LLMs.
arXiv Detail & Related papers (2024-12-21T19:42:40Z)
- Large Language Models Reflect the Ideology of their Creators [71.65505524599888]
Large language models (LLMs) are trained on vast amounts of data to generate natural language. This paper shows that the ideological stance of an LLM appears to reflect the worldview of its creators.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in large language models (LLMs) on political opinions and decision-making. We found that participants exposed to partisan-biased models were significantly more likely to adopt opinions and make decisions which matched the LLM's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- Bias in LLMs as Annotators: The Effect of Party Cues on Labelling Decision by Large Language Models [0.0]
We test similar biases in Large Language Models (LLMs) as annotators.
Unlike humans, who are only biased when faced with statements from extreme parties, LLMs exhibit significant bias even when prompted with statements from center-left and center-right parties.
arXiv Detail & Related papers (2024-08-28T16:05:20Z)
- GermanPartiesQA: Benchmarking Commercial Large Language Models for Political Bias and Sycophancy [20.06753067241866]
We evaluate and compare the alignment of six LLMs by OpenAI, Anthropic, and Cohere with German party positions.
We conduct a prompt experiment using the benchmark together with sociodemographic data of leading German parliamentarians.
arXiv Detail & Related papers (2024-07-25T13:04:25Z)
- Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z)
- Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.