A Cautionary Tale About "Neutrally" Informative AI Tools Ahead of the 2025 Federal Elections in Germany
- URL: http://arxiv.org/abs/2502.15568v2
- Date: Mon, 07 Apr 2025 20:52:04 GMT
- Title: A Cautionary Tale About "Neutrally" Informative AI Tools Ahead of the 2025 Federal Elections in Germany
- Authors: Ina Dormuth, Sven Franke, Marlies Hafer, Tim Katzke, Alexander Marx, Emmanuel Müller, Daniel Neider, Markus Pauly, Jérôme Rutinowski,
- Abstract summary: We examine the reliability of AI-based Voting Advice Applications (VAAs) and large language models (LLMs) in providing objective political information. Our analysis is based upon a comparison with party responses to 38 statements of the Wahl-O-Mat.
- Score: 41.972629376586035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we examine the reliability of AI-based Voting Advice Applications (VAAs) and large language models (LLMs) in providing objective political information. Our analysis is based upon a comparison with party responses to 38 statements of the Wahl-O-Mat, a well-established German online tool that helps inform voters by comparing their views with political party positions. For the LLMs, we identify significant biases. They exhibit a strong alignment (over 75% on average) with left-wing parties and a substantially lower alignment with center-right parties (below 50%) and right-wing parties (around 30%). Furthermore, for the VAAs, intended to objectively inform voters, we find substantial deviations from the parties' stated positions in the Wahl-O-Mat: while one VAA deviated in 25% of cases, another deviated in more than 50% of cases. For the latter, we even observed that simple prompt injections led to severe hallucinations, including fabricated claims of non-existent ties between political parties and right-wing extremism.
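The headline numbers above are simple agreement rates. The sketch below is a minimal reconstruction of that metric, assuming answers are encoded as agree/neutral/disagree; the parties, statements, and answers are invented placeholders, not the study's data or pipeline.

```python
# Minimal sketch of the agreement metric described above, assuming answers
# are encoded as +1 (agree), 0 (neutral), -1 (disagree). All names and
# numbers are illustrative placeholders, not the study's data.
from typing import Dict, List

def alignment(model_answers: List[int], party_answers: List[int]) -> float:
    """Fraction of statements on which the model and the party answer identically."""
    assert len(model_answers) == len(party_answers)
    matches = sum(m == p for m, p in zip(model_answers, party_answers))
    return matches / len(party_answers)

# Hypothetical positions on three statements (the real Wahl-O-Mat uses 38).
party_positions: Dict[str, List[int]] = {
    "Party A (left-wing)": [1, 1, -1],
    "Party B (center-right)": [-1, 0, 1],
}
llm_answers = [1, 1, 1]

for party, positions in party_positions.items():
    print(f"{party}: {alignment(llm_answers, positions):.0%} agreement")
```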
Related papers
- Recommender Systems for Democracy: Toward Adversarial Robustness in Voting Advice Applications [18.95453617434051]
Voting advice applications (VAAs) help millions of voters understand which political parties or candidates best align with their views. This paper explores the potential risks these applications pose to the democratic process when targeted by adversarial entities.
arXiv Detail & Related papers (2025-05-19T16:38:06Z)
- Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters [0.571853823214391]
Large language models (LLMs) are increasingly used as a primary source of information on various topics. LLMs frequently make factual errors, fabricate data (hallucinations), or present biases, exposing users to misinformation and influencing opinions. We quantify the political bias of popular LLMs in the context of the recent German Bundestag election using the score produced by the Wahl-O-Mat.
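The paper's central claim, that bias grows with parameter count, reduces to a correlation between model size and an alignment score. A minimal sketch of that check, with invented placeholder numbers rather than the paper's measurements:

```python
# Illustrative check of a "bias grows with size" claim: Pearson correlation
# between parameter count and a Wahl-O-Mat-style left-alignment score.
# All numbers are invented placeholders, not the paper's measurements.
from statistics import correlation  # available since Python 3.10

params_billions = [7, 13, 70, 180]          # hypothetical model sizes
left_alignment = [0.55, 0.61, 0.72, 0.78]   # hypothetical alignment scores

r = correlation(params_billions, left_alignment)
print(f"Pearson r between model size and left-alignment: {r:.2f}")
```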
arXiv Detail & Related papers (2025-05-07T13:18:41Z)
- Leveraging AI and Sentiment Analysis for Forecasting Election Outcomes in Mauritius [0.0]
This study explores the use of AI-driven sentiment analysis as a novel tool for forecasting election outcomes, focusing on Mauritius' 2024 elections.
We analyze media sentiment toward the two main political parties, L'Alliance Lepep and L'Alliance Du Changement, by classifying news articles from prominent Mauritian media outlets as positive, negative, or neutral.
Findings indicate that positive media sentiment strongly correlates with projected electoral gains, underscoring the role of media in shaping public perception.
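The aggregation step behind such a forecast can be sketched compactly. The tiny lexicon classifier and headlines below are illustrative stand-ins; the study used real Mauritian news coverage and a proper sentiment model.

```python
# Toy sketch of the aggregation step: classify each article's sentiment
# toward a party, then compare average sentiment across parties. The tiny
# lexicon and headlines are stand-ins for a real sentiment model and corpus.
from collections import defaultdict

POSITIVE = {"wins", "praised", "growth", "support"}
NEGATIVE = {"scandal", "criticized", "decline", "protest"}

def classify(text: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral)."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

articles = [  # (party, headline) pairs, invented for illustration
    ("L'Alliance Lepep", "Leader praised for economic growth plan"),
    ("L'Alliance Du Changement", "Party criticized over funding scandal"),
]

sentiment = defaultdict(list)
for party, headline in articles:
    sentiment[party].append(classify(headline))

for party, scores in sentiment.items():
    print(f"{party}: mean sentiment {sum(scores) / len(scores):+.2f}")
```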
arXiv Detail & Related papers (2024-10-28T09:21:15Z)
- When Neutral Summaries are not that Neutral: Quantifying Political Neutrality in LLM-Generated News Summaries [0.0]
This study presents a fresh perspective on quantifying the political neutrality of LLMs.
We consider five pressing issues in current US politics: abortion, gun control/rights, healthcare, immigration, and LGBTQ+ rights.
Our study reveals a consistent trend towards pro-Democratic biases in several well-known LLMs.
arXiv Detail & Related papers (2024-10-13T19:44:39Z)
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- Who Would Chatbots Vote For? Political Preferences of ChatGPT and Gemini in the 2024 European Union Elections [0.0]
The research evaluated how these generative artificial intelligence (AI) systems rated the political parties represented in the European Parliament across the 27 EU Member States.
The results revealed a stark contrast: while Gemini mostly refused to answer political questions, ChatGPT provided consistent ratings.
The study identified key factors influencing the ratings, including attitudes toward European integration and perceptions of democratic values.
arXiv Detail & Related papers (2024-09-01T13:40:13Z)
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- Investigating LLMs as Voting Assistants via Contextual Augmentation: A Case Study on the European Parliament Elections 2024 [22.471701390730185]
In light of the recent 2024 European Parliament elections, we investigate whether LLMs can be used as Voting Advice Applications (VAAs).
We evaluate MISTRAL and MIXTRAL models, measuring their accuracy in predicting the stance of political parties based on the latest "EU and I" voting assistance questionnaire.
We find that MIXTRAL is highly accurate (82% on average), though with a significant performance disparity across different political groups.
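The reported 82% figure is a per-item stance-matching accuracy, which can also be broken down by political group to surface the disparity. A minimal sketch with invented data:

```python
# Sketch of the accuracy metric: fraction of questionnaire items where the
# model's predicted stance matches the party's declared stance, broken down
# by political group. All triples are invented for illustration.
from collections import defaultdict

# (political_group, predicted_stance, declared_stance)
predictions = [
    ("EPP", "agree", "agree"),
    ("EPP", "disagree", "agree"),
    ("Greens/EFA", "agree", "agree"),
    ("Greens/EFA", "neutral", "neutral"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, predicted, declared in predictions:
    totals[group] += 1
    hits[group] += predicted == declared

for group in totals:
    print(f"{group}: {hits[group] / totals[group]:.0%} accuracy")
```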
arXiv Detail & Related papers (2024-07-11T13:29:28Z)
- Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- Whose Opinions Do Language Models Reflect? [88.35520051971538]
We investigate the opinions reflected by language models (LMs) by leveraging high-quality public opinion polls and their associated human responses.
We find substantial misalignment between the views reflected by current LMs and those of US demographic groups.
Our analysis confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs.
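One simple way to quantify such misalignment is to compare the LM's answer distribution with a poll's answer distribution for the same question. The sketch below uses total variation distance with invented shares; the paper defines its own alignment measure.

```python
# Sketch of one way to quantify opinion misalignment: total variation
# distance between an LM's answer distribution and a poll's answer
# distribution for the same question. Metric choice and numbers are
# illustrative; the paper defines its own alignment measure.
def total_variation(p: list[float], q: list[float]) -> float:
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

poll_shares = [0.48, 0.42, 0.10]  # hypothetical survey shares (support/oppose/unsure)
lm_shares = [0.70, 0.20, 0.10]    # hypothetical shares from sampled LM answers

print(f"TV distance: {total_variation(poll_shares, lm_shares):.2f}")  # 0 = identical
```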
arXiv Detail & Related papers (2023-03-30T17:17:08Z)
- Right and left, partisanship predicts (asymmetric) vulnerability to misinformation [71.46564239895892]
We analyze the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news sharing behavior on Twitter.
We find that vulnerability to misinformation is most strongly influenced by partisanship for both left- and right-leaning users.
arXiv Detail & Related papers (2020-10-04T01:36:14Z)