Who Would Chatbots Vote For? Political Preferences of ChatGPT and Gemini in the 2024 European Union Elections
- URL: http://arxiv.org/abs/2409.00721v1
- Date: Sun, 1 Sep 2024 13:40:13 GMT
- Title: Who Would Chatbots Vote For? Political Preferences of ChatGPT and Gemini in the 2024 European Union Elections
- Authors: Michael Haman, Milan Školník
- Abstract summary: The research focused on the evaluation of political parties represented in the European Parliament across 27 EU Member States by these generative artificial intelligence (AI) systems.
The results revealed a stark contrast: while Gemini mostly refused to answer political questions, ChatGPT provided consistent ratings.
The study identified key factors influencing the ratings, including attitudes toward European integration and perceptions of democratic values.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study examines the political bias of chatbots powered by large language models, namely ChatGPT and Gemini, in the context of the 2024 European Parliament elections. The research focused on the evaluation of political parties represented in the European Parliament across 27 EU Member States by these generative artificial intelligence (AI) systems. The methodology involved daily data collection through standardized prompts on both platforms. The results revealed a stark contrast: while Gemini mostly refused to answer political questions, ChatGPT provided consistent ratings. The analysis showed a significant bias in ChatGPT in favor of left-wing and centrist parties, with the highest ratings for the Greens/European Free Alliance. In contrast, right-wing parties, particularly the Identity and Democracy group, received the lowest ratings. The study identified key factors influencing the ratings, including attitudes toward European integration and perceptions of democratic values. The findings highlight the need for a critical approach to information provided by generative AI systems in a political context and call for more transparency and regulation in this area.
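The abstract describes a methodology of daily data collection through standardized prompts. A minimal sketch of that workflow is shown below; the actual prompts, rating scale, and API calls used in the study are not given in the abstract, so the template wording, the `query_chatbot` callable, and the 0-10 scale here are illustrative assumptions only.

```python
import re
from datetime import date

# Illustrative prompt template; the study's actual wording is not public
# in the abstract, so this is an assumption, not the authors' prompt.
PROMPT_TEMPLATE = (
    "On a scale from 0 (very negative) to 10 (very positive), "
    "how would you rate the political party '{party}' ({country})? "
    "Answer with a single number."
)

def build_prompt(party: str, country: str) -> str:
    """Create the standardized prompt for one party."""
    return PROMPT_TEMPLATE.format(party=party, country=country)

def parse_rating(response: str):
    """Extract a numeric rating from a chatbot reply, or None on refusal.

    Returning None models the refusal behavior the abstract reports for
    Gemini on political questions.
    """
    match = re.search(r"\b(\d+(?:\.\d+)?)\b", response)
    if match is None:
        return None
    value = float(match.group(1))
    return value if 0 <= value <= 10 else None

def collect_daily(parties, ask):
    """Run one day's collection.

    `ask` is any callable wrapping a chatbot API (hypothetical stand-in
    for the ChatGPT/Gemini clients); it takes a prompt and returns text.
    """
    today = date.today().isoformat()
    return [
        {"date": today, "party": party, "country": country,
         "rating": parse_rating(ask(build_prompt(party, country)))}
        for party, country in parties
    ]
```

Repeating `collect_daily` each day over all parties represented in the European Parliament would yield the kind of longitudinal rating panel the study analyzes.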
Related papers
- From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z)
- Representation Bias in Political Sample Simulations with Large Language Models
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- Llama meets EU: Investigating the European Political Spectrum through the Lens of LLMs
We audit Llama Chat in the context of EU politics to analyze the model's political knowledge and its ability to reason in context.
We adapt (i.e., further fine-tune) Llama Chat on speeches of individual euro-parties from European Parliament debates to reevaluate its political leaning.
arXiv Detail & Related papers (2024-03-20T13:42:57Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- In Generative AI we Trust: Can Chatbots Effectively Verify Political Information?
This article presents a comparative analysis of the ability of two large language model (LLM)-based chatbots, ChatGPT and Bing Chat, to detect veracity of political information.
We use AI auditing methodology to investigate how chatbots evaluate true, false, and borderline statements on five topics: COVID-19, Russian aggression against Ukraine, the Holocaust, climate change, and LGBTQ+ related debates.
The results show high performance of ChatGPT for the baseline veracity evaluation task, with 72 percent of the cases evaluated correctly on average across languages without pre-training.
arXiv Detail & Related papers (2023-12-20T15:17:03Z)
- The Self-Perception and Political Biases of ChatGPT
This contribution analyzes the self-perception and political biases of OpenAI's Large Language Model ChatGPT.
The political compass test revealed a bias towards progressive and libertarian views.
Political questionnaires for the G7 member states indicated a bias towards progressive views but no significant bias between authoritarian and libertarian views.
arXiv Detail & Related papers (2023-04-14T18:06:13Z)
- The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation
OpenAI introduced ChatGPT, a state-of-the-art dialogue model that can converse with its human counterparts.
This paper focuses on one of democratic society's most important decision-making processes: political elections.
We uncover ChatGPT's pro-environmental, left-libertarian ideology.
arXiv Detail & Related papers (2023-01-05T07:13:13Z)
- Reaching the bubble may not be enough: news media role in online political polarization
One way to reduce polarization would be to distribute cross-partisan news among individuals with distinct political orientations.
This study investigates whether this holds in the context of nationwide elections in Brazil and Canada.
arXiv Detail & Related papers (2021-09-18T11:34:04Z)
- News consumption and social media regulations policy
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Mundus vult decipi, ergo decipiatur: Visual Communication of Uncertainty in Election Polls
We discuss potential sources of bias in nowcasting and forecasting.
Concepts are presented to attenuate the issue of falsely perceived accuracy.
One key idea is the use of Probabilities of Events instead of party shares.
arXiv Detail & Related papers (2021-04-28T07:02:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.