Persona-driven Simulation of Voting Behavior in the European Parliament with Large Language Models
- URL: http://arxiv.org/abs/2506.11798v1
- Date: Fri, 13 Jun 2025 14:02:21 GMT
- Title: Persona-driven Simulation of Voting Behavior in the European Parliament with Large Language Models
- Authors: Maximilian Kreutner, Marlene Lutz, Markus Strohmaier
- Abstract summary: We analyze whether zero-shot persona prompting with limited information can accurately predict individual voting decisions. We find that we can simulate voting behavior of Members of the European Parliament reasonably well with a weighted F1 score of approximately 0.793.
- Score: 1.7990260056064977
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) display remarkable capabilities to understand or even produce political discourse, but have been found to consistently display a progressive left-leaning bias. At the same time, so-called persona or identity prompts have been shown to produce LLM behavior that aligns with socioeconomic groups that the base model is not aligned with. In this work, we analyze whether zero-shot persona prompting with limited information can accurately predict individual voting decisions and, by aggregation, accurately predict positions of European groups on a diverse set of policies. We evaluate if predictions are stable towards counterfactual arguments, different persona prompts and generation methods. Finally, we find that we can simulate voting behavior of Members of the European Parliament reasonably well with a weighted F1 score of approximately 0.793. Our persona dataset of politicians in the 2024 European Parliament and our code are available at https://github.com/dess-mannheim/european_parliament_simulation.
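The headline result is a weighted F1 score over simulated roll-call votes. As a minimal sketch of this setup in Python (the persona fields and prompt wording below are assumptions for illustration; the authors' actual prompts and data are in the linked repository):

```python
# Minimal sketch, not the authors' implementation: build a zero-shot persona
# prompt from limited MEP information, then score simulated votes with a
# weighted F1. Field names like 'national_party' are hypothetical.
from sklearn.metrics import f1_score

def persona_prompt(mep: dict, motion: str) -> str:
    """Compose a zero-shot persona prompt for one vote (illustrative wording)."""
    return (
        f"You are {mep['name']}, a Member of the European Parliament from "
        f"{mep['country']}, elected for {mep['national_party']} and sitting "
        f"with the {mep['eu_group']} group.\n"
        f"The Parliament votes on the following motion:\n{motion}\n"
        "How do you vote? Answer with exactly one word: FOR, AGAINST, or ABSTAIN."
    )

# Evaluation as described in the abstract: weighted F1 over all simulated votes.
y_true = ["FOR", "AGAINST", "FOR", "ABSTAIN"]      # recorded roll-call votes
y_pred = ["FOR", "AGAINST", "AGAINST", "ABSTAIN"]  # LLM-simulated votes
print(f1_score(y_true, y_pred, average="weighted"))
```

Weighted F1 averages per-class F1 weighted by class support, a reasonable choice here since FOR, AGAINST, and ABSTAIN votes are unlikely to be balanced.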
Related papers
- Deep Binding of Language Model Virtual Personas: a Study on Approximating Political Partisan Misperceptions [4.234771450043289]
Large language models (LLMs) are increasingly capable of simulating human behavior.
We propose a novel methodology for constructing virtual personas with synthetic user "backstories" generated as extended, multi-turn interview transcripts.
Our generated backstories are longer, rich in detail, and consistent in authentically describing a singular individual.
arXiv Detail & Related papers (2025-04-16T00:10:34Z)
- Beyond Partisan Leaning: A Comparative Analysis of Political Bias in Large Language Models [6.549047699071195]
This study adopts a persona-free, topic-specific approach to evaluate political behavior in large language models.
We analyze responses from 43 large language models developed in the U.S., Europe, China, and the Middle East.
Findings show most models lean center-left or left ideologically and vary in their nonpartisan engagement patterns.
arXiv Detail & Related papers (2024-12-21T19:42:40Z)
- ElectionSim: Massive Population Election Simulation Powered by Large Language Model Driven Agents [70.17229548653852]
We introduce ElectionSim, an innovative election simulation framework based on large language models.
We present a million-level voter pool sampled from social media platforms to support accurate individual simulation.
We also introduce PPE, a poll-based presidential election benchmark to assess the performance of our framework under the U.S. presidential election scenario.
arXiv Detail & Related papers (2024-10-28T05:25:50Z)
- United in Diversity? Contextual Biases in LLM-Based Predictions of the 2024 European Parliament Elections [42.72938925647165]
"Synthetic samples" based on large language models (LLMs) have been argued to serve as efficient alternatives to surveys of humans.<n>"Synthetic samples" might exhibit bias due to training data and fine-tuning processes being unrepresentative of diverse contexts.<n>This study investigates if and under which conditions LLM-generated synthetic samples can be used for public opinion prediction.
arXiv Detail & Related papers (2024-08-29T16:01:06Z)
- GermanPartiesQA: Benchmarking Commercial Large Language Models for Political Bias and Sycophancy [20.06753067241866]
We evaluate and compare the alignment of six LLMs by OpenAI, Anthropic, and Cohere with German party positions.
We conduct a prompt experiment using this benchmark together with sociodemographic data of leading German parliamentarians.
arXiv Detail & Related papers (2024-07-25T13:04:25Z)
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- Large Language Models (LLMs) as Agents for Augmented Democracy [6.491009626125319]
We explore an augmented democracy system built on off-the-shelf LLMs fine-tuned to augment data on citizens' preferences.
We use a train-test cross-validation setup to estimate the accuracy with which the LLMs predict both a subject's individual political choices and the aggregate preferences of the full sample of participants (see the sketch after this entry).
arXiv Detail & Related papers (2024-05-06T13:23:57Z)
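As a minimal illustration of the train-test cross-validation idea in the entry above, with a stand-in classifier in place of a fine-tuned LLM and fully synthetic data:

```python
# Sketch only: cross-validated accuracy on individual choices, then a check of
# whether the aggregate (majority) preference is reproduced. A logistic
# regression stands in for the fine-tuned LLM; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                         # stand-in (subject, proposal) features
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)  # stand-in binary choices

pred = cross_val_predict(LogisticRegression(), X, y, cv=5)
print("individual accuracy:", (pred == y).mean())
print("aggregate preference matched:",
      int(pred.mean() > 0.5) == int(y.mean() > 0.5))
```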
- The PRISM Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models [67.38144169029617]
We map the sociodemographics and stated preferences of 1,500 diverse participants from 75 countries, to their contextual preferences and fine-grained feedback in 8,011 live conversations with 21 Large Language Models (LLMs).
With PRISM, we contribute (i) wider geographic and demographic participation in feedback; (ii) census-representative samples for two countries (UK, US); and (iii) individualised ratings that link to detailed participant profiles, permitting personalisation and attribution of sample artefacts.
We use PRISM in three case studies to demonstrate the need for careful consideration of which humans provide what alignment data.
arXiv Detail & Related papers (2024-04-24T17:51:36Z)
- Do Membership Inference Attacks Work on Large Language Models? [141.2019867466968]
Membership inference attacks (MIAs) attempt to predict whether a particular datapoint is a member of a target model's training data.
We perform a large-scale evaluation of MIAs over a suite of language models trained on the Pile, ranging from 160M to 12B parameters.
We find that MIAs barely outperform random guessing for most settings across varying LLM sizes and domains (a minimal baseline is sketched after this entry).
arXiv Detail & Related papers (2024-02-12T17:52:05Z)
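For context on the entry above: a loss-threshold attack is one of the simplest MIA baselines, scoring each example by the target model's loss and predicting membership when the loss is low. A minimal sketch on synthetic losses (not the specific attacks evaluated in the paper):

```python
# Loss-threshold membership inference, illustrated on synthetic per-example
# losses. Near-identical member/non-member loss distributions yield an AUC
# close to 0.5, mirroring the paper's near-chance finding.
import numpy as np
from sklearn.metrics import roc_auc_score

def loss_threshold_mia_auc(losses: np.ndarray, is_member: np.ndarray) -> float:
    """AUC of predicting membership from negated loss (lower loss = more likely member)."""
    return roc_auc_score(is_member, -losses)

rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(2.00, 0.5, 1000),   # members
                         rng.normal(2.05, 0.5, 1000)])  # non-members
is_member = np.concatenate([np.ones(1000), np.zeros(1000)])
print("attack AUC:", loss_threshold_mia_auc(losses, is_member))  # ~0.5
```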
- On the steerability of large language models toward data-driven personas [98.9138902560793]
Large language models (LLMs) are known to generate biased responses where the opinions of certain groups and populations are underrepresented.
Here, we present a novel approach to achieve controllable generation of specific viewpoints using LLMs.
arXiv Detail & Related papers (2023-11-08T19:01:13Z)
- Whose Opinions Do Language Models Reflect? [88.35520051971538]
We investigate the opinions reflected by language models (LMs) by leveraging high-quality public opinion polls and their associated human responses.
We find substantial misalignment between the views reflected by current LMs and those of US demographic groups.
Our analysis confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs (a toy alignment comparison is sketched after this entry).
arXiv Detail & Related papers (2023-03-30T17:17:08Z)
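As a toy illustration of comparing an LM's answer distribution on a poll question with a demographic group's responses (the paper defines its own alignment metric; one minus total variation distance below is just one simple choice):

```python
# Sketch: similarity between an LM's sampled answer frequencies and a group's
# poll responses on one multiple-choice question. All numbers are made up.
import numpy as np

def opinion_similarity(lm_dist: np.ndarray, human_dist: np.ndarray) -> float:
    """1 minus total variation distance between two answer distributions."""
    return 1.0 - 0.5 * float(np.abs(lm_dist - human_dist).sum())

lm = np.array([0.55, 0.30, 0.10, 0.05])     # LM answer frequencies (hypothetical)
group = np.array([0.20, 0.25, 0.30, 0.25])  # group poll responses (hypothetical)
print("alignment:", opinion_similarity(lm, group))  # low value = misalignment
```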