Large Language Models (LLMs) as Agents for Augmented Democracy
- URL: http://arxiv.org/abs/2405.03452v3
- Date: Tue, 30 Jul 2024 09:51:41 GMT
- Title: Large Language Models (LLMs) as Agents for Augmented Democracy
- Authors: Jairo Gudiño-Rosero, Umberto Grandi, César A. Hidalgo
- Abstract summary: We explore an augmented democracy system built on off-the-shelf LLMs fine-tuned to augment data on citizens' preferences.
We use a train-test cross-validation setup to estimate the accuracy with which the LLMs predict both a subject's individual political choices and the aggregate preferences of the full sample of participants.
- Score: 6.491009626125319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore an augmented democracy system built on off-the-shelf LLMs fine-tuned to augment data on citizens' preferences elicited over policies extracted from the government programs of the two main candidates in Brazil's 2022 presidential election. We use a train-test cross-validation setup to estimate the accuracy with which the LLMs predict both a subject's individual political choices and the aggregate preferences of the full sample of participants. At the individual level, we find that LLMs predict out-of-sample preferences more accurately than a "bundle rule", which would assume that citizens always vote for the proposals of the candidate aligned with their self-reported political orientation. At the population level, we show that a probabilistic sample augmented by an LLM provides a more accurate estimate of the aggregate preferences of a population than the non-augmented probabilistic sample alone. Together, these results indicate that policy preference data augmented using LLMs can capture nuances that transcend party lines and represent a promising avenue of research for data augmentation.
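To make the two comparisons in the abstract concrete, here is a minimal sketch; the data layout and all names (`bundle_rule_predictions`, `aggregate_support`, etc.) are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

# Assumed data layout (hypothetical, for illustration only):
#   y[i, j] = 1 if participant i supports proposal j, else 0
#   candidate_of[j] = which candidate's program proposal j comes from
#   orientation[i] = the candidate aligned with i's self-reported politics

def bundle_rule_predictions(orientation: np.ndarray, candidate_of: np.ndarray) -> np.ndarray:
    """Baseline "bundle rule": assume each citizen supports exactly the
    proposals of the candidate matching their political orientation."""
    return (orientation[:, None] == candidate_of[None, :]).astype(int)

def accuracy(pred: np.ndarray, y: np.ndarray) -> float:
    """Fraction of (participant, proposal) cells predicted correctly."""
    return float((pred == y).mean())

def aggregate_support(y: np.ndarray) -> np.ndarray:
    """Population-level estimate: share of participants supporting each proposal."""
    return y.mean(axis=0)

# Individual level: the paper's claim amounts to
#   accuracy(llm_pred_test, y_test) > accuracy(bundle_rule_predictions(...), y_test)
# on held-out folds of the train-test cross-validation.
# Population level: the LLM-augmented sample wins if aggregate_support() computed
# on it is closer to the full population's than that of the raw sample alone.
```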
Related papers
- Specializing Large Language Models to Simulate Survey Response Distributions for Global Populations [49.908708778200115]
We are the first to specialize large language models (LLMs) for simulating survey response distributions.
As a testbed, we use country-level results from two global cultural surveys.
We devise a fine-tuning method based on first-token probabilities to minimize divergence between predicted and actual response distributions.
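One plausible reading of that objective is sketched below: restrict the model's first-token logits to the tokens encoding the answer options, then penalize the divergence from the observed survey distribution. The shapes, option tokens, and the choice of KL are assumptions, not the paper's published recipe:

```python
import torch
import torch.nn.functional as F

def first_token_option_logprobs(logits: torch.Tensor,
                                option_token_ids: list[int]) -> torch.Tensor:
    """Restrict next-token logits at the answer position to the tokens that
    encode the answer options (e.g. ' A', ' B', ...) and renormalize."""
    option_logits = logits[..., option_token_ids]   # (batch, n_options)
    return F.log_softmax(option_logits, dim=-1)

def distribution_matching_loss(logits: torch.Tensor,
                               survey_dist: torch.Tensor,
                               option_token_ids: list[int]) -> torch.Tensor:
    """KL(survey ‖ model) between the actual survey response distribution and
    the model's predicted first-token distribution over the options."""
    log_pred = first_token_option_logprobs(logits, option_token_ids)
    return F.kl_div(log_pred, survey_dist, reduction="batchmean")
```

During fine-tuning, a loss of this form would replace or complement the standard next-token cross-entropy on the answer tokens.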
arXiv Detail & Related papers (2025-02-10T21:59:27Z)
- A Large-scale Empirical Study on Large Language Models for Election Prediction [12.582222782098587]
We introduce a multi-step reasoning framework for election prediction, which integrates demographic, ideological, and time-sensitive factors.
We apply our approach to the 2024 U.S. presidential election, illustrating its ability to generalize beyond observed historical data.
We identify potential political biases embedded in pretrained corpora, examine how demographic patterns can become exaggerated, and suggest strategies for mitigating these issues.
arXiv Detail & Related papers (2024-12-19T07:10:51Z)
- Algorithmic Fidelity of Large Language Models in Generating Synthetic German Public Opinions: A Case Study [23.458234676060716]
This study investigates the algorithmic fidelity of large language models (LLMs), i.e., how faithfully they can reproduce the opinions of real subpopulations.
We prompt different LLMs to generate synthetic public opinions reflective of German subpopulations by incorporating demographic features into the persona prompts.
Our results show that Llama performs better than other LLMs at representing subpopulations, particularly when there is lower opinion diversity within those groups.
arXiv Detail & Related papers (2024-12-17T18:46:32Z)
- ElectionSim: Massive Population Election Simulation Powered by Large Language Model Driven Agents [70.17229548653852]
We introduce ElectionSim, an innovative election simulation framework based on large language models.
We present a million-level voter pool sampled from social media platforms to support accurate individual simulation.
We also introduce PPE, a poll-based presidential election benchmark to assess the performance of our framework under the U.S. presidential election scenario.
arXiv Detail & Related papers (2024-10-28T05:25:50Z)
- United in Diversity? Contextual Biases in LLM-Based Predictions of the 2024 European Parliament Elections [45.84205238554709]
Large language models (LLMs) are perceived by some as having the potential to revolutionize social science research.
In this study, we examine to what extent LLM-based predictions of public opinion exhibit context-dependent biases.
We predict voting behavior in the 2024 European Parliament elections using a state-of-the-art LLM.
arXiv Detail & Related papers (2024-08-29T16:01:06Z)
- Vox Populi, Vox AI? Using Language Models to Estimate German Public Opinion [45.84205238554709]
We generate a synthetic sample of personas matching the individual characteristics of the 2017 German Longitudinal Election Study respondents.
We ask the LLM GPT-3.5 to predict each respondent's vote choice and compare these predictions to the survey-based estimates.
We find that GPT-3.5 does not predict citizens' vote choice accurately, exhibiting a bias towards the Green and Left parties.
arXiv Detail & Related papers (2024-07-11T14:52:18Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political alignment of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- LLM Voting: Human Choices and AI Collective Decision Making [0.0]
This paper investigates the voting behaviors of Large Language Models (LLMs), specifically GPT-4 and LLaMA-2.
We observed that the choice of voting methods and the presentation order influenced LLM voting outcomes.
We found that varying the persona can reduce some of these biases and enhance alignment with human choices.
arXiv Detail & Related papers (2024-01-31T14:52:02Z)
- Large Language Models Are Not Robust Multiple Choice Selectors [117.72712117510953]
Multiple choice questions (MCQs) serve as a common yet important task format in the evaluation of large language models (LLMs).
This work shows that modern LLMs are vulnerable to option position changes due to their inherent "selection bias".
We propose a label-free, inference-time debiasing method, called PriDe, which separates the model's prior bias for option IDs from the overall prediction distribution; a simplified sketch follows this list.
arXiv Detail & Related papers (2023-09-07T17:44:56Z)
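To make the PriDe idea above concrete, here is a deliberately simplified sketch: observe the model's option-ID probabilities under a cyclic set of content permutations, isolate the ID prior by averaging log-probabilities per ID, then divide it out. This illustrates the prior/preference decomposition only; it is not the paper's exact estimator:

```python
import numpy as np

def debias_option_ids(prob_rows: np.ndarray, perms: list[list[int]]) -> np.ndarray:
    """Simplified, PriDe-style debiasing over option IDs (illustrative only).

    prob_rows[k, d]: probability the model assigns to option ID d (A, B, ...)
                     when option contents are arranged according to perms[k].
    perms[k][d]:     index of the content displayed at ID d under permutation k.
    Assumes a cyclic set of permutations, so every content appears at every
    ID exactly once.
    """
    log_p = np.log(prob_rows + 1e-12)
    # Averaging over permutations at a fixed ID cancels the content preferences
    # (each content appears there exactly once), leaving the ID prior up to a constant.
    log_prior = log_p.mean(axis=0)
    n_perms, n_opts = prob_rows.shape
    log_pref = np.zeros(n_opts)
    for k in range(n_perms):
        for d in range(n_opts):
            # Remove the ID prior, then map the mass back from ID space to content space.
            log_pref[perms[k][d]] += log_p[k, d] - log_prior[d]
    pref = np.exp(log_pref / n_perms)
    return pref / pref.sum()
```

For three options, the cyclic permutations would be `[[0, 1, 2], [1, 2, 0], [2, 0, 1]]`.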
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.