Large Language Models (LLMs) as Agents for Augmented Democracy
- URL: http://arxiv.org/abs/2405.03452v3
- Date: Tue, 30 Jul 2024 09:51:41 GMT
- Title: Large Language Models (LLMs) as Agents for Augmented Democracy
- Authors: Jairo Gudiño-Rosero, Umberto Grandi, César A. Hidalgo
- Abstract summary: We explore an augmented democracy system built on off-the-shelf LLMs fine-tuned to augment data on citizens' preferences.
We use a train-test cross-validation setup to estimate the accuracy with which the LLMs predict both a subject's individual political choices and the aggregate preferences of the full sample of participants.
- Score: 6.491009626125319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore an augmented democracy system built on off-the-shelf LLMs fine-tuned to augment data on citizens' preferences, elicited over policies extracted from the government programs of the two main candidates in Brazil's 2022 presidential election. We use a train-test cross-validation setup to estimate the accuracy with which the LLMs predict both a subject's individual political choices and the aggregate preferences of the full sample of participants. At the individual level, we find that LLMs predict out-of-sample preferences more accurately than a "bundle rule", which would assume that citizens always vote for the proposals of the candidate aligned with their self-reported political orientation. At the population level, we show that a probabilistic sample augmented by an LLM provides a more accurate estimate of the aggregate preferences of a population than the non-augmented probabilistic sample alone. Together, these results indicate that policy preference data augmented using LLMs can capture nuances that transcend party lines and represent a promising avenue of research for data augmentation.
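To make the "bundle rule" baseline concrete, the sketch below compares it against per-policy LLM predictions on held-out responses. This is a minimal illustration, not the authors' code: the data fields (`orientation`, per-policy votes), the candidate-to-policy mapping, and the `llm_predict` stub are assumptions.

```python
import random

# Hypothetical data: each citizen reports a political orientation and
# votes agree (1) / disagree (0) on policies drawn from two candidates' programs.
CITIZENS = [
    {"orientation": "left", "votes": {"policy_a": 1, "policy_b": 0}},
    {"orientation": "right", "votes": {"policy_a": 0, "policy_b": 1}},
]
POLICY_CANDIDATE = {"policy_a": "left", "policy_b": "right"}  # who proposed each policy

def bundle_rule(citizen, policy):
    # Bundle rule: a citizen supports a policy iff it comes from the
    # candidate matching their self-reported political orientation.
    return 1 if POLICY_CANDIDATE[policy] == citizen["orientation"] else 0

def llm_predict(citizen, policy):
    # Stub standing in for a fine-tuned LLM's per-policy prediction.
    return random.randint(0, 1)

def accuracy(predict, citizens):
    hits = total = 0
    for citizen in citizens:
        for policy, vote in citizen["votes"].items():
            hits += predict(citizen, policy) == vote
            total += 1
    return hits / total

print("bundle rule:", accuracy(bundle_rule, CITIZENS))
print("LLM (stub):", accuracy(llm_predict, CITIZENS))
```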
Related papers
- ElectionSim: Massive Population Election Simulation Powered by Large Language Model Driven Agents [70.17229548653852]
We introduce ElectionSim, an innovative election simulation framework based on large language models.
We present a million-level voter pool sampled from social media platforms to support accurate individual simulation.
We also introduce PPE, a poll-based presidential election benchmark to assess the performance of our framework under the U.S. presidential election scenario.
arXiv Detail & Related papers (2024-10-28T05:25:50Z)
- United in Diversity? Contextual Biases in LLM-Based Predictions of the 2024 European Parliament Elections [45.84205238554709]
Large language models (LLMs) are perceived by some as having the potential to revolutionize social science research.
In this study, we examine to what extent LLM-based predictions of public opinion exhibit context-dependent biases.
We predict voting behavior in the 2024 European Parliament elections using a state-of-the-art LLM.
arXiv Detail & Related papers (2024-08-29T16:01:06Z)
- Large language models can consistently generate high-quality content for election disinformation operations [2.98293101034582]
Large language models have raised concerns about their potential use in generating compelling election disinformation at scale.
This study presents a two-part investigation into the capabilities of LLMs to automate stages of an election disinformation operation.
arXiv Detail & Related papers (2024-08-13T08:45:34Z)
- Vox Populi, Vox AI? Using Language Models to Estimate German Public Opinion [45.84205238554709]
We generate a synthetic sample of personas matching the individual characteristics of the 2017 German Longitudinal Election Study respondents.
We ask the LLM GPT-3.5 to predict each respondent's vote choice and compare these predictions to the survey-based estimates.
We find that GPT-3.5 does not predict citizens' vote choice accurately, exhibiting a bias towards the Green and Left parties.
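To illustrate the persona approach described above, here is a minimal sketch that builds a prompt from survey-style attributes and asks a chat model for a vote choice. The attribute names, party list, and use of the OpenAI chat API are assumptions for illustration, not the study's actual pipeline.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona built from survey-style attributes.
persona = {"age": 54, "gender": "female", "state": "Bavaria",
           "education": "vocational degree", "union_member": False}

PARTIES = ["CDU/CSU", "SPD", "Greens", "FDP", "AfD", "Left"]

prompt = (
    "Act as the following German survey respondent: "
    + ", ".join(f"{k}={v}" for k, v in persona.items())
    + ". In the 2017 federal election, which party would you vote for? "
    + "Answer with exactly one of: " + ", ".join(PARTIES) + "."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output, to compare against survey estimates
)
print(response.choices[0].message.content)
```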
arXiv Detail & Related papers (2024-07-11T14:52:18Z)
- Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data [102.16105233826917]
Learning from preference labels plays a crucial role in fine-tuning large language models.
There are several distinct approaches for preference fine-tuning, including supervised learning, on-policy reinforcement learning (RL), and contrastive learning.
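As a concrete instance of the contrastive-learning family mentioned above, the sketch below implements a DPO-style pairwise loss over (chosen, rejected) response pairs. It is one common contrastive objective, not necessarily the variant studied in the paper; the log-probabilities and `beta` value are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style contrastive loss over (chosen, rejected) response pairs.

    Each argument is the summed token log-probability of a response under
    the policy being tuned (logp_*) or a frozen reference model (ref_logp_*).
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -F.logsigmoid(margin).mean()

# Placeholder log-probabilities for a batch of two preference pairs.
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -9.0]),
                torch.tensor([-12.5, -9.8]), torch.tensor([-13.5, -9.2]))
print(loss.item())
```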
arXiv Detail & Related papers (2024-04-22T17:20:18Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political alignment of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- LLM Voting: Human Choices and AI Collective Decision Making [0.0]
This paper investigates the voting behaviors of Large Language Models (LLMs), specifically GPT-4 and LLaMA-2.
We observed that the choice of voting methods and the presentation order influenced LLM voting outcomes.
We found that varying the persona can reduce some of these biases and enhance alignment with human choices.
arXiv Detail & Related papers (2024-01-31T14:52:02Z)
- Large Language Models Are Not Robust Multiple Choice Selectors [117.72712117510953]
Multiple choice questions (MCQs) serve as a common yet important task format in the evaluation of large language models (LLMs).
This work shows that modern LLMs are vulnerable to option position changes due to their inherent "selection bias".
We propose a label-free, inference-time debiasing method, called PriDe, which separates the model's prior bias for option IDs from the overall prediction distribution.
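The following is a minimal, hypothetical sketch of the kind of prior/content separation described above, based only on this summary: it assumes a multiplicative bias model and estimates the option-ID prior by averaging log-probabilities over cyclic permutations of the answer contents. The permutation scheme and toy numbers are illustrative assumptions, not the paper's implementation of PriDe.

```python
import numpy as np

def debias_mcq(perm_logprobs):
    """Sketch of prior/content separation for MCQ selection bias.

    perm_logprobs[k][p] is the model's log-probability of choosing the option
    ID at position p when the answer contents are cyclically shifted by k,
    i.e. content j sits at position (j + k) % n. Under a multiplicative bias
    model, log P(position p) ~ prior(p) + content(j), so averaging over
    permutations isolates the position prior from the content score.
    """
    logp = np.asarray(perm_logprobs)
    n = logp.shape[1]
    prior = logp.mean(axis=0)  # estimated log-prior for each option ID/position
    content = np.array([
        np.mean([logp[k][(j + k) % n] - prior[(j + k) % n] for k in range(n)])
        for j in range(n)
    ])
    return prior, content

# Toy example: the model favors option "A" regardless of which content is there.
perm_logprobs = np.log([[0.5, 0.2, 0.2, 0.1],   # shift 0
                        [0.6, 0.1, 0.2, 0.1],   # shift 1
                        [0.4, 0.3, 0.2, 0.1],   # shift 2
                        [0.5, 0.2, 0.1, 0.2]])  # shift 3
prior, content = debias_mcq(perm_logprobs)
print("ID prior:", np.round(prior, 2))
print("debiased content scores:", np.round(content, 2))
```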
arXiv Detail & Related papers (2023-09-07T17:44:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.