Generative AI Voting: Fair Collective Choice is Resilient to LLM Biases and Inconsistencies
- URL: http://arxiv.org/abs/2406.11871v2
- Date: Sun, 18 Aug 2024 12:25:32 GMT
- Title: Generative AI Voting: Fair Collective Choice is Resilient to LLM Biases and Inconsistencies
- Authors: Srijoni Majumdar, Edith Elkind, Evangelos Pournaras
- Abstract summary: We demonstrate, for the first time in a real-world setting, proportional representation of voters in direct democracy.
We also show that fair ballot aggregation methods, such as equal shares, prove to be a win-win: fairer voting outcomes for humans with fairer AI representation.
- Score: 21.444936180683147
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scaling up deliberative and voting participation is a longstanding endeavor -- a cornerstone for direct democracy and legitimate collective choice. Recent breakthroughs in generative artificial intelligence (AI) and large language models (LLMs) unravel new capabilities for AI personal assistants to overcome the cognitive bandwidth limitations of humans, providing decision support or even direct representation of human voters at large scale. However, the quality of this representation, and what underlying biases manifest when collective decision-making is delegated to LLMs, is an alarming and timely challenge to tackle. By rigorously emulating, with high realism, more than 50K LLM voting personas in 81 real-world voting elections, we disentangle the nature of different biases in LLMs (GPT-3, GPT-3.5, and Llama2). Complex preferential ballot formats exhibit significant inconsistencies, whereas simpler majoritarian elections show higher consistency. Strikingly, by demonstrating for the first time in a real-world setting proportional representation of voters in direct democracy, we are also able to show that fair ballot aggregation methods, such as equal shares, prove to be a win-win: fairer voting outcomes for humans with fairer AI representation. This novel underlying relationship proves paramount for democratic resilience in progressive scenarios with low voter turnout and voter fatigue, in which AI representatives support abstaining voters: abstentions are mitigated by recovering highly representative voting outcomes that are fairer. These interdisciplinary insights provide remarkable foundations for science, policymakers, and citizens to develop safeguards and resilience against AI risks in democratic innovations.
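The "equal shares" rule named in the abstract is the Method of Equal Shares from the computational social choice literature. A minimal sketch for approval ballots with unit-cost candidates (the function name and data layout are illustrative assumptions, not the paper's code):

```python
def equal_shares(approvals, candidates, k):
    """Method of Equal Shares for approval ballots with unit-cost candidates.

    approvals: one set of approved candidates per voter.
    Returns the elected committee (possibly smaller than k, since the rule
    leaves unaffordable seats to a separate completion step).
    """
    n = len(approvals)
    budget = [k / n] * n          # every voter gets an equal share of the budget
    committee = []
    while len(committee) < k:
        best, best_rho = None, None
        for c in candidates:
            if c in committee:
                continue
            supporters = sorted(
                (i for i, a in enumerate(approvals) if c in a),
                key=lambda i: budget[i],
            )
            if sum(budget[i] for i in supporters) < 1:
                continue          # supporters cannot jointly afford c
            # water-filling: the poorest supporters pay their whole budget,
            # the rest split the remaining cost equally (payment rho)
            remaining = 1.0
            for pos, i in enumerate(supporters):
                share = remaining / (len(supporters) - pos)
                if budget[i] < share:
                    remaining -= budget[i]
                else:
                    rho = share
                    break
            # prefer the candidate with the smallest maximal payment rho
            if best_rho is None or rho < best_rho:
                best, best_rho = c, rho
        if best is None:
            break                 # no candidate is affordable any more
        for i, a in enumerate(approvals):
            if best in a:
                budget[i] -= min(budget[i], best_rho)
        committee.append(best)
    return committee
```

Because every voter controls an equal slice of the budget and a candidate is elected only when its supporters can jointly pay for it, cohesive voter groups win seats roughly in proportion to their size, which is the proportionality property the abstract appeals to.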
Related papers
- Biased AI can Influence Political Decision-Making
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z)
- Deceptive uses of Artificial Intelligence in elections strengthen support for AI ban
We propose a framework for assessing AI's impact on elections.
We group AI-enabled campaigning uses into three categories -- campaign operations, voter outreach, and deception.
We provide the first systematic evidence from a preregistered representative survey.
arXiv Detail & Related papers (2024-08-08T12:58:20Z)
- Representation Bias in Political Sample Simulations with Large Language Models
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- LLM Voting: Human Choices and AI Collective Decision Making
This paper investigates the voting behaviors of Large Language Models (LLMs), specifically GPT-4 and LLaMA-2.
We observed that the choice of voting methods and the presentation order influenced LLM voting outcomes.
We found that varying the persona can reduce some of these biases and enhance alignment with human choices.
arXiv Detail & Related papers (2024-01-31T14:52:02Z)
- Candidate Incentive Distributions: How voting methods shape electoral incentives
We find that Instant Runoff Voting incentivizes candidates to appeal to a wider range of voters than Plurality Voting.
We find that Condorcet methods and STAR (Score Then Automatic Runoff) Voting provide the most balanced incentives.
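For readers unfamiliar with the rules compared in that paper, Instant Runoff Voting can be sketched in a few lines. A minimal illustration assuming strict, complete rankings and alphabetical tie-breaking (not the paper's simulation code):

```python
def instant_runoff(ballots):
    """Instant Runoff Voting on strict preference orders (most-preferred first).

    Repeatedly eliminates the candidate with the fewest first-choice votes
    until some candidate holds a strict majority of the remaining ballots.
    """
    ballots = [list(b) for b in ballots]
    while True:
        tallies = {}
        for b in ballots:
            if b:
                tallies[b[0]] = tallies.get(b[0], 0) + 1
        total = sum(tallies.values())
        leader = max(tallies, key=lambda c: tallies[c])
        if 2 * tallies[leader] > total or len(tallies) == 1:
            return leader
        # eliminate the weakest candidate (ties broken alphabetically)
        loser = min(tallies, key=lambda c: (tallies[c], c))
        ballots = [[c for c in b if c != loser] for b in ballots]
```

On the profile 4x(a>b>c), 3x(b>c>a), 2x(c>b>a), plurality would elect a with 4 of 9 first-choice votes, while IRV eliminates c, transfers those ballots to b, and elects b, illustrating why the two rules create different candidate incentives.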
arXiv Detail & Related papers (2023-06-12T14:32:46Z)
- Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision
Recent AI-assistant agents, such as ChatGPT, rely on supervised fine-tuning (SFT) with human annotations and reinforcement learning from human feedback to align the output with human intentions.
This dependence can significantly constrain the true potential of AI-assistant agents due to the high cost of obtaining human supervision.
We propose a novel approach called SELF-ALIGN, which combines principle-driven reasoning and the generative power of LLMs for the self-alignment of AI agents with minimal human supervision.
arXiv Detail & Related papers (2023-05-04T17:59:28Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Modeling Voters in Multi-Winner Approval Voting
We study voting behavior in single-winner and multi-winner approval voting scenarios with varying degrees of uncertainty.
We find that people generally manipulate their vote to obtain a better outcome, but often do not identify the optimal manipulation.
We propose a novel model that takes into account the size of the winning set and human cognitive constraints.
arXiv Detail & Related papers (2020-12-04T19:24:28Z)
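The baseline rule in such approval settings is utilitarian multi-winner approval voting, which simply seats the k most-approved candidates. A minimal sketch (the single-winner case is k = 1; the alphabetical tie-breaking is an assumption):

```python
from collections import Counter

def approval_winners(ballots, k):
    """Utilitarian multi-winner approval voting: the k candidates with the
    most approvals win, with ties broken alphabetically."""
    counts = Counter(c for ballot in ballots for c in ballot)
    return sorted(counts, key=lambda c: (-counts[c], c))[:k]
```

Because only total approval counts matter, a voter can sometimes gain by approving fewer (or more) candidates than they sincerely support, which is the kind of manipulation the paper above studies experimentally.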
This list is automatically generated from the titles and abstracts of the papers in this site.