Can LLMs advance democratic values?
- URL: http://arxiv.org/abs/2410.08418v2
- Date: Thu, 17 Oct 2024 22:04:48 GMT
- Title: Can LLMs advance democratic values?
- Authors: Seth Lazar, Lorenzo Manuali
- Abstract summary: We argue that LLMs should be kept well clear of formal democratic decision-making processes.
They can be put to good use in strengthening the informal public sphere.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LLMs are among the most advanced tools ever devised for analysing and generating linguistic content. Democratic deliberation and decision-making involve, at several distinct stages, the production and analysis of language. So it is natural to ask whether our best tools for manipulating language might prove instrumental to one of our most important linguistic tasks. Researchers and practitioners have recently asked whether LLMs can support democratic deliberation by leveraging abilities to summarise content, as well as to aggregate opinion over summarised content, and indeed to represent voters by predicting their preferences over unseen choices. In this paper, we assess whether using LLMs to perform these and related functions really advances the democratic values that inspire these experiments. We suggest that the record is decidedly mixed. In the presence of background inequality of power and resources, as well as deep moral and political disagreement, we should be careful not to use LLMs in ways that automate non-instrumentally valuable components of the democratic process, or else threaten to supplant fair and transparent decision-making procedures that are necessary to reconcile competing interests and values. However, while we argue that LLMs should be kept well clear of formal democratic decision-making processes, we think that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities that they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
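To make the deliberative functions mentioned in the abstract concrete, here is a minimal sketch of that pipeline: summarising submitted opinions, aggregating the summaries, and predicting a voter's preference on an unseen proposal. This is not from the paper; `call_llm`, `summarise_opinions`, `aggregate_summaries`, and `predict_preference` are illustrative names, and `call_llm` is a hypothetical stand-in for whatever chat-completion API one would actually use.

```python
# Minimal sketch of the deliberative pipeline the abstract describes:
# summarise free-text opinions, aggregate the summaries, and predict a
# voter's preference on an unseen proposal. `call_llm` is a hypothetical
# wrapper; no specific vendor interface is assumed.
from typing import List


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM chat-completion endpoint."""
    raise NotImplementedError("plug in your provider's API here")


def summarise_opinions(opinions: List[str]) -> List[str]:
    # One summary per submission, keeping the author's stance and reasons.
    return [
        call_llm(f"Summarise this opinion in two sentences, preserving its stance:\n{text}")
        for text in opinions
    ]


def aggregate_summaries(summaries: List[str]) -> str:
    # Collapse individual summaries into a statement of agreement and
    # disagreement -- the step the paper warns may supplant, rather than
    # support, fair and transparent decision-making procedures.
    joined = "\n- ".join(summaries)
    return call_llm(
        "Given these opinion summaries, describe the main points of agreement "
        f"and disagreement without taking a side:\n- {joined}"
    )


def predict_preference(voter_profile: str, proposal: str) -> str:
    # "Represent" a voter by predicting their choice on an unseen proposal.
    return call_llm(
        f"A voter holds these views: {voter_profile}\n"
        "Would they likely support or oppose this proposal? Answer 'support' or 'oppose'.\n"
        f"Proposal: {proposal}"
    )
```

The sketch is only meant to show how few moving parts such systems need; the paper's argument is precisely that the last two steps automate non-instrumentally valuable parts of the democratic process.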
Related papers
- NewsInterview: a Dataset and a Playground to Evaluate LLMs' Ground Gap via Informational Interviews [65.35458530702442]
We focus on journalistic interviews, a domain rich in grounding communication and abundant in data.
We curate a dataset of 40,000 two-person informational interviews from NPR and CNN.
LLMs are significantly less likely than human interviewers to use acknowledgements and to pivot to higher-level questions.
arXiv Detail & Related papers (2024-11-21T01:37:38Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Fact or Fiction? Can LLMs be Reliable Annotators for Political Truths? [2.321323878201932]
Political misinformation poses challenges to democratic processes, shaping public opinion and trust in media.
This study investigates the use of state-of-the-art large language models (LLMs) as reliable annotators for detecting political factuality in news articles.
arXiv Detail & Related papers (2024-11-08T18:36:33Z)
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- GermanPartiesQA: Benchmarking Commercial Large Language Models for Political Bias and Sycophancy [20.06753067241866]
We evaluate and compare the alignment of six LLMs by OpenAI, Anthropic, and Cohere with German party positions.
We conduct a prompt experiment using this benchmark together with sociodemographic data of leading German parliamentarians.
arXiv Detail & Related papers (2024-07-25T13:04:25Z)
- Assessing Political Bias in Large Language Models [0.624709220163167]
We evaluate the political bias of open-source Large Language Models (LLMs) concerning political issues within the European Union (EU) from a German voter's perspective.
We show that larger models, such as Llama3-70B, tend to align more closely with left-leaning political parties, while smaller models often remain neutral.
arXiv Detail & Related papers (2024-05-17T15:30:18Z)
- Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z)
- LLM Voting: Human Choices and AI Collective Decision Making [0.0]
This paper investigates the voting behaviors of Large Language Models (LLMs), specifically GPT-4 and LLaMA-2.
We observed that the choice of voting methods and the presentation order influenced LLM voting outcomes.
We found that varying the persona can reduce some of these biases and enhance alignment with human choices.
arXiv Detail & Related papers (2024-01-31T14:52:02Z)
- How should the advent of large language models affect the practice of science? [51.62881233954798]
How should the advent of large language models affect the practice of science?
We have invited four diverse groups of scientists to reflect on this query, sharing their perspectives and engaging in debate.
arXiv Detail & Related papers (2023-12-05T10:45:12Z)
- The ART of LLM Refinement: Ask, Refine, and Trust [85.75059530612882]
We propose a reasoning-with-refinement objective called ART: Ask, Refine, and Trust.
It asks necessary questions to decide when an LLM should refine its output.
It achieves a performance gain of +5 points over self-refinement baselines.
arXiv Detail & Related papers (2023-11-14T07:26:32Z)
- Vox Populi, Vox ChatGPT: Large Language Models, Education and Democracy [0.0]
This paper explores the potential transformative impact of large language models (LLMs) on democratic societies.
The discussion emphasizes the essence of authorship, rooted in the unique human capacity for reason.
We advocate for an emphasis on education as a means to mitigate risks.
arXiv Detail & Related papers (2023-11-10T17:47:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.