A Framework to Assess the Persuasion Risks Large Language Model Chatbots Pose to Democratic Societies
- URL: http://arxiv.org/abs/2505.00036v1
- Date: Tue, 29 Apr 2025 16:02:51 GMT
- Title: A Framework to Assess the Persuasion Risks Large Language Model Chatbots Pose to Democratic Societies
- Authors: Zhongren Chen, Joshua Kalla, Quan Le, Shinpei Nakamura-Sakai, Jasjeet Sekhon, Ruixiao Wang
- Abstract summary: Significant concern has emerged regarding the potential threat that Large Language Models (LLMs) pose to democratic societies through their persuasive capabilities. We conduct two survey experiments and a real-world simulation exercise to determine whether it is more cost-effective to persuade a large number of voters using LLMs than with standard campaign methods. We estimate that LLM-based persuasion costs between $48 and $74 per persuaded voter, compared to $100 for traditional campaign methods.
- Score: 1.1819975950139372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, significant concern has emerged regarding the potential threat that Large Language Models (LLMs) pose to democratic societies through their persuasive capabilities. We expand upon existing research by conducting two survey experiments and a real-world simulation exercise to determine whether it is more cost-effective to persuade a large number of voters using LLM chatbots compared to standard political campaign practice, taking into account both the "receive" and "accept" steps in the persuasion process (Zaller 1992). These experiments improve upon previous work by assessing extended interactions between humans and LLMs (instead of using single-shot interactions) and by assessing both short- and long-run persuasive effects (rather than simply asking users to rate the persuasiveness of LLM-produced content). In two survey experiments (N = 10,417) across three distinct political domains, we find that while LLMs are about as persuasive as actual campaign ads once voters are exposed to them, political persuasion in the real world depends on both exposure to a persuasive message and its impact conditional on exposure. Through simulations based on real-world parameters, we estimate that LLM-based persuasion costs between $48 and $74 per persuaded voter, compared to $100 for traditional campaign methods, when accounting for the costs of exposure. However, it is currently much easier to scale traditional campaign persuasion methods than LLM-based persuasion. While LLMs do not currently appear to have substantially greater potential for large-scale political persuasion than existing non-LLM methods, this may change as LLM capabilities continue to improve and it becomes easier to scalably encourage exposure to persuasive LLMs.
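The cost comparison rests on a simple expected-cost identity: dollars spent per contact attempt divided by the joint probability of the "receive" and "accept" steps. Below is a minimal back-of-envelope sketch in Python; the parameter values are hypothetical placeholders chosen only so the outputs land near the abstract's figures, not the paper's actual estimates.

```python
# Back-of-envelope sketch of cost per persuaded voter under a two-step
# receive/accept model (Zaller 1992). All parameter values below are
# hypothetical illustrations, NOT the paper's estimates; only the
# $48-$74 vs. $100 endpoints come from the abstract.

def cost_per_persuaded_voter(
    cost_per_contact: float,   # dollars spent per contact attempt
    exposure_rate: float,      # P(voter is actually exposed) -- "receive"
    persuasion_rate: float,    # P(persuaded | exposed) -- "accept"
) -> float:
    """Expected dollars per persuaded voter: persuasion requires exposure
    first, so cost is divided by the joint probability of both steps."""
    joint_prob = exposure_rate * persuasion_rate
    if joint_prob <= 0:
        raise ValueError("no voters persuaded under these parameters")
    return cost_per_contact / joint_prob

if __name__ == "__main__":
    # Hypothetical LLM chatbot scenario: persuasive once engaged, but
    # voters must be driven to the chatbot, so exposure is the bottleneck.
    llm = cost_per_persuaded_voter(cost_per_contact=2.50,
                                   exposure_rate=0.25,
                                   persuasion_rate=0.17)
    # Hypothetical traditional campaign ad scenario: cheap wide reach,
    # but a much smaller persuasive effect per exposure.
    ad = cost_per_persuaded_voter(cost_per_contact=0.50,
                                  exposure_rate=0.50,
                                  persuasion_rate=0.01)
    print(f"LLM-based:       ${llm:.2f} per persuaded voter")   # ~$58.82
    print(f"Traditional ad:  ${ad:.2f} per persuaded voter")    # $100.00
```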
Related papers
- Can (A)I Change Your Mind? [0.6990493129893112]
The study was conducted entirely in Hebrew with 200 participants. It assessed the persuasive effects of both LLM and human interlocutors on controversial civil policy topics.
arXiv Detail & Related papers (2025-03-03T18:59:54Z) - Tailored Truths: Optimizing LLM Persuasion with Personalization and Fabricated Statistics [0.0]
Large Language Models (LLMs) are becoming increasingly persuasive. LLMs can personalize arguments in conversation with humans by leveraging their personal data. This may have serious impacts on the scale and effectiveness of disinformation campaigns.
arXiv Detail & Related papers (2025-01-28T20:06:09Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - Large Language Models Reflect the Ideology of their Creators [71.65505524599888]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
This paper shows that the ideological stance of an LLM appears to reflect the worldview of its creators.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - Transforming Scholarly Landscapes: Influence of Large Language Models on Academic Fields beyond Computer Science [77.31665252336157]
Large Language Models (LLMs) have ushered in a transformative era in Natural Language Processing (NLP).
This work empirically examines the influence and use of LLMs in fields beyond NLP.
arXiv Detail & Related papers (2024-09-29T01:32:35Z) - Measuring and Benchmarking Large Language Models' Capabilities to Generate Persuasive Language [41.052284715017606]
We study the ability of Large Language Models (LLMs) to produce persuasive text. As opposed to prior work, which focuses on particular domains or types of persuasion, we conduct a general study across various domains. We construct the new dataset Persuasive-Pairs, consisting of pairs of a short text and its rewrite by an LLM to amplify or diminish persuasive language.
arXiv Detail & Related papers (2024-06-25T17:40:47Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking [49.02867094432589]
Conversational search systems powered by large language models (LLMs) have already been used by hundreds of millions of people.
We investigate whether and how LLMs with opinion biases that either reinforce or challenge the user's view change this echo chamber effect.
arXiv Detail & Related papers (2024-02-08T18:14:33Z) - How should the advent of large language models affect the practice of science? [51.62881233954798]
How should the advent of large language models affect the practice of science?
We have invited four diverse groups of scientists to reflect on this query, sharing their perspectives and engaging in debate.
arXiv Detail & Related papers (2023-12-05T10:45:12Z) - Negotiating with LLMS: Prompt Hacks, Skill Gaps, and Reasoning Deficits [1.2818275315985972]
We conduct a user study engaging over 40 individuals across all age groups in price negotiations with an LLM.
We show that the negotiated prices humans manage to achieve span a broad range, which points to a literacy gap in effectively interacting with LLMs.
arXiv Detail & Related papers (2023-11-26T08:44:58Z)