Persuasion at Play: Understanding Misinformation Dynamics in Demographic-Aware Human-LLM Interactions
- URL: http://arxiv.org/abs/2503.02038v1
- Date: Mon, 03 Mar 2025 20:30:22 GMT
- Title: Persuasion at Play: Understanding Misinformation Dynamics in Demographic-Aware Human-LLM Interactions
- Authors: Angana Borah, Rada Mihalcea, Verónica Pérez-Rosas
- Abstract summary: Large language models (LLMs) generate persuasive content at scale and reinforce existing biases. This study investigates the bidirectional persuasion dynamics between LLMs and humans when exposed to misinformative content. Our findings show that demographic factors influence susceptibility to misinformation in LLMs, closely reflecting the demographic-based patterns seen in human susceptibility.
- Score: 27.38030183605309
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing challenges in misinformation exposure and susceptibility vary across demographic groups, as some populations are more vulnerable to misinformation than others. Large language models (LLMs) introduce new dimensions to these challenges through their ability to generate persuasive content at scale and to reinforce existing biases. This study investigates the bidirectional persuasion dynamics between LLMs and humans when exposed to misinformative content. We analyze human-to-LLM influence using human-stance datasets and assess LLM-to-human influence by generating LLM-based persuasive arguments. Additionally, we use a multi-agent LLM framework to analyze the spread of misinformation under persuasion among demographic-oriented LLM agents. Our findings show that demographic factors influence susceptibility to misinformation in LLMs, closely reflecting the demographic-based patterns seen in human susceptibility. We also find that, similar to human demographic groups, multi-agent LLMs exhibit echo chamber behavior. This research explores the interplay between humans and LLMs, highlighting demographic differences in the context of misinformation and offering insights for future interventions.
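To make the multi-agent setup above concrete, here is a minimal Python sketch of demographic-persona agents exchanging persuasive arguments about a claim and re-reporting their stance. It is an illustration rather than the paper's actual framework: `query_llm` is a placeholder for whatever chat-completion backend is available, and the claim, personas, and three-way stance labels are hypothetical choices made only for demonstration.

```python
# Minimal sketch: demographic-persona LLM agents persuading each other about a claim.
# `query_llm`, the claim, and the personas are placeholders, not the paper's code.
from dataclasses import dataclass

def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion backend; plug in a real API call here."""
    raise NotImplementedError

@dataclass
class Agent:
    name: str
    persona: str            # demographic description used in the prompt
    stance: str = "unsure"  # "agree", "disagree", or "unsure" toward the claim

# Illustrative (hypothetical) misinformative claim and demographic personas.
CLAIM = "Vitamin C megadoses cure the common cold."
AGENTS = [
    Agent("A", "a 22-year-old urban college student"),
    Agent("B", "a 68-year-old retiree from a rural area"),
]

def argue(speaker: Agent, listener: Agent, claim: str) -> str:
    """Speaker writes a short persuasive argument aimed at the listener."""
    prompt = (
        f"You are {speaker.persona}. Your current stance toward the claim '{claim}' "
        f"is '{speaker.stance}'. Write a short, persuasive argument addressed to "
        f"{listener.persona} defending your stance."
    )
    return query_llm(prompt)

def update_stance(listener: Agent, claim: str, argument: str) -> str:
    """Listener re-reports its stance after reading the argument."""
    prompt = (
        f"You are {listener.persona}. Claim: '{claim}'. You just read this argument:\n"
        f"{argument}\n"
        "Answer with exactly one word - agree, disagree, or unsure - giving your stance now."
    )
    return query_llm(prompt).strip().lower()

def run_round(agents: list, claim: str) -> None:
    """One round of pairwise persuasion; stances are updated in place.
    Tracking stances per persona across rounds is one way to surface
    the echo-chamber behavior reported in the abstract."""
    for speaker in agents:
        for listener in agents:
            if speaker is not listener:
                listener.stance = update_stance(listener, claim, argue(speaker, listener, claim))
```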
Related papers
- Can (A)I Change Your Mind? [0.6990493129893112]
The study, conducted entirely in Hebrew with 200 participants, assessed the persuasive effects of both LLM and human interlocutors on controversial civil policy topics.
arXiv Detail & Related papers (2025-03-03T18:59:54Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Large Language Models Reflect the Ideology of their Creators [71.65505524599888]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
This paper shows that the ideological stance of an LLM appears to reflect the worldview of its creators.
arXiv Detail & Related papers (2024-10-24T04:02:30Z)
- Hate Personified: Investigating the role of LLMs in content moderation [64.26243779985393]
For subjective tasks such as hate detection, where people perceive hate differently, large language models' (LLMs) ability to represent diverse groups is unclear.
By including additional context in prompts, we analyze LLMs' sensitivity to geographical priming, persona attributes, and numerical information to assess how well the needs of various groups are reflected.
arXiv Detail & Related papers (2024-10-03T16:43:17Z)
- Beyond Demographics: Aligning Role-playing LLM-based Agents Using Human Belief Networks [5.76230391989518]
Using data from a human survey, we estimated a belief network encompassing 64 topics loading on nine non-overlapping latent factors.
We then seeded LLM-based agents with an opinion on one topic, and assessed the alignment of their expressed opinions on the remaining test topics with corresponding human data.
Role-playing based on demographic information alone did not align LLM and human opinions, but seeding the agent with a single belief greatly improved alignment for topics related in the belief network, though not for topics outside the network.
arXiv Detail & Related papers (2024-06-25T02:37:29Z)
- Modeling Human Subjectivity in LLMs Using Explicit and Implicit Human Factors in Personas [14.650234624251716]
Large language models (LLMs) are increasingly being used in human-centered social scientific tasks.
These tasks are highly subjective and dependent on human factors, such as one's environment, attitudes, beliefs, and lived experiences.
We examine the role of prompting LLMs with human-like personas and ask the models to answer as if they were a specific human.
arXiv Detail & Related papers (2024-06-20T16:24:07Z)
- Explaining Large Language Models Decisions Using Shapley Values [1.223779595809275]
Large language models (LLMs) have opened up exciting possibilities for simulating human behavior and cognitive processes.
However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain.
This paper presents a novel approach based on Shapley values to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output; a minimal illustrative sketch of the technique appears after this list.
arXiv Detail & Related papers (2024-03-29T22:49:43Z)
- The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-based Agents [7.986590413263814]
"Wisdom of partisan crowds" is a phenomenon known as the "wisdom of partisan crowds"
We find that partisan crowds display human-like partisan biases, but also converge to more accurate beliefs through deliberation as humans do.
We identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas.
arXiv Detail & Related papers (2023-11-16T08:30:15Z)
- On the steerability of large language models toward data-driven personas [98.9138902560793]
Large language models (LLMs) are known to generate biased responses where the opinions of certain groups and populations are underrepresented.
Here, we present a novel approach to achieve controllable generation of specific viewpoints using LLMs.
arXiv Detail & Related papers (2023-11-08T19:01:13Z)
- Do LLMs exhibit human-like response biases? A case study in survey design [66.1850490474361]
We investigate the extent to which large language models (LLMs) reflect human response biases, if at all.
We design a dataset and framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires.
Our comprehensive evaluation of nine models shows that popular open and commercial LLMs generally fail to reflect human-like behavior.
arXiv Detail & Related papers (2023-11-07T15:40:43Z)
- Quantifying the Impact of Large Language Models on Collective Opinion Dynamics [7.0012506428382375]
We create an opinion network dynamics model to encode the opinions of large language models (LLMs).
The results suggest that the output opinion of LLMs has a unique and positive effect on the collective opinion difference.
Our experiments also find that by introducing extra agents with opposite/neutral/random opinions, the impact of biased/toxic output can be effectively mitigated.
arXiv Detail & Related papers (2023-08-07T05:45:17Z)
- Influence of External Information on Large Language Models Mirrors Social Cognitive Patterns [51.622612759892775]
Social cognitive theory explains how people learn and acquire knowledge through observing others.
Recent years have witnessed the rapid development of large language models (LLMs).
LLMs, as AI agents, can observe external information, which shapes their cognition and behaviors.
arXiv Detail & Related papers (2023-05-08T16:10:18Z)
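The Shapley-value entry above lends itself to a small sketch of the underlying technique: exact Shapley values over a handful of prompt components, computed by enumerating subsets and averaging marginal contributions. This is not that paper's implementation; the component names, toy weights, and `score_prompt` function are assumptions, and in practice `score_prompt` would assemble a prompt from the included components, query the LLM, and return a scalar readout such as an agreement probability.

```python
# Hedged sketch of Shapley-value attribution over prompt components.
# Components, weights, and `score_prompt` are hypothetical stand-ins.
from itertools import combinations
from math import factorial

COMPONENTS = ["persona", "claim", "persuasive_argument", "answer_instruction"]

# Toy additive value function so the example runs end to end; replace with a
# real LLM-based score. In an additive game, each Shapley value equals its weight.
_TOY_WEIGHTS = {"persona": 0.1, "claim": 0.3, "persuasive_argument": 0.5, "answer_instruction": 0.1}

def score_prompt(included: frozenset) -> float:
    """Hypothetical scalar readout of the model given only the included components."""
    return sum(_TOY_WEIGHTS[c] for c in included)

def shapley_values(components: list) -> dict:
    """Exact Shapley values: each component's average marginal contribution,
    computed with the standard subset-weight formulation."""
    n = len(components)
    values = {c: 0.0 for c in components}
    for c in components:
        others = [x for x in components if x != c]
        for k in range(n):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                s = frozenset(subset)
                values[c] += weight * (score_prompt(s | {c}) - score_prompt(s))
    return values

if __name__ == "__main__":
    print(shapley_values(COMPONENTS))  # approx {'persona': 0.1, 'claim': 0.3, ...}
```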
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.