Simulating Misinformation Vulnerabilities With Agent Personas
- URL: http://arxiv.org/abs/2511.04697v1
- Date: Fri, 31 Oct 2025 18:44:00 GMT
- Title: Simulating Misinformation Vulnerabilities With Agent Personas
- Authors: David Farr, Lynnette Hui Xian Ng, Stephen Prochaska, Iain J. Cruickshank, Jevin West
- Abstract summary: We develop an agent-based simulation using Large Language Models to model responses to misinformation. We construct agent personas spanning five professions and three mental schemas, and evaluate their reactions to news headlines. Our findings show that LLM-generated agents align closely with ground-truth labels and human predictions, supporting their use as proxies for studying information responses.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Disinformation campaigns can distort public perception and destabilize institutions. Understanding how different populations respond to information is crucial for designing effective interventions, yet real-world experimentation is impractical and ethically challenging. To address this, we develop an agent-based simulation using Large Language Models (LLMs) to model responses to misinformation. We construct agent personas spanning five professions and three mental schemas, and evaluate their reactions to news headlines. Our findings show that LLM-generated agents align closely with ground-truth labels and human predictions, supporting their use as proxies for studying information responses. We also find that mental schemas, more than professional background, influence how agents interpret misinformation. This work provides a validation of LLMs to be used as agents in an agent-based model of an information network for analyzing trust, polarization, and susceptibility to deceptive content in complex social systems.
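The persona-conditioned setup described in the abstract can be illustrated with a minimal sketch. The profession list, schema labels, prompt wording, and the stubbed model call below are all illustrative assumptions, not details from the paper; a real run would replace `stub_llm` with an actual LLM API call.

```python
from dataclasses import dataclass
import random

# Hypothetical persona dimensions; the paper uses five professions
# and three mental schemas, but the specific labels here are assumed.
PROFESSIONS = ["nurse", "teacher", "engineer", "journalist", "farmer"]
SCHEMAS = ["trusting", "skeptical", "conspiratorial"]

@dataclass
class AgentPersona:
    profession: str
    schema: str

    def prompt(self, headline: str) -> str:
        return (f"You are a {self.profession} with a {self.schema} outlook. "
                f"Label this headline as 'real' or 'fake': {headline}")

def stub_llm(prompt: str, rng: random.Random) -> str:
    # Stand-in for a real LLM call. The schema term dominates the
    # response, loosely mirroring the paper's finding that mental
    # schemas influence interpretation more than profession does.
    if "skeptical" in prompt or "conspiratorial" in prompt:
        return "fake" if rng.random() < 0.7 else "real"
    return "real" if rng.random() < 0.7 else "fake"

def simulate(headline: str, seed: int = 0) -> dict:
    """Collect one label per (profession, schema) persona."""
    rng = random.Random(seed)
    agents = [AgentPersona(p, s) for p in PROFESSIONS for s in SCHEMAS]
    return {(a.profession, a.schema): stub_llm(a.prompt(headline), rng)
            for a in agents}

labels = simulate("Scientists confirm new planet in solar system")
```

Aggregating `labels` by schema versus by profession is then the natural way to probe which persona dimension drives the responses.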
Related papers
- Thinking Makes LLM Agents Introverted: How Mandatory Thinking Can Backfire in User-Engaged Agents [23.785816075149484]
Eliciting reasoning has emerged as a powerful technique for improving the performance of large language models (LLMs) on complex tasks by inducing thinking. We conduct a comprehensive study on the effect of explicit thinking in user-engaged LLM agents. We find that mandatory thinking often backfires on agents in user-engaged settings, causing anomalous performance degradation.
arXiv Detail & Related papers (2026-02-08T03:23:22Z)
- On the Role of Contextual Information and Ego States in LLM Agent Behavior for Transactional Analysis Dialogues [0.0]
This paper proposes a Multi-Agent System inspired by Transactional Analysis (TA) theory. In the proposed system, each agent is divided into three ego states - Parent, Adult, and Child. The results are promising and open up new directions for exploring how psychologically grounded structures can enrich agent behavior.
arXiv Detail & Related papers (2025-12-18T20:53:31Z)
- Are Your Agents Upward Deceivers? [73.1073084327614]
Large Language Model (LLM)-based agents are increasingly used as autonomous subordinates that carry out tasks for users. This raises the question of whether they may also engage in deception, similar to how individuals in human organizations lie to superiors to create a good image or avoid punishment. We observe and define agentic upward deception, a phenomenon in which an agent facing environmental constraints conceals its failure and performs actions that were not requested without reporting.
arXiv Detail & Related papers (2025-12-04T14:47:05Z)
- Simulating Misinformation Propagation in Social Networks using Large Language Models [4.285464959472458]
Misinformation on social media thrives on surprise, emotion, and identity-driven reasoning, often amplified through human cognitive biases. To investigate these mechanisms, we model large language model (LLM) personas as synthetic agents that mimic user-level biases, ideological alignments, and trust in misinformation. Within this setup, we introduce an auditor-conditioned-node framework to simulate and analyze how misinformation evolves as it circulates through networks of such agents.
arXiv Detail & Related papers (2025-11-13T15:01:19Z)
- A Survey on Agentic Multimodal Large Language Models [84.18778056010629]
We present a comprehensive survey on Agentic Multimodal Large Language Models (Agentic MLLMs). We explore the emerging paradigm of agentic MLLMs, delineating their conceptual foundations and distinguishing characteristics from conventional MLLM-based agents. To further accelerate research in this area for the community, we compile open-source training frameworks, training and evaluation datasets for developing agentic MLLMs.
arXiv Detail & Related papers (2025-10-13T04:07:01Z)
- FAIRGAME: a Framework for AI Agents Bias Recognition using Game Theory [51.96049148869987]
We present FAIRGAME, a Framework for AI Agents Bias Recognition using Game Theory. We describe its implementation and usage, and we employ it to uncover biased outcomes in popular games among AI agents. Overall, FAIRGAME allows users to reliably and easily simulate their desired games and scenarios.
arXiv Detail & Related papers (2025-04-19T15:29:04Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Large Language Model-driven Multi-Agent Simulation for News Diffusion Under Different Network Structures [36.45109260662318]
This work employs a large language model (LLM)-driven multi-agent simulation to replicate complex interactions within information ecosystems.
We investigate key factors that facilitate news propagation, such as agent personalities and network structures.
We evaluate three countermeasure strategies, finding that brute-force blocking of influential agents in the network, or announcing news accuracy, can effectively mitigate misinformation.
arXiv Detail & Related papers (2024-10-16T23:58:26Z)
- Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance [95.03771007780976]
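The blocking countermeasure described in the news-diffusion paper above can be sketched as a toy independent-cascade simulation. The network shape, transmission probability, and blocking strategy below are illustrative assumptions, not the paper's actual experimental setup.

```python
import random

def spread(adj, seeds, blocked, trials=200, p=0.3, seed=0):
    """Monte-Carlo estimate of how many agents a news item reaches.
    `blocked` is a set of agents removed from the network, modeling
    a brute-force blocking countermeasure."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        infected = set(seeds) - blocked
        frontier = list(infected)
        while frontier:
            node = frontier.pop()
            for nb in adj.get(node, []):
                # each exposure transmits independently with prob p
                if nb not in infected and nb not in blocked and rng.random() < p:
                    infected.add(nb)
                    frontier.append(nb)
        total += len(infected)
    return total / trials

# Star-shaped network: one influential hub (node 0) connected to everyone.
adj = {0: list(range(1, 11))}
for i in range(1, 11):
    adj[i] = [0]

baseline = spread(adj, seeds={1}, blocked=set())
mitigated = spread(adj, seeds={1}, blocked={0})  # block the hub
```

Blocking the hub cuts the cascade off at the source's only outgoing edge, so `mitigated` collapses to the seed alone, illustrating why removing influential agents is an effective (if blunt) countermeasure.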
We tackle the challenge of developing proactive agents capable of anticipating and initiating tasks without explicit human instructions. First, we collect real-world human activities to generate proactive task predictions. These predictions are labeled by human annotators as either accepted or rejected. The labeled data is used to train a reward model that simulates human judgment.
arXiv Detail & Related papers (2024-10-16T08:24:09Z)
- AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents [27.10147264744531]
We study how Large Language Model (LLM)-based agents navigate scenarios in a multi-turn interactive setting. We develop a truthfulness detector inspired by psychological literature to assess the agents' responses. Our experiment demonstrates that all models are truthful less than 50% of the time, though truthfulness and goal achievement (utility) rates vary across models.
arXiv Detail & Related papers (2024-09-13T17:41:12Z)
- Mental Modeling of Reinforcement Learning Agents by Language Models [14.668006477454616]
This study empirically examines, for the first time, how well large language models can build a mental model of agents.
This research may unveil the potential of leveraging LLMs for elucidating RL agent behaviour.
arXiv Detail & Related papers (2024-06-26T17:14:45Z)
- Knowledge Boundary and Persona Dynamic Shape A Better Social Media Agent [69.12885360755408]
We construct a social media agent based on personalized knowledge and dynamic persona information.
For personalized knowledge, we add external knowledge sources and match them with the persona information of agents, thereby giving the agent personalized world knowledge.
For dynamic persona information, we use current action information to internally retrieve the persona information of the agent, thereby reducing the interference of diverse persona information on the current action.
arXiv Detail & Related papers (2024-03-28T10:01:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.