Simulating Misinformation Propagation in Social Networks using Large Language Models
- URL: http://arxiv.org/abs/2511.10384v1
- Date: Fri, 14 Nov 2025 01:48:18 GMT
- Title: Simulating Misinformation Propagation in Social Networks using Large Language Models
- Authors: Raj Gaurav Maurya, Vaibhav Shukla, Raj Abhijit Dandekar, Rajat Dandekar, Sreedath Panat
- Abstract summary: Misinformation on social media thrives on surprise, emotion, and identity-driven reasoning, often amplified through human cognitive biases. To investigate these mechanisms, we model large language model (LLM) personas as synthetic agents that mimic user-level biases, ideological alignments, and trust heuristics. Within this setup, we introduce an auditor-node framework to simulate and analyze how misinformation evolves as it circulates through networks of such agents.
- Score: 4.285464959472458
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Misinformation on social media thrives on surprise, emotion, and identity-driven reasoning, often amplified through human cognitive biases. To investigate these mechanisms, we model large language model (LLM) personas as synthetic agents that mimic user-level biases, ideological alignments, and trust heuristics. Within this setup, we introduce an auditor-node framework to simulate and analyze how misinformation evolves as it circulates through networks of such agents. News articles are propagated across networks of persona-conditioned LLM nodes, each rewriting received content. A question-answering-based auditor then measures factual fidelity at every step, offering interpretable, claim-level tracking of misinformation drift. We formalize a misinformation index and a misinformation propagation rate to quantify factual degradation across homogeneous and heterogeneous branches of up to 30 sequential rewrites. Experiments with 21 personas across 10 domains reveal that identity- and ideology-based personas act as misinformation accelerators, especially in politics, marketing, and technology. By contrast, expert-driven personas preserve factual stability. Controlled-random branch simulations further show that once early distortions emerge, heterogeneous persona interactions rapidly escalate misinformation to propaganda-level distortion. Our taxonomy of misinformation severity, spanning factual errors, lies, and propaganda, connects observed drift to established theories in misinformation studies. These findings demonstrate the dual role of LLMs as both proxies for human-like biases and as auditors capable of tracing information fidelity. The proposed framework provides an interpretable, empirically grounded approach for studying, simulating, and mitigating misinformation diffusion in digital ecosystems.
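As a concrete illustration of the pipeline the abstract describes, the following is a minimal Python sketch of one propagation branch: an article is rewritten node by node, and a question-answering auditor scores claim preservation after each rewrite. The function names (rewrite_as_persona, audit_claims) and the exact index definition are illustrative assumptions based on the abstract, not the paper's actual implementation.

```python
# Illustrative sketch of the auditor-node loop (assumed interfaces,
# not the paper's code). A persona node rewrites the text; an auditor
# checks which source claims survive each rewrite.

def rewrite_as_persona(text: str, persona: str) -> str:
    """Placeholder for an LLM call that rewrites `text` with the
    biases of `persona` (hypothetical interface)."""
    raise NotImplementedError

def audit_claims(source_claims: list[str], text: str) -> list[bool]:
    """Placeholder for the QA-based auditor: for each claim extracted
    from the source article, answer whether `text` still supports it
    (hypothetical interface)."""
    raise NotImplementedError

def simulate_branch(article: str, source_claims: list[str],
                    personas: list[str]) -> list[float]:
    """Propagate `article` through a chain of persona nodes (the paper
    uses branches of up to 30 sequential rewrites) and record an
    assumed misinformation index after each step."""
    text, index_trace = article, []
    for persona in personas:
        text = rewrite_as_persona(text, persona)
        preserved = audit_claims(source_claims, text)
        # Assumed definition: fraction of source claims no longer
        # supported after this rewrite.
        index_trace.append(1.0 - sum(preserved) / len(source_claims))
    return index_trace
```

Under this reading, the misinformation propagation rate could be estimated as the average per-step change in the index along a branch; the paper's formal definitions may differ.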
Related papers
- Simulating Misinformation Vulnerabilities With Agent Personas [1.0120858915885353]
We develop an agent-based simulation using Large Language Models to model responses to misinformation. We construct agent personas spanning five professions and three mental schemas, and evaluate their reactions to news headlines. Our findings show that LLM-generated agents align closely with ground-truth labels and human predictions, supporting their use as proxies for studying information responses.
arXiv Detail & Related papers (2025-10-31T18:44:00Z) - MPCG: Multi-Round Persona-Conditioned Generation for Modeling the Evolution of Misinformation with LLMs [13.91292293823499]
Current misinformation detection approaches implicitly assume that misinformation is static. We introduce MPCG, a multi-round, persona-conditioned framework that simulates how claims are iteratively reinterpreted by agents with distinct ideological perspectives.
arXiv Detail & Related papers (2025-09-20T07:40:48Z) - Assessing the Potential of Generative Agents in Crowdsourced Fact-Checking [7.326813521586858]
Large Language Models (LLMs) have shown strong performance across fact-checking tasks. This paper investigates whether generative agents can meaningfully contribute to fact-checking tasks traditionally reserved for human crowds. Agent crowds outperform human crowds in truthfulness classification, exhibit higher internal consistency, and show reduced susceptibility to social and cognitive biases.
arXiv Detail & Related papers (2025-04-24T18:49:55Z) - Epidemiology-informed Network for Robust Rumor Detection [59.89351792706995]
We propose a novel Epidemiology-informed Network (EIN) that integrates epidemiological knowledge to enhance performance. To adapt epidemiology theory to rumor detection, each user's stance toward the source information is expected to be annotated. Our experimental results demonstrate that the proposed EIN not only outperforms state-of-the-art methods on real-world datasets but also exhibits enhanced robustness across varying tree depths.
arXiv Detail & Related papers (2024-11-20T00:43:32Z) - MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z) - From Skepticism to Acceptance: Simulating the Attitude Dynamics Toward Fake News [38.990330255607276]
We introduce a fake news propagation simulation framework based on large language models (LLMs). Our simulation results uncover patterns in fake news propagation related to topic relevance and individual traits, aligning with real-world observations.
arXiv Detail & Related papers (2024-03-14T15:40:13Z) - Addressing contingency in algorithmic (mis)information classification: Toward a responsible machine learning agenda [0.9659642285903421]
Data scientists need to take a stance on the objectivity, authoritativeness, and legitimacy of the "sources of truth" used for model training and testing.
Despite (and due to) their reported high accuracy and performance, ML-driven moderation systems have the potential to shape online public debate and create downstream negative impacts such as undue censorship and the reinforcement of false beliefs.
arXiv Detail & Related papers (2022-10-05T17:34:51Z) - Rumor Detection with Self-supervised Learning on Texts and Social Graph [101.94546286960642]
We propose contrastive self-supervised learning on heterogeneous information sources, so as to reveal their relations and characterize rumors better.
We term this framework Self-supervised Rumor Detection (SRD); a generic sketch of such a contrastive objective appears after this list.
Extensive experiments on three real-world datasets validate the effectiveness of SRD for automatic rumor detection on social media.
arXiv Detail & Related papers (2022-04-19T12:10:03Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create the NeuralNews dataset, composed of four different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z) - Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
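As referenced in the SRD entry above, contrastive self-supervised learning pairs two views of the same item (here, a rumor's text and its social propagation graph) and pulls their embeddings together while pushing apart mismatched pairs. Below is a minimal, generic InfoNCE-style loss in Python/NumPy; the pairing scheme, temperature, and loss form are standard contrastive-learning conventions, not necessarily SRD's exact objective.

```python
import numpy as np

def info_nce_loss(text_emb: np.ndarray, graph_emb: np.ndarray,
                  temperature: float = 0.1) -> float:
    """Generic InfoNCE contrastive loss between paired embeddings.

    Row i of `text_emb` and row i of `graph_emb` are treated as a
    positive pair (the same rumor viewed through its text and its
    propagation graph); all other rows in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    g = graph_emb / np.linalg.norm(graph_emb, axis=1, keepdims=True)
    logits = t @ g.T / temperature                  # (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))      # positives on diagonal
```

For inputs of shape (batch, dim) from paired text and graph encoders, the function returns a scalar that training would minimize, aligning each rumor's two views while separating it from the rest of the batch.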