From Skepticism to Acceptance: Simulating the Attitude Dynamics Toward Fake News
- URL: http://arxiv.org/abs/2403.09498v1
- Date: Thu, 14 Mar 2024 15:40:13 GMT
- Title: From Skepticism to Acceptance: Simulating the Attitude Dynamics Toward Fake News
- Authors: Yuhan Liu, Xiuying Chen, Xiaoqing Zhang, Xing Gao, Ji Zhang, Rui Yan
- Abstract summary: We introduce a fake news propagation simulation framework based on large language models (LLMs).
Our simulation results uncover patterns in fake news propagation related to topic relevance and individual traits, aligning with real-world observations.
- Score: 38.990330255607276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the digital era, the rapid propagation of fake news and rumors via social networks brings notable societal challenges and impacts public opinion regulation. Traditional fake news models typically forecast the general popularity trends of different groups or numerically represent opinion shifts. However, these methods often oversimplify real-world complexities and overlook the rich semantic information of news text. The advent of large language models (LLMs) makes it possible to model the subtle dynamics of opinion. Consequently, in this work, we introduce a Fake news Propagation Simulation framework (FPS) based on LLMs, which studies the trends and control of fake news propagation in detail. Specifically, each agent in the simulation represents an individual with a distinct personality. Agents are equipped with both short-term and long-term memory, as well as a reflective mechanism to mimic human-like thinking. Every day, they engage in random opinion exchanges, reflect on their thinking, and update their opinions. Our simulation results uncover patterns in fake news propagation related to topic relevance and individual traits, aligning with real-world observations. Additionally, we evaluate various intervention strategies and demonstrate that early and appropriately frequent interventions strike a balance between governance cost and effectiveness, offering valuable insights for practical applications. Our study underscores the significant utility and potential of LLMs in combating fake news.
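The daily exchange-reflect-update loop described in the abstract can be sketched as a minimal agent-based simulation. This is an illustrative reconstruction, not the paper's code: the `Agent` class, the `susceptibility` trait, and the weighted-average update that stands in for the LLM-based reflection are all assumptions introduced here for clarity.

```python
import random


class Agent:
    """Illustrative stand-in for an FPS agent (not the paper's implementation).

    Attitude is a float in [-1, 1]: -1 = rejects the fake news, 1 = accepts it.
    """

    def __init__(self, name, susceptibility, attitude=0.0):
        self.name = name
        self.susceptibility = susceptibility  # hypothetical personality trait in [0, 1]
        self.attitude = attitude
        self.short_term = []  # attitudes heard today (short-term memory)
        self.long_term = []   # one reflection per day (long-term memory)

    def hear(self, attitude):
        self.short_term.append(attitude)

    def reflect(self):
        """Nightly reflection: a weighted average toward today's heard opinions
        stands in for the LLM-based reflective update described in the abstract."""
        if self.short_term:
            heard = sum(self.short_term) / len(self.short_term)
            self.attitude += self.susceptibility * (heard - self.attitude)
        self.long_term.append(self.attitude)
        self.short_term = []


def simulate(agents, days, seed=0):
    """Run the daily exchange -> reflect -> update loop and return final attitudes."""
    rng = random.Random(seed)
    for _ in range(days):
        # Random pairwise opinion exchanges each day.
        order = agents[:]
        rng.shuffle(order)
        for a, b in zip(order[::2], order[1::2]):
            a.hear(b.attitude)
            b.hear(a.attitude)
        for agent in agents:
            agent.reflect()
    return [a.attitude for a in agents]
```

With one initial believer among neutral agents, the loop pulls attitudes toward a shared value, mirroring (in toy form) the convergence dynamics the paper studies; interventions could be modeled by injecting corrective attitudes into `short_term` on chosen days.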
Related papers
- Fake News Detection and Manipulation Reasoning via Large Vision-Language Models [38.457805116130004]
This paper introduces a benchmark for fake news detection and manipulation reasoning, referred to as Human-centric and Fact-related Fake News (HFFN).
The benchmark highlights the centrality of humans and high factual relevance, with detailed manual annotations.
A Multi-modal news Detection and Reasoning langUage Model (M-DRUM) is presented that not only judges the authenticity of multi-modal news but also provides analytical reasoning about potential manipulations.
arXiv Detail & Related papers (2024-07-02T08:16:43Z)
- Belief Revision: The Adaptability of Large Language Models Reasoning [63.0281286287648]
We introduce Belief-R, a new dataset designed to test LMs' belief revision ability when presented with new evidence.
Inspired by how humans suppress prior inferences, this task assesses LMs within the newly proposed delta reasoning framework.
We evaluate ~30 LMs across diverse prompting strategies and find that LMs generally struggle to appropriately revise their beliefs in response to new information.
arXiv Detail & Related papers (2024-06-28T09:09:36Z)
- Unveiling the Truth and Facilitating Change: Towards Agent-based Large-scale Social Movement Simulation [43.46328146533669]
Social media has emerged as a cornerstone of social movements, wielding significant influence in driving societal change.
We introduce a hybrid framework HiSim for social media user simulation, wherein users are categorized into two types.
We construct a Twitter-like environment to replicate their response dynamics following trigger events.
arXiv Detail & Related papers (2024-02-26T06:28:54Z)
- Simulating Opinion Dynamics with Networks of LLM-based Agents [7.697132934635411]
We propose a new approach to simulating opinion dynamics based on populations of Large Language Models (LLMs).
Our findings reveal a strong inherent bias in LLM agents towards producing accurate information, leading simulated agents to consensus in line with scientific reality.
After inducing confirmation bias through prompt engineering, however, we observed opinion fragmentation in line with existing agent-based modeling and opinion dynamics research.
arXiv Detail & Related papers (2023-11-16T07:01:48Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z)
- Improving Generalization for Multimodal Fake News Detection [8.595270610973586]
State-of-the-art approaches are usually trained on datasets of smaller size or with a limited set of specific topics.
We propose three models that adopt and fine-tune state-of-the-art multimodal transformers for multimodal fake news detection.
arXiv Detail & Related papers (2023-05-29T20:32:22Z)
- Measuring the Effect of Influential Messages on Varying Personas [67.1149173905004]
We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona might have upon seeing a news message.
The proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response.
This enables more accurate and comprehensive inference on the mental state of the persona.
arXiv Detail & Related papers (2023-05-25T21:01:00Z)
- A Multi-Policy Framework for Deep Learning-Based Fake News Detection [0.31498833540989407]
This work introduces Multi-Policy Statement Checker (MPSC), a framework that automates fake news detection.
MPSC uses deep learning techniques to analyze a statement itself and its related news articles, predicting whether it is seemingly credible or suspicious.
arXiv Detail & Related papers (2022-06-01T21:25:21Z)
- Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can manifest undesirable representational biases, which is potentially dangerous.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.