Modeling the spread of fake news on Twitter
- URL: http://arxiv.org/abs/2007.14059v2
- Date: Tue, 27 Apr 2021 05:15:52 GMT
- Title: Modeling the spread of fake news on Twitter
- Authors: Taichi Murayama, Shoko Wakamiya, Eiji Aramaki and Ryota Kobayashi
- Abstract summary: We propose a point process model of the spread of fake news on Twitter.
We show that the proposed model is superior to the current state-of-the-art methods in accurately predicting the evolution of the spread of a fake news item.
The proposed model contributes to understanding the dynamics of the spread of fake news on social media.
- Score: 2.7910505923792637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fake news can have a significant negative impact on society because of the
growing use of mobile devices and the worldwide increase in Internet access. It
is therefore essential to develop a simple mathematical model to understand the
online dissemination of fake news. In this study, we propose a point process
model of the spread of fake news on Twitter. The proposed model describes the
spread of a fake news item as a two-stage process: initially, fake news spreads
as a piece of ordinary news; then, when most users start recognizing the
falsity of the news item, that itself spreads as another news story. We
validate this model using two datasets of fake news items spread on Twitter. We
show that the proposed model is superior to the current state-of-the-art
methods in accurately predicting the evolution of the spread of a fake news
item. Moreover, a text analysis suggests that our model appropriately infers
the correction time, i.e., the moment when Twitter users start realizing the
falsity of the news item. The proposed model contributes to understanding the
dynamics of the spread of fake news on social media. Its ability to extract a
compact representation of the spreading pattern could be useful in the
detection and mitigation of fake news.
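The abstract describes the two-stage mechanism only verbally. Below is a minimal illustrative sketch, assuming a Hawkes-type intensity whose infectiousness parameter drops from a pre-correction value to a post-correction value at the correction time t_c; the exponential memory kernel, the parameter names, and the numerical values are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def intensity(t, event_times, followers, t_c, p1=0.6, p2=0.1, tau=2.0):
    """Hawkes-style retweet intensity at time t (hours since the first tweet).

    event_times : times of past (re)tweets
    followers   : follower counts of the corresponding users (event marks)
    t_c         : correction time; infectiousness drops once it has passed
    p1, p2      : stage-1 / stage-2 infectiousness (assumed values)
    tau         : time scale of the exponential memory kernel (assumed)
    """
    p = p1 if t < t_c else p2                # two-stage infectiousness
    past = event_times < t
    dt = t - event_times[past]
    kernel = np.exp(-dt / tau) / tau         # simplified reaction-time kernel
    return p * np.sum(followers[past] * kernel)

# Toy usage: an early burst of retweets, then quieter activity after the correction.
times = np.array([0.0, 0.3, 0.5, 1.2, 2.0, 6.5, 7.0])
marks = np.array([800, 120, 40, 300, 60, 20, 15])
for t in (1.0, 3.0, 8.0):
    print(f"lambda({t}) = {intensity(t, times, marks, t_c=5.0):.1f}")
```

Fitting such a model to an observed retweet sequence would amount to choosing the stage parameters and the correction time that maximize the point-process likelihood, which is one way the compact representation of the spreading pattern mentioned in the abstract could be obtained.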
Related papers
- From a Tiny Slip to a Giant Leap: An LLM-Based Simulation for Fake News Evolution [35.82418316346851]
We propose a Fake News evolUtion Simulation framEwork based on large language models (LLMs).
We define four types of agents commonly observed in daily interactions: spreaders, who propagate information; commentators, who provide opinions and interpretations; verifiers, who check the accuracy of information; and bystanders, who passively observe without engaging.
Given the lack of prior work in this area, we developed a FUSE-EVAL evaluation framework to measure the deviation from true news during the fake news evolution process.
arXiv Detail & Related papers (2024-10-24T18:17:16Z) - Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-paraphrased real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern: detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z) - It's All in the Embedding! Fake News Detection Using Document Embeddings [0.6091702876917281]
We propose a new approach that uses document embeddings to build multiple models that accurately label news articles as reliable or fake.
We also present a benchmark on different architectures that detect fake news using binary or multi-labeled classification.
arXiv Detail & Related papers (2023-04-16T13:30:06Z) - Nothing Stands Alone: Relational Fake News Detection with Hypergraph Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interactions among news items, focusing on important news relations through a dual-level attention mechanism.
Our approach yields strong performance and maintains it even with only a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z) - Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z) - Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62% to 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z) - Automated Fake News Detection using cross-checking with reliable sources [0.0]
We mimic the natural human behavior of cross-checking new information against reliable sources.
We implement this for Twitter and build a model that flags fake tweets.
Our implementation of this approach achieves 70% accuracy, outperforming other generic fake-news classification models.
arXiv Detail & Related papers (2022-01-01T00:59:58Z) - Stance Detection with BERT Embeddings for Credibility Analysis of Information on Social Media [1.7616042687330642]
We propose a model for detecting fake news using stance as one of the features along with the content of the article.
Our work interprets the content through automatic feature extraction and an assessment of the relevance of the text pieces.
The experiment conducted on the real-world dataset indicates that our model outperforms the previous work and enables fake news detection with an accuracy of 95.32%.
arXiv Detail & Related papers (2021-05-21T10:46:43Z) - User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Connecting the Dots Between Fact Verification and Fake News Detection [21.564628184287173]
We propose a simple yet effective approach to connect the dots between fact verification and fake news detection.
Our approach makes use of the recent success of fact verification models and enables zero-shot fake news detection.
arXiv Detail & Related papers (2020-10-11T09:28:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.