MVAN: Multi-View Attention Networks for Fake News Detection on Social Media
- URL: http://arxiv.org/abs/2506.01627v1
- Date: Mon, 02 Jun 2025 13:05:23 GMT
- Title: MVAN: Multi-View Attention Networks for Fake News Detection on Social Media
- Authors: Shiwen Ni, Jiawen Li, Hung-Yu Kao
- Abstract summary: Existing fake news detection methods focus on finding clues in long text content. This paper addresses fake news detection in a more realistic scenario. We develop a novel neural-network-based model, Multi-View Attention Networks (MVAN).
- Score: 24.17395475682138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fake news on social media is a widespread and serious problem in today's society. Existing fake news detection methods focus on finding clues in long text content, such as original news articles and user comments. This paper addresses fake news detection in a more realistic scenario: only the source short-text tweet and its retweet users are provided, without user comments. We develop a novel neural-network-based model, Multi-View Attention Networks (MVAN), to detect fake news and provide explanations on social media. The MVAN model includes text semantic attention and propagation structure attention, which ensures that our model can capture information and clues from both the source tweet content and the propagation structure. In addition, the two attention mechanisms in the model can find key clue words in fake news texts and suspicious users in the propagation structure. We conduct experiments on two real-world datasets, and the results demonstrate that MVAN significantly outperforms state-of-the-art methods by 2.5% in accuracy on average, and produces a reasonable explanation.
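To make the two-view design concrete, below is a minimal PyTorch sketch of a model in the spirit of MVAN, assuming the propagation structure is reduced to an ordered sequence of retweet-user IDs. The class name, layer sizes, and mean-pool fusion are illustrative assumptions for this listing, not the authors' released implementation.

```python
# Minimal sketch of a two-view attention classifier: self-attention over
# source-tweet tokens (text semantic view) plus self-attention over the
# retweet-user sequence (a proxy for the propagation-structure view).
# Illustrative only; hyperparameters and names are assumptions.
import torch
import torch.nn as nn


class TwoViewAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, n_users, dim=128, heads=4, n_classes=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.user_emb = nn.Embedding(n_users, dim, padding_idx=0)
        # View 1: attention over source-tweet tokens.
        self.text_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # View 2: attention over the ordered retweet-user sequence.
        self.prop_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, token_ids, user_ids):
        t = self.tok_emb(token_ids)               # (B, L_text, dim)
        u = self.user_emb(user_ids)               # (B, L_users, dim)
        t_ctx, t_w = self.text_attn(t, t, t)      # t_w: token-level clue weights
        u_ctx, u_w = self.prop_attn(u, u, u)      # u_w: user-level suspicion weights
        # Mean-pool each view and fuse by concatenation.
        fused = torch.cat([t_ctx.mean(dim=1), u_ctx.mean(dim=1)], dim=-1)
        return self.classifier(fused), t_w, u_w   # logits + attention maps


# Usage with random toy inputs (batch of 8 tweets, 30 tokens, 50 retweeters):
model = TwoViewAttentionClassifier(vocab_size=10_000, n_users=5_000)
logits, text_w, prop_w = model(torch.randint(1, 10_000, (8, 30)),
                               torch.randint(1, 5_000, (8, 50)))
```

The returned attention maps are what makes such a model explainable: high-weight tokens point to key clue words, and high-weight users point to suspicious accounts in the cascade.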
Related papers
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern: detectors trained exclusively on human-written articles can perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z) - WSDMS: Debunk Fake News via Weakly Supervised Detection of Misinforming
Sentences with Contextualized Social Wisdom [13.92421433941043]
We investigate a novel task in the field of fake news debunking, which involves detecting sentence-level misinformation.
Inspired by the Multiple Instance Learning (MIL) approach, we propose a model called Weakly Supervised Detection of Misinforming Sentences (WSDMS)
We evaluate WSDMS on three real-world benchmarks and demonstrate that it outperforms existing state-of-the-art baselines in debunking fake news at both the sentence and article levels.
arXiv Detail & Related papers (2023-10-25T12:06:55Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information. To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles. Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z) - TieFake: Title-Text Similarity and Emotion-Aware Fake News Detection [15.386007761649251]
We propose a novel Title-Text similarity and emotion-aware Fake news detection (TieFake) method by jointly modeling the multi-modal context information and the author sentiment.
Specifically, we employ BERT and ResNeSt to learn representations for text and images, and utilize a publisher emotion extractor to capture the author's subjective emotion in the news content.
arXiv Detail & Related papers (2023-04-19T04:47:36Z) - Nothing Stands Alone: Relational Fake News Detection with Hypergraph
Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interactions among news items, while focusing on important news relations with a dual-level attention mechanism.
Our approach yields remarkable performance and maintains the high performance even with a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z) - Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z) - Faking Fake News for Real Fake News Detection: Propaganda-loaded
Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z) - FR-Detect: A Multi-Modal Framework for Early Fake News Detection on
Social Media Using Publishers Features [0.0]
Despite the advantages of social media in the news field, the lack of any control and verification mechanism has led to the spread of fake news.
We suggest a highly accurate multi-modal framework, namely FR-Detect, using user-related and content-related features with early detection capability.
Experiments have shown that the publishers' features can improve the performance of content-based models by up to 13% in accuracy and 29% in F1-score.
arXiv Detail & Related papers (2021-09-10T12:39:00Z) - User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z) - Multimodal Fusion with BERT and Attention Mechanism for Fake News
Detection [0.0]
We present a novel method for detecting fake news by fusing multimodal features derived from textual and visual data.
Experimental results show that our approach outperforms the current state-of-the-art method on a public Twitter dataset by 3.1% in accuracy.
arXiv Detail & Related papers (2021-04-23T08:47:54Z) - Modeling the spread of fake news on Twitter [2.7910505923792637]
We propose a point process model of the spread of fake news on Twitter.
We show that the proposed model is superior to the current state-of-the-art methods in accurately predicting the evolution of the spread of a fake news item.
The proposed model contributes to understanding the dynamics of the spread of fake news on social media.
arXiv Detail & Related papers (2020-07-28T08:28:16Z)
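As a rough illustration of what a point-process model of retweet spread looks like, the snippet below computes a textbook self-exciting (Hawkes-style) intensity. The exponential kernel and the parameters mu, alpha, and beta are generic assumptions for illustration, not the specific formulation of the paper above.

```python
# Generic self-exciting (Hawkes-style) intensity for a retweet cascade:
#   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
# Textbook illustration only; mu, alpha, beta are assumed values.
import math

def hawkes_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    """Instantaneous retweet rate at time t given past retweet times."""
    excitation = sum(alpha * math.exp(-beta * (t - ti))
                     for ti in event_times if ti < t)
    return mu + excitation

# Example: retweets at t = 0.0, 0.5, 2.0 hours; predicted rate at t = 2.1 hours.
print(hawkes_intensity(2.1, [0.0, 0.5, 2.0]))
```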