Leveraging Multi-Source Weak Social Supervision for Early Detection of
Fake News
- URL: http://arxiv.org/abs/2004.01732v1
- Date: Fri, 3 Apr 2020 18:26:33 GMT
- Title: Leveraging Multi-Source Weak Social Supervision for Early Detection of
Fake News
- Authors: Kai Shu, Guoqing Zheng, Yichuan Li, Subhabrata Mukherjee, Ahmed Hassan
Awadallah, Scott Ruston, Huan Liu
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media has greatly enabled people to participate in online activities
at an unprecedented rate. However, this unrestricted access also exacerbates
the spread of misinformation and fake news online, which can cause confusion
and chaos unless it is detected early and mitigated. Given the rapidly
evolving nature of news events and the limited amount of annotated data,
state-of-the-art fake news detection systems face challenges for early
detection, when large numbers of annotated training instances are hard to
come by. In this work, we exploit multiple weak signals from different
sources given by user and content engagements (referred to as weak social
supervision), and their complementary utilities, to detect fake news. We jointly
leverage the limited amount of clean data along with weak signals from social
engagements to train deep neural networks in a meta-learning framework that
estimates the quality of different weak instances. Experiments on real-world
datasets demonstrate that the proposed framework outperforms state-of-the-art
baselines for early detection of fake news without using any user engagements
at prediction time.
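The abstract describes two key ideas: deriving weak labels from multiple social-engagement sources, and estimating the quality of those weak instances. As a minimal sketch of the first idea, the following illustrates how hypothetical engagement signals (average sharer credibility, engagement sentiment, audience bias) could each act as a weak labeling rule and be combined by majority vote. All function names, signal fields, and thresholds here are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: multi-source weak social supervision via simple labeling rules.
# Labels: 1 = fake, 0 = real. Thresholds are illustrative assumptions.

def weak_label_credibility(avg_user_credibility, threshold=0.5):
    """Weak rule: news shared mostly by low-credibility users is suspect."""
    return 1 if avg_user_credibility < threshold else 0

def weak_label_sentiment(sentiment_polarity, threshold=-0.3):
    """Weak rule: strongly negative engagement sentiment is suspect."""
    return 1 if sentiment_polarity < threshold else 0

def weak_label_bias(bias_score, threshold=0.7):
    """Weak rule: a highly biased sharing audience is suspect."""
    return 1 if bias_score > threshold else 0

def combine_weak_sources(signals):
    """Combine the weak sources for one news item by majority vote."""
    votes = [
        weak_label_credibility(signals["credibility"]),
        weak_label_sentiment(signals["sentiment"]),
        weak_label_bias(signals["bias"]),
    ]
    return 1 if sum(votes) >= 2 else 0

# Example item: low-credibility sharers, negative sentiment, biased audience.
item = {"credibility": 0.3, "sentiment": -0.5, "bias": 0.8}
print(combine_weak_sources(item))  # all three rules fire -> 1 (fake)
```

In the paper's framework, such weak labels are not combined by a fixed vote; instead, a meta-learning procedure uses the small clean dataset to estimate a quality weight for each weakly labeled instance during training.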
Related papers
- Revisiting Fake News Detection: Towards Temporality-aware Evaluation by Leveraging Engagement Earliness [22.349521957987672]
Social graph-based fake news detection aims to identify news articles containing false information by utilizing social contexts.
We formalize a more realistic evaluation scheme that mimics real-world scenarios.
We show that the discriminative capabilities of conventional methods decrease sharply under this new setting.
arXiv Detail & Related papers (2024-11-19T05:08:00Z) - A Semi-supervised Fake News Detection using Sentiment Encoding and LSTM with Self-Attention [0.0]
We propose a semi-supervised self-learning method in which sentiment features are extracted by state-of-the-art pretrained models.
Our learning model is trained in a semi-supervised fashion and incorporates LSTM with self-attention layers.
We benchmark our model on a dataset of 20,000 news articles along with their feedback; it shows better performance in precision, recall, and related measures compared to competitive methods in fake news detection.
arXiv Detail & Related papers (2024-07-27T20:00:10Z) - Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News
Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z) - Nothing Stands Alone: Relational Fake News Detection with Hypergraph
Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interaction among news, while focusing on important news relations with its dual-level attention mechanism.
Our approach yields remarkable performance and maintains the high performance even with a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z) - SOK: Fake News Outbreak 2021: Can We Stop the Viral Spread? [5.64512235559998]
Social networks' omnipresence and ease of use have revolutionized the generation and distribution of information in today's world.
Unlike traditional media channels, social networks facilitate faster and wider spread of disinformation and misinformation.
The viral spread of false information has serious implications for the behaviors, attitudes, and beliefs of the public.
arXiv Detail & Related papers (2021-05-22T09:26:13Z) - User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z) - Weak Supervision for Fake News Detection via Reinforcement Learning [34.448503443582396]
We propose WeFEND, a weakly-supervised fake news detection framework.
The proposed framework consists of three main components: the annotator, the reinforced selector and the fake news detector.
We tested the proposed framework on a large collection of news articles published via WeChat official accounts and associated user reports.
arXiv Detail & Related papers (2019-12-28T21:20:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.