Graph Neural Networks with Continual Learning for Fake News Detection from Social Media
- URL: http://arxiv.org/abs/2007.03316v2
- Date: Fri, 14 Aug 2020 07:49:20 GMT
- Title: Graph Neural Networks with Continual Learning for Fake News Detection from Social Media
- Authors: Yi Han, Shanika Karunasekera, Christopher Leckie
- Abstract summary: We use graph neural networks (GNNs) to differentiate between the propagation patterns of fake and real news on social media.
We show that, without relying on any text information, GNNs can achieve performance comparable or superior to state-of-the-art methods.
We propose a method that achieves balanced performance on both existing and new datasets, by using techniques from continual learning to train GNNs incrementally.
- Score: 18.928184473686567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although significant effort has been devoted to fact-checking, the prevalence
of fake news on social media, which has a profound impact on justice, public
trust and our society, remains a serious problem. In this work, we focus on
propagation-based fake news detection, as recent studies have demonstrated that
fake news and real news spread differently online. Specifically, considering
the capability of graph neural networks (GNNs) in dealing with non-Euclidean
data, we use GNNs to differentiate between the propagation patterns of fake and
real news on social media. In particular, we concentrate on two questions: (1)
Without relying on any text information, e.g., tweet content, replies and user
descriptions, how accurately can GNNs identify fake news? Machine learning
models are known to be vulnerable to adversarial attacks, and avoiding the
dependence on text-based features can make the model less susceptible to the
manipulation of advanced fake news fabricators. (2) How to deal with new,
unseen data? In other words, how does a GNN trained on a given dataset perform
on a new and potentially vastly different dataset? If it achieves
unsatisfactory performance, how do we solve the problem without re-training the
model on the entire data from scratch? We study the above questions on two
datasets with thousands of labelled news items, and our results show that: (1)
GNNs can achieve performance comparable or superior to state-of-the-art
methods without using any text information. (2) GNNs trained on a given dataset
may perform poorly on new, unseen data, and direct incremental training cannot
solve the problem; this issue has not been addressed in previous work that
applies GNNs for fake news detection. In order to solve the problem, we propose
a method that achieves balanced performance on both existing and new datasets,
by using techniques from continual learning to train GNNs incrementally.
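To make the propagation-based setup concrete, here is a minimal sketch (an illustration under assumed tooling, not the authors' released code): each news item is represented by its propagation cascade on social media, node features are user-level attributes rather than tweet text, and a standard graph classifier built with PyTorch Geometric predicts fake versus real.

```python
# Minimal sketch of a propagation-based fake news classifier (assumes PyTorch
# Geometric; each news item is a Data object whose nodes are the users in its
# propagation cascade, with non-textual user features in data.x).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class PropagationGNN(torch.nn.Module):
    def __init__(self, num_user_features, hidden=64, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(num_user_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.classifier = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        # Message passing over the propagation graph (no tweet text involved).
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        # Pool node embeddings into one vector per news item, then classify.
        return self.classifier(global_mean_pool(h, batch))
```

For the second question, the sketch below (continuing the imports above) uses an Elastic Weight Consolidation (EWC)-style penalty as one representative continual-learning technique for training the same model incrementally on a new dataset without forgetting the old one; the paper's exact method may differ, and the Fisher-information terms are assumed to have been estimated on the previously seen data.

```python
# Hedged sketch of incremental training with an EWC-style penalty; `fisher`
# and `old_params` are dicts keyed by parameter name, assumed to have been
# computed on the previously seen dataset.
def ewc_loss(model, logits, labels, fisher, old_params, lam=100.0):
    loss = F.cross_entropy(logits, labels)
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            # Penalise drift of weights that were important on the old data.
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return loss + lam * penalty


def train_incremental(model, new_loader, fisher, old_params, epochs=5, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in new_loader:  # mini-batches of graphs from the new dataset
            optimizer.zero_grad()
            logits = model(batch.x, batch.edge_index, batch.batch)
            loss = ewc_loss(model, logits, batch.y, fisher, old_params)
            loss.backward()
            optimizer.step()
```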
Related papers
- A Semi-supervised Fake News Detection using Sentiment Encoding and LSTM with Self-Attention [0.0]
We propose a semi-supervised self-learning method in which sentiment analysis is obtained from state-of-the-art pretrained models.
Our learning model is trained in a semi-supervised fashion and incorporates LSTM with self-attention layers.
We benchmark our model on a dataset of 20,000 news items along with their user feedback, and it shows better precision, recall, and F-measure than competitive fake news detection methods.
arXiv Detail & Related papers (2024-07-27T20:00:10Z)
- Nothing Stands Alone: Relational Fake News Detection with Hypergraph Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interactions among news items, focusing on important news relations through a dual-level attention mechanism.
Our approach yields strong performance and maintains it even with a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z)
- Combination Of Convolution Neural Networks And Deep Neural Networks For Fake News Detection [0.0]
We have described the Fake News Challenge stage #1 dataset and given an overview of the competitive attempts to build a fake news detection system.
The proposed system detects all categories with high accuracy except the "disagree" category.
It achieves up to 84.6% accuracy, ranking second among comparable studies.
arXiv Detail & Related papers (2022-10-15T16:32:51Z)
- Fake News Quick Detection on Dynamic Heterogeneous Information Networks [3.599616699656401]
We propose a novel Dynamic Heterogeneous Graph Neural Network (DHGNN) for fake news quick detection.
We first apply BERT and a fine-tuned BERT to obtain semantic representations of the news article contents and author profiles.
Then, we construct the heterogeneous news-author graph to reflect contextual information and relationships.
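As a rough sketch of this kind of pipeline (illustrative only; the model names, edge types and dimensions below are assumptions, not the DHGNN implementation), one can obtain BERT representations with HuggingFace transformers and place them on a heterogeneous news-author graph using PyTorch Geometric's HeteroData:

```python
# Hedged sketch of a BERT-embedding + heterogeneous news-author graph pipeline;
# the model checkpoint, placeholder texts and edge indices are illustrative.
import torch
from transformers import AutoTokenizer, AutoModel
from torch_geometric.data import HeteroData

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # Mean-pooled BERT representations for a list of strings.
    toks = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**toks).last_hidden_state
    return out.mean(dim=1)

news_texts = ["example article body ..."]      # placeholder news contents
author_bios = ["example author profile ..."]   # placeholder author profiles

graph = HeteroData()
graph["news"].x = embed(news_texts)
graph["author"].x = embed(author_bios)
# Edge: author 0 wrote news 0 (indices are illustrative).
graph["author", "writes", "news"].edge_index = torch.tensor([[0], [0]])
```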
arXiv Detail & Related papers (2022-05-14T11:23:25Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
However, GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- A comparative analysis of Graph Neural Networks and commonly used machine learning algorithms on fake news detection [0.0]
Low cost, simple accessibility via social platforms, and a plethora of low-budget online news sources are some of the factors that contribute to the spread of false news.
Most existing fake news detection algorithms focus solely on the news content.
However, engaged users' prior posts and social activities provide a wealth of information about their views on news and can significantly improve fake news identification.
arXiv Detail & Related papers (2022-03-26T18:40:03Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
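For readers who want to experiment with this line of work, a UPFD benchmark derived from it is packaged in PyTorch Geometric; the snippet below is a usage sketch, with the dataset name, feature type and paths as illustrative choices.

```python
# Hedged usage sketch: loading the UPFD benchmark as packaged in PyTorch
# Geometric. Dataset name, node-feature type and root path are example choices.
from torch_geometric.datasets import UPFD
from torch_geometric.loader import DataLoader

train_set = UPFD(root="data/UPFD", name="politifact", feature="bert", split="train")
test_set = UPFD(root="data/UPFD", name="politifact", feature="bert", split="test")

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
for batch in train_loader:
    # Each graph is one news propagation tree; batch.y holds fake/real labels.
    print(batch.num_graphs, batch.y.shape)
    break
```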
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Adversarial Active Learning based Heterogeneous Graph Neural Network for Fake News Detection [18.847254074201953]
We propose a novel fake news detection framework, namely Adversarial Active Learning-based Heterogeneous Graph Neural Network (AA-HGNN).
AA-HGNN utilizes an active learning framework to enhance learning performance, especially when facing the paucity of labeled data.
Experiments with two real-world fake news datasets show that our model can outperform text-based models and other graph-based models.
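The active-learning idea can be illustrated with a generic uncertainty-sampling loop (a simplified stand-in, not the adversarial AA-HGNN procedure): the current classifier scores the unlabeled news graphs, and the least confident ones are sent for labeling.

```python
# Generic uncertainty-based active learning sketch; `model` is assumed to be a
# graph-level classifier taking (x, edge_index, batch), e.g. the earlier GNN.
import torch
import torch.nn.functional as F


def select_queries(model, unlabeled_graphs, k=10):
    """Return indices of the k news graphs the model is least confident about."""
    model.eval()
    scores = []
    with torch.no_grad():
        for i, g in enumerate(unlabeled_graphs):
            batch = torch.zeros(g.num_nodes, dtype=torch.long)  # single-graph batch
            probs = F.softmax(model(g.x, g.edge_index, batch), dim=-1).squeeze(0)
            scores.append((probs.max().item(), i))  # low max-probability = uncertain
    scores.sort()                                   # most uncertain first
    return [i for _, i in scores[:k]]
```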
arXiv Detail & Related papers (2021-01-27T05:05:25Z)
- Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters [78.53851936180348]
We introduce two types of camouflages based on recent empirical studies, i.e., the feature camouflage and the relation camouflage.
Existing GNNs have not addressed these two camouflages, which results in their poor performance in fraud detection problems.
We propose a new model named CAmouflage-REsistant GNN (CARE-GNN) to enhance the GNN aggregation process with three unique modules against camouflages.
arXiv Detail & Related papers (2020-08-19T22:33:12Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.