Automated Fake News Detection using cross-checking with reliable sources
- URL: http://arxiv.org/abs/2201.00083v1
- Date: Sat, 1 Jan 2022 00:59:58 GMT
- Title: Automated Fake News Detection using cross-checking with reliable sources
- Authors: Zahra Ghadiri, Milad Ranjbar, Fakhteh Ghanbarnejad, Sadegh Raeisi
- Abstract summary: We use natural human behavior to cross-check new information with reliable sources.
We implement this for Twitter and build a model that flags fake tweets.
Our implementation of this approach achieves $70\%$ accuracy, outperforming other generic fake-news classification models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decade, fake news and misinformation have turned into a major
problem that has impacted different aspects of our lives, including politics
and public health. Inspired by natural human behavior, we present an approach
that automates the detection of fake news. Natural human behavior is to
cross-check new information with reliable sources. We use Natural Language
Processing (NLP) and build a machine learning (ML) model that automates the
process of cross-checking new information with a set of predefined reliable
sources. We implement this for Twitter and build a model that flags fake
tweets. Specifically, for a given tweet, we use its text to find relevant news
from reliable news agencies. We then train a Random Forest model that checks if
the textual content of the tweet is aligned with the trusted news. If it is
not, the tweet is classified as fake. This approach can be generally applied to
any kind of information and is not limited to a specific news story or a
category of information. Our implementation of this approach achieves $70\%$
accuracy, outperforming other generic fake-news classification models. These
results pave the way towards a more sensible and natural approach to fake news
detection.
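The cross-checking pipeline described above (retrieve trusted news relevant to a tweet, then decide whether the tweet aligns with it) can be sketched with a simple bag-of-words similarity. This is a minimal stand-in, not the paper's implementation: the authors train a Random Forest on alignment features, whereas here a fixed similarity threshold and the helper names `cosine_similarity` and `flag_tweet` are illustrative assumptions.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_tweet(tweet: str, trusted_headlines: list[str],
               threshold: float = 0.3) -> str:
    """Cross-check a tweet against headlines from reliable sources.
    If no trusted headline is sufficiently similar, flag it as fake.
    (The paper replaces this fixed threshold with a trained
    Random Forest over alignment features.)"""
    best = max((cosine_similarity(tweet, h) for h in trusted_headlines),
               default=0.0)
    return "real" if best >= threshold else "fake"

headlines = ["New vaccine shown effective in large trial"]
print(flag_tweet("vaccine shown effective in trial", headlines))  # → real
```

In practice the retrieval step would query news-agency archives and the similarity features would feed a classifier rather than a hard threshold.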
Related papers
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis of the usage of cross-lingual evidence as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- How Vulnerable Are Automatic Fake News Detection Methods to Adversarial Attacks? [0.6882042556551611]
This paper shows that it is possible to automatically attack state-of-the-art models that have been trained to detect Fake News.
The results show that it is possible to automatically bypass Fake News detection mechanisms, leading to implications concerning existing policy initiatives.
arXiv Detail & Related papers (2021-07-16T15:36:03Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- How does Truth Evolve into Fake News? An Empirical Study of Fake News Evolution [55.27685924751459]
We present the Fake News Evolution dataset: a new dataset tracking the fake news evolution process.
Our dataset is composed of 950 paired data, each of which consists of articles representing the truth, the fake news, and the evolved fake news.
We track several features across the evolution: disinformation techniques, text similarity, top-10 keywords, classification accuracy, parts of speech, and sentiment properties.
arXiv Detail & Related papers (2021-03-10T09:01:34Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Connecting the Dots Between Fact Verification and Fake News Detection [21.564628184287173]
We propose a simple yet effective approach to connect the dots between fact verification and fake news detection.
Our approach makes use of the recent success of fact verification models and enables zero-shot fake news detection.
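The zero-shot idea in this entry can be sketched by treating trusted evidence as premises and the claim as a hypothesis: a claim no evidence entails is flagged. The `entailment_score` helper below is a toy word-overlap stand-in for a real trained fact-verification (NLI) model; both function names and the threshold are assumptions for illustration.

```python
def entailment_score(premise: str, hypothesis: str) -> float:
    """Toy stand-in for a trained fact-verification model:
    fraction of hypothesis words that appear in the premise.
    A real system would use a pretrained entailment classifier."""
    h = set(hypothesis.lower().split())
    p = set(premise.lower().split())
    return len(h & p) / len(h) if h else 0.0

def zero_shot_detect(claim: str, evidence: list[str],
                     threshold: float = 0.5) -> str:
    """Label a claim 'fake' when no evidence sentence entails it;
    no fake-news training data is needed, only the verifier."""
    best = max((entailment_score(e, claim) for e in evidence), default=0.0)
    return "real" if best >= threshold else "fake"
```

The key design point is that the detector reuses a verifier trained on fact-checking data, so no labeled fake-news examples are required.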
arXiv Detail & Related papers (2020-10-11T09:28:52Z)
- Modeling the spread of fake news on Twitter [2.7910505923792637]
We propose a point process model of the spread of fake news on Twitter.
We show that the proposed model is superior to the current state-of-the-art methods in accurately predicting the evolution of the spread of a fake news item.
The proposed model contributes to understanding the dynamics of the spread of fake news on social media.
arXiv Detail & Related papers (2020-07-28T08:28:16Z)
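The last entry names a point process model of spread without specifying its form; a self-exciting Hawkes process with an exponential kernel is a common choice for retweet cascades, sketched below. The function name and parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def hawkes_intensity(t: float, events: list[float],
                     mu: float = 0.1, alpha: float = 0.5,
                     beta: float = 1.0) -> float:
    """Conditional intensity lambda(t) of a Hawkes process:
    a baseline rate mu plus excitation alpha * exp(-beta * (t - ti))
    contributed by every past event (e.g. retweet) at time ti < t."""
    return mu + alpha * sum(math.exp(-beta * (t - ti))
                            for ti in events if ti < t)

# A retweet at t=1.0 temporarily raises the expected event rate.
print(hawkes_intensity(1.1, [1.0]))  # ≈ 0.552, decaying back toward mu
```

Each event raises the short-term probability of further shares, which is what lets such models predict how a cascade will evolve.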
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.