A Systematic Review on the Detection of Fake News Articles
- URL: http://arxiv.org/abs/2110.11240v1
- Date: Mon, 18 Oct 2021 21:29:11 GMT
- Title: A Systematic Review on the Detection of Fake News Articles
- Authors: Nathaniel Hoy, Theodora Koulouri
- Abstract summary: It has been argued that fake news and the spread of false information pose a threat to societies throughout the world.
To combat this threat, a number of Natural Language Processing (NLP) approaches have been developed.
This paper aims to delineate the approaches for fake news detection that are most performant, identify limitations with existing approaches, and suggest ways these can be mitigated.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It has been argued that fake news and the spread of false information pose a threat to societies throughout the world, from influencing the results of elections to hindering the efforts to manage the COVID-19 pandemic. To combat this threat, a number of Natural Language Processing (NLP) approaches have been developed. These leverage a number of datasets, feature extraction/selection techniques and machine learning (ML) algorithms to detect fake news before it spreads. While these methods are well-documented, there is less evidence regarding their efficacy in this domain. By systematically reviewing the literature, this paper aims to delineate the approaches for fake news detection that are most performant, identify limitations with existing approaches, and suggest ways these can be mitigated. The analysis of the results indicates that Ensemble Methods using a combination of news content and socially-based features are currently the most effective. Finally, it is proposed that future research should focus on developing approaches that address generalisability issues (which, in part, arise from limitations with current datasets), explainability and bias.
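The review's headline finding, that ensemble methods combining news-content and social features are currently the most effective, can be sketched as follows. This is a minimal illustrative example, not any reviewed system's actual pipeline: the sample texts, the social features (share count, account age), and the choice of a random forest are all hypothetical.

```python
# Sketch: ensemble classifier over combined content + social features.
# All data and feature choices below are hypothetical illustrations.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

texts = [
    "Shocking cure doctors don't want you to know",
    "Government confirms new vaccination schedule for autumn",
    "You won't believe what this celebrity said about the election",
    "Central bank publishes quarterly inflation report",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real (toy labels)

# Hypothetical socially-based features per article: [share_count, poster_account_age_days]
social = np.array([
    [5000.0, 12.0],
    [120.0, 2400.0],
    [8000.0, 30.0],
    [60.0, 3100.0],
])

# Content-based features: TF-IDF over the article text.
tfidf = TfidfVectorizer()
X_content = tfidf.fit_transform(texts)

# Combine content and social features into one sparse matrix.
X = hstack([X_content, csr_matrix(social)])

# An ensemble method (random forest) trained on the combined features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)
pred = clf.predict(X)
```

In practice the reviewed systems draw social signals from propagation patterns and user profiles rather than two toy columns, but the structure is the same: concatenate the two feature families and let an ensemble learner weigh them.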
Related papers
- Online Detecting LLM-Generated Texts via Sequential Hypothesis Testing by Betting [14.70496845511859]
We develop an algorithm to quickly and accurately determine whether a source is a large language model (LLM) or a human.
We use the techniques of sequential hypothesis testing by betting to build on existing offline detection techniques.
Experiments were conducted to demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-10-29T17:55:14Z)
- Exploring the Deceptive Power of LLM-Generated Fake News: A Study of Real-World Detection Challenges [21.425647152424585]
We propose a strong fake news attack method called conditional Variational-autoencoder-Like Prompt (VLPrompt).
Unlike current methods, VLPrompt eliminates the need for additional data collection while maintaining contextual coherence.
We conduct experiments, including various detection methods and novel human-study metrics, to assess their performance on our dataset.
arXiv Detail & Related papers (2024-03-27T04:39:18Z)
- Re-Search for The Truth: Multi-round Retrieval-augmented Large Language Models are Strong Fake News Detectors [38.75533934195315]
Large Language Models (LLMs) are known for their remarkable reasoning and generative capabilities.
We introduce a novel retrieval-augmented LLM framework, the first of its kind to automatically and strategically extract key evidence from web sources for claim verification.
Our framework ensures the acquisition of sufficient, relevant evidence, thereby enhancing performance.
arXiv Detail & Related papers (2024-03-14T00:35:39Z)
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection, outperforming prior methods by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we propose a data collection schema and curate a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can be used as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z)
- A Multi-Policy Framework for Deep Learning-Based Fake News Detection [0.31498833540989407]
This work introduces Multi-Policy Statement Checker (MPSC), a framework that automates fake news detection.
MPSC uses deep learning techniques to analyze a statement itself and its related news articles, predicting whether it is seemingly credible or suspicious.
arXiv Detail & Related papers (2022-06-01T21:25:21Z)
- Automated Evidence Collection for Fake News Detection [11.324403127916877]
We propose a novel approach that improves over the current automatic fake news detection approaches.
Our approach extracts supporting evidence from the web articles and then selects appropriate text to be treated as evidence sets.
Our experiments, using both machine learning and deep learning-based methods, provide an extensive evaluation of our approach.
arXiv Detail & Related papers (2021-12-13T09:38:41Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify possible weaknesses that adversaries can exploit, we create the NeuralNews dataset, composed of four different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.