A Multi-Policy Framework for Deep Learning-Based Fake News Detection
- URL: http://arxiv.org/abs/2206.11866v1
- Date: Wed, 1 Jun 2022 21:25:21 GMT
- Title: A Multi-Policy Framework for Deep Learning-Based Fake News Detection
- Authors: João Vitorino, Tiago Dias, Tiago Fonseca, Nuno Oliveira, Isabel Praça
- Abstract summary: This work introduces Multi-Policy Statement Checker (MPSC), a framework that automates fake news detection.
MPSC uses deep learning techniques to analyze a statement itself and its related news articles, predicting whether it is seemingly credible or suspicious.
- Score: 0.31498833540989407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Connectivity plays an ever-increasing role in modern society, with people all
around the world having easy access to rapidly disseminated information.
However, a more interconnected society enables the spread of intentionally
false information. To mitigate the negative impacts of fake news, it is
essential to improve detection methodologies. This work introduces Multi-Policy
Statement Checker (MPSC), a framework that automates fake news detection by
using deep learning techniques to analyze a statement itself and its related
news articles, predicting whether it is seemingly credible or suspicious. The
proposed framework was evaluated using four merged datasets containing real and
fake news. Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU) and
Bidirectional Encoder Representations from Transformers (BERT) models were
trained to utilize both lexical and syntactic features, and their performance
was evaluated. The obtained results demonstrate that a multi-policy analysis
reliably identifies suspicious statements, which can be advantageous for fake
news detection.
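The abstract does not reproduce the exact model configurations, but a minimal sketch of one of the statement-level classifiers it names (a bidirectional LSTM over a tokenized statement, predicting credible vs. suspicious) could look like the following. The vocabulary size, embedding dimension, hidden size and binary label set are placeholder assumptions, not values from the MPSC paper.

```python
import torch
import torch.nn as nn

class StatementLSTM(nn.Module):
    """Bidirectional LSTM labeling a statement as credible (0) or suspicious (1).

    All hyperparameters are illustrative assumptions, not values from the MPSC paper.
    """

    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded statement tokens
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        # Concatenate the final forward and backward hidden states.
        features = torch.cat([hidden[-2], hidden[-1]], dim=-1)
        return self.classifier(features)

# Toy usage: a batch of two padded statements, 16 tokens each.
model = StatementLSTM()
logits = model(torch.randint(1, 30000, (2, 16)))
print(logits.shape)  # torch.Size([2, 2])
```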
Related papers
- Fake News Detection and Manipulation Reasoning via Large Vision-Language Models [38.457805116130004]
This paper introduces a benchmark for fake news detection and manipulation reasoning, referred to as Human-centric and Fact-related Fake News (HFFN).
The benchmark highlights the centrality of humans and high factual relevance, with detailed manual annotations.
A Multi-modal news Detection and Reasoning langUage Model (M-DRUM) is presented not only to judge the authenticity of multi-modal news, but also to provide analytical reasoning about potential manipulations.
arXiv Detail & Related papers (2024-07-02T08:16:43Z) - MSynFD: Multi-hop Syntax aware Fake News Detection [27.046529059563863]
Social media platforms have fueled the rapid dissemination of fake news, posing real threats to society.
Existing methods use multimodal data or contextual information to enhance the detection of fake news.
We propose a novel multi-hop syntax-aware fake news detection (MSynFD) method, which incorporates complementary syntax information to deal with subtle twists in fake news.
arXiv Detail & Related papers (2024-02-18T05:40:33Z) - Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern: detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z) - Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News
Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection by significant margins; a generic prompt-based classification sketch appears after this list.
arXiv Detail & Related papers (2023-09-28T13:19:43Z) - Nothing Stands Alone: Relational Fake News Detection with Hypergraph
Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interactions among news articles, focusing on important news relations through a dual-level attention mechanism.
Our approach yields strong performance and maintains it even with only a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z) - Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z) - Applying Automatic Text Summarization for Fake News Detection [4.2177790395417745]
The distribution of fake news is not a new problem, but it is a rapidly growing one.
We present an approach to the problem that builds on the power of transformer-based language models.
Our framework, CMTR-BERT, combines multiple text representations and enables the incorporation of contextual information.
arXiv Detail & Related papers (2022-04-04T21:00:55Z) - Explainable Tsetlin Machine framework for fake news detection with
credibility score assessment [16.457778420360537]
We propose a novel interpretable fake news detection framework based on the recently introduced Tsetlin Machine (TM).
We use the conjunctive clauses of the TM to capture lexical and semantic properties of both true and fake news text.
For evaluation, we conduct experiments on two publicly available datasets, PolitiFact and GossipCop, and demonstrate that the TM framework significantly outperforms previously published baselines by at least 5% in terms of accuracy.
arXiv Detail & Related papers (2021-05-19T13:18:02Z) - Machine Learning Explanations to Prevent Overtrust in Fake News
Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z) - Leveraging Multi-Source Weak Social Supervision for Early Detection of
Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which may cause confusion and chaos unless it is detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
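Returning to the Prompt-and-Align entry above: its exact templates and social-alignment step are not described in this listing, so the sketch below only illustrates the generic prompt-based paradigm it builds on, scoring verbalizer words with a masked language model. The model choice, prompt template, and verbalizers ("real"/"fake") are assumptions, not the P&A implementation.

```python
from transformers import pipeline

# Generic prompt-based credibility check with a masked language model.
# NOT the actual Prompt-and-Align implementation: the template, model and
# verbalizer words below are illustrative assumptions.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def classify_statement(statement: str) -> str:
    prompt = f"{statement} This news is [MASK]."
    # Restrict predictions to the two verbalizer words and take the top one.
    candidates = fill_mask(prompt, targets=["real", "fake"])
    return candidates[0]["token_str"]

print(classify_statement("Scientists confirm the moon is made of cheese."))
```

In an actual few-shot setting, the handful of labeled examples would additionally be used to fine-tune the model or calibrate the verbalizer scores.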
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.