A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for
Explainable Fake News Detection
- URL: http://arxiv.org/abs/2209.14642v1
- Date: Thu, 29 Sep 2022 09:05:47 GMT
- Title: A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for
Explainable Fake News Detection
- Authors: Zhiwei Yang, Jing Ma, Hechang Chen, Hongzhan Lin, Ziyang Luo, Yi Chang
- Score: 15.517424861844317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing fake news detection methods aim to classify a piece of news as true
or false and provide veracity explanations, achieving remarkable performances.
However, they often tailor automated solutions to manually fact-checked reports,
suffering from limited news coverage and debunking delays. When a piece of news
has not yet been fact-checked or debunked, a substantial number of relevant raw
reports is usually disseminated across various media outlets, carrying the
wisdom of crowds to verify the news claim and explain its verdict. In this
paper, we propose a novel Coarse-to-fine Cascaded Evidence-Distillation
(CofCED) neural network for explainable fake news detection based on such raw
reports, alleviating the dependency on fact-checked ones. Specifically, we
first utilize a hierarchical encoder for web text representation, and then
develop two cascaded selectors to select the most explainable sentences for
verdicts on top of the selected top-K reports in a coarse-to-fine manner.
Besides, we construct two explainable fake news datasets, which are publicly
available. Experimental results demonstrate that our model significantly
outperforms state-of-the-art baselines and generates high-quality explanations
from diverse evaluation perspectives.
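The pipeline described in the abstract (encode raw reports, coarsely select the top-K most relevant ones, then finely select the most explainable sentences as evidence) can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: a simple word-overlap scorer stands in for the paper's learned hierarchical encoder and cascaded selectors, and all function names and data here are invented.

```python
def overlap_score(claim, text):
    """Stand-in relevance scorer: Jaccard overlap of word sets.

    CofCED uses learned representations; this is only a placeholder."""
    claim_words = set(claim.lower().split())
    text_words = set(text.lower().split())
    if not text_words:
        return 0.0
    return len(claim_words & text_words) / len(claim_words | text_words)

def select_top_k_reports(claim, reports, k):
    """Coarse stage: rank the raw reports by relevance to the claim, keep top-K."""
    ranked = sorted(reports, key=lambda r: overlap_score(claim, r), reverse=True)
    return ranked[:k]

def select_explainable_sentences(claim, reports, m):
    """Fine stage: pool sentences from the selected reports, keep the top-m
    as verdict evidence."""
    sentences = [s.strip() for r in reports for s in r.split(".") if s.strip()]
    ranked = sorted(sentences, key=lambda s: overlap_score(claim, s), reverse=True)
    return ranked[:m]

# Toy example: two relevant reports and one irrelevant one.
claim = "the city bridge collapsed last night"
reports = [
    "Officials confirmed the bridge collapsed last night. Traffic was rerouted.",
    "A local bakery opened a new branch. Customers lined up for pastries.",
    "Eyewitnesses saw the city bridge collapse. Engineers had warned about cracks.",
]
top_reports = select_top_k_reports(claim, reports, k=2)
evidence = select_explainable_sentences(claim, top_reports, m=2)
```

In the toy run, the coarse stage filters out the irrelevant bakery report before the fine stage ever scores its sentences; that cascading is the point of the coarse-to-fine design, since sentence-level selection only has to search inside the top-K reports.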
Related papers
- Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom [19.027690459807197]
We propose a novel defense-based explainable fake news detection framework.
Specifically, we first propose an evidence extraction module to split the wisdom of crowds into two competing parties and respectively detect salient evidences.
We then design a prompt-based module that utilizes a large language model to generate justifications by inferring reasons towards two possible veracities.
arXiv Detail & Related papers (2024-05-06T11:24:13Z)
- Nothing Stands Alone: Relational Fake News Detection with Hypergraph Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interaction among news, while focusing on important news relations with its dual-level attention mechanism.
Our approach yields remarkable performance and maintains the high performance even with a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z)
- Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
The hypothesis that cross-lingual evidence can serve as a feature for fake news detection is confirmed.
arXiv Detail & Related papers (2022-11-25T18:24:17Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Mining Fine-grained Semantics via Graph Neural Networks for Evidence-based Fake News Detection [20.282527436527765]
We propose a unified Graph-based sEmantic sTructure mining framework, GET for short.
We model claims and evidences as graph-structured data and capture the long-distance semantic dependency.
After obtaining contextual semantic information, our model reduces information redundancy by performing graph structure learning.
arXiv Detail & Related papers (2022-01-18T11:28:36Z)
- Explainable Tsetlin Machine framework for fake news detection with credibility score assessment [16.457778420360537]
We propose a novel interpretable fake news detection framework based on the recently introduced Tsetlin Machine (TM).
We use the conjunctive clauses of the TM to capture lexical and semantic properties of both true and fake news text.
For evaluation, we conduct experiments on two publicly available datasets, PolitiFact and GossipCop, and demonstrate that the TM framework significantly outperforms previously published baselines by at least 5% in terms of accuracy.
arXiv Detail & Related papers (2021-05-19T13:18:02Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Hierarchical Multi-head Attentive Network for Evidence-aware Fake News Detection [11.990139228037124]
We propose a Hierarchical Multi-head Attentive Network to fact-check textual claims.
Our model jointly combines multi-head word-level attention and multi-head document-level attention, which aids explanation at both the word level and the evidence level.
arXiv Detail & Related papers (2021-02-04T15:18:44Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which may cause confusion and chaos unless detected early.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
- Weak Supervision for Fake News Detection via Reinforcement Learning [34.448503443582396]
We propose a weakly-supervised fake news detection framework, i.e., WeFEND.
The proposed framework consists of three main components: the annotator, the reinforced selector and the fake news detector.
We tested the proposed framework on a large collection of news articles published via WeChat official accounts and associated user reports.
arXiv Detail & Related papers (2019-12-28T21:20:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.