Mining Fine-grained Semantics via Graph Neural Networks for
Evidence-based Fake News Detection
- URL: http://arxiv.org/abs/2201.06885v1
- Date: Tue, 18 Jan 2022 11:28:36 GMT
- Title: Mining Fine-grained Semantics via Graph Neural Networks for
Evidence-based Fake News Detection
- Authors: Weizhi Xu, Junfei Wu, Qiang Liu, Shu Wu, Liang Wang
- Abstract summary: We propose a unified Graph-based sEmantic sTructure mining framework, GET for short.
We model claims and evidences as graph-structured data and capture the long-distance semantic dependency.
After obtaining contextual semantic information, our model reduces information redundancy by performing graph structure learning.
- Score: 20.282527436527765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prevalence and perniciousness of fake news have become a critical
issue on the Internet, which in turn stimulates the development of automatic
fake news detection. In this paper, we focus on evidence-based fake news
detection, where several pieces of evidence are utilized to probe the veracity of news (i.e., a
claim). Most previous methods first employ sequential models to embed the
semantic information and then capture the claim-evidence interaction based on
different attention mechanisms. Despite their effectiveness, they still suffer
from two main weaknesses. Firstly, due to the inherent drawbacks of sequential
models, they fail to integrate the relevant information that is scattered far
apart in evidences for veracity checking. Secondly, they neglect much redundant
information contained in evidences that may be useless or even harmful. To
solve these problems, we propose a unified Graph-based sEmantic sTructure
mining framework, GET for short. Specifically, unlike existing work that
treats claims and evidences as sequences, we model them as
graph-structured data and capture the long-distance semantic dependency among
dispersed relevant snippets via neighborhood propagation. After obtaining
contextual semantic information, our model reduces information redundancy by
performing graph structure learning. Finally, the fine-grained semantic
representations are fed into the downstream claim-evidence interaction module
for predictions. Comprehensive experiments have demonstrated the superiority of
GET over state-of-the-art methods.
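The pipeline the abstract describes — building a graph over claim and evidence tokens, propagating information across neighborhoods, pruning redundant nodes, and scoring the claim-evidence interaction — can be sketched as a small NumPy toy. This is illustrative only, not the authors' GET implementation: the sliding-window graph, norm-based pruning, and dot-product attention are simplifying assumptions standing in for the paper's learned graph structure and attention modules.

```python
import numpy as np

def build_word_graph(tokens, window=2):
    """Turn a token sequence into graph-structured data: connect tokens
    that co-occur within a sliding window (plus self-loops)."""
    n = len(tokens)
    A = np.eye(n)
    for i in range(n):
        for j in range(max(0, i - window), min(n, i + window + 1)):
            A[i, j] = 1.0
    return A

def propagate(A, H, steps=2):
    """Neighborhood propagation: each node averages its neighbors'
    features, so relevant snippets scattered far apart can still
    exchange information after a few steps."""
    D_inv = 1.0 / A.sum(axis=1, keepdims=True)  # row-normalize adjacency
    for _ in range(steps):
        H = D_inv * (A @ H)
    return H

def prune_redundant(H, keep_ratio=0.5):
    """Crude stand-in for graph structure learning: score nodes by
    feature norm and keep only the top fraction, dropping the rest
    as redundant."""
    scores = np.linalg.norm(H, axis=1)
    k = max(1, int(len(H) * keep_ratio))
    idx = np.argsort(scores)[-k:]          # indices of the top-k nodes
    return H[np.sort(idx)]                 # keep original token order

def claim_evidence_score(claim_H, evid_H):
    """Claim-evidence interaction: softmax attention of claim nodes
    over evidence nodes, pooled into one scalar matching score."""
    logits = claim_H @ evid_H.T
    att = np.exp(logits - logits.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)
    return float((att * logits).sum() / len(claim_H))
```

In the real model each stage is learned end-to-end (graph attention layers, a differentiable structure learner, and a trained interaction module); the sketch only mirrors the data flow.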
Related papers
- Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z)
- Heterogeneous Graph Reasoning for Fact Checking over Texts and Tables [22.18384189336634]
HeterFC is a word-level Heterogeneous-graph-based model for Fact Checking over unstructured and structured information.
We perform information propagation via a relational graph neural network, capturing interactions between claims and evidence.
We introduce a multitask loss function to account for potential inaccuracies in evidence retrieval.
arXiv Detail & Related papers (2024-02-20T14:10:40Z)
- MSynFD: Multi-hop Syntax aware Fake News Detection [27.046529059563863]
Social media platforms have fueled the rapid dissemination of fake news, posing real-world threats to society.
Existing methods use multimodal data or contextual information to enhance the detection of fake news.
We propose a novel multi-hop syntax aware fake news detection (MSynFD) method, which incorporates complementary syntax information to deal with subtle twists in fake news.
arXiv Detail & Related papers (2024-02-18T05:40:33Z)
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z)
- Verifying the Robustness of Automatic Credibility Assessment [50.55687778699995]
We show that meaning-preserving changes in input text can mislead the models.
We also introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Adversarial Contrastive Learning for Evidence-aware Fake News Detection with Graph Neural Networks [20.282527436527765]
We propose a unified Graph-based sEmantic structure mining framework with ConTRAstive Learning, GETRAL for short.
We first model claims and evidences as graph-structured data to capture the long-distance semantic dependency.
Then the fine-grained semantic representations are fed into the claim-evidence interaction module for predictions.
arXiv Detail & Related papers (2022-10-11T14:54:37Z)
- A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for Explainable Fake News Detection [15.517424861844317]
Existing fake news detection methods aim to classify a piece of news as true or false and provide explanations, achieving remarkable performance.
When a piece of news has not yet been fact-checked or debunked, certain amounts of relevant raw reports are usually disseminated on various media outlets.
We propose a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection based on such raw reports.
arXiv Detail & Related papers (2022-09-29T09:05:47Z)
- Rumor Detection with Self-supervised Learning on Texts and Social Graph [101.94546286960642]
We propose contrastive self-supervised learning on heterogeneous information sources, so as to reveal their relations and characterize rumors better.
We term this framework Self-supervised Rumor Detection (SRD).
Extensive experiments on three real-world datasets validate the effectiveness of SRD for automatic rumor detection on social media.
arXiv Detail & Related papers (2022-04-19T12:10:03Z)
- An Adversarial Benchmark for Fake News Detection Models [0.065268245109828]
We formulate adversarial attacks that target three aspects of "understanding".
We test our benchmark using BERT classifiers fine-tuned on the LIAR (arXiv:1705.00648) and Kaggle Fake News datasets.
arXiv Detail & Related papers (2022-01-03T23:51:55Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which may cause confusion and chaos unless detected early.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.