Adversarial Contrastive Learning for Evidence-aware Fake News Detection with Graph Neural Networks
- URL: http://arxiv.org/abs/2210.05498v1
- Date: Tue, 11 Oct 2022 14:54:37 GMT
- Title: Adversarial Contrastive Learning for Evidence-aware Fake News Detection with Graph Neural Networks
- Authors: Junfei Wu, Weizhi Xu, Qiang Liu, Shu Wu, Liang Wang
- Abstract summary: We propose a unified Graph-based sEmantic structure mining framework with ConTRAstive Learning, namely GETRAL in short.
We first model claims and evidences as graph-structured data to capture the long-distance semantic dependency.
Then the fine-grained semantic representations are fed into the claim-evidence interaction module for predictions.
- Score: 20.282527436527765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prevalence and perniciousness of fake news have become a
critical issue on the Internet, which in turn stimulates the development of
automatic fake news detection. In this paper, we focus on evidence-based fake
news detection, where several evidences are utilized to probe the veracity of
news (i.e., a claim). Most previous methods first employ sequential models to
embed the semantic information and then capture the claim-evidence interaction
with attention mechanisms. Despite their effectiveness, they still suffer from
three weaknesses. Firstly, sequential models fail to integrate relevant
information that is scattered far apart in evidences. Secondly, they overlook
that much of the redundant information in evidences may be useless or even
harmful. Thirdly, insufficient data utilization limits the separability and
reliability of the representations captured by the model. To solve these
problems, we propose a unified Graph-based sEmantic structure mining framework
with ConTRAstive Learning, GETRAL for short. Specifically, we first model
claims and evidences as graph-structured data to capture long-distance
semantic dependencies. Subsequently, we reduce information redundancy by
performing graph structure learning. The fine-grained semantic representations
are then fed into the claim-evidence interaction module for prediction.
Finally, an adversarial contrastive learning module is applied to make full
use of the data and strengthen representation learning. Comprehensive
experiments demonstrate the superiority of GETRAL over state-of-the-art
methods and validate the efficacy of semantic mining with graph structure and
contrastive learning.
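
To make the pipeline described above more concrete, here is a minimal sketch, assuming a PyTorch implementation, of how graph structure learning, a claim-evidence interaction module, and an adversarial contrastive objective could be wired together. All module names, dimensions, and loss weights are illustrative assumptions, not the authors' released code.

```python
# Illustrative GETRAL-style pipeline sketch (not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphStructureLearner(nn.Module):
    """Re-estimates edge weights from node features and prunes weak edges,
    approximating the 'reduce information redundancy' step."""
    def __init__(self, dim, keep_ratio=0.5):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.keep_ratio = keep_ratio

    def forward(self, x, adj):
        # x: (N, d) node features, adj: (N, N) initial adjacency
        h = self.proj(x)
        scores = torch.sigmoid(h @ h.t()) * adj        # rescore existing edges only
        k = max(1, int(self.keep_ratio * (adj > 0).sum().item()))
        thresh = torch.topk(scores.flatten(), k).values.min()
        return scores * (scores >= thresh).float()     # sparsified adjacency


class GETRALSketch(nn.Module):
    def __init__(self, dim=128, num_classes=2):
        super().__init__()
        self.gnn = nn.Linear(dim, dim)                 # one-layer GCN-style update
        self.structure = GraphStructureLearner(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def encode_graph(self, x, adj):
        adj = self.structure(x, adj)
        deg = adj.sum(-1, keepdim=True).clamp(min=1e-6)
        return F.relu(self.gnn((adj / deg) @ x))       # mean aggregation + transform

    def forward(self, claim_x, claim_adj, evid_x, evid_adj):
        c = self.encode_graph(claim_x, claim_adj)      # (Nc, d)
        e = self.encode_graph(evid_x, evid_adj)        # (Ne, d)
        # claim-evidence interaction: claim nodes attend over evidence nodes
        fused, _ = self.attn(c.unsqueeze(0), e.unsqueeze(0), e.unsqueeze(0))
        rep = torch.cat([c.mean(0), fused.squeeze(0).mean(0)], dim=-1)
        return self.classifier(rep), rep


def adversarial_contrastive_loss(rep, rep_adv, reps_other, tau=0.5):
    """InfoNCE-style objective: the clean view and its adversarially perturbed
    view are positives; representations of other claims are negatives."""
    z = F.normalize(torch.stack([rep, rep_adv] + list(reps_other)), dim=-1)
    sim = z[0] @ z[1:].t() / tau                       # similarity to pos + negatives
    target = torch.zeros(1, dtype=torch.long)          # index 0 is the positive
    return F.cross_entropy(sim.unsqueeze(0), target)
```

The adversarially perturbed view is passed in as an argument here, so the sketch stays agnostic to how that perturbation is produced; the classification loss and the contrastive term would typically be summed with a weighting hyperparameter.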
Related papers
- G-SAP: Graph-based Structure-Aware Prompt Learning over Heterogeneous Knowledge for Commonsense Reasoning [8.02547453169677]
We propose a novel Graph-based Structure-Aware Prompt Learning Model for commonsense reasoning, named G-SAP.
In particular, an evidence graph is constructed by integrating multiple knowledge sources, i.e., ConceptNet, Wikipedia, and the Cambridge Dictionary.
The results reveal a significant advancement over existing models, notably a 6.12% improvement over the SoTA LM+GNNs model on the OpenbookQA dataset.
arXiv Detail & Related papers (2024-05-09T08:28:12Z) - Heterogeneous Graph Reasoning for Fact Checking over Texts and Tables [22.18384189336634]
HeterFC is a word-level Heterogeneous-graph-based model for Fact Checking over unstructured and structured information.
We perform information propagation via a relational graph neural network, enabling interactions between claims and evidence.
We introduce a multitask loss function to account for potential inaccuracies in evidence retrieval.
arXiv Detail & Related papers (2024-02-20T14:10:40Z) - Harnessing the Power of Text-image Contrastive Models for Automatic
Detection of Online Misinformation [50.46219766161111]
We develop a self-learning model to explore contrastive learning in the domain of misinformation identification.
Our model shows superior performance in detecting non-matched image-text pairs when the training data is insufficient.
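
For readers unfamiliar with this family of models, a symmetric image-text contrastive (InfoNCE) objective is sketched below under the assumption of generic image and text encoders; the function name and temperature are illustrative, not details taken from that paper.

```python
# Sketch of a symmetric image-text contrastive (InfoNCE) loss; matched pairs
# on the diagonal are positives, all other in-batch pairings are negatives.
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # img_emb, txt_emb: (B, d) embeddings from any image / text encoder
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature          # (B, B) pairwise similarities
    targets = torch.arange(img.size(0))           # i-th image matches i-th text
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

At inference time, a non-matched (potentially misleading) image-text pair can then be flagged when its similarity score falls well below those of matched pairs.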
arXiv Detail & Related papers (2023-04-19T02:53:59Z) - The Devil is in the Conflict: Disentangled Information Graph Neural
Networks for Fraud Detection [17.254383007779616]
We argue that the performance degradation is mainly attributed to the inconsistency between topology and attribute.
We propose a simple and effective method that uses the attention mechanism to adaptively fuse two views.
Our model can significantly outperform state-of-the-art baselines on real-world fraud detection datasets.
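
As a rough, assumed illustration of fusing a topology view and an attribute view with attention, the mechanism this summary alludes to, one minimal formulation is:

```python
# Sketch: learn per-node attention weights over two views (topology vs. attribute)
# and fuse them into a single representation. Names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoViewAttentionFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)                       # shared scoring vector

    def forward(self, h_topo, h_attr):
        # h_topo, h_attr: (N, d) node embeddings from the two views
        views = torch.stack([h_topo, h_attr], dim=1)         # (N, 2, d)
        alpha = F.softmax(self.score(torch.tanh(views)), dim=1)  # (N, 2, 1)
        return (alpha * views).sum(dim=1)                    # (N, d) fused embedding
```

Each node receives its own mixing weights, so nodes whose topology and attributes conflict can down-weight the less reliable view.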
arXiv Detail & Related papers (2022-10-22T08:21:49Z) - DAGAD: Data Augmentation for Graph Anomaly Detection [57.92471847260541]
This paper devises a novel Data Augmentation-based Graph Anomaly Detection (DAGAD) framework for attributed graphs.
A series of experiments on three datasets demonstrates that DAGAD outperforms ten state-of-the-art baseline detectors on various widely-used metrics.
arXiv Detail & Related papers (2022-10-18T11:28:21Z) - Mining Fine-grained Semantics via Graph Neural Networks for
Evidence-based Fake News Detection [20.282527436527765]
We propose a unified Graph-based sEmantic sTructure mining framework, GET for short.
We model claims and evidences as graph-structured data and capture the long-distance semantic dependency.
After obtaining contextual semantic information, our model reduces information redundancy by performing graph structure learning.
arXiv Detail & Related papers (2022-01-18T11:28:36Z) - An Adversarial Benchmark for Fake News Detection Models [0.065268245109828]
We formulate adversarial attacks that target three aspects of "understanding".
We test our benchmark using BERT classifiers fine-tuned on the LIAR (arXiv:1705.00648) and Kaggle Fake-News datasets.
arXiv Detail & Related papers (2022-01-03T23:51:55Z) - Deconfounded Training for Graph Neural Networks [98.06386851685645]
We present a new paradigm of deconfounded training (DTP) that better mitigates the confounding effect and latches onto the critical information.
Specifically, we adopt the attention modules to disentangle the critical subgraph and trivial subgraph.
It allows GNNs to capture a more reliable subgraph whose relation with the label is robust across different distributions.
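
To illustrate what "attention modules to disentangle the critical subgraph and trivial subgraph" might look like mechanically, here is a hedged sketch that soft-masks edges with learned attention; it reflects the general idea rather than the authors' implementation.

```python
# Sketch: edge-level attention splits a graph into a "critical" subgraph (high
# attention) and a "trivial" complement (1 - attention). Illustrative only.
import torch
import torch.nn as nn

class EdgeDisentangler(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.edge_score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                        nn.Linear(dim, 1))

    def forward(self, x, edge_index):
        # x: (N, d) node features; edge_index: (2, E) source/target node indices
        src, dst = edge_index
        a = torch.sigmoid(self.edge_score(torch.cat([x[src], x[dst]], dim=-1)))
        critical_w, trivial_w = a.squeeze(-1), 1.0 - a.squeeze(-1)   # (E,), (E,)
        # downstream GNN layers would message-pass with these two edge weightings,
        # training the critical branch to predict the label robustly.
        return critical_w, trivial_w
```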
arXiv Detail & Related papers (2021-12-30T15:22:35Z) - A Multi-Level Attention Model for Evidence-Based Fact Checking [58.95413968110558]
We present a simple model that can be trained on sequence structures.
Results on a large-scale dataset for Fact Extraction and VERification (FEVER) show that our model outperforms the graph-based approaches.
arXiv Detail & Related papers (2021-06-02T05:40:12Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create the NeuralNews dataset, composed of four different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
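
A hedged sketch of how counterfactual pairs can supply gradient supervision as an auxiliary objective is given below: the input-gradient of the task loss is encouraged to align with the direction separating an example from its counterfactual. The loss weight and exact formulation are assumptions, not the paper's specification.

```python
# Sketch: auxiliary "gradient supervision" term for a counterfactual pair
# (x, x_cf) with different labels. Illustrative reading, not the exact method.
import torch
import torch.nn.functional as F

def gradient_supervision_loss(model, x, y, x_cf):
    # x, x_cf: (B, d) input features of a counterfactual pair; y: (B,) labels of x
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(task_loss, x, create_graph=True)
    direction = x_cf - x                          # direction along which the label flips
    cos = F.cosine_similarity(grad.flatten(1), direction.flatten(1), dim=-1)
    return task_loss + 0.1 * (1.0 - cos).mean()   # 0.1 is an assumed weight
```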
arXiv Detail & Related papers (2020-04-20T02:47:49Z)