Towards Fine-Grained Reasoning for Fake News Detection
- URL: http://arxiv.org/abs/2110.15064v4
- Date: Sat, 5 Mar 2022 06:44:10 GMT
- Title: Towards Fine-Grained Reasoning for Fake News Detection
- Authors: Yiqiao Jin, Xiting Wang, Ruichao Yang, Yizhou Sun, Wei Wang, Hao Liao,
Xing Xie
- Abstract summary: We move towards fine-grained reasoning for fake news detection by better reflecting the logical processes of human thinking.
In particular, we propose a fine-grained reasoning framework by following the human information-processing model.
- Score: 43.497126436856426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The detection of fake news often requires sophisticated reasoning skills,
such as logically combining information by considering word-level subtle clues.
In this paper, we move towards fine-grained reasoning for fake news detection
by better reflecting the logical processes of human thinking and enabling the
modeling of subtle clues. In particular, we propose a fine-grained reasoning
framework by following the human information-processing model, introduce a
mutual-reinforcement-based method for incorporating human knowledge about which
evidence is more important, and design a prior-aware bi-channel kernel graph
network to model subtle differences between pieces of evidence. Extensive
experiments show that our model outperforms the state-of-the-art methods and
demonstrate the explainability of our approach.
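The abstract does not specify the prior-aware bi-channel kernel graph network beyond this summary. As a rough illustration of the kernel idea such models typically build on (Gaussian kernel pooling over a claim-evidence token similarity matrix, in the style of kernel-based matching networks), here is a minimal sketch; the function name, kernel centers, and toy data are all hypothetical, not taken from the paper:

```python
import numpy as np

def gaussian_kernel_features(sim_matrix, mus, sigma=0.1):
    """Soft-count claim-evidence token similarities into kernel bins.

    sim_matrix: (n_claim_tokens, n_evidence_tokens) cosine similarities.
    mus: kernel centers in [-1, 1]; each kernel soft-counts how many
    similarity values fall near its center, so fine-grained differences
    between pieces of evidence show up as different bin profiles.
    Returns one log-pooled feature per kernel.
    """
    feats = []
    for mu in mus:
        # RBF kernel applied elementwise to the similarity matrix
        k = np.exp(-((sim_matrix - mu) ** 2) / (2 * sigma ** 2))
        # Sum over evidence tokens, log to dampen, then sum over claim tokens
        feats.append(np.log1p(k.sum(axis=1)).sum())
    return np.array(feats)

# Toy example: 3 claim tokens vs 4 evidence tokens
rng = np.random.default_rng(0)
sim = np.clip(rng.normal(0.3, 0.4, size=(3, 4)), -1.0, 1.0)
mus = np.linspace(-0.9, 0.9, 7)  # 7 kernel centers spanning [-1, 1]
print(gaussian_kernel_features(sim, mus).shape)  # (7,)
```

In the full model these per-kernel features would feed a graph network over evidence nodes; this sketch only shows how kernels turn word-level similarity into a fixed-size feature vector.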
Related papers
- Detecting misinformation through Framing Theory: the Frame Element-based Model [6.4618518529384765]
We focus on the nuanced manipulation of narrative frames - an under-explored area within the AI community.
We propose an innovative approach leveraging the power of pre-trained Large Language Models and deep neural networks to detect misinformation.
arXiv Detail & Related papers (2024-02-19T21:50:42Z)
- Diffusion Model with Cross Attention as an Inductive Bias for Disentanglement [58.9768112704998]
Disentangled representation learning strives to extract the intrinsic factors within observed data.
We introduce a new perspective and framework, demonstrating that diffusion models with cross-attention can serve as a powerful inductive bias.
This is the first work to reveal the potent disentanglement capability of diffusion models with cross-attention, requiring no complex designs.
arXiv Detail & Related papers (2024-02-15T05:07:54Z)
- No Place to Hide: Dual Deep Interaction Channel Network for Fake News Detection based on Data Augmentation [16.40196904371682]
We propose a novel framework for fake news detection from the perspectives of semantics, emotion, and data augmentation.
A dual deep interaction channel network of semantic and emotion is designed to obtain a more comprehensive and fine-grained news representation.
Experiments show that the proposed approach outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2023-03-31T13:33:53Z)
- Exploring Faithful Rationale for Multi-hop Fact Verification via Salience-Aware Graph Learning [13.72453491358488]
We use graph convolutional network (GCN) with salience-aware graph learning to solve multi-hop fact verification.
Results show significant gains over previous state-of-the-art methods for both rationale extraction and fact verification.
arXiv Detail & Related papers (2022-12-02T09:54:05Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
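The summary above describes a check, not an implementation. A minimal sketch of that consistency test, using a toy rule-based stand-in for a real NLI classifier (the function names and example sentences are hypothetical):

```python
def consistent_on_counterfactual(predict, premise, hypothesis, cf_hypothesis):
    """Check whether a model's predictions respect the explanation's logic.

    predict: callable (premise, hypothesis) -> label in
             {"entailment", "contradiction", "neutral"}.
    cf_hypothesis: counterfactual hypothesis with a key predicate of the
    explanation negated. If the original hypothesis was entailed, the
    counterfactual should no longer be; otherwise the explanation is
    judged unfaithful.
    """
    original = predict(premise, hypothesis)
    counterfactual = predict(premise, cf_hypothesis)
    if original == "entailment":
        return counterfactual != "entailment"
    return True  # no entailment claim to falsify

# Toy rule-based "model" standing in for a trained NLI classifier
def toy_predict(premise, hypothesis):
    return "entailment" if hypothesis in premise else "neutral"

print(consistent_on_counterfactual(
    toy_predict,
    premise="the bird can fly",
    hypothesis="the bird can fly",
    cf_hypothesis="the bird cannot fly",
))  # True
```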
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- A Multi-Level Attention Model for Evidence-Based Fact Checking [58.95413968110558]
We present a simple model that can be trained on sequence structures.
Results on a large-scale dataset for Fact Extraction and VERification show that our model outperforms the graph-based approaches.
arXiv Detail & Related papers (2021-06-02T05:40:12Z)
- Social Commonsense Reasoning with Multi-Head Knowledge Attention [24.70946979449572]
Social Commonsense Reasoning requires understanding of text, knowledge about social events and their pragmatic implications, as well as commonsense reasoning skills.
We propose a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them in a transformer-based reasoning cell.
arXiv Detail & Related papers (2020-10-12T10:24:40Z)
- Towards Interpretable Reasoning over Paragraph Effects in Situation [126.65672196760345]
We focus on the task of reasoning over paragraph effects in situation, which requires a model to understand the cause and effect.
We propose a sequential approach for this task which explicitly models each step of the reasoning process with neural network modules.
In particular, five reasoning modules are designed and learned in an end-to-end manner, which leads to a more interpretable model.
arXiv Detail & Related papers (2020-10-03T04:03:52Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.