Exploring Faithful Rationale for Multi-hop Fact Verification via
Salience-Aware Graph Learning
- URL: http://arxiv.org/abs/2212.01060v1
- Date: Fri, 2 Dec 2022 09:54:05 GMT
- Title: Exploring Faithful Rationale for Multi-hop Fact Verification via
Salience-Aware Graph Learning
- Authors: Jiasheng Si, Yingjie Zhu, Deyu Zhou
- Abstract summary: We use a graph convolutional network (GCN) with salience-aware graph learning to solve multi-hop fact verification.
Results show significant gains over previous state-of-the-art methods for both rationale extraction and fact verification.
- Score: 13.72453491358488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The opaqueness of multi-hop fact verification models imposes imperative
requirements for explainability. One feasible way is to extract rationales, a
subset of inputs whose removal causes the prediction performance to drop
dramatically. Though explainable, most rationale extraction methods for
multi-hop fact verification explore the semantic information within each piece
of evidence individually, while ignoring the topological interaction
among different pieces of evidence. Intuitively, a faithful
rationale bears complementary information that enables the extraction of other
rationales through the multi-hop reasoning process. To tackle these
shortcomings, we cast explainable multi-hop fact verification as subgraph
extraction, which can be solved with a graph convolutional network (GCN) with
salience-aware graph learning. Specifically, the GCN incorporates the
topological interaction information among multiple pieces of evidence to
learn evidence representations. Meanwhile, to alleviate the influence of
noisy evidence, a salience-aware graph perturbation is introduced into the
message passing of the GCN. Moreover, a multi-task model with three diagnostic
properties of rationales is elaborately designed to improve the quality of
explanations without any explicit annotations. Experimental results on the
FEVEROUS benchmark show significant gains over previous state-of-the-art
methods for both rationale extraction and fact verification.
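The abstract does not give the exact formulation, but the gist of salience-aware message passing can be sketched: per-evidence salience scores gate the edges of the evidence graph so that noisy nodes contribute less during aggregation. A minimal sketch under that assumption (function and variable names are illustrative, not the authors' code):

```python
# Hedged sketch: one GCN layer whose message passing is perturbed by
# per-node salience scores, so low-salience (likely noisy) evidence
# contributes less to its neighbours. Not the paper's implementation.
import numpy as np

def salience_gcn_layer(H, A, W, salience):
    """H: (n, d) evidence embeddings; A: (n, n) binary adjacency;
    W: (d, d_out) layer weights; salience: (n,) scores in [0, 1]."""
    A_pert = A * salience[None, :]          # scale messages from node j by its salience
    A_hat = A_pert + np.eye(A.shape[0])     # self-loops keep each node's own signal
    deg = A_hat.sum(axis=1, keepdims=True)  # row-normalise the aggregation
    return np.maximum((A_hat / deg) @ H @ W, 0.0)  # aggregate, transform, ReLU

# toy usage: 4 evidence sentences, 8-dim embeddings
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
salience = np.array([0.9, 0.2, 0.8, 0.7])   # e.g. from a learned scorer
out = salience_gcn_layer(H, A, rng.normal(size=(8, 8)), salience)
print(out.shape)  # (4, 8)
```

Scaling column j by its salience down-weights every message sent by evidence j, which is one natural reading of "perturbing the message passing" to suppress noisy evidence.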
Related papers
- Heterogeneous Graph Reasoning for Fact Checking over Texts and Tables [22.18384189336634]
HeterFC is a word-level Heterogeneous-graph-based model for Fact Checking over unstructured and structured information.
We perform information propagation via a relational graph neural network, modeling the interactions between claims and evidence.
We introduce a multitask loss function to account for potential inaccuracies in evidence retrieval.
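One plausible reading of this kind of relation-aware propagation, sketched as an R-GCN-style layer with a relation-specific weight per edge type (an assumption for illustration; HeterFC's exact architecture may differ):

```python
# Hypothetical sketch of relational message passing: nodes are words
# from the claim and evidence, and each relation type gets its own
# weight matrix. Not taken from the HeterFC paper.
import numpy as np

def rgcn_layer(H, adj_per_rel, W_per_rel, W_self):
    """H: (n, d); adj_per_rel: list of (n, n) adjacency matrices, one
    per relation type; W_per_rel: list of (d, d) weights; W_self: (d, d)."""
    out = H @ W_self                        # self-connection
    for A_r, W_r in zip(adj_per_rel, W_per_rel):
        deg = np.maximum(A_r.sum(axis=1, keepdims=True), 1.0)
        out += (A_r / deg) @ H @ W_r        # relation-specific aggregation
    return np.tanh(out)
```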
arXiv Detail & Related papers (2024-02-20T14:10:40Z)
- Revealing Multimodal Contrastive Representation Learning through Latent Partial Causal Models [85.67870425656368]
We introduce a unified causal model specifically designed for multimodal data.
We show that multimodal contrastive representation learning excels at identifying latent coupled variables.
Experiments demonstrate the robustness of our findings, even when the assumptions are violated.
arXiv Detail & Related papers (2024-02-09T07:18:06Z)
- Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation [110.71955853831707]
We view LMs as deriving new conclusions by aggregating indirect reasoning paths seen at pre-training time.
We formalize the reasoning paths as random walk paths on the knowledge/reasoning graphs.
Experiments and analysis on multiple knowledge graph (KG) and chain-of-thought (CoT) datasets reveal the effect of training on random walk paths.
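To make the path-aggregation view concrete, here is an illustrative sketch (a toy knowledge graph and a Monte Carlo estimate, not the paper's procedure): the frequency with which random walks from a claim entity reach a candidate conclusion serves as aggregated multi-hop support.

```python
# Illustrative sketch: treat reasoning paths as random walks on a
# knowledge graph and aggregate the support each path carries for a
# candidate conclusion. Toy graph and names are hypothetical.
import random

# toy KG as adjacency lists: entity -> [(relation, entity), ...]
KG = {
    "claim_entity": [("supports", "fact_a"), ("mentions", "fact_b")],
    "fact_a": [("entails", "conclusion")],
    "fact_b": [("contradicts", "conclusion")],
    "conclusion": [],
}

def random_walk(kg, start, max_hops, rng):
    path, node = [start], start
    for _ in range(max_hops):
        if not kg[node]:          # dead end: stop the walk
            break
        _, node = rng.choice(kg[node])
        path.append(node)
    return path

def aggregate_paths(kg, start, target, n_walks=1000, max_hops=3, seed=0):
    """Estimate how often multi-hop walks from `start` reach `target`."""
    rng = random.Random(seed)
    hits = sum(target in random_walk(kg, start, max_hops, rng)
               for _ in range(n_walks))
    return hits / n_walks

print(aggregate_paths(KG, "claim_entity", "conclusion"))
```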
arXiv Detail & Related papers (2024-02-05T18:25:51Z)
- Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop Fact Verification [13.72453491358488]
This paper explores the viability of multi-granular rationale extraction with consistency and faithfulness for explainable multi-hop fact verification.
In particular, given a pretrained veracity prediction model, a token-level explainer and a sentence-level explainer are trained simultaneously to obtain multi-granular rationales.
Experimental results on three multi-hop fact verification datasets show that the proposed approach outperforms some state-of-the-art baselines.
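A hedged sketch of how consistency between the two granularities might be enforced (the pooling and loss here are assumptions, not the paper's definitions): token-level mask scores are pooled per sentence and tied to the sentence-level mask.

```python
# Hypothetical sketch: a token-level and a sentence-level explainer
# each produce a mask over the input; a consistency term penalises
# disagreement between the two granularities.
import numpy as np

def sentence_mask_from_tokens(token_mask, sent_spans):
    """Pool token-level mask scores into sentence-level scores."""
    return np.array([token_mask[s:e].mean() for s, e in sent_spans])

def consistency_loss(token_mask, sent_mask, sent_spans):
    """Squared error between pooled token mask and sentence mask."""
    pooled = sentence_mask_from_tokens(token_mask, sent_spans)
    return float(((pooled - sent_mask) ** 2).mean())

token_mask = np.array([0.9, 0.8, 0.1, 0.2, 0.7, 0.6])  # 6 tokens
sent_spans = [(0, 2), (2, 4), (4, 6)]                   # 3 sentences
sent_mask = np.array([0.85, 0.10, 0.70])
print(consistency_loss(token_mask, sent_mask, sent_spans))
```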
arXiv Detail & Related papers (2023-05-16T12:31:53Z)
- MEGAN: Multi-Explanation Graph Attention Network [1.1470070927586016]
We propose a multi-explanation graph attention network (MEGAN)
Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels.
Our attention-based network is fully differentiable, and explanations can be actively trained in an explanation-supervised manner.
arXiv Detail & Related papers (2022-11-23T16:10:13Z)
- Adversarial Contrastive Learning for Evidence-aware Fake News Detection with Graph Neural Networks [20.282527436527765]
We propose a unified Graph-based sEmantic structure mining framework with ConTRAstive Learning, GETRAL for short.
We first model claims and evidence as graph-structured data to capture long-distance semantic dependencies.
Then the fine-grained semantic representations are fed into the claim-evidence interaction module for predictions.
arXiv Detail & Related papers (2022-10-11T14:54:37Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables IB to grasp the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Towards Fine-Grained Reasoning for Fake News Detection [43.497126436856426]
We move towards fine-grained reasoning for fake news detection by better reflecting the logical processes of human thinking.
In particular, we propose a fine-grained reasoning framework by following the human information-processing model.
arXiv Detail & Related papers (2021-09-13T15:45:36Z)
- A Multi-Level Attention Model for Evidence-Based Fact Checking [58.95413968110558]
We present a simple model that can be trained on sequence structures.
Results on a large-scale dataset for Fact Extraction and VERification (FEVER) show that our model outperforms the graph-based approaches.
arXiv Detail & Related papers (2021-06-02T05:40:12Z)
- Dynamic Semantic Graph Construction and Reasoning for Explainable Multi-hop Science Question Answering [50.546622625151926]
We propose a new framework to exploit more valid facts while obtaining explainability for multi-hop QA.
Our framework contains three new ideas: (a) AMR-SG, an AMR-based Semantic Graph, constructed from candidate fact AMRs to uncover any-hop relations among the question, answer, and multiple facts, (b) a novel path-based fact analytics approach exploiting AMR-SG to extract active facts from a large fact pool to answer questions, and (c) fact-level relation modeling leveraging a graph convolutional network (GCN) to guide the reasoning process.
arXiv Detail & Related papers (2021-05-25T09:14:55Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
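A minimal reconstruction of such a gradient-supervision objective (our reading of the idea; the model, names, and exact loss are assumptions): align the model's input gradient with the edit direction that turns an example into its counterfactual.

```python
# Hedged sketch of gradient supervision on a logistic model: for a pair
# of minimally-different examples with different labels, push the input
# gradient to point along the direction that separates the pair.
import numpy as np

def input_gradient(w, x):
    """d sigmoid(w.x) / dx for a logistic model."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return p * (1.0 - p) * w

def gradient_supervision_loss(w, x, x_cf):
    """1 - cosine(input gradient, counterfactual edit direction)."""
    g = input_gradient(w, x)
    d = x_cf - x                      # direction that flips the label
    cos = g @ d / (np.linalg.norm(g) * np.linalg.norm(d) + 1e-8)
    return 1.0 - cos

w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.0, 0.5])        # original example
x_cf = np.array([1.0, 0.2, -0.5])    # minimally-edited counterfactual
print(gradient_supervision_loss(w, x, x_cf))
```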
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.