Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop
Fact Verification
- URL: http://arxiv.org/abs/2305.09400v1
- Date: Tue, 16 May 2023 12:31:53 GMT
- Title: Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop
Fact Verification
- Authors: Jiasheng Si, Yingjie Zhu, Deyu Zhou
- Abstract summary: This paper explores the viability of multi-granular rationale extraction with consistency and faithfulness for explainable multi-hop fact verification.
In particular, given a pretrained veracity prediction model, both the token-level explainer and sentence-level explainer are trained simultaneously to obtain multi-granular rationales.
Experimental results on three multi-hop fact verification datasets show that the proposed approach outperforms some state-of-the-art baselines.
- Score: 13.72453491358488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of deep learning models on multi-hop fact verification has
prompted researchers to understand the model behavior behind their veracity predictions. One
possible way is erasure search: obtaining the rationale by entirely removing a
subset of input without compromising the veracity prediction. Although
extensively explored, existing approaches fall within the scope of the
single-granular (tokens or sentences) explanation, which inevitably leads to
explanation redundancy and inconsistency. To address such issues, this paper
explores the viability of multi-granular rationale extraction with consistency
and faithfulness for explainable multi-hop fact verification. In particular,
given a pretrained veracity prediction model, both the token-level explainer
and sentence-level explainer are trained simultaneously to obtain
multi-granular rationales via differentiable masking. Meanwhile, three
diagnostic properties (fidelity, consistency, salience) are introduced and
applied to the training process, to ensure that the extracted rationales
satisfy faithfulness and consistency. Experimental results on three multi-hop
fact verification datasets show that the proposed approach outperforms some
state-of-the-art baselines.
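As a concrete illustration of the approach described above, the sketch below jointly trains a token-level and a sentence-level explainer on top of a frozen veracity model using a soft differentiable mask. This is a minimal PyTorch sketch based only on the abstract: the sigmoid relaxation, the module names, and the exact forms of the fidelity, consistency, and sparsity terms are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Explainer(nn.Module):
    """Predicts a soft mask (one scalar in [0, 1] per unit) from hidden states."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, hidden):                  # hidden: (batch, units, hidden_dim)
        return torch.sigmoid(self.scorer(hidden)).squeeze(-1)   # (batch, units)

def multi_granular_loss(model, token_h, sent_h, sent_spans, label,
                        tok_explainer, sent_explainer, lam=0.1):
    """Hypothetical joint objective: fidelity + consistency + sparsity."""
    tok_mask = tok_explainer(token_h)           # (batch, n_tokens)
    sent_mask = sent_explainer(sent_h)          # (batch, n_sents)

    # Fidelity: the masked input should preserve the frozen model's verdict.
    masked_logits = model(token_h * tok_mask.unsqueeze(-1))
    fidelity = F.cross_entropy(masked_logits, label)

    # Consistency: each sentence's mask should agree with the mean mask
    # of the tokens inside that sentence.
    tok_per_sent = torch.stack(
        [tok_mask[:, s:e].mean(dim=1) for s, e in sent_spans], dim=1)
    consistency = F.mse_loss(sent_mask, tok_per_sent)

    # Sparsity: keep rationales compact (a stand-in for the salience property).
    sparsity = tok_mask.mean() + sent_mask.mean()

    return fidelity + consistency + lam * sparsity
```

In a training loop, only the two explainers would receive gradient updates; the veracity model stays frozen, matching the post-hoc setting described in the abstract.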
Related papers
- Uncertainty Quantification via Hölder Divergence for Multi-View Representation Learning [18.419742575630217]
This paper introduces a novel algorithm based on Hölder Divergence (HD) to enhance the reliability of multi-view learning.
Through Dempster-Shafer theory, uncertainty from different modalities is integrated, thereby generating a comprehensive result.
Mathematically, HD proves to better measure the 'distance' between the real data distribution and the predictive distribution of the model.
arXiv Detail & Related papers (2024-10-29T04:29:44Z)
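For the Hölder-divergence entry above: one textbook form of the quantity, the Hölder pseudo-divergence for conjugate exponents 1/alpha + 1/beta = 1, is sketched below. This is a generic definition, not the estimator used in that paper.

```python
import numpy as np

def holder_pseudo_divergence(p, q, alpha=2.0):
    """Hölder pseudo-divergence between two discrete distributions.

    D_alpha(p : q) = -log( <p, q> / (||p||_alpha * ||q||_beta) )
    with 1/alpha + 1/beta = 1. It is non-negative by Hölder's inequality
    and zero exactly when p**alpha is proportional to q**beta.
    """
    beta = alpha / (alpha - 1.0)                # conjugate exponent of alpha
    inner = np.sum(p * q)
    norm_p = np.sum(p ** alpha) ** (1.0 / alpha)
    norm_q = np.sum(q ** beta) ** (1.0 / beta)
    return -np.log(inner / (norm_p * norm_q))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(holder_pseudo_divergence(p, q))           # small positive value
```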
- Revealing Multimodal Contrastive Representation Learning through Latent Partial Causal Models [85.67870425656368]
We introduce a unified causal model specifically designed for multimodal data.
We show that multimodal contrastive representation learning excels at identifying latent coupled variables.
Experiments demonstrate the robustness of our findings, even when the assumptions are violated.
arXiv Detail & Related papers (2024-02-09T07:18:06Z)
- HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z)
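One way to read the "rationales as sets" formulation in the HOP, UNION, GENERATE entry above is to marginalize the answer score over small subsets of the evidence. The toy sketch below illustrates only that reading; both probability functions are hypothetical placeholders, not the paper's model.

```python
from itertools import combinations

def marginal_answer_score(docs, answer, rationale_prior,
                          answer_given_rationale, max_size=2):
    """Score an answer by summing over candidate rationale *sets*:

        p(answer | docs) = sum_R p(answer | R) * p(R | docs),

    where R ranges over subsets of the documents up to max_size.
    `rationale_prior` and `answer_given_rationale` stand in for
    learned scoring models.
    """
    total = 0.0
    for k in range(1, max_size + 1):
        for subset in combinations(docs, k):
            total += (answer_given_rationale(answer, subset)
                      * rationale_prior(subset, docs))
    return total
```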
- Exploring Faithful Rationale for Multi-hop Fact Verification via Salience-Aware Graph Learning [13.72453491358488]
We use a graph convolutional network (GCN) with salience-aware graph learning to solve multi-hop fact verification.
Results show significant gains over previous state-of-the-art methods for both rationale extraction and fact verification.
arXiv Detail & Related papers (2022-12-02T09:54:05Z)
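For the salience-aware graph learning entry above, here is a minimal sketch of one plausible mechanism: learned per-node salience scores reweight the adjacency matrix before GCN message passing. The layer below is an illustrative assumption, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SalienceAwareGCNLayer(nn.Module):
    """One GCN layer whose edges are reweighted by learned node salience."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.salience = nn.Linear(in_dim, 1)    # per-node salience scorer

    def forward(self, x, adj):                  # x: (n, in_dim), adj: (n, n)
        s = torch.sigmoid(self.salience(x))     # (n, 1), salience in [0, 1]
        weighted_adj = adj * s.T                # scale messages by source salience
        deg = weighted_adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        h = (weighted_adj @ self.linear(x)) / deg   # degree-normalized aggregation
        return torch.relu(h)
```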
- FASTER-CE: Fast, Sparse, Transparent, and Robust Counterfactual Explanations [9.532938405575788]
We present FASTER-CE: a novel set of algorithms to generate fast, sparse, and robust counterfactual explanations.
The key idea is to efficiently find promising search directions for counterfactuals in a latent space that is specified via an autoencoder.
The ability to quickly examine combinations of the most promising gradient directions as well as to incorporate additional user-defined constraints allows us to generate multiple counterfactual explanations.
arXiv Detail & Related papers (2022-10-12T20:41:17Z)
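For the FASTER-CE entry above, the core idea, searching an autoencoder's latent space for counterfactuals, might look like the loop below. The encoder/decoder/classifier interfaces and the loss weights are assumptions for illustration; the actual FASTER-CE algorithms also handle constraints not shown here.

```python
import torch
import torch.nn.functional as F

def latent_counterfactual(x, classifier, encoder, decoder,
                          target_class, steps=200, lr=0.05, penalty=0.1):
    """Gradient search in latent space for a counterfactual of x (batch of 1).

    Moves the latent code toward the target class while an L1 term keeps
    the decoded counterfactual close to the original input (sparse changes).
    """
    z = encoder(x).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_cf = decoder(z)
        logits = classifier(x_cf)
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss = loss + penalty * (x_cf - x).abs().mean()   # stay close to x
        opt.zero_grad()
        loss.backward()
        opt.step()
        if logits.argmax(dim=-1).item() == target_class:  # prediction flipped
            break
    return decoder(z).detach()
```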
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables IB to grasp the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
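In pseudocode, the check described in the Faithfulness-through-Counterfactuals entry above might look like the harness below: negate one predicate from the explanation, rebuild the hypothesis, and test whether the model's prediction moves the way the logic says it should. Every interface here (nli_model, the predicate records) is a hypothetical stand-in, not the paper's pipeline.

```python
def counterfactual_faithfulness(nli_model, premise, hypothesis, predicates):
    """Probe whether an explanation's logic actually drives the prediction.

    `nli_model(premise, hypothesis)` is assumed to return one of
    "entailment", "contradiction", or "neutral"; each predicate record
    carries its surface text and a negated form.
    """
    original = nli_model(premise, hypothesis)
    report = []
    for pred in predicates:
        cf_hypothesis = hypothesis.replace(pred["text"], pred["negation"])
        cf_label = nli_model(premise, cf_hypothesis)
        # Negating a load-bearing predicate should change the label;
        # if it does not, the explanation is unfaithful to the model.
        report.append({"predicate": pred["text"],
                       "label_changed": cf_label != original})
    return report
```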
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take several aspects of the produced adversarial instances into account.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- ReasonBERT: Pre-trained to Reason with Distant Supervision [17.962648165675684]
We present ReasonBERT, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid contexts.
Different types of reasoning are simulated, including intersecting multiple pieces of evidence, bridging from one piece of evidence to another, and detecting unanswerable cases.
arXiv Detail & Related papers (2021-09-10T14:49:44Z)
- Counterfactual Evaluation for Explainable AI [21.055319253405603]
We propose a new methodology to evaluate the faithfulness of explanations from the counterfactual reasoning perspective.
We introduce two algorithms to find the proper counterfactuals in both discrete and continuous scenarios and then use the acquired counterfactuals to measure faithfulness.
arXiv Detail & Related papers (2021-09-05T01:38:49Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z)