Causal Walk: Debiasing Multi-Hop Fact Verification with Front-Door
Adjustment
- URL: http://arxiv.org/abs/2403.02698v1
- Date: Tue, 5 Mar 2024 06:28:02 GMT
- Title: Causal Walk: Debiasing Multi-Hop Fact Verification with Front-Door
Adjustment
- Authors: Congzhi Zhang, Linhai Zhang, Deyu Zhou
- Abstract summary: Causal Walk is a novel method for debiasing multi-hop fact verification from a causal perspective.
Results show that Causal Walk outperforms some previous debiasing methods on both existing datasets and newly constructed datasets.
- Score: 27.455646975256986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional multi-hop fact verification models are prone to rely on spurious
correlations from the annotation artifacts, leading to an obvious performance
decline on unbiased datasets. Among the various debiasing works, causal
inference-based methods have become popular by performing theoretically
guaranteed debiasing such as causal intervention or counterfactual reasoning. However,
existing causal inference-based debiasing methods, which mainly formulate fact
verification as a single-hop reasoning task to tackle shallow bias patterns,
cannot deal with the complicated bias patterns hidden in multiple hops of
evidence. To address the challenge, we propose Causal Walk, a novel method for
debiasing multi-hop fact verification from a causal perspective with front-door
adjustment. Specifically, in the structural causal model, the reasoning path
between the treatment (the input claim-evidence graph) and the outcome (the
veracity label) is introduced as the mediator to block the confounder. With the
front-door adjustment, the causal effect between the treatment and the outcome
is decomposed into the causal effect between the treatment and the mediator,
which is estimated by applying the idea of random walk, and the causal effect
between the mediator and the outcome, which is estimated with normalized
weighted geometric mean approximation. To investigate the effectiveness of the
proposed method, an adversarial multi-hop fact verification dataset and a
symmetric multi-hop fact verification dataset are constructed with the help of a
large language model. Experimental results show that Causal Walk outperforms
some previous debiasing methods on both existing datasets and the newly
constructed datasets. Code and data will be released at
https://github.com/zcccccz/CausalWalk.
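The front-door adjustment the abstract relies on can be made concrete with a small numeric sketch: the effect of the treatment X on the outcome Y is routed through a mediator M (in the paper, the reasoning path over the claim-evidence graph). The toy probability tables and the `front_door` function below are illustrative assumptions for a discrete model, not part of the paper's implementation, which instead estimates the two terms with random walks and a normalized weighted geometric mean.

```python
# Minimal sketch of front-door adjustment on a toy discrete SCM.
# All probability tables are made-up values for illustration only.

# P(x): marginal of the treatment
p_x = {0: 0.6, 1: 0.4}
# P(m | x): treatment -> mediator (first term of the decomposition)
p_m_given_x = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
# P(y | m, x): outcome given mediator and treatment (second term)
p_y_given_mx = {(0, 0): {0: 0.9, 1: 0.1},
                (0, 1): {0: 0.8, 1: 0.2},
                (1, 0): {0: 0.3, 1: 0.7},
                (1, 1): {0: 0.2, 1: 0.8}}

def front_door(x, y):
    """P(y | do(x)) = sum_m P(m|x) * sum_x' P(y|m,x') * P(x')."""
    total = 0.0
    for m, p_m in p_m_given_x[x].items():
        # Average over the treatment marginal blocks the backdoor
        # path through the unobserved confounder.
        inner = sum(p_y_given_mx[(m, xp)][y] * p_x[xp] for xp in p_x)
        total += p_m * inner
    return total

# The interventional distribution is a proper distribution over y.
print(front_door(1, 1))
```

The decomposition is exactly the one the abstract names: `p_m_given_x` plays the role of the treatment-to-mediator effect (estimated in the paper via random walk), and the inner sum plays the role of the mediator-to-outcome effect (approximated there with a normalized weighted geometric mean).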
Related papers
- Looking at Model Debiasing through the Lens of Anomaly Detection [11.113718994341733]
Deep neural networks are sensitive to bias in the data.
We propose a new bias identification method based on anomaly detection.
We reach state-of-the-art performance on synthetic and real benchmark datasets.
arXiv Detail & Related papers (2024-07-24T17:30:21Z)
- DINER: Debiasing Aspect-based Sentiment Analysis with Multi-variable Causal Inference [21.929902181609936]
We propose a novel framework based on multi-variable causal inference for debiasing ABSA.
For the review branch, the bias is modeled as indirect confounding from context, where backdoor adjustment intervention is employed for debiasing.
For the aspect branch, the bias is described as a direct correlation with labels, where counterfactual reasoning is adopted for debiasing.
arXiv Detail & Related papers (2024-03-02T10:38:31Z)
- Federated Causal Discovery from Heterogeneous Data [70.31070224690399]
We propose a novel FCD method attempting to accommodate arbitrary causal models and heterogeneous data.
These approaches involve constructing summary statistics as a proxy of the raw data to protect data privacy.
We conduct extensive experiments on synthetic and real datasets to show the efficacy of our method.
arXiv Detail & Related papers (2024-02-20T18:53:53Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Disentangled Representation for Causal Mediation Analysis [25.114619307838602]
Causal mediation analysis is a method that is often used to reveal direct and indirect effects.
Deep learning shows promise in mediation analysis, but the current methods only assume latent confounders that affect treatment, mediator and outcome simultaneously.
We propose the Disentangled Mediation Analysis Variational AutoEncoder (DMAVAE), which disentangles the representations of latent confounders into three types to accurately estimate the natural direct effect, natural indirect effect and total effect.
arXiv Detail & Related papers (2023-02-19T23:37:17Z)
- Towards Disentangling Relevance and Bias in Unbiased Learning to Rank [40.604145263955765]
Unbiased learning to rank (ULTR) studies the problem of mitigating various biases from implicit user feedback data such as clicks.
We propose three methods to mitigate the negative confounding effects by better disentangling relevance and bias.
arXiv Detail & Related papers (2022-12-28T16:29:52Z)
- Valid Inference After Causal Discovery [73.87055989355737]
We develop tools for valid post-causal-discovery inference.
We show that a naive combination of causal discovery and subsequent inference algorithms leads to highly inflated miscoverage rates.
arXiv Detail & Related papers (2022-08-11T17:40:45Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.