Scalable Contrastive Causal Discovery under Unknown Soft Interventions
- URL: http://arxiv.org/abs/2603.03411v1
- Date: Tue, 03 Mar 2026 18:16:16 GMT
- Title: Scalable Contrastive Causal Discovery under Unknown Soft Interventions
- Authors: Mingxuan Zhang, Khushi Desai, Sopho Kevlishvili, Elham Azizi
- Abstract summary: We propose a scalable causal discovery model for paired observational and interventional settings with shared underlying causal structure and unknown soft interventions. Experiments on synthetic data demonstrate improved causal structure recovery, generalization to unseen graphs with held-out causal mechanisms, and scalability to larger graphs.
- Score: 3.165716101116899
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Observational causal discovery is only identifiable up to the Markov equivalence class. While interventions can reduce this ambiguity, in practice interventions are often soft with multiple unknown targets. In many realistic scenarios, only a single intervention regime is observed. We propose a scalable causal discovery model for paired observational and interventional settings with shared underlying causal structure and unknown soft interventions. The model aggregates subset-level PDAGs and applies contrastive cross-regime orientation rules to construct a globally consistent maximal PDAG under Meek closure, enabling generalization to both in-distribution and out-of-distribution settings. Theoretically, we prove that our model is sound with respect to a restricted $\Psi$-equivalence class induced solely by the information available in the subset-restricted setting. We further show that the model asymptotically recovers the corresponding identifiable PDAG and can orient additional edges compared to non-contrastive subset-restricted methods. Experiments on synthetic data demonstrate improved causal structure recovery, generalization to unseen graphs with held-out causal mechanisms, and scalability to larger graphs, with ablations supporting the theoretical results.
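The abstract's orientation step closes the aggregated PDAG under the Meek rules. As a minimal sketch of what such a closure involves (a simplified illustrative implementation of Meek rule R1 only, not the paper's contrastive cross-regime rules), the following repeatedly orients an undirected edge b - c as b -> c whenever some a -> b exists and a, c are non-adjacent:

```python
def meek_rule_1(directed, undirected):
    """Apply Meek rule R1 to a PDAG until fixpoint.

    R1: if a -> b is directed, b - c is undirected, and a, c are not
    adjacent, orient b - c as b -> c (otherwise a -> b <- c would be a
    v-structure, which would already have been detected).

    `directed` is a set of (u, v) pairs meaning u -> v; `undirected`
    is a set of frozensets {u, v}. Returns the updated edge sets.
    """
    directed, undirected = set(directed), set(undirected)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(directed):
            for edge in list(undirected):
                if b not in edge or edge not in undirected:
                    continue
                c = next(iter(edge - {b}))
                adjacent = ((a, c) in directed or (c, a) in directed
                            or frozenset({a, c}) in undirected)
                if not adjacent:
                    undirected.remove(edge)
                    directed.add((b, c))
                    changed = True
    return directed, undirected
```

Full Meek closure also applies rules R2-R4; this sketch only conveys the flavor of propagating orientations to a fixpoint.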
Related papers
- FlexCausal: Flexible Causal Disentanglement via Structural Flow Priors and Manifold-Aware Interventions [1.7114074082429929]
Causal Disentangled Representation Learning (CDRL) aims to learn and disentangle low-dimensional representations from observations. We propose FlexCausal, a novel CDRL framework based on a block-diagonal covariance VAE. Our framework ensures a precise structural correspondence between the learned latent subspaces and the ground-truth causal relations.
arXiv Detail & Related papers (2026-01-29T11:30:53Z) - Fair Deepfake Detectors Can Generalize [51.21167546843708]
We show that controlling for confounders (data distribution and model capacity) enables improved generalization via fairness interventions. Motivated by this insight, we propose Demographic Attribute-insensitive Intervention Detection (DAID), a plug-and-play framework composed of: i) demographic-aware data rebalancing, which employs inverse-propensity weighting and subgroup-wise feature normalization to neutralize distributional biases; and ii) demographic-agnostic feature aggregation, which uses a novel alignment loss to suppress sensitive-attribute signals. DAID consistently achieves superior performance in both fairness and generalization compared to several state-of-the-art methods.
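Inverse-propensity weighting, as used in DAID's rebalancing step, can be sketched as follows (an illustrative helper, not the authors' code): each sample is weighted by the inverse of its subgroup frequency, so every demographic subgroup contributes the same total mass to the training loss.

```python
import numpy as np

def inverse_propensity_weights(group_labels):
    """Per-sample weights inversely proportional to subgroup frequency,
    normalized so the average weight is 1."""
    labels = np.asarray(group_labels)
    groups, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(groups.tolist(), (counts / counts.sum()).tolist()))
    w = np.array([1.0 / freq[g] for g in labels])
    return w / w.mean()
```

With these weights, each subgroup's total weight is identical regardless of how imbalanced the raw sample counts are.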
arXiv Detail & Related papers (2025-07-03T14:10:02Z) - Improving Group Robustness on Spurious Correlation via Evidential Alignment [26.544938760265136]
Deep neural networks often learn and rely on spurious correlations, i.e., superficial associations between non-causal features and the targets. Existing methods typically mitigate this issue by using external group annotations or auxiliary deterministic models. We propose Evidential Alignment, a novel framework that leverages uncertainty quantification to understand the behavior of the biased models.
arXiv Detail & Related papers (2025-06-12T22:47:21Z) - Identifiability Guarantees for Causal Disentanglement from Soft Interventions [26.435199501882806]
Causal disentanglement aims to uncover a representation of data using latent variables that are interrelated through a causal model.
In this paper, we focus on the scenario where unpaired observational and interventional data are available, with each intervention changing the mechanism of a latent variable.
When the causal variables are fully observed, statistically consistent algorithms have been developed to identify the causal model under faithfulness assumptions.
arXiv Detail & Related papers (2023-07-12T15:39:39Z) - Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Modeling Causal Mechanisms with Diffusion Models for Interventional and Counterfactual Queries [10.818661865303518]
We consider the problem of answering observational, interventional, and counterfactual queries in a causally sufficient setting.
We introduce diffusion-based causal models (DCM) to learn causal mechanisms, that generate unique latent encodings.
Our empirical evaluations demonstrate significant improvements over existing state-of-the-art methods for answering causal queries.
arXiv Detail & Related papers (2023-02-02T04:08:08Z) - BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery [97.79015388276483]
A structural equation model (SEM) is an effective framework to reason over causal relationships represented via a directed acyclic graph (DAG).
Recent advances enabled effective maximum-likelihood point estimation of DAGs from observational data.
We propose BCD Nets, a variational framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM.
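For context, a linear-Gaussian SEM over a DAG generates each variable as a linear combination of its parents plus Gaussian noise. The sketch below samples from such a model given a weighted adjacency matrix whose nodes are already in topological order (an illustrative helper, not BCD Nets itself):

```python
import numpy as np

def sample_linear_gaussian_sem(W, n_samples, noise_std=1.0, seed=0):
    """Sample from a linear-Gaussian SEM x_j = sum_i W[i, j] * x_i + eps_j.

    W[i, j] != 0 means edge i -> j; W is assumed strictly upper
    triangular, i.e. node indices follow a topological order.
    """
    rng = np.random.default_rng(seed)
    d = W.shape[0]
    X = np.zeros((n_samples, d))
    eps = rng.normal(0.0, noise_std, size=(n_samples, d))
    for j in range(d):  # parents of j all have index < j
        X[:, j] = X @ W[:, j] + eps[:, j]
    return X
```

A variational method like BCD Nets places a distribution over both W and the node ordering rather than fixing them as this sketch does.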
arXiv Detail & Related papers (2021-12-06T03:35:21Z) - Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z) - Partially Intervenable Causal Models [22.264327945288642]
We build on a unification of graphical and potential outcomes approaches to causality to define graphical models with a restricted set of allowed interventions.
A corollary of our results is a complete identification theory for causal effects in another graphical framework with a restricted set of interventions.
arXiv Detail & Related papers (2021-10-24T22:24:57Z) - Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z) - Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be learned as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.