Causal Graph Learning via Distributional Invariance of Cause-Effect Relationship
- URL: http://arxiv.org/abs/2602.03353v1
- Date: Tue, 03 Feb 2026 10:26:16 GMT
- Title: Causal Graph Learning via Distributional Invariance of Cause-Effect Relationship
- Authors: Nang Hung Nguyen, Phi Le Nguyen, Thao Nguyen Truong, Trong Nghia Hoang, Masashi Sugiyama
- Abstract summary: We develop an algorithm that efficiently uncovers causal relationships with quadratic complexity in the number of observational variables. Our experiments on a varied benchmark of large-scale datasets show superior or equivalent performance compared to existing works.
- Score: 54.575090553659074
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a new framework for recovering causal graphs from observational data, leveraging the observation that the distribution of an effect, conditioned on its causes, remains invariant to changes in the prior distribution of those causes. This insight enables a direct test for potential causal relationships by checking the variance of their corresponding effect-cause conditional distributions across multiple downsampled subsets of the data. These subsets are selected to reflect different prior cause distributions, while preserving the effect-cause conditional relationships. Using this invariance test and exploiting an (empirical) sparsity of most causal graphs, we develop an algorithm that efficiently uncovers causal relationships with quadratic complexity in the number of observational variables, reducing the processing time by up to 25x compared to state-of-the-art methods. Our empirical experiments on a varied benchmark of large-scale datasets show superior or equivalent performance compared to existing works, while achieving enhanced scalability.
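The invariance test sketched in the abstract can be illustrated on a toy bivariate example. The snippet below is an illustrative reconstruction, not the authors' implementation: the helper names (`fit_slope`, `subset_slopes`), the Gaussian reweighting scheme, and the restriction to linear conditional fits are all assumptions. The idea it demonstrates is the one the abstract states: downsampled subsets with different cause marginals leave the effect-on-cause conditional stable, while a restricted fit in the anticausal direction drifts.

```python
# Illustrative sketch of the invariance test (not the authors' code):
# for the true direction X -> Y, the fitted conditional of Y given X is
# stable across subsets with different marginals of X, while a restricted
# (here: linear) fit in the anticausal direction drifts as the marginal shifts.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.uniform(-1.0, 1.0, n)            # cause, non-Gaussian prior
y = 2.0 * x + rng.normal(0.0, 0.5, n)    # effect: linear mechanism + noise

def fit_slope(a, b):
    """Least-squares slope of b regressed on a (a 1-D conditional model)."""
    a0 = a - a.mean()
    return float(a0 @ (b - b.mean()) / (a0 @ a0))

def subset_slopes(cause, effect, n_subsets=5):
    """Downsample so each subset reflects a different prior over `cause`,
    then fit the effect-on-cause conditional within each subset."""
    slopes = []
    scale = cause.std()
    for shift in np.linspace(-1.5, 1.5, n_subsets) * scale:
        # weights concentrate mass around `shift`, changing the marginal
        # of `cause` while preserving the true conditional of effect | cause
        w = np.exp(-0.5 * ((cause - shift) / (0.5 * scale)) ** 2)
        idx = rng.choice(cause.size, size=5_000, p=w / w.sum())
        slopes.append(fit_slope(cause[idx], effect[idx]))
    return np.array(slopes)

forward = subset_slopes(x, y)   # Y on X: stable, close to the true slope 2
backward = subset_slopes(y, x)  # X on Y: varies with the shifted marginal
print("forward spread:", forward.std(), "backward spread:", backward.std())
```

The asymmetry that makes this a direction test is that the causal mechanism lies in the fitted model class, so its estimate is insensitive to the cause's marginal; the anticausal conditional generally does not, so its best restricted approximation shifts with the reweighting. The paper's actual test statistic and subset-selection procedure may differ from this simplification.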
Related papers
- Moment Matters: Mean and Variance Causal Graph Discovery from Heteroscedastic Observational Data [2.436681150766912]
Heteroscedasticity -- where the variance of a variable changes with other variables -- is pervasive in real data. Standard causal discovery does not reveal which causes act on the mean versus the variance, as it returns a single moment-agnostic graph. We propose a Bayesian, moment-driven causal discovery framework that infers separate mean and variance causal graphs from observational heteroscedastic data.
arXiv Detail & Related papers (2026-02-27T02:13:03Z) - Effects of Distributional Biases on Gradient-Based Causal Discovery in the Bivariate Categorical Case [0.4339839287869652]
We show that gradient-based causal discovery methods can be susceptible to distributional biases in the data they are trained on. We employ two simple models that derive causal factorizations by learning marginal or conditional data distributions. An empirical evaluation of two related approaches indicates that eliminating competition between possible causal factorizations can make models robust to the presented biases.
arXiv Detail & Related papers (2025-09-01T17:08:03Z) - Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z) - A Sparsity Principle for Partially Observable Causal Representation Learning [28.25303444099773]
Causal representation learning aims at identifying high-level causal variables from perceptual data.
We focus on learning from unpaired observations from a dataset with an instance-dependent partial observability pattern.
We propose two methods for estimating the underlying causal variables by enforcing sparsity in the inferred representation.
arXiv Detail & Related papers (2024-03-13T08:40:49Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data. One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Towards Causal Representation Learning and Deconfounding from Indefinite Data [17.793702165499298]
Non-statistical data (e.g., images, text, etc.) encounters significant conflicts in terms of properties and methods with traditional causal data.
We redefine causal data from two novel perspectives and then propose three data paradigms.
We implement the above designs as a dynamic variational inference model, tailored to learn causal representation from indefinite data.
arXiv Detail & Related papers (2023-05-04T08:20:37Z) - Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity acts as a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z) - Deconfounded Score Method: Scoring DAGs with Dense Unobserved
Confounding [101.35070661471124]
We show that unobserved confounding leaves a characteristic footprint in the observed data distribution that allows for disentangling spurious and causal effects.
We propose an adjusted score-based causal discovery algorithm that may be implemented with general-purpose solvers and scales to high-dimensional problems.
arXiv Detail & Related papers (2021-03-28T11:07:59Z) - CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.