Latent Variable Causal Discovery under Selection Bias
- URL: http://arxiv.org/abs/2512.11219v1
- Date: Fri, 12 Dec 2025 02:00:01 GMT
- Title: Latent Variable Causal Discovery under Selection Bias
- Authors: Haoyue Dai, Yiwen Qiu, Ignavier Ng, Xinshuai Dong, Peter Spirtes, Kun Zhang
- Abstract summary: We study rank constraints, which, as a generalization of conditional independence constraints, exploit the ranks of covariance submatrices in linear Gaussian models. We show that although selection can significantly complicate the joint distribution, the ranks in the biased covariance matrices still preserve meaningful information about both causal structures and selection mechanisms. Using this tool, we demonstrate that the one-factor model, a classical latent variable model, can be identified under selection bias.
- Score: 31.127374830119454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Addressing selection bias in latent variable causal discovery is important yet underexplored, largely due to a lack of suitable statistical tools: While various tools beyond basic conditional independencies have been developed to handle latent variables, none have been adapted for selection bias. We make an attempt by studying rank constraints, which, as a generalization of conditional independence constraints, exploit the ranks of covariance submatrices in linear Gaussian models. We show that although selection can significantly complicate the joint distribution, interestingly, the ranks in the biased covariance matrices still preserve meaningful information about both causal structures and selection mechanisms. We provide a graph-theoretic characterization of such rank constraints. Using this tool, we demonstrate that the one-factor model, a classical latent variable model, can be identified under selection bias. Simulations and real-world experiments confirm the effectiveness of using our rank constraints.
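The abstract's central claim can be illustrated with a small simulation. The sketch below (not the paper's algorithm; variable names, loadings, and the selection rule are illustrative assumptions) generates a linear Gaussian one-factor model, applies selection that depends on two of the observed variables, and checks that the cross-covariance submatrix between the two variable groups keeps its rank-1 structure in the biased sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# One-factor linear Gaussian model: a single latent L drives X1..X4.
L = rng.normal(size=n)
loadings = np.array([1.0, 0.8, -0.6, 1.2])  # illustrative coefficients
X = L[:, None] * loadings + rng.normal(scale=0.5, size=(n, 4))

# Selection bias: retain only samples where a noisy function of X1 and X2
# exceeds a threshold (i.e., selection is a child of X1 and X2).
kept = (X[:, 0] + X[:, 1] + rng.normal(scale=0.3, size=n)) > 0.5
Xs = X[kept]

def numerical_rank(M, tol=5e-2):
    """Numerical rank via singular values relative to the largest one."""
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > tol * s[0]).sum())

# Cross-covariance of {X1, X2} vs {X3, X4}: rank 1 in the full population,
# since all dependence between the groups flows through the latent factor.
rank_full = numerical_rank(np.cov(X, rowvar=False)[:2, 2:])

# The rank-1 constraint survives selection on (X1, X2): the noise terms of
# X3 and X4 remain independent of (X1, X2) after truncation, so the biased
# cross-covariance is still an outer product of two vectors.
rank_biased = numerical_rank(np.cov(Xs, rowvar=False)[:2, 2:])
print(rank_full, rank_biased)
```

Both ranks come out as 1, matching the abstract's point that rank constraints in the biased covariance matrix still carry structural information even though the joint distribution itself is distorted by selection.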
Related papers
- Causal Graph Learning via Distributional Invariance of Cause-Effect Relationship [54.575090553659074]
We develop an algorithm that efficiently uncovers causal relationships with quadratic complexity in the number of observational variables. Our experiments on a varied benchmark of large-scale datasets show superior or equivalent performance compared to existing works.
arXiv Detail & Related papers (2026-02-03T10:26:16Z) - Diffusion-Driven High-Dimensional Variable Selection [6.993247097440294]
We propose a resample-aggregate framework that exploits diffusion models' ability to generate high-fidelity synthetic data. We show that the proposed method is selection consistent under mild assumptions. Our method advances variable selection methodology and broadens the toolkit for interpretable, statistically rigorous analysis.
arXiv Detail & Related papers (2025-08-19T14:54:20Z) - A Complete Decomposition of KL Error using Refined Information and Mode Interaction Selection [11.994525728378603]
We revisit the classical formulation of the log-linear model with a focus on higher-order mode interactions.
We find that our learned distributions are able to more efficiently use the finite amount of data which is available in practice.
arXiv Detail & Related papers (2024-10-15T18:08:32Z) - Detecting and Identifying Selection Structure in Sequential Data [53.24493902162797]
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences.
We show that selection structure is identifiable without any parametric assumptions or interventional experiments.
We also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies.
arXiv Detail & Related papers (2024-06-29T20:56:34Z) - Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on the forward and backward feature selection procedures.
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
arXiv Detail & Related papers (2023-10-17T08:04:45Z) - Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - On the Strong Correlation Between Model Invariance and Generalization [54.812786542023325]
Generalization captures a model's ability to classify unseen data.
Invariance measures consistency of model predictions on transformations of the data.
From a dataset-centric view, we find a certain model's accuracy and invariance linearly correlated on different test sets.
arXiv Detail & Related papers (2022-07-14T17:08:25Z) - Causal Discovery in Heterogeneous Environments Under the Sparse Mechanism Shift Hypothesis [7.895866278697778]
Machine learning approaches commonly rely on the assumption of independent and identically distributed (i.i.d.) data.
In reality, this assumption is almost always violated due to distribution shifts between environments.
We propose the Mechanism Shift Score (MSS), a score-based approach amenable to various empirical estimators.
arXiv Detail & Related papers (2022-06-04T15:39:30Z) - Local Constraint-Based Causal Discovery under Selection Bias [8.465604845422238]
We consider the problem of discovering causal relations from independence constraints when selection bias, in addition to confounding, is present.
We focus instead on local patterns of independence relations, where we find no sound method over only three variables that can incorporate background knowledge.
Y-Structure patterns are shown to be sound in predicting causal relations from data under selection bias, where cycles may be present.
arXiv Detail & Related papers (2022-03-03T16:52:22Z) - Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z) - Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlation during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.