BUILD with Precision: Bottom-Up Inference of Linear DAGs
- URL: http://arxiv.org/abs/2512.16111v1
- Date: Thu, 18 Dec 2025 03:06:12 GMT
- Title: BUILD with Precision: Bottom-Up Inference of Linear DAGs
- Authors: Hamed Ajorlou, Samuel Rey, Gonzalo Mateos, Geert Leus, Antonio G. Marques
- Abstract summary: Learning the structure of directed acyclic graphs (DAGs) from observational data is a central problem in causal discovery, statistical signal processing, and machine learning. We show that the ensemble precision matrix of the observations exhibits a distinctive structure that facilitates DAG recovery. We propose BUILD, a deterministic stepwise algorithm that identifies leaf nodes and their parents, then prunes the leaves by removing incident edges to proceed to the next step, exactly reconstructing the DAG from the true precision matrix.
- Score: 35.95692184008531
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning the structure of directed acyclic graphs (DAGs) from observational data is a central problem in causal discovery, statistical signal processing, and machine learning. Under a linear Gaussian structural equation model (SEM) with equal noise variances, the problem is identifiable and we show that the ensemble precision matrix of the observations exhibits a distinctive structure that facilitates DAG recovery. Exploiting this property, we propose BUILD (Bottom-Up Inference of Linear DAGs), a deterministic stepwise algorithm that identifies leaf nodes and their parents, then prunes the leaves by removing incident edges to proceed to the next step, exactly reconstructing the DAG from the true precision matrix. In practice, precision matrices must be estimated from finite data, and ill-conditioning may lead to error accumulation across BUILD steps. As a mitigation strategy, we periodically re-estimate the precision matrix (with fewer variables as leaves are pruned), trading off runtime for enhanced robustness. Reproducible results on challenging synthetic benchmarks demonstrate that BUILD compares favorably to state-of-the-art DAG learning algorithms, while offering an explicit handle on complexity.
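The stepwise recovery admits a compact implementation. Below is a minimal sketch of the bottom-up idea, assuming the SEM convention $x = W^\top x + z$ with equal noise variances $\sigma^2$, under which the true precision matrix is $\Theta = (I - W)(I - W)^\top / \sigma^2$: the smallest diagonal entry of $\Theta$ flags a leaf, the corresponding off-diagonal column encodes its parents, and a Schur complement marginalizes the leaf out. The function name and the exact leaf and parent tests are illustrative and need not match BUILD's.

```python
import numpy as np

def build_sketch(Theta, tol=1e-8):
    """Bottom-up DAG recovery from a (true) precision matrix.

    Sketch only: assumes Theta = (I - W)(I - W)^T / sigma^2 for a
    linear Gaussian SEM x = W^T x + z with equal noise variances.
    """
    p = Theta.shape[0]
    nodes = list(range(p))           # original labels of surviving variables
    W = np.zeros((p, p))             # W[j, i] = weight of edge j -> i
    T = Theta.copy()
    while len(nodes) > 1:
        d = np.diag(T)
        i = int(np.argmin(d))        # minimal diagonal entry flags a leaf
        sigma2 = 1.0 / d[i]          # for a leaf i, Theta_ii = 1 / sigma^2
        for j in range(len(nodes)):
            if j != i and abs(T[j, i]) > tol:
                # for a leaf, Theta_ji = -W_ji / sigma^2
                W[nodes[j], nodes[i]] = -sigma2 * T[j, i]
        keep = [k for k in range(len(nodes)) if k != i]
        # Schur complement = precision of the marginal over the remaining
        # variables, which still follow the sub-SEM once the leaf is gone
        T = T[np.ix_(keep, keep)] - np.outer(T[keep, i], T[i, keep]) / T[i, i]
        nodes = [nodes[k] for k in keep]
    return W
```

With a finite-sample estimate of $\Theta$, the Schur-complement update is where errors compound; the paper's mitigation of periodically re-estimating the precision matrix on the surviving variables would slot in right there, replacing the update every few iterations.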
Related papers
- Nonlinear Causal Discovery through a Sequential Edge Orientation Approach [5.807183284468881]
We propose a sequential procedure to orient undirected edges in a completed partially directed acyclic graph (CPDAG). We prove that this procedure recovers the true causal DAG under a restricted additive noise model (ANM). Building on it, we develop a novel constraint-based algorithm for learning causal DAGs under nonlinear ANMs.
arXiv Detail & Related papers (2025-06-05T21:08:13Z)
- LOCAL: Learning with Orientation Matrix to Infer Causal Structure from Time Series Data [51.47827479376251]
LOCAL is a highly efficient, easy-to-implement, and constraint-free method for recovering dynamic causal structures. Its two components are an Asymptotic Causal Learning Mask (ACML) and Dynamic Graph Parameter Learning (DGPL). Experiments on synthetic and real-world datasets demonstrate that LOCAL significantly outperforms existing methods.
arXiv Detail & Related papers (2024-10-25T10:48:41Z)
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Non-negative Weighted DAG Structure Learning [12.139158398361868]
We address the problem of learning the true DAGs from nodal observations.
We propose a DAG recovery algorithm based on the method of multipliers that is guaranteed to return a global minimizer.
arXiv Detail & Related papers (2024-09-12T09:41:29Z)
- Recovering Linear Causal Models with Latent Variables via Cholesky Factorization of Covariance Matrix [21.698480201955213]
We propose a DAG structure recovery algorithm based on the Cholesky factorization of the covariance matrix of the observed data.
On synthetic and real-world datasets, the algorithm is significantly faster than previous methods and achieves state-of-the-art performance.
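To see why a Cholesky-type factorization exposes the structure: when the variables are listed in a topological order, the covariance of a linear SEM $x = W^\top x + z$, $z \sim \mathcal{N}(0, D)$, factors as $\Sigma = L D L^\top$ with $L = (I - W^\top)^{-1}$ unit lower triangular, so the weights can be read off the factor. A minimal sketch of that identity follows; the paper's algorithm additionally handles latent variables and does not presume the ordering.

```python
import numpy as np

def weights_from_covariance(Sigma):
    """Read SEM weights off the Cholesky factor of the covariance.

    Assumes variables are already in topological order, so that
    Sigma = L D L^T with L = (I - W^T)^{-1} unit lower triangular.
    """
    C = np.linalg.cholesky(Sigma)              # Sigma = C C^T, C lower triangular
    d = np.diag(C)
    L = C / d                                  # rescale columns to unit diagonal
    W = (np.eye(len(d)) - np.linalg.inv(L)).T  # from W^T = I - L^{-1}
    return W, d ** 2                           # weighted adjacency, noise variances
```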
arXiv Detail & Related papers (2023-11-01T17:27:49Z)
- Discovering Dynamic Causal Space for DAG Structure Learning [64.763763417533]
We propose a dynamic causal space for DAG structure learning, coined CASPER.
It integrates the graph structure into the score function as a new measure in the causal space, faithfully reflecting the causal distance between the estimated and ground-truth DAGs.
arXiv Detail & Related papers (2023-06-05T12:20:40Z)
- dotears: Scalable, consistent DAG estimation using observational and interventional data [1.220743263007369]
Causal gene regulatory networks can be represented by directed acyclic graphs (DAGs).
We present $\texttt{dotears}$ [doo-tairs], a continuous optimization framework to infer a single causal structure.
We show that $\texttt{dotears}$ is a provably consistent estimator of the true DAG under mild assumptions.
arXiv Detail & Related papers (2023-05-30T17:03:39Z)
- Learning Large Causal Structures from Inverse Covariance Matrix via Sparse Matrix Decomposition [2.403264213118039]
This paper focuses on learning causal structures from the inverse covariance matrix.
The proposed method, called ICID, is based on continuous optimization of a matrix decomposition model.
We show that ICID efficiently identifies the sought directed acyclic graph (DAG), assuming knowledge of the noise variances.
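The decomposition at the heart of this line of work is $\Theta = (I - W) D^{-1} (I - W)^\top$, where $D$ collects the (known) noise variances. A hedged sketch of fitting it by proximal gradient descent with an $\ell_1$ sparsity penalty follows; it illustrates the model being decomposed, not the ICID algorithm itself, and omits any acyclicity handling.

```python
import numpy as np

def fit_decomposition(Theta, noise_vars, lam=0.01, step=1e-3, iters=1000):
    """Proximal-gradient sketch for Theta ≈ (I - W) D^{-1} (I - W)^T."""
    p = Theta.shape[0]
    Dinv = np.diag(1.0 / noise_vars)
    I = np.eye(p)
    W = np.zeros((p, p))
    for _ in range(iters):
        R = (I - W) @ Dinv @ (I - W).T - Theta     # residual of the model fit
        G = -4.0 * R @ (I - W) @ Dinv              # gradient of ||R||_F^2 w.r.t. W
        W = W - step * G
        # soft-thresholding: proximal step for the penalty lam * ||W||_1
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)
        np.fill_diagonal(W, 0.0)                   # disallow self-loops
    return W
```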
arXiv Detail & Related papers (2022-11-25T16:32:56Z)
- BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery [97.79015388276483]
A structural equation model (SEM) is an effective framework for reasoning over causal relationships represented via a directed acyclic graph (DAG).
Recent advances enabled effective maximum-likelihood point estimation of DAGs from observational data.
We propose BCD Nets, a variational framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM.
arXiv Detail & Related papers (2021-12-06T03:35:21Z)
- DAGs with No Curl: An Efficient DAG Structure Learning Approach [62.885572432958504]
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints.
We propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly.
We show that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models.
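One way to parameterize the DAG space directly, in the spirit of this approach (a hedged sketch, not necessarily the paper's exact construction): give each node a scalar potential and keep only edges that go strictly uphill, which rules out cycles by construction.

```python
import numpy as np

def acyclic_adjacency(S, potentials):
    """DAG-by-construction adjacency from unconstrained parameters.

    S: free (n, n) weights; potentials: (n,) node scalars. Edge i -> j
    survives only if potentials[j] > potentials[i], so every directed
    path strictly increases the potential and no cycle can close.
    """
    mask = np.maximum(potentials[None, :] - potentials[:, None], 0.0)
    return S * mask   # the ReLU mask also zeroes the diagonal
```

Both S and the potentials can then be trained with ordinary unconstrained gradient methods, sidestepping an explicit acyclicity constraint.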
arXiv Detail & Related papers (2021-06-14T07:11:36Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $\ell_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
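For reference, the penalty takes the $\ell_2$ norm of each row of the projection matrix and sums the p-th powers of those norms (the $\|W\|_{2,p}^p$ form), which zeroes out whole rows and thereby deselects features. A small sketch of the penalized objective (illustrative; the paper's solver is more involved):

```python
import numpy as np

def l2p_penalty(W, p=0.5):
    """Sum of p-th powers of row l2 norms: drives whole rows to zero."""
    return np.sum(np.linalg.norm(W, axis=1) ** p)

def ufs_objective(X, W, lam=1.0, p=0.5):
    """Reconstruction error plus l_{2,p} row-sparsity penalty.

    X: (n, d) data; W: (d, k) projection. Rows of W that survive the
    penalty mark the selected features.
    """
    recon = X @ W @ W.T   # project to k dimensions and reconstruct
    return np.linalg.norm(X - recon, "fro") ** 2 + lam * l2p_penalty(W, p)
```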
arXiv Detail & Related papers (2020-12-29T04:08:38Z)