Recovering Linear Causal Models with Latent Variables via Cholesky
Factorization of Covariance Matrix
- URL: http://arxiv.org/abs/2311.00674v1
- Date: Wed, 1 Nov 2023 17:27:49 GMT
- Title: Recovering Linear Causal Models with Latent Variables via Cholesky
Factorization of Covariance Matrix
- Authors: Yunfeng Cai, Xu Li, Mingming Sun, Ping Li
- Abstract summary: We propose a DAG structure recovery algorithm based on the Cholesky factorization of the covariance matrix of the observed data.
On synthetic and real-world datasets, the algorithm is significantly faster than previous methods and achieves state-of-the-art performance.
- Score: 21.698480201955213
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discovering causal relationships by recovering the directed
acyclic graph (DAG) structure from observed data is a well-known, challenging
combinatorial problem. When there are latent variables, the problem becomes
even more difficult. In this paper, we first propose a DAG structure recovery
algorithm based on the Cholesky factorization of the covariance matrix of the
observed data. The algorithm is fast and easy to implement and has theoretical
guarantees for exact recovery. On synthetic and real-world datasets, the
algorithm is significantly faster than previous methods and achieves
state-of-the-art performance. Furthermore, under the equal error variances
assumption, we incorporate an optimization procedure into the Cholesky
factorization based algorithm to handle the DAG recovery problem with latent
variables. Numerical simulations show that the modified "Cholesky +
optimization" algorithm recovers the ground truth graph in most cases and
outperforms existing algorithms.
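For intuition, the core identity behind the Cholesky approach can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a linear SEM whose observed variables are already given in a valid topological order (the paper also handles order recovery and latent variables), and the function name and pruning threshold below are hypothetical.

```python
import numpy as np

def cholesky_dag_sketch(X, threshold=0.3):
    """Sketch of the Cholesky idea (hypothetical helper, illustrative only).

    Assumes a linear SEM x = B^T x + n with B strictly upper triangular,
    i.e. the columns of X are already in a valid topological order. Then
    Sigma = (I - B)^{-T} D (I - B)^{-1}, so the unit-diagonal Cholesky
    factor of Sigma equals (I - B)^{-T} and B can be read off directly.
    """
    d = X.shape[1]
    sigma = np.cov(X, rowvar=False)       # empirical covariance matrix
    L = np.linalg.cholesky(sigma)         # Sigma = L L^T, L lower triangular
    U = L / np.diag(L)                    # rescale columns to a unit diagonal
    B = np.eye(d) - np.linalg.inv(U).T    # U = (I - B)^{-T}  =>  B = I - U^{-T}
    B[np.abs(B) < threshold] = 0.0        # prune small spurious weights
    return B

# Tiny synthetic check on a 3-node chain X0 -> X1 -> X2
rng = np.random.default_rng(0)
B_true = np.array([[0.0, 1.5,  0.0],
                   [0.0, 0.0, -2.0],
                   [0.0, 0.0,  0.0]])
N = rng.normal(size=(100_000, 3))
X = N @ np.linalg.inv(np.eye(3) - B_true)  # x = B^T x + n  =>  X = N (I - B)^{-1}
print(cholesky_dag_sketch(X).round(2))     # approximately recovers B_true
```

This sketch covers only the fully observed, correctly ordered case; the "Cholesky + optimization" variant for latent variables layers an optimization procedure on top of this factorization and is not reproduced here.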
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Non-negative Weighted DAG Structure Learning [12.139158398361868]
We address the problem of learning the true DAG from nodal observations.
We propose a DAG recovery algorithm based on the method of multipliers that is guaranteed to return a global minimizer.
arXiv Detail & Related papers (2024-09-12T09:41:29Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Learning Large Causal Structures from Inverse Covariance Matrix via Sparse Matrix Decomposition [2.403264213118039]
This paper focuses on learning causal structures from the inverse covariance matrix.
The proposed method, called ICID, is based on continuous optimization of a matrix decomposition model.
We show that ICID efficiently identifies the sought directed acyclic graph (DAG) assuming knowledge of the noise variances.
arXiv Detail & Related papers (2022-11-25T16:32:56Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem in machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- DAGs with No Curl: An Efficient DAG Structure Learning Approach [62.885572432958504]
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints.
We propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly.
We show that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models.
arXiv Detail & Related papers (2021-06-14T07:11:36Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Sublinear Least-Squares Value Iteration via Locality Sensitive Hashing [49.73889315176884]
We present the first provable Least-Squares Value Iteration (LSVI) algorithms that have runtime complexity sublinear in the number of actions.
We build connections between the theory of approximate maximum inner product search and the regret analysis of reinforcement learning.
arXiv Detail & Related papers (2021-05-18T05:23:53Z)
- Learning DAGs without imposing acyclicity [0.6526824510982799]
We show that it is possible to learn a directed acyclic graph (DAG) from data without imposing the acyclicity constraint.
This approach is computationally efficient and is not affected by the explosion of complexity as in classical structural learning algorithms.
arXiv Detail & Related papers (2020-06-04T16:52:01Z)