Learning DAGs without imposing acyclicity
- URL: http://arxiv.org/abs/2006.03005v1
- Date: Thu, 4 Jun 2020 16:52:01 GMT
- Title: Learning DAGs without imposing acyclicity
- Authors: Gherardo Varando
- Abstract summary: We show that it is possible to learn a directed acyclic graph (DAG) from data without imposing the acyclicity constraint.
This approach is computationally efficient and is not affected by the combinatorial explosion of complexity that affects classical structural learning algorithms.
- Score: 0.6526824510982799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore whether it is possible to learn a directed acyclic graph (DAG) from
data without explicitly imposing the acyclicity constraint. In particular, for
Gaussian distributions, we frame structural learning as a sparse matrix
factorization problem and we empirically show that solving an
$\ell_1$-penalized optimization yields good recovery of the true graph and,
in general, almost-DAG graphs. Moreover, this approach is computationally
efficient and is not affected by the explosion of combinatorial complexity that
affects classical structural learning algorithms.
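The idea in the abstract can be illustrated with a minimal sketch (not the author's code): for a linear Gaussian SEM, fit a weighted adjacency matrix $W$ by minimizing $\frac{1}{2n}\|X - XW\|_F^2 + \lambda\|W\|_1$ with no acyclicity constraint, using proximal gradient descent (ISTA). Function names and default parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(A, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def sparse_sem_fit(X, lam=0.1, step=None, n_iter=500):
    """Minimize (1/2n)||X - X W||_F^2 + lam * ||W||_1 over W with zero diagonal.

    No acyclicity constraint is imposed; the l1 penalty alone is expected to
    push the estimate toward a sparse, almost-acyclic graph.
    """
    n, d = X.shape
    W = np.zeros((d, d))
    G = X.T @ X / n                               # Gram matrix of the data
    if step is None:
        step = 1.0 / np.linalg.eigvalsh(G).max()  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = G @ W - G                          # gradient of the smooth part
        W = soft_threshold(W - step * grad, step * lam)
        np.fill_diagonal(W, 0.0)                  # forbid self-loops
    return W
```

Note that nothing forces the output to be acyclic: two-cycles between correlated variables can survive, which is consistent with the paper's observation that the recovered graphs are only "almost-DAGs".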
Related papers
- Non-negative Weighted DAG Structure Learning [12.139158398361868]
We address the problem of learning the true DAGs from nodal observations.
We propose a DAG recovery algorithm based on a method with theoretical recovery guarantees.
arXiv Detail & Related papers (2024-09-12T09:41:29Z) - Learning Directed Acyclic Graphs from Partial Orderings [9.387234607473054]
Directed acyclic graphs (DAGs) are commonly used to model causal relationships among random variables.
In this paper, we consider the intermediate problem of learning DAGs when a partial causal ordering of variables is available.
We propose a general estimation framework for leveraging the partial ordering and present efficient estimation algorithms for low- and high-dimensional problems.
arXiv Detail & Related papers (2024-03-24T06:14:50Z) - Graph-level Protein Representation Learning by Structure Knowledge Refinement [50.775264276189695]
This paper focuses on learning representation on the whole graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR) which uses data structure to determine the probability of whether a pair is positive or negative.
arXiv Detail & Related papers (2024-01-05T09:05:33Z) - Recovering Linear Causal Models with Latent Variables via Cholesky Factorization of Covariance Matrix [21.698480201955213]
We propose a DAG structure recovering algorithm, which is based on the Cholesky factorization of the covariance matrix of the observed data.
On synthetic and real-world datasets, the algorithm is significantly faster than previous methods and achieves the state-of-the-art performance.
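As a hedged illustration of the core idea (a simplification, not the paper's full algorithm, which also handles latent variables): for a fully observed linear Gaussian SEM whose variables are listed in a causal order, the Cholesky factor of the covariance matrix directly encodes the DAG weights.

```python
import numpy as np

def dag_from_cholesky(Sigma):
    """Recover the weighted DAG of a linear Gaussian SEM from its covariance,
    assuming the variables are already sorted in a causal order.

    If x = B x + e with B strictly lower triangular and e ~ N(0, D), then
    Sigma = (I - B)^{-1} D (I - B)^{-T}, so the Cholesky factor L
    (Sigma = L L^T) equals (I - B)^{-1} D^{1/2}, and hence
    B = I - diag(L) @ inv(L).
    """
    L = np.linalg.cholesky(Sigma)
    d = L.shape[0]
    return np.eye(d) - np.diag(np.diag(L)) @ np.linalg.inv(L)
```

Since Cholesky factorization is cubic in the number of variables and requires no iterative search over graphs, this kind of recovery is fast, which is consistent with the speedups reported above.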
arXiv Detail & Related papers (2023-11-01T17:27:49Z) - Constraint-Free Structure Learning with Smooth Acyclic Orientations [16.556484521585197]
We introduce COSMO, a constraint-free continuous optimization scheme for acyclic structure learning.
Despite the absence of explicit constraints, we prove that COSMO always converges to an acyclic solution.
arXiv Detail & Related papers (2023-09-15T14:08:09Z) - Multi-task Learning of Order-Consistent Causal Graphs [59.9575145128345]
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs).
Under a multi-task learning setting, we propose an $\ell_1/\ell_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models.
We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order.
arXiv Detail & Related papers (2021-11-03T22:10:18Z) - DAGs with No Curl: An Efficient DAG Structure Learning Approach [62.885572432958504]
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints.
We propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly.
We show that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models.
arXiv Detail & Related papers (2021-06-14T07:11:36Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-26T11:31:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.