Causal Structural Learning from Time Series: A Convex Optimization
Approach
- URL: http://arxiv.org/abs/2301.11336v2
- Date: Sat, 15 Apr 2023 20:23:17 GMT
- Title: Causal Structural Learning from Time Series: A Convex Optimization
Approach
- Authors: Song Wei, Yao Xie
- Abstract summary: Structural learning aims to learn directed acyclic graphs (DAGs) from observational data.
Despite recent continuous-optimization formulations, DAG learning remains a highly non-convex structural learning problem.
We propose a data-adaptive linear approach for causal structural learning from time series, cast as a convex problem via a recently developed monotone operator variational inequality (VI) formulation.
- Score: 12.4517307615083
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Structural learning, which aims to learn directed acyclic graphs (DAGs) from
observational data, is foundational to causal reasoning and scientific
discovery. Recent advancements formulate structural learning into a continuous
optimization problem; however, DAG learning remains a highly non-convex
problem, and there has not been much work on leveraging well-developed convex
optimization techniques for causal structural learning. We fill this gap by
proposing a data-adaptive linear approach for causal structural learning from
time series data, which can be conveniently cast into a convex optimization
problem using a recently developed monotone operator variational inequality
(VI) formulation. Furthermore, we establish non-asymptotic recovery guarantee
of the VI-based approach and show the superior performance of our proposed
method on structure recovery over existing methods via extensive numerical
experiments.
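For intuition, here is a minimal sketch of casting time-series structure recovery as a convex program: a sparse lag-1 transition matrix of a linear VAR model is estimated by an l1-penalized least-squares (lasso) problem solved with proximal gradient descent. This is a generic convex baseline, not the paper's VI-based estimator; the function names (`lagged_lasso`, `soft_threshold`) and the simulated data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, tau):
    # proximal operator of the l1 norm
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def lagged_lasso(X, lam=0.1, n_iter=500):
    # Estimate a lag-1 causal adjacency matrix A from a time series X (T x d)
    # by solving the convex problem
    #   min_A (1/2T) ||X[1:] - X[:-1] A||_F^2 + lam * ||A||_1
    # with proximal gradient descent (ISTA).
    Y, Z = X[1:], X[:-1]                      # targets and lagged regressors
    T = Z.shape[0]
    A = np.zeros((X.shape[1], X.shape[1]))
    step = T / np.linalg.norm(Z, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Z.T @ (Z @ A - Y) / T
        A = soft_threshold(A - step * grad, step * lam)
    return A

# usage: simulate a sparse linear VAR(1) and recover its support
rng = np.random.default_rng(0)
d, T = 5, 2000
A_true = np.zeros((d, d))
A_true[0, 1], A_true[2, 3] = 0.8, -0.6        # two causal edges
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = X[t - 1] @ A_true + 0.1 * rng.standard_normal(d)
print(np.round(lagged_lasso(X, lam=0.02), 2))
```

Because the objective is convex, any standard first-order solver reaches the global optimum; the paper's contribution lies in a data-adaptive formulation with a monotone operator VI characterization and a non-asymptotic recovery guarantee, which this sketch does not reproduce.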
Related papers
- $ψ$DAG: Projected Stochastic Approximation Iteration for DAG Structure Learning [6.612096312467342]
Learning the structure of Directed Acyclic Graphs (DAGs) presents a significant challenge due to the vast search space of possible graphs, which scales exponentially with the number of nodes.
Recent advancements have redefined this problem as a continuous optimization task by incorporating differentiable acyclicity constraints.
We present a novel framework for learning DAGs, employing a Stochastic Approximation approach integrated with Stochastic Gradient Descent (SGD)-based optimization techniques.
arXiv Detail & Related papers (2024-10-31T12:13:11Z) - Non-negative Weighted DAG Structure Learning [12.139158398361868]
We address the problem of learning the true DAGs from nodal observations.
We propose a DAG recovery algorithm based on the method of multipliers that is guaranteed to return a global minimizer.
arXiv Detail & Related papers (2024-09-12T09:41:29Z) - Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights and enabling enhanced robustness and generalization.
A self-regularization strategy is further exploited to maintain the stability of the zero-shot generalization of VLMs; the method is dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in few-shot image classification scenarios.
arXiv Detail & Related papers (2024-07-11T10:35:53Z) - Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - On the Sparse DAG Structure Learning Based on Adaptive Lasso [39.31370830038554]
We develop a data-driven DAG structure learning method without a predefined threshold, called adaptive NOTEARS [30].
We show that adaptive NOTEARS enjoys the oracle properties under some specific conditions. Furthermore, simulation results validate the effectiveness of our method without setting any gap of edges around zero.
arXiv Detail & Related papers (2022-09-07T05:47:59Z) - DAGs with No Curl: An Efficient DAG Structure Learning Approach [62.885572432958504]
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints (a minimal illustration of such a constraint appears after this list).
We propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly.
We show that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models.
arXiv Detail & Related papers (2021-06-14T07:11:36Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layered neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-26T11:31:08Z) - Learning DAGs without imposing acyclicity [0.6526824510982799]
We show that it is possible to learn a directed acyclic graph (DAG) from data without imposing the acyclicity constraint.
This approach is computationally efficient and is not affected by the explosion of complexity as in classical structural learning algorithms.
arXiv Detail & Related papers (2020-06-04T16:52:01Z)
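Several entries above invoke continuous acyclicity constraints to turn combinatorial DAG search into continuous optimization. The sketch below shows the standard NOTEARS-style characterization h(W) = tr(exp(W ∘ W)) − d, which is zero exactly when the weighted adjacency matrix W encodes a DAG (Zheng et al., 2018); the function name and test matrices are illustrative only.

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W):
    # h(W) = tr(exp(W * W)) - d, where * is the elementwise (Hadamard) product.
    # h(W) == 0 iff W has no directed cycles; h is smooth, so it can serve as
    # an equality constraint or penalty in continuous DAG structure learning.
    return np.trace(expm(W * W)) - W.shape[0]

W_dag = np.array([[0.0, 1.5], [0.0, 0.0]])  # edge 0 -> 1 only: acyclic
W_cyc = np.array([[0.0, 1.5], [0.7, 0.0]])  # edges 0 -> 1 and 1 -> 0: a cycle
print(notears_acyclicity(W_dag))  # ~0.0
print(notears_acyclicity(W_cyc))  # > 0, grows with cycle weight
```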