On the Sparse DAG Structure Learning Based on Adaptive Lasso
- URL: http://arxiv.org/abs/2209.02946v1
- Date: Wed, 7 Sep 2022 05:47:59 GMT
- Title: On the Sparse DAG Structure Learning Based on Adaptive Lasso
- Authors: Danru Xu, Erdun Gao, Wei Huang, Mingming Gong
- Abstract summary: We develop a data-driven DAG structure learning method without a predefined threshold, called adaptive NOTEARS [30].
We show that adaptive NOTEARS enjoys the oracle properties under certain conditions. Furthermore, simulation results validate the effectiveness of our method without requiring any gap between edge weights and zero.
- Score: 39.31370830038554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the underlying causal structure, represented by Directed Acyclic
Graphs (DAGs), of concerned events from fully-observational data is a crucial
part of causal reasoning, but it is challenging due to the combinatorial and
large search space. A recent flurry of developments recast this combinatorial
problem into a continuous optimization problem by leveraging an algebraic
equality characterization of acyclicity. However, these methods suffer from the
fixed-threshold step after optimization, which is not a flexible or systematic
way to rule out cycle-inducing edges or falsely discovered edges with small
values caused by limited numerical precision. In this paper, we develop a
data-driven DAG structure learning method without a predefined threshold, called
adaptive NOTEARS [30], achieved by applying adaptive penalty levels to each
parameter in the regularization term. We show that adaptive NOTEARS enjoys the
oracle properties under certain conditions. Furthermore, simulation results
validate the effectiveness of our method without requiring any gap between
edge weights and zero.
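The two ingredients the abstract describes, a differentiable acyclicity characterization and per-edge adaptive penalty levels, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: NOTEARS uses the matrix-exponential trace h(W) = tr(e^{W∘W}) − d, while the polynomial variant below is a commonly used equivalent that avoids a matrix-exponential dependency; the function names and the hyperparameters `gamma`, `a`, and `eps` are hypothetical.

```python
import numpy as np

def acyclicity(W):
    """Polynomial acyclicity penalty h(W) = tr((I + W*W/d)^d) - d.

    W*W is the elementwise (Hadamard) square, so all entries of M are
    nonnegative; h(W) is zero exactly when W encodes a DAG, and positive
    otherwise. This belongs to the same family of algebraic acyclicity
    characterizations that NOTEARS-style continuous methods optimize.
    """
    d = W.shape[0]
    M = np.eye(d) + (W * W) / d
    return float(np.trace(np.linalg.matrix_power(M, d)) - d)

def adaptive_penalties(W_hat, gamma=0.1, a=1.0, eps=1e-8):
    """Adaptive-lasso weights: lambda_ij = gamma / |w_hat_ij|^a.

    Edges with small pilot estimates |w_hat_ij| receive large penalties
    and are driven exactly to zero during re-estimation, which is what
    removes the need for a fixed post-hoc threshold. (In practice the
    diagonal is excluded, since self-loops are disallowed anyway.)
    """
    return gamma / (np.abs(W_hat) + eps) ** a

# A strictly upper-triangular W is a DAG, so its penalty vanishes;
# a 2-cycle yields a strictly positive penalty.
W_dag = np.array([[0.0, 1.5], [0.0, 0.0]])
W_cyc = np.array([[0.0, 1.0], [1.0, 0.0]])
```

The adaptive weights then multiply the per-edge l1 terms in the regularized score, so a second optimization pass shrinks spurious small-magnitude edges all the way to zero instead of truncating them at an arbitrary cutoff.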
Related papers
- $ψ$DAG: Projected Stochastic Approximation Iteration for DAG Structure Learning [6.612096312467342]
Learning the structure of Directed Acyclic Graphs (DAGs) presents a significant challenge due to the vast search space of possible graphs, which scales super-exponentially with the number of nodes.
Recent advancements have redefined this problem as a continuous optimization task by incorporating differentiable acyclicity constraints.
We present a novel framework for learning DAGs, employing a Stochastic Approximation approach integrated with Stochastic Gradient Descent (SGD)-based optimization techniques.
arXiv Detail & Related papers (2024-10-31T12:13:11Z) - Non-negative Weighted DAG Structure Learning [12.139158398361868]
We address the problem of learning the true DAGs from nodal observations.
We propose a DAG recovery algorithm that is guaranteed to return a DAG.
arXiv Detail & Related papers (2024-09-12T09:41:29Z) - CoLiDE: Concomitant Linear DAG Estimation [12.415463205960156]
We deal with the problem of learning an acyclic graph structure from observational data adhering to a linear structural equation model.
We propose a new convex score function for sparsity-aware learning of DAGs.
arXiv Detail & Related papers (2023-10-04T15:32:27Z) - Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z) - Causal Structural Learning from Time Series: A Convex Optimization
Approach [12.4517307615083]
Structural learning aims to learn directed acyclic graphs (DAGs) from observational data.
Recent continuous DAG learning formulations remain highly non-convex structural learning problems.
We propose a data-driven approach for causal structural learning using a recently developed monotone variational inequality (VI) formulation.
arXiv Detail & Related papers (2023-01-26T16:39:58Z) - Score-based Causal Representation Learning with Interventions [54.735484409244386]
This paper studies the causal representation learning problem when latent causal variables are observed indirectly.
The objectives are: (i) recovering the unknown linear transformation (up to scaling) and (ii) determining the directed acyclic graph (DAG) underlying the latent variables.
arXiv Detail & Related papers (2023-01-19T18:39:48Z) - A Priori Denoising Strategies for Sparse Identification of Nonlinear
Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z) - Efficient Neural Causal Discovery without Acyclicity Constraints [30.08586535981525]
We present ENCO, an efficient structure learning method for directed, acyclic causal graphs.
In experiments, we show that ENCO can efficiently recover graphs with hundreds of nodes, an order of magnitude larger than what was previously possible.
arXiv Detail & Related papers (2021-07-22T07:01:41Z) - Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable method for semi-implicit variational inference (SIVI).
Our method optimizes a rigorous lower bound on the evidence within the SIVI framework.
arXiv Detail & Related papers (2021-01-15T11:39:09Z) - CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z)
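One entry above ("A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems") reports that global smoothers, which use the entire measurement set, tend to outperform local smoothers, which use only a neighborhood around each point. A minimal illustration of that local-vs-global distinction follows; it is not the paper's actual method, and the signal, noise level, window size, and polynomial degree are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
clean = np.sin(t)
noisy = clean + rng.normal(0.0, 0.2, size=t.size)

# Local smoother: moving average over a small window around each sample.
window = 9
local = np.convolve(noisy, np.ones(window) / window, mode="same")

# Global smoother: least-squares polynomial fit using ALL samples at once.
coeffs = np.polyfit(t, noisy, deg=7)
global_fit = np.polyval(coeffs, t)

def rmse(x):
    """Root-mean-square error against the known clean signal."""
    return float(np.sqrt(np.mean((x - clean) ** 2)))
```

Both smoothers reduce the error of the raw noisy measurements; the global fit pools information from every sample into a few coefficients, which is the intuition behind the comparison reported in that paper.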
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.