Causal Graph Discovery from Self and Mutually Exciting Time Series
- URL: http://arxiv.org/abs/2301.11197v2
- Date: Fri, 27 Jan 2023 22:14:31 GMT
- Title: Causal Graph Discovery from Self and Mutually Exciting Time Series
- Authors: Song Wei, Yao Xie, Christopher S. Josef, Rishikesan Kamaleswaran
- Abstract summary: We develop a non-asymptotic recovery guarantee and quantifiable uncertainty by solving a linear program.
We demonstrate the effectiveness of our approach in recovering highly interpretable causal DAGs over Sepsis Associated Derangements (SADs).
- Score: 10.410454851418548
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a generalized linear structural causal model, coupled with a novel
data-adaptive linear regularization, to recover causal directed acyclic graphs
(DAGs) from time series. By leveraging a recently developed stochastic monotone
Variational Inequality (VI) formulation, we cast the causal discovery problem
as a general convex optimization. Furthermore, we develop a non-asymptotic
recovery guarantee and quantifiable uncertainty by solving a linear program to
establish confidence intervals for a wide range of non-linear monotone link
functions. We validate our theoretical results and show the competitive
performance of our method via extensive numerical experiments. Most
importantly, we demonstrate the effectiveness of our approach in recovering
highly interpretable causal DAGs over Sepsis Associated Derangements (SADs)
while achieving comparable prediction performance to powerful "black-box"
models such as XGBoost. Thus, our proposed method is a promising candidate for
adoption by clinicians in the continuous surveillance of high-risk patients.
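To make the optimization viewpoint in the abstract concrete, here is a minimal toy sketch (not the paper's actual estimator): it simulates a linear self/mutually exciting process, then recovers a sparse causal adjacency matrix by solving an L1-regularized convex least-squares problem with proximal gradient descent (ISTA). All variable names, weights, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 5000

# Hypothetical ground-truth DAG over the d series: A[i, j] != 0 means j -> i
A = np.zeros((d, d))
A[1, 0], A[2, 0], A[3, 2] = 0.6, -0.5, 0.7

# Simulate a linear self/mutually exciting process: x_t = A x_{t-1} + noise
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.standard_normal(d)

# Convex recovery: L1-penalized least squares solved by proximal gradient
Y, Z = X[1:], X[:-1]
n = T - 1
lam = 0.05
step = 1.0 / np.linalg.norm(Z.T @ Z / n, 2)   # 1 / Lipschitz constant
A_hat = np.zeros((d, d))
for _ in range(500):
    grad = (A_hat @ Z.T - Y.T) @ Z / n        # gradient of the squared loss
    A_hat -= step * grad
    # soft-thresholding = proximal operator of the L1 penalty
    A_hat = np.sign(A_hat) * np.maximum(np.abs(A_hat) - step * lam, 0.0)

print(np.round(A_hat, 2))  # nonzero entries concentrate on the true edges
```

The paper's actual method handles general monotone link functions via a variational-inequality formulation and adds data-adaptive regularization plus confidence intervals; this sketch only shows the simplest linear-lasso instance of the same "sparse convex regression per node" idea.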
Related papers
- ProDAG: Projection-induced variational inference for directed acyclic graphs [8.556906995059324]
We develop a variational Bayes inference framework based on novel distributions that have support directly on the space of Directed Acyclic Graphs (DAGs).
Our method, ProDAG, can deliver accurate inference and often outperforms existing state-of-the-art alternatives.
arXiv Detail & Related papers (2024-05-24T03:04:28Z)
- Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
arXiv Detail & Related papers (2023-09-17T12:06:04Z)
- Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z)
- BayesDAG: Gradient-Based Posterior Inference for Causal Discovery [30.027520859604955]
We introduce a scalable causal discovery framework based on a combination of Markov Chain Monte Carlo and Variational Inference.
Our approach directly samples DAGs from the posterior without requiring any DAG regularization.
We derive a novel equivalence to permutation-based DAG learning, which opens up the possibility of using any relaxed estimator defined over permutations.
arXiv Detail & Related papers (2023-07-26T02:34:13Z)
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have notable limitations: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery [97.79015388276483]
A structural equation model (SEM) is an effective framework for reasoning over causal relationships represented via a directed acyclic graph (DAG).
Recent advances enabled effective maximum-likelihood point estimation of DAGs from observational data.
We propose BCD Nets, a variational framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM.
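For context on the linear-Gaussian SEM setting this entry refers to, here is a minimal illustrative sketch (the DAG, edge weights, and variable names are hypothetical): a weighted adjacency matrix W defines the model x = Wx + ε, so data can be sampled by inverting (I − W).

```python
import numpy as np

rng = np.random.default_rng(2)
# Weighted adjacency of a hypothetical DAG: W[i, j] != 0 means edge j -> i
W = np.array([[0.0,  0.0, 0.0],
              [0.8,  0.0, 0.0],
              [0.0, -1.2, 0.0]])

# Ancestral sampling from the linear-Gaussian SEM x = W x + eps,
# i.e. x = (I - W)^{-1} eps applied row-wise
n = 10000
eps = rng.standard_normal((n, 3))
X = eps @ np.linalg.inv(np.eye(3) - W).T

# Empirical check: regressing x1 on its parent x0 recovers the edge weight
b = np.cov(X[:, 0], X[:, 1])[0, 1] / np.var(X[:, 0])
print(round(b, 2))
```

BCD Nets places a variational posterior over such W matrices rather than committing to a single point estimate; this sketch only shows the generative model being estimated.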
arXiv Detail & Related papers (2021-12-06T03:35:21Z)
- Causal Graph Discovery from Self and Mutually Exciting Time Series [12.802653884445132]
We develop a non-asymptotic recovery guarantee and quantifiable uncertainty by solving a linear program.
We demonstrate the effectiveness of our approach in recovering highly interpretable causal DAGs over Sepsis Associated Derangements (SADs).
arXiv Detail & Related papers (2021-06-04T16:59:24Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central to preventing overfitting in practice.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
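As a concrete illustration of the setting this entry studies, the sketch below runs constant-stepsize SGD on a linear regression problem with tail-averaging and compares the result to the ordinary least squares solution. The step size, burn-in fraction, and problem dimensions are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

# One pass of constant-stepsize SGD, averaging the second half of the iterates
w, step = np.zeros(d), 0.05
burn = n // 2
avg = np.zeros(d)
for i in range(n):
    x = X[i]
    w -= step * (x @ w - y[i]) * x   # stochastic gradient of the squared loss
    if i >= burn:
        avg += w
avg /= n - burn

# Ordinary least squares solution for comparison
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.linalg.norm(avg - w_ols))   # small: both are close to w_true here
```

The paper's point is precisely about when and why the (unregularized) SGD iterate generalizes comparably to, or differently from, the OLS solution; this sketch only sets up the two estimators side by side.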
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.