Causal Graph Discovery from Self and Mutually Exciting Time Series
- URL: http://arxiv.org/abs/2301.11197v2
- Date: Fri, 27 Jan 2023 22:14:31 GMT
- Title: Causal Graph Discovery from Self and Mutually Exciting Time Series
- Authors: Song Wei, Yao Xie, Christopher S. Josef, Rishikesan Kamaleswaran
- Abstract summary: We develop a non-asymptotic recovery guarantee and quantifiable uncertainty by solving a linear program.
We demonstrate the effectiveness of our approach in recovering highly interpretable causal DAGs over Sepsis Associated Derangements (SADs).
- Score: 10.410454851418548
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a generalized linear structural causal model, coupled with a novel
data-adaptive linear regularization, to recover causal directed acyclic graphs
(DAGs) from time series. By leveraging a recently developed stochastic monotone
Variational Inequality (VI) formulation, we cast the causal discovery problem
as a general convex optimization. Furthermore, we develop a non-asymptotic
recovery guarantee and quantifiable uncertainty by solving a linear program to
establish confidence intervals for a wide range of non-linear monotone link
functions. We validate our theoretical results and show the competitive
performance of our method via extensive numerical experiments. Most
importantly, we demonstrate the effectiveness of our approach in recovering
highly interpretable causal DAGs over Sepsis Associated Derangements (SADs)
while achieving prediction performance comparable to powerful "black-box"
models such as XGBoost. This makes clinicians far more likely to adopt our
proposed method for continuous surveillance of high-risk patients.
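The pipeline the abstract describes (a per-node generalized linear model fit by convex optimization, then reduced to a graph) can be sketched minimally as follows. Everything concrete here, including the one-lag design, the exponential link, the uniform penalty weight lam, the threshold, and the use of cvxpy, is an illustrative assumption rather than the authors' estimator; in particular, the paper's data-adaptive regularization and LP-based confidence intervals are not reproduced.

```python
# Minimal sketch: fit one L1-regularized GLM per node on lagged observations
# and threshold the coefficients to read off a lagged causal graph. The
# exponential link, uniform penalty weight `lam`, and threshold are
# placeholder assumptions, not the paper's data-adaptive regularization
# or monotone-VI formulation.
import numpy as np
import cvxpy as cp

def recover_lagged_graph(X, lam=0.1, thresh=0.05):
    """X: (T, d) time series. Returns a (d, d) 0/1 adjacency over one lag."""
    T, d = X.shape
    lagged, current = X[:-1], X[1:]            # regress X_t on X_{t-1}
    W = np.zeros((d, d))
    for j in range(d):                         # one convex problem per node
        w = cp.Variable(d)
        eta = lagged @ w                       # linear predictor
        # Poisson-style negative log-likelihood with monotone link exp(.)
        loss = cp.sum(cp.exp(eta) - cp.multiply(current[:, j], eta)) / (T - 1)
        cp.Problem(cp.Minimize(loss + lam * cp.norm1(w))).solve()
        W[:, j] = w.value
    # Nonzero diagonal entries capture self-excitation; off-diagonal entries
    # capture mutual excitation. Lagged edges point past -> present, so the
    # induced graph over time-indexed variables is acyclic by construction.
    return (np.abs(W) > thresh).astype(int)
```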
Related papers
- Kernel-Based Differentiable Learning of Non-Parametric Directed Acyclic Graphical Models [17.52142371968811]
Causal discovery amounts to learning a directed acyclic graph (DAG) that encodes a causal model.
Recent research has sought to bypass the search by reformulating causal discovery as a continuous optimization problem.
arXiv Detail & Related papers (2024-08-20T16:09:40Z)
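The continuous-optimization reformulation mentioned in the entry above typically replaces the combinatorial acyclicity constraint with a smooth penalty. Below is a minimal sketch of the classic NOTEARS penalty, a common building block of such methods, though not necessarily the kernel-based paper's exact objective.

```python
# Sketch of the smooth acyclicity penalty behind continuous DAG learning:
# the NOTEARS function h(W) = tr(exp(W * W)) - d, which is zero iff the
# weighted adjacency W encodes a DAG.
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    d = W.shape[0]
    return np.trace(expm(W * W)) - d           # elementwise square, matrix exp

rng = np.random.default_rng(0)
W_dag = np.triu(rng.random((4, 4)), k=1)       # strictly upper triangular => DAG
W_cyclic = W_dag.copy()
W_cyclic[3, 0] = 0.9                           # back-edge 3 -> 0 closes a cycle
print(acyclicity(W_dag))                       # ~0 (up to float error)
print(acyclicity(W_cyclic))                    # strictly positive
```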
- Risk and cross validation in ridge regression with correlated samples [72.59731158970894]
We characterize the in- and out-of-sample risks of ridge regression when the data points have arbitrary correlations.
We further extend our analysis to the case where the test point has non-trivial correlations with the training set, a setting often encountered in time series forecasting.
We validate our theory across a variety of high-dimensional data.
arXiv Detail & Related papers (2024-08-08T17:27:29Z)
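A quick simulation of the setting this ridge paper studies: out-of-sample risk when the training rows are correlated. The AR(1) row-correlation model and all constants are illustrative choices; the paper's exact asymptotics are not reproduced here.

```python
# Monte Carlo sketch: out-of-sample risk of ridge regression when training
# samples (rows) follow an AR(1) correlation across time.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, rho, sigma = 200, 50, 1.0, 0.8, 0.5
beta = rng.normal(size=d) / np.sqrt(d)

# AR(1) correlation across the n training samples, not across features
C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(C)

risks = []
for _ in range(200):
    X = L @ rng.normal(size=(n, d))            # correlated rows
    y = X @ beta + sigma * rng.normal(size=n)
    bhat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    x_test = rng.normal(size=d)                # independent test point
    risks.append((x_test @ (bhat - beta)) ** 2)
print("estimated out-of-sample risk:", np.mean(risks))
```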
- ProDAG: Projection-Induced Variational Inference for Directed Acyclic Graphs [8.556906995059324]
Directed acyclic graph (DAG) learning is a rapidly expanding field of research.
It remains statistically and computationally challenging to learn a single (point estimate) DAG from data, let alone provide uncertainty quantification.
Our article addresses the difficult task of quantifying graph uncertainty by developing a Bayesian variational inference framework based on novel distributions that have support directly on the space of DAGs.
arXiv Detail & Related papers (2024-05-24T03:04:28Z)
- Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z)
- BayesDAG: Gradient-Based Posterior Inference for Causal Discovery [30.027520859604955]
We introduce a scalable causal discovery framework based on a combination of Markov Chain Monte Carlo and Variational Inference.
Our approach directly samples DAGs from the posterior without requiring any DAG regularization.
We derive a novel equivalence to the permutation-based DAG learning, which opens up possibilities of using any relaxed estimator defined over permutations.
arXiv Detail & Related papers (2023-07-26T02:34:13Z)
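The permutation-based view that BayesDAG connects to can be illustrated in a toy form: once a topological order is fixed, acyclicity holds by construction and graph fitting decomposes across nodes. The per-node least-squares below is a hypothetical stand-in for the paper's actual posterior inference over permutations.

```python
# Sketch of permutation-based DAG learning: given a topological order, each
# node may only be regressed on nodes earlier in the order, so the result
# is acyclic for any permutation.
import numpy as np

def fit_dag_given_order(X, order):
    """X: (n, d) data; order: list permutation of range(d). Returns weights."""
    n, d = X.shape
    W = np.zeros((d, d))
    for pos, j in enumerate(order):
        parents = order[:pos]                  # only earlier nodes can be parents
        if parents:
            coef, *_ = np.linalg.lstsq(X[:, parents], X[:, j], rcond=None)
            W[parents, j] = coef
    return W

X = np.random.default_rng(1).normal(size=(500, 4))
print(fit_dag_given_order(X, [2, 0, 3, 1]))    # acyclic for any order
```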
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have been applied to sequential recommendation, but each has drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
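To make the "sequence encoder plus step-wise diffuser" summary above concrete, here is a toy epsilon-prediction training step conditioned on an encoded interaction sequence. The GRU encoder, MLP decoder, noise schedule, and all dimensions are placeholder choices standing in for the paper's cross-attentive architecture.

```python
# Toy conditional denoising step: noise the target item embedding, then
# train a decoder to predict the noise given the encoded sequence.
import torch
import torch.nn as nn

dim, T = 32, 100
betas = torch.linspace(1e-4, 0.02, T)
abar = torch.cumprod(1 - betas, dim=0)         # cumulative alpha-bar schedule

seq_encoder = nn.GRU(dim, dim, batch_first=True)      # stands in for the
decoder = nn.Sequential(nn.Linear(2 * dim + 1, 64),   # cross-attentive decoder
                        nn.ReLU(), nn.Linear(64, dim))

def denoise_loss(seq_emb, target_emb):
    """One training step of the step-wise diffuser (epsilon-prediction)."""
    _, h = seq_encoder(seq_emb)                # condition on the sequence
    cond = h[-1]
    t = torch.randint(0, T, (target_emb.shape[0],))
    eps = torch.randn_like(target_emb)
    a = abar[t].unsqueeze(-1)
    x_t = a.sqrt() * target_emb + (1 - a).sqrt() * eps   # forward noising
    t_feat = (t.float() / T).unsqueeze(-1)
    pred = decoder(torch.cat([x_t, cond, t_feat], dim=-1))
    return ((pred - eps) ** 2).mean()          # standard DDPM objective

loss = denoise_loss(torch.randn(8, 5, dim), torch.randn(8, dim))
```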
- BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery [97.79015388276483]
A structural equation model (SEM) is an effective framework to reason over causal relationships represented via a directed acyclic graph (DAG).
Recent advances enabled effective maximum-likelihood point estimation of DAGs from observational data.
We propose BCD Nets, a variational framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM.
arXiv Detail & Related papers (2021-12-06T03:35:21Z)
- Causal Graph Discovery from Self and Mutually Exciting Time Series [12.802653884445132]
We develop a non-asymptotic recovery guarantee and quantifiable uncertainty by solving a linear program.
We demonstrate the effectiveness of our approach in recovering highly interpretable causal DAGs over Sepsis Associated Derangements (SADs).
arXiv Detail & Related papers (2021-06-04T16:59:24Z)
- CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z)
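A stripped-down sketch of the reconstruction-as-regularizer idea behind CASTLE, assuming a shared input layer and a simple joint loss; the actual method masks each feature out of its own reconstruction and adds an acyclicity penalty so the input layer encodes a DAG, neither of which is shown here.

```python
# Toy CASTLE-style regularization: a shared layer feeds both a supervised
# head and an auxiliary reconstruction head, and the reconstruction error
# regularizes the predictor.
import torch
import torch.nn as nn

d, hidden = 10, 32
shared = nn.Linear(d, hidden)                  # shared input layer
predict = nn.Linear(hidden, 1)                 # supervised head
reconstruct = nn.Linear(hidden, d)             # auxiliary reconstruction head

def castle_loss(x, y, lam=0.1):
    h = torch.relu(shared(x))
    pred_loss = ((predict(h).squeeze(-1) - y) ** 2).mean()
    recon_loss = ((reconstruct(h) - x) ** 2).mean()   # SEM-style reconstruction
    return pred_loss + lam * recon_loss        # regularize via reconstruction

loss = castle_loss(torch.randn(16, d), torch.randn(16))
```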