Amortized Causal Discovery: Learning to Infer Causal Graphs from Time-Series Data
- URL: http://arxiv.org/abs/2006.10833v3
- Date: Mon, 21 Feb 2022 19:31:08 GMT
- Title: Amortized Causal Discovery: Learning to Infer Causal Graphs from Time-Series Data
- Authors: Sindy Löwe, David Madras, Richard Zemel, Max Welling
- Abstract summary: We propose Amortized Causal Discovery, a novel framework to learn to infer causal relations from time-series data.
We demonstrate experimentally that this approach, implemented as a variational model, leads to significant improvements in causal discovery performance.
- Score: 63.15776078733762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On time-series data, most causal discovery methods fit a new model whenever
they encounter samples from a new underlying causal graph. However, these
samples often share relevant information which is lost when following this
approach. Specifically, different samples may share the dynamics which describe
the effects of their causal relations. We propose Amortized Causal Discovery, a
novel framework that leverages such shared dynamics to learn to infer causal
relations from time-series data. This enables us to train a single, amortized
model that infers causal relations across samples with different underlying
causal graphs, and thus leverages the shared dynamics information. We
demonstrate experimentally that this approach, implemented as a variational
model, leads to significant improvements in causal discovery performance, and
show how it can be extended to perform well under added noise and hidden
confounding.
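
To make the setup concrete, here is a minimal sketch of what such an amortized variational model can look like: a shared encoder maps each sample's trajectories to edge probabilities q(G|x), and a shared decoder models the one-step dynamics given the inferred (relaxed) graph, trained by minimizing a negative ELBO. The MLP modules, sizes, relaxed-Bernoulli edge sampling, sparsity prior, and Gaussian likelihood below are illustrative assumptions in the spirit of the abstract, not the authors' exact implementation.

```python
# Minimal sketch of an amortized variational causal discovery model.
# All architecture choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AmortizedCausalDiscovery(nn.Module):
    def __init__(self, n_vars: int, n_steps: int, hidden: int = 64):
        super().__init__()
        # Encoder q(G|x): maps a pair of trajectories (i, j) to the logit
        # of an edge i -> j. Amortized: the same weights serve every sample,
        # whatever its underlying causal graph.
        self.encoder = nn.Sequential(
            nn.Linear(2 * n_steps, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # Decoder p(x_{t+1} | x_t, G): shared dynamics model predicting each
        # variable's next value from its (soft) parents.
        self.decoder = nn.Sequential(
            nn.Linear(n_vars, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def edge_logits(self, x):
        b, v, t = x.shape
        xi = x.unsqueeze(2).expand(b, v, v, t)  # sender trajectories
        xj = x.unsqueeze(1).expand(b, v, v, t)  # receiver trajectories
        return self.encoder(torch.cat([xi, xj], dim=-1)).squeeze(-1)  # (b,v,v)

    def neg_elbo(self, x, prior_p: float = 0.1, temp: float = 0.5):
        b, v, t = x.shape
        logits = self.edge_logits(x)
        # Reparameterized sample of a relaxed (soft) adjacency matrix;
        # self-edges are not masked, for brevity.
        edges = torch.distributions.RelaxedBernoulli(
            torch.tensor(temp), logits=logits
        ).rsample()  # edges[b, i, j] ~ "strength" of i -> j
        # One-step-ahead prediction: gate each sender's value by the inferred
        # edge weight, then decode per receiver.
        xt = x[:, :, :-1]                                      # (b, i, t-1)
        gated = edges.unsqueeze(-1) * xt.unsqueeze(2)          # (b, i, j, t-1)
        pred = self.decoder(gated.permute(0, 2, 3, 1)).squeeze(-1)  # (b, j, t-1)
        recon = F.mse_loss(pred, x[:, :, 1:], reduction="sum") / b
        # KL between q(G|x) and a sparse Bernoulli(prior_p) prior on edges.
        q = torch.sigmoid(logits).clamp(1e-6, 1 - 1e-6)
        kl = (q * (q / prior_p).log()
              + (1 - q) * ((1 - q) / (1 - prior_p)).log()).sum() / b
        return recon + kl


# One gradient step on a batch whose samples may each have a different
# underlying graph; only the dynamics are assumed shared.
model = AmortizedCausalDiscovery(n_vars=5, n_steps=20)
x = torch.randn(8, 5, 20)  # (batch, variables, timesteps), toy data
loss = model.neg_elbo(x)
loss.backward()
```

At test time the encoder alone yields edge probabilities for a new sample in a single forward pass; this is what "amortized" buys over fitting a new model per sample.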
Related papers
- Embracing the black box: Heading towards foundation models for causal discovery from time series data [8.073449277052495]
Causal Pretraining is a methodology that aims to learn a direct mapping from time series to the underlying causal graphs in a supervised manner (a minimal sketch of this supervised setup appears after the list below).
Our empirical findings suggest that supervised causal discovery is possible, assuming that the training and test time-series samples share most of their dynamics.
We provide examples showing that, within limits, causal discovery on real-world data with causally pretrained neural networks is possible.
arXiv Detail & Related papers (2024-02-14T16:49:13Z)
- Sample, estimate, aggregate: A recipe for causal discovery foundation models [28.116832159265964]
We train a supervised model that learns to predict a larger causal graph from the outputs of classical causal discovery algorithms run over subsets of variables.
Our approach is enabled by the observation that typical errors in the outputs of classical methods remain comparable across datasets.
Experiments on real and synthetic data demonstrate that this model maintains high accuracy in the face of misspecification or distribution shift.
arXiv Detail & Related papers (2024-02-02T21:57:58Z)
- Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [53.27596811146316]
Diffusion models operate over a sequence of timesteps, rather than the instantaneous input-output relationships assumed in previous settings.
We present Diffusion-TracIn, which incorporates these temporal dynamics, and observe that samples' loss gradient norms depend heavily on the timestep.
We introduce Diffusion-ReTrac, a re-normalized adaptation that enables retrieval of training samples more targeted to the test sample of interest.
arXiv Detail & Related papers (2024-01-17T07:58:18Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable guarantees that these latent causal models can be identified, a property known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Discovering Mixtures of Structural Causal Models from Time Series Data [23.18511951330646]
We propose a general variational inference-based framework called MCD to infer the underlying causal models.
Our approach employs an end-to-end training process that maximizes an evidence lower bound on the data likelihood.
We demonstrate that our method surpasses state-of-the-art methods on causal discovery tasks.
arXiv Detail & Related papers (2023-10-10T05:13:10Z)
- Causal discovery for time series with constraint-based model and PMIME measure [0.0]
We present a novel approach for discovering causality in time series data that combines a causal discovery algorithm with an information-theoretic measure.
We evaluate the performance of our approach on several simulated data sets, showing promising results.
arXiv Detail & Related papers (2023-05-31T09:38:50Z)
- DOMINO: Visual Causal Reasoning with Time-Dependent Phenomena [59.291745595756346]
We propose a set of visual analytics methods that allow humans to participate in the discovery of causal relations associated with windows of time delay.
Specifically, we leverage a well-established method, logic-based causality, to enable analysts to test the significance of potential causes.
Since an effect can be a cause of other effects, we allow users to aggregate different temporal cause-effect relations found with our method into a visual flow diagram.
arXiv Detail & Related papers (2023-03-12T03:40:21Z)
- Rhino: Deep Causal Temporal Relationship Learning With History-dependent Noise [13.709618907099783]
We propose a novel causal relationship learning framework for time-series data, called Rhino.
Rhino combines vector auto-regression, deep learning and variational inference to model non-linear relationships with instantaneous effects.
Theoretically, we prove the structural identifiability of Rhino.
arXiv Detail & Related papers (2022-10-26T13:33:58Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders with a single latent confounder.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
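
As a contrast to the variational sketch above, the supervised "causal pretraining" direction from the first related paper can be caricatured in a few lines: train one network to map simulated trajectories directly to their known adjacency matrices. The architecture and loss here are illustrative guesses, and real training data would come from simulating random SCMs rather than the placeholder tensors below.

```python
# Hedged sketch of supervised causal pretraining: learn a direct map from a
# time-series window to edge logits, supervised by the ground-truth graphs
# of synthetic training examples. Architecture and loss are illustrative.
import torch
import torch.nn as nn

n_vars, n_steps = 5, 50
net = nn.Sequential(
    nn.Flatten(),                                # (b, n_vars * n_steps)
    nn.Linear(n_vars * n_steps, 256), nn.ReLU(),
    nn.Linear(256, n_vars * n_vars),             # one logit per pair (i, j)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(100):
    # Placeholders: in practice, x is simulated from a random SCM whose
    # adjacency matrix a is known, so the mapping is learnable.
    x = torch.randn(32, n_vars, n_steps)
    a = torch.randint(0, 2, (32, n_vars * n_vars)).float()
    loss = bce(net(x), a)                        # supervised graph prediction
    opt.zero_grad(); loss.backward(); opt.step()
```

This works only to the extent the paper's caveat holds: test series must share most of their dynamics with the training distribution.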
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.