Identifying Causal Effects using Instrumental Time Series: Nuisance IV and Correcting for the Past
- URL: http://arxiv.org/abs/2203.06056v3
- Date: Fri, 19 Jul 2024 20:48:15 GMT
- Title: Identifying Causal Effects using Instrumental Time Series: Nuisance IV and Correcting for the Past
- Authors: Nikolaj Thams, Rikke Søndergaard, Sebastian Weichwald, Jonas Peters
- Abstract summary: We consider IV regression in time series models, such as vector auto-regressive (VAR) processes.
Direct applications of i.i.d. techniques are generally inconsistent as they do not correctly adjust for dependencies in the past.
We provide methods, prove their consistency, and show how the inferred causal effect can be used for distribution generalization.
- Score: 12.49477539101379
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instrumental variable (IV) regression relies on instruments to infer causal effects from observational data with unobserved confounding. We consider IV regression in time series models, such as vector auto-regressive (VAR) processes. Direct applications of i.i.d. techniques are generally inconsistent as they do not correctly adjust for dependencies in the past. In this paper, we outline the difficulties that arise due to time structure and propose methodology for constructing identifying equations that can be used for consistent parametric estimation of causal effects in time series data. One method uses extra nuisance covariates to obtain identifiability (an idea that can be of interest even in the i.i.d. case). We further propose a graph marginalization framework that allows us to apply nuisance IV and other IV methods in a principled way to time series. Our methods make use of a version of the global Markov property, which we prove holds for VAR(p) processes. For VAR(1) processes, we prove identifiability conditions that relate to Jordan forms and are different from the well-known rank conditions in the i.i.d. case (they do not require as many instruments as covariates, for example). We provide methods, prove their consistency, and show how the inferred causal effect can be used for distribution generalization. Simulation experiments corroborate our theoretical results. We provide ready-to-use Python code.
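To make the abstract's inconsistency claim concrete, the following is a minimal simulation sketch. The data-generating process, coefficients, and variable names are assumptions made for illustration; this is not the authors' estimator (their ready-to-use Python code accompanies the paper). It shows that, in a confounded VAR-type process, both ordinary least squares and a plain i.i.d.-style IV estimate with a lagged instrument miss the true causal coefficient, which is the failure mode that nuisance IV and the paper's past-correcting constructions address.

```python
# Hedged sketch only, not the authors' method: an assumed VAR-type data-generating
# process illustrating why plain i.i.d.-style IV estimation is inconsistent here.
import numpy as np

rng = np.random.default_rng(0)
T, beta = 50_000, 1.5                      # beta: causal effect of X_t on Y_t

I = np.zeros(T); H = np.zeros(T); X = np.zeros(T); Y = np.zeros(T)
for t in range(1, T):
    I[t] = 0.5 * I[t - 1] + rng.normal()   # observed instrument process
    H[t] = 0.5 * H[t - 1] + rng.normal()   # hidden, autocorrelated confounder
    X[t] = 0.8 * I[t - 1] + H[t] + 0.4 * X[t - 1] + rng.normal()
    Y[t] = beta * X[t] + H[t] + 0.3 * Y[t - 1] + rng.normal()

x, y, z = X[1:], Y[1:], I[:-1]             # z is the lagged instrument I_{t-1}

ols = np.cov(x, y)[0, 1] / np.var(x)                # confounded by H_t
iv_naive = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # i.i.d.-style IV (Wald ratio)

print(f"true beta          : {beta}")
print(f"OLS                : {ols:.3f}")       # biased by the hidden confounder
print(f"naive IV (lagged Z): {iv_naive:.3f}")  # biased: I_{t-1} is correlated with
                                               # the Y_{t-1} term left in the residual
```

In this simulated setting both estimates deviate noticeably from beta = 1.5, which is why the paper constructs identifying equations that explicitly correct for the past rather than applying i.i.d. IV techniques directly.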
Related papers
- DecoR: Deconfounding Time Series with Robust Regression [8.02119748947076]
This work focuses on estimating the causal effect between two time series, which are confounded by a third, unobserved time series.
We introduce Deconfounding by Robust regression (DecoR), a novel approach that estimates the causal effect using robust linear regression in the frequency domain.
arXiv Detail & Related papers (2024-06-11T06:59:17Z)
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an textbfIDentification framework for instantanetextbfOus textbfLatent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Geometry-Aware Instrumental Variable Regression [56.16884466478886]
We propose a transport-based IV estimator that takes into account the geometry of the data manifold through data-derivative information.
We provide a simple plug-and-play implementation of our method that performs on par with related estimators in standard settings.
arXiv Detail & Related papers (2024-05-19T17:49:33Z)
- Regularized DeepIV with Model Selection [72.17508967124081]
Regularized DeepIV (RDIV) regression can converge to the least-norm IV solution.
Our method matches the current state-of-the-art convergence rate.
arXiv Detail & Related papers (2024-03-07T05:38:56Z)
- Sequential Kernelized Independence Testing [101.22966794822084]
We design sequential kernelized independence tests inspired by kernelized dependence measures.
We demonstrate the power of our approaches on both simulated and real data.
arXiv Detail & Related papers (2022-12-14T18:08:42Z)
- Deep Counterfactual Estimation with Categorical Background Variables [3.04585143845864]
Counterfactual queries typically ask the "What if?" question retrospectively.
We introduce CounterFactual Query Prediction (CFQP), a novel method to infer counterfactuals from continuous observations.
Our method significantly outperforms previously available deep-learning-based counterfactual methods.
arXiv Detail & Related papers (2022-10-11T22:27:11Z)
- Sequential Causal Effect Variational Autoencoder: Time Series Causal Link Estimation under Hidden Confounding [8.330791157878137]
Estimating causal effects from observational data can yield spurious relationships that are mistaken for causal ones.
We propose Sequential Causal Effect Variational Autoencoder (SCEVAE), a novel method for time series causality analysis under hidden confounding.
arXiv Detail & Related papers (2022-09-23T09:43:58Z)
- Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning [107.70165026669308]
In offline reinforcement learning (RL), an optimal policy is learned solely from previously collected observational data.
We study a confounded Markov decision process where the transition dynamics admit an additive nonlinear functional form.
We propose a provably efficient IV-aided Value Iteration (IVVI) algorithm based on a primal-dual reformulation of the conditional moment restriction.
arXiv Detail & Related papers (2021-02-19T13:01:40Z)
- High-recall causal discovery for autocorrelated time series with latent confounders [12.995632804090198]
We show that existing causal discovery methods such as FCI and variants suffer from low recall in the autocorrelated time series case.
We provide Python code for all methods involved in the simulation studies.
arXiv Detail & Related papers (2020-07-03T18:01:04Z)
- Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? [97.77183117452235]
We carry out human subject tests to isolate the effect of algorithmic explanations on model interpretability.
Clear evidence of method effectiveness is found in very few cases.
Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability.
arXiv Detail & Related papers (2020-05-04T20:35:17Z)
- MissDeepCausal: Causal Inference from Incomplete Data Using Deep Latent Variable Models [14.173184309520453]
State-of-the-art methods for causal inference do not account for missing values.
Missing data require an adapted unconfoundedness hypothesis.
Latent confounders, whose distribution is learned with variational autoencoders adapted to missing values, are considered.
arXiv Detail & Related papers (2020-02-25T12:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.