Deconfounding Temporal Autoencoder: Estimating Treatment Effects over
Time Using Noisy Proxies
- URL: http://arxiv.org/abs/2112.03013v1
- Date: Mon, 6 Dec 2021 13:14:31 GMT
- Title: Deconfounding Temporal Autoencoder: Estimating Treatment Effects over
Time Using Noisy Proxies
- Authors: Milan Kuzmanovic, Tobias Hatt, Stefan Feuerriegel
- Abstract summary: Estimating individualized treatment effects (ITEs) from observational data is crucial for decision-making.
We develop the Deconfounding Temporal Autoencoder (DTA), a novel method that leverages observed noisy proxies to learn a hidden embedding.
We demonstrate the effectiveness of our DTA by improving over state-of-the-art benchmarks by a substantial margin.
- Score: 15.733136147164032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating individualized treatment effects (ITEs) from observational data is
crucial for decision-making. In order to obtain unbiased ITE estimates, a
common assumption is that all confounders are observed. However, in practice,
it is unlikely that we observe these confounders directly. Instead, we often
observe noisy measurements of true confounders, which can serve as valid
proxies. In this paper, we address the problem of estimating ITE in the
longitudinal setting where we observe noisy proxies instead of true
confounders. To this end, we develop the Deconfounding Temporal Autoencoder (DTA), a
novel method that leverages observed noisy proxies to learn a hidden embedding
that reflects the true hidden confounders. In particular, the DTA combines a
long short-term memory autoencoder with a causal regularization penalty that
renders the potential outcomes and treatment assignment conditionally
independent given the learned hidden embedding. Once the hidden embedding is
learned via DTA, state-of-the-art outcome models can be used to control for it
and obtain unbiased estimates of ITE. Using synthetic and real-world medical
data, we demonstrate the effectiveness of our DTA by improving over
state-of-the-art benchmarks by a substantial margin.
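The abstract describes a training objective that combines an LSTM autoencoder's reconstruction loss with a causal regularization penalty tying the learned embedding to both outcomes and treatments. A minimal numpy sketch of such a combined loss follows; the loss weights, variable names, and exact form of each term are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dta_loss(x, x_recon, a, a_hat, y, y_hat, alpha=1.0, beta=1.0):
    """Sketch of a DTA-style objective (illustrative, not the paper's exact loss).

    x       : observed noisy proxies,                       shape (n, d)
    x_recon : autoencoder reconstruction of the proxies,    shape (n, d)
    a, a_hat: binary treatments and predicted probabilities
              computed from the hidden embedding,           shape (n,)
    y, y_hat: factual outcomes and predictions
              computed from the hidden embedding,           shape (n,)

    The outcome and treatment terms play the role of the causal
    regularizer: forcing the embedding to predict both encourages
    conditional independence of outcome and treatment given it.
    """
    recon = np.mean((x - x_recon) ** 2)            # proxy reconstruction error
    outcome = np.mean((y - y_hat) ** 2)            # factual outcome fit
    eps = 1e-8                                     # numerical safety for log
    treat = -np.mean(a * np.log(a_hat + eps)       # treatment cross-entropy
                     + (1 - a) * np.log(1 - a_hat + eps))
    return recon + alpha * outcome + beta * treat
```

With perfect reconstruction and predictions, all three terms vanish; in practice the encoder, decoder, and the two prediction heads would be trained jointly to minimize this quantity.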
Related papers
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Mitigating LLM Hallucinations via Conformal Abstention [70.83870602967625]
We develop a principled procedure for determining when a large language model should abstain from responding in a general domain.
We leverage conformal prediction techniques to develop an abstention procedure that benefits from rigorous theoretical guarantees on the hallucination rate (error rate)
Experimentally, our resulting conformal abstention method reliably bounds the hallucination rate on various closed-book, open-domain generative question answering datasets.
arXiv Detail & Related papers (2024-04-04T11:32:03Z)
- CenTime: Event-Conditional Modelling of Censoring in Survival Analysis [49.44664144472712]
We introduce CenTime, a novel approach to survival analysis that directly estimates the time to event.
Our method features an innovative event-conditional censoring mechanism that performs robustly even when uncensored data is scarce.
Our results indicate that CenTime offers state-of-the-art performance in predicting time-to-death while maintaining comparable ranking performance.
arXiv Detail & Related papers (2023-09-07T17:07:33Z)
- Estimating Treatment Effects in Continuous Time with Hidden Confounders [8.292249583600809]
Estimating treatment effects in the longitudinal setting in the presence of hidden confounders remains an extremely challenging problem.
Recent advances in neural differential equations make it possible to build a latent factor model using a controlled differential equation and a Lipschitz-constrained convolutional operation.
Experiments on both synthetic and real-world datasets highlight the promise of continuous time methods for estimating treatment effects in the presence of hidden confounders.
arXiv Detail & Related papers (2023-02-19T00:28:20Z)
- Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning [64.78972193105443]
This paper presents a novel framework for detecting adversarial examples (AEs) and identifying trustworthy predictions.
It performs detection by distinguishing an AE's abnormal relation with its augmented versions.
An off-the-shelf Self-Supervised Learning (SSL) model is used to extract the representation and predict the label.
arXiv Detail & Related papers (2022-08-31T08:18:44Z)
- Variational Temporal Deconfounder for Individualized Treatment Effect Estimation from Longitudinal Observational Data [8.347630187110004]
Existing approaches for estimating treatment effects from longitudinal observational data are usually built upon a strong assumption of "unconfoundedness"
We propose the Variational Temporal Deconfounder (VTD), an approach that leverages deep variational embeddings in the longitudinal setting using proxies.
We test our VTD method on both synthetic and real-world clinical data, and the results show that our approach is effective when hidden confounding is the leading bias compared to other existing models.
arXiv Detail & Related papers (2022-07-23T16:43:12Z)
- Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z)
- Sequential Deconfounding for Causal Inference with Unobserved Confounders [18.586616164230566]
We develop the Sequential Deconfounder, a method that enables estimating individualized treatment effects over time.
This is the first deconfounding method that can be used in a general sequential setting.
We prove that using our method yields unbiased estimates of individualized treatment responses over time.
arXiv Detail & Related papers (2021-04-16T09:56:39Z)
- Quantifying Ignorance in Individual-Level Causal-Effect Estimates under Hidden Confounding [38.09565581056218]
We study the problem of learning conditional average treatment effects (CATE) from high-dimensional, observational data with unobserved confounders.
We present a new parametric interval estimator suited for high-dimensional data.
arXiv Detail & Related papers (2021-03-08T15:58:06Z)
- Estimating Individual Treatment Effects with Time-Varying Confounders [9.784193264717098]
Estimating individual treatment effect (ITE) from observational data is meaningful and practical in healthcare.
Existing work mainly relies on the strong ignorability assumption that no hidden confounders exist.
We propose Deep Sequential Weighting (DSW) for estimating ITE with time-varying confounders.
arXiv Detail & Related papers (2020-08-27T02:21:56Z)
- Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
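Among the methods listed above, the conformal abstention idea (Mitigating LLM Hallucinations via Conformal Abstention) is simple enough to sketch: calibrate a score threshold on held-out data so that the abstention rule inherits a finite-sample error guarantee. The following is a generic split-conformal sketch under assumed score and function names, not the paper's specific procedure.

```python
import numpy as np

def calibrate_threshold(cal_scores, alpha=0.1):
    """Split-conformal calibration of an abstention threshold.

    cal_scores: nonconformity scores on held-out calibration data
                (higher = less trustworthy).
    alpha     : target error level; with probability at least 1 - alpha,
                a new score from the same distribution falls at or below
                the returned threshold.
    """
    scores = np.sort(np.asarray(cal_scores, dtype=float))
    n = len(scores)
    # Finite-sample-corrected rank of the conformal quantile.
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return scores[k - 1]

def should_abstain(score, threshold):
    """Abstain from answering when the score exceeds the calibrated threshold."""
    return score > threshold
```

In an LLM setting, the score could be any measure of answer unreliability; the guarantee holds as long as calibration and test scores are exchangeable.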
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.