Dynamic treatment effects: high-dimensional inference under model misspecification
- URL: http://arxiv.org/abs/2111.06818v3
- Date: Thu, 30 Jan 2025 03:30:07 GMT
- Title: Dynamic treatment effects: high-dimensional inference under model misspecification
- Authors: Yuqian Zhang, Weijie Ji, Jelena Bradic
- Abstract summary: This paper introduces a novel "sequential model doubly robust" estimator.
We develop novel moment-targeting estimates to account for confounding effects and establish that root-$N$ inference can be achieved.
Unlike off-the-shelf high-dimensional methods, which fail to deliver robust inference under model misspecification even within the doubly robust framework, our newly developed loss functions address this limitation effectively.
- Score: 8.916614661563893
- License:
- Abstract: Estimating dynamic treatment effects is crucial across various disciplines, providing insights into the time-dependent causal impact of interventions. However, this estimation poses challenges due to time-varying confounding, leading to potentially biased estimates. Furthermore, accurately specifying the growing number of treatment assignments and outcome models with multiple exposures appears increasingly challenging to accomplish. Double robustness, which permits model misspecification, holds great value in addressing these challenges. This paper introduces a novel "sequential model doubly robust" estimator. We develop novel moment-targeting estimates to account for confounding effects and establish that root-$N$ inference can be achieved as long as at least one nuisance model is correctly specified at each exposure time, despite the presence of high-dimensional covariates. Although the nuisance estimates themselves do not achieve root-$N$ rates, the carefully designed loss functions in our framework ensure final root-$N$ inference for the causal parameter of interest. Unlike off-the-shelf high-dimensional methods, which fail to deliver robust inference under model misspecification even within the doubly robust framework, our newly developed loss functions address this limitation effectively.
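The sequential, multi-exposure estimator the abstract describes is beyond a short sketch, but the single-time-point double robustness it generalizes can be illustrated with a toy augmented inverse-probability-weighted (AIPW) estimator. This is a simplified illustration, not the paper's method: the sklearn models below stand in for the paper's high-dimensional, moment-targeting nuisance estimates.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, A, Y):
    """Toy AIPW estimate of the average treatment effect at a single
    exposure time.  The estimate is consistent if either the propensity
    model or the outcome models are correctly specified (double
    robustness)."""
    # Propensity model: P(A = 1 | X)
    ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
    # Outcome models: E[Y | X, A = a], fit separately on each arm
    mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)
    # AIPW score: outcome-model prediction plus an IPW residual correction
    psi = (mu1 - mu0
           + A * (Y - mu1) / ps
           - (1 - A) * (Y - mu0) / (1 - ps))
    return psi.mean()

# Simulated data with confounding through X[:, 0] and a true effect of 2.0
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = 2.0 * A + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)
print(round(aipw_ate(X, A, Y), 2))  # close to the true effect of 2.0
```

The correction terms vanish in expectation whenever one nuisance model is right, which is the double-robustness property the paper extends to sequences of exposure times.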
Related papers
- Bridging Internal Probability and Self-Consistency for Effective and Efficient LLM Reasoning [53.25336975467293]
We present the first theoretical error decomposition analysis of methods such as perplexity and self-consistency.
Our analysis reveals a fundamental trade-off: perplexity methods suffer from substantial model error due to the absence of a proper consistency function.
We propose Reasoning-Pruning Perplexity Consistency (RPC), which integrates perplexity with self-consistency, and Reasoning Pruning, which eliminates low-probability reasoning paths.
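The RPC algorithm itself is not specified in this summary; as a toy illustration of the general idea of pruning low-probability reasoning paths and then aggregating the survivors, one might write the following. The function name, the pruning rule, and the probability weighting are illustrative assumptions, not the paper's method.

```python
import math
from collections import defaultdict

def pruned_weighted_vote(paths, prune_frac=0.5):
    """Toy sketch: drop the lowest-probability reasoning paths, then take
    a sequence-probability-weighted vote over the remaining answers.
    `paths` is a list of (answer, total_logprob) pairs, one per sampled
    reasoning path."""
    # Keep only the highest-probability fraction of the sampled paths
    paths = sorted(paths, key=lambda p: p[1], reverse=True)
    kept = paths[: max(1, int(len(paths) * prune_frac))]
    # Weight each surviving answer by its path probability
    scores = defaultdict(float)
    for answer, logprob in kept:
        scores[answer] += math.exp(logprob)
    return max(scores, key=scores.get)

paths = [("42", -1.0), ("42", -1.5), ("17", -6.0), ("42", -2.0), ("17", -7.0)]
print(pruned_weighted_vote(paths))  # -> "42"
```

Plain self-consistency would count every path equally; the pruning step discards paths the model itself considers unlikely before the vote.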
arXiv Detail & Related papers (2025-02-01T18:09:49Z) - Average Causal Effect Estimation in DAGs with Hidden Variables: Extensions of Back-Door and Front-Door Criteria [3.0232957374216953]
We develop one-step corrected plug-in and targeted minimum loss-based estimators of causal effects for a class of directed acyclic graphs (DAGs) with hidden variables.
We leverage machine learning to minimize modeling assumptions while ensuring key statistical properties such as asymptotic linearity, double robustness, efficiency, and staying within the bounds of the target parameter space.
arXiv Detail & Related papers (2024-09-06T01:07:29Z) - On the Trade-offs between Adversarial Robustness and Actionable Explanations [32.05150063480917]
We make one of the first attempts at studying the impact of adversarially robust models on actionable explanations.
We derive theoretical bounds on the differences between the cost and the validity of recourses generated by state-of-the-art algorithms.
Our results show that adversarially robust models significantly increase the cost and reduce the validity of the resulting recourses.
arXiv Detail & Related papers (2023-09-28T13:59:50Z) - Doubly Robust Proximal Causal Learning for Continuous Treatments [56.05592840537398]
We propose a kernel-based doubly robust causal learning estimator for continuous treatments.
We show that its oracle form is a consistent approximation of the influence function.
We then provide a comprehensive convergence analysis in terms of the mean square error.
arXiv Detail & Related papers (2023-09-22T12:18:53Z) - RobustMQ: Benchmarking Robustness of Quantized Models [54.15661421492865]
Quantization is an essential technique for deploying deep neural networks (DNNs) on devices with limited resources.
We thoroughly evaluated the robustness of quantized models against various noises (adversarial attacks, natural corruptions, and systematic noises) on ImageNet.
Our research contributes to advancing the robust quantization of models and their deployment in real-world scenarios.
arXiv Detail & Related papers (2023-08-04T14:37:12Z) - Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z) - High-dimensional Inference for Dynamic Treatment Effects [11.688030627514532]
We propose a novel DR representation for intermediate conditional outcome models that leads to superior robustness guarantees.
Our results represent a significant step forward as they provide new robustness guarantees.
arXiv Detail & Related papers (2021-10-10T23:05:29Z) - Deep Bayesian Estimation for Dynamic Treatment Regimes with a Long Follow-up Time [28.11470886127216]
Causal effect estimation for dynamic treatment regimes (DTRs) contributes to sequential decision making.
We combine outcome regression models with treatment models for high-dimensional features, using the uncensored subjects, whose sample size is small.
Also, the developed deep Bayesian models can model uncertainty and output the prediction variance, which is essential for safety-aware applications such as self-driving cars and medical treatment design.
arXiv Detail & Related papers (2021-09-20T13:21:39Z) - Low-Rank Temporal Attention-Augmented Bilinear Network for financial time-series forecasting [93.73198973454944]
Deep learning models have led to significant performance improvements in many problems coming from different domains, including prediction problems of financial time-series data.
The Temporal Attention-Augmented Bilinear network was recently proposed as an efficient and high-performing model for Limit Order Book time-series forecasting.
In this paper, we propose a low-rank tensor approximation of the model to further reduce the number of trainable parameters and increase its speed.
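As a generic illustration of why a low-rank approximation reduces the number of trainable parameters (not the paper's actual tensor decomposition of the bilinear network), consider replacing one dense weight matrix by two thin factors:

```python
import numpy as np

# Toy sketch of the low-rank idea: a dense weight matrix W (m x n) is
# replaced by two factors U (m x r) and V (r x n), cutting the number of
# parameters from m*n down to r*(m + n) when the rank r is small.
rng = np.random.default_rng(0)
m, n, r = 64, 128, 8
W = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # a rank-8 matrix

U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]          # absorb singular values into U
V = Vt[:r, :]
print(W.size, U.size + V.size)     # 8192 vs 1536 parameters
assert np.allclose(W, U @ V)       # exact here because W has rank 8
```

In a real network the factors are trained directly rather than obtained by SVD, so the speed and memory savings apply during both training and inference.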
arXiv Detail & Related papers (2021-07-05T10:15:23Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.