Anytime-Valid Inference for Double/Debiased Machine Learning of Causal Parameters
- URL: http://arxiv.org/abs/2408.09598v2
- Date: Tue, 10 Sep 2024 21:10:47 GMT
- Title: Anytime-Valid Inference for Double/Debiased Machine Learning of Causal Parameters
- Authors: Abhinandan Dalal, Patrick Blöbaum, Shiva Kasiviswanathan, Aaditya Ramdas
- Abstract summary: Double (debiased) machine learning (DML) has seen widespread use in recent years for learning causal/structural parameters.
The classic double-debiased framework is only valid asymptotically for a predetermined sample size.
This can be of particular concern in large-scale experimental studies with huge financial costs or human lives at stake.
- Score: 27.333679232669823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Double (debiased) machine learning (DML) has seen widespread use in recent years for learning causal/structural parameters, in part due to its flexibility and adaptability to high-dimensional nuisance functions as well as its ability to avoid bias from regularization or overfitting. However, the classic double-debiased framework is only valid asymptotically for a predetermined sample size, thus lacking the flexibility of collecting more data if sharper inference is needed, or stopping data collection early if useful inferences can be made earlier than expected. This can be of particular concern in large scale experimental studies with huge financial costs or human lives at stake, as well as in observational studies where the length of confidence intervals does not shrink to zero even with increasing sample size due to partial identifiability of a structural parameter. In this paper, we present time-uniform counterparts to the asymptotic DML results, enabling valid inference and confidence intervals for structural parameters to be constructed at any arbitrary (possibly data-dependent) stopping time. We provide conditions which are only slightly stronger than the standard DML conditions, but offer the stronger guarantee for anytime-valid inference. This facilitates the transformation of any existing DML method to provide anytime-valid guarantees with minimal modifications, making it highly adaptable and easy to use. We illustrate our procedure using two instances: a) local average treatment effect in online experiments with non-compliance, and b) partial identification of average treatment effect in observational studies with potential unmeasured confounding.
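To make the construction concrete, the following is a minimal sketch (not the authors' implementation) of how cross-fitted Neyman-orthogonal scores can be turned into an anytime-valid interval using an asymptotic confidence sequence with a Robbins normal-mixture boundary. Here `psi` (the per-observation scores), `rho` (a user-chosen mixture parameter), and the running-variance plug-in are illustrative choices, and the exact constants in the paper may differ.

```python
import numpy as np

def dml_confidence_sequence(psi, alpha=0.05, rho=1.0):
    """Running DML estimate and anytime-valid (asymptotic) confidence radius.

    psi : cross-fitted Neyman-orthogonal scores (e.g. AIPW scores for the ATE),
          one per observation, in arrival order.
    The radius uses a Robbins normal-mixture boundary, so the interval
    [center - radius, center + radius] holds simultaneously over all t.
    """
    psi = np.asarray(psi, dtype=float)
    t = np.arange(1, len(psi) + 1)
    center = np.cumsum(psi) / t                                   # running point estimate
    # running variance estimate (placeholder value of 1.0 at t = 1)
    var = np.array([psi[:k].var(ddof=1) if k > 1 else 1.0 for k in t])
    mix = t * var * rho ** 2 + 1.0
    radius = np.sqrt(2.0 * mix / (t ** 2 * rho ** 2) * np.log(np.sqrt(mix) / alpha))
    return center, radius
```

With such a sequence, an analyst can monitor center[t] ± radius[t] after every new observation and stop at any data-dependent time without invalidating coverage.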
Related papers
- Automatic doubly robust inference for linear functionals via calibrated debiased machine learning [0.9694940903078658]
We propose a calibrated debiased machine learning (C-DML) estimator for doubly robust inference on linear functionals.
The C-DML estimator maintains linearity when either the outcome regression or the Riesz representer of the linear functional is estimated sufficiently well.
Our theoretical and empirical results support the use of C-DML to mitigate bias arising from the inconsistent or slow estimation of nuisance functions.
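For context, the sketch below shows the kind of doubly robust score that both DML and C-DML debias: a generic cross-fitted AIPW score for the average treatment effect. The gradient-boosting nuisance models and the propensity clipping constant are illustrative choices, and the calibration step specific to C-DML is not shown.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

def aipw_scores(X, A, Y, n_splits=2, seed=0):
    """Cross-fitted doubly robust (AIPW) scores for the average treatment effect."""
    psi = np.zeros(len(Y), dtype=float)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in kf.split(X):
        treated, control = train[A[train] == 1], train[A[train] == 0]
        mu1 = GradientBoostingRegressor().fit(X[treated], Y[treated])  # outcome model, A = 1
        mu0 = GradientBoostingRegressor().fit(X[control], Y[control])  # outcome model, A = 0
        ps = GradientBoostingClassifier().fit(X[train], A[train])      # propensity model
        m1, m0 = mu1.predict(X[test]), mu0.predict(X[test])
        p = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)       # clipped propensities
        psi[test] = (m1 - m0
                     + A[test] * (Y[test] - m1) / p
                     - (1 - A[test]) * (Y[test] - m0) / (1 - p))
    return psi  # np.mean(psi) is the cross-fitted DML point estimate of the ATE
```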
arXiv Detail & Related papers (2024-11-05T03:32:30Z) - Double Machine Learning meets Panel Data -- Promises, Pitfalls, and Potential Solutions [0.0]
Estimating causal effect using machine learning (ML) algorithms can help to relax functional form assumptions if used within appropriate frameworks.
We show how double machine learning (DML) can be adapted to panel data in the presence of unobserved heterogeneity.
We also show that the influence of the unobserved heterogeneity on the observed confounders plays a significant role in the performance of most alternative methods.
arXiv Detail & Related papers (2024-09-02T13:59:54Z) - Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in prohibitive optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
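As a rough illustration of the reliability part of such a criterion, the sketch below filters test samples by prediction entropy before adaptation; the threshold rule and the omitted redundancy check are simplifications, not EATA's exact procedure.

```python
import numpy as np

def select_reliable(logits, margin_factor=0.4):
    """Keep only low-entropy (confidently predicted) test samples for adaptation.

    logits : array of shape (n_samples, n_classes) from the current model.
    Returns a boolean mask of samples considered reliable enough to adapt on.
    """
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                       # softmax probabilities
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)          # per-sample prediction entropy
    threshold = margin_factor * np.log(logits.shape[1])     # fraction of the maximum entropy
    return entropy < threshold
```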
arXiv Detail & Related papers (2024-03-18T05:49:45Z) - Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
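A simple way to see how an auxiliary per-example error signal can inform interval width is the standard normalized split-conformal construction sketched below, where `aux_cal` and `aux_test` stand in for a self-supervision-derived difficulty signal; this is a generic sketch, not the paper's exact way of estimating nonconformity scores.

```python
import numpy as np

def normalized_split_conformal(resid_cal, aux_cal, aux_test, y_pred_test, alpha=0.1):
    """Split-conformal prediction intervals normalized by an auxiliary difficulty signal.

    resid_cal : |y - f(x)| on a held-out calibration set
    aux_cal, aux_test : hypothetical per-example auxiliary error signal
    y_pred_test : point predictions on the test set
    """
    eps = 1e-8
    scores = resid_cal / (aux_cal + eps)              # normalized nonconformity scores
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n      # finite-sample corrected quantile level
    q = np.quantile(scores, min(q_level, 1.0))
    half_width = q * (aux_test + eps)                 # wider intervals where aux error is large
    return y_pred_test - half_width, y_pred_test + half_width
```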
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
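One common remedy for the first defect, sketched below, is to blend the source statistics with the current test-batch statistics instead of relying on the batch alone; this is a generic illustration of the idea, not DELTA's exact test-time renormalization scheme.

```python
import numpy as np

def recalibrate_bn_stats(mu_src, var_src, x_batch, momentum=0.1):
    """Blend source BN statistics with the current test batch.

    mu_src, var_src : per-channel statistics stored from training
    x_batch : test-time activations of shape (batch, channels)
    """
    mu_b, var_b = x_batch.mean(axis=0), x_batch.var(axis=0)
    mu = (1 - momentum) * mu_src + momentum * mu_b    # partial update toward the test batch
    var = (1 - momentum) * var_src + momentum * var_b
    return mu, var
```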
arXiv Detail & Related papers (2023-01-30T15:54:00Z) - Robust High-dimensional Tuning Free Multiple Testing [0.49416305961918056]
This paper revisits the celebrated Hodges-Lehmann (HL) estimator for estimating location parameters in both the one- and two-sample problems.
We develop a Berry-Esseen inequality and a Cramér-type moderate deviation result for the HL estimator based on a newly developed non-asymptotic Bahadur representation.
It is convincingly shown that the resulting tuning-free and moment-free methods control false discovery proportion at a prescribed level.
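For reference, the HL estimator analyzed here is itself simple to compute: the median of pairwise Walsh averages in the one-sample problem and the median of pairwise differences in the two-sample problem, as in the sketch below.

```python
import numpy as np

def hodges_lehmann(x, y=None):
    """Hodges-Lehmann location estimator.

    One sample : median of the Walsh averages (x_i + x_j) / 2 over i <= j.
    Two sample : median of all pairwise differences y_j - x_i (shift estimate).
    """
    x = np.asarray(x, dtype=float)
    if y is None:
        i, j = np.triu_indices(len(x))
        return np.median((x[i] + x[j]) / 2.0)
    y = np.asarray(y, dtype=float)
    return np.median(np.subtract.outer(y, x))
```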
arXiv Detail & Related papers (2022-11-22T02:35:28Z) - Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that, by tuning hyperparameters, the performance as measured by the marginal likelihood improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
arXiv Detail & Related papers (2022-10-14T08:09:33Z) - Finite-Sample Guarantees for High-Dimensional DML [0.0]
This paper gives novel finite-sample guarantees for joint inference on high-dimensional DML.
These guarantees are useful to applied researchers, as they are informative about how far off the coverage of joint confidence bands can be from the nominal level.
arXiv Detail & Related papers (2022-06-15T08:48:58Z) - Counterfactual inference for sequential experiments [17.817769460838665]
We consider after-study statistical inference for sequentially designed experiments wherein multiple units are assigned treatments for multiple time points.
Our goal is to provide inference guarantees for the counterfactual mean at the smallest possible scale.
We illustrate our theory via several simulations and a case study involving data from HeartSteps, a mobile health clinical trial.
arXiv Detail & Related papers (2022-02-14T17:24:27Z)