Doubly-Valid/Doubly-Sharp Sensitivity Analysis for Causal Inference with
Unmeasured Confounding
- URL: http://arxiv.org/abs/2112.11449v1
- Date: Tue, 21 Dec 2021 18:55:12 GMT
- Title: Doubly-Valid/Doubly-Sharp Sensitivity Analysis for Causal Inference with
Unmeasured Confounding
- Authors: Jacob Dorn, Kevin Guo, Nathan Kallus
- Abstract summary: We study the problem of constructing bounds on the average treatment effect in the presence of unobserved confounding.
We propose novel estimators of these bounds that we call "doubly-valid/doubly-sharp" (DVDS) estimators.
- Score: 62.40420028973522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of constructing bounds on the average treatment effect
in the presence of unobserved confounding under the marginal sensitivity model
of Tan (2006). Combining an existing characterization involving adversarial
propensity scores with a new distributionally robust characterization of the
problem, we propose novel estimators of these bounds that we call
"doubly-valid/doubly-sharp" (DVDS) estimators. Double sharpness corresponds to
the fact that DVDS estimators consistently estimate the tightest possible
(i.e., sharp) bounds implied by the sensitivity model even when one of two
nuisance parameters is misspecified and achieve semiparametric efficiency when
all nuisance parameters are suitably consistent. Double validity is an entirely
new property for partial identification: DVDS estimators still provide valid,
though not sharp, bounds even when most nuisance parameters are misspecified.
In fact, even in cases when DVDS point estimates fail to be asymptotically
normal, standard Wald confidence intervals may remain valid. In the case of
binary outcomes, the DVDS estimators are particularly convenient and possess
a closed-form expression in terms of the outcome regression and propensity
score. We demonstrate the DVDS estimators in a simulation study as well as a
case study of right heart catheterization.
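For orientation, the marginal sensitivity model behind these bounds can be sketched as follows; the notation (Lambda, e(x), e(x, y)) is standard in this literature but is ours, not quoted from the paper.

```latex
% Sketch of Tan's (2006) marginal sensitivity model (notation is ours).
% e(x)   = P(A = 1 | X = x)            -- nominal propensity score
% e(x,y) = P(A = 1 | X = x, Y(1) = y)  -- complete propensity score
% The parameter Lambda >= 1 bounds their discrepancy on the odds-ratio scale:
\[
  \Lambda^{-1}
  \;\le\;
  \frac{e(x,y) \,/\, \bigl(1 - e(x,y)\bigr)}{e(x) \,/\, \bigl(1 - e(x)\bigr)}
  \;\le\;
  \Lambda
  \qquad \text{for all } x, y,
\]
% with an analogous constraint for Y(0).
```

Lambda = 1 recovers the unconfounded case; larger values permit stronger unmeasured confounding, and the sharp bounds targeted by the DVDS estimators widen accordingly.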
Related papers
- How Likely Are You to Observe Non-locality with Imperfect Detection Efficiency and Random Measurement Settings? [0.0]
Imperfect detection efficiency remains one of the major obstacles in achieving loophole-free Bell tests over long distances.
We examine the impact of limited detection efficiency on the probability of Bell inequality violation with Haar-random measurement settings.
We show that the so-called typicality of Bell inequality violation holds even if the detection efficiency is limited.
arXiv Detail & Related papers (2025-03-27T14:08:50Z)
- Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes [44.974100402600165]
We study the evaluation of a policy under best- and worst-case perturbations to a Markov decision process (MDP)
We use transition observations from the original MDP, whether they are generated under the same or a different policy.
Our estimator also supports statistical inference using Wald confidence intervals.
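To fix ideas, a generic robust policy-evaluation recursion is sketched below; the ambiguity set U(s, a) is a placeholder, not the specific construction or estimator used in the paper.

```latex
% Generic robust policy-evaluation recursion (illustrative only).  The lower
% bound on the value of policy pi uses the worst-case transition kernel:
\[
  \underline{V}^{\pi}(s)
  \;=\;
  \sum_{a} \pi(a \mid s)\,
  \min_{P \in \mathcal{U}(s,a)}
  \Bigl[\, r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, \underline{V}^{\pi}(s') \,\Bigr],
\]
% and the best-case (upper) bound replaces the min with a max.  The difficulty
% addressed by the paper is estimating such robust values efficiently from
% logged transitions, possibly collected under a different policy.
```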
arXiv Detail & Related papers (2024-03-29T18:11:49Z)
- Online Estimation with Rolling Validation: Adaptive Nonparametric Estimation with Streaming Data [13.069717985067937]
We propose a weighted rolling validation procedure, an online variant of leave-one-out cross-validation, that adds minimal extra computational cost for many typical gradient descent estimators.
Our analysis is straightforward, relying mainly on some general statistical assumptions.
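A minimal sketch of the rolling-validation idea for streaming estimators is given below, under our own simplifying assumptions (squared-error loss, an exponential decay weight, and a toy online least-squares learner); it is not the paper's exact procedure.

```python
import numpy as np

def weighted_rolling_validation(stream, learners, decay=0.99):
    """Rolling (prequential) validation: each incoming point is scored by every
    candidate learner *before* it is used to update that learner, and the
    resulting losses are accumulated with exponential down-weighting of older
    errors.  Returns the index of the learner with the smallest rolling loss."""
    scores = np.zeros(len(learners))
    for x, y in stream:
        for i, learner in enumerate(learners):
            pred = learner.predict(x)            # score first ...
            scores[i] = decay * scores[i] + (pred - y) ** 2
            learner.partial_fit(x, y)            # ... then update online
    return int(np.argmin(scores))

class SGDRegressor1D:
    """Toy online least-squares learner, included only to make the sketch runnable."""
    def __init__(self, lr):
        self.lr, self.w, self.b = lr, 0.0, 0.0
    def predict(self, x):
        return self.w * x + self.b
    def partial_fit(self, x, y):
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

# Example: pick a step size online from a small candidate grid.
rng = np.random.default_rng(0)
stream = [(x, 2.0 * x + rng.normal(scale=0.1)) for x in rng.normal(size=500)]
best = weighted_rolling_validation(stream, [SGDRegressor1D(lr) for lr in (0.001, 0.01, 0.1)])
print("selected candidate:", best)
```

The key point is that each observation is scored before it is used for the update, so the accumulated loss behaves like an online analogue of leave-one-out validation.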
arXiv Detail & Related papers (2023-10-18T17:52:57Z)
- Model-Agnostic Covariate-Assisted Inference on Partially Identified Causal Effects [1.9253333342733674]
Many causal estimands are only partially identifiable since they depend on the unobservable joint distribution between potential outcomes.
We propose a unified and model-agnostic inferential approach for a wide class of partially identified estimands.
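A textbook example (not taken from the paper) of why such estimands are only partially identified: with binary outcomes, the probability of benefiting from treatment is bounded but not point-identified.

```latex
% Frechet-Hoeffding bounds for the probability of benefit.  The marginals
%   p_1 = P(Y(1) = 1)  and  p_0 = P(Y(0) = 1)
% may be identified, yet the joint event {Y(1) = 1, Y(0) = 0} is only bounded:
\[
  \max\{0,\; p_1 - p_0\}
  \;\le\;
  P\bigl(Y(1) = 1,\, Y(0) = 0\bigr)
  \;\le\;
  \min\{p_1,\; 1 - p_0\},
\]
% so inference must target an identified interval rather than a single point.
```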
arXiv Detail & Related papers (2023-10-12T08:17:30Z)
- A Targeted Accuracy Diagnostic for Variational Approximations [8.969208467611896]
Variational Inference (VI) is an attractive alternative to Markov Chain Monte Carlo (MCMC)
Existing methods characterize the quality of the whole variational distribution.
We propose the TArgeted Diagnostic for Distribution Approximation Accuracy (TADDAA)
arXiv Detail & Related papers (2023-02-24T02:50:18Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
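For background, the standard subjective-logic (evidential) parametrization that such models typically build on is sketched below; the exact network heads and losses used by DEviS are not described in this summary, so the notation here is generic.

```latex
% Subjective-logic / evidential parametrization for a K-class pixel: a network
% outputs nonnegative evidence e_k, defining a Dirichlet with alpha_k = e_k + 1.
\[
  S = \sum_{k=1}^{K} \alpha_k, \qquad
  \hat{p}_k = \frac{\alpha_k}{S}, \qquad
  b_k = \frac{e_k}{S}, \qquad
  u = \frac{K}{S},
\]
% so class probabilities and a per-pixel uncertainty mass u are produced jointly;
% low total evidence gives large u, which an uncertainty-aware filtering module
% can threshold to keep only reliable predictions.
```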
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- On double-descent in uncertainty quantification in overparametrized models [24.073221004661427]
Uncertainty quantification is a central challenge in reliable and trustworthy machine learning.
We show a trade-off between classification accuracy and calibration, unveiling a double-descent-like behavior in the calibration curve of optimally regularized estimators.
This is in contrast with the empirical Bayes method, which we show to be well calibrated in our setting despite the higher generalization error and overparametrization.
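For reference, the notion of calibration being traded off against accuracy here is the standard one (our phrasing, not the paper's).

```latex
% A probabilistic classifier f is calibrated if its confidence matches the
% empirical frequency of correctness at every confidence level:
\[
  P\bigl(Y = 1 \mid f(X) = p\bigr) = p \quad \text{for all } p \in [0, 1],
\]
% and miscalibration is commonly summarized by the expected calibration error
\[
  \mathrm{ECE}(f) = \mathbb{E}\Bigl[\,\bigl|\, P\bigl(Y = 1 \mid f(X)\bigr) - f(X) \,\bigr|\,\Bigr].
\]
```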
arXiv Detail & Related papers (2022-10-23T16:01:08Z)
- Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
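The marginal likelihood in question is the standard Gaussian-process evidence; the zero-mean, Gaussian-noise form below uses our notation (K_theta, sigma^2) rather than the paper's.

```latex
% Log marginal likelihood of a zero-mean Gaussian process with kernel matrix
% K_theta and noise variance sigma^2:
\[
  \log p(\mathbf{y} \mid X, \theta)
  = -\tfrac{1}{2}\, \mathbf{y}^{\top} \bigl(K_{\theta} + \sigma^{2} I\bigr)^{-1} \mathbf{y}
    - \tfrac{1}{2} \log \bigl\lvert K_{\theta} + \sigma^{2} I \bigr\rvert
    - \tfrac{n}{2} \log 2\pi .
\]
% Cross-validation metrics instead score held-out predictive performance of the
% same hyperparameters; the paper contrasts how the two criteria behave as the
% input dimension grows.
```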
arXiv Detail & Related papers (2022-10-14T08:09:33Z)
- Causal Inference Under Unmeasured Confounding With Negative Controls: A Minimax Learning Approach [84.29777236590674]
We study the estimation of causal parameters when not all confounders are observed and instead negative controls are available.
Recent work has shown how these can enable identification and efficient estimation via two so-called bridge functions.
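For orientation, one common outcome-bridge formulation from the proximal causal inference literature is sketched below; W denotes a negative control outcome and Z a negative control treatment, and this generic formulation need not match the paper's exact setup.

```latex
% Outcome bridge function h: a solution of the conditional moment restriction
\[
  \mathbb{E}\bigl[\, Y - h(W, A, X) \;\big|\; Z, A, X \,\bigr] = 0,
\]
% under which, given suitable completeness conditions, counterfactual means are
% recovered as
\[
  \mathbb{E}\bigl[ Y(a) \bigr] = \mathbb{E}\bigl[\, h(W, a, X) \,\bigr].
\]
% A second (treatment) bridge function plays the analogous role on the
% propensity side; the paper studies minimax estimation of such functions.
```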
arXiv Detail & Related papers (2021-03-25T17:59:19Z)
- Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification [58.03725169462616]
We show theoretically that over-parametrization is not the only reason for over-confidence.
We prove that logistic regression is inherently over-confident, in the realizable, under-parametrized setting.
Perhaps surprisingly, we also show that over-confidence is not always the case.
arXiv Detail & Related papers (2021-02-15T21:38:09Z)
- Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning applications such as education and healthcare.
We develop an approach that estimates bounds on the value of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)
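As a rough illustration of the kind of interval such bounds produce, here is a toy single-step (contextual-bandit) sketch rather than the paper's infinite-horizon estimator; the box constraint on the weights, the function name, and the synthetic data are all our own assumptions.

```python
import numpy as np

def msm_value_interval(y, pi_target, pi_behavior, lam):
    """Bounds on the self-normalized importance-weighted value of a target
    policy when each nominal importance weight may be mis-specified by a
    multiplicative factor in [1/lam, lam] (a simplified, sensitivity-style box;
    the marginal sensitivity model itself constrains propensity odds ratios).
    y            observed rewards for the logged actions
    pi_target    target-policy probabilities of the logged actions
    pi_behavior  nominal behavior-policy probabilities of the logged actions
    lam          sensitivity parameter >= 1
    """
    w0 = pi_target / pi_behavior          # nominal importance weights
    lo_w, hi_w = w0 / lam, w0 * lam       # per-unit weight intervals

    def extreme(sign):
        # Optimize sum(w*y)/sum(w) over the weight box.  The optimum puts the
        # high weight on outcomes above a threshold and the low weight on the
        # rest, so scanning all split points of the sorted outcomes suffices.
        order = np.argsort(-sign * y)     # most favorable outcomes first
        ys, lo, hi = y[order], lo_w[order], hi_w[order]
        best = None
        for k in range(len(ys) + 1):
            w = np.concatenate([hi[:k], lo[k:]])
            val = np.dot(w, ys) / np.sum(w)
            best = val if best is None else (max(best, val) if sign > 0 else min(best, val))
        return best

    return extreme(-1), extreme(+1)       # (lower bound, upper bound)

# Illustrative use with synthetic logged data.
rng = np.random.default_rng(1)
n = 200
y = rng.normal(loc=1.0, size=n)
pi_b = rng.uniform(0.2, 0.8, size=n)
pi_t = rng.uniform(0.2, 0.8, size=n)
print(msm_value_interval(y, pi_t, pi_b, lam=1.5))
```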
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.