Causal Inference Under Unmeasured Confounding With Negative Controls: A
Minimax Learning Approach
- URL: http://arxiv.org/abs/2103.14029v2
- Date: Mon, 29 Mar 2021 00:23:44 GMT
- Title: Causal Inference Under Unmeasured Confounding With Negative Controls: A
Minimax Learning Approach
- Authors: Nathan Kallus, Xiaojie Mao, Masatoshi Uehara
- Abstract summary: We study the estimation of causal parameters when not all confounders are observed and instead negative controls are available.
Recent work has shown how these can enable identification and efficient estimation via two so-called bridge functions.
- Score: 84.29777236590674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the estimation of causal parameters when not all confounders are
observed and instead negative controls are available. Recent work has shown how
these can enable identification and efficient estimation via two so-called
bridge functions. In this paper, we tackle the primary challenge to causal
inference using negative controls: the identification and estimation of these
bridge functions. Previous work has relied on uniqueness and completeness
assumptions on these functions that may be implausible in practice and also
focused on their parametric estimation. Instead, we provide a new
identification strategy that avoids both uniqueness and completeness. And, we
provide new estimators for these functions based on minimax learning
formulations. These estimators accommodate general function classes such as
reproducing kernel Hilbert spaces and neural networks. We study finite-sample
convergence results both for estimating the bridge functions themselves and for the
final estimation of the causal parameter. We do this under a variety of
combinations of assumptions that include realizability and closedness
conditions on the hypothesis and critic classes employed in the minimax
estimator. Depending on how much we are willing to assume, we obtain different
convergence rates. In some cases, we show the estimate for the causal parameter
may converge even when our bridge function estimators do not converge to any
valid bridge function. And, in other cases, we show we can obtain
semiparametric efficiency.
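To make the minimax formulation concrete, below is a minimal numpy sketch of a kernel (RKHS) minimax estimator for the outcome bridge function in the standard negative-control/proximal setup, where W denotes the negative control outcome and Z the negative control exposure. This is an illustrative sketch under assumptions borrowed from the general proximal causal inference literature, not the authors' exact estimator: the moment condition, kernel choices, regularization constants, and the plug-in formula E[Y(a)] = E[h(W, a, X)] are all stated here as assumptions.

```python
import numpy as np

def rbf_kernel(U, V, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of U and V."""
    d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def fit_outcome_bridge(Y, W, Z, A, X, a, lam_f=1.0, lam_h=1e-3):
    """
    Kernel minimax sketch for an outcome bridge h(W, a, X) that
    (approximately) solves the conditional moment restriction
        E[Y - h(W, A, X) | Z, A, X] = 0
    on the subsample with A = a (an assumed, standard bridge condition).

    Hypothesis class: RKHS over (W, X); critic class: RKHS over (Z, X).
    With an RKHS critic and a squared-norm penalty, the inner maximization
    has a closed form, reducing the saddle point to the linear system below.
    """
    idx = (A == a)
    Y_a = Y[idx]
    H_feat = np.column_stack([W[idx], X[idx]])   # inputs of the bridge h
    F_feat = np.column_stack([Z[idx], X[idx]])   # inputs of the critic f
    L = rbf_kernel(H_feat, H_feat)               # hypothesis kernel matrix
    K = rbf_kernel(F_feat, F_feat)               # critic kernel matrix
    n = len(Y_a)
    # Minimax objective:
    #   min_h max_f  E_n[(Y - h(W,A,X)) f(Z,A,X)] - lam_f ||f||^2 + lam_h ||h||^2.
    # Maximizing over the RKHS critic in closed form and writing h = L @ alpha
    # gives (up to constants):  min_alpha (Y - L a)' K (Y - L a) + c a' L a.
    c = 4.0 * lam_f * lam_h * n ** 2
    alpha = np.linalg.solve(L @ K @ L + c * L + 1e-8 * np.eye(n), L @ K @ Y_a)

    def h(W_new, X_new):
        feats = np.column_stack([W_new, X_new])
        return rbf_kernel(feats, H_feat) @ alpha
    return h

def mean_potential_outcome(h, W, X):
    """Plug-in estimate of E[Y(a)] = E[h(W, a, X)], averaged over the full sample."""
    return h(W, X).mean()
```

With RKHS hypothesis and critic classes the saddle point reduces to a linear system as above; neural-network classes, which the abstract also covers, would instead be trained with alternating gradient updates on the same minimax objective.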
Related papers
- Estimation and Inference for Causal Functions with Multiway Clustered Data [6.988496457312806]
This paper proposes methods of estimation and uniform inference for a general class of causal functions.
The causal function is identified as a conditional expectation of an adjusted (Neyman-orthogonal) signal.
We apply the proposed methods to analyze the causal relationship between mistrust levels in Africa and the historical slave trade.
arXiv Detail & Related papers (2024-09-10T17:17:53Z)
- Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback [31.115084475673793]
The ensemble method is a promising way to mitigate the overestimation issue in Q-learning.
It is known that the estimation bias hinges heavily on the ensemble size.
We devise an ensemble method with two key steps: (a) approximation error characterization which serves as the feedback for flexibly controlling the ensemble size, and (b) ensemble size adaptation tailored towards minimizing the estimation bias.
arXiv Detail & Related papers (2023-06-20T22:06:14Z)
- On High dimensional Poisson models with measurement error: hypothesis testing for nonlinear nonconvex optimization [13.369004892264146]
We study estimation and testing of high-dimensional Poisson regression models with measurement error, which have wide applications in data analysis.
We propose to estimate the regression parameter by minimizing a penalized consistency criterion.
The proposed method is applied to data from the Alzheimer's Disease Neuroimaging Initiative.
arXiv Detail & Related papers (2022-12-31T06:58:42Z)
- Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
arXiv Detail & Related papers (2022-08-29T16:16:22Z)
- Inference on Strongly Identified Functionals of Weakly Identified Functions [71.42652863687117]
We study a novel condition for the functional to be strongly identified even when the nuisance function is not.
We propose penalized minimax estimators for both the primary and debiasing nuisance functions.
arXiv Detail & Related papers (2022-08-17T13:38:31Z)
- What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment [58.442274475425144]
We develop a robust inference algorithm that is efficient almost regardless of how and how fast these functions are learned.
We demonstrate our method in simulation studies and in a case study of career counseling for the unemployed.
arXiv Detail & Related papers (2022-05-20T17:36:33Z)
- A Minimax Learning Approach to Off-Policy Evaluation in Partially Observable Markov Decision Processes [31.215206208622728]
We consider off-policy evaluation (OPE) in Partially Observable Markov Decision Processes (POMDPs).
Existing methods suffer from either a large bias in the presence of unmeasured confounders, or a large variance in settings with continuous or large observation/state spaces.
We first propose novel identification methods for OPE in POMDPs with latent confounders, by introducing bridge functions that link the target policy's value and the observed data distribution.
arXiv Detail & Related papers (2021-11-12T15:52:24Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Deconfounding Scores: Feature Representations for Causal Effect Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z)
- Minimax Kernel Machine Learning for a Class of Doubly Robust Functionals [16.768606469968113]
We consider a class of doubly robust moment functions originally introduced in (Robins et al., 2008).
We demonstrate that this moment function can be used to construct estimating equations for the nuisance functions.
The convergence rates of the nuisance functions are analyzed using the modern techniques in statistical learning theory.
arXiv Detail & Related papers (2021-04-07T05:52:15Z)