Two-Stage Nuisance Function Estimation for Causal Mediation Analysis
- URL: http://arxiv.org/abs/2404.00735v1
- Date: Sun, 31 Mar 2024 16:38:48 GMT
- Title: Two-Stage Nuisance Function Estimation for Causal Mediation Analysis
- Authors: AmirEmad Ghassami
- Abstract summary: We propose a two-stage strategy that estimates each nuisance function according to the role it plays in the structure of the bias of the influence function-based estimator of the mediation functional.
We provide a robustness analysis of the proposed method, as well as sufficient conditions for consistency and asymptotic normality of the estimator of the parameter of interest.
- Score: 8.288031125057524
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When estimating the direct and indirect causal effects using the influence function-based estimator of the mediation functional, it is crucial to understand which aspects of the treatment, mediator, and outcome mean mechanisms should be focused on. In particular, treating these as nuisance functions and attempting to fit each of them as accurately as possible is not necessarily the best approach. In this work, we propose a two-stage estimation strategy that estimates each nuisance function according to the role it plays in the structure of the bias of the influence function-based estimator of the mediation functional. We provide a robustness analysis of the proposed method, as well as sufficient conditions for consistency and asymptotic normality of the estimator of the parameter of interest.
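As context for the abstract, the influence-function-based (one-step) estimator of the mediation functional E[E[E[Y | A=1, M, X] | A=0, X]] can be sketched on synthetic data. The following is a minimal illustration, not the paper's method: the true nuisance functions (propensity score, mediator density, outcome mean) are plugged in directly, and the data-generating process and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative data-generating process (all parameters are assumptions).
X = rng.uniform(0, 1, n)
A = rng.binomial(1, 0.5, n)                     # propensity pi(X) = 0.5
p_m = lambda a, x: 0.2 + 0.3 * a + 0.3 * x      # P(M = 1 | A = a, X = x)
M = rng.binomial(1, p_m(A, X))
mu = lambda a, m, x: a + m + x                  # outcome mean E[Y | A, M, X]
Y = mu(A, M, X) + rng.normal(0, 1, n)

# Nuisance functions (here the true ones; in practice they are estimated).
f = lambda m, a, x: p_m(a, x) ** m * (1 - p_m(a, x)) ** (1 - m)  # mediator density
eta = lambda x: 1 + p_m(0, x) + x               # E[mu(1, M, X) | A = 0, X]

# One-step estimator built from the efficient influence function of
# psi = E[ E[ E[Y | A=1, M, X] | A=0, X ] ]  (true value here: 1.85).
term1 = (A == 1) / 0.5 * f(M, 0, X) / f(M, 1, X) * (Y - mu(1, M, X))
term2 = (A == 0) / 0.5 * (mu(1, M, X) - eta(X))
psi_hat = np.mean(term1 + term2 + eta(X))

print(psi_hat)
```

With the true nuisances plugged in, the estimate concentrates around the true value 1.85; the abstract's point is that when the nuisances must be estimated, how accurately each one is fit should be driven by its role in the bias of this estimator, not by raw predictive accuracy.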
Related papers
- Doubly Robust Proximal Causal Learning for Continuous Treatments [56.05592840537398]
We propose a kernel-based doubly robust causal learning estimator for continuous treatments.
We show that its oracle form is a consistent approximation of the influence function.
We then provide a comprehensive convergence analysis in terms of the mean square error.
arXiv Detail & Related papers (2023-09-22T12:18:53Z)
- Nuisance Function Tuning and Sample Splitting for Optimal Doubly Robust Estimation [5.018363990542611]
We consider the problem of how to estimate nuisance functions to obtain optimal rates of convergence for a doubly robust nonparametric functional.
We show that plug-in and first-order bias-corrected estimators can achieve minimax rates of convergence across all Hölder smoothness classes of the nuisance functions.
arXiv Detail & Related papers (2022-12-30T18:17:06Z)
- Neighborhood Adaptive Estimators for Causal Inference under Network Interference [152.4519491244279]
We consider the violation of the classical no-interference assumption, meaning that the treatment of one individual might affect the outcomes of another.
To make interference tractable, we consider a known network that describes how interference may travel.
We study estimators for the average direct treatment effect on the treated in such a setting.
arXiv Detail & Related papers (2022-12-07T14:53:47Z)
- Sequential Decision Making on Unmatched Data using Bayesian Kernel Embeddings [10.75801980090826]
We propose a novel algorithm for maximizing the expectation of a function.
We take into consideration the uncertainty derived from the estimation of both the conditional distribution of the features and the unknown function.
Our algorithm empirically outperforms the current state-of-the-art algorithm in the experiments conducted.
arXiv Detail & Related papers (2022-10-25T01:27:29Z)
- Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
arXiv Detail & Related papers (2022-08-29T16:16:22Z)
- Inference on Strongly Identified Functionals of Weakly Identified Functions [71.42652863687117]
We study a novel condition for the functional to be strongly identified even when the nuisance function is not.
We propose penalized minimax estimators for both the primary and debiasing nuisance functions.
arXiv Detail & Related papers (2022-08-17T13:38:31Z)
- Variance-Aware Off-Policy Evaluation with Linear Function Approximation [85.75516599931632]
We study the off-policy evaluation problem in reinforcement learning with linear function approximation.
We propose an algorithm, VA-OPE, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration.
arXiv Detail & Related papers (2021-06-22T17:58:46Z)
- Multiply Robust Causal Mediation Analysis with Continuous Treatments [12.196869756333797]
We propose an estimator suitable for settings with continuous treatments, inspired by the influence function-based estimator of Tchetgen Tchetgen and Shpitser (2012).
Our proposed approach employs cross-fitting, relaxing the smoothness requirements on the nuisance functions and allowing them to be estimated at slower rates than the target parameter.
arXiv Detail & Related papers (2021-05-19T16:58:57Z)
- Minimax Kernel Machine Learning for a Class of Doubly Robust Functionals [16.768606469968113]
We consider a class of doubly robust moment functions originally introduced in Robins et al. (2008).
We demonstrate that this moment function can be used to construct estimating equations for the nuisance functions.
The convergence rates of the nuisance functions are analyzed using modern techniques from statistical learning theory.
arXiv Detail & Related papers (2021-04-07T05:52:15Z)
- Causal Estimation with Functional Confounders [24.54466899641308]
Causal inference relies on two fundamental assumptions: ignorability and positivity.
We study causal inference when the true confounder value can be expressed as a function of the observed data.
In this setting, ignorability is satisfied, however positivity is violated, and causal inference is impossible in general.
arXiv Detail & Related papers (2021-02-17T02:16:21Z)
- Estimating Structural Target Functions using Machine Learning and Influence Functions [103.47897241856603]
We propose a new framework for statistical machine learning of target functions arising as identifiable functionals from statistical models.
This framework is problem- and model-agnostic and can be used to estimate a broad variety of target parameters of interest in applied statistics.
We put particular focus on so-called coarsening at random/doubly robust problems with partially unobserved information.
arXiv Detail & Related papers (2020-08-14T16:48:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.