Kernel Debiased Plug-in Estimation: Simultaneous, Automated Debiasing without Influence Functions for Many Target Parameters
- URL: http://arxiv.org/abs/2306.08598v5
- Date: Sun, 2 Jun 2024 22:34:38 GMT
- Title: Kernel Debiased Plug-in Estimation: Simultaneous, Automated Debiasing without Influence Functions for Many Target Parameters
- Authors: Brian Cho, Yaroslav Mukhin, Kyra Gan, Ivana Malenica
- Abstract summary: We propose a novel method named \emph{kernel debiased plug-in estimation} (KDPE).
We show that KDPE simultaneously debiases \emph{all} pathwise differentiable target parameters that satisfy our regularity conditions.
We numerically illustrate the use of KDPE and validate our theoretical results.
- Score: 1.5999407512883512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When estimating target parameters in nonparametric models with nuisance parameters, substituting the unknown nuisances with nonparametric estimators can introduce ``plug-in bias.'' Traditional methods addressing this suboptimal bias-variance trade-off rely on the \emph{influence function} (IF) of the target parameter. When estimating multiple target parameters, these methods require debiasing the nuisance parameter multiple times using the corresponding IFs, which poses analytical and computational challenges. In this work, we leverage the \emph{targeted maximum likelihood estimation} (TMLE) framework to propose a novel method named \emph{kernel debiased plug-in estimation} (KDPE). KDPE refines an initial estimate through regularized likelihood maximization steps, employing a nonparametric model based on \emph{reproducing kernel Hilbert spaces}. We show that KDPE: (i) simultaneously debiases \emph{all} pathwise differentiable target parameters that satisfy our regularity conditions, (ii) does not require the IF for implementation, and (iii) remains computationally tractable. We numerically illustrate the use of KDPE and validate our theoretical results.
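To make the method concrete, here is a self-contained one-dimensional sketch of a KDPE-style refinement step as described in the abstract: an initial density estimate is exponentially tilted along kernel sections $k(X_i, \cdot)$, the tilt coefficients are chosen by maximizing an RKHS-norm-penalized log-likelihood, and every smooth plug-in functional is then recomputed from the single updated density. This is one reading of the abstract, not the authors' implementation; the Gaussian kernel, bandwidth, penalty weight, and deliberately misspecified initial estimate are all illustrative assumptions.

```python
# Illustrative KDPE-style update (a sketch, not the authors' code): tilt an
# initial density estimate along RKHS directions k(X_i, .) by maximizing a
# penalized log-likelihood, then read off plug-in functionals.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
X = rng.normal(size=200)                # observed sample, truth is N(0, 1)
grid = np.linspace(-6.0, 6.0, 1201)     # numerical integration grid
dx = grid[1] - grid[0]

# Deliberately misspecified initial density estimate (too dispersed).
p0_grid = norm.pdf(grid, scale=2.0)
p0_data = norm.pdf(X, scale=2.0)

def rbf(a, b, h=0.5):                   # Gaussian kernel, bandwidth assumed
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * h ** 2))

K_grid = rbf(grid, X)                   # kernel sections k(., X_i) on the grid
K_data = rbf(X, X)

def neg_penalized_loglik(eps, lam=1e-3):
    # Exponential tilt of p0 along the kernel sections, renormalized numerically.
    log_unnorm = np.log(p0_grid) + K_grid @ eps
    log_Z = np.log(np.exp(log_unnorm).sum() * dx)
    loglik = np.mean(np.log(p0_data) + K_data @ eps) - log_Z
    return -loglik + lam * eps @ (K_data @ eps)   # RKHS-norm penalty of the tilt

res = minimize(neg_penalized_loglik, np.zeros(X.size),
               method="L-BFGS-B", options={"maxiter": 50})
log_unnorm = np.log(p0_grid) + K_grid @ res.x
p_hat = np.exp(log_unnorm - np.log(np.exp(log_unnorm).sum() * dx))

# One updated density serves many plug-in targets at once, e.g.:
mean_hat = (grid * p_hat).sum() * dx
var_hat = ((grid - mean_hat) ** 2 * p_hat).sum() * dx
print("plug-in mean:", mean_hat, " plug-in variance:", var_hat)
```

The point mirrored from the abstract: a single likelihood-based update of the nuisance is shared by all downstream target parameters, with no per-parameter influence-function derivation.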
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
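For orientation, below is the classical one-nearest-neighbor matching estimator of the average treatment effect that this line of work modifies; the paper's $\sqrt{n}$-consistent correction is not reproduced here, and the simulated data, confounding mechanism, and true effect are illustrative assumptions.

```python
# Classical 1-NN matching estimator of the ATE (the baseline the paper modifies).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def ate_1nn_matching(X, T, Y):
    treated, control = T == 1, T == 0
    nn_t = NearestNeighbors(n_neighbors=1).fit(X[treated])
    nn_c = NearestNeighbors(n_neighbors=1).fit(X[control])
    y1, y0 = Y.astype(float).copy(), Y.astype(float).copy()
    # Impute each unit's missing potential outcome from its nearest match.
    y1[control] = Y[treated][nn_t.kneighbors(X[control], return_distance=False)[:, 0]]
    y0[treated] = Y[control][nn_c.kneighbors(X[treated], return_distance=False)[:, 0]]
    return float(np.mean(y1 - y0))

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
T = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(int)
Y = 1.0 * T + X[:, 0] + rng.normal(size=n)     # true ATE = 1
print("1-NN matching ATE estimate:", ate_1nn_matching(X, T, Y))
```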
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of the optimizers, parameterizations, learning rates, and model sizes studied.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
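For background on the quantity being bounded: the linearized Laplace approximation treats the network as a Bayesian linear model in its Jacobian features, whose log marginal likelihood has a standard closed form. The sketch below evaluates that exact objective with random features standing in for a real Jacobian/NTK matrix; the prior precision and noise variance are illustrative assumptions, and the paper's contribution (the lower bounds) is not reproduced.

```python
# Closed-form log marginal likelihood of the linearized (Bayesian linear) model.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, d = 50, 10
J = rng.normal(size=(n, d))      # stand-in for the Jacobian features at the MAP
y = rng.normal(size=n)           # regression targets
alpha, sigma2 = 1.0, 0.1         # prior precision and observation noise (assumed)

# Marginally, y ~ N(0, sigma2 * I + J J^T / alpha).
cov = sigma2 * np.eye(n) + (J @ J.T) / alpha
log_ml = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)
print("linearized-model log marginal likelihood:", log_ml)
```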
arXiv Detail & Related papers (2023-06-06T19:02:57Z)
- Efficient Sensitivity Analysis for Parametric Robust Markov Chains [23.870902923521335]
We provide a novel method for sensitivity analysis of robust Markov chains.
We measure sensitivity in terms of partial derivatives with respect to the uncertain transition probabilities.
We embed the results within an iterative learning scheme that profits from having access to a dedicated sensitivity analysis.
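As a minimal stand-in for what such a sensitivity analysis computes, the sketch below differentiates the reachability probability of a tiny Markov chain with respect to one uncertain transition probability, using central finite differences. The chain and its numbers are invented for illustration; the paper computes these partial derivatives exactly and far more efficiently.

```python
# Finite-difference sensitivity of a reachability probability w.r.t. an
# uncertain transition probability p (toy stand-in for the paper's method).
import numpy as np

def reach_prob(p):
    # Transient states {s0, s1}; goal and fail are absorbing.
    Q = np.array([[0.0, p],        # s0 -> s1 w.p. p (s0 fails w.p. 1 - p)
                  [0.3, 0.0]])     # s1 -> s0 w.p. 0.3
    b = np.array([0.0, 0.7])       # s1 -> goal w.p. 0.7
    x = np.linalg.solve(np.eye(2) - Q, b)
    return x[0]                    # P(reach goal | start in s0)

p, h = 0.6, 1e-6
dp = (reach_prob(p + h) - reach_prob(p - h)) / (2.0 * h)
print("reach prob:", reach_prob(p), " sensitivity d/dp:", dp)
```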
arXiv Detail & Related papers (2023-05-01T08:23:55Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediate distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
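A minimal numpy AIS run may help fix ideas: it estimates a known one-dimensional normalizing constant along a geometric bridging schedule, which is exactly the object the paper makes parametric and optimizes. The schedule length, proposal scale, and Gaussian start/target are illustrative choices, not the paper's setup.

```python
# Vanilla AIS with a geometric (linear-beta) bridging schedule.
import numpy as np

rng = np.random.default_rng(0)
log_q0 = lambda x: -0.5 * (x / 3.0) ** 2 - np.log(3.0 * np.sqrt(2 * np.pi))  # normalized start
log_f1 = lambda x: -0.5 * (x - 2.0) ** 2   # unnormalized target, true log Z = 0.5*log(2*pi)

betas = np.linspace(0.0, 1.0, 50)          # the schedule the paper would optimize
n = 2000
x = rng.normal(0.0, 3.0, n)                # exact draws from q0
logw = np.zeros(n)
for b_prev, b in zip(betas[:-1], betas[1:]):
    logw += (b - b_prev) * (log_f1(x) - log_q0(x))       # AIS incremental weights
    log_pi = lambda z, b=b: (1 - b) * log_q0(z) + b * log_f1(z)
    prop = x + rng.normal(0.0, 0.5, n)                   # one Metropolis step per level
    accept = np.log(rng.uniform(size=n)) < log_pi(prop) - log_pi(x)
    x = np.where(accept, prop, x)

log_Z_hat = np.logaddexp.reduce(logw) - np.log(n)
print("AIS log Z:", log_Z_hat, " true:", 0.5 * np.log(2 * np.pi))
```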
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Regularized Nonlinear Regression for Simultaneously Selecting and Estimating Key Model Parameters [1.6122433144430324]
In system identification, estimating parameters of a model using limited observations results in poor identifiability.
We propose a new method to simultaneously select and estimate sensitive parameters as key model parameters and fix the remaining parameters to a set of typical values.
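One simple way to realize this select-and-fix behavior, sketched here under illustrative assumptions rather than the paper's exact formulation, is an $\ell_1$ penalty toward the typical values: insensitive parameters are shrunk onto their typical values, while sensitive (key) parameters move to fit the data.

```python
# Select-and-estimate sketch: l1-penalize deviations from "typical" values so
# only sensitive parameters leave them. Model and values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 40)
model = lambda th: th[0] * np.exp(-th[1] * t) + th[2]

y = model([2.0, 1.0, 0.5]) + 0.05 * rng.normal(size=t.size)  # truth: (2, 1, 0.5)
theta_typ = np.array([2.0, 0.5, 0.5])                        # typical values; b is off
lam = 0.05

obj = lambda th: np.mean((y - model(th)) ** 2) + lam * np.abs(th - theta_typ).sum()
theta_hat = minimize(obj, theta_typ, method="Powell").x      # Powell copes with the kink
print(np.round(theta_hat, 3))   # a, c stay near typical values; b is re-estimated
```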
arXiv Detail & Related papers (2021-04-23T06:17:57Z)
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
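The pivotal property is easiest to see from the objective: the square-root Lasso penalizes the unsquared residual norm, so the first-order condition is scale-free in the noise and $\lambda$ can be set without knowing $\sigma$. Below is a crude generic-optimizer sketch of that objective; a dedicated convex solver would be used in practice, and the data and $\lambda$ constant are illustrative.

```python
# Square-root Lasso objective: ||y - X b||_2 / sqrt(n) + lam * ||b||_1,
# with lam chosen without reference to the noise level (pivotal).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.5 * rng.normal(size=n)       # sigma = 0.5 never enters lam

lam = 1.1 * np.sqrt(2.0 * np.log(p) / n)      # pivotal tuning, independent of sigma

obj = lambda b: np.linalg.norm(y - X @ b) / np.sqrt(n) + lam * np.abs(b).sum()
b_hat = minimize(obj, np.zeros(p), method="Powell").x
print("leading coefficients:", np.round(b_hat, 2)[:5])
```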
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
- Orthogonal Statistical Learning [49.55515683387805]
We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk depends on an unknown nuisance parameter.
We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order.
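For reference, since the summary only names the condition: Neyman orthogonality requires the cross-derivative of the population risk in the nuisance direction to vanish at the truth, which is why first-order nuisance error moves the excess risk only at second order.

```latex
% Neyman orthogonality of the population risk L(\theta, g) at (\theta_0, g_0):
\left.\frac{\partial}{\partial t}\,
  \nabla_{\theta} L\bigl(\theta_0,\; g_0 + t\,(g - g_0)\bigr)\right|_{t=0} = 0
\quad \text{for all admissible } g,
% so a nuisance error of size \|\hat g - g_0\| perturbs the excess risk only
% at order \|\hat g - g_0\|^{2}.
```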
arXiv Detail & Related papers (2019-01-25T02:21:24Z)
- Double/Debiased Machine Learning for Treatment and Causal Parameters [5.405360145866329]
We show how to remove the regularization bias by solving auxiliary prediction problems via ML tools.
The resulting method could be called a "double ML" method because it relies on estimating primary and auxiliary predictive models.
This allows us to use a very broad set of ML predictive methods in solving the auxiliary and main prediction problems.
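The summary above maps directly onto the familiar partialling-out recipe; here is a minimal cross-fitted sketch in the partially linear model. The simulated data, random-forest nuisance learners, and true effect $\theta = 0.5$ are illustrative choices.

```python
# Double/debiased ML via cross-fitted partialling-out in Y = theta*D + g(X) + e.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))
D = np.sin(X[:, 0]) + rng.normal(size=n)            # confounded treatment
Y = 0.5 * D + np.cos(X[:, 1]) + rng.normal(size=n)  # true theta = 0.5

# Auxiliary prediction problems (the "double" in double ML), cross-fitted.
m_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, D, cv=5)
l_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, Y, cv=5)

# Residual-on-residual regression removes the regularization bias.
D_res, Y_res = D - m_hat, Y - l_hat
theta_hat = (D_res @ Y_res) / (D_res @ D_res)
print("theta_hat:", theta_hat)
```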
arXiv Detail & Related papers (2016-07-30T01:58:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.