Statistical inference using Regularized M-estimation in the reproducing
kernel Hilbert space for handling missing data
- URL: http://arxiv.org/abs/2107.07371v1
- Date: Thu, 15 Jul 2021 14:51:39 GMT
- Title: Statistical inference using Regularized M-estimation in the reproducing
kernel Hilbert space for handling missing data
- Authors: Hengfang Wang and Jae Kwang Kim
- Abstract summary: We first use kernel ridge regression to develop an imputation method for handling item nonresponse.
A nonparametric propensity score estimator using the reproducing kernel Hilbert space is also developed.
The proposed method is applied to analyze the air pollution data measured in Beijing, China.
- Score: 0.76146285961466
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Imputation and propensity score weighting are two popular techniques for
handling missing data. We address these problems using the regularized
M-estimation techniques in the reproducing kernel Hilbert space. Specifically,
we first use kernel ridge regression to develop an imputation method for
handling item nonresponse. While this nonparametric approach is potentially
promising for imputation, its statistical properties have not been
investigated in the literature. Under some conditions on the order of the
tuning parameter, we establish the root-$n$ consistency of the kernel ridge
regression imputation estimator and show that it achieves the lower bound of
the semiparametric asymptotic variance. A nonparametric propensity score estimator
using the reproducing kernel Hilbert space is also developed by a novel
application of the maximum entropy method for the density ratio function
estimation. We show that the resulting propensity score estimator is
asymptotically equivalent to the kernel ridge regression imputation estimator.
Results from a limited simulation study are also presented to confirm our
theory. The proposed method is applied to analyze the air pollution data
measured in Beijing, China.
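A minimal, hypothetical sketch of the imputation step described in the abstract is given below. The Gaussian kernel, the tuning-parameter value lam, and all names are illustrative assumptions, not the authors' implementation; the theory requires lam to shrink at an appropriate rate in n.

```python
import numpy as np

def gaussian_kernel(X1, X2, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X1 and X2."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def krr_imputation_estimator(X, y, observed, lam=1e-2, bandwidth=1.0):
    """Mean estimation under item nonresponse: fit kernel ridge
    regression on the respondents, impute the nonrespondents'
    outcomes, and average observed plus imputed values."""
    Xr, yr = X[observed], y[observed]          # respondents
    Xm = X[~observed]                          # nonrespondents
    n_r = len(yr)
    K = gaussian_kernel(Xr, Xr, bandwidth)
    # ridge-regularized kernel system; lam is the tuning parameter
    # whose order drives the root-n consistency result
    alpha = np.linalg.solve(K + lam * n_r * np.eye(n_r), yr)
    y_imputed = gaussian_kernel(Xm, Xr, bandwidth) @ alpha
    return (yr.sum() + y_imputed.sum()) / len(y)
```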
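The propensity-score side can be illustrated with a crude entropy-balancing analogue of the maximum entropy method: respondent weights are exponentially tilted so their kernel-feature means match the full-sample means. The landmark features below are a stand-in for the full RKHS constraints used in the paper; everything here is an assumption for illustration.

```python
import numpy as np

def entropy_balancing_weights(Phi_resp, phi_target, n_iter=500, lr=0.1):
    """Maximum-entropy weights on respondents whose weighted feature
    mean matches a target mean, computed by gradient descent on the
    convex exponential-tilting dual."""
    lam = np.zeros(Phi_resp.shape[1])
    for _ in range(n_iter):
        logits = Phi_resp @ lam
        logits -= logits.max()                 # numerical stability
        w = np.exp(logits); w /= w.sum()
        grad = Phi_resp.T @ w - phi_target     # dual gradient
        lam -= lr * grad
    return w

# Illustration: Gaussian kernel features at a few landmark points act
# as a crude stand-in for the RKHS constraints.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
resp = rng.random(200) < 0.6                   # response indicators
landmarks = X[:10]
def feat(Z):
    return np.exp(-((Z[:, None, :] - landmarks[None]) ** 2).sum(-1) / 2)
w = entropy_balancing_weights(feat(X[resp]), feat(X).mean(0))
# a weighted mean of respondent outcomes with these w would give the
# propensity-score-weighted estimator
```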
Related papers
- Progression: an extrapolation principle for regression [0.0]
We propose a novel statistical extrapolation principle.
It assumes a simple relationship between predictors and the response at the boundary of the training predictor samples.
Our semi-parametric method, progression, leverages this extrapolation principle and offers guarantees on the approximation error beyond the training data range.
arXiv Detail & Related papers (2024-10-30T17:29:51Z)
- Multivariate root-n-consistent smoothing-parameter-free matching estimators and estimators of inverse density weighted expectations
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
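A hypothetical sketch of a matching estimator in the spirit of this entry, applied to mean estimation with missing outcomes; the k-NN choice and all names are assumptions. Note that no bandwidth or smoothing parameter appears.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_matching_mean(X, y, observed, k=1):
    """Impute each missing outcome by the average outcome of its k
    nearest observed neighbours, then average observed and imputed
    values."""
    tree = cKDTree(X[observed])
    _, idx = tree.query(X[~observed], k=k)
    idx = idx.reshape(len(X[~observed]), -1)   # (m, k) even when k == 1
    y_imputed = y[observed][idx].mean(axis=1)
    return (y[observed].sum() + y_imputed.sum()) / len(y)
```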
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Optimal policy evaluation using kernel-based temporal difference methods
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
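One common regularized kernel-LSTD formulation gives a flavor of such kernel-based value-function estimates; the paper's exact estimator and regularization may differ, and everything below is an illustrative assumption.

```python
import numpy as np

def kernel_lstd(S, R, S_next, gamma=0.9, lam=1e-2, bandwidth=1.0):
    """Regularized kernel least-squares temporal-difference estimate of
    the value function of a discounted Markov reward process from
    one-step transitions (s_i, r_i, s'_i), with V(s) = sum_i a_i k(s_i, s)."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * bandwidth ** 2))
    K, K_next = k(S, S), k(S, S_next)          # k(s_i, s_j) and k(s_i, s'_j)
    n = len(R)
    # Bellman equation V(s_i) - gamma V(s'_i) = r_i, ridge-regularized
    alpha = np.linalg.solve(K - gamma * K_next.T + lam * n * np.eye(n), R)
    return lambda s: k(np.atleast_2d(s), S) @ alpha
```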
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
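A minimal sketch of norm-clipped streaming gradient descent for mean estimation, one simple instance of the general recipe; the clipping threshold and step-size schedule are illustrative assumptions.

```python
import numpy as np

def clipped_sgd_mean(stream, clip=5.0, lr0=1.0):
    """Streaming mean estimation with clipped stochastic gradients.
    For the squared loss the gradient at sample x is (theta - x);
    clipping its norm tames heavy-tailed noise."""
    theta = None
    for t, x in enumerate(stream, start=1):
        x = np.asarray(x, dtype=float)
        if theta is None:
            theta = np.zeros_like(x)
        g = theta - x                          # stochastic gradient
        norm = np.linalg.norm(g)
        if norm > clip:
            g *= clip / norm                   # norm clipping
        theta -= (lr0 / t) * g
    return theta
```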
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Statistical Inference after Kernel Ridge Regression Imputation under item nonresponse [0.76146285961466]
We consider a nonparametric approach to imputation using the kernel ridge regression technique and propose consistent variance estimation.
The proposed variance estimator is based on a linearization approach which employs the entropy method to estimate the density ratio.
arXiv Detail & Related papers (2021-01-29T20:46:33Z)
- Early stopping and polynomial smoothing in regression with reproducing kernels [2.132096006921048]
We study the problem of early stopping for iterative learning algorithms in a reproducing kernel Hilbert space (RKHS).
We present a data-driven rule, based on the so-called minimum discrepancy principle, for performing early stopping without a validation set.
The proposed rule is proved to be minimax-optimal over different types of kernel spaces.
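A sketch of the minimum discrepancy principle for kernel gradient descent, assuming the noise level sigma2 is known; the paper refines this with polynomial smoothing and data-driven noise estimates, so this is only the bare rule.

```python
import numpy as np

def early_stopping_kernel_gd(K, y, sigma2, step=None, max_iter=1000):
    """Gradient descent for kernel least squares, stopped the first
    time the training residual ||y - f_t||_n^2 falls to the noise
    level sigma2.  No validation set is used."""
    n = len(y)
    if step is None:
        step = 1.0 / np.linalg.eigvalsh(K / n).max()   # safe step size
    alpha = np.zeros(n)
    for t in range(max_iter):
        f = K @ alpha
        if ((y - f) ** 2).mean() <= sigma2:            # discrepancy rule
            break
        alpha += (step / n) * (y - f)    # f <- f + (step/n) K (y - f)
    return alpha, t
```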
arXiv Detail & Related papers (2020-07-14T05:27:18Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
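The Tikhonov-regularized Stein gradient estimator is arguably the simplest member of the family this paper unifies; the sketch below shows that baseline, not the iterative-regularization or curl-free-kernel estimators the paper proposes.

```python
import numpy as np

def stein_score_estimator(X, bandwidth=1.0, eta=1e-3):
    """Kernel-based estimate of the score grad log p(x) at the sample
    points, via Tikhonov-regularized nonparametric regression."""
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]                 # x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2 * bandwidth ** 2))
    # sum_j d/dx_j k(x_i, x_j) = sum_j K_ij (x_i - x_j) / h^2
    grad_K = (K[:, :, None] * diff).sum(1) / bandwidth ** 2
    return -np.linalg.solve(K + eta * np.eye(n), grad_K)  # (n, d) scores
```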
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic, exponentially fast decaying error bounds that apply to both the approximated kernel and the approximated posterior.
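Random Fourier features are the Monte Carlo cousin of the deterministic quadrature features used by SLEIPNIR; the sketch below shows the simpler random variant for the Gaussian kernel, with all parameters illustrative.

```python
import numpy as np

def random_fourier_features(X, n_feat=100, bandwidth=1.0, seed=0):
    """Fourier-feature approximation of the Gaussian kernel,
    k(x, x') ~ phi(x) @ phi(x'), using random frequencies drawn
    from the kernel's spectral density."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / bandwidth, size=(d, n_feat))
    b = rng.uniform(0, 2 * np.pi, n_feat)
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)
```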
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
- Maximum likelihood estimation and uncertainty quantification for Gaussian process approximation of deterministic functions [10.319367855067476]
This article provides one of the first theoretical analyses in the context of Gaussian process regression with a noiseless dataset.
We show that the maximum likelihood estimation of the scale parameter alone provides significant adaptation against misspecification of the Gaussian process model.
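For a fixed correlation matrix, the scale parameter has a closed-form maximum likelihood estimate; a minimal sketch under a zero-mean GP assumption (the paper's analysis, not its code):

```python
import numpy as np

def mle_scale(K, y):
    """Closed-form MLE of sigma^2 in y ~ N(0, sigma^2 K), with the
    correlation matrix K held fixed (noiseless observations of a
    deterministic function): sigma2_hat = y' K^{-1} y / n."""
    n = len(y)
    return y @ np.linalg.solve(K, y) / n
```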
arXiv Detail & Related papers (2020-01-29T17:20:21Z)
- Statistical Inference for Model Parameters in Stochastic Gradient Descent [45.29532403359099]
Stochastic gradient descent (SGD) has been widely used in statistical estimation for large-scale data due to its computational and memory efficiency.
We investigate the problem of statistical inference of true model parameters based on SGD when the population loss function is strongly convex and satisfies certain conditions.
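A sketch of Polyak-Ruppert averaged SGD for least squares, the averaged iterate whose asymptotic normality underlies such inference; the paper's variance estimators are not shown, and the step-size schedule is an illustrative assumption.

```python
import numpy as np

def averaged_sgd_linear(X, y, lr0=0.5, decay=0.501):
    """Averaged SGD for least squares.  The running average theta_bar
    is asymptotically normal, which is the basis for confidence
    intervals on the model parameters."""
    n, d = X.shape
    theta = np.zeros(d)
    theta_bar = np.zeros(d)
    for t in range(n):
        g = (X[t] @ theta - y[t]) * X[t]        # stochastic gradient
        theta -= lr0 * (t + 1) ** (-decay) * g  # slowly decaying step
        theta_bar += (theta - theta_bar) / (t + 1)
    return theta_bar
```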
arXiv Detail & Related papers (2016-10-27T07:04:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.