Asymptotics of the Empirical Bootstrap Method Beyond Asymptotic
Normality
- URL: http://arxiv.org/abs/2011.11248v1
- Date: Mon, 23 Nov 2020 07:14:30 GMT
- Title: Asymptotics of the Empirical Bootstrap Method Beyond Asymptotic
Normality
- Authors: Morgane Austern, Vasilis Syrgkanis
- Abstract summary: Under stability conditions, we establish the limiting distribution of the empirical bootstrap estimator and derive tight conditions for it to be asymptotically consistent.
We propose three alternative ways to use the bootstrap method to build confidence intervals with coverage guarantees.
- Score: 25.402400996745058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the most commonly used methods for forming confidence intervals for
statistical inference is the empirical bootstrap, which is especially expedient
when the limiting distribution of the estimator is unknown. However, despite
its ubiquitous role, its theoretical properties are still not well understood
for estimators that are not asymptotically normal. In this paper, under stability
conditions, we establish the limiting distribution of the empirical bootstrap
estimator, derive tight conditions for it to be asymptotically consistent, and
quantify the speed of convergence. Moreover, we propose three alternative ways
to use the bootstrap method to build confidence intervals with coverage
guarantees. Finally, we illustrate the generality and tightness of our results
by a series of examples, including uniform confidence bands, two-sample kernel
tests, min-max stochastic programs, and the empirical risk of stacked estimators.
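
To make the object of study concrete, the following is a minimal sketch of the plain empirical (nonparametric) bootstrap percentile interval whose consistency the paper analyzes. It is not one of the paper's three proposed constructions, and the function names and defaults are illustrative.

```python
import numpy as np

def bootstrap_percentile_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Empirical bootstrap percentile confidence interval (illustrative sketch).

    Draws n_boot resamples of the data with replacement, recomputes the
    statistic on each, and returns the (alpha/2, 1 - alpha/2) empirical
    quantiles of the bootstrap distribution.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample indices with replacement
        boot_stats[b] = statistic(data[idx])  # recompute the estimator on the resample
    return tuple(np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2]))

# Usage: a 95% interval for the sample median of skewed data. The median is a
# simple estimator for which bootstrap behavior can already be delicate.
rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=200)
print(bootstrap_percentile_ci(sample, np.median))
```

When the estimator is not asymptotically normal, the coverage of this naive interval can fail, which is what motivates the alternative bootstrap-based intervals with coverage guarantees proposed in the abstract.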
Related papers
- Statistical Inference in Tensor Completion: Optimal Uncertainty Quantification and Statistical-to-Computational Gaps [7.174572371800217]
This paper presents a simple yet efficient method for statistical inference of tensor linear forms using incomplete and noisy observations.
It is suitable for various statistical inference tasks, including constructing confidence intervals, inference under heteroskedastic and sub-exponential noise, and simultaneous testing.
arXiv Detail & Related papers (2024-10-15T03:09:52Z)
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- Model Free Prediction with Uncertainty Assessment [7.524024486998338]
We propose a novel framework that transforms the deep estimation paradigm into a platform conducive to conditional mean estimation.
We develop an end-to-end convergence rate for the conditional diffusion model and establish the asymptotic normality of the generated samples.
Through numerical experiments, we empirically validate the efficacy of our proposed methodology.
arXiv Detail & Related papers (2024-05-21T11:19:50Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Score Matching-based Pseudolikelihood Estimation of Neural Marked Spatio-Temporal Point Process with Uncertainty Quantification [59.81904428056924]
We introduce SMASH: a Score MAtching estimator for learning marked spatio-temporal point processes (STPPs) with uncertainty quantification.
Specifically, our framework adopts a normalization-free objective by estimating the pseudolikelihood of marked STPPs through score matching.
The superior performance of our proposed framework is demonstrated through extensive experiments in both event prediction and uncertainty quantification.
arXiv Detail & Related papers (2023-10-25T02:37:51Z)
- Estimation Beyond Data Reweighting: Kernel Method of Moments [9.845144212844662]
We provide an empirical likelihood estimator based on maximum mean discrepancy, which we term the kernel method of moments (KMM).
We show that our method achieves competitive performance on several conditional moment restriction tasks.
arXiv Detail & Related papers (2023-05-18T11:52:43Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical-computational trade-offs of different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- Time-uniform central limit theory and asymptotic confidence sequences [34.00292366598841]
Confidence sequences (CSs) provide valid inference at arbitrary stopping times and incur no penalties for "peeking" at the data.
CSs are nonasymptotic, enjoying finite-sample guarantees but not the aforementioned broad applicability of confidence intervals.
Asymptotic CSs forgo nonasymptotic validity for CLT-like versatility and (asymptotic) time-uniform guarantees.
arXiv Detail & Related papers (2021-03-11T05:45:35Z)
- CoinDICE: Off-Policy Confidence Interval Estimation [107.86876722777535]
We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning.
We show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
arXiv Detail & Related papers (2020-10-22T12:39:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.