Distributed Nonparametric Function Estimation: Optimal Rate of
Convergence and Cost of Adaptation
- URL: http://arxiv.org/abs/2107.00179v1
- Date: Thu, 1 Jul 2021 02:16:16 GMT
- Title: Distributed Nonparametric Function Estimation: Optimal Rate of
Convergence and Cost of Adaptation
- Authors: T. Tony Cai and Hongji Wei
- Abstract summary: Distributed minimax estimation and distributed adaptive estimation under communication constraints are studied.
We quantify the exact communication cost for adaptation and construct an optimally adaptive procedure for distributed estimation.
- Score: 1.332560004325655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Distributed minimax estimation and distributed adaptive estimation under
communication constraints for the Gaussian sequence model and the white noise model are
studied. The minimax rate of convergence for distributed estimation over a
given Besov class, which serves as a benchmark for the cost of adaptation, is
established. We then quantify the exact communication cost for adaptation and
construct an optimally adaptive procedure for distributed estimation over a
range of Besov classes. The results demonstrate significant differences between
nonparametric function estimation in the distributed setting and the
conventional centralized setting. For global estimation, adaptation in general
cannot be achieved for free in the distributed setting. The new technical tools
developed to obtain the exact characterization of the cost of adaptation may be of
independent interest.
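To make the distributed setting concrete, the sketch below (an illustration only, not the authors' procedure) simulates local machines that each observe the same coefficient vector in Gaussian noise, quantize their observations to a fixed number of bits per coefficient, and send them to a central server that averages the messages; the machine count, bit budget, clipping range, and uniform quantizer are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, bits = 128, 10, 4                      # coefficients, machines, bits per coefficient (assumed)
    theta = rng.normal(size=d) / np.sqrt(1 + np.arange(d))   # decaying "smooth" coefficient sequence

    def quantize(x, bits, lo=-3.0, hi=3.0):
        # Uniform quantizer on [lo, hi]; transmitting each value costs `bits` bits.
        levels = 2 ** bits
        idx = np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * (levels - 1))
        return lo + idx * (hi - lo) / (levels - 1)

    # Each machine observes theta in Gaussian noise, quantizes, and transmits; the server averages.
    messages = [quantize(theta + rng.normal(size=d), bits) for _ in range(m)]
    theta_hat = np.mean(messages, axis=0)
    print("average squared error:", np.mean((theta_hat - theta) ** 2))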
Related papers
- Adaptive Refinement Protocols for Distributed Distribution Estimation under $\ell^p$-Losses [9.766173684831324]
Consider the communication-constrained estimation of discrete distributions under $\ell^p$ losses.
We obtain the minimax optimal rates of the problem in most parameter regimes.
arXiv Detail & Related papers (2024-10-09T13:46:08Z)
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
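As background on what a matching estimator looks like, here is a plain one-nearest-neighbor matching estimate of an average treatment effect (an illustration of the basic construction only; the modified, root-n-consistent estimators of the paper are not reproduced, and all names are placeholders).

    import numpy as np

    def nn_matching_ate(x_treat, y_treat, x_ctrl, y_ctrl):
        # Plain 1-nearest-neighbor matching: impute each unit's missing counterfactual
        # outcome with the outcome of its closest neighbor in the other group.
        def impute(x_from, x_to, y_to):
            dists = np.linalg.norm(x_from[:, None, :] - x_to[None, :, :], axis=2)
            return y_to[np.argmin(dists, axis=1)]
        effects_treated = y_treat - impute(x_treat, x_ctrl, y_ctrl)
        effects_control = impute(x_ctrl, x_treat, y_treat) - y_ctrl
        return np.mean(np.concatenate([effects_treated, effects_control]))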
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
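The summary points to baseline corrections of inverse-propensity-scored estimates; the sketch below shows only the generic baseline-corrected IPS form (the paper's variance-optimal baseline is not reproduced, and the constant baseline is an assumption).

    import numpy as np

    def ips_with_baseline(rewards, target_probs, logging_probs, baseline=0.0):
        # Importance weights re-weight logged rewards toward the target policy.
        # Subtracting a baseline and adding it back preserves unbiasedness
        # (the weights have expectation one) while changing the variance.
        w = target_probs / logging_probs
        return np.mean(w * (rewards - baseline)) + baseline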
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- Nearest Neighbor Sampling for Covariate Shift Adaptation [7.940293148084844]
We propose a new covariate shift adaptation method which avoids estimating the weights.
The basic idea is to directly work on unlabeled target data, labeled according to the $k$-nearest neighbors in the source dataset.
Our experiments show that it achieves a drastic reduction in running time with remarkable accuracy.
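A minimal sketch of the labeling step described above: each unlabeled target point receives the majority label of its k nearest neighbors in the source data, and a model can then be trained on the relabeled target sample (the distance metric, k, and function names are assumptions).

    import numpy as np

    def knn_pseudo_label(x_source, y_source, x_target, k=5):
        # For every target point, take a majority vote over its k nearest source points.
        dists = np.linalg.norm(x_target[:, None, :] - x_source[None, :, :], axis=2)
        nn_idx = np.argsort(dists, axis=1)[:, :k]
        nn_labels = y_source[nn_idx]                     # shape (n_target, k), integer class labels
        return np.array([np.bincount(row).argmax() for row in nn_labels])

    # A downstream classifier trained on (x_target, knn_pseudo_label(...)) then sees the
    # target covariate distribution directly, avoiding explicit weight estimation.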
arXiv Detail & Related papers (2023-12-15T17:28:09Z)
- Adaptive importance sampling for heavy-tailed distributions via $\alpha$-divergence minimization [2.879807093604632]
We propose an AIS algorithm that approximates the target by Student-t proposal distributions.
We adapt location and scale parameters by matching the escort moments of the target and the proposal.
These updates minimize the $\alpha$-divergence between the target and the proposal, thereby connecting with variational inference.
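A minimal adaptive importance sampling loop with a multivariate Student-t proposal; here the location and scale are adapted with plain importance-weighted moments rather than the escort moments of the paper, and the target log-density interface, degrees of freedom, and iteration counts are assumptions.

    import numpy as np
    from scipy import stats

    def ais_student_t(log_target, dim, nu=3.0, n=2000, iters=20, seed=0):
        # log_target must accept an (n, dim) array and return n log-density values (assumed).
        rng = np.random.default_rng(seed)
        mu, scale = np.zeros(dim), np.eye(dim)
        for _ in range(iters):
            prop = stats.multivariate_t(loc=mu, shape=scale, df=nu, seed=rng)
            x = prop.rvs(size=n)
            logw = log_target(x) - prop.logpdf(x)
            w = np.exp(logw - logw.max())
            w /= w.sum()
            mu = w @ x                                            # importance-weighted mean
            centered = x - mu
            scale = centered.T @ (w[:, None] * centered) + 1e-6 * np.eye(dim)
        return mu, scale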
arXiv Detail & Related papers (2023-10-25T14:07:08Z)
- Statistical Limits of Adaptive Linear Models: Low-Dimensional Estimation and Inference [5.924780594614676]
We show that the error of estimating a single coordinate can be enlarged by a multiple of $\sqrt{d}$ when data are allowed to be arbitrarily adaptive.
We propose a novel estimator for single-coordinate inference via solving a Two-stage Adaptive Linear Estimating equation (TALE).
arXiv Detail & Related papers (2023-10-01T00:45:09Z)
- Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback [31.115084475673793]
The ensemble method is a promising way to mitigate the overestimation issue in Q-learning.
It is known that the estimation bias hinges heavily on the ensemble size.
We devise an ensemble method with two key steps: (a) approximation error characterization which serves as the feedback for flexibly controlling the ensemble size, and (b) ensemble size adaptation tailored towards minimizing the estimation bias.
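For context, a minimal tabular update in which the bootstrap target averages the ensemble's Q-tables; the error-feedback mechanism that adapts the ensemble size in the paper is not reproduced, and the fixed ensemble size here is an assumption.

    import numpy as np

    def ensemble_q_update(q_ensemble, s, a, r, s_next, alpha=0.1, gamma=0.99, rng=None):
        # q_ensemble: array of shape (K, S, A). Averaging the members before the max
        # reduces the overestimation bias of the bootstrap target as K grows.
        rng = np.random.default_rng() if rng is None else rng
        target = r + gamma * q_ensemble.mean(axis=0)[s_next].max()
        k = rng.integers(len(q_ensemble))                # update one randomly chosen member
        q_ensemble[k, s, a] += alpha * (target - q_ensemble[k, s, a])
        return q_ensemble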
arXiv Detail & Related papers (2023-06-20T22:06:14Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
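One simple way to use such an additional per-example signal is to normalize the residual by it in split conformal prediction; the sketch below is generic (the auxiliary difficulty values stand in for the predicted self-supervised error, and all names are placeholders).

    import numpy as np

    def normalized_split_conformal(resid_cal, aux_cal, aux_test, alpha=0.1, eps=1e-8):
        # resid_cal: |y - f(x)| on a held-out calibration set
        # aux_cal, aux_test: auxiliary difficulty estimates (e.g. predicted self-supervised error)
        scores = resid_cal / (aux_cal + eps)             # normalized nonconformity scores
        n = len(scores)
        q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        q = np.quantile(scores, q_level, method="higher")
        return q * (aux_test + eps)                      # per-test-point interval half-widths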
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Quantization for decentralized learning under subspace constraints [61.59416703323886]
We consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints.
We propose and study an adaptive decentralized strategy where the agents employ differential randomized quantizers to compress their estimates.
The analysis shows that, under some general conditions on the quantization noise, the strategy is stable both in terms of mean-square error and average bit rate.
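A minimal sketch of the compression idea: each agent quantizes only the difference between its new estimate and the last transmitted value with a randomized (dithered) uniform quantizer, so the communicated increment carries zero-mean quantization noise; the step size and variable names are assumptions.

    import numpy as np

    def dithered_quantize(diff, delta=0.05, rng=None):
        # Adding uniform dither over one quantization bin before rounding makes the
        # quantization error zero-mean conditionally on the input.
        rng = np.random.default_rng() if rng is None else rng
        dither = rng.uniform(-delta / 2, delta / 2, size=np.shape(diff))
        return delta * np.round((np.asarray(diff) + dither) / delta)

    # Differential transmission (sketch): send q = dithered_quantize(w_new - w_ref)
    # and let both sender and receivers update the shared reference w_ref += q.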
arXiv Detail & Related papers (2022-09-16T09:38:38Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
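A hedged sketch of the generic VI-plus-IS pattern referenced above: a fitted approximation q serves as the importance sampling proposal and self-normalized weights de-bias expectations under the target; the paper's forward-KL refinement of q is not reproduced, and all callables are placeholders.

    import numpy as np

    def snis_expectation(log_target, q_sample, q_logpdf, f, n=5000):
        # Self-normalized importance sampling estimate of E_target[f(x)],
        # with the (e.g. variationally fitted) approximation q as the proposal.
        x = q_sample(n)
        logw = log_target(x) - q_logpdf(x)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        fx = np.asarray(f(x))
        return np.tensordot(w, fx, axes=(0, 0))          # weighted average over samples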
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective offline estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.