Adaptive Refinement Protocols for Distributed Distribution Estimation under $\ell^p$-Losses
- URL: http://arxiv.org/abs/2410.06884v1
- Date: Wed, 9 Oct 2024 13:46:08 GMT
- Title: Adaptive Refinement Protocols for Distributed Distribution Estimation under $\ell^p$-Losses
- Authors: Deheng Yuan, Tao Guo, Zhongyi Huang
- Abstract summary: Consider the communication-constrained estimation of discrete distributions under $\ell^p$ losses.
We obtain the minimax optimal rates of the problem in most parameter regimes.
- Score: 9.766173684831324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consider the communication-constrained estimation of discrete distributions under $\ell^p$ losses, where each distributed terminal holds multiple independent samples and uses a limited number of bits to describe the samples. We obtain the minimax optimal rates of the problem in most parameter regimes. An elbow effect of the optimal rates at $p=2$ is clearly identified. To show the optimal rates, we first design estimation protocols to achieve them. The key ingredient of these protocols is to introduce adaptive refinement mechanisms, which first generate a rough estimate from partial information and then establish a refined estimate in subsequent steps guided by the rough estimate. The protocols leverage successive refinement, sample compression and thresholding methods to achieve the optimal rates in different parameter regimes. The optimality of the protocols is shown by deriving compatible minimax lower bounds.
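The two-stage idea in the abstract, a rough estimate built from partial information followed by a refinement guided by it, can be illustrated with a small simulation. The sketch below is a hypothetical illustration rather than the paper's protocol: the function names, the one-symbol first-round reports, the even split of terminals, and the top-coordinate refinement rule are all assumptions made for this example, while the actual protocols combine successive refinement, sample compression and thresholding to reach the minimax rates.

```python
import numpy as np

rng = np.random.default_rng(0)


def simulate_terminals(p, num_terminals, samples_per_terminal):
    """Each terminal holds i.i.d. samples from the discrete distribution p."""
    k = len(p)
    return [rng.choice(k, size=samples_per_terminal, p=p) for _ in range(num_terminals)]


def rough_estimate(first_group, k):
    """Round 1: each terminal in the first group reports a single sampled symbol
    (about log2(k) bits); the server pools the reports into a coarse histogram."""
    reports = [samples[0] for samples in first_group]
    counts = np.bincount(reports, minlength=k)
    return counts / max(len(reports), 1)


def refined_estimate(second_group, p_rough, k, keep_fraction=0.5):
    """Round 2: guided by the rough estimate, terminals spend their bits only on
    the coordinates with the largest rough mass; the remaining coordinates keep
    their rough values (a crude stand-in for thresholding/compression)."""
    keep = np.argsort(p_rough)[::-1][: max(1, int(keep_fraction * k))]
    rest = np.setdiff1d(np.arange(k), keep)
    counts = np.zeros(k)
    for samples in second_group:
        local = np.bincount(samples, minlength=k)
        counts[keep] += local[keep]  # only the kept coordinates are described
    p_hat = p_rough.copy()
    kept_mass = 1.0 - p_rough[rest].sum()
    if counts[keep].sum() > 0:
        p_hat[keep] = kept_mass * counts[keep] / counts[keep].sum()
    return p_hat / p_hat.sum()


# Tiny demo: k = 8 symbols, 200 terminals, 20 samples each.
k = 8
p_true = rng.dirichlet(np.ones(k))
terminals = simulate_terminals(p_true, num_terminals=200, samples_per_terminal=20)
half = len(terminals) // 2
p_rough = rough_estimate(terminals[:half], k)
p_hat = refined_estimate(terminals[half:], p_rough, k)
print("ell^2 error (rough):  ", np.linalg.norm(p_rough - p_true))
print("ell^2 error (refined):", np.linalg.norm(p_hat - p_true))
```

In this toy setting the refined estimate typically has a smaller $\ell^2$ error than the rough one, which is the qualitative behavior the adaptive refinement mechanism exploits: the first round only needs to be accurate enough to tell the second round where to spend its communication budget.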
Related papers
- Statistical evaluation and optimization of entanglement purification protocols [0.0]
We demonstrate that pioneering protocols are unable to improve the estimated initial average concurrence of almost uniformly sampled density matrices.
We also develop a more efficient protocol and investigate it numerically together with a recent proposal based on an entangling rank-$2$ projector.
arXiv Detail & Related papers (2024-02-19T16:58:03Z) - Nearest Neighbor Sampling for Covariate Shift Adaptation [7.940293148084844]
We propose a new covariate shift adaptation method which avoids estimating the weights.
The basic idea is to directly work on unlabeled target data, labeled according to the $k$-nearest neighbors in the source dataset.
Our experiments show that it achieves a drastic reduction in running time with remarkable accuracy.
arXiv Detail & Related papers (2023-12-15T17:28:09Z) - Optimal Budgeted Rejection Sampling for Generative Models [54.050498411883495]
Rejection sampling methods have been proposed to improve the performance of discriminator-based generative models.
We first propose an Optimal Budgeted Rejection Sampling scheme that is provably optimal.
Second, we propose an end-to-end method that incorporates the sampling scheme into the training procedure to further enhance the model's overall performance.
arXiv Detail & Related papers (2023-11-01T11:52:41Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer steps for sampling.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Quantization for decentralized learning under subspace constraints [61.59416703323886]
We consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints.
We propose and study an adaptive decentralized strategy where the agents employ differential randomized quantizers to compress their estimates.
The analysis shows that, under some general conditions on the quantization noise, the strategy is stable both in terms of mean-square error and average bit rate.
arXiv Detail & Related papers (2022-09-16T09:38:38Z) - Minimization of the estimation error for entanglement distribution networks with arbitrary noise [1.3198689566654105]
We consider a setup in which nodes randomly sample a subset of the entangled qubit pairs to measure and then estimate the average fidelity of the unsampled pairs conditioned on the measurement outcome.
The proposed estimation protocol achieves the lowest mean squared estimation error in a difficult scenario with arbitrary noise and no prior information.
arXiv Detail & Related papers (2022-03-18T13:05:36Z) - Correlated quantization for distributed mean estimation and optimization [21.17434087570296]
We propose a correlated quantization protocol whose error guarantee depends on the deviation of data points instead of their absolute range.
We show that applying the proposed protocol as sub-routine in distributed optimization algorithms leads to better convergence rates.
arXiv Detail & Related papers (2022-03-09T18:14:55Z) - A Unified Framework for Multi-distribution Density Ratio Estimation [101.67420298343512]
Binary density ratio estimation (DRE) provides the foundation for many state-of-the-art machine learning algorithms.
We develop a general framework from the perspective of Bregman divergence minimization.
We show that our framework leads to methods that strictly generalize their counterparts in binary DRE.
arXiv Detail & Related papers (2021-12-07T01:23:20Z) - Distributed Nonparametric Function Estimation: Optimal Rate of Convergence and Cost of Adaptation [1.332560004325655]
Distributed minimax estimation and distributed adaptive estimation under communication constraints are studied.
We quantify the exact communication cost for adaptation and construct an optimally adaptive procedure for distributed estimation.
arXiv Detail & Related papers (2021-07-01T02:16:16Z) - Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z) - Optimal Off-Policy Evaluation from Multiple Logging Policies [77.62012545592233]
We study off-policy evaluation from multiple logging policies, each generating a dataset of fixed size, i.e., stratified sampling.
We find the OPE estimator for multiple loggers with minimum variance for any instance, i.e., the efficient one.
arXiv Detail & Related papers (2020-10-21T13:43:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.