Correlated quantization for distributed mean estimation and optimization
- URL: http://arxiv.org/abs/2203.04925v1
- Date: Wed, 9 Mar 2022 18:14:55 GMT
- Title: Correlated quantization for distributed mean estimation and optimization
- Authors: Ananda Theertha Suresh, Ziteng Sun, Jae Hun Ro, Felix Yu
- Abstract summary: We propose a correlated quantization protocol whose error guarantee depends on the deviation of data points instead of their absolute range.
We show that applying the proposed protocol as a subroutine in distributed optimization algorithms leads to better convergence rates.
- Score: 21.17434087570296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of distributed mean estimation and optimization under
communication constraints. We propose a correlated quantization protocol whose
error guarantee depends on the deviation of data points instead of their
absolute range. The design does not require any prior knowledge of the
concentration properties of the dataset, which previous works needed in order
to obtain such a dependence. We show that applying the proposed protocol as a
subroutine in distributed optimization algorithms leads to better convergence
rates. We
also prove the optimality of our protocol under mild assumptions. Experimental
results show that our proposed algorithm outperforms existing mean estimation
protocols on a diverse set of tasks.
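To make the contrast concrete, here is a minimal Python sketch of the core idea for the one-bit case, under simplifying assumptions of our own (scalar inputs in [0, 1], one bit per client; names and constants are illustrative, and this is not the paper's full multi-level protocol). Clients draw their stochastic-rounding thresholds from shared randomness so that the thresholds form a stratified sample of [0, 1]: each transmitted bit remains individually unbiased, but the rounding errors largely cancel across clients, so the error of the averaged estimate tracks the deviation of the data rather than its absolute range.

```python
import numpy as np

def independent_round(x, rng):
    # Baseline: each client independently sends 1 with probability x_i.
    # Unbiased, but per-client quantization errors add up.
    u = rng.uniform(size=x.shape)
    return (u < x).astype(float)

def correlated_round(x, rng):
    # Correlated sketch: a shared random permutation assigns each client
    # its own stratum of [0, 1]; the threshold u_i is uniform on that
    # stratum, hence still Uniform[0, 1] marginally (each bit unbiased),
    # but jointly the thresholds are stratified, so errors cancel.
    n = x.shape[0]
    strata = rng.permutation(n)             # shared randomness
    u = (strata + rng.uniform(size=n)) / n  # one threshold per stratum
    return (u < x).astype(float)

rng = np.random.default_rng(0)
n = 1000
# Data concentrated near 0.7: small deviation, moderate absolute range.
x = np.clip(0.7 + 0.05 * rng.standard_normal(n), 0.0, 1.0)
true_mean = x.mean()

trials = 2000
mse_ind = np.mean([(independent_round(x, rng).mean() - true_mean) ** 2
                   for _ in range(trials)])
mse_cor = np.mean([(correlated_round(x, rng).mean() - true_mean) ** 2
                   for _ in range(trials)])
print(f"independent rounding MSE: {mse_ind:.2e}")
print(f"correlated  rounding MSE: {mse_cor:.2e}")  # much smaller here
```

In the optimization setting the abstract refers to, such a quantizer would be applied coordinate-wise to client gradients before server-side averaging; the reduced mean-estimation error is what translates into better convergence rates.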
Related papers
- Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation [62.2436697657307]
Prediction-powered inference (PPI) is a method that improves statistical estimates based on limited human-labeled data.
We propose a method called Stratified Prediction-Powered Inference (StratPPI).
We show that the basic PPI estimates can be considerably improved by employing simple data stratification strategies.
arXiv Detail & Related papers (2024-06-06T17:37:39Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates [5.13323375365494]
We provide theoretical guarantees for the convergence behaviour of diffusion-based generative models under strongly log-concave data.
The class of functions we use for score estimation consists of Lipschitz continuous functions, avoiding any Lipschitzness assumption on the score function itself.
This approach yields the best known convergence rate for our sampling algorithm.
arXiv Detail & Related papers (2023-11-22T18:40:45Z) - Integrated Conditional Estimation-Optimization [6.037383467521294]
In many real-world optimization problems, uncertain parameters have probability distributions that can be estimated using contextual feature information.
In contrast to the standard approach of estimating the distribution of uncertain parameters, we propose an integrated conditional estimation approach.
We show that our ICEO approach is theoretically consistent under moderate conditions.
arXiv Detail & Related papers (2021-10-24T04:49:35Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, any first-order method that converges efficiently to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - Distributed Nonparametric Function Estimation: Optimal Rate of Convergence and Cost of Adaptation [1.332560004325655]
Distributed minimax estimation and distributed adaptive estimation under communication constraints are studied.
We quantify the exact communication cost for adaptation and construct an optimally adaptive procedure for distributed estimation.
arXiv Detail & Related papers (2021-07-01T02:16:16Z) - Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z) - Data-Driven Combinatorial Optimization with Incomplete Information: a Distributionally Robust Optimization Approach [0.0]
We analyze linear optimization problems where the cost vector is not known a priori, but is only observable through a finite data set.
The goal is to find a procedure that transforms the data set into an estimate of the expected value of the objective function.
arXiv Detail & Related papers (2021-05-28T23:17:35Z) - A Framework for Sample Efficient Interval Estimation with Control Variates [94.32811054797148]
We consider the problem of estimating confidence intervals for the mean of a random variable.
Under certain conditions, we show improved efficiency compared to existing estimation algorithms.
arXiv Detail & Related papers (2020-06-18T05:42:30Z) - Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
arXiv Detail & Related papers (2020-02-16T09:01:18Z)