Decentralized optimization with non-identical sampling in presence of
stragglers
- URL: http://arxiv.org/abs/2108.11071v1
- Date: Wed, 25 Aug 2021 06:33:38 GMT
- Title: Decentralized optimization with non-identical sampling in presence of
stragglers
- Authors: Tharindu Adikari, Stark Draper
- Abstract summary: We consider decentralized consensus optimization when workers sample data from non-identical distributions and perform variable amounts of work due to slow nodes known as stragglers.
While we conclude that neither of the two heuristic methods is optimal, we also show that an optimal method does not exist.
- Score: 3.04585143845864
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider decentralized consensus optimization when workers sample data
from non-identical distributions and perform variable amounts of work due to
slow nodes known as stragglers. The problem of non-identical distributions and
the problem of variable amount of work have been previously studied separately.
In our work we analyze them together under a unified system model. We study the
convergence of the optimization algorithm when combining worker outputs under
two heuristic methods: (1) weighting equally, and (2) weighting by the amount
of work completed by each. We prove convergence of the two methods under
perfect consensus, assuming straggler statistics are independent and identical
across all workers for all iterations. Our numerical results show that under
approximate consensus the second method outperforms the first method for both
convex and non-convex objective functions. We make use of the theory of minimum
variance unbiased estimators (MVUE) to evaluate the existence of an optimal
method for combining worker outputs. While we conclude that neither of the two
heuristic methods is optimal, we also show that an optimal method does not
exist.
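The two heuristic combining rules from the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the toy worker outputs, and the use of mini-batch counts as the "amount of work" measure are assumptions for the example.

```python
import numpy as np

def combine_equal(outputs):
    """Method 1: weight every worker's output equally,
    regardless of how much work each straggler finished."""
    return np.mean(np.asarray(outputs), axis=0)

def combine_by_work(outputs, work):
    """Method 2: weight each worker's output in proportion to
    the amount of work it completed (straggler-aware)."""
    w = np.asarray(work, dtype=float)
    # convex combination with weights w_i / sum_j w_j
    return np.tensordot(w / w.sum(), np.asarray(outputs), axes=1)

# Toy example: 3 workers; worker 3 is a straggler that finished
# only 1 unit of work (e.g. one mini-batch gradient) out of 4.
outputs = [np.array([1.0, 2.0]),
           np.array([3.0, 4.0]),
           np.array([5.0, 6.0])]
work = [4, 4, 1]

eq = combine_equal(outputs)        # [3.0, 4.0]
wt = combine_by_work(outputs, work)  # [21/9, 30/9]
```

Under this weighting, the straggler's partially computed output contributes less to the combined estimate, which is the intuition behind the second method's better empirical performance under approximate consensus.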
Related papers
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z) - Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent, and makes the inductive validation model clear and consistent.
arXiv Detail & Related papers (2023-07-25T18:47:53Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation(NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - Distributionally Robust Optimization with Markovian Data [8.126833795693699]
We study a program where the probability distribution of the uncertain problem parameters is unknown.
We propose a data-driven distributionally robust optimization approach to estimate the problem's objective function and optimal solution.
arXiv Detail & Related papers (2021-06-12T10:59:02Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Provably Convergent Working Set Algorithm for Non-Convex Regularized
Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers with convergence guarantees.
Our results demonstrate high gain compared to the full problem solver for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z) - Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
arXiv Detail & Related papers (2020-02-16T09:01:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.