Distributed gradient methods under heavy-tailed communication noise
- URL: http://arxiv.org/abs/2505.24464v1
- Date: Fri, 30 May 2025 11:07:21 GMT
- Title: Distributed gradient methods under heavy-tailed communication noise
- Authors: Manojlo Vukovic, Dusan Jakovetic, Dragana Bajovic, Soummya Kar
- Abstract summary: We consider a standard distributed optimization problem in which networked nodes collaboratively minimize the sum of their locally known convex costs. Heavy-tailed noise is highly relevant and frequently arises in densely deployed wireless sensor and Internet of Things (IoT) networks. We show that the proposed method converges to a neighborhood of the network-wide problem solution in the mean squared error sense.
- Score: 8.424688018502726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a standard distributed optimization problem in which networked nodes collaboratively minimize the sum of their locally known convex costs. For this setting, we address for the first time the fundamental problem of designing and analyzing distributed methods that solve the above problem when inter-node communication is subject to \emph{heavy-tailed} noise. Heavy-tailed noise is highly relevant and frequently arises in densely deployed wireless sensor and Internet of Things (IoT) networks. Specifically, we design a distributed gradient-type method that features carefully balanced, time-varying consensus and gradient step sizes on mixed time scales, together with a bounded nonlinear operator on the consensus update that limits the effect of heavy-tailed noise. Assuming heterogeneous strongly convex local costs with mutually different minimizers that may be arbitrarily far apart, we show that the proposed method converges to a neighborhood of the network-wide problem solution in the mean squared error (MSE) sense, and we also characterize the corresponding convergence rate. We further show that the asymptotic MSE can be made arbitrarily small through consensus step-size tuning, possibly at the cost of slowing down the transient error decay. Numerical experiments corroborate our findings and demonstrate the resilience of the proposed method to heavy-tailed (and infinite-variance) communication noise. They also show that existing distributed methods, designed for finite-communication-noise-variance settings, fail in the presence of infinite-variance noise.
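The mechanics of such a scheme can be illustrated with a toy simulation. The sketch below is our own minimal reading of the general idea, not the authors' exact algorithm: four nodes with quadratic local costs exchange values over Cauchy (infinite-variance) channel noise, a componentwise clipping operator bounds the consensus update, and the consensus and gradient step sizes decay at different rates. The network, decay exponents, and clipping level are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): 4 nodes on a ring graph,
# quadratic local costs f_i(x) = 0.5 * (x - b_i)^2 with distinct minimizers b_i,
# so the network-wide minimizer is mean(b) = 0.375.
n = 4
b = np.array([-2.0, -0.5, 1.0, 3.0])
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def clip(v, m):
    """Bounded nonlinearity applied to the consensus update (componentwise clipping)."""
    return np.clip(v, -m, m)

x = np.zeros(n)
for t in range(1, 20001):
    beta = 1.0 / t ** 0.51   # consensus step size (decay exponents are guesses)
    alpha = 1.0 / t ** 0.75  # gradient step size, decaying faster than beta
    x_new = x.copy()
    for i in range(n):
        consensus = 0.0
        for j in neighbors[i]:
            # The received value is corrupted by heavy-tailed (Cauchy,
            # infinite-variance) communication noise; clipping bounds its effect.
            consensus += clip((x[j] + rng.standard_cauchy()) - x[i], 5.0)
        grad = x[i] - b[i]  # gradient of the local quadratic cost
        x_new[i] = x[i] + beta * consensus - alpha * grad
    x = x_new

# All nodes should end up in a neighborhood of mean(b) = 0.375.
print(x)
```

Without the clipping operator, a single Cauchy noise sample can throw an iterate arbitrarily far; with it, every consensus increment is bounded and the noise only inflates the asymptotic neighborhood.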
Related papers
- Diffusion Models for Solving Inverse Problems via Posterior Sampling with Piecewise Guidance [52.705112811734566]
A novel diffusion-based framework is introduced for solving inverse problems using a piecewise guidance scheme. The proposed method is problem-agnostic and readily adaptable to a variety of inverse problems. The framework achieves a 25% reduction in inference time for inpainting with both random and center masks, and 23% and 24% for 4× and 8× super-resolution tasks.
arXiv Detail & Related papers (2025-07-22T19:35:14Z) - DAWN-FM: Data-Aware and Noise-Informed Flow Matching for Solving Inverse Problems [4.212663349859165]
Inverse problems, which involve estimating parameters from incomplete or noisy observations, arise in various fields such as medical imaging. We employ Flow Matching (FM), a generative framework that integrates a deterministic process to map a simple reference distribution to the target distribution. Our method, DAWN-FM: Data-AWare and Noise-informed Flow Matching, incorporates data and noise embedding, allowing the model to access representations of the measured data.
arXiv Detail & Related papers (2024-12-06T04:18:49Z) - Differential error feedback for communication-efficient decentralized learning [48.924131251745266]
We propose a new decentralized communication-efficient learning approach that blends differential quantization with error feedback.
We show that the resulting communication-efficient strategy is stable both in terms of mean-square error and average bit rate.
The results establish that, in the small step-size regime and with a finite number of bits, it is possible to attain the performance achievable in the absence of compression.
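The interplay of differential quantization and error feedback described above can be sketched generically. This is an illustration of the two mechanisms, not the paper's algorithm: the uniform quantizer, variable names, and toy vector stream are our choices.

```python
import numpy as np

def quantize(v, step=0.25):
    """Uniform quantizer: round each entry to the nearest multiple of `step`."""
    return step * np.round(v / step)

rng = np.random.default_rng(1)
d = 8
memory = np.zeros(d)     # accumulated compression error (error-feedback state)
recon = np.zeros(d)      # reconstruction shared by sender and receiver

recon_errors = []
for _ in range(200):
    x = rng.normal(size=d)
    # Differential step: quantize the innovation relative to the last
    # reconstruction, corrected by the error memory.
    innovation = x - recon + memory
    q = quantize(innovation)
    memory = innovation - q   # error feedback: remember what quantization lost
    recon = recon + q         # both sides apply the quantized increment
    recon_errors.append(np.linalg.norm(x - recon))

# The reconstruction error stays bounded by the quantizer resolution
# instead of accumulating over iterations.
print(max(recon_errors))
```

The differential step keeps the transmitted increments small (hence cheap to encode), while the error memory guarantees that nothing lost to quantization is forgotten, only delayed.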
arXiv Detail & Related papers (2024-06-26T15:11:26Z) - FedNMUT -- Federated Noisy Model Update Tracking Convergence Analysis [3.665841843512992]
A novel Decentralized Noisy Model Update Tracking Federated Learning algorithm (FedNMUT) is proposed.
It is tailored to function efficiently in the presence of noisy communication channels.
FedNMUT incorporates noise into its parameters to mimic the conditions of noisy communication channels.
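The idea of injecting noise into the communicated updates can be sketched as follows. This is in the spirit of, not identical to, FedNMUT: the quadratic local objectives, step size, and Gaussian channel-noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sketch only: Gaussian noise is injected into the *communicated*
# model updates to mimic a noisy channel.
n_clients, d = 5, 3
targets = rng.normal(size=(n_clients, d))  # hypothetical local optima
w = np.zeros(d)                            # global model

for _ in range(500):
    updates = []
    for c in range(n_clients):
        local = w - 0.5 * (w - targets[c])               # one local step on a quadratic
        noisy = (local - w) + 0.05 * rng.normal(size=d)  # channel-noise injection
        updates.append(noisy)
    w = w + np.mean(updates, axis=0)

# w should land near the average of the local optima, up to a noise floor.
print(np.linalg.norm(w - targets.mean(axis=0)))
```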
arXiv Detail & Related papers (2024-03-20T02:17:47Z) - Decentralized learning in the presence of low-rank noise [57.18977364494388]
Observations collected by agents in a network may be unreliable due to observation noise or interference.
This paper proposes a distributed algorithm that allows each node to improve the reliability of its own observation.
arXiv Detail & Related papers (2022-03-18T09:13:57Z) - On Convergence of Federated Averaging Langevin Dynamics [22.013125418713763]
We propose a federated averaging Langevin algorithm (FA-LD) for uncertainty quantification and mean predictions with distributed clients.
We develop theoretical guarantees for FA-LD for strongly log-concave distributions with non-i.i.d. data.
We show convergence results based on different averaging schemes where only partial device updates are available.
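The federated averaging Langevin pattern (local Langevin steps interleaved with device averaging) can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the Gaussian local potentials, step size, and averaging period are our choices, and partial device participation is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal FA-LD-style sketch (our simplification): each client runs K local
# Langevin steps on its own quadratic potential 0.5 * (x - mean_c)^2, and the
# server periodically averages all chains. The recorded averaged samples then
# approximately target a Gaussian centred at the mean of the local minimizers.
n_clients, lr, K = 4, 0.05, 5
means = np.array([-1.0, 0.0, 1.0, 2.0])
x = np.zeros(n_clients)

samples = []
for _ in range(4000):
    for _ in range(K):  # local Langevin updates with injected Gaussian noise
        grad = x - means
        x = x - lr * grad + np.sqrt(2 * lr) * rng.normal(size=n_clients)
    x[:] = x.mean()     # full-participation averaging step
    samples.append(x[0])

samples = np.array(samples[500:])  # discard burn-in
print(samples.mean(), samples.var())
```

The recorded samples concentrate around the average of the local means (0.5 here); running more local steps between averaging rounds saves communication but biases the sampled distribution.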
arXiv Detail & Related papers (2021-12-09T18:54:29Z) - Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\log\frac{1}{\varepsilon}\big)$ communication steps.
This rate matches, for the first time (up to poly-log factors), lower communication-complexity bounds for distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z) - Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
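The extra-gradient step at the heart of such methods is easy to illustrate on the classic bilinear saddle problem, where plain gradient descent-ascent diverges. The sketch below is a single-agent, deterministic illustration; the decentralization and local-step machinery of the paper is omitted.

```python
import numpy as np

# Bilinear saddle problem f(x, y) = x * y: plain gradient descent-ascent
# spirals outward here, while extra-gradient converges to the saddle (0, 0).
def operator(z):
    x, y = z
    return np.array([y, -x])  # (df/dx, -df/dy): descent in x, ascent in y

z = np.array([1.0, 1.0])
eta = 0.1
for _ in range(500):
    z_half = z - eta * operator(z)   # extrapolation (look-ahead) step
    z = z - eta * operator(z_half)   # update using the look-ahead operator

print(np.linalg.norm(z))  # shrinks toward the saddle point at the origin
```

Evaluating the operator at the look-ahead point is what damps the rotation that defeats the naive one-step method.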
arXiv Detail & Related papers (2021-06-15T17:45:51Z) - A scalable multi-step least squares method for network identification with unknown disturbance topology [0.0]
We present an identification method for dynamic networks with known network topology.
We use a multi-step Sequential and Null Space Fitting method to deal with reduced rank noise.
We provide a consistency proof that includes explicit informativity conditions for the Box model structure.
arXiv Detail & Related papers (2021-06-14T16:12:49Z) - Neural Control Variates [71.42768823631918]
We show that a set of neural networks can face the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z) - Detached Error Feedback for Distributed SGD with Random Sparsification [98.98236187442258]
Communication bottleneck has been a critical problem in large-scale deep learning.
We propose a new distributed error feedback (DEF) algorithm, which shows better convergence than error feedback for non-convex distributed problems.
We also propose DEFA to accelerate the generalization of DEF, which shows better bounds than DEF.
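For context, the baseline that DEF improves upon, plain error-feedback SGD with random sparsification, can be sketched as follows. The detached mechanism of DEF itself is not reproduced here, and all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_sparsify(v, keep, rng):
    """Keep a random subset of `keep` coordinates, zeroing the rest."""
    mask = np.zeros(v.size)
    mask[rng.choice(v.size, size=keep, replace=False)] = 1.0
    return mask * v

# Error-feedback SGD with random sparsification on a toy quadratic.
d, keep, lr = 20, 4, 0.1
target = rng.normal(size=d)
w = np.zeros(d)
memory = np.zeros(d)  # coordinates not transmitted are remembered here

for _ in range(2000):
    grad = w - target          # gradient of 0.5 * ||w - target||^2
    p = lr * grad + memory     # correct the step with the error memory
    compressed = random_sparsify(p, keep, rng)
    memory = p - compressed    # error feedback: keep what was dropped
    w = w - compressed

# Despite sending only 4 of 20 coordinates per round, w approaches the target.
print(np.linalg.norm(w - target))
```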
arXiv Detail & Related papers (2020-04-11T03:50:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.