Byzantine-Robust Decentralized Learning via ClippedGossip
- URL: http://arxiv.org/abs/2202.01545v2
- Date: Thu, 20 Apr 2023 15:22:15 GMT
- Title: Byzantine-Robust Decentralized Learning via ClippedGossip
- Authors: Lie He, Sai Praneeth Karimireddy, Martin Jaggi
- Abstract summary: We propose a ClippedGossip algorithm for Byzantine-robust consensus optimization.
We demonstrate the encouraging empirical performance of ClippedGossip under a large number of attacks.
- Score: 61.03711813598128
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study the challenging task of Byzantine-robust
decentralized training on arbitrary communication graphs. Unlike federated
learning, where workers communicate through a server, workers in the
decentralized environment can only talk to their neighbors, making it harder to
reach consensus and benefit from collaborative training. To address these
issues, we propose a ClippedGossip algorithm for Byzantine-robust consensus and
optimization, which is the first to provably converge to an
$O(\delta_{\max}\zeta^2/\gamma^2)$ neighborhood of a stationary point for
non-convex objectives under standard assumptions. Finally, we demonstrate the
encouraging empirical performance of ClippedGossip under a large number of
attacks.
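To make the aggregation rule concrete, below is a minimal NumPy sketch of the clipped gossip idea described above: each worker mixes in its neighbors' models only after norm-clipping their differences from its own model, so no single (possibly Byzantine) neighbor can pull it arbitrarily far in one step. The function names, the fixed clipping radius `tau`, and the mixing weights are illustrative assumptions, not the paper's reference implementation; in particular, the paper specifies how the clipping radius should be chosen, which this sketch leaves as a free parameter.

```python
import numpy as np

def clip(v, tau):
    # Scale v so that its Euclidean norm is at most tau.
    norm = np.linalg.norm(v)
    return v if norm <= tau else v * (tau / norm)

def clipped_gossip_step(x_i, neighbor_models, mixing_weights, tau):
    # One gossip step for worker i: mix in each neighbor's model x_j,
    # but clip the difference (x_j - x_i) first, so a Byzantine neighbor
    # can perturb x_i by at most its mixing weight times tau.
    update = np.zeros_like(x_i)
    for w_ij, x_j in zip(mixing_weights, neighbor_models):
        update += w_ij * clip(x_j - x_i, tau)
    return x_i + update

# Toy usage: the second "neighbor" sends an outlying model, but its
# influence on the local model is bounded by the clipping radius.
x_i = np.array([0.0, 0.0])
neighbors = [np.array([1.0, 0.0]), np.array([1e3, 1e3])]
print(clipped_gossip_step(x_i, neighbors, mixing_weights=[0.3, 0.3], tau=2.0))
```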
Related papers
- Byzantine-Robust and Communication-Efficient Distributed Learning via Compressed Momentum Filtering [17.446431849022346]
Distributed learning has become the standard approach for training large-scale machine learning models across private data silos.
It faces critical challenges related to robustness and communication efficiency.
We propose a novel Byzantine-robust and communication-efficient distributed learning method.
arXiv Detail & Related papers (2024-09-13T08:53:10Z) - Fantastyc: Blockchain-based Federated Learning Made Secure and Practical [0.7083294473439816]
Federated Learning is a decentralized framework that enables clients to collaboratively train a machine learning model under the orchestration of a central server without sharing their local data.
The centrality of this framework represents a single point of failure, which is addressed in the literature by blockchain-based federated learning approaches.
We propose Fantastyc, a solution designed to address these challenges, which have never been addressed together in the state of the art.
arXiv Detail & Related papers (2024-06-05T20:01:49Z) - Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences [61.74021364776313]
Distributed learning has emerged as a leading paradigm for training large machine learning models.
In real-world scenarios, participants may be unreliable or malicious, posing a significant challenge to the integrity and accuracy of the trained models.
We propose the first distributed method with client sampling and provable tolerance to Byzantine workers (a sketch of the gradient-difference clipping idea appears after this list).
arXiv Detail & Related papers (2023-11-23T17:50:30Z) - Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z) - Byzantine-Robust Online and Offline Distributed Reinforcement Learning [60.970950468309056]
We consider a distributed reinforcement learning setting where multiple agents explore the environment and communicate their experiences through a central server.
An $\alpha$-fraction of the agents are adversarial and can report arbitrary fake information.
We seek to identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents.
arXiv Detail & Related papers (2022-06-01T00:44:53Z) - Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of potentially unreliable or malicious peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z) - Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study of the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.