Network Fault-tolerant and Byzantine-resilient Social Learning via
Collaborative Hierarchical Non-Bayesian Learning
- URL: http://arxiv.org/abs/2307.14952v1
- Date: Thu, 27 Jul 2023 15:46:46 GMT
- Title: Network Fault-tolerant and Byzantine-resilient Social Learning via
Collaborative Hierarchical Non-Bayesian Learning
- Authors: Connor Mclaughlin, Matthew Ding, Deniz Erdogmus, Lili Su
- Abstract summary: We address the problem of non-Bayesian learning over networks vulnerable to communication failures and adversarial attacks.
We first propose a hierarchical robust push-sum algorithm that can achieve average consensus despite frequent packet-dropping link failures.
We then obtain a packet-dropping fault-tolerant non-Bayesian learning algorithm with provable convergence guarantees.
- Score: 2.236663830879273
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As the network scale increases, existing fully distributed
solutions start to lag behind real-world challenges such as (1) slow
information propagation, (2) network communication failures, and (3) external
adversarial attacks. In this paper, we focus on a hierarchical system
architecture and address the problem of non-Bayesian learning over networks
that are vulnerable to communication failures and adversarial attacks. On the
network communication side, we consider packet-dropping link failures.
We first propose a hierarchical robust push-sum algorithm that can achieve
average consensus despite frequent packet-dropping link failures. We provide a
sparse information fusion rule between the parameter server and arbitrarily
selected network representatives. Then, interleaving the consensus update step
with a dual averaging update with Kullback-Leibler (KL) divergence as the
proximal function, we obtain a packet-dropping fault-tolerant non-Bayesian
learning algorithm with provable convergence guarantees.
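With KL divergence as the proximal function, the dual-averaging mirror step reduces to a softmax, so each agent's belief stays on the probability simplex and is log-linear in the accumulated evidence. Below is a minimal single-agent sketch of one interleaved update, assuming a finite hypothesis set; the function name and the weighted-sum abstraction of the consensus step (carried out in the paper by hierarchical robust push-sum) are illustrative, not the authors' exact implementation.

```python
import numpy as np

def kl_dual_averaging_step(z_neighbors, weights, log_likelihood):
    """One dual-averaging update with KL divergence as the proximal
    function. The consensus step (abstracted here as a weighted sum of
    dual variables received from in-neighbors) is interleaved with the
    local evidence update.

    z_neighbors    : (n_neighbors, n_hypotheses) dual variables received
    weights        : (n_neighbors,) consensus weights summing to 1
    log_likelihood : (n_hypotheses,) log-likelihood of the newest
                     private observation under each hypothesis
    """
    z_new = weights @ z_neighbors + log_likelihood  # consensus + local evidence
    # With the KL proximal function, the mirror step is a softmax,
    # i.e. a log-linear (non-Bayesian) belief update.
    belief = np.exp(z_new - z_new.max())            # stabilized exponentiation
    belief /= belief.sum()
    return z_new, belief
```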
On external adversarial attacks, we consider Byzantine attacks in which the
compromised agents can send maliciously calibrated messages to others
(including both the agents and the parameter server). To avoid the curse of
dimensionality of Byzantine consensus, we solve the non-Bayesian learning
problem via running multiple dynamics, each of which only involves Byzantine
consensus with scalar inputs. To facilitate resilient information propagation
across sub-networks, we use a novel Byzantine-resilient gossiping-type rule at
the parameter server.
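The abstract does not spell out the gossiping-type rule used at the parameter server, so the sketch below substitutes a standard scalar-resilient primitive, the trimmed mean: with at most f Byzantine senders per scalar dynamic, dropping the f smallest and f largest received values keeps the aggregate within the range of the honest inputs. The function names and the parameter f are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def trimmed_mean(values, f):
    """Scalar Byzantine-resilient aggregation: sort the received values,
    drop the f smallest and f largest, and average the rest. Requires
    more than 2f values; tolerates up to f Byzantine senders."""
    v = np.sort(np.asarray(values, dtype=float))
    if v.size <= 2 * f:
        raise ValueError("need more than 2f values to trim f from each side")
    return v[f:v.size - f].mean()

def coordinatewise_trimmed_mean(vectors, f):
    """Run one scalar dynamic per coordinate. Enforcing resilience
    coordinate-by-coordinate sidesteps the curse of dimensionality of
    full-vector Byzantine consensus."""
    X = np.asarray(vectors, dtype=float)  # (n_senders, dim)
    return np.apply_along_axis(trimmed_mean, 0, X, f)
```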
Related papers
- Byzantine-Robust and Communication-Efficient Distributed Learning via Compressed Momentum Filtering [17.446431849022346]
Distributed learning has become the standard approach for training large-scale machine learning models across private data silos.
However, it faces critical challenges related to robustness and communication efficiency.
We propose a novel Byzantine-robust and communication-efficient distributed learning method.
arXiv Detail & Related papers (2024-09-13T08:53:10Z)
- Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification [1.1887808102491482]
We propose GABAttack, a novel genetic algorithm-based backdoor attack against federated learning for network traffic classification.
This research serves as an alarming call for network security experts and practitioners to develop robust defense measures against such attacks.
arXiv Detail & Related papers (2023-09-27T14:02:02Z)
- Byzantine-Robust Online and Offline Distributed Reinforcement Learning [60.970950468309056]
We consider a distributed reinforcement learning setting where multiple agents explore the environment and communicate their experiences through a central server.
An $\alpha$-fraction of the agents are adversarial and can report arbitrary fake information.
We seek to identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents.
arXiv Detail & Related papers (2022-06-01T00:44:53Z)
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
- Federated Learning via Plurality Vote [38.778944321534084]
Federated learning allows collaborative workers to solve a machine learning problem while preserving data privacy.
Recent studies have tackled various challenges in federated learning.
We propose a new scheme named federated learning via plurality vote (FedVote).
arXiv Detail & Related papers (2021-10-06T18:16:22Z)
- Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of untrusted peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z)
- Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z)
- Attentive WaveBlock: Complementarity-enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-identification and Beyond [97.25179345878443]
This paper proposes a novel lightweight module, the Attentive WaveBlock (AWB).
AWB can be integrated into the dual networks of mutual learning to enhance the complementarity and further depress noise in the pseudo-labels.
Experiments demonstrate that the proposed method achieves state-of-the-art performance with significant improvements on multiple UDA person re-identification tasks.
arXiv Detail & Related papers (2020-06-11T15:40:40Z)
- Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study of the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z)