Byzantine-Resilient High-Dimensional Federated Learning
- URL: http://arxiv.org/abs/2006.13041v2
- Date: Sun, 16 Aug 2020 21:24:14 GMT
- Title: Byzantine-Resilient High-Dimensional Federated Learning
- Authors: Deepesh Data and Suhas Diggavi
- Abstract summary: We study stochastic gradient descent (SGD) with local iterations in the presence of malicious/Byzantine clients.
The clients update their local models by taking several SGD iterations on their own datasets and then communicate the net update with the server.
We believe that ours is the first Byzantine-resilient algorithm and analysis with local iterations.
- Score: 10.965065178451104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study stochastic gradient descent (SGD) with local iterations in the
presence of malicious/Byzantine clients, motivated by federated learning.
The clients, instead of communicating with the central server in every
iteration, maintain their local models, which they update by taking several SGD
iterations based on their own datasets and then communicate the net update with
the server, thereby achieving communication-efficiency. Furthermore, only a
subset of clients communicate with the server, and this subset may be different
at different synchronization times. The Byzantine clients may collaborate and
send arbitrary vectors to the server to disrupt the learning process. To combat
the adversary, we employ an efficient high-dimensional robust mean estimation
algorithm from Steinhardt et al. (ITCS 2018) at the
server to filter out corrupt vectors; and to analyze the outlier-filtering
procedure, we develop a novel matrix concentration result that may be of
independent interest.
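
To make the training protocol concrete, the following minimal Python/NumPy sketch illustrates one synchronization round under simplified assumptions: each participating client runs a few SGD steps on a toy quadratic objective and sends only its net update, and the server discards the updates farthest from the coordinate-wise median before averaging. The toy loss, the norm-based filter, and all function names here are illustrative assumptions; the paper's actual defense is the high-dimensional robust mean estimation algorithm of Steinhardt et al., analyzed through the matrix concentration result mentioned above.

import numpy as np

# Hypothetical illustration: one synchronization round of local-SGD-style federated
# training with a server-side outlier filter. The quadratic toy loss and the
# norm-based filter are stand-ins, not the paper's actual procedure.

def local_update(x_global, data, lr=0.1, local_steps=5, batch_size=8, rng=None):
    """Run several SGD steps on the client's own data; return the net update."""
    rng = rng or np.random.default_rng()
    x = x_global.copy()
    for _ in range(local_steps):
        batch = data[rng.choice(len(data), size=batch_size)]
        grad = x - batch.mean(axis=0)   # gradient of 0.5*||x - a_i||^2 averaged over the batch
        x -= lr * grad
    return x - x_global                 # "net update" communicated to the server

def filter_and_average(updates, trim_fraction=0.25):
    """Toy robust aggregation: drop the updates farthest from the coordinate-wise
    median, then average the rest (a stand-in for the paper's filtering step)."""
    center = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - center, axis=1)
    keep = np.argsort(dists)[: int(np.ceil((1 - trim_fraction) * len(updates)))]
    return updates[keep].mean(axis=0)

# One round: only a (possibly different) subset of clients participates.
rng = np.random.default_rng(0)
d, n_clients = 10, 20
client_data = [rng.normal(loc=i % 3, size=(100, d)) for i in range(n_clients)]  # heterogeneous local data
x_global = np.zeros(d)

participants = rng.choice(n_clients, size=10, replace=False)
updates = np.stack([local_update(x_global, client_data[c], rng=rng) for c in participants])
updates[:2] += 100.0                    # two Byzantine participants send corrupted vectors
x_global += filter_and_average(updates)
print("global model after one round:", np.round(x_global, 3))

In this toy round the two corrupted updates lie far from the median of the honest ones, so they are filtered out before the averaged net update is applied to the global model.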
We provide convergence analyses for strongly-convex and non-convex smooth
objectives in the heterogeneous data setting, where different clients may have
different local datasets, and we do not make any probabilistic assumptions on
data generation. We believe that ours is the first Byzantine-resilient
algorithm and analysis with local iterations. We derive our convergence results
under minimal assumptions of bounded variance for SGD and bounded gradient
dissimilarity (which captures heterogeneity among local datasets). We also
extend our results to the case when clients compute full-batch gradients.
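
For reference, the two assumptions are commonly written as follows; the notation (stochastic gradients $g_i$, local objectives $f_i$, and constants $\sigma$, $\kappa$) is a standard formulation and may differ from the paper's exact definitions.

% Common formulations of the assumptions; notation is illustrative, not the paper's exact definitions.
\[
  \mathbb{E}\bigl\| g_i(x) - \nabla f_i(x) \bigr\|^2 \le \sigma^2
  \qquad \text{(bounded variance of client } i\text{'s stochastic gradient)},
\]
\[
  \frac{1}{n}\sum_{i=1}^{n} \bigl\| \nabla f_i(x) - \nabla f(x) \bigr\|^2 \le \kappa^2,
  \qquad \text{where } f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)
  \quad \text{(bounded gradient dissimilarity)}.
\]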
Related papers
- Modality Alignment Meets Federated Broadcasting [9.752555511824593]
Federated learning (FL) has emerged as a powerful approach to safeguard data privacy by training models across distributed edge devices without centralizing local data.
This paper introduces a novel FL framework leveraging modality alignment, where a text encoder resides on the server, and image encoders operate on local devices.
arXiv Detail & Related papers (2024-11-24T13:30:03Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
We propose a parameter-efficient Federated Anomaly Detection framework named PeFAD, motivated by increasing privacy concerns.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Asynchronous Federated Stochastic Optimization for Heterogeneous Objectives Under Arbitrary Delays [0.0]
Federated learning (FL) was recently proposed to securely train models with data held over multiple locations ("clients").
Two major challenges hindering the performance of FL algorithms are long training times caused by straggling clients, and a decline in model accuracy under non-iid local data distributions ("client drift").
We propose and analyze Asynchronous Exact Averaging (AREA), a new (sub)gradient algorithm that utilizes communication to speed up convergence and enhance scalability, and employs client memory to correct the client drift caused by variations in client update frequencies.
arXiv Detail & Related papers (2024-05-16T14:22:49Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Efficient Distribution Similarity Identification in Clustered Federated
Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered federated learning has been shown to produce promising results by grouping clients into clusters.
Existing clustered FL algorithms essentially try to group together clients with similar data distributions.
Prior FL algorithms estimate these similarities only indirectly during training; this paper instead compares the principal angles between client data subspaces (see the sketch after this list).
arXiv Detail & Related papers (2022-09-21T17:37:54Z) - Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z) - Stochastic Coded Federated Learning with Convergence and Privacy
Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a coded federated learning framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue.
We characterize the privacy guarantee by the mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z) - Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - Coded Computing for Low-Latency Federated Learning over Wireless Edge
Networks [10.395838711844892]
Federated learning enables training a global model from data located at the client nodes, without data sharing and moving client data to a centralized server.
We propose a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning for mitigating stragglers and speeding up the training procedure.
arXiv Detail & Related papers (2020-11-12T06:21:59Z) - Byzantine-Resilient SGD in High Dimensions on Heterogeneous Data [10.965065178451104]
We study distributed stochastic gradient descent (SGD) in the master-worker architecture under Byzantine attacks.
Our algorithm can tolerate up to a $\frac{1}{4}$ fraction of Byzantine workers.
arXiv Detail & Related papers (2020-05-16T04:15:27Z)
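
As a brief illustration of the "principal angles between client data subspaces" idea mentioned in the clustered-FL entry above (sketched here under assumptions of my own, not the cited paper's implementation): the angles between two clients' dominant data subspaces can be read off the singular values of the product of their orthonormal bases, and small angles indicate similar data distributions. SciPy's scipy.linalg.subspace_angles computes the same quantity.

import numpy as np

# Hypothetical sketch: compare two clients' data by the principal angles between
# the subspaces spanned by their top-k right singular vectors (smaller angles
# suggest more similar distributions). Function and variable names are assumptions.
def principal_angles(data_a, data_b, k=3):
    Qa = np.linalg.svd(data_a, full_matrices=False)[2][:k].T   # (d, k) orthonormal basis
    Qb = np.linalg.svd(data_b, full_matrices=False)[2][:k].T
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)       # cosines of the principal angles
    return np.arccos(np.clip(cosines, -1.0, 1.0))

rng = np.random.default_rng(1)
client_a = rng.normal(size=(200, 10)) * np.arange(1, 11)       # variance grows with feature index
client_b = rng.normal(size=(200, 10)) * np.arange(1, 11)       # same distribution as client_a
client_c = rng.normal(size=(200, 10)) * np.arange(10, 0, -1)   # variance concentrated on early features
print("similar clients:   ", np.round(principal_angles(client_a, client_b), 2))
print("dissimilar clients:", np.round(principal_angles(client_a, client_c), 2))

Clients whose subspaces form small principal angles with each other would then be assigned to the same cluster.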
This list is automatically generated from the titles and abstracts of the papers in this site.