Federated Adversarial Learning: A Framework with Convergence Analysis
- URL: http://arxiv.org/abs/2208.03635v1
- Date: Sun, 7 Aug 2022 04:17:34 GMT
- Title: Federated Adversarial Learning: A Framework with Convergence Analysis
- Authors: Xiaoxiao Li, Zhao Song, Jiaming Yang
- Abstract summary: Federated learning (FL) is a trending training paradigm to utilize decentralized training data.
FL allows clients to update model parameters locally for several epochs, then share them with the server for aggregation into a global model.
This training paradigm with multi-local step updating before aggregation exposes unique vulnerabilities to adversarial attacks.
- Score: 28.136498729360504
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning (FL) is a trending training paradigm to utilize
decentralized training data. FL allows clients to update model parameters
locally for several epochs, then share them with the server for aggregation into a global model.
This training paradigm with multi-local step updating before aggregation
exposes unique vulnerabilities to adversarial attacks. Adversarial training is
a popular and effective method to improve the robustness of networks against
adversaries. In this work, we formulate a general form of federated adversarial
learning (FAL) that is adapted from adversarial learning in the centralized
setting. On the client side of FL training, FAL has an inner loop to generate
adversarial samples for adversarial training and an outer loop to update local
model parameters. On the server side, FAL aggregates local model updates and
broadcasts the aggregated model. We design a global robust training loss and
formulate FAL training as a min-max optimization problem. Unlike the
convergence analysis in classical centralized training that relies on the
gradient direction, it is significantly harder to analyze the convergence in
FAL for three reasons: 1) the complexity of min-max optimization, 2) the model
not updating along the gradient direction due to multiple local updates on the
client side before aggregation, and 3) inter-client heterogeneity. We address
these challenges by using appropriate gradient approximation and coupling
techniques and present the convergence analysis in the over-parameterized
regime. Our main result theoretically shows that the minimum loss under our
algorithm can converge to an $\epsilon$-small value with an appropriately chosen
learning rate and number of communication rounds. Notably, our analysis also
applies to non-IID clients.
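To make the min-max structure concrete, one standard way to write a global robust training loss of the kind described above is the following; the client weights $p_c$, the perturbation set, and the loss $\ell$ are illustrative placeholders rather than the paper's exact definitions:
$$
\min_{W} \; \sum_{c=1}^{N} p_c \cdot \frac{1}{|D_c|} \sum_{(x, y) \in D_c} \; \max_{\|\delta\| \le \rho} \ell\big(f(x + \delta; W),\, y\big),
$$
where client $c$ holds local dataset $D_c$, $f(\cdot; W)$ is the shared model, and the inner maximization produces the adversarial perturbation $\delta$.
The client/server structure described in the abstract can be sketched as follows. This is a minimal illustration in PyTorch, assuming a PGD-style inner loop and plain federated averaging on the server; the model, data, and hyperparameters are placeholders, not the paper's algorithm:
```python
import copy
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.3, alpha=0.05, steps=5):
    """Inner loop: craft adversarial samples by projected gradient ascent on the loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascent step on the loss
            delta.clamp_(-eps, eps)             # project back onto the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()

def local_adversarial_training(global_model, x, y, local_steps=3, lr=0.1):
    """Outer loop: update local parameters on adversarial samples for several steps."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(local_steps):
        x_adv = pgd_attack(model, x, y)                            # inner maximization
        opt.zero_grad()
        nn.functional.cross_entropy(model(x_adv), y).backward()
        opt.step()                                                 # outer minimization step
    return model.state_dict()

def federated_round(global_model, client_data):
    """Server side: collect local updates, average them, broadcast the result."""
    updates = [local_adversarial_training(global_model, x, y) for x, y in client_data]
    averaged = {k: torch.stack([u[k] for u in updates]).mean(dim=0) for k in updates[0]}
    global_model.load_state_dict(averaged)
    return global_model

# Toy usage with synthetic, deliberately non-IID client data (shifted features).
torch.manual_seed(0)
model = nn.Linear(10, 2)
clients = [(torch.randn(32, 10) + i, torch.randint(0, 2, (32,))) for i in range(3)]
for _ in range(5):
    model = federated_round(model, clients)
```
In this sketch, each call to pgd_attack is the inner maximization and each optimizer step is one outer local update; the server simply averages the returned state dicts, mirroring the multi-local-step-then-aggregate pattern that the abstract identifies as the source of the convergence-analysis difficulty.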
Related papers
- Achieving Linear Speedup in Asynchronous Federated Learning with
Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z) - Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees [18.24213566328972]
Decentralized federated learning (DFL) captures FL settings where both (i) model updates and (ii) model aggregations are carried out by the clients without a central server.
DSpodFL consistently achieves speedups compared with baselines under various system settings.
arXiv Detail & Related papers (2024-02-05T19:02:19Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating the results of local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - MimiC: Combating Client Dropouts in Federated Learning by Mimicking Central Updates [8.363640358539605]
Federated learning (FL) is a promising framework for privacy-preserving collaborative learning.
This paper investigates the convergence of the classical FedAvg algorithm with arbitrary client dropouts.
We then design a novel training algorithm named MimiC, where the server modifies each received model update based on the previous ones.
arXiv Detail & Related papers (2023-06-21T12:11:02Z) - Towards More Suitable Personalization in Federated Learning via
Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems have to face large communication burdens if the central FL server fails.
It personalizes the "right" components in the deep models by alternately updating the shared and personal parameters.
To further promote the shared parameter aggregation process, we propose DFed, which integrates local Sharpness Minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z) - Over-The-Air Federated Learning under Byzantine Attacks [43.67333971183711]
Federated learning (FL) is a promising solution to enable many AI applications.
FL allows the clients to participate in the training phase, governed by a central server, without sharing their local data.
One of the main challenges of FL is the communication overhead.
We propose a transmission and aggregation framework to reduce the effect of such Byzantine attacks.
arXiv Detail & Related papers (2022-05-05T22:09:21Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - A Bayesian Federated Learning Framework with Online Laplace
Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z) - Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) based FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)