MimiC: Combating Client Dropouts in Federated Learning by Mimicking Central Updates
- URL: http://arxiv.org/abs/2306.12212v4
- Date: Mon, 8 Apr 2024 08:00:42 GMT
- Title: MimiC: Combating Client Dropouts in Federated Learning by Mimicking Central Updates
- Authors: Yuchang Sun, Yuyi Mao, Jun Zhang
- Abstract summary: Federated learning (FL) is a promising framework for privacy-preserving collaborative learning.
This paper investigates the convergence of the classical FedAvg algorithm with arbitrary client dropouts.
We then design a novel training algorithm named MimiC, where the server modifies each received model update based on the previous ones.
- Score: 8.363640358539605
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Federated learning (FL) is a promising framework for privacy-preserving collaborative learning, where model training tasks are distributed to clients and only the model updates need to be collected at a server. However, when deployed in mobile edge networks, clients may have unpredictable availability and drop out of the training process, which hinders the convergence of FL. This paper tackles this critical challenge. Specifically, we first investigate the convergence of the classical FedAvg algorithm with arbitrary client dropouts. We find that, with the common choice of a decaying learning rate, FedAvg oscillates around a stationary point of the global loss function, which is caused by the divergence between the aggregated update and the desired central update. Motivated by this new observation, we then design a novel training algorithm named MimiC, in which the server modifies each received model update based on the previous ones. The proposed modification of the received model updates mimics the imaginary central update irrespective of dropout clients. The theoretical analysis of MimiC shows that the divergence between the aggregated and central updates diminishes with proper learning rates, leading to its convergence. Simulation results further demonstrate that MimiC maintains stable convergence performance and learns better models than the baseline methods.
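The abstract only sketches the idea, so below is a minimal, hypothetical Python sketch of a MimiC-style server step: each update received from a surviving client is corrected using previously stored quantities (that client's last update and the previous aggregated update), so that the averaged result tracks the full-participation "central" update despite dropouts. The class name `MimicStyleServer`, the specific correction rule, and the learning-rate handling are illustrative assumptions made for this sketch, not the paper's exact algorithm.

```python
"""Illustrative sketch (not the authors' reference code) of a MimiC-style
server round in which received updates are modified based on previous ones."""

from typing import Dict, List
import numpy as np


class MimicStyleServer:
    def __init__(self, num_clients: int, dim: int, lr: float = 0.1):
        self.lr = lr
        self.model = np.zeros(dim)
        # Last update received from each client (zeros until first contact).
        self.last_update: Dict[int, np.ndarray] = {
            i: np.zeros(dim) for i in range(num_clients)
        }
        # Previous round's aggregated (central-mimicking) update.
        self.last_central = np.zeros(dim)

    def round(self, received: Dict[int, np.ndarray]) -> np.ndarray:
        """`received` maps surviving client ids to their raw local updates;
        clients that dropped out this round simply do not appear."""
        corrected: List[np.ndarray] = []
        for cid, g in received.items():
            # Assumed correction rule: shift the fresh update by the gap
            # between the previous aggregate and this client's own past
            # update, so the average mimics a full-participation step.
            corrected.append(g + (self.last_central - self.last_update[cid]))
            self.last_update[cid] = g  # refresh the per-client memory
        if corrected:
            central_like = np.mean(corrected, axis=0)
            self.last_central = central_like
            self.model -= self.lr * central_like  # apply as a central update
        return self.model
```

Consistent with the abstract, the learning rate in such a sketch would need to be chosen (e.g., decayed) appropriately for the divergence between the aggregated and central updates to diminish.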
Related papers
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Federated Adversarial Learning: A Framework with Convergence Analysis [28.136498729360504]
Federated learning (FL) is a trending training paradigm to utilize decentralized training data.
FL allows clients to update model parameters locally for several epochs, then share them to a global model for aggregation.
This training paradigm with multi-local step updating before aggregation exposes unique vulnerabilities to adversarial attacks.
arXiv Detail & Related papers (2022-08-07T04:17:34Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates [6.758334200305236]
Federated Learning (FL) enables multiple distributed clients (e.g., mobile devices) to collaboratively train a centralized model while keeping the training data locally on the client.
In this paper, we propose to mitigate these failures and attacks from a spatial-temporal perspective.
Specifically, we use a clustering-based method to detect and exclude incorrect updates by leveraging their geometric properties in the parameter space.
arXiv Detail & Related papers (2021-07-03T18:48:11Z)
- Separation of Powers in Federated Learning [5.966064140042439]
Federated Learning (FL) enables collaborative training among mutually distrusting parties.
Recent attacks have reconstructed large fractions of training data from ostensibly "sanitized" model updates.
We introduce TRUDA, a new cross-silo FL system, employing a trustworthy and decentralized aggregation architecture.
arXiv Detail & Related papers (2021-05-19T21:00:44Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Slashing Communication Traffic in Federated Learning by Transmitting Clustered Model Updates [12.660500431713336]
Federated Learning (FL) is an emerging decentralized learning framework through which multiple clients can collaboratively train a learning model.
However, heavy communication traffic can be incurred by exchanging model updates between clients and the parameter server (PS) via the Internet.
In this work, we devise the Model Update Compression by Soft Clustering (MUCSC) algorithm to compress model updates transmitted between clients and the PS.
arXiv Detail & Related papers (2021-05-10T07:15:49Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm, which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
- Adversarial Robustness through Bias Variance Decomposition: A New Perspective for Federated Learning [41.525434598682764]
Federated learning learns a neural network model by aggregating the knowledge from a group of distributed clients under the privacy-preserving constraint.
We show that this paradigm might inherit the adversarial vulnerability of the centralized neural network.
We propose an adversarially robust federated learning framework, named Fed_BVA, with improved server and client update mechanisms.
arXiv Detail & Related papers (2020-09-18T18:58:25Z)