Improving Accuracy of Federated Learning in Non-IID Settings
- URL: http://arxiv.org/abs/2010.15582v1
- Date: Wed, 14 Oct 2020 21:02:14 GMT
- Title: Improving Accuracy of Federated Learning in Non-IID Settings
- Authors: Mustafa Safa Ozdayi, Murat Kantarcioglu, Rishabh Iyer
- Abstract summary: Federated Learning (FL) is a decentralized machine learning protocol that allows a set of participating agents to collaboratively train a model without sharing their data.
It has been observed that the performance of FL is closely tied to the local data distributions of agents.
In this work, we identify four simple techniques that can improve the performance of trained models without adding any communication overhead to FL.
- Score: 11.908715869667445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a decentralized machine learning protocol that allows a set of participating agents to collaboratively train a model without sharing their data. This makes FL particularly suitable for settings where data privacy is desired. However, it has been observed that the performance of FL is closely tied to the local data distributions of agents. In particular, in settings where local data distributions differ vastly across agents, FL performs rather poorly compared with centralized training. To address this problem, we hypothesize about the reasons behind the performance degradation and develop techniques to address them accordingly. In this work, we identify four simple techniques that can improve the performance of trained models without adding any communication overhead to FL, at the cost of only light computational overhead on either the client or the server side. In our experimental analysis, the combination of our techniques improved the validation accuracy of a model trained via FL by more than 12% relative to our baseline; this is about 5% less than the accuracy of the model trained on centralized data.
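The abstract does not enumerate the four techniques, so as context, here is a minimal sketch of the FL baseline it measures against: a FedAvg-style round over agents holding label-skewed (non-IID) partitions. The toy model, the partitioning scheme, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def non_iid_partition(labels, n_agents, classes_per_agent=2):
    """Label-skew partition: each agent only sees a few classes."""
    classes = np.unique(labels)
    parts = []
    for _ in range(n_agents):
        own = rng.choice(classes, size=classes_per_agent, replace=False)
        idx = np.flatnonzero(np.isin(labels, own))
        parts.append(rng.choice(idx, size=min(200, idx.size), replace=False))
    return parts

def local_sgd(w, X, y, lr=0.1, epochs=1):
    """One agent's local update of a linear softmax model."""
    for _ in range(epochs):
        logits = X @ w
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0        # softmax cross-entropy gradient
        w = w - lr * (X.T @ p) / len(y)
    return w

def fedavg_round(w_global, agents_data):
    """Server broadcasts the model; agents train locally; server averages."""
    updates = [local_sgd(w_global.copy(), X, y) for X, y in agents_data]
    sizes = np.array([len(y) for _, y in agents_data], dtype=float)
    return np.tensordot(sizes / sizes.sum(), np.stack(updates), axes=1)

# Toy usage: 4 agents, 10-dim features, 4 classes, label-skewed local data.
X, y = rng.normal(size=(1000, 10)), rng.integers(0, 4, size=1000)
agents = [(X[p], y[p]) for p in non_iid_partition(y, n_agents=4)]
w = np.zeros((10, 4))
for _ in range(5):
    w = fedavg_round(w, agents)
```

Under such label skew, local updates pull the averaged model in conflicting directions, which is the degradation the paper targets.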
Related papers
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems have to face large communication burdens and the risk of disruption if the central FL server fails.
It personalizes the "right" components in the deep models by alternately updating the shared and personal parameters.
To further promote the aggregation of the shared parameters, we propose DFed, integrating local Sharpness Minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
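As a rough illustration of the alternating update mentioned above, the sketch below splits a tiny model into a shared body (what would be aggregated across clients) and a personal head (kept local); the split, the model, and the step size are assumptions, and the sharpness-minimization component is omitted.

```python
import torch

torch.manual_seed(0)
shared = torch.randn(10, 16, requires_grad=True)   # body: exchanged with peers
personal = torch.randn(16, 3, requires_grad=True)  # head: never leaves the client

def loss_fn(x, y):
    return torch.nn.functional.cross_entropy(torch.relu(x @ shared) @ personal, y)

def alternating_step(x, y, lr=0.05):
    """One alternating update: personal head first, then the shared body."""
    g_personal, = torch.autograd.grad(loss_fn(x, y), [personal])
    with torch.no_grad():
        personal -= lr * g_personal
    g_shared, = torch.autograd.grad(loss_fn(x, y), [shared])  # uses updated head
    with torch.no_grad():
        shared -= lr * g_shared

x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
alternating_step(x, y)  # afterwards, only `shared` would be sent to neighbors
```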
- Federated Learning for Predictive Maintenance and Quality Inspection in Industrial Applications [0.36855408155998204]
Federated learning (FL) enables multiple participants to develop a machine learning model without compromising the privacy and confidentiality of their data.
We evaluate the performance of different FL aggregation methods and compare them to central and local training approaches.
We introduce a new federated learning dataset from a real-world quality inspection setting.
arXiv Detail & Related papers (2023-04-21T16:11:09Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
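A minimal sketch of the one-way offline distillation described above: each local model labels the shared unlabeled public set once, only the averaged soft labels reach the server, and the central model is fit to them. The temperature, optimizer, and all function names are assumptions.

```python
import torch
import torch.nn.functional as F

def ensemble_soft_labels(local_models, public_x, T=3.0):
    """Offline, one-way step: clients label the public set; only these
    averaged soft predictions (not data or weights) are shared."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(public_x) / T, dim=1) for m in local_models])
    return probs.mean(dim=0)

def distill_central_model(student, public_x, soft_labels, T=3.0, lr=1e-2, steps=100):
    """Server side: fit the central model to the ensemble's soft labels."""
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        log_p = F.log_softmax(student(public_x) / T, dim=1)
        loss = F.kl_div(log_p, soft_labels, reduction="batchmean") * T * T
        loss.backward()
        opt.step()
    return student

# Toy usage with linear stand-ins for the local and central models.
locals_ = [torch.nn.Linear(10, 3) for _ in range(4)]
student = torch.nn.Linear(10, 3)
public_x = torch.randn(64, 10)
student = distill_central_model(student, public_x,
                                ensemble_soft_labels(locals_, public_x))
```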
- Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation [22.151404603413752]
Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model.
Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution.
We develop a privacy-preserving and communication-efficient method in an FL framework with one-shot offline knowledge distillation.
arXiv Detail & Related papers (2022-09-10T05:20:31Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
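The summary above does not state FedReg's actual objective, so as a generic illustration of alleviating forgetting during local training, the sketch below adds a proximal penalty that keeps local weights close to the broadcast global model (a FedProx-style device, not necessarily FedReg's).

```python
import torch

def local_loss_with_proximal(model, global_params, x, y, mu=0.1):
    """Task loss plus a penalty against drifting from the global weights,
    a common way to keep local training from forgetting global knowledge."""
    task = torch.nn.functional.cross_entropy(model(x), y)
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    return task + 0.5 * mu * prox

# Toy usage on a linear model.
model = torch.nn.Linear(10, 3)
global_params = [p.clone() for p in model.parameters()]
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
local_loss_with_proximal(model, global_params, x, y).backward()
```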
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
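A sketch of a gradient-inversion attack in the spirit of the works discussed above (DLG-style): dummy inputs and labels are optimized until the gradients they induce match those observed from a client. This is illustrative only, not the paper's new baseline attack.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, observed_grads, x_shape, n_classes,
                       steps=200, lr=0.1):
    """Optimize dummy data so its gradients match the observed ones."""
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(x_shape[0], n_classes, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy_x), torch.softmax(dummy_y, dim=1))
        grads = torch.autograd.grad(loss, params, create_graph=True)
        match = sum(((g - o.detach()) ** 2).sum()
                    for g, o in zip(grads, observed_grads))
        match.backward()                      # gradient w.r.t. the dummy batch
        opt.step()
    return dummy_x.detach()

# Toy usage: reconstruct a 2-sample batch from a linear model's gradients.
model = torch.nn.Linear(10, 3)
x_true, y_true = torch.randn(2, 10), torch.tensor([0, 2])
observed = torch.autograd.grad(F.cross_entropy(model(x_true), y_true),
                               list(model.parameters()))
x_rec = gradient_inversion(model, observed, x_shape=(2, 10), n_classes=3)
```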
- The Impact of Data Distribution on Fairness and Robustness in Federated Learning [7.209240273742224]
Federated Learning (FL) is a distributed machine learning protocol that allows a set of agents to collaboratively train a model without sharing their datasets.
In this work, we look at how variations in local data distributions affect the fairness and the robustness of the trained models.
arXiv Detail & Related papers (2021-11-29T22:04:50Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
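A schematic of one BLADE-FL round as described above, with the block-generation competition reduced to a random lottery; mining, verification, and lazy-client detection are omitted, and every name is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def blade_fl_round(trained_models, stale_model, lazy=frozenset()):
    """Each client broadcasts a model, one client wins block generation,
    and all clients aggregate the models recorded in that block before
    the next round of local training. Lazy clients skip training and
    rebroadcast a stale model instead."""
    broadcast = [stale_model if i in lazy else m
                 for i, m in enumerate(trained_models)]
    winner = int(rng.integers(len(broadcast)))   # stand-in for the mining race
    block = np.stack(broadcast)                  # models stored in the new block
    return block.mean(axis=0), winner

# Toy usage: 5 clients with 10-dim "models"; client 3 is lazy.
models = [rng.normal(size=10) for _ in range(5)]
w_next, miner = blade_fl_round(models, stale_model=np.zeros(10), lazy={3})
```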
- Federated learning with hierarchical clustering of local updates to improve training on non-IID data [3.3517146652431378]
We show that learning a single joint model is often not optimal in the presence of certain types of non-IID data.
We present a modification to FL by introducing a hierarchical clustering step (FL+HC).
We show how FL+HC allows model training to converge in fewer communication rounds compared to FL without clustering.
arXiv Detail & Related papers (2020-04-24T15:16:01Z)
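A sketch of the clustering idea in the spirit of FL+HC: after some initial FedAvg rounds, clients are grouped by the similarity of their flattened local updates, and federated training then proceeds independently per cluster. The linkage method, metric, and threshold below (and the use of SciPy) are assumptions, not necessarily the paper's settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

def cluster_clients(client_updates, distance_threshold):
    """Group clients whose local updates point in similar directions;
    each resulting cluster then trains its own federated model."""
    U = np.stack([u.ravel() for u in client_updates])
    Z = linkage(U, method="ward")                # agglomerative clustering
    return fcluster(Z, t=distance_threshold, criterion="distance")

# Toy usage: two latent groups of clients should land in separate clusters.
updates = [rng.normal(0.0, 0.1, size=20) for _ in range(5)] + \
          [rng.normal(2.0, 0.1, size=20) for _ in range(5)]
cluster_ids = cluster_clients(updates, distance_threshold=5.0)
```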
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.