FedBR: Improving Federated Learning on Heterogeneous Data via Local Learning Bias Reduction
- URL: http://arxiv.org/abs/2205.13462v4
- Date: Wed, 31 May 2023 02:43:52 GMT
- Title: FedBR: Improving Federated Learning on Heterogeneous Data via Local Learning Bias Reduction
- Authors: Yongxin Guo, Xiaoying Tang, Tao Lin
- Abstract summary: Federated Learning (FL) is a way for machines to learn from data that is kept locally, in order to protect the privacy of clients.
We propose FedBR, a novel unified algorithm that reduces the local learning bias on features and classifiers.
We conducted several experiments to test FedBR and found that it consistently outperforms other SOTA FL methods.
- Score: 5.757705591791482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a way for machines to learn from data that is kept
locally, in order to protect the privacy of clients. This is typically done
using local SGD, which helps to improve communication efficiency. However, such
a scheme is currently constrained by slow and unstable convergence due to the
heterogeneity of data across different clients' devices. In this work, we identify three
under-explored phenomena of biased local learning that may explain these
challenges caused by local updates in supervised FL. As a remedy, we propose
FedBR, a novel unified algorithm that reduces the local learning bias on
features and classifiers to tackle these challenges. FedBR has two components.
The first component helps to reduce bias in local classifiers by balancing the
output of the models. The second component helps to learn local features that
are similar to global features, but different from those learned from other
data sources. We conducted several experiments to test FedBR and found that
it consistently outperforms other SOTA FL methods. Both of its components also
individually show performance gains. Our code is available at
https://github.com/lins-lab/fedbr.
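To make the two components concrete, here is a minimal PyTorch-style sketch of one local training step: a balance term pushes the classifier's output on pseudo-data toward a uniform class distribution, and an alignment term pulls local features toward the frozen global model's features. The pseudo-data source, the features/classifier model split, the KL/MSE penalties, and the weights mu1/mu2 are illustrative assumptions rather than the paper's exact objective; in particular, the contrastive "different from other data sources" part of the feature term is omitted.

```python
# Hedged sketch of a FedBR-style local update; interfaces and loss weights
# are assumptions for illustration, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def fedbr_local_step(model, global_model, x, y, pseudo_x, opt, mu1=0.1, mu2=0.1):
    opt.zero_grad()
    feats = model.features(x)
    task_loss = F.cross_entropy(model.classifier(feats), y)

    # (1) Classifier debiasing: on pseudo-data, push the classifier's output
    # toward a uniform (balanced) distribution over classes.
    pseudo_logits = model.classifier(model.features(pseudo_x))
    uniform = torch.full_like(pseudo_logits, 1.0 / pseudo_logits.size(1))
    balance_loss = F.kl_div(F.log_softmax(pseudo_logits, dim=1), uniform,
                            reduction="batchmean")

    # (2) Feature debiasing: keep local features close to the frozen global
    # model's features on the same inputs (the contrastive "away from other
    # data sources" part of the paper's objective is omitted here).
    with torch.no_grad():
        global_feats = global_model.features(x)
    align_loss = F.mse_loss(feats, global_feats)

    loss = task_loss + mu1 * balance_loss + mu2 * align_loss
    loss.backward()
    opt.step()
    return loss.item()
```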
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- FLea: Addressing Data Scarcity and Label Skew in Federated Learning via Privacy-preserving Feature Augmentation [15.298650496155508]
Federated Learning (FL) enables model development by leveraging data distributed across numerous edge devices without transferring local data to a central server.
Existing FL methods face challenges when dealing with scarce and label-skewed data across devices, resulting in local model overfitting and drift.
We propose a pioneering framework called FLea, which counters these problems via privacy-preserving feature augmentation (a hedged sketch of the idea follows the link below).
arXiv Detail & Related papers (2024-06-13T19:28:08Z)
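As a hedged illustration of what feature-level augmentation can look like, the sketch below mixes local intermediate features with features shared by other clients, mixup-style. The mixing rule, the use of soft labels, and the sharing protocol are assumptions for illustration, not FLea's exact design.

```python
# Hedged sketch of feature-level augmentation in the spirit of FLea: mix local
# intermediate features with (obfuscated) features shared by other clients so
# scarce, label-skewed local data sees a richer feature distribution.
import torch

def augment_features(local_feats, local_soft_labels,
                     shared_feats, shared_soft_labels, lam=0.7):
    # Pair each local feature with a randomly drawn shared feature; labels are
    # soft (e.g., one-hot) so they can be mixed with the same coefficient.
    idx = torch.randint(0, shared_feats.size(0), (local_feats.size(0),))
    mixed_feats = lam * local_feats + (1.0 - lam) * shared_feats[idx]
    mixed_labels = lam * local_soft_labels + (1.0 - lam) * shared_soft_labels[idx]
    return mixed_feats, mixed_labels
```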
- Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping [64.58402571292723]
We propose a manifold reshaping approach called FedMR to calibrate the feature space of local training.
We conduct extensive experiments on a range of datasets to demonstrate that our FedMR achieves much higher accuracy and better communication efficiency.
arXiv Detail & Related papers (2024-05-29T10:56:13Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, a class-prototype similarity distillation method in a federated framework that aligns the local and global models (a generic distillation sketch follows the link below).
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
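The alignment mechanism can be sketched as standard temperature-scaled logit distillation from the global model to the local one; the class-prototype similarity weighting that gives FedCSD its name is not reproduced here.

```python
# Hedged sketch of temperature-scaled logit distillation toward the global
# model: the generic mechanism behind FedCSD-style local/global alignment.
import torch.nn.functional as F

def distill_loss(local_logits, global_logits, T=2.0):
    # Soften both distributions with temperature T; the KL term pulls the
    # local model's predictions toward the (detached) global model's.
    p_global = F.softmax(global_logits.detach() / T, dim=1)
    log_p_local = F.log_softmax(local_logits / T, dim=1)
    return F.kl_div(log_p_local, p_global, reduction="batchmean") * (T * T)
```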
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Global Knowledge Distillation in Federated Learning [3.7311680121118345]
We propose a novel global knowledge distillation method, named FedGKD, which learns knowledge from past global models to tackle the biased local training problem (a hedged sketch follows the link below).
To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on various CV datasets (CIFAR-10/100) under non-i.i.d. data settings.
arXiv Detail & Related papers (2021-06-30T18:14:24Z)
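A hedged sketch of such global knowledge distillation: regularize local training by distilling from a buffer of past global models. Averaging their logits is an illustrative choice, not necessarily the paper's exact ensembling scheme.

```python
# Hedged sketch of FedGKD-style regularization: distill from a buffer of past
# global models during local training.
import torch
import torch.nn.functional as F

def global_kd_loss(local_model, past_global_models, x, T=1.0):
    with torch.no_grad():
        # Ensemble teacher: mean of the historical global models' logits.
        teacher_logits = torch.stack([m(x) for m in past_global_models]).mean(0)
    teacher = F.softmax(teacher_logits / T, dim=1)
    student = F.log_softmax(local_model(x) / T, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * (T * T)
```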
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for the distributed training of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm (a hedged precoding sketch follows the link below).
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
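A hedged sketch of over-the-air precoding in this spirit: each client scales its update to satisfy a transmit power budget, the channel sums the analog signals with noise, and the server inverts the scaling. The power normalization below is an illustrative simplification, not COTAF's exact precoder.

```python
# Hedged sketch of OTA aggregation with power-constrained precoding: clients
# scale updates to meet budget P, the channel adds their signals plus noise,
# and the server de-precodes and averages.
import numpy as np

def ota_round(updates, P=1.0, noise_std=0.01, rng=np.random.default_rng(0)):
    # Precoding factor chosen so every scaled update satisfies ||x||^2 <= P.
    alpha = P / (max(float(np.sum(u ** 2)) for u in updates) + 1e-12)
    tx = [np.sqrt(alpha) * u for u in updates]        # precoded analog signals
    noise = rng.normal(0.0, noise_std, updates[0].shape)
    rx = np.sum(tx, axis=0) + noise                   # noisy sum over the air
    # Server de-precodes: undo the scaling and average over the K clients.
    return rx / (np.sqrt(alpha) * len(updates))
```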
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.