Federated Learning with Classifier Shift for Class Imbalance
- URL: http://arxiv.org/abs/2304.04972v1
- Date: Tue, 11 Apr 2023 04:38:39 GMT
- Title: Federated Learning with Classifier Shift for Class Imbalance
- Authors: Yunheng Shen, Haoxiang Wang, Hairong Lv
- Abstract summary: Federated learning aims to learn a global model collaboratively while the training data belongs to different clients and is not allowed to be exchanged.
This paper proposes a simple and effective approach named FedShift, which adds a shift to the classifier output during the local training phase to alleviate the negative impact of class imbalance.
- Score: 6.097542448692326
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning aims to learn a global model collaboratively while the
training data belongs to different clients and is not allowed to be exchanged.
However, the statistical heterogeneity of non-IID data, such as class
imbalance in classification, causes client drift and significantly reduces
the performance of the global model. This paper proposes a simple and effective
approach named FedShift, which adds a shift to the classifier output during
the local training phase to alleviate the negative impact of class imbalance.
We theoretically prove that the classifier shift in FedShift can make the local
optimum consistent with the global optimum and ensure the convergence of the
algorithm. Moreover, our experiments indicate that FedShift significantly
outperforms other state-of-the-art federated learning approaches on various
datasets in terms of accuracy and communication efficiency.
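The shift itself is easy to prototype. Below is a minimal PyTorch sketch of one plausible form, in which each client offsets its logits by the log of its local class prior during local training; the function and variable names are illustrative, and the paper's exact shift may differ.

```python
import torch
import torch.nn.functional as F

def shifted_cross_entropy(logits, targets, local_class_counts, eps=1e-8):
    """Cross-entropy with a per-class logit shift derived from the client's
    local label distribution (an illustrative form, not the paper's exact one)."""
    prior = local_class_counts / local_class_counts.sum()
    # Locally over-represented classes get their logits boosted during
    # training, so the learned weights compensate for the imbalance.
    shift = torch.log(prior + eps)
    return F.cross_entropy(logits + shift, targets)

# Toy usage: 4 classes, client holds mostly class 0.
logits = torch.randn(8, 4, requires_grad=True)
targets = torch.randint(0, 4, (8,))
counts = torch.tensor([100.0, 10.0, 5.0, 1.0])
loss = shifted_cross_entropy(logits, targets, counts)
loss.backward()
```

At inference time such a shift would be dropped, so the aggregated classifier itself stays unbiased.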
Related papers
- FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning [5.23984567704876]
Federated learning offers a paradigm to the challenge of preserving privacy in distributed machine learning.
Traditional approaches fail to address the phenomenon of class-wise bias in globally long-tailed data.
The new method FedLF introduces three modifications in the local training phase: adaptive logit adjustment, continuous class-centered optimization, and feature decorrelation.
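Of the three modifications, feature decorrelation is the easiest to illustrate in isolation. The sketch below penalizes off-diagonal entries of the batch feature correlation matrix, a common form of decorrelation; whether FedLF's variant takes exactly this form is an assumption.

```python
import torch

def feature_decorrelation_loss(features, eps=1e-8):
    """Penalize off-diagonal correlations between feature dimensions.
    features: (batch, dim) activations from the local model's backbone."""
    z = features - features.mean(dim=0, keepdim=True)
    z = z / (z.std(dim=0, keepdim=True) + eps)
    corr = (z.T @ z) / z.shape[0]               # (dim, dim) correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum() / features.shape[1]

feats = torch.randn(32, 16, requires_grad=True)
loss = feature_decorrelation_loss(feats)
loss.backward()
```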
arXiv Detail & Related papers (2024-09-18T16:25:29Z)
- SFedCA: Credit Assignment-Based Active Client Selection Strategy for Spiking Federated Learning [15.256986486372407]
Spiking federated learning allows resource-constrained devices to train collaboratively at low power consumption without exchanging local data.
Existing spiking federated learning methods employ a random selection approach for client aggregation, assuming unbiased client participation.
We propose a credit assignment-based active client selection strategy, SFedCA, which judiciously aggregates clients that contribute to balancing the global sample distribution.
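As a rough, hypothetical illustration of the selection idea (ignoring the spiking-specific credit assignment), one could score candidate clients by how much their label histograms would flatten the aggregate class distribution:

```python
import numpy as np

def imbalance(hist):
    """Squared distance of a class histogram from the uniform distribution."""
    p = hist / hist.sum()
    return ((p - 1.0 / len(p)) ** 2).sum()

def select_clients(client_hists, k):
    """Greedily pick k clients whose data best rebalances the aggregate
    class distribution (a loose stand-in for SFedCA's credit scores)."""
    chosen, agg = [], np.zeros_like(client_hists[0], dtype=float)
    remaining = list(range(len(client_hists)))
    for _ in range(k):
        best = min(remaining, key=lambda i: imbalance(agg + client_hists[i]))
        chosen.append(best)
        agg += client_hists[best]
        remaining.remove(best)
    return chosen

hists = [np.array(h, dtype=float) for h in
         ([90, 5, 5], [5, 90, 5], [5, 5, 90], [34, 33, 33])]
print(select_clients(hists, 2))  # selects the most balancing clients first
```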
arXiv Detail & Related papers (2024-06-18T01:56:22Z)
- Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping [64.58402571292723]
We propose a manifold reshaping approach called FedMR to calibrate the feature space of local training.
We conduct extensive experiments on a range of datasets to demonstrate that our FedMR achieves much higher accuracy and better communication efficiency.
arXiv Detail & Related papers (2024-05-29T10:56:13Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which applies class prototype similarity distillation in a federated framework to align the local and global models.
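The alignment can be pictured as a distillation term added to the local objective that pulls local logits toward the frozen global model's logits. FedCSD's per-sample prototype weighting is more involved, so the sketch below shows only a plain temperature-scaled version with an optional weight hook.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(local_logits, global_logits, weights=None, T=2.0):
    """KL distillation from the (frozen) global model to the local model.
    weights: optional per-sample confidence, e.g. derived from similarity
    between global features and their class prototypes (assumed form)."""
    p_global = F.softmax(global_logits.detach() / T, dim=1)
    log_p_local = F.log_softmax(local_logits / T, dim=1)
    kl = F.kl_div(log_p_local, p_global, reduction="none").sum(dim=1)
    if weights is not None:
        kl = kl * weights
    return (T * T) * kl.mean()

local = torch.randn(8, 10, requires_grad=True)
glob = torch.randn(8, 10)
loss = logit_distillation_loss(local, glob)
loss.backward()
```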
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- FedCME: Client Matching and Classifier Exchanging to Handle Data Heterogeneity in Federated Learning [5.21877373352943]
Data heterogeneity across clients is one of the key challenges in Federated Learning (FL).
We propose a novel FL framework named FedCME that addresses it through client matching and classifier exchanging.
Experimental results demonstrate that FedCME performs better than FedAvg, FedProx, MOON and FedRS on popular federated learning benchmarks.
arXiv Detail & Related papers (2023-07-17T15:40:45Z)
- Towards Unbiased Training in Federated Open-world Semi-supervised Learning [15.08153616709326]
We propose a novel Federated open-world Semi-Supervised Learning (FedoSSL) framework, which can solve the key challenge in distributed and open-world settings.
We adopt an uncertainty-aware suppressed loss to alleviate the biased training between locally unseen and globally unseen classes.
The proposed FedoSSL can be easily adapted to state-of-the-art FL methods, which is also validated via extensive experiments on benchmarks and real-world datasets.
arXiv Detail & Related papers (2023-05-01T11:12:37Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate local real data with global gradient prototypes to form locally balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
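One natural measure of this kind is the squared distance of the grouped class distribution from uniform. The sketch below computes such a quadratic measure in plaintext; the paper's homomorphic-encryption machinery, which hides per-client histograms from the server, is omitted here.

```python
import numpy as np

def grouped_class_imbalance(client_hists):
    """Quadratic imbalance of the data grouped from the selected clients:
    squared L2 distance from the uniform class distribution. Fed-CBS
    derives a measure of this flavor under homomorphic encryption so the
    server never sees individual client histograms."""
    grouped = np.sum(client_hists, axis=0).astype(float)
    p = grouped / grouped.sum()
    return float(((p - 1.0 / len(p)) ** 2).sum())

balanced = [np.array([50, 50]), np.array([50, 50])]
skewed = [np.array([95, 5]), np.array([90, 10])]
print(grouped_class_imbalance(balanced), grouped_class_imbalance(skewed))
# 0.0 for the balanced grouping vs. a clearly larger value for the skewed one
```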
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC).
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local and global model parameters.
Experimental results and analysis demonstrate that FedDC yields faster convergence and better performance on various image classification tasks.
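A minimal sketch of the drift bookkeeping is shown below; the full method also adds penalty and gradient-correction terms to the local objective, and the names here are illustrative.

```python
import torch

class DriftTracker:
    """Per-client auxiliary drift variable h_i that accumulates the gap
    between locally trained and global parameters, in the spirit of
    FedDC's local drift decoupling (a simplified rendering)."""
    def __init__(self, num_params):
        self.h = torch.zeros(num_params)

    def update(self, local_params, global_params):
        # After local training, fold the new local-vs-global gap into h_i.
        self.h += local_params - global_params

    def corrected_upload(self, local_params):
        # The client reports drift-corrected parameters for aggregation.
        return local_params + self.h

tracker = DriftTracker(num_params=4)
global_p = torch.zeros(4)
local_p = torch.tensor([0.1, -0.2, 0.05, 0.0])   # result of local SGD
tracker.update(local_p, global_p)
print(tracker.corrected_upload(local_p))
```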
arXiv Detail & Related papers (2022-03-22T14:06:26Z)
- FedProc: Prototypical Contrastive Federated Learning on Non-IID data [24.1906520295278]
Federated learning allows multiple clients to collaborate to train deep learning models while keeping the training data locally.
We propose FedProc: prototypical contrastive federated learning.
We show that FedProc improves accuracy by 1.6% to 7.9% with acceptable computation cost.
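The prototypical contrastive idea has a compact generic form: pull each local feature toward the global prototype of its class and push it away from the other prototypes. The sketch below is that generic form; the temperature and normalization choices are assumptions rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def proto_contrastive_loss(features, labels, prototypes, temperature=0.5):
    """Cross-entropy over similarities to global class prototypes:
    attracts each feature to its own class prototype, repels the rest."""
    f = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = (f @ p.T) / temperature   # (batch, num_classes) similarities
    return F.cross_entropy(logits, labels)

feats = torch.randn(16, 32, requires_grad=True)
labels = torch.randint(0, 10, (16,))
protos = torch.randn(10, 32)           # global prototypes from the server
loss = proto_contrastive_loss(feats, labels, protos)
loss.backward()
```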
arXiv Detail & Related papers (2021-09-25T04:32:23Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)