Federated Skewed Label Learning with Logits Fusion
- URL: http://arxiv.org/abs/2311.08202v1
- Date: Tue, 14 Nov 2023 14:37:33 GMT
- Title: Federated Skewed Label Learning with Logits Fusion
- Authors: Yuwei Wang, Runhan Li, Hao Tan, Xuefeng Jiang, Sheng Sun, Min Liu, Bo Gao, Zhiyuan Wu
- Abstract summary: Federated learning (FL) aims to collaboratively train a shared model across multiple clients without transmitting their local data.
We propose FedBalance, which corrects the optimization bias among local models by calibrating their logits.
Our method can gain 13% higher average accuracy compared with state-of-the-art methods.
- Score: 23.062650578266837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) aims to collaboratively train a shared model across
multiple clients without transmitting their local data. Data heterogeneity is a
critical challenge in realistic FL settings, as it causes significant
performance deterioration due to discrepancies in optimization among local
models. In this work, we focus on label distribution skew, a common scenario in
data heterogeneity, where the data label categories are imbalanced on each
client. To address this issue, we propose FedBalance, which corrects the
optimization bias among local models by calibrating their logits. Specifically,
we introduce an extra private weak learner on the client side, which forms an
ensemble model with the local model. By fusing the logits of the two models,
the private weak learner can capture the variance of different data, regardless
of their category. Therefore, the optimization direction of local models can be
improved by increasing the penalty for misclassifying minority classes and
reducing the attention to majority classes, resulting in a better global model.
Extensive experiments show that our method can gain 13% higher average
accuracy compared with state-of-the-art methods.
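The abstract does not include code; as a rough illustration of the logits-fusion idea, the sketch below (with hypothetical model and function names) computes the loss on the element-wise sum of the local model's and the private weak learner's logits:

```python
import torch
import torch.nn.functional as F

def fused_loss(local_logits, weak_logits, targets):
    # Cross-entropy on the element-wise sum of the two models' logits.
    # A sketch of the logits-fusion idea, not the paper's exact rule.
    return F.cross_entropy(local_logits + weak_logits, targets)

def local_step(local_model, weak_model, optimizer, x, y):
    # One hypothetical local training step on a client.
    local_logits = local_model(x)
    weak_logits = weak_model(x).detach()  # weak learner not updated here
    loss = fused_loss(local_logits, weak_logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```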
Related papers
- FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning [5.23984567704876]
Federated learning offers a paradigm for preserving privacy in distributed machine learning.
Traditional approaches fail to address the class-wise bias that arises in globally long-tailed data.
The new method FedLF introduces three modifications in the local training phase: adaptive logit adjustment, continuous class-centred optimization, and feature decorrelation. (arXiv 2024-09-18)
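The summary does not spell out the adjustment rule. A standard (non-adaptive) form of logit adjustment, which FedLF's adaptive variant presumably builds on, offsets each logit by the log class prior so minority classes are not under-penalized; a minimal sketch, assuming per-client class counts are available:

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_counts, tau=1.0):
    # Offset logits by the log class prior (standard logit adjustment);
    # FedLF's adaptive variant is more elaborate than this.
    prior = class_counts.float() / class_counts.sum()
    return F.cross_entropy(logits + tau * torch.log(prior + 1e-12), targets)
```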
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm. (arXiv 2024-04-29)
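For contrast, the "aggregate-then-adapt" step that FedAF removes is essentially FedAvg-style weighted averaging of client weights on the server; a minimal sketch of that baseline step (not of FedAF itself):

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    # Weighted average of client state dicts -- the server-side
    # "aggregate" step that an aggregation-free method dispenses with.
    total = float(sum(client_sizes))
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in client_states[0].items()}
    for state, n in zip(client_states, client_sizes):
        for k, v in state.items():
            avg[k] += v.float() * (n / total)
    return avg
```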
- FedDistill: Global Model Distillation for Local Model De-Biasing in Non-IID Federated Learning [10.641875933652647]
Federated Learning (FL) enables collaborative machine learning without centralizing raw data.
FL faces challenges due to non-uniformly distributed (non-IID) data across clients.
This paper introduces FedDistill, a framework enhancing the knowledge transfer from the global model to local models. (arXiv 2024-04-14)
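The exact FedDistill objective is not given in the summary; a generic global-to-local distillation loss, shown here only to illustrate the transfer mechanism, combines label cross-entropy with a KL term toward the global model's softened predictions:

```python
import torch.nn.functional as F

def global_to_local_distill(student_logits, teacher_logits, targets,
                            T=2.0, alpha=0.5):
    # Label cross-entropy plus KL toward the global teacher's softened
    # outputs -- generic KD, not FedDistill's exact de-biasing objective.
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```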
- Exploiting Label Skews in Federated Learning with Model Concatenation [39.38427550571378]
Federated Learning (FL) has emerged as a promising solution to perform deep learning on different data owners without exchanging raw data.
Among the different non-IID types, label skew is common and particularly challenging in image classification and other tasks.
We propose FedConcat, a simple and effective approach that concatenates local models as the base of the global model. (arXiv 2023-12-11)
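Concatenation here can be pictured as stacking the local models' feature extractors side by side and training a fresh classifier on the joint features; a hedged sketch, assuming each encoder emits features of the same dimension:

```python
import torch
import torch.nn as nn

class ConcatModel(nn.Module):
    # Global model built from frozen local encoders whose features are
    # concatenated and fed to a freshly trained classifier (a sketch of
    # the concatenation idea only).
    def __init__(self, local_encoders, feat_dim, num_classes):
        super().__init__()
        self.encoders = nn.ModuleList(local_encoders)
        for enc in self.encoders:
            enc.requires_grad_(False)
        self.classifier = nn.Linear(feat_dim * len(local_encoders),
                                    num_classes)

    def forward(self, x):
        feats = torch.cat([enc(x) for enc in self.encoders], dim=1)
        return self.classifier(feats)
```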
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a new algorithm that performs class-prototype similarity distillation in a federated framework to align the local and global models. (arXiv 2023-08-20)
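One plausible reading of class-prototype similarity distillation, sketched below with hypothetical function names, is to align the local model's feature-to-prototype similarities with the global model's:

```python
import torch
import torch.nn.functional as F

def prototype_similarities(features, prototypes, tau=0.5):
    # Cosine similarity of each sample to every class prototype,
    # used as soft logits.
    f = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=1)
    return f @ p.t() / tau

def csd_style_loss(local_feats, global_feats, prototypes):
    # Align local similarities with the (detached) global ones.
    local_sim = prototype_similarities(local_feats, prototypes)
    global_sim = prototype_similarities(global_feats, prototypes).detach()
    return F.kl_div(F.log_softmax(local_sim, dim=1),
                    F.softmax(global_sim, dim=1),
                    reduction="batchmean")
```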
- Is Aggregation the Only Choice? Federated Learning via Layer-wise Model Recombination [33.12164201146458]
We propose a novel FL paradigm named FedMR (Federated Model Recombination).
The goal of FedMR is to guide the recombined models to be trained towards a flat area of the loss landscape.
Compared with state-of-the-art FL methods, FedMR significantly improves inference accuracy without exposing the privacy of individual clients. (arXiv 2023-05-18)
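Layer-wise recombination can be pictured as shuffling whole layers across the collected client models before sending recombined models back; a minimal sketch under that reading (state-dict entries stand in for layers):

```python
import random

def recombine_layerwise(client_states):
    # Shuffle each state-dict entry independently across clients, so
    # every recombined model mixes layers from different clients.
    # (Entries are per-parameter rather than strictly per-layer, which
    # is close enough for a sketch.)
    keys = list(client_states[0].keys())
    n = len(client_states)
    recombined = [dict() for _ in range(n)]
    for k in keys:
        order = random.sample(range(n), n)  # fresh permutation per key
        for dst, src in enumerate(order):
            recombined[dst][k] = client_states[src][k].clone()
    return recombined
```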
- Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods. (arXiv 2023-05-04)
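Sparse model-adaptation can be illustrated with learnable gates that scale, and under an L1 penalty switch off, blocks of the shared model; this is a sketch of the general idea, not pFedGate's actual gating architecture:

```python
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    # Scale a block's output by a learnable gate; gates driven toward
    # zero by an L1 penalty yield a sparse personalized sub-model.
    def __init__(self, block):
        super().__init__()
        self.block = block
        self.gate = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return self.gate * self.block(x)

def sparsity_penalty(model, lam=1e-3):
    # L1 on all gate parameters encourages many of them to be near zero.
    return lam * sum(p.abs().sum()
                     for name, p in model.named_parameters()
                     if name.endswith("gate"))
```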
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and consequently between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data. (arXiv 2023-03-30)
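Of the two strategies, reweighing is the simpler to sketch: samples from under-represented populations receive larger loss weights, for example by inverse group frequency (a standard scheme, used here only as an illustration):

```python
import numpy as np

def inverse_frequency_weights(group_ids):
    # Per-sample loss weights proportional to the inverse frequency of
    # each sample's group.
    groups, counts = np.unique(group_ids, return_counts=True)
    freq = dict(zip(groups, counts / counts.sum()))
    return np.array([1.0 / freq[g] for g in group_ids])
```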
- FedABC: Targeting Fair Competition in Personalized Federated Learning [76.9646903596757]
Federated learning aims to collaboratively train models without accessing clients' local private data.
We propose a novel and generic PFL framework, Federated Averaging via Binary Classification (FedABC).
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate the unfair competition between classes. (arXiv 2023-02-15)
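The one-vs-all strategy replaces a single C-way softmax with C independent binary problems, so majority classes cannot crowd out minority ones inside one normalization; a generic sketch (not FedABC's full objective):

```python
import torch.nn.functional as F

def one_vs_all_loss(logits, targets, num_classes):
    # C independent binary problems (sigmoid + BCE) instead of one
    # C-way softmax, so classes do not compete inside a normalizer.
    onehot = F.one_hot(targets, num_classes).float()
    return F.binary_cross_entropy_with_logits(logits, onehot)
```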
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
Data samples usually follow a long-tailed distribution in the real world, and FL on decentralized, long-tailed data yields a poorly behaved global model.
In this work, we integrate local real data with global gradient prototypes to form locally balanced datasets. (arXiv 2023-01-25)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method. (arXiv 2021-02-03)
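In its standard form, the Laplace approximation fits a Gaussian to a client's posterior around a mode of the local log-posterior; the paper's online variant refines this across training rounds:

```latex
% Laplace approximation of client k's posterior around a mode \theta^*:
p(\theta \mid \mathcal{D}_k) \approx \mathcal{N}\left(\theta^{*}, H^{-1}\right),
\qquad
H = -\left.\nabla^{2}_{\theta} \log p(\theta \mid \mathcal{D}_k)\right|_{\theta = \theta^{*}}
```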
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and accepts no responsibility for consequences of its use.