FedCRL: Personalized Federated Learning with Contrastive Shared Representations for Label Heterogeneity in Non-IID Data
- URL: http://arxiv.org/abs/2404.17916v2
- Date: Fri, 22 Nov 2024 12:51:47 GMT
- Title: FedCRL: Personalized Federated Learning with Contrastive Shared Representations for Label Heterogeneity in Non-IID Data
- Authors: Chenghao Huang, Xiaolu Chen, Yanru Zhang, Hao Wang
- Abstract summary: This paper proposes a novel personalized federated learning algorithm, named Federated Contrastive Shareable Representations (FedCoSR). Parameters of local models' shallow layers and typical local representations are both treated as shareable information for the server. To address poor performance caused by label distribution skew among clients, contrastive learning is adopted between local and global representations.
- Score: 13.146806294562474
- Abstract: Heterogeneity resulting from label distribution skew and data scarcity can lead to inaccuracy and unfairness in intelligent communication applications that rely mainly on distributed computing. To address this, this paper proposes a novel personalized federated learning algorithm, named Federated Contrastive Shareable Representations (FedCoSR), to facilitate knowledge sharing among clients while maintaining data privacy. Specifically, the parameters of local models' shallow layers and typical local representations are both treated as shareable information for the server and aggregated globally. To address the poor performance caused by label distribution skew among clients, contrastive learning is adopted between local and global representations to enrich local knowledge. Additionally, to ensure fairness for clients with scarce data, FedCoSR introduces adaptive local aggregation to coordinate the degree of global model involvement for each client. Our simulations demonstrate FedCoSR's effectiveness in mitigating label heterogeneity, achieving accuracy and fairness improvements over existing methods on datasets with varying degrees of label heterogeneity.
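As a rough illustration of the contrastive step, the sketch below treats the server-aggregated representation of each label as the positive for local features of that label, and all other labels' global representations as negatives, InfoNCE-style. The function names, temperature, and weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_repr_loss(local_reprs, labels, global_reprs, temperature=0.5):
    """local_reprs: (B, D) features from a client's shallow layers.
    labels: (B,) integer class labels.
    global_reprs: (C, D) server-aggregated representation per class (assumed)."""
    local_reprs = F.normalize(local_reprs, dim=1)
    global_reprs = F.normalize(global_reprs, dim=1)
    # Cosine similarity of each local feature to every class's global representation.
    logits = local_reprs @ global_reprs.t() / temperature
    # InfoNCE: the matching label's global representation acts as the positive.
    return F.cross_entropy(logits, labels)

# Hypothetical usage inside local training:
# loss = task_loss + lam * contrastive_repr_loss(z, y, server_reprs)
```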
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
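For contrast, here is a minimal sketch of the aggregate-then-adapt baseline described above, in its simplest FedAvg-style form (a size-weighted average of client weights); this is the pattern FedAF removes, not FedAF itself.

```python
def fedavg_aggregate(client_states, client_sizes):
    """Size-weighted average of client state_dicts (FedAvg-style baseline)."""
    total = sum(client_sizes)
    return {
        key: sum(state[key] * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }
```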
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- GLOCALFAIR: Jointly Improving Global and Local Group Fairness in Federated Learning [8.033939709734451]
Federated learning (FL) has emerged as a prospective solution for collaboratively learning a shared model across clients without sacrificing their data privacy.
FL tends to be biased against certain demographic groups due to inherent properties of FL, such as data heterogeneity and party selection.
We propose GFAIR, a client-server codesign that can improve global and local group fairness without the need for sensitive statistics about the client's private datasets.
arXiv Detail & Related papers (2024-01-07T18:10:14Z)
- FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation [7.052566906745796]
FedLPA is a layer-wise posterior aggregation method for federated learning.
We show that FedLPA significantly improves learning performance over state-of-the-art methods across several metrics.
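A hedged sketch of what layer-wise posterior aggregation can look like, assuming each client reports a Gaussian posterior (mean and diagonal precision) per layer that the server fuses precision-weighted; FedLPA's actual estimator may differ.

```python
import torch

def aggregate_layerwise(posteriors):
    """posteriors: one {layer_name: (mean, precision)} dict per client (assumed format)."""
    fused = {}
    for layer in posteriors[0]:
        means = torch.stack([p[layer][0] for p in posteriors])
        precs = torch.stack([p[layer][1] for p in posteriors])
        # Product of Gaussians: precision-weighted mean, fused layer by layer.
        fused[layer] = (precs * means).sum(dim=0) / precs.sum(dim=0)
    return fused
```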
arXiv Detail & Related papers (2023-09-30T10:51:27Z)
- FedCiR: Client-Invariant Representation Learning for Federated Non-IID Features [15.555538379806135]
Federated learning (FL) is a distributed learning paradigm that maximizes the potential of data-driven models for edge devices without sharing their raw data.
We propose FedCiR, a client-invariant representation learning framework that enables clients to extract informative and client-invariant features.
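One hedged way to picture client-invariant feature learning: penalize the gap between a client's per-class feature means and server-shared reference means. FedCiR's actual objective is information-theoretic; this moment-matching regularizer only illustrates the idea of pulling features toward a shared anchor.

```python
import torch

def invariance_penalty(features, labels, global_means):
    """features: (B, D); labels: (B,); global_means: (C, D) server-shared anchors (assumed)."""
    loss = features.new_zeros(())
    for c in labels.unique():
        local_mean = features[labels == c].mean(dim=0)
        # Pull this client's class-c feature mean toward the shared anchor.
        loss = loss + (local_mean - global_means[c]).pow(2).sum()
    return loss
```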
arXiv Detail & Related papers (2023-08-30T06:36:32Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a class prototype similarity distillation method within a federated framework, to align the local and global models.
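A minimal sketch of prototype-similarity distillation as described: soften each sample's similarity to the class prototypes under the global model and use it as a teacher signal for the local model. Prototype construction and the temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def csd_loss(local_feats, global_feats, prototypes, tau=2.0):
    """local_feats, global_feats: (B, D); prototypes: (C, D) class prototypes (assumed)."""
    protos = F.normalize(prototypes, dim=1)
    local_sim = F.normalize(local_feats, dim=1) @ protos.t()
    global_sim = F.normalize(global_feats, dim=1) @ protos.t()
    # Distill the global model's softened prototype similarities into the local model.
    return F.kl_div(
        F.log_softmax(local_sim / tau, dim=1),
        F.softmax(global_sim.detach() / tau, dim=1),
        reduction="batchmean",
    ) * tau * tau
```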
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- FedCME: Client Matching and Classifier Exchanging to Handle Data Heterogeneity in Federated Learning [5.21877373352943]
Data heterogeneity across clients is one of the key challenges in Federated Learning (FL).
We propose FedCME, a novel FL framework based on client matching and classifier exchanging.
Experimental results demonstrate that FedCME performs better than FedAvg, FedProx, MOON and FedRS on popular federated learning benchmarks.
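A hedged sketch of the matching-and-exchanging idea: pair clients whose label distributions differ most, then have each pair swap classifier heads so the heads see complementary data. The greedy pairing rule and swap granularity are illustrative assumptions, not FedCME's exact scheme.

```python
import numpy as np

def match_and_exchange(label_hists, classifiers):
    """label_hists: (K, C) per-client label distributions; classifiers: list of K heads."""
    remaining = list(range(len(classifiers)))
    pairs = []
    while len(remaining) > 1:
        i = remaining.pop(0)
        # Greedily match i with the client whose label histogram is farthest away.
        j = max(remaining, key=lambda k: np.abs(label_hists[i] - label_hists[k]).sum())
        remaining.remove(j)
        classifiers[i], classifiers[j] = classifiers[j], classifiers[i]
        pairs.append((i, j))
    return pairs
```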
arXiv Detail & Related papers (2023-07-17T15:40:45Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC).
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local and global model parameters.
Experimental results and analysis demonstrate that FedDC yields faster convergence and better performance on various image classification tasks.
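A minimal sketch of the drift mechanism described above: each client keeps an auxiliary drift variable, penalizes local training so that local weights plus drift stay close to the global weights, and folds the new local-global gap into the drift after training. Coefficients and the update schedule are illustrative assumptions.

```python
import torch

def drift_penalty(local_params, global_params, drift, mu=0.01):
    """All arguments are flattened parameter tensors of equal shape."""
    return 0.5 * mu * (local_params + drift - global_params).pow(2).sum()

def update_drift(drift, local_params, global_params):
    """After local training, accumulate the new local-global gap into the drift."""
    return drift + (local_params - global_params)
```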
arXiv Detail & Related papers (2022-03-22T14:06:26Z)
- Robust Convergence in Federated Learning through Label-wise Clustering [6.693651193181458]
Non-IID datasets and heterogeneous environments across local clients are regarded as major issues in Federated Learning (FL).
We propose a novel label-wise clustering algorithm that guarantees trainability among geographically heterogeneous local clients.
Our paper shows that the proposed label-wise clustering achieves prompt and robust convergence compared to other FL algorithms.
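A hedged sketch of the grouping step: cluster clients by the similarity of their label histograms and aggregate models only within each group, so clients with compatible label distributions train together. The clustering method (k-means here) is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_clients(label_hists, n_clusters=3):
    """label_hists: (K, C) array of per-client label distributions."""
    groups = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(label_hists)
    return [np.where(groups == g)[0] for g in range(n_clusters)]
```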
arXiv Detail & Related papers (2021-12-28T18:13:09Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
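A hedged sketch of the client-side variant's bookkeeping: find the classes a client under-represents and top them up with synthetic samples. `zero_shot_generate` is a hypothetical stand-in for the paper's data-free generation step, which this sketch does not implement.

```python
def augment_underrepresented(class_counts, target_per_class, zero_shot_generate):
    """class_counts: {class_id: n_samples}; returns synthetic (x, y) pairs."""
    synthetic = []
    for c, n in class_counts.items():
        if n < target_per_class:
            # Hypothetical generator producing samples for class c without real data.
            xs = zero_shot_generate(class_id=c, n_samples=target_per_class - n)
            synthetic.extend((x, c) for x in xs)
    return synthetic
```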
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
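A minimal sketch of the alternating scheme described, assuming a shared representation `body` and a small per-client `head`: many cheap head updates with the representation frozen, then one pass over the representation. Step counts and optimizers are illustrative assumptions.

```python
import torch

def local_round(body, head, loader, head_steps=10, lr=0.01):
    head_opt = torch.optim.SGD(head.parameters(), lr=lr)
    body_opt = torch.optim.SGD(body.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(head_steps):            # many low-dimensional head updates
        for x, y in loader:
            head_opt.zero_grad()
            loss_fn(head(body(x).detach()), y).backward()  # representation frozen
            head_opt.step()
    for x, y in loader:                    # one pass updating the representation
        body_opt.zero_grad()
        loss_fn(head(body(x)), y).backward()
        body_opt.step()
    return body.state_dict()               # only the shared representation is sent
```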
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.