Federated Learning with Bilateral Curation for Partially Class-Disjoint Data
- URL: http://arxiv.org/abs/2405.18972v1
- Date: Wed, 29 May 2024 10:34:44 GMT
- Title: Federated Learning with Bilateral Curation for Partially Class-Disjoint Data
- Authors: Ziqing Fan, Ruipeng Zhang, Jiangchao Yao, Bo Han, Ya Zhang, Yanfeng Wang
- Abstract summary: Partially class-disjoint data (PCDD), a common yet under-explored data formation, severely challenges the performance of federated algorithms.
We propose a novel approach called FedGELA, where the classifier is globally fixed as a simplex Equiangular Tight Frame (ETF) while locally adapted to the personal distributions.
We conduct extensive experiments on a range of datasets to demonstrate that our FedGELA achieves promising performance.
- Score: 47.55180390473258
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Partially class-disjoint data (PCDD), a common yet under-explored data formation where each client contributes a part of classes (instead of all classes) of samples, severely challenges the performance of federated algorithms. Without full classes, the local objective will contradict the global objective, yielding the angle collapse problem for locally missing classes and the space waste problem for locally existing classes. As far as we know, none of the existing methods can intrinsically mitigate PCDD challenges to achieve holistic improvement in the bilateral views (both global view and local view) of federated learning. To address this dilemma, we are inspired by the strong generalization of the simplex Equiangular Tight Frame (ETF) on imbalanced data, and propose a novel approach called FedGELA, where the classifier is globally fixed as a simplex ETF while locally adapted to the personal distributions. Globally, FedGELA provides fair and equal discrimination for all classes and avoids inaccurate updates of the classifier, while locally it utilizes the space of locally missing classes for locally existing classes. We conduct extensive experiments on a range of datasets to demonstrate that our FedGELA achieves promising performance (an average improvement of 3.9% over FedAvg and 1.5% over the best baselines) and provide both local and global convergence guarantees. Source code is available at: https://github.com/MediaBrain-SJTU/FedGELA.git.
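To make the fixed-classifier idea concrete, here is a minimal NumPy sketch (not the authors' released implementation; the function name `simplex_etf` and the shapes are illustrative assumptions) of constructing a simplex ETF whose columns serve as fixed class prototypes: every prototype has unit norm, and every pair meets at the same maximal angle with inner product -1/(K-1).

```python
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int, seed: int = 0) -> np.ndarray:
    """Build a (feat_dim x num_classes) simplex ETF classifier matrix.

    Columns m_i satisfy ||m_i|| = 1 and <m_i, m_j> = -1/(K-1) for i != j,
    i.e. all class prototypes are pairwise equiangular and maximally spread.
    """
    assert feat_dim >= num_classes, "need feat_dim >= num_classes"
    rng = np.random.default_rng(seed)
    # Random partial orthogonal matrix U (feat_dim x K) with U^T U = I_K.
    u, _ = np.linalg.qr(rng.standard_normal((feat_dim, num_classes)))
    k = num_classes
    # M = sqrt(K / (K - 1)) * U (I_K - (1/K) 1 1^T)
    return np.sqrt(k / (k - 1)) * u @ (np.eye(k) - np.ones((k, k)) / k)

etf = simplex_etf(num_classes=10, feat_dim=64)
gram = etf.T @ etf  # diagonal ~1, off-diagonal ~ -1/9
print(np.round(gram[:3, :3], 3))
```

In a federated run, the classifier weights would stay frozen at this ETF while only the feature extractor is trained; FedGELA's local adaptation of the structure to each client's class distribution is not reproduced in this sketch.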
Related papers
- Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping [64.58402571292723]
We propose a manifold reshaping approach called FedMR to calibrate the feature space of local training.
We conduct extensive experiments on a range of datasets to demonstrate that our FedMR achieves much higher accuracy and better communication efficiency.
arXiv Detail & Related papers (2024-05-29T10:56:13Z)
- Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
For the heterogeneity issue, we leverage irrelevant unlabeled public data for communication.
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, a class prototype similarity distillation method in a federated framework, to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Towards Unbiased Training in Federated Open-world Semi-supervised Learning [15.08153616709326]
We propose a novel Federated open-world Semi-Supervised Learning (FedoSSL) framework, which can solve the key challenge in distributed and open-world settings.
We adopt an uncertainty-aware suppressed loss to alleviate the biased training between locally unseen and globally unseen classes.
The proposed FedoSSL can be easily adapted to state-of-the-art FL methods, which is also validated via extensive experiments on benchmarks and real-world datasets.
arXiv Detail & Related papers (2023-05-01T11:12:37Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
Data samples in the real world usually follow a long-tailed distribution, and FL on such decentralized, long-tailed data yields a poorly-behaved global model.
In this work, we integrate local real data with global gradient prototypes to form locally balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (a sketch of the underlying MMD estimator follows this list).
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Preservation of the Global Knowledge by Not-True Self Knowledge Distillation in Federated Learning [8.474470736998136]
In Federated Learning (FL), a strong global model is collaboratively learned by aggregating the clients' locally trained models.
We observe that fitting on a biased local distribution shifts the features of the global distribution and results in forgetting of global knowledge.
We propose a simple yet effective framework, Federated Local Self-Distillation (FedLSD), which utilizes the global knowledge on locally available data.
arXiv Detail & Related papers (2021-06-06T11:51:47Z)
- Robustness and Personalization in Federated Learning: A Unified Approach via Regularization [4.7234844467506605]
We present a class of methods for robust, personalized federated learning, called Fed+.
The principal advantage of Fed+ is that it better accommodates the real-world characteristics found in federated training.
We demonstrate the benefits of Fed+ through extensive experiments on benchmark datasets.
arXiv Detail & Related papers (2020-09-14T10:04:30Z)
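As referenced in the Divide and Contrast entry above, the alignment between source-like and target-specific samples relies on an MMD loss. Below is a minimal PyTorch sketch of the standard biased squared-MMD estimator with a Gaussian (RBF) kernel; the function name, the fixed bandwidth `sigma`, and the feature shapes are assumptions, and DaC's memory-bank bookkeeping is omitted.

```python
import torch

def gaussian_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between samples x (n, d) and y (m, d)."""
    def rbf(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Gaussian kernel k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()

src_like = torch.randn(32, 64)       # illustrative source-like features
tgt_specific = torch.randn(48, 64)   # illustrative target-specific features
loss = gaussian_mmd(src_like, tgt_specific)
```

Minimizing this quantity over the feature extractor pulls the two empirical distributions together, which is the role the MMD term plays in DaC's alignment step.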