Preservation of the Global Knowledge by Not-True Self Knowledge Distillation in Federated Learning
- URL: http://arxiv.org/abs/2106.03097v1
- Date: Sun, 6 Jun 2021 11:51:47 GMT
- Title: Preservation of the Global Knowledge by Not-True Self Knowledge Distillation in Federated Learning
- Authors: Gihun Lee, Yongjin Shin, Minchan Jeong, Se-Young Yun
- Abstract summary: In Federated Learning (FL), a strong global model is collaboratively learned by aggregating the clients' locally trained models.
We observe that fitting to a biased local distribution shifts the features learned on the global distribution and results in forgetting of global knowledge.
We propose a simple yet effective framework, Federated Local Self-Distillation (FedLSD), which utilizes the global knowledge on locally available data.
- Score: 8.474470736998136
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Federated Learning (FL), a strong global model is collaboratively learned by aggregating the clients' locally trained models. Although this removes the need to access clients' data directly, the global model's convergence often suffers from data heterogeneity. This paper suggests that forgetting could be the bottleneck of global convergence. We observe that fitting to a biased local distribution shifts the features learned on the global distribution and results in forgetting of global knowledge. We consider this phenomenon an analogy to Continual Learning, which also faces catastrophic forgetting when fitted to a new task distribution. Based on our findings, we hypothesize that tackling the forgetting in local training relieves the data heterogeneity problem. To this end, we propose a simple yet effective framework, Federated Local Self-Distillation (FedLSD), which utilizes the global knowledge on locally available data. By following the global perspective on local data, FedLSD encourages the learned features to preserve global knowledge and to have consistent views across local models, thus improving convergence without compromising data privacy. Under our framework, we further extend FedLSD to FedLS-NTD, which only considers the not-true class signals to compensate for the noisy predictions of the global model. We validate that both FedLSD and FedLS-NTD significantly improve performance on standard FL benchmarks in various setups, especially in the extreme data heterogeneity cases.
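To make the not-true distillation idea concrete, here is a minimal PyTorch sketch of a loss in the spirit of FedLS-NTD: cross-entropy on the true class plus a KL term computed only over the not-true classes. The function name, `tau` (temperature), and `beta` (weighting) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def not_true_distillation_loss(local_logits, global_logits, targets,
                               tau=1.0, beta=1.0):
    # Standard cross-entropy on the true class.
    ce = F.cross_entropy(local_logits, targets)

    # Drop the true-class column so distillation only uses the global
    # model's signal on the not-true classes (illustrative sketch).
    num_classes = local_logits.size(1)
    mask = F.one_hot(targets, num_classes).bool()
    local_nt = local_logits[~mask].view(-1, num_classes - 1)
    global_nt = global_logits[~mask].view(-1, num_classes - 1)

    # Temperature-scaled KL from the global (teacher) to the local
    # (student) distribution over the not-true classes.
    kl = F.kl_div(F.log_softmax(local_nt / tau, dim=1),
                  F.softmax(global_nt / tau, dim=1),
                  reduction='batchmean') * (tau ** 2)
    return ce + beta * kl
```

Masking the true class lets the local model fit its (possibly biased) labels freely while still matching the global model's relative view of the remaining classes, which is how the distillation term preserves global knowledge.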
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
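For reference, the aggregate-then-adapt pattern that FedAF departs from is the standard FedAvg-style server step: a data-size-weighted average of client models. A minimal sketch, with illustrative names:

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    # Weighted-average client state_dicts by local dataset size.
    total = sum(client_sizes)
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```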
- FedDistill: Global Model Distillation for Local Model De-Biasing in Non-IID Federated Learning [10.641875933652647]
Federated Learning (FL) is a novel approach that allows for collaborative machine learning.
FL faces challenges due to non-uniformly distributed (non-iid) data across clients.
This paper introduces FedDistill, a framework enhancing the knowledge transfer from the global model to local models.
arXiv Detail & Related papers (2024-04-14T10:23:30Z)
- FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning [27.28589196972422]
Federated Learning (FL) aggregates locally trained models from individual clients to construct a global model.
FL often suffers from significant performance degradation when clients have heterogeneous data distributions.
We propose a novel method, Federated Stabilized Orthogonal Learning (FedSOL), which balances local and global learning.
arXiv Detail & Related papers (2023-08-24T03:43:02Z)
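FedSOL's exact procedure is more involved than the blurb suggests, but the underlying notion of balancing local and global objectives can be pictured with generic gradient projection: keep only the component of the local gradient orthogonal to a global (proximal) gradient direction. This is a hedged, generic sketch, not FedSOL's actual algorithm:

```python
import torch

def orthogonal_component(local_grad, global_grad, eps=1e-12):
    # Remove from the local gradient its projection onto the global
    # (proximal) gradient, keeping only the orthogonal part.
    g = global_grad.flatten()
    l = local_grad.flatten()
    proj = (torch.dot(l, g) / (torch.dot(g, g) + eps)) * g
    return (l - proj).view_as(local_grad)
```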
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a class-prototype similarity distillation method that aligns the local and global models in a federated framework.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
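The logit drift that motivates FedCSD is easy to track empirically. Below is a minimal sketch of measuring the mean L2 gap between local and global logits (an illustration only; this is not FedCSD's class-prototype similarity distillation itself):

```python
import torch

@torch.no_grad()
def mean_logit_gap(local_model, global_model, loader, device='cpu'):
    # Average L2 distance between local and global logits on a loader.
    local_model.eval(); global_model.eval()
    total, count = 0.0, 0
    for x, _ in loader:
        x = x.to(device)
        diff = local_model(x) - global_model(x)
        total += diff.norm(dim=1).sum().item()
        count += x.size(0)
    return total / max(count, 1)
```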
- Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning [14.063276595895049]
Federated Learning (FL) is a paradigm to train a global machine learning model across distributed clients without sharing raw data.
We propose FedLU, a novel FL framework for heterogeneous KG embedding learning and unlearning.
We show that FedLU achieves superior results in both link prediction and knowledge forgetting.
arXiv Detail & Related papers (2023-02-04T02:44:48Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
Real-world data samples usually follow a long-tailed distribution, and FL on decentralized, long-tailed data yields a poorly behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation [17.570719572024608]
FedHKD (Federated Hyper-Knowledge Distillation) is a novel FL algorithm in which clients rely on knowledge distillation to train local models.
Unlike other KD-based pFL methods, FedHKD neither relies on a public dataset nor deploys a generative model at the server.
We conduct extensive experiments on visual datasets in a variety of scenarios, demonstrating that FedHKD provides significant improvements in both personalized and global model performance.
arXiv Detail & Related papers (2023-01-21T16:20:57Z)
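The "hyper-knowledge" FedHKD exchanges in place of a public dataset or a server-side generator can be pictured as compact per-class summaries. The sketch below computes one plausible form, class-mean features plus class-mean soft predictions, and is an illustration under that assumption, not the paper's exact protocol:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def client_hyper_knowledge(features, logits, targets, num_classes):
    # Per-class mean feature (prototype) and mean soft prediction;
    # classes absent from the local data get zero placeholders.
    protos, soft = [], []
    for c in range(num_classes):
        idx = targets == c
        if idx.any():
            protos.append(features[idx].mean(dim=0))
            soft.append(F.softmax(logits[idx], dim=1).mean(dim=0))
        else:
            protos.append(torch.zeros(features.size(1)))
            soft.append(torch.zeros(num_classes))
    return torch.stack(protos), torch.stack(soft)
```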
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-iid distributed data results in a deflected local optimum.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without hindering data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model at the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
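Server-side data-free fine-tuning in the spirit of FedFTG can be sketched as distilling an ensemble of client models into the global model on generator-synthesized inputs. The loop below shows only that distillation step with a fixed, pre-trained generator; FedFTG itself also trains the generator, which is omitted here, and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def finetune_global(global_model, client_models, generator,
                    steps=100, z_dim=100, batch=64, lr=1e-3, device='cpu'):
    opt = torch.optim.SGD(global_model.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            z = torch.randn(batch, z_dim, device=device)
            x = generator(z)  # pseudo-samples; no real client data used
            # The averaged client ensemble acts as the teacher.
            teacher = torch.stack([m(x) for m in client_models]).mean(dim=0)
        loss = F.kl_div(F.log_softmax(global_model(x), dim=1),
                        F.softmax(teacher, dim=1),
                        reduction='batchmean')
        opt.zero_grad()
        loss.backward()
        opt.step()
    return global_model
```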
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)