UPFL: Unsupervised Personalized Federated Learning towards New Clients
- URL: http://arxiv.org/abs/2307.15994v1
- Date: Sat, 29 Jul 2023 14:30:11 GMT
- Title: UPFL: Unsupervised Personalized Federated Learning towards New Clients
- Authors: Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li and Ming Gao
- Abstract summary: In this paper, we address a relatively unexplored problem in federated learning.
When a federated model has been trained and deployed, and an unlabeled new client joins, providing a personalized model for the new client becomes a highly challenging task.
We extend the adaptive risk minimization technique into the unsupervised personalized federated learning setting and propose our method, FedTTA.
- Score: 13.98952154869707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning has gained significant attention as a
promising approach to address the challenge of data heterogeneity. In this
paper, we address a relatively unexplored problem in federated learning. When a
federated model has been trained and deployed, and an unlabeled new client
joins, providing a personalized model for the new client becomes a highly
challenging task. To address this challenge, we extend the adaptive risk
minimization technique into the unsupervised personalized federated learning
setting and propose our method, FedTTA. We further improve FedTTA with two
simple yet effective optimization strategies: enhancing the training of the
adaptation model with proxy regularization and early-stopping the adaptation
through entropy. Moreover, we propose a knowledge distillation loss
specifically designed for FedTTA to address the device heterogeneity. Extensive
experiments on five datasets against eleven baselines demonstrate the
effectiveness of our proposed FedTTA and its variants. The code is available
at: https://github.com/anonymous-federated-learning/code.
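As a rough illustration of the entropy-based early-stopping strategy mentioned above, here is a minimal PyTorch sketch that rolls back and halts adaptation once prediction entropy on the unlabeled batch starts to rise. The helper names and the stand-in `loss_fn` are assumptions for illustration, not FedTTA's actual procedure (see the repository above for that).
```python
import copy

import torch
import torch.nn.functional as F

def prediction_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax predictions over a batch."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1).mean()

def adapt_with_entropy_stopping(model, loss_fn, batch, optimizer, max_steps=50):
    """Unsupervised adaptation that stops early once entropy rises."""
    with torch.no_grad():
        best_entropy = prediction_entropy(model(batch)).item()
    best_state = copy.deepcopy(model.state_dict())
    for _ in range(max_steps):
        loss = loss_fn(model, batch)  # stand-in for the unsupervised adaptation objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            entropy = prediction_entropy(model(batch)).item()
        if entropy > best_entropy:             # rising entropy signals over-adaptation
            model.load_state_dict(best_state)  # roll back to the best state seen
            break
        best_entropy = entropy
        best_state = copy.deepcopy(model.state_dict())
    return model
```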
Related papers
- Tailored Federated Learning: Leveraging Direction Regulation & Knowledge Distillation [2.1670850691529275]
Federated learning has emerged as a transformative training paradigm in privacy-sensitive domains like healthcare.
We propose an FL optimization algorithm that integrates model delta regularization, personalized models, federated knowledge distillation, and mix-pooling.
arXiv Detail & Related papers (2024-09-29T15:39:39Z)
- Ranking-based Client Selection with Imitation Learning for Efficient Federated Learning [20.412469498888292]
Federated Learning (FL) enables multiple devices to collaboratively train a shared model.
The selection of participating devices in each training round critically affects both the model performance and training efficiency.
We introduce a novel device selection solution called FedRank, which is an end-to-end, ranking-based approach.
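The summary above leaves the ranker unspecified, but the general shape of ranking-based selection is: score every candidate client, then take the top-k for the round. A minimal sketch, with a hypothetical heuristic scorer standing in for FedRank's learned, imitation-trained ranking model:
```python
from typing import Callable, Dict, List

def select_clients(client_states: Dict[str, dict],
                   score_fn: Callable[[dict], float],
                   k: int) -> List[str]:
    """Rank candidate clients by score and keep the top-k for this round."""
    ranked = sorted(client_states, key=lambda cid: score_fn(client_states[cid]), reverse=True)
    return ranked[:k]

# Hypothetical scorer favoring high-loss, low-latency clients:
clients = {"a": {"loss": 2.1, "latency": 0.3},
           "b": {"loss": 0.9, "latency": 0.1},
           "c": {"loss": 1.5, "latency": 0.8}}
print(select_clients(clients, lambda s: s["loss"] - s["latency"], k=2))  # ['a', 'b']
```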
arXiv Detail & Related papers (2024-05-07T08:44:29Z)
- Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
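As a hedged sketch of the attention idea: weight peer clients by the softmax of a similarity score between model updates, so clients with similar data distributions contribute more to a given client's personalized aggregate. The cosine-similarity proxy and names below are illustrative assumptions, not FedACS's exact mechanism:
```python
from typing import List

import torch
import torch.nn.functional as F

def personalized_aggregate(own_update: torch.Tensor,
                           peer_updates: List[torch.Tensor],
                           temperature: float = 1.0) -> torch.Tensor:
    """Attention-weighted average of peer updates, keyed on similarity to our own."""
    sims = torch.stack([F.cosine_similarity(own_update, p, dim=0) for p in peer_updates])
    attn = F.softmax(sims / temperature, dim=0)             # attention weights over peers
    return sum(w * p for w, p in zip(attn, peer_updates))   # similar peers dominate
```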
arXiv Detail & Related papers (2023-12-23T03:31:46Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
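For intuition, a client-side AMSGrad step has the shape below; FedLALR additionally lets each client auto-tune its learning rate from local statistics, and that exact schedule is defined in the paper. This is a generic AMSGrad sketch with a fixed per-client `lr`, not the paper's rule:
```python
import torch

class LocalAMSGrad:
    """Per-client AMSGrad state; each client keeps its own instance (and lr)."""

    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.99, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = self.v_hat = None

    def step(self, param: torch.Tensor, grad: torch.Tensor) -> torch.Tensor:
        if self.m is None:
            self.m = torch.zeros_like(param)
            self.v = torch.zeros_like(param)
            self.v_hat = torch.zeros_like(param)
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = torch.maximum(self.v_hat, self.v)  # AMSGrad's non-decreasing v
        return param - self.lr * self.m / (self.v_hat.sqrt() + self.eps)
```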
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
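The core trick of over-the-air computation is that simultaneous analog transmissions superpose on the channel, so the server receives a (noisy) sum of all client updates in one shot instead of k separate uploads. A toy simulation under illustrative assumptions (unit channel gains, additive Gaussian noise):
```python
from typing import List

import torch

def over_the_air_aggregate(updates: List[torch.Tensor], noise_std: float = 0.01) -> torch.Tensor:
    """Simulate analog aggregation: the channel adds the signals for free."""
    received = sum(updates)                                       # superposition on the channel
    received = received + noise_std * torch.randn_like(received)  # receiver noise
    return received / len(updates)                                # rescale to an average
```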
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- FedABC: Targeting Fair Competition in Personalized Federated Learning [76.9646903596757]
Federated learning aims to collaboratively train models without accessing clients' local private data.
We propose a novel and generic PFL framework termed Federated Averaging via Binary Classification, dubbed FedABC.
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate the unfair competition between classes.
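A minimal sketch of the one-vs-all idea: decompose a C-class problem into C binary problems, so each class is trained against "everything else" rather than competing jointly with every other class. This is a generic illustration, not FedABC's exact personalized objective:
```python
import torch
import torch.nn.functional as F

def one_vs_all_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """logits: (batch, num_classes); labels: (batch,) integer class ids."""
    targets = F.one_hot(labels, num_classes=logits.size(1)).float()
    # one independent binary classifier per class
    return F.binary_cross_entropy_with_logits(logits, targets)
```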
arXiv Detail & Related papers (2023-02-15T03:42:59Z)
- FAT: Federated Adversarial Training [5.287156503763459]
Federated learning (FL) is one of the most important paradigms addressing privacy and data governance issues in machine learning (ML).
We take the first known steps towards federated adversarial training (FAT), combining both methods to reduce the threat of evasion attacks during inference while preserving data privacy during training.
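A minimal sketch of adversarial training inside a client's local update: craft perturbed inputs on-device (FGSM here for brevity) and train on them, so raw data never leaves the client. FAT's actual recipe is in the paper; this sketch assumes inputs normalized to [0, 1]:
```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step evasion-style perturbation within an L-infinity ball."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def local_adversarial_step(model, optimizer, x, y):
    x_adv = fgsm(model, x, y)                    # perturb on-device
    optimizer.zero_grad()                        # clear grads left over from fgsm
    F.cross_entropy(model(x_adv), y).backward()  # train on adversarial examples
    optimizer.step()
```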
arXiv Detail & Related papers (2020-12-03T09:47:47Z)
- Personalized Cross-Silo Federated Learning on Non-IID Data [62.68467223450439]
Non-IID data present a tough challenge for federated learning.
We propose a novel idea of pairwise collaborations between clients with similar data.
arXiv Detail & Related papers (2020-07-07T21:38:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.