Personalized Federated Learning on Long-Tailed Data via Adversarial
Feature Augmentation
- URL: http://arxiv.org/abs/2303.15168v1
- Date: Mon, 27 Mar 2023 13:00:20 GMT
- Title: Personalized Federated Learning on Long-Tailed Data via Adversarial
Feature Augmentation
- Authors: Yang Lu, Pinxin Qian, Gang Huang, Hanzi Wang
- Abstract summary: PFL aims to learn personalized models for each client based on the knowledge across all clients in a privacy-preserving manner.
Existing PFL methods assume that the underlying global data across all clients are uniformly distributed without considering the long-tail distribution.
We propose Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem in PFL.
- Score: 24.679535905451758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized Federated Learning (PFL) aims to learn personalized models for
each client based on the knowledge across all clients in a privacy-preserving
manner. Existing PFL methods generally assume that the underlying global data
across all clients are uniformly distributed without considering the long-tail
distribution. The joint problem of data heterogeneity and long-tail
distribution in the FL environment is more challenging and severely affects the
performance of personalized models. In this paper, we propose a PFL method
called Federated Learning with Adversarial Feature Augmentation (FedAFA) to
address this joint problem in PFL. FedAFA optimizes the personalized model for
each client by producing a balanced feature set to enhance the local minority
classes. The local minority class features are generated by transferring the
knowledge from the local majority class features extracted by the global model
in an adversarial example learning manner. The experimental results on
benchmarks under different settings of data heterogeneity and long-tail
distribution demonstrate that FedAFA significantly improves the personalized
performance of each client compared with state-of-the-art PFL algorithms.
The code is available at https://github.com/pxqian/FedAFA.
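The adversarial feature-augmentation idea can be illustrated with a minimal NumPy sketch (not the authors' implementation; the classifier head, shapes, and step sizes are hypothetical): for a linear head, an FGSM-style signed-gradient step nudges a majority-class feature toward a higher minority-class score, synthesizing a new feature for the local minority class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear classifier head: row W[c] scores a feature x for class c.
num_classes, feat_dim = 3, 8
W = rng.normal(size=(num_classes, feat_dim))

def augment_minority(x_major, minority_class, step=0.1, n_steps=10):
    """Synthesize a minority-class feature by perturbing a majority-class
    feature with signed-gradient (FGSM-style) ascent on the minority logit.
    For a linear head, the gradient of W[c] @ x w.r.t. x is simply W[c]."""
    x = x_major.copy()
    grad = W[minority_class]          # constant gradient for a linear head
    for _ in range(n_steps):
        x = x + step * np.sign(grad)  # signed ascent step on the minority logit
    return x

x_major = rng.normal(size=feat_dim)
x_aug = augment_minority(x_major, minority_class=2)
# The perturbed feature now scores strictly higher for the minority class.
assert W[2] @ x_aug > W[2] @ x_major
```

In the actual method the perturbation is driven by a nonlinear feature extractor's gradients, but the linear case shows the transfer direction clearly.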
Related papers
- Personalized Federated Learning on Heterogeneous and Long-Tailed Data via Expert Collaborative Learning [12.008179288136166]
The data collected in real-world scenarios is likely to follow a long-tailed distribution.
The presence of long-tailed data can significantly degrade the performance of PFL models.
We propose a method called Expert Collaborative Learning (ECL) to tackle this problem.
arXiv Detail & Related papers (2024-08-04T13:11:49Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning [9.22574528776347]
FedSelect is a novel PFL algorithm inspired by the iterative subnetwork discovery procedure used for the Lottery Ticket Hypothesis.
We show that FedSelect outperforms recent state-of-the-art PFL algorithms under challenging client data heterogeneity settings.
arXiv Detail & Related papers (2024-04-03T05:36:21Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, a class prototype similarity distillation method in a federated framework that aligns the local and global models.
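The logit-alignment idea behind this line of work can be sketched as a generic temperature-scaled distillation term (illustrative only; FedCSD's actual loss is built on class-prototype similarities, not raw logits):

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(local_logits, global_logits, T=2.0):
    """KL(global || local) on temperature-softened logits: penalizes the
    drift between a client's local model and the aggregated global model."""
    p = softmax(np.asarray(global_logits, dtype=float) / T)  # teacher: global
    q = softmax(np.asarray(local_logits, dtype=float) / T)   # student: local
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Identical logits incur zero loss; drifted logits are penalized.
assert distill_loss([1.0, 2.0, 0.5], [1.0, 2.0, 0.5]) < 1e-12
assert distill_loss([3.0, -1.0, 0.5], [1.0, 2.0, 0.5]) > 0.0
```

Adding such a term to the local objective discourages the logit divergence the authors observe during continued local updates.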
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose FedSampling, a novel uniform data sampling strategy for federated learning.
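The contrast with uniform client sampling can be sketched as follows (a minimal sketch of the idea; FedSampling's actual method estimates client data sizes in a privacy-preserving way rather than reading them directly):

```python
import numpy as np

def sample_clients(client_sizes, k, rng):
    """Sample k clients with probability proportional to local data size,
    so that data drawn from the chosen clients approximates uniform
    sampling over the pooled dataset (unlike uniform client sampling,
    which over-represents small clients' data)."""
    p = np.asarray(client_sizes, dtype=float)
    p = p / p.sum()
    return rng.choice(len(client_sizes), size=k, replace=False, p=p)

rng = np.random.default_rng(0)
chosen = sample_clients([100, 10, 10, 10], k=2, rng=rng)
assert len(chosen) == 2 and all(0 <= c < 4 for c in chosen)
```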
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- Federated Learning of Shareable Bases for Personalization-Friendly Image Classification [54.72892987840267]
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
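The basis-combination idea can be sketched in a few lines (hypothetical shapes and normalization; not the FedBasis code):

```python
import numpy as np

# Shared across all clients: each "basis" model flattened to a weight vector.
rng = np.random.default_rng(1)
num_bases, num_params = 4, 10
bases = rng.normal(size=(num_bases, num_params))

def personalize(coeffs):
    """Form a client's personalized model as a convex combination of the
    shared bases. Only this small coefficient vector is learned per client,
    not the full model weights."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c.sum()        # normalize to a convex combination
    return c @ bases       # (num_bases,) @ (num_bases, num_params)

w_client = personalize([0.7, 0.1, 0.1, 0.1])
assert w_client.shape == (num_params,)
```

A new client therefore only has to fit `num_bases` coefficients, which is the source of the claimed personalization efficiency.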
arXiv Detail & Related papers (2023-04-16T20:19:18Z)
- Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
arXiv Detail & Related papers (2023-03-15T15:02:15Z)
- Unifying Distillation with Personalization in Federated Learning [1.8262547855491458]
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data.
In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients.
In this paper, we address this problem with PersFL, a two-stage personalized learning algorithm.
In the first stage, PersFL finds the optimal teacher model of each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from
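The FedAvg baseline mentioned above can be sketched as dataset-size-weighted parameter averaging (a minimal sketch, not any paper's code):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average the clients' model weight vectors,
    weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)       # (num_clients, num_params)
    return (sizes / sizes.sum()) @ stacked

w = fedavg([np.array([1.0, 3.0]), np.array([3.0, 1.0])], client_sizes=[1, 3])
# Client 2 holds 3x the data, so the average leans toward its weights.
assert np.allclose(w, [2.5, 1.5])
```

The single averaged predictor is exactly what struggles under statistical heterogeneity, motivating the two-stage personalization in PersFL.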
arXiv Detail & Related papers (2021-05-31T17:54:29Z)
- Personalized Federated Learning with Moreau Envelopes [16.25105865597947]
Federated learning (FL) is a decentralized and privacy-preserving machine learning technique.
One challenge associated with FL is statistical diversity among clients.
We propose pFedMe, an algorithm for personalized FL that uses Moreau envelopes as clients' regularized loss functions.
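The Moreau-envelope formulation from this paper is the standard bi-level objective, where $f_i$ is client $i$'s local loss, $\theta_i$ its personalized model, $w$ the global model, and $\lambda$ the regularization strength:

```latex
\min_{w} \; F(w) = \frac{1}{N}\sum_{i=1}^{N} F_i(w),
\qquad
F_i(w) = \min_{\theta_i} \Big\{ f_i(\theta_i) + \frac{\lambda}{2}\,\lVert \theta_i - w \rVert^{2} \Big\}
```

Each client optimizes its personalized model $\theta_i$ around the global model $w$, while $w$ aggregates the clients' envelope losses; $\lambda$ controls how far personalization may drift from the global model.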
arXiv Detail & Related papers (2020-06-16T00:55:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.