FedLoGe: Joint Local and Generic Federated Learning under Long-tailed
Data
- URL: http://arxiv.org/abs/2401.08977v2
- Date: Fri, 8 Mar 2024 13:37:55 GMT
- Title: FedLoGe: Joint Local and Generic Federated Learning under Long-tailed
Data
- Authors: Zikai Xiao, Zihan Chen, Liyinglan Liu, Yang Feng, Jian Wu, Wanlu Liu,
Joey Tianyi Zhou, Howard Hao Yang, Zuozhu Liu
- Abstract summary: Federated Long-Tailed Learning (Fed-LT) is a paradigm wherein data collected from decentralized local clients manifests a globally prevalent long-tailed distribution.
This paper introduces an approach termed Federated Local and Generic Model Training in Fed-LT (FedLoGe), which enhances both local and generic model performance.
- Score: 46.29190753993415
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Federated Long-Tailed Learning (Fed-LT), a paradigm wherein data collected
from decentralized local clients manifests a globally prevalent long-tailed
distribution, has garnered considerable attention in recent times. In the
context of Fed-LT, existing works have predominantly centered on addressing the
data imbalance issue to enhance the efficacy of the generic global model while
neglecting the performance at the local level. In contrast, conventional
Personalized Federated Learning (pFL) techniques are primarily devised to
optimize personalized local models under the presumption of a balanced global
data distribution. This paper introduces an approach termed Federated Local and
Generic Model Training in Fed-LT (FedLoGe), which enhances both local and
generic model performance through the integration of representation learning
and classifier alignment within a neural collapse framework. Our investigation
reveals the feasibility of employing a shared backbone as a foundational
framework for capturing overarching global trends, while concurrently employing
individualized classifiers to encapsulate distinct refinements stemming from
each client's local features. Building upon this discovery, we establish the
Static Sparse Equiangular Tight Frame Classifier (SSE-C), inspired by neural
collapse principles, which naturally prunes extraneous noisy features and
fosters the acquisition of potent data representations. Furthermore,
leveraging insights from the classifier norm patterns of neural collapse under
imbalance, we develop
Global and Local Adaptive Feature Realignment (GLA-FR) via an auxiliary global
classifier and personalized Euclidean norm transfer to align global features
with client preferences. Extensive experimental results on CIFAR-10/100-LT,
ImageNet, and iNaturalist demonstrate the advantage of our method over
state-of-the-art pFL and Fed-LT approaches.
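The abstract leaves the construction of SSE-C implicit. Below is a minimal PyTorch sketch of a fixed simplex-ETF head with magnitude-based sparsification, assuming SSE-C freezes the classifier and zeroes a fraction of its entries; the paper's exact sparsity scheme may differ, and `sparse_etf_classifier` and its arguments are illustrative names.

```python
import torch

def sparse_etf_classifier(feat_dim: int, num_classes: int,
                          sparsity: float = 0.5, seed: int = 0) -> torch.Tensor:
    # Random partial orthogonal matrix U (feat_dim x num_classes); feat_dim >= num_classes.
    g = torch.Generator().manual_seed(seed)
    U, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes, generator=g))
    K = num_classes
    # Simplex-ETF geometry from neural collapse: M = sqrt(K/(K-1)) * U (I - 11^T/K).
    M = (K / (K - 1)) ** 0.5 * U @ (torch.eye(K) - torch.ones(K, K) / K)
    # Static sparsification (assumption): keep the largest-magnitude entries per column.
    keep = max(1, int((1.0 - sparsity) * feat_dim))
    mask = torch.zeros_like(M)
    mask.scatter_(0, M.abs().topk(keep, dim=0).indices, 1.0)
    return (M * mask).T  # (num_classes, feat_dim); use as a frozen linear head
```

In use the head stays fixed (`logits = feats @ W.T` with no gradient on `W`), so only the backbone trains and the shared representation absorbs the global structure. For GLA-FR, one plausible reading of "personalized Euclidean norm transfer", sketched with illustrative names, is to keep the auxiliary global head's class directions but rescale them to each client's per-class norms:

```python
def norm_transfer(global_head: torch.Tensor, local_class_norms: torch.Tensor) -> torch.Tensor:
    # Keep global directions, adopt the client's per-class Euclidean norms
    # (assumed reading of GLA-FR, not the paper's exact rule).
    directions = global_head / global_head.norm(dim=1, keepdim=True).clamp_min(1e-12)
    return directions * local_class_norms.unsqueeze(1)
```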
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257] (arXiv: 2024-11-01)
Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model.
Personalized federated learning (PFL) seeks to address client heterogeneity by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions.
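A minimal sketch of what "adapting global generative classifiers to local feature distributions" could look like, assuming a Gaussian classifier over backbone features; the shrinkage rule and names (`adapt_gaussian_head`, `beta0`) are illustrative, not pFedFDA's actual algorithm.

```python
import torch

def adapt_gaussian_head(mu_global, mu_local, counts, beta0=0.9):
    # Shrink local per-class feature means toward the global ones, trusting
    # local estimates more for well-populated classes (illustrative rule).
    beta = (counts / (counts + beta0 * counts.mean())).unsqueeze(1)  # (K, 1)
    return beta * mu_local + (1 - beta) * mu_global

def predict(x, mu_adapted):
    # Classify a feature by its nearest adapted class mean (identity covariance).
    return torch.cdist(x.unsqueeze(0), mu_adapted).squeeze(0).argmin()
```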
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836] (arXiv: 2024-02-10)
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
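The summary does not spell out FedImpro's drift measure; as a placeholder, here is one common parameter-space proxy, with all names illustrative and the paper's own measure possibly quite different.

```python
import torch

@torch.no_grad()
def parameter_drift(local_model, global_model) -> float:
    # l2 distance between a client's updated weights and the global weights;
    # a simple proxy for client drift, not FedImpro's actual measure.
    sq = sum(((p - g) ** 2).sum() for p, g in
             zip(local_model.parameters(), global_model.parameters()))
    return float(sq.sqrt())
```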
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441] (arXiv: 2023-08-20)
Federated Learning (FL) enables multiple clients to learn collaboratively in a distributed way while preserving privacy.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, named FedCSD, a Class prototype Similarity Distillation in a federated framework to align the local and global models.
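A hedged sketch of class-prototype similarity distillation: weight a logit-distillation loss by how well each sample's feature matches its class prototype from the global model. The weighting rule and names are assumptions, not FedCSD's published loss.

```python
import torch
import torch.nn.functional as F

def prototype_weighted_distill(local_logits, global_logits, feats, labels,
                               global_protos, T=2.0):
    # Per-sample weight: cosine similarity to the global prototype of its class.
    w = F.cosine_similarity(feats, global_protos[labels], dim=1).clamp(min=0.0)
    kl = F.kl_div(F.log_softmax(local_logits / T, dim=1),
                  F.softmax(global_logits / T, dim=1),
                  reduction='none').sum(dim=1)
    return (w * kl).mean() * T * T
```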
- FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation [32.36334319329364] (arXiv: 2023-07-20)
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
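"Model soup" style selection can be sketched as a greedy weight average that only admits a candidate checkpoint when held-out performance does not drop. This follows the generic greedy-soup recipe and is not claimed to be FedSoup's exact procedure; `eval_fn` is an assumed callback returning validation accuracy.

```python
import copy
import torch

@torch.no_grad()
def greedy_soup(models, eval_fn):
    # Start from the first model, then fold in each candidate only if the
    # running weight average keeps or improves the validation score.
    soup = copy.deepcopy(models[0])
    n, best = 1, eval_fn(soup)
    for m in models[1:]:
        trial = copy.deepcopy(soup)
        for p_t, p_s, p_m in zip(trial.parameters(), soup.parameters(), m.parameters()):
            p_t.copy_((p_s * n + p_m) / (n + 1))
        score = eval_fn(trial)
        if score >= best:
            soup, n, best = trial, n + 1, score
    return soup
```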
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088] (arXiv: 2023-01-25)
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on decentralized, long-tailed data yields a poorly behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
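One way to read "forming local balanced datasets" is to pad underrepresented classes with global per-class prototypes when re-training the classifier head. The sketch below does this at the feature level; the padding rule and all names are illustrative.

```python
import torch

def balanced_head_batch(feats, labels, global_protos, num_classes, per_class=8):
    xs, ys = [], []
    for c in range(num_classes):
        fc = feats[labels == c]
        if len(fc) < per_class:  # tail class: pad with the global prototype
            pad = global_protos[c].expand(per_class - len(fc), -1)
            fc = torch.cat([fc, pad]) if len(fc) else pad
        xs.append(fc[:per_class])
        ys.append(torch.full((per_class,), c, dtype=torch.long))
    return torch.cat(xs), torch.cat(ys)
```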
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456] (arXiv: 2022-03-17)
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose FedFTG, a data-free knowledge distillation method that fine-tunes the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
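A compact sketch of server-side data-free distillation in the spirit of FedFTG: a generator is trained to synthesize inputs where the global student disagrees with the averaged client teachers, and the student then distills on those inputs. The two-step loop and all names are illustrative; the real method includes additional components omitted here.

```python
import torch
import torch.nn.functional as F

def server_finetune_round(generator, global_model, client_models,
                          opt_gen, opt_student, z_dim=100, batch=64):
    z = torch.randn(batch, z_dim)
    # (1) Generator step: synthesize inputs maximizing student/teacher disagreement.
    fake = generator(z)
    teacher = torch.stack([m(fake) for m in client_models]).mean(0)
    student = global_model(fake)
    gen_loss = -F.kl_div(F.log_softmax(student, dim=1),
                         F.softmax(teacher, dim=1), reduction='batchmean')
    opt_gen.zero_grad(); gen_loss.backward(); opt_gen.step()
    # (2) Distillation step: fit the global model to the client ensemble.
    with torch.no_grad():
        fake = generator(z)
        soft = F.softmax(torch.stack([m(fake) for m in client_models]).mean(0), dim=1)
    kd_loss = F.kl_div(F.log_softmax(global_model(fake), dim=1),
                       soft, reduction='batchmean')
    opt_student.zero_grad(); kd_loss.backward(); opt_student.step()
    return float(kd_loss)
```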
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699] (arXiv: 2022-03-05)
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
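Feature hallucination can be sketched as replacing a feature's first- and second-order statistics with those of another sample, MixStyle-style; this is a generic stand-in for producing diverse features, not DFH's exact formulation.

```python
import torch

def hallucinate_features(feats, eps=1e-6):
    # Re-normalize each sample's feature vector, then re-style it with the
    # mean/std of a randomly chosen other sample in the batch.
    mu = feats.mean(dim=1, keepdim=True)
    sig = feats.std(dim=1, keepdim=True) + eps
    perm = torch.randperm(feats.size(0))
    return (feats - mu) / sig * sig[perm] + mu[perm]
```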
- Personalized Federated Learning through Local Memorization [10.925242558525683] (arXiv: 2021-11-17)
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients.
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
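The mechanism resembles kNN-augmented prediction: interpolate the global model's softmax with a nearest-neighbor label distribution computed from the client's locally memorized features. A sketch with illustrative names; `lam` trades personalization against the global model.

```python
import torch
import torch.nn.functional as F

def knn_personalized_probs(query_feat, memory_feats, memory_labels,
                           global_logits, num_classes, k=10, lam=0.5):
    # Distances from the query to every memorized feature on this client.
    d = torch.cdist(query_feat.unsqueeze(0), memory_feats).squeeze(0)
    nn_idx = d.topk(k, largest=False).indices
    knn = torch.bincount(memory_labels[nn_idx], minlength=num_classes).float() / k
    return lam * knn + (1.0 - lam) * F.softmax(global_logits, dim=0)
```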
- GRP-FED: Addressing Client Imbalance in Federated Learning via Global-Regularized Personalization [6.592268037926868] (arXiv: 2021-08-31)
We present Global-Regularized Personalization (GRP-FED) to tackle the data imbalance issue.
With adaptive aggregation, the global model treats multiple clients fairly and mitigates the global long-tail issue.
Our results show that GRP-FED improves performance under both global and local scenarios.
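"Global-regularized personalization" can be sketched as a proximal penalty that keeps each personalized model near the global one while it fits local data; this FedProx-style term is a common reading, not necessarily GRP-FED's exact regularizer.

```python
import torch

def global_regularized_loss(local_loss, local_model, global_model, mu=0.01):
    # Proximal term pulling the personalized weights toward the global weights.
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(local_model.parameters(), global_model.parameters()))
    return local_loss + 0.5 * mu * prox
```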