Fed-GraB: Federated Long-tailed Learning with Self-Adjusting Gradient
Balancer
- URL: http://arxiv.org/abs/2310.07587v4
- Date: Sun, 26 Nov 2023 05:45:44 GMT
- Title: Fed-GraB: Federated Long-tailed Learning with Self-Adjusting Gradient
Balancer
- Authors: Zikai Xiao, Zihan Chen, Songshang Liu, Hualiang Wang, Yang Feng, Jin
Hao, Joey Tianyi Zhou, Jian Wu, Howard Hao Yang, Zuozhu Liu
- Abstract summary: This paper investigates a federated long-tailed learning (Fed-LT) task in which each client holds a locally heterogeneous dataset.
We propose a method termed $\texttt{Fed-GraB}$, comprised of a Self-adjusting Gradient Balancer (SGB) module.
We show that $\texttt{Fed-GraB}$ achieves state-of-the-art performance on representative datasets such as CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist.
- Score: 47.82735112096587
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Data privacy and long-tailed distribution are the norms rather than the
exception in many real-world tasks. This paper investigates a federated
long-tailed learning (Fed-LT) task in which each client holds a locally
heterogeneous dataset; if the datasets can be globally aggregated, they jointly
exhibit a long-tailed distribution. Under such a setting, existing federated
optimization and/or centralized long-tailed learning methods hardly apply due
to challenges in (a) characterizing the global long-tailed distribution under
privacy constraints and (b) adjusting the local learning strategy to cope with
the head-tail imbalance. In response, we propose a method termed
$\texttt{Fed-GraB}$, comprised of a Self-adjusting Gradient Balancer (SGB)
module that re-weights clients' gradients in a closed-loop manner, based on the
feedback of global long-tailed distribution evaluated by a Direct Prior
Analyzer (DPA) module. Using $\texttt{Fed-GraB}$, clients can effectively
alleviate the distribution drift caused by data heterogeneity during the model
training process and obtain a global model with better performance on the
minority classes while maintaining the performance of the majority classes.
Extensive experiments demonstrate that $\texttt{Fed-GraB}$ achieves
state-of-the-art performance on representative datasets such as CIFAR-10-LT,
CIFAR-100-LT, ImageNet-LT, and iNaturalist.
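The closed-loop re-weighting idea described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the prior-estimation step and the balancer update rule are simplified assumptions, and the names (`estimate_global_prior`, `GradientBalancer`) are hypothetical.

```python
# Illustrative sketch only: per-class gradient re-weighting driven by an
# estimated global class prior, in the spirit of the SGB/DPA description above.
import torch
import torch.nn.functional as F

def estimate_global_prior(client_label_counts):
    """Aggregate per-client label counts into a global class prior (DPA-like step).
    client_label_counts: list of 1-D tensors, one per client, length = num_classes."""
    total = torch.stack(client_label_counts).sum(dim=0).float()
    return total / total.sum()

class GradientBalancer:
    """Closed-loop per-class re-weighting: classes whose loss stays above the
    prior-weighted average get their gradients up-weighted over time."""
    def __init__(self, num_classes, lr=0.1):
        self.weights = torch.ones(num_classes)
        self.lr = lr

    def update(self, per_class_loss, global_prior):
        # Feedback signal: distance of each class's loss from the prior-weighted mean.
        target = (per_class_loss * global_prior).sum()
        error = per_class_loss - target
        self.weights = torch.clamp(self.weights + self.lr * error, min=0.1, max=10.0)

    def reweighted_loss(self, logits, labels):
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        return (per_sample * self.weights[labels]).mean()
```

In a federated round, each client would compute its per-class losses locally, update its balancer with the estimated global prior, and backpropagate through `reweighted_loss`.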
Related papers
- FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning [5.23984567704876]
Federated learning offers a paradigm to the challenge of preserving privacy in distributed machine learning.
Traditional approaches fail to address the class-wise bias that arises in globally long-tailed data.
The new method, FedLF, introduces three modifications in the local training phase: adaptive logit adjustment, continuous class centered optimization, and feature decorrelation.
arXiv Detail & Related papers (2024-09-18T16:25:29Z)
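FedLF's first modification, adaptive logit adjustment, builds on the standard logit-adjusted cross-entropy for long-tailed classification. The sketch below shows only that generic, non-adaptive form; `class_prior` and `tau` are illustrative parameters, not FedLF's.

```python
# Generic logit-adjusted cross-entropy for long-tailed classification.
# Logits are shifted by the log of an (assumed known or estimated) class prior
# so that rare classes are not systematically under-predicted.
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, labels, class_prior, tau=1.0):
    adjusted = logits + tau * torch.log(class_prior + 1e-12)
    return F.cross_entropy(adjusted, labels)
```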
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Adaptive Self-Distillation for Minimizing Client Drift in Heterogeneous Federated Learning [9.975023463908496]
Federated Learning (FL) is a machine learning paradigm that enables clients to jointly train a global model by aggregating the locally trained models without sharing any local training data.
We propose a novel regularization technique based on adaptive self-distillation (ASD) for training models on the client side.
Our regularization scheme adaptively adjusts to the client's training data based on the global model entropy and the client's label distribution.
arXiv Detail & Related papers (2023-05-31T07:00:42Z)
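As a rough, hypothetical illustration of such a client-side self-distillation regularizer (not the ASD paper's exact formulation), one can penalize divergence from the global model's predictions and weight each sample by the global model's confidence and the local label frequency:

```python
# Sketch of a client-side self-distillation regularizer: the local model is
# pulled toward the global model's predictions; the weighting rule combining
# global-model entropy and local label frequency is an assumption, not ASD's.
import torch
import torch.nn.functional as F

def distillation_regularizer(local_logits, global_logits, labels, label_freq, T=2.0):
    p_global = F.softmax(global_logits / T, dim=1)
    log_p_local = F.log_softmax(local_logits / T, dim=1)
    kl = F.kl_div(log_p_local, p_global, reduction="none").sum(dim=1)  # per-sample KL
    # Down-weight samples where the global model is uncertain (high entropy),
    # up-weight samples from classes that are rare on this client.
    entropy = -(p_global * torch.log(p_global + 1e-12)).sum(dim=1)
    weight = (1.0 / (1.0 + entropy)) * (1.0 / (label_freq[labels] + 1e-12))
    return (weight * kl).mean()
```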
- Towards Unbiased Training in Federated Open-world Semi-supervised Learning [15.08153616709326]
We propose a novel Federated open-world Semi-Supervised Learning (FedoSSL) framework, which addresses the key challenges of distributed and open-world settings.
We adopt an uncertainty-aware suppressed loss to alleviate the biased training between locally unseen and globally unseen classes.
The proposed FedoSSL can be easily adapted to state-of-the-art FL methods, which is also validated via extensive experiments on benchmarks and real-world datasets.
arXiv Detail & Related papers (2023-05-01T11:12:37Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on this key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
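Such a class-imbalance measure can be illustrated simply: the sketch below scores a candidate group of clients by how far their combined label distribution is from uniform and greedily grows a balanced group. It omits the homomorphic-encryption step entirely, the exact measure used by Fed-CBS may differ, and all names are hypothetical.

```python
# Sketch: score a group of clients by the squared distance of their combined
# label distribution from uniform, then greedily select a balanced group.
import torch

def class_imbalance(client_label_counts):
    """client_label_counts: list of 1-D tensors (per-client class counts)."""
    counts = torch.stack(client_label_counts).sum(dim=0).float()
    dist = counts / counts.sum()
    uniform = torch.full_like(dist, 1.0 / dist.numel())
    return ((dist - uniform) ** 2).sum()

def pick_balanced_group(candidates, group_size):
    """Greedy selection: repeatedly add the client that keeps the group most balanced."""
    chosen, remaining = [], list(range(len(candidates)))
    while len(chosen) < group_size and remaining:
        best = min(remaining,
                   key=lambda i: class_imbalance([candidates[j] for j in chosen + [i]]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```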
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- GRP-FED: Addressing Client Imbalance in Federated Learning via Global-Regularized Personalization [6.592268037926868]
We present Global-Regularized Personalization (GRP-FED) to tackle the data imbalance issue.
With adaptive aggregation, the global model treats multiple clients fairly and mitigates the global long-tailed issue.
Our results show that GRP-FED improves performance under both global and local scenarios.
arXiv Detail & Related papers (2021-08-31T14:09:04Z)
- Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions via achieving better performance than existing approaches over multiple datasets.
arXiv Detail & Related papers (2021-06-17T11:41:30Z)