FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning
- URL: http://arxiv.org/abs/2311.13267v1
- Date: Wed, 22 Nov 2023 09:37:33 GMT
- Title: FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning
- Authors: Seongyoon Kim, Gihun Lee, Jaehoon Oh, Se-Young Yun
- Abstract summary: We introduce Federated Averaging with Feature Normalization Update (FedFN), a straightforward learning method.
We demonstrate the superior performance of FedFN through extensive experiments, even when applied to pretrained ResNet18.
- Score: 29.626725039794383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a collaborative method for training models while
preserving data privacy in decentralized settings. However, FL encounters
challenges related to data heterogeneity, which can result in performance
degradation. In our study, we observe that as data heterogeneity increases,
feature representation in the FedAVG model deteriorates more significantly
compared to classifier weight. Additionally, we observe that as data
heterogeneity increases, the gap between higher feature norms for observed
classes, obtained from local models, and feature norms of unobserved classes
widens, in contrast to the behavior of classifier weight norms. This widening
gap extends to the feature norm disparities between the local and global
models. To address these issues, we introduce Federated Averaging with
Feature Normalization Update (FedFN), a straightforward learning method. We
demonstrate the superior performance of FedFN through extensive experiments,
even when applied to pretrained ResNet18. Subsequently, we confirm the
applicability of FedFN to foundation models.
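The abstract does not spell out FedFN's exact update rule, but its core idea, closing the feature-norm gap between locally observed and unobserved classes, can be illustrated with a head that L2-normalizes features before the classifier. A minimal sketch, assuming a backbone that outputs flat feature vectors; the module name and `scale` parameter are illustrative, not from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedHead(nn.Module):
    """Illustrative head: L2-normalize features before the linear classifier.
    Hypothetical sketch; FedFN's actual normalization update may differ."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int,
                 scale: float = 10.0):
        super().__init__()
        self.backbone = backbone  # e.g. a ResNet18 with its fc layer removed
        self.classifier = nn.Linear(feat_dim, num_classes, bias=False)
        self.scale = scale        # temperature-like factor for normalized logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)                    # (B, feat_dim)
        feats = F.normalize(feats, p=2, dim=1)      # unit-norm features
        return self.scale * self.classifier(feats)  # logits on normalized features
```

With unit-norm features, the norm disparity between observed and unobserved classes disappears by construction, which is exactly the widening gap the abstract describes.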
Related papers
- Client Contribution Normalization for Enhanced Federated Learning [4.726250115737579]
Mobile devices, including smartphones and laptops, generate decentralized and heterogeneous data.
Federated Learning (FL) offers a promising alternative by enabling collaborative training of a global model across decentralized devices without data sharing.
This paper focuses on data-dependent heterogeneity in FL and proposes a novel approach leveraging mean latent representations extracted from locally trained models.
arXiv Detail & Related papers (2024-11-10T04:03:09Z)
- Addressing Data Heterogeneity in Federated Learning with Adaptive Normalization-Free Feature Recalibration [1.33512912917221]
Federated learning is a decentralized collaborative training paradigm that preserves stakeholders' data ownership while improving performance and generalization.
We propose Adaptive Normalization-free Feature Recalibration (ANFR), an architecture-level approach that combines weight standardization and channel attention.
arXiv Detail & Related papers (2024-10-02T20:16:56Z)
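ANFR combines two standard building blocks; a hedged sketch of the weight-standardization half is below (the channel-attention half, e.g. squeeze-and-excitation, would be stacked on top and is omitted). Normalization-free here means no BatchNorm, whose batch statistics behave poorly across heterogeneous clients:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d with weight standardization: each output filter is standardized
    to zero mean and unit variance before convolving. Sketch only; ANFR's
    exact formulation may differ."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)      # per-filter mean
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5  # per-filter std
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```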
- Synthetic Data Aided Federated Learning Using Foundation Models [4.666380225768727]
We propose Differentially Private Synthetic Data Aided Federated Learning Using Foundation Models (DPSDA-FL).
Our experimental results have shown that DPSDA-FL can improve the class recall and classification accuracy of the global model by up to 26% and 9%, respectively, in FL with non-IID data.
arXiv Detail & Related papers (2024-07-06T20:31:43Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
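The summary leaves the regularizer unspecified, but consistency regularization generally penalizes prediction drift between perturbed views of the same unlabelled target image. A minimal sketch under that reading; the KL form and `temperature` are assumptions, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong, temperature=1.0):
    """KL between predictions on a weakly and a strongly augmented view.
    Illustrative sketch of consistency regularization, not the paper's loss."""
    with torch.no_grad():  # the weak view serves as the (fixed) target
        p_weak = F.softmax(model(x_weak) / temperature, dim=1)
    log_p_strong = F.log_softmax(model(x_strong) / temperature, dim=1)
    return F.kl_div(log_p_strong, p_weak, reduction="batchmean")
```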
- Stabilizing and Improving Federated Learning with Non-IID Data and Client Dropout [15.569507252445144]
Label distribution skew-induced data heterogeneity has been shown to be a significant obstacle that limits model performance in federated learning.
We propose a simple yet effective framework by introducing a prior-calibrated softmax function for computing the cross-entropy loss.
The improved model performance over existing baselines in the presence of non-IID data and client dropout is demonstrated.
arXiv Detail & Related papers (2023-03-11T05:17:59Z)
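A prior-calibrated softmax is close in spirit to logit adjustment: the client's local label prior is folded into the logits before the cross-entropy, so classes that are rare on a client are not suppressed. One plausible reading as a sketch; the `tau` scaling is an assumption:

```python
import torch
import torch.nn.functional as F

def prior_calibrated_ce(logits, targets, class_prior, tau=1.0):
    """Cross-entropy on logits shifted by the log of the client's label prior.
    class_prior: per-class probabilities on this client, shape (C,).
    Sketch in the spirit of logit adjustment; the paper's form may differ."""
    adjusted = logits + tau * torch.log(class_prior.clamp_min(1e-12))
    return F.cross_entropy(adjusted, targets)
```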
- Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning [19.505644178449046]
We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA).
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework and can be easily compatible with state-of-the-art unsupervised FL methods.
arXiv Detail & Related papers (2022-11-14T13:59:50Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
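Data-free distillation on the server is typically realized with a generator that synthesizes pseudo-inputs on which the aggregated global model is distilled from the client models. A compressed sketch under those assumptions; FedFTG's actual objective also trains the generator with its own losses:

```python
import torch
import torch.nn.functional as F

def server_kd_step(global_model, client_models, generator, z_dim=100, batch=64):
    """One data-free KD step: sample latents, generate pseudo-inputs, and match
    the global model (student) to the averaged client (teacher) predictions.
    Illustrative sketch only, not FedFTG's full algorithm."""
    z = torch.randn(batch, z_dim)
    x_fake = generator(z)  # pseudo training data in place of real client data
    with torch.no_grad():
        teacher = torch.stack([m(x_fake) for m in client_models]).mean(dim=0)
    student_log_probs = F.log_softmax(global_model(x_fake), dim=1)
    return F.kl_div(student_log_probs, F.softmax(teacher, dim=1),
                    reduction="batchmean")
```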
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
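CCVR's calibration step can be approximated in a few lines: estimate per-class feature statistics, sample virtual representations from them, and retrain only the classifier. A hedged sketch using diagonal Gaussians for brevity (CCVR itself fits a Gaussian mixture from privacy-preserving client statistics):

```python
import torch
import torch.nn.functional as F

def calibrate_classifier(classifier, feat_means, feat_stds,
                         n_per_class=200, steps=100, lr=0.01):
    """Retrain a linear classifier on virtual features drawn from per-class
    Gaussians. feat_means/feat_stds: (C, D) statistics aggregated from clients.
    Sketch of the CCVR idea; the paper's estimation differs in detail."""
    C, D = feat_means.shape
    noise = torch.randn(C * n_per_class, D)
    feats = feat_means.repeat_interleave(n_per_class, 0) + \
            feat_stds.repeat_interleave(n_per_class, 0) * noise
    labels = torch.arange(C).repeat_interleave(n_per_class)
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(classifier(feats), labels).backward()
        opt.step()
```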