Federated Learning with Adaptive Batchnorm for Personalized Healthcare
- URL: http://arxiv.org/abs/2112.00734v1
- Date: Wed, 1 Dec 2021 11:36:56 GMT
- Title: Federated Learning with Adaptive Batchnorm for Personalized Healthcare
- Authors: Yiqiang Chen, Wang Lu, Jindong Wang, Xin Qin, Tao Qin
- Abstract summary: We propose AdaFed to tackle domain shifts and obtain personalized models for local clients.
AdaFed learns the similarity between clients via the statistics of the batch normalization layers.
Experiments on five healthcare benchmarks demonstrate that AdaFed achieves better accuracy compared to state-of-the-art methods.
- Score: 47.52430258876696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a growing interest in applying machine learning techniques for
healthcare. Recently, federated machine learning (FL) has been gaining popularity
since it allows researchers to train powerful models without compromising data
privacy and security. However, the performance of existing FL approaches often
deteriorates in non-iid situations, where distribution gaps exist among clients, and few previous efforts focus on
personalization in healthcare. In this article, we propose AdaFed to tackle
domain shifts and obtain personalized models for local clients. AdaFed learns
the similarity between clients via the statistics of the batch normalization
layers while preserving the specificity of each client with different local
batch normalization. Comprehensive experiments on five healthcare benchmarks
demonstrate that AdaFed achieves better accuracy compared to state-of-the-art
methods (e.g., 10%+ accuracy improvement for PAMAP2) with faster
convergence speed.
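The abstract names the mechanism but not the implementation, so the following is a minimal PyTorch sketch of the two ingredients it describes, under stated assumptions: client similarity is taken here as a softmax over negative L2 distances between concatenated batch-norm running statistics, non-BN parameters are aggregated with those similarity weights, and every BN layer stays local. Function names and the exact similarity metric are illustrative, not AdaFed's published formulation.

```python
import torch
import torch.nn as nn

BN_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)

def bn_statistics(model: nn.Module) -> torch.Tensor:
    """Flatten the running means and variances of all BN layers into one vector."""
    stats = []
    for m in model.modules():
        if isinstance(m, BN_TYPES):
            stats.append(m.running_mean.flatten())
            stats.append(m.running_var.flatten())
    return torch.cat(stats)

def bn_keys(model: nn.Module) -> set:
    """state_dict keys (weights, biases, running stats) owned by BN layers."""
    keys = set()
    for mod_name, m in model.named_modules():
        if isinstance(m, BN_TYPES):
            for p_name, _ in list(m.named_parameters()) + list(m.named_buffers()):
                keys.add(f"{mod_name}.{p_name}" if mod_name else p_name)
    return keys

def client_weights(models) -> torch.Tensor:
    """Pairwise aggregation weights from BN-statistic distances
    (assumed: softmax over negative L2 distance; the paper's metric may differ)."""
    s = torch.stack([bn_statistics(m) for m in models])  # (K, D)
    return torch.softmax(-torch.cdist(s, s), dim=1)      # (K, K), rows sum to 1

def personalized_aggregate(models, w) -> None:
    """Client k receives a w[k]-weighted average of all non-BN parameters;
    every BN layer stays local, which preserves client specificity."""
    local = bn_keys(models[0])
    states = [m.state_dict() for m in models]
    new = []
    for k in range(len(models)):
        out = {}
        for name, tensor in states[k].items():
            if name in local or not tensor.is_floating_point():
                out[name] = tensor.clone()  # keep client k's own BN layers
            else:
                out[name] = sum(w[k, j] * states[j][name] for j in range(len(models)))
        new.append(out)
    for m, s in zip(models, new):
        m.load_state_dict(s)
```

In this sketch the aggregation runs server-side once per round, after clients upload their updated weights and BN statistics.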
Related papers
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
With increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts [48.78037006856208]
FedJETs is a novel solution by using a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route each input to the most relevant expert(s); a minimal gating sketch follows this entry.
Our approach can improve accuracy up to 18% in state of the art FL settings, while maintaining competitive zero-shot performance.
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
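FedJETs' exact expert architecture and federated routing protocol are not given in this summary; the sketch below is a generic top-k Mixture-of-Experts layer showing what "a gating function routes the input to the most relevant expert(s)" means mechanically. All names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Gate scores experts per input; the top-k experts' outputs are blended."""
    def __init__(self, in_dim: int, num_classes: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(in_dim, num_experts)  # the gating function
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = torch.softmax(self.gate(x), dim=-1)   # (B, E) expert scores
        w, idx = scores.topk(self.top_k, dim=-1)       # route to top-k experts
        w = w / w.sum(dim=-1, keepdim=True)            # renormalize the winners
        out = torch.zeros(x.size(0), self.experts[0][-1].out_features)
        for slot in range(self.top_k):
            # evaluate each sample under its selected expert for this slot
            y = torch.stack([self.experts[i](xi)
                             for i, xi in zip(idx[:, slot].tolist(), x)])
            out = out + w[:, slot:slot + 1] * y
        return out

# usage: moe = ToyMoE(16, 10); print(moe(torch.randn(8, 16)).shape)  # (8, 10)
```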
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose FedGMM, a novel approach to Personalized Federated Learning (PFL) that utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
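FedGMM itself learns the mixture federatively with an EM-style procedure; the toy below only illustrates the two claims in the summary above, locally and with scikit-learn: fitting a GMM to one client's inputs, and using likelihood as a simple uncertainty/novel-sample signal. It is an assumption-laden stand-in, not the paper's algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_client_gmm(X: np.ndarray, n_components: int = 3) -> GaussianMixture:
    """Fit a Gaussian mixture to one client's input features X, shape (n, d)."""
    return GaussianMixture(n_components=n_components, random_state=0).fit(X)

def novelty_score(gmm: GaussianMixture, X_new: np.ndarray) -> float:
    """Negative mean log-likelihood: higher means more novel/uncertain,
    a simple form of the uncertainty quantification the summary mentions."""
    return float(-gmm.score_samples(X_new).mean())

rng = np.random.default_rng(0)
gmm = fit_client_gmm(rng.normal(size=(200, 8)))
print(novelty_score(gmm, rng.normal(size=(20, 8))))            # in-distribution: low
print(novelty_score(gmm, rng.normal(loc=5.0, size=(20, 8))))   # shifted: high
```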
- FedPDC: Federated Learning for Public Dataset Correction [1.5533842336139065]
Federated learning has lower classification accuracy than traditional machine learning in Non-IID scenarios.
A new algorithm, FedPDC, is proposed to optimize the aggregation of local models and the loss function used in local training.
In many benchmark experiments, FedPDC can effectively improve the accuracy of the global model in the case of extremely unbalanced data distribution.
arXiv Detail & Related papers (2023-02-24T08:09:23Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep; a generic forgetting-regularization sketch follows this entry.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
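FedReg's actual mechanism is not spelled out in this summary, so the sketch below substitutes the simplest forgetting-alleviating device, a FedProx-style proximal term that keeps local weights near the received global weights. It illustrates the problem being addressed, not FedReg itself.

```python
import torch
import torch.nn.functional as F

def local_step(model, global_params, x, y, opt, mu: float = 0.01) -> float:
    """One local SGD step with a proximal penalty toward the received
    global weights, so local updates do not erase global knowledge."""
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    opt.step()
    return float(loss)
```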
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- FedHealth 2: Weighted Federated Transfer Learning via Batch Normalization for Personalized Healthcare [10.350441801743855]
FedHealth 2 is an extension of FedHealth that tackles domain shifts and obtains personalized models for local clients.
It can achieve better accuracy (10%+ improvement for activity recognition) and personalized healthcare without compromising privacy and security.
arXiv Detail & Related papers (2021-06-02T08:10:50Z)
- Unifying Distillation with Personalization in Federated Learning [1.8262547855491458]
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data.
In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients.
In this paper, we address this problem with PersFL, a two-stage personalized learning algorithm.
In the first stage, PersFL finds the optimal teacher model for each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from these optimal teachers into each client's local model; a distillation sketch follows this entry.
arXiv Detail & Related papers (2021-05-31T17:54:29Z)
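The PersFL summary describes stage two as distilling a per-client optimal teacher into the local model; below is a standard Hinton-style distillation step showing how that could look. The temperature, mixing weight, and the assumption that the teacher is a frozen stage-one checkpoint are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x, y, opt,
                 T: float = 2.0, alpha: float = 0.7) -> float:
    """One stage-two step: blend the hard-label loss with a soft-label
    KL term toward the chosen teacher's tempered predictions."""
    opt.zero_grad()
    s = student(x)
    with torch.no_grad():
        t = teacher(x)                      # stage-one "optimal teacher"
    hard = F.cross_entropy(s, y)
    soft = F.kl_div(F.log_softmax(s / T, dim=-1),
                    F.softmax(t / T, dim=-1),
                    reduction="batchmean") * (T * T)
    loss = alpha * soft + (1 - alpha) * hard
    loss.backward()
    opt.step()
    return float(loss)
```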
- PFA: Privacy-preserving Federated Adaptation for Effective Model Personalization [6.66389628571674]
Federated learning (FL) has become a prevalent distributed machine learning paradigm with improved privacy.
This paper introduces a new concept called federated adaptation: adapting the trained model in a federated manner to achieve better personalization results.
We propose PFA, a framework to accomplish Privacy-preserving Federated Adaptation.
arXiv Detail & Related papers (2021-03-02T08:07:34Z)
- FedBN: Federated Learning on Non-IID Features via Local Batch Normalization [23.519212374186232]
The emerging paradigm of federated learning (FL) strives to enable collaborative training of deep models on the network edge without centrally aggregating raw data.
We propose an effective method that uses local batch normalization to alleviate the feature shift before averaging models.
The resulting scheme, called FedBN, outperforms both classical FedAvg and the state-of-the-art for non-iid data.
arXiv Detail & Related papers (2021-02-15T16:04:10Z)
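The FedBN rule as summarized is simply FedAvg with batch-norm layers excluded from averaging, so each client's BN absorbs its own feature shift. A minimal server-side sketch, assuming uniform client weights (the BN-key matching mirrors the AdaFed sketch above; details may differ from the authors' code):

```python
import torch
import torch.nn as nn

BN_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)

def fedbn_average(client_models) -> None:
    """FedAvg over every parameter except those owned by BN layers,
    which each client keeps local."""
    bn_keys = {
        f"{mod}.{name}" if mod else name
        for mod, m in client_models[0].named_modules() if isinstance(m, BN_TYPES)
        for name, _ in list(m.named_parameters()) + list(m.named_buffers())
    }
    states = [m.state_dict() for m in client_models]
    for key in states[0]:
        if key in bn_keys or not states[0][key].is_floating_point():
            continue                                   # BN stays local
        avg = torch.stack([s[key] for s in states]).mean(dim=0)
        for s in states:
            s[key] = avg.clone()
    for m, s in zip(client_models, states):
        m.load_state_dict(s)
```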
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.