Auto-FedRL: Federated Hyperparameter Optimization for
Multi-institutional Medical Image Segmentation
- URL: http://arxiv.org/abs/2203.06338v1
- Date: Sat, 12 Mar 2022 04:11:42 GMT
- Title: Auto-FedRL: Federated Hyperparameter Optimization for
Multi-institutional Medical Image Segmentation
- Authors: Pengfei Guo, Dong Yang, Ali Hatamizadeh, An Xu, Ziyue Xu, Wenqi Li,
Can Zhao, Daguang Xu, Stephanie Harmon, Evrim Turkbey, Baris Turkbey,
Bradford Wood, Francesca Patella, Elvira Stellato, Gianpaolo Carrafiello,
Vishal M. Patel, Holger R. Roth
- Abstract summary: Federated learning (FL) is a distributed machine learning technique that enables collaborative model training while avoiding explicit data sharing.
In this work, we propose an efficient reinforcement learning (RL)-based federated hyperparameter optimization algorithm, termed Auto-FedRL.
The effectiveness of the proposed method is validated on a heterogeneous data split of the CIFAR-10 dataset and two real-world medical image segmentation datasets.
- Score: 48.821062916381685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a distributed machine learning technique that
enables collaborative model training while avoiding explicit data sharing. The
inherent privacy-preserving property of FL algorithms makes them especially
attractive to the medical field. However, in the case of heterogeneous client data
distributions, standard FL methods are unstable and require intensive
hyperparameter tuning to achieve optimal performance. Conventional
hyperparameter optimization algorithms are impractical in real-world FL
applications as they involve numerous training trials, which are often not
affordable with limited compute budgets. In this work, we propose an efficient
reinforcement learning (RL)-based federated hyperparameter optimization
algorithm, termed Auto-FedRL, in which an online RL agent can dynamically
adjust hyperparameters of each client based on the current training progress.
Extensive experiments are conducted to investigate different search strategies
and RL agents. The effectiveness of the proposed method is validated on a
heterogeneous data split of the CIFAR-10 dataset as well as two real-world
medical image segmentation datasets for COVID-19 lesion segmentation in chest
CT and pancreas segmentation in abdominal CT.
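To make the control flow concrete, below is a minimal sketch of an online RL loop that re-samples per-client hyperparameters at every communication round and is rewarded by the change in a global validation score. The discrete learning-rate grid, the softmax/REINFORCE-style agent, and the mocked training round are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of RL-driven hyperparameter search in FL (illustrative only).
# Assumptions: a discrete learning-rate grid, a softmax policy updated in a
# REINFORCE-like fashion, and a mocked run_fl_round(); none of these reproduce
# the paper's actual agent or search space.
import numpy as np

LR_GRID = np.array([1e-4, 3e-4, 1e-3, 3e-3])   # assumed search space
logits = np.zeros(len(LR_GRID))                 # policy parameters of the agent
N_CLIENTS, AGENT_LR = 4, 0.1                    # assumed constants

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def run_fl_round(client_lrs):
    """Stand-in for one FL round (local training + aggregation).
    Returns a global validation score; mocked here."""
    return -np.mean((np.log10(client_lrs) + 3.0) ** 2) + 0.01 * np.random.randn()

prev_score = None
for round_idx in range(50):
    probs = softmax(logits)
    # Sample a learning rate for each client from the agent's current policy.
    choices = np.random.choice(len(LR_GRID), size=N_CLIENTS, p=probs)
    score = run_fl_round(LR_GRID[choices])
    # Reward: improvement of the validation score over the previous round.
    reward = 0.0 if prev_score is None else score - prev_score
    prev_score = score
    # REINFORCE-style update of the sampling distribution.
    for c in range(len(LR_GRID)):
        grad_log_prob = (choices == c).mean() - probs[c]
        logits[c] += AGENT_LR * reward * grad_log_prob
```

A lightweight, policy-gradient-style update like this keeps the per-round overhead of the search negligible compared with the client training itself, which is why an online agent can be practical where conventional multi-trial hyperparameter optimization is not.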
Related papers
- FedMRL: Data Heterogeneity Aware Federated Multi-agent Deep Reinforcement Learning for Medical Imaging [12.307490659840845]
We introduce FedMRL, a novel multi-agent deep reinforcement learning framework designed to address data heterogeneity.
FedMRL incorporates a novel loss function to facilitate fairness among clients, preventing bias in the final global model.
We assess our approach using two publicly available real-world medical datasets, and the results demonstrate that FedMRL significantly outperforms state-of-the-art techniques.
arXiv Detail & Related papers (2024-07-08T10:10:07Z)
- Communication-Efficient Hybrid Federated Learning for E-health with Horizontal and Vertical Data Partitioning [67.49221252724229]
E-health allows smart devices and medical institutions to collaboratively collect patients' data, on which Artificial Intelligence (AI) models are trained to help doctors make diagnoses.
Applying federated learning in e-health faces many challenges.
Medical data is partitioned both horizontally and vertically.
A naive combination of horizontal FL (HFL) and vertical FL (VFL) has limitations, including low training efficiency, unsound convergence analysis, and a lack of parameter tuning strategies.
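For readers unfamiliar with the two partitioning modes, the toy example below shows the difference: horizontal partitioning splits a dataset by rows (samples), while vertical partitioning splits it by columns (features). The numpy example is illustrative only and is not taken from the paper.

```python
# Toy illustration of horizontal vs. vertical data partitioning (not from the paper).
import numpy as np

# A small dataset: 6 patients (rows) x 4 features (columns).
X = np.arange(24).reshape(6, 4)

# Horizontal partitioning (HFL setting): each hospital holds different patients
# but the same feature set.
hospital_a_h, hospital_b_h = X[:3, :], X[3:, :]

# Vertical partitioning (VFL setting): each hospital holds the same patients
# but different features (e.g. imaging vs. lab values).
hospital_a_v, hospital_b_v = X[:, :2], X[:, 2:]

assert hospital_a_h.shape == (3, 4) and hospital_a_v.shape == (6, 2)
```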
arXiv Detail & Related papers (2024-04-15T19:45:07Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
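A rough sketch of the client-side mechanism is given below: each client runs an AMSGrad-style update and scales its own step size from its local statistics. The specific schedule (a base learning rate decayed by the local step count) is an assumption for illustration, not the paper's exact rule.

```python
# Hedged sketch of a client-side AMSGrad step with a per-client learning rate.
# The scheduling rule (base LR / sqrt(local step)) is an illustrative assumption.
import numpy as np

class ClientAMSGrad:
    def __init__(self, dim, base_lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.m = np.zeros(dim)        # first-moment estimate
        self.v = np.zeros(dim)        # second-moment estimate
        self.v_hat = np.zeros(dim)    # running max of v (AMSGrad correction)
        self.base_lr, self.beta1, self.beta2, self.eps = base_lr, beta1, beta2, eps
        self.t = 0

    def step(self, params, grad):
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)      # AMSGrad max trick
        lr_t = self.base_lr / np.sqrt(self.t)            # assumed client-local schedule
        return params - lr_t * self.m / (np.sqrt(self.v_hat) + self.eps)
```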
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning [57.83232242068982]
Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual reinforcement learning (RL) algorithms.
It remains unclear which attributes of DA account for its effectiveness in achieving sample-efficient visual RL.
This work conducts comprehensive experiments to assess the impact of DA's attributes on its efficacy.
arXiv Detail & Related papers (2023-05-25T15:46:20Z)
- Federated Offline Reinforcement Learning [55.326673977320574]
We propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites.
We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees.
We give a theoretical guarantee for the proposed algorithm, showing that the suboptimality of the learned policies is comparable to the rate obtained as if the data were not distributed.
arXiv Detail & Related papers (2022-06-11T18:03:26Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Federated Ensemble Model-based Reinforcement Learning in Edge Computing [21.840086997141498]
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm.
We propose a novel federated reinforcement learning (FRL) algorithm that effectively incorporates model-based RL and ensemble knowledge distillation into FL for the first time.
Specifically, we utilise FL and knowledge distillation to create an ensemble of dynamics models for clients, and then train the policy by solely using the ensemble model without interacting with the environment.
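The sketch below illustrates the second stage only: optimizing a policy entirely inside an ensemble of learned dynamics models, without touching the real environment. The linear dynamics models and the random-search policy update are illustrative stand-ins; the paper's models, distillation step, and RL algorithm differ.

```python
# Illustrative sketch: optimize a policy using only an ensemble of learned
# dynamics models (no environment interaction). Linear models and random-search
# policy improvement are assumptions made for brevity.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACT_DIM, N_MODELS = 4, 2, 3

# Stand-ins for per-client dynamics models obtained via FL + distillation.
ensemble = [{"A": 0.1 * rng.normal(size=(STATE_DIM, STATE_DIM)),
             "B": 0.1 * rng.normal(size=(STATE_DIM, ACT_DIM))}
            for _ in range(N_MODELS)]

def rollout_return(policy_w, horizon=20, n_starts=5):
    """Average return of a linear policy, rolled out inside every ensemble member."""
    total = 0.0
    for m in ensemble:
        for _ in range(n_starts):
            s = rng.normal(size=STATE_DIM)
            for _ in range(horizon):
                a = np.tanh(policy_w @ s)                      # simple linear policy
                s = m["A"] @ s + m["B"] @ a + 0.01 * rng.normal(size=STATE_DIM)
                total += -np.sum(s ** 2)                       # assumed cost: drive state to 0
    return total / (len(ensemble) * n_starts)

# Toy policy improvement by random search on the policy weights.
policy = np.zeros((ACT_DIM, STATE_DIM))
best = rollout_return(policy)
for _ in range(200):
    candidate = policy + 0.05 * rng.normal(size=policy.shape)
    score = rollout_return(candidate)
    if score > best:
        policy, best = candidate, score
```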
arXiv Detail & Related papers (2021-09-12T16:19:10Z)
- Genetic CFL: Optimization of Hyper-Parameters in Clustered Federated Learning [4.710427287359642]
Federated learning (FL) is a distributed model for deep learning that integrates client-server architecture, edge computing, and real-time intelligence.
FL has the capability to revolutionize machine learning (ML), but its practical implementation is hindered by technological limitations, communication overhead, non-IID (not independent and identically distributed) data, and privacy concerns.
We propose a novel hybrid algorithm, namely genetic clustered FL (Genetic CFL), that clusters edge devices based on their training hyperparameters and genetically modifies the parameters cluster-wise.
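The sketch below shows one way to read that pipeline: group clients by their hyperparameters, then apply crossover and mutation within each cluster. The cluster count, fitness proxy, and mutation scale are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the Genetic CFL idea: cluster clients by hyperparameters,
# then evolve each cluster's hyperparameters with crossover + mutation.
import numpy as np

rng = np.random.default_rng(1)
# Each row: one client's hyperparameters, e.g. (log10 learning rate, batch size).
clients = np.column_stack([rng.uniform(-4, -2, 12), rng.integers(8, 65, 12)])

def kmeans(x, k=3, iters=20):
    """Tiny k-means for clustering clients by hyperparameters."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(0)
    return labels

labels = kmeans(clients)
fitness = -np.abs(clients[:, 0] + 3.0)            # assumed proxy: prefer lr near 1e-3

for j in np.unique(labels):
    idx = np.where(labels == j)[0]
    parents = idx[np.argsort(fitness[idx])[-2:]]   # best two clients in the cluster
    child = clients[parents].mean(0)               # crossover: average the parents
    child += rng.normal(scale=[0.1, 2.0])          # mutation on (log-lr, batch size)
    worst = idx[np.argmin(fitness[idx])]
    clients[worst] = child                         # replace the weakest member
```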
arXiv Detail & Related papers (2021-07-15T10:16:05Z)
- Auto-FedAvg: Learnable Federated Averaging for Multi-Institutional Medical Image Segmentation [7.009650174262515]
Federated learning (FL) enables collaborative model training while preserving each participant's privacy.
FedAvg is a standard algorithm that uses fixed weights, often originating from the dataset sizes at each client, to aggregate the distributed learned models on a server during the FL process.
In this work, we design a new data-driven approach, namely Auto-FedAvg, where aggregation weights are dynamically adjusted.
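As a contrast to fixed dataset-size weights, the sketch below adjusts the aggregation weights with a simple gradient-free search driven by a mocked validation score; the perturb-and-keep search and the validation objective are illustrative assumptions, not Auto-FedAvg's actual data-driven procedure.

```python
# Illustrative contrast between fixed FedAvg weights and dynamically adjusted
# aggregation weights. val_score() and the search rule are assumed for the sketch.
import numpy as np

rng = np.random.default_rng(2)
client_models = [rng.normal(size=10) for _ in range(4)]    # stand-in client weights
data_sizes = np.array([100, 400, 250, 50])

# Standard FedAvg: fixed aggregation weights proportional to dataset size.
w_fixed = data_sizes / data_sizes.sum()

def aggregate(logits):
    """Aggregate client models with softmax-normalized weights."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, client_models))

def val_score(model):
    return -np.sum((model - 0.5) ** 2)                      # mocked validation objective

# Dynamically adjusted weights: perturb the softmax parameterization and keep
# the change whenever the validation score improves.
logits = np.log(w_fixed)                                     # start from FedAvg weights
best = val_score(aggregate(logits))
for _ in range(100):
    candidate = logits + 0.1 * rng.normal(size=logits.shape)
    score = val_score(aggregate(candidate))
    if score > best:
        logits, best = candidate, score
```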
arXiv Detail & Related papers (2021-04-20T18:29:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.