Federated Active Learning Framework for Efficient Annotation Strategy in Skin-lesion Classification
- URL: http://arxiv.org/abs/2406.11310v1
- Date: Mon, 17 Jun 2024 08:16:28 GMT
- Title: Federated Active Learning Framework for Efficient Annotation Strategy in Skin-lesion Classification
- Authors: Zhipeng Deng, Yuqiao Yang, Kenji Suzuki
- Abstract summary: Federated Learning (FL) enables multiple institutes to train models collaboratively without sharing private data.
Active learning (AL) has shown promising performance in reducing the number of data annotations in medical image analysis.
We propose a federated AL (FedAL) framework in which AL is executed periodically and interactively under FL.
- Score: 1.8149633401257899
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables multiple institutes to train models collaboratively without sharing private data. Current FL research focuses on communication efficiency, privacy protection, and personalization, and assumes that the data of FL have already been ideally collected. In medical scenarios, however, data annotation demands both expertise and intensive labor, which is a critical problem in FL. Active learning (AL) has shown promising performance in reducing the number of data annotations in medical image analysis. We propose a federated AL (FedAL) framework in which AL is executed periodically and interactively under FL. We exploit a local model in each hospital and a global model acquired from FL to construct an ensemble. We use ensemble-entropy-based AL as an efficient data-annotation strategy in FL. Therefore, our FedAL framework can decrease the amount of annotated data and preserve patient privacy while maintaining the performance of FL. To our knowledge, this is the first FedAL framework applied to medical images. We validated our framework on real-world dermoscopic datasets. Using only 50% of samples, our framework was able to achieve state-of-the-art performance on a skin-lesion classification task. Our framework performed better than several state-of-the-art AL methods under FL and achieved comparable performance to full-data FL.
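The ensemble-entropy-based selection described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: it assumes each hospital has class-probability outputs from its local model and from the FL global model for its unlabeled pool, averages them into an ensemble prediction, and sends the highest-entropy (most uncertain) samples for annotation. The function name and `budget` parameter are hypothetical.

```python
import numpy as np

def ensemble_entropy_selection(local_probs, global_probs, budget):
    """Pick the `budget` most uncertain unlabeled samples, scored by the
    Shannon entropy of the local/global ensemble prediction.

    local_probs, global_probs: arrays of shape (n_samples, n_classes)
    containing softmax outputs of the local and global models.
    Returns indices of samples to send for expert annotation.
    """
    # Ensemble by averaging the two models' class probabilities.
    ensemble = (np.asarray(local_probs) + np.asarray(global_probs)) / 2.0
    # Per-sample Shannon entropy; higher entropy = more model disagreement
    # or uncertainty. The small epsilon avoids log(0).
    eps = 1e-12
    entropy = -np.sum(ensemble * np.log(ensemble + eps), axis=1)
    # Most uncertain samples first; keep only the annotation budget.
    return np.argsort(entropy)[::-1][:budget]
```

In a federated round, each hospital would run this locally on its own unlabeled pool, so raw images never leave the institution; only the annotation requests are resolved on site.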
Related papers
- GAI-Enabled Explainable Personalized Federated Semi-Supervised Learning [29.931169585178818]
Federated learning (FL) is a widely used distributed algorithm in which mobile users (MUs) train artificial intelligence (AI) models.
We propose an explainable personalized FL framework, called XPFL. Particularly, in local training, we utilize a generative AI (GAI) model to learn from large unlabeled data.
In global aggregation, we obtain the new local model by fusing the local and global FL models in specific proportions.
Finally, simulation results validate the effectiveness of the proposed XPFL framework.
arXiv Detail & Related papers (2024-10-11T08:58:05Z) - Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z) - Where to Begin? From Random to Foundation Model Instructed Initialization in Federated Learning for Medical Image Segmentation [11.412151951949102]
In medical image analysis, Federated Learning (FL) is a key technology that enables privacy-preserved, decentralized data processing.
We propose a novel perspective: exploring the impact of using the foundation model with enormous pre-trained knowledge.
arXiv Detail & Related papers (2023-11-27T00:29:10Z) - A Comprehensive View of Personalized Federated Learning on Heterogeneous Clinical Datasets [0.4926316920996346]
Federated learning (FL) is a key approach to overcoming the data silos that so frequently obstruct the training and deployment of machine-learning models in clinical settings.
This work contributes to a growing body of FL research specifically focused on clinical applications along three important directions.
arXiv Detail & Related papers (2023-09-28T20:12:17Z) - Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z) - FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings [51.09574369310246]
Federated Learning (FL) is a novel approach enabling several clients holding sensitive data to collaboratively train machine learning models.
We propose a novel cross-silo dataset suite focused on healthcare, FLamby, to bridge the gap between theory and practice of cross-silo FL.
Our flexible and modular suite allows researchers to easily download datasets, reproduce results and re-use the different components for their research.
arXiv Detail & Related papers (2022-10-10T12:17:30Z) - Federated Active Learning (F-AL): an Efficient Annotation Strategy for Federated Learning [8.060606972572451]
Federated learning (FL) has been intensively investigated in terms of communication efficiency, privacy, and fairness.
We propose to apply active learning (AL) and sampling strategy into the FL framework to reduce the annotation workload.
We empirically demonstrate that the F-AL outperforms baseline methods in image classification tasks.
arXiv Detail & Related papers (2022-02-01T03:17:29Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.