Reliability and Performance Assessment of Federated Learning on Clinical Benchmark Data
- URL: http://arxiv.org/abs/2005.11756v1
- Date: Sun, 24 May 2020 14:36:44 GMT
- Title: Reliability and Performance Assessment of Federated Learning on Clinical Benchmark Data
- Authors: GeunHyeong Lee, Soo-Yong Shin
- Abstract summary: Federated learning (FL) has been suggested to protect personal privacy because it does not centralize data during the training phase.
In this study, we assessed the reliability and performance of FL on benchmark datasets including MNIST and MIMIC-III.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As deep learning has been applied in clinical contexts, privacy concerns
have increased because of the collection and processing of large amounts of
personal data. Recently, federated learning (FL) has been suggested to protect
personal privacy because it does not centralize data during the training phase.
In this study, we assessed the reliability and performance of FL on benchmark
datasets including MNIST and MIMIC-III. In addition, we attempted to verify FL
on datasets that simulated a realistic clinical data distribution. We
implemented FL with a client-server architecture and tested it on modified
MNIST and MIMIC-III datasets. FL delivered reliable performance on both
imbalanced and extremely skewed distributions (i.e., differences in the number
and characteristics of patients across hospitals). Therefore, FL can be
suitable for protecting privacy when applied to medical data.
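The client-server setup described in the abstract can be sketched roughly as follows. This is an illustrative FedAvg-style scheme, not the paper's exact method: the model (logistic regression), learning rate, client sizes, and aggregation rule are all assumptions chosen to show the round-trip of local training and server-side weighted averaging under skewed client sizes.

```python
import numpy as np

def client_update(weights, X, y, lr=0.1, epochs=1):
    """One client's local step: full-batch gradient descent on log-loss."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of logistic loss
        w -= lr * grad
    return w

def server_aggregate(client_weights, client_sizes):
    """Average client models, weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Simulate three "hospitals" with extremely skewed sizes (imbalanced data).
rng = np.random.default_rng(0)
clients = []
for n in (500, 50, 20):
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(5)
for rnd in range(20):                          # communication rounds
    local = [client_update(w_global, X, y) for X, y in clients]
    w_global = server_aggregate(local, [len(y) for _, y in clients])
```

Note that no raw data leaves a client; only model weights are exchanged each round, which is the property that motivates FL in clinical settings.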
Related papers
- Fine-Tuning Foundation Models with Federated Learning for Privacy Preserving Medical Time Series Forecasting
Federated Learning (FL) provides a decentralized machine learning approach, where multiple devices or servers collaboratively train a model without sharing their raw data.
In this paper, we fine-tune time series FMs with Electrocardiogram (ECG) and Impedance Cardiography (ICG) data using different FL techniques.
Our empirical results demonstrated that while FL can be effective for fine-tuning FMs on time series forecasting tasks, its benefits depend on the data distribution across clients.
arXiv Detail & Related papers (2025-02-13T20:01:15Z)
- Federated Active Learning Framework for Efficient Annotation Strategy in Skin-lesion Classification
Federated Learning (FL) enables multiple institutes to train models collaboratively without sharing private data.
Active learning (AL) has shown promising performance in reducing the number of data annotations in medical image analysis.
We propose a federated AL (FedAL) framework in which AL is executed periodically and interactively under FL.
arXiv Detail & Related papers (2024-06-17T08:16:28Z)
- Data Valuation and Detections in Federated Learning
Federated Learning (FL) enables collaborative model training while preserving the privacy of raw data.
A challenge in this framework is the fair and efficient valuation of data, which is crucial for incentivizing clients to contribute high-quality data in the FL task.
This paper introduces a novel privacy-preserving method for evaluating client contributions and selecting relevant datasets without a pre-specified training algorithm in an FL task.
arXiv Detail & Related papers (2023-11-09T12:01:32Z)
- Privacy-preserving patient clustering for personalized federated learning
Federated Learning (FL) is a machine learning framework that enables multiple organizations to train a model without sharing their data with a central server.
Standard FL, however, can degrade when client data distributions differ; this is a problem in medical settings, where variations in the patient population contribute significantly to distribution differences across hospitals.
We propose Privacy-preserving Community-Based Federated machine Learning (PCBFL), a novel Clustered FL framework that can cluster patients using patient-level data while protecting privacy.
arXiv Detail & Related papers (2023-07-17T21:19:08Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings
Federated Learning (FL) is a novel approach enabling several clients holding sensitive data to collaboratively train machine learning models.
We propose a novel cross-silo dataset suite focused on healthcare, FLamby, to bridge the gap between theory and practice of cross-silo FL.
Our flexible and modular suite allows researchers to easily download datasets, reproduce results and re-use the different components for their research.
arXiv Detail & Related papers (2022-10-10T12:17:30Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe?
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- A Principled Approach to Data Valuation for Federated Learning
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
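The Shapley value underlying this line of work can be illustrated with a small exact computation. This is a generic sketch of the classical SV formula, not the paper's federated variant: the client names and the toy diminishing-returns utility are invented for illustration, and in practice `utility(S)` would score a model aggregated from the client subset `S` (e.g. its validation accuracy).

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley value per client; cost is exponential in len(clients)."""
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for k in range(n):
            for S in combinations(others, k):
                # Weight of this coalition size in the Shapley average.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[c] += weight * (utility(S + (c,)) - utility(S))
    return values

# Toy utility: diminishing returns in total contributed "data quality".
quality = {"hospital_A": 3.0, "hospital_B": 1.0, "hospital_C": 0.5}
util = lambda S: sum(quality[c] for c in S) ** 0.5

vals = shapley_values(tuple(quality), util)
```

By the efficiency property, the per-client values sum to the utility of the full coalition, which is what makes the SV attractive as a payoff scheme for FL participants.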
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
- Inverse Distance Aggregation for Federated Learning with Non-IID Data
Federated learning (FL) has been a promising approach in the field of medical imaging in recent years.
A critical problem in FL, especially in medical scenarios, is to obtain a more accurate shared model that is robust to noisy and out-of-distribution clients.
We propose IDA, a novel adaptive weighting approach for clients based on meta-information which handles unbalanced and non-iid data.
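The inverse-distance idea can be sketched as follows. This is a hedged illustration in the spirit of IDA rather than the paper's exact rule: the choice of norm, the smoothing constant `eps`, and the toy client updates are assumptions. Clients whose updates lie far from the average update are down-weighted, damping noisy or out-of-distribution contributors.

```python
import numpy as np

def ida_aggregate(client_weights, eps=1e-8):
    """Aggregate client models with weights inverse to their distance
    from the mean model; returns (coefficients, aggregated weights)."""
    stacked = np.stack(client_weights)            # (n_clients, dim)
    mean_w = stacked.mean(axis=0)
    dists = np.linalg.norm(stacked - mean_w, axis=1)
    inv = 1.0 / (dists + eps)                     # inverse distances
    coeffs = inv / inv.sum()                      # normalize to sum to 1
    return coeffs, (stacked * coeffs[:, None]).sum(axis=0)

# Two well-behaved client updates and one outlier.
clients = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([9.0, -5.0])]
coeffs, w_agg = ida_aggregate(clients)
```

The outlier client receives the smallest coefficient, so the aggregated model stays closer to the consensus of the well-behaved clients than a plain average would.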
arXiv Detail & Related papers (2020-08-17T23:20:01Z)
- FOCUS: Dealing with Label Quality Disparity in Federated Learning
We propose Federated Opportunistic Computing for Ubiquitous Systems (FOCUS) to address the challenge of disparate label quality across clients.
FOCUS quantifies the credibility of the client local data without directly observing them.
It effectively identifies clients with noisy labels and reduces their impact on the model performance.
arXiv Detail & Related papers (2020-01-29T09:31:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.