Application of Federated Learning in Building a Robust COVID-19 Chest
X-ray Classification Model
- URL: http://arxiv.org/abs/2204.10505v1
- Date: Fri, 22 Apr 2022 05:21:50 GMT
- Title: Application of Federated Learning in Building a Robust COVID-19 Chest
X-ray Classification Model
- Authors: Amartya Bhattacharya, Manish Gawali, Jitesh Seth, Viraj Kulkarni
- Abstract summary: Federated Learning (FL) helps AI models to generalize better without moving all the data to a central server.
We trained a deep learning model to solve a binary classification problem of predicting the presence or absence of COVID-19.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While developing artificial intelligence (AI)-based algorithms to solve
problems, the amount of data plays a pivotal role - a large amount of data helps
researchers and engineers develop robust AI algorithms. When building AI-based
models for problems related to medical imaging, these data need to be
transferred from the medical institutions where they were acquired to the
organizations developing the algorithms. This movement of data involves
time-consuming formalities such as complying with HIPAA, GDPR, etc. There is
also a risk of patients' private data being leaked, compromising their
confidentiality. One solution to these problems is the Federated Learning
framework.
Federated Learning (FL) helps AI models generalize better and become more
robust by using data from different sources with different distributions and
data characteristics, without moving all the data to a central server. In our
paper, we apply the FL framework to train a deep learning model for a binary
classification problem: predicting the presence or absence of COVID-19. We took
three different sources of data and trained an individual model on each source.
We then trained an FL model on the complete data and compared all the model
performances. We demonstrated that the FL model performs better than the
individual models. Moreover, the FL model performed on par with a model trained
on all the data combined at a central server. Thus, Federated Learning yields
generalized AI models without the cost of data transfer and regulatory
overhead.
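The paper does not publish its training code, but the procedure it describes - local training at each site, followed by weight averaging at a server - is the standard FedAvg pattern. A minimal sketch, using a toy logistic-regression model and synthetic per-site data in place of the paper's deep network and chest X-rays (all names, data, and hyperparameters here are illustrative):

```python
import numpy as np

def local_train(X, y, w, lr=0.1, epochs=50):
    """One client's local update: plain logistic-regression gradient descent."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fedavg(client_weights, client_sizes):
    """Server step: average client weights, weighted by sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
d = 5
true_w = rng.normal(size=d)

# Three simulated sites with shifted feature distributions, standing in
# for the three data sources in the paper. No site's data ever leaves it.
clients = []
for shift in (0.0, 0.5, -0.5):
    X = rng.normal(loc=shift, size=(200, d))
    y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(d)
for _ in range(10):  # communication rounds: only weights travel
    updates = [local_train(X, y, w_global) for X, y in clients]
    w_global = fedavg(updates, [len(y) for _, y in clients])

# Evaluate the shared model on the pooled data
X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])
acc = ((X_all @ w_global > 0) == (y_all == 1)).mean()
print(f"federated model accuracy: {acc:.2f}")
```

Only model weights cross the site boundary in each round, which is what lets FL sidestep the data-transfer and regulatory overhead discussed above.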
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- FLIGAN: Enhancing Federated Learning with Incomplete Data using GAN [1.5749416770494706]
Federated Learning (FL) provides a privacy-preserving mechanism for distributed training of machine learning models on networked devices.
We propose FLIGAN, a novel approach to address the issue of data incompleteness in FL.
Our methodology adheres to FL's privacy requirements by generating synthetic data in a federated manner without sharing the actual data in the process.
arXiv Detail & Related papers (2024-03-25T16:49:38Z)
- Federated Data Model [16.62770246342126]
In artificial intelligence (AI), especially deep learning, data diversity and volume play a pivotal role in model development.
We developed a method called the Federated Data Model (FDM) to train robust deep learning models across different locations.
Our results show that models trained with this method perform well both on the data they were originally trained on and on data from other sites.
arXiv Detail & Related papers (2024-03-13T18:16:54Z)
- Towards Personalized Federated Learning via Heterogeneous Model Reassembly [84.44268421053043]
pFedHR is a framework that leverages heterogeneous model reassembly to achieve personalized federated learning.
pFedHR dynamically generates diverse personalized models in an automated manner.
arXiv Detail & Related papers (2023-08-16T19:36:01Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
The paper proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Given access to a set of expert models and their predictions, alongside some limited information about the datasets used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- Bottlenecks CLUB: Unifying Information-Theoretic Trade-offs Among Complexity, Leakage, and Utility [8.782250973555026]
Bottleneck problems are an important class of optimization problems that have recently gained increasing attention in the domain of machine learning and information theory.
We propose a general family of optimization problems, termed as complexity-leakage-utility bottleneck (CLUB) model.
We show that the CLUB model generalizes all these problems as well as most other information-theoretic privacy models.
arXiv Detail & Related papers (2022-07-11T14:07:48Z)
- A Personalized Federated Learning Algorithm: an Application in Anomaly Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares its learning with a centralized model (server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg), which controls weight communication and aggregation, augmented with a tailored learning algorithm to personalize the resulting model at each client.
arXiv Detail & Related papers (2021-11-04T04:57:11Z)
- Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows to share knowledge between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that an untrained student model, trained on the teacher's output, reaches F1-scores comparable to the teacher's.
arXiv Detail & Related papers (2021-02-01T14:38:54Z)
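The distillation idea in the last entry - a student fitted to a teacher's outputs on synthetically generated inputs, so no real data is ever shared - can be sketched with toy linear models (this is an illustration of the general technique, not that paper's code; all names and shapes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# "Teacher": a model already trained at some site (here, fixed weights).
w_teacher = rng.normal(size=d)
teacher = lambda X: X @ w_teacher

# Synthetic inputs generated locally -- no real data leaves the teacher's site.
X_syn = rng.normal(size=(500, d))
y_soft = teacher(X_syn)  # the teacher's outputs become the student's targets

# "Student": fit to the teacher's soft targets. For this linear toy,
# distillation reduces to least-squares regression on the outputs.
w_student, *_ = np.linalg.lstsq(X_syn, y_soft, rcond=None)

# The student should reproduce the teacher closely on fresh inputs.
X_test = rng.normal(size=(100, d))
gap = np.max(np.abs(X_test @ w_student - teacher(X_test)))
print(f"max teacher-student gap: {gap:.2e}")
```

Because only synthetic inputs and model outputs are exchanged, this achieves knowledge transfer between sites with the same privacy benefit that motivates the federated setup in the main paper.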
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.