Study of the performance and scalability of federated learning for
medical imaging with intermittent clients
- URL: http://arxiv.org/abs/2207.08581v2
- Date: Tue, 19 Jul 2022 09:24:09 GMT
- Title: Study of the performance and scalability of federated learning for
medical imaging with intermittent clients
- Authors: Judith Sáinz-Pardo Díaz and Álvaro López García
- Abstract summary: Federated learning is a data-decentralization, privacy-preserving technique used to perform machine or deep learning in a secure way.
A use case of medical image analysis is proposed, using chest X-ray images obtained from an open data repository.
Different clients are simulated from the training data and assigned samples in an unbalanced manner, i.e., they do not all hold the same amount of data.
Two approaches are analyzed for the case of intermittent clients, since in a real scenario some clients may leave the training and new ones may join it.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning is a data-decentralization, privacy-preserving technique
used to perform machine or deep learning in a secure way. In this paper we
present theoretical aspects of federated learning, such as the presentation of
an aggregation operator, the different types of federated learning, and issues
to be taken into account regarding the distribution of the clients' data,
together with an exhaustive analysis of a use case in which the number
of clients varies. Specifically, a use case of medical image analysis is
proposed, using chest X-ray images obtained from an open data repository. In
addition to the advantages related to privacy, improvements in predictions (in
terms of accuracy and area under the curve) and reduction of execution times
will be studied with respect to the classical case (the centralized approach).
Different clients are simulated from the training data and assigned samples in an
unbalanced manner, i.e., they do not all hold the same amount of data. The
results obtained with three and with ten clients are presented and compared
with each other and with the centralized case. Two approaches are
analyzed for the case of intermittent clients, since in a real scenario some
clients may leave the training and new ones may join it. The
evolution of the results for the test set in terms of accuracy, area under the
curve and execution time is shown as the number of clients into which the
original data is divided increases. Finally, improvements and future work in
the field are proposed.
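The aggregation operator and the intermittent-client setup described in the abstract can be pictured with a short sketch. The snippet below is not the authors' code: it assumes a FedAvg-style weighted average (client weights proportional to their number of samples), a random unbalanced split of the training data, a placeholder local update, and a fixed per-round participation probability to mimic clients leaving and re-entering the training; all of these choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def unbalanced_split(n_samples, n_clients):
    """Split sample indices into shards of unequal size (illustrative Dirichlet sizing)."""
    proportions = rng.dirichlet(np.ones(n_clients) * 0.5)
    sizes = (proportions * n_samples).astype(int)
    sizes[0] += n_samples - sizes.sum()          # make the shard sizes add up exactly
    idx = rng.permutation(n_samples)
    return np.split(idx, np.cumsum(sizes)[:-1])

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average client parameters weighted by local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def local_update(global_w, shard, lr=0.1, epochs=1):
    """Placeholder local step; a real client would train a CNN on its X-ray shard."""
    return global_w - lr * epochs * rng.normal(scale=0.01, size=global_w.shape)

n_samples, n_clients, n_rounds = 5000, 10, 20
shards = unbalanced_split(n_samples, n_clients)
global_w = np.zeros(100)                         # stand-in for the model parameters

for rnd in range(n_rounds):
    # Intermittent clients: each client participates in this round with probability 0.8.
    active = [c for c in range(n_clients) if rng.random() < 0.8] or [0]
    updates = [local_update(global_w, shards[c]) for c in active]
    sizes = [len(shards[c]) for c in active]
    global_w = fedavg(updates, sizes)
```

The paper compares two different approaches to handling intermittent clients; the fixed participation probability above is only a generic stand-in for either of them.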
Related papers
- Personalized Federated Learning with Mixture of Models for Adaptive Prediction and Model Fine-Tuning [22.705411388403036]
This paper develops a novel personalized federated learning algorithm.
Each client constructs a personalized model by combining a locally fine-tuned model with multiple federated models.
Theoretical analysis and experiments on real datasets corroborate the effectiveness of this approach.
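As a rough illustration of the mixture idea (not the paper's actual algorithm), a client could form its personalized prediction as a convex combination of its locally fine-tuned model and several federated models, with the mixture weights chosen or validated locally; the weights and callables below are hypothetical.

```python
import numpy as np

def personalized_predict(x, local_model, federated_models, mix_weights):
    """Convex combination of a locally fine-tuned model and several federated models."""
    models = [local_model] + list(federated_models)
    w = np.asarray(mix_weights, dtype=float)
    w = w / w.sum()                              # keep the combination convex
    preds = np.stack([m(x) for m in models])     # each model is a callable returning predictions
    return np.tensordot(w, preds, axes=1)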
arXiv Detail & Related papers (2024-10-28T21:20:51Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely directly on centralized training over data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Federated Semi-supervised Learning for Medical Image Segmentation with intra-client and inter-client Consistency [10.16245019262119]
Federated learning aims to train a shared model of isolated clients without local data exchange.
In this work, we propose a novel federated semi-supervised learning framework for medical image segmentation.
arXiv Detail & Related papers (2024-03-19T12:52:38Z) - Investigation of Federated Learning Algorithms for Retinal Optical
Coherence Tomography Image Classification with Statistical Heterogeneity [6.318288071829899]
We investigate the effectiveness of FedAvg and FedProx to train an OCT image classification model in a decentralized fashion.
We partitioned a publicly available OCT dataset across multiple clients under IID and Non-IID settings and conducted local training on the subsets for each client.
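A common way to create the IID and Non-IID client partitions mentioned above is uniform random splitting versus label-skewed splitting with a Dirichlet prior; the sketch below shows this generic recipe (the concentration parameter and client count are illustrative, not the paper's exact settings).

```python
import numpy as np

rng = np.random.default_rng(0)

def iid_partition(labels, n_clients):
    """IID: shuffle all indices and split them evenly across clients."""
    idx = rng.permutation(len(labels))
    return np.array_split(idx, n_clients)

def dirichlet_partition(labels, n_clients, alpha=0.3):
    """Non-IID: per class, allocate indices to clients with Dirichlet proportions."""
    parts = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(np.full(n_clients, alpha))
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for part, chunk in zip(parts, np.split(idx, cuts)):
            part.extend(chunk.tolist())
    return [np.array(p) for p in parts]
```

FedAvg then averages the locally trained weights, while FedProx additionally adds a proximal term mu/2 * ||w - w_global||^2 to each client's local objective to limit drift under such non-IID partitions.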
arXiv Detail & Related papers (2024-02-15T15:58:42Z) - Client-specific Property Inference against Secure Aggregation in
Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
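To see how a simple linear model could capture a client-specific property from aggregated updates, consider the hedged sketch below: an attacker who observes per-round aggregated update vectors and knows, for some rounds, whether the target property held fits a logistic regression and applies it to new rounds. The synthetic features and attack setup are illustrative, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one aggregated-update vector per round, plus a label saying
# whether the target client's property was present in that round.
rng = np.random.default_rng(0)
aggregated_updates = rng.normal(size=(200, 512))   # 200 rounds, 512-dim updates
property_present = rng.integers(0, 2, size=200)    # ground truth used to train the attack

attack = LogisticRegression(max_iter=1000).fit(aggregated_updates, property_present)
new_round_update = rng.normal(size=(1, 512))
print("P(property present) =", attack.predict_proba(new_round_update)[0, 1])
```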
arXiv Detail & Related papers (2023-03-07T14:11:01Z) - When to Trust Aggregated Gradients: Addressing Negative Client Sampling
in Federated Learning [41.51682329500003]
We propose a novel learning rate adaptation mechanism to adjust the server learning rate for the aggregated gradient in each round.
We make theoretical deductions to find a meaningful and robust indicator that is positively related to the optimal server learning rate.
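The server-side learning-rate adaptation can be pictured as follows; the indicator used here (cosine similarity between the current aggregated gradient and an exponential moving average of past ones) is a hypothetical stand-in, since the summary does not specify the paper's indicator.

```python
import numpy as np

def adaptive_server_step(theta, agg_grad, grad_ema, base_lr=1.0, beta=0.9):
    """Scale the server learning rate by how consistent this round's aggregated
    gradient is with the recent trend (illustrative indicator, not the paper's)."""
    grad_ema = beta * grad_ema + (1 - beta) * agg_grad
    cos = np.dot(agg_grad, grad_ema) / (np.linalg.norm(agg_grad) * np.linalg.norm(grad_ema) + 1e-12)
    lr = base_lr * max(cos, 0.1)       # shrink the step when the sampled clients look unrepresentative
    return theta - lr * agg_grad, grad_ema
```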
arXiv Detail & Related papers (2023-01-25T03:52:45Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Practical Challenges in Differentially-Private Federated Survival
Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the process of training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the noise added for differential privacy makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
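A minimal sketch of this setting, assuming Gaussian-noise differential privacy on clipped client updates and, purely as a placeholder for the unspecified post-processing stage, a running average of the last few global models (post-processing does not weaken differential-privacy guarantees, which is why such a stage is admissible).

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_aggregate(updates, clip=1.0, noise_mult=1.0):
    """Clip each client update, average, and add Gaussian noise (DP-FedAvg style)."""
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    mean = np.mean(clipped, axis=0)
    return mean + rng.normal(scale=noise_mult * clip / len(updates), size=mean.shape)

def postprocess(model_history, k=3):
    """Hypothetical post-processing stage: average the last k noisy global models."""
    return np.mean(model_history[-k:], axis=0)
```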
arXiv Detail & Related papers (2022-02-08T10:03:24Z) - Decentralized federated learning of deep neural networks on non-iid data [0.6335848702857039]
We tackle the non-IID problem of learning a personalized deep learning model in a decentralized setting.
We propose a method named Performance-Based Neighbor Selection (PENS) where clients with similar data detect each other and cooperate.
PENS is able to achieve higher accuracies as compared to strong baselines.
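One way to read "clients with similar data detect each other" is that each client evaluates the models it receives from peers on its own local data and keeps the best-performing senders as neighbors; the snippet below is a hedged sketch of that idea, not the exact PENS protocol.

```python
def select_neighbors(local_eval_loss, peer_models, top_k=3):
    """Keep the peers whose models achieve the lowest loss on this client's local data."""
    losses = {peer: local_eval_loss(model) for peer, model in peer_models.items()}
    return sorted(losses, key=losses.get)[:top_k]
```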
arXiv Detail & Related papers (2021-07-18T19:05:44Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
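The alternating scheme in the last entry above, many cheap updates of a small client-specific head for every update of the shared representation that the server then averages, can be sketched as follows; the linear representation and head, the step counts, and the synthetic data are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_round(B, head, X, y, head_steps=10, lr=0.01):
    """Many gradient steps on the client's low-dimensional head, then a single
    step on the shared representation B (linear model y ~ (X @ B.T) @ head)."""
    Z = X @ B.T                                  # project data into the shared low-dim space
    for _ in range(head_steps):                  # cheap, purely local personalization
        head = head - lr * Z.T @ (Z @ head - y) / len(y)
    residual = Z @ head - y                      # one update of the shared representation
    B = B - lr * np.outer(head, residual @ X) / len(y)
    return B, head

d, k, n_clients = 50, 5, 10
B = rng.normal(size=(k, d))                      # shared low-dimensional representation
heads = [rng.normal(size=k) for _ in range(n_clients)]
clients = [(rng.normal(size=(100, d)), rng.normal(size=100)) for _ in range(n_clients)]

for rnd in range(20):
    B_updates = []
    for c, (X, y) in enumerate(clients):
        B_c, heads[c] = client_round(B, heads[c], X, y)
        B_updates.append(B_c)
    B = np.mean(B_updates, axis=0)               # server averages only the representation
```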
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.