Federated Learning: Issues in Medical Application
- URL: http://arxiv.org/abs/2109.00202v1
- Date: Wed, 1 Sep 2021 06:04:08 GMT
- Title: Federated Learning: Issues in Medical Application
- Authors: Joo Hun Yoo, Hyejun Jeong, Jaehyeok Lee, Tai-Myoung Chung
- Abstract summary: Since federated learning, which makes AI training possible without moving local data around, was introduced by Google in 2017, it has been actively studied, particularly in the field of medicine.
The idea of training machine learning models without collecting data from local clients is very attractive because the data remain at local sites.
However, federated learning techniques still have various open issues stemming from their own characteristics, such as non-identical data distributions, client participation management, and vulnerable environments.
- Score: 0.7598921989525735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since federated learning, which makes AI training possible without moving local data around, was introduced by Google in 2017, it has been actively studied, particularly in the field of medicine. In fact, the idea of training machine learning models without collecting data from local clients is very attractive because the data remain at local sites. However, federated learning techniques still have various open issues stemming from their own characteristics, such as non-identical data distributions, client participation management, and vulnerable environments. In this presentation, we briefly overview the current issues that must be resolved to make federated learning flawlessly useful in the real world; they relate to data/system heterogeneity, client management, traceability, and security. We also introduce the modularized federated learning framework we are currently developing to experiment with various techniques and protocols and to find solutions to the aforementioned issues. The framework will be opened to the public once development is complete.
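To make the setting concrete, the following is a minimal sketch of the basic federated averaging loop (FedAvg-style) on toy, non-identically distributed client data. It only illustrates the general technique the abstract refers to; it is not the authors' modularized framework, and the linear model, learning rate, and synthetic client data are assumptions made for the example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.05, epochs=5):
    """One client's local training: plain gradient descent on a linear
    least-squares model. The raw data (X, y) never leave this function;
    only the updated weights are returned to the server."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One FedAvg-style round: every client trains locally and the server
    averages the returned weights, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:                     # only parameters are exchanged
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

# Toy non-identical (non-IID) client data: each silo sees a shifted input range.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 3.0, -3.0):               # per-client distribution shift
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0.0, 0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)   # moves toward true_w even though the client distributions differ
```

The non-identical client distributions above are exactly the kind of data heterogeneity the abstract lists as an open issue: with stronger shifts or divergent local label distributions, plain averaging degrades and more careful aggregation and client management are needed.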
Related papers
- Privacy-Preserving Edge Federated Learning for Intelligent Mobile-Health Systems [4.082799056366928]
We propose a privacy-preserving edge FL framework for resource-constrained mobile-health and wearable technologies over the IoT infrastructure.
We evaluate our proposed framework extensively and provide the implementation of our technique on Amazon's AWS cloud platform.
arXiv Detail & Related papers (2024-05-09T08:15:31Z)
- Private Knowledge Sharing in Distributed Learning: A Survey [50.51431815732716]
The rise of Artificial Intelligence has revolutionized numerous industries and transformed the way society operates.
It is crucial to utilize information in learning processes that are either distributed or owned by different entities.
Modern data-driven services have been developed to integrate distributed knowledge entities into their outcomes.
arXiv Detail & Related papers (2024-02-08T07:18:23Z)
- Exploring Machine Learning Models for Federated Learning: A Review of Approaches, Performance, and Limitations [1.1060425537315088]
Federated learning is a distributed learning framework enhanced to preserve the privacy of individuals' data.
In times of crisis, when real-time decision-making is critical, federated learning allows multiple entities to work collectively without sharing sensitive data.
This paper is a systematic review of the literature on privacy-preserving machine learning in the last few years.
arXiv Detail & Related papers (2023-11-17T19:23:21Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating the locally-computed parameter updates with the training data from spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as medical healthcare, computer vision, and the Internet of Things (IoT). (A sketch of the weighted-geometric-mean idea follows this entry.)
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
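The FedILC entry above is built around a weighted geometric mean of per-silo quantities. The sketch below only illustrates why that averaging rule rewards cross-silo agreement, compared with an arithmetic mean; it is not the FedILC algorithm itself, and the per-silo curvature values, silo sizes, and diagonal interpretation are assumptions made for the example.

```python
import numpy as np

def weighted_geometric_mean(values, weights, eps=1e-12):
    """Element-wise weighted geometric mean of positive arrays:
    exp(sum_k w_k * log(v_k)), with the weights normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    logs = np.log(np.clip(np.stack(values), eps, None))
    return np.exp(np.tensordot(w, logs, axes=1))

# Toy per-silo (diagonal) curvature estimates, e.g. averaged squared gradients.
silo_curvatures = [
    np.array([4.0, 0.01]),   # silo 1: strong signal on dim 0, almost none on dim 1
    np.array([3.5, 0.02]),   # silo 2: agrees with silo 1
    np.array([4.2, 9.00]),   # silo 3: disagrees, large curvature on dim 1
]
sizes = [100, 100, 100]

arith = np.average(np.stack(silo_curvatures), axis=0, weights=sizes)
geo = weighted_geometric_mean(silo_curvatures, sizes)
print("arithmetic mean:", arith)  # dim 1 is dominated by the one disagreeing silo
print("geometric mean: ", geo)    # dim 1 stays small: it rises only if all silos agree
```

The contrast on the second dimension is the point: the geometric mean is pulled down by any silo that sees little signal there, which is one way to favor directions that are consistent across silos.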
- The FeatureCloud AI Store for Federated Learning in Biomedicine and Beyond [0.7517525791460022]
Privacy-preserving methods, such as Federated Learning (FL), allow for training ML models without sharing sensitive data.
We present the FeatureCloud AI Store for FL as an all-in-one platform for biomedical research and other applications.
arXiv Detail & Related papers (2021-05-12T15:31:46Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- Federated Transfer Learning: concept and applications [2.474754293747645]
Federated transfer learning (FTL) allows knowledge to be transferred across domains that do not have many overlapping features and users.
In this work we study the background of FTL and its different existing applications.
We further analyze FTL from privacy and machine learning perspective.
arXiv Detail & Related papers (2020-09-26T19:46:07Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks. (A toy sketch of this combination follows this entry.)
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
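The WAFFLe entry above combines an Indian Buffet Process (IBP) prior with a shared dictionary of weight factors. Below is a toy sketch of that combination only: an IBP-style binary selection over shared factors, composed into per-client layer weights. It is not the WAFFLe training procedure; the layer shape, the concentration parameter, and the additive way factors are combined are assumptions for illustration.

```python
import numpy as np

def sample_ibp(num_clients, alpha, rng):
    """Sequential ("restaurant") construction of the Indian Buffet Process:
    client i reuses factor k with probability m_k / i (m_k = earlier users of k)
    and then opens Poisson(alpha / i) new factors. Returns a binary matrix Z."""
    factors = []                                 # factors[k] = set of clients using factor k
    for i in range(1, num_clients + 1):
        for users in factors:                    # reuse existing factors
            if rng.random() < len(users) / i:
                users.add(i)
        for _ in range(rng.poisson(alpha / i)):  # open brand-new factors
            factors.append({i})
    Z = np.zeros((num_clients, len(factors)), dtype=int)
    for k, users in enumerate(factors):
        for i in users:
            Z[i - 1, k] = 1
    return Z

def client_weights(Z, dictionary):
    """Each client's layer weights are the sum of the shared factors it selects."""
    # Z: (clients, K) binary; dictionary: (K, d_out, d_in) shared weight factors
    return np.einsum("ck,kij->cij", Z, dictionary)

rng = np.random.default_rng(0)
Z = sample_ibp(num_clients=5, alpha=2.0, rng=rng)
dictionary = rng.normal(size=(Z.shape[1], 4, 3))  # shared factors for a 3 -> 4 layer
W = client_weights(Z, dictionary)                 # per-client personalized weights
print(Z)          # which shared factors each client selected
print(W.shape)    # (5, 4, 3)
```

The sketch stops at composing per-client weights; how the shared dictionary and the selections are learned and communicated is where the actual method lives.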
- Federated and continual learning for classification tasks in a society of devices [59.45414406974091]
Light Federated and Continual Consensus (LFedCon2) is a new federated and continual architecture that uses light, traditional learners.
Our method allows powerless devices (such as smartphones or robots) to learn in real time, locally, continuously, autonomously and from users.
In order to test our proposal, we have applied it in a heterogeneous community of smartphone users to solve the problem of walking recognition.
arXiv Detail & Related papers (2020-06-12T12:37:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.