Multi-modal AsynDGAN: Learn From Distributed Medical Image Data without
Sharing Private Information
- URL: http://arxiv.org/abs/2012.08604v1
- Date: Tue, 15 Dec 2020 20:41:24 GMT
- Title: Multi-modal AsynDGAN: Learn From Distributed Medical Image Data without
Sharing Private Information
- Authors: Qi Chang, Zhennan Yan, Lohendran Baskaran, Hui Qu, Yikai Zhang, Tong
Zhang, Shaoting Zhang, and Dimitris N. Metaxas
- Abstract summary: We propose an extendable and elastic learning framework to preserve privacy and security.
The proposed framework is named distributed Asynchronized Discriminator Generative Adversarial Networks (AsynDGAN)
- Score: 55.866673486753115
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As deep learning technologies advance, increasingly more data is necessary to
generate general and robust models for various tasks. In the medical domain,
however, large-scale, multi-party data training and analysis are infeasible
due to privacy and data security concerns. In this paper, we
propose an extendable and elastic learning framework to preserve privacy and
security while enabling collaborative learning with efficient communication.
The proposed framework is named distributed Asynchronized Discriminator
Generative Adversarial Networks (AsynDGAN), which consists of a centralized
generator and multiple distributed discriminators. The advantages of our
proposed framework are five-fold: 1) the central generator could learn the real
data distribution from multiple datasets implicitly without sharing the image
data; 2) the framework is applicable for single-modality or multi-modality
data; 3) the learned generator can be used to synthesize samples for
down-stream learning tasks, achieving performance close to that of using actual
samples collected from multiple data centers; 4) the synthetic samples can also
be used to augment data or complete missing modalities for a single data
center; 5) the learning process is more efficient and requires lower bandwidth
than other distributed deep learning methods.
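The communication pattern described above (one central generator, per-center discriminators, no raw images leaving any site) can be sketched in a few lines. The following is a toy simulation, not the authors' implementation: a real AsynDGAN trains deep networks with adversarial losses, whereas here the "generator" is a single learnable mean and each center returns only a moment-matching gradient, purely to keep the example short and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "medical centers", each holding private scalar data it never shares.
centers = [rng.normal(mu, 1.0, size=500) for mu in (2.0, 5.0, 8.0)]

# Central "generator": here just a learnable mean; the real framework
# uses a deep network that produces images.
g_mean = 0.0
lr = 0.1

for step in range(200):
    grad_sum = 0.0
    for data in centers:
        # Synthetic batch is sent to the center ...
        fake = rng.normal(g_mean, 1.0, size=500)
        # ... and the center sends back only a gradient signal,
        # never its raw samples.
        grad_sum += 2.0 * (fake.mean() - data.mean())
    g_mean -= lr * grad_sum / len(centers)

# g_mean drifts toward the pooled mean (2 + 5 + 8) / 3 = 5 even though
# no center ever transmits its private data.
print(g_mean)
```

The property mirrored here is advantage 1) above: all data-dependent computation stays at the centers, and only scalar feedback crosses the network, which is also why the bandwidth requirement in advantage 5) stays low.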
Related papers
- Multi-OCT-SelfNet: Integrating Self-Supervised Learning with Multi-Source Data Fusion for Enhanced Multi-Class Retinal Disease Classification [2.5091334993691206]
Development of a robust deep-learning model for retinal disease diagnosis requires a substantial dataset for training.
The capacity to generalize effectively on smaller datasets remains a persistent challenge.
We've combined a wide range of data sources to improve performance and generalization to new data.
arXiv Detail & Related papers (2024-09-17T17:22:35Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- Beyond Just Vision: A Review on Self-Supervised Representation Learning on Multimodal and Temporal Data [10.006890915441987]
The popularity of self-supervised learning is driven by the fact that traditional models typically require a huge amount of well-annotated data for training.
Self-supervised methods have been introduced to improve the efficiency of training data through discriminative pre-training of models.
We aim to provide the first comprehensive review of multimodal self-supervised learning methods for temporal data.
arXiv Detail & Related papers (2022-06-06T04:59:44Z)
- A communication efficient distributed learning framework for smart environments [0.4898659895355355]
This paper proposes a distributed learning framework to move data analytics closer to where data is generated.
Using distributed machine learning techniques, it is possible to drastically reduce the network overhead, while obtaining performance comparable to the cloud solution.
The analysis also shows when each distributed learning approach is preferable, based on the specific distribution of the data on the nodes.
arXiv Detail & Related papers (2021-09-27T13:44:34Z)
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- Synthetic Learning: Learn From Distributed Asynchronized Discriminator GAN Without Sharing Medical Image Data [21.725983290877753]
We propose a data privacy-preserving and communication efficient distributed GAN learning framework named Distributed Asynchronized Discriminator GAN (AsynDGAN)
arXiv Detail & Related papers (2020-05-29T21:05:49Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN [80.17705319689139]
We propose a data-free knowledge amalgamation strategy to craft a well-behaved multi-task student network from multiple single- or multi-task teachers.
Without any training data, the proposed method achieves surprisingly competitive results, even compared with some fully supervised methods.
arXiv Detail & Related papers (2020-03-20T03:20:52Z)
- Evaluation Framework For Large-scale Federated Learning [10.127616622630514]
Federated learning is proposed as a machine learning setting to enable distributed edge devices, such as mobile phones, to collaboratively learn a shared prediction model.
In this paper, we introduce a framework designed for large-scale federated learning which consists of approaches to generating dataset and modular evaluation framework.
arXiv Detail & Related papers (2020-03-03T15:12:13Z)
- DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier [58.979104709647295]
We bridge the gap between the abundance of available data and the lack of relevant data for the future learning tasks of a trained network.
We use the available data, that may be an imbalanced subset of the original training dataset, or a related domain dataset, to retrieve representative samples.
We demonstrate that data from a related domain can be leveraged to achieve state-of-the-art performance.
arXiv Detail & Related papers (2019-12-27T02:05:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.