Synthetic Learning: Learn From Distributed Asynchronized Discriminator
GAN Without Sharing Medical Image Data
- URL: http://arxiv.org/abs/2006.00080v2
- Date: Sun, 14 Jun 2020 04:18:29 GMT
- Title: Synthetic Learning: Learn From Distributed Asynchronized Discriminator
GAN Without Sharing Medical Image Data
- Authors: Qi Chang, Hui Qu, Yikai Zhang, Mert Sabuncu, Chao Chen, Tong Zhang and
Dimitris Metaxas
- Abstract summary: We propose a data privacy-preserving and communication-efficient distributed GAN learning framework named Distributed Asynchronized Discriminator GAN (AsynDGAN).
- Score: 21.725983290877753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a data privacy-preserving and
communication-efficient distributed GAN learning framework named Distributed
Asynchronized Discriminator GAN (AsynDGAN). Our proposed framework trains a
central generator that learns from distributed discriminators, and uses the
generated synthetic images solely to train the segmentation model. We validate
the proposed framework on the health-entity learning problem, which is known to
be privacy-sensitive. Our experiments show that our approach: 1) can learn the
real images' distribution from multiple datasets without sharing patients' raw
data; 2) is more efficient and requires lower bandwidth than other distributed
deep learning methods; 3) achieves higher performance than a model trained on
a single real dataset, and almost the same performance as a model trained on
all real datasets; 4) has provable guarantees that the generator learns the
distributed distribution in an all-important fashion and is thus unbiased.
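As a rough illustration of the communication pattern described above (not the paper's actual architecture), the sketch below simulates a central generator that receives only locally computed feedback from each data site. The per-site adversarial discriminator is replaced by a simple mean-matching loss, and all data and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "medical sites" hold private 1-D data; raw samples never leave a site.
site_data = [rng.normal(2.0, 1.0, 500), rng.normal(6.0, 1.0, 500)]

mu = 0.0  # the central generator's only parameter here (the mean it synthesizes)

for step in range(200):
    fake = mu + rng.normal(0.0, 1.0, 500)  # synthetic batch from the central generator
    grads = []
    for data in site_data:
        # Each site computes feedback locally and returns only a scalar
        # gradient -- a mean-matching stand-in for the discriminator signal.
        grads.append(2.0 * (fake.mean() - data.mean()))
    mu -= 0.05 * float(np.mean(grads))  # generator update from aggregated feedback

# mu drifts toward the average of the site means (about 4.0) without any
# site ever transmitting raw data.
```

The point is the message shape: sites exchange small per-batch signals rather than images, which is where the bandwidth advantage over gradient-sharing distributed training comes from.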
Related papers
- Score Neural Operator: A Generative Model for Learning and Generalizing Across Multiple Probability Distributions [7.851040662069365]
We introduce the Score Neural Operator, which learns the mapping from multiple probability distributions to their score functions within a unified framework.
Our approach offers significant potential for few-shot learning applications, where a single image from a new distribution can be leveraged to generate multiple distinct images from that distribution.
arXiv Detail & Related papers (2024-10-11T06:00:34Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
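FedPTR's projected trajectory regularization is more involved than a short sketch allows, but the federated setting it refines can be shown with a plain FedAvg-style loop (a deliberately simplified stand-in, not FedPTR itself; all data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Three clients with non-identically distributed private data.
client_data = [rng.normal(m, 1.0, 100) for m in (0.0, 2.0, 10.0)]
w = 0.0  # global scalar model: an estimate of the population mean

for rnd in range(50):
    local = []
    for data in client_data:
        wi = w
        for _ in range(5):                 # a few local SGD steps on squared error
            wi -= 0.1 * 2.0 * (wi - data.mean())
        local.append(wi)
    w = float(np.mean(local))              # server averages the client models

# w settles near the mean of the client optima (about 4.0); only model
# parameters, never data, cross the network.
```

The heterogeneity problem is visible even here: each client's local steps pull the model toward its own optimum, which is exactly the drift that trajectory-regularization methods aim to control.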
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG)
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z)
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt)
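The core intuition, that following a score field persistently pushes samples toward the data manifold, can be seen in one dimension, where the Gaussian score has a closed form. This is an illustrative toy, not the SMaRt regularizer itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# For 1-D Gaussian data N(m, s^2) the score has a closed form:
#   d/dx log p(x) = (m - x) / s^2
m, s = 3.0, 1.0
score = lambda x: (m - x) / s ** 2

# "Generated" points start far from the data; repeatedly stepping along the
# score field pushes them toward the high-density region of the data.
x = rng.normal(-5.0, 0.5, 1000)
for _ in range(100):
    x = x + 0.05 * score(x)

# x now concentrates near the data mean m = 3.0.
```

In the paper's setting the score is not known in closed form and must itself be estimated, but the corrective direction it supplies to generated samples works the same way.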
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
- Cross-feature Contrastive Loss for Decentralized Deep Learning on Heterogeneous Data [8.946847190099206]
We present a novel approach for decentralized learning on heterogeneous data.
Cross-features for a pair of neighboring agents are the features obtained from the data of an agent with respect to the model parameters of the other agent.
Our experiments show that the proposed method achieves superior performance (0.2-4% improvement in test accuracy) compared to other existing techniques for decentralized learning on heterogeneous data.
arXiv Detail & Related papers (2023-10-24T14:48:23Z)
- Collaborative Learning of Distributions under Heterogeneity and Communication Constraints [35.82172666266493]
In machine learning, users often have to collaborate to learn distributions that generate the data.
We propose a novel two-stage method named SHIFT: First, the users collaborate by communicating with the server to learn a central distribution.
Then, the learned central distribution is fine-tuned to estimate the individual distributions of users.
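The two stages can be sketched with per-user Gaussian means (a minimal toy under assumed data; the weighting scheme `alpha` is illustrative, not SHIFT's actual estimator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Four users, each with a few samples from its own (heterogeneous) distribution.
true_means = [1.0, 2.0, 3.0, 4.0]
samples = [rng.normal(m, 1.0, 20) for m in true_means]

# Stage 1: collaborate via the server to learn a central distribution --
# here just its mean, built from per-user statistics rather than raw data.
central = float(np.mean([s.mean() for s in samples]))

# Stage 2: fine-tune toward each user's local data, shrinking the noisy
# local mean toward the shared central estimate.
alpha = 0.7  # illustrative weight on local data
personal = [alpha * float(s.mean()) + (1 - alpha) * central for s in samples]
```

Pooling first and personalizing second trades a little bias (pull toward the central mean) for much lower variance when each user has few samples.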
arXiv Detail & Related papers (2022-06-01T18:43:06Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
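One common NDA transform is a jigsaw shuffle: patch permutation preserves local statistics but destroys global structure, yielding samples just off the data support. A minimal sketch (the function name and grid size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def jigsaw_nda(img, grid=2):
    """Shuffle image patches so global structure breaks while local
    statistics stay realistic -- an out-of-support negative sample."""
    h, w = img.shape
    ph, pw = h // grid, w // grid
    patches = [img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(patches))
    out = np.empty_like(img)
    for k, p in enumerate(order):
        i, j = divmod(k, grid)
        out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patches[p]
    return out

img = np.arange(16.0).reshape(4, 4)
neg = jigsaw_nda(img)
# In the GAN objective, `neg` is fed to the discriminator as an extra
# "fake" source alongside generator samples.
```

Because such negatives carry the same pixel statistics as real images, they tell the discriminator specifically which global structures matter.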
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Multi-modal AsynDGAN: Learn From Distributed Medical Image Data without Sharing Private Information [55.866673486753115]
We propose an extendable and elastic learning framework to preserve privacy and security.
The proposed framework is named distributed Asynchronized Discriminator Generative Adversarial Networks (AsynDGAN).
arXiv Detail & Related papers (2020-12-15T20:41:24Z)
- Variational Clustering: Leveraging Variational Autoencoders for Image Clustering [8.465172258675763]
Variational Autoencoders (VAEs) naturally lend themselves to learning data distributions in a latent space.
We propose a method based on VAEs where we use a Gaussian Mixture prior to help cluster the images accurately.
Our method simultaneously learns a prior that captures the latent distribution of the images and a posterior to help discriminate well between data points.
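The cluster-assignment step under a Gaussian-mixture prior can be made concrete in one latent dimension (a self-contained toy; the actual method trains encoder and decoder networks around this prior):

```python
import numpy as np

# A Gaussian-mixture prior over a 1-D latent: p(z) = sum_k pi_k N(z; mu_k, sigma^2).
pi = np.array([0.5, 0.5])
mus = np.array([-2.0, 2.0])
sigma = 1.0

def responsibilities(z):
    """Posterior cluster probabilities p(k | z) under the mixture prior;
    the argmax over k plays the role of the cluster assignment."""
    z = np.atleast_1d(z)[:, None]                      # shape (n, 1)
    logp = -0.5 * ((z - mus[None, :]) / sigma) ** 2    # shape (n, K), up to a constant
    w = pi * np.exp(logp)
    return w / w.sum(axis=1, keepdims=True)

r = responsibilities(np.array([-2.0, 0.0, 2.0]))
# Latents near mu_0 = -2 get cluster 0, near mu_1 = 2 get cluster 1,
# and z = 0 is ambiguous (about 0.5 / 0.5).
```

Placing the mixture in the prior is what lets a single latent space both reconstruct images and separate them into clusters.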
arXiv Detail & Related papers (2020-05-10T09:34:48Z)
- Brainstorming Generative Adversarial Networks (BGANs): Towards Multi-Agent Generative Models with Distributed Private Datasets [70.62568022925971]
Generative adversarial networks (GANs) must be fed by large datasets that adequately represent the data space.
In many scenarios, the available datasets may be limited and distributed across multiple agents, each of which is seeking to learn the distribution of the data on its own.
In this paper, a novel brainstorming GAN (BGAN) architecture is proposed using which multiple agents can generate real-like data samples while operating in a fully distributed manner.
arXiv Detail & Related papers (2020-02-02T02:58:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.