Generative Adversarial Network Based Synthetic Learning and a Novel
Domain Relevant Loss Term for Spine Radiographs
- URL: http://arxiv.org/abs/2205.02843v1
- Date: Thu, 5 May 2022 03:58:19 GMT
- Title: Generative Adversarial Network Based Synthetic Learning and a Novel
Domain Relevant Loss Term for Spine Radiographs
- Authors: Ethan Schonfeld, Anand Veeravagu
- Abstract summary: We accomplish GAN generation of synthetic spine radiographs without meaningful input, which a literature review suggests is a first.
The introduction of a new clinical loss term for the generator was found to increase generation recall and to accelerate model training.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Problem: There is a lack of big data for training deep learning
models in medicine, driven by the time cost of data collection and by privacy
concerns. Generative adversarial networks (GANs) offer the potential both to
generate new data and to use this newly generated data, without including
patients' real data, in downstream applications.
Approach: A series of GANs were trained and applied to a downstream computer
vision task: spine radiograph abnormality classification. Separate classifiers
were trained with either access or no access to the original imaging. The
trained GANs included: a conditional StyleGAN2 with adaptive discriminator
augmentation (ADA); a conditional StyleGAN2 with ADA generating spine
radiographs conditional on lesion type; and a StyleGAN2 with ADA conditional
on abnormality that uses a novel clinical loss term for the generator
(SpineGAN). Finally, a StyleGAN2 with ADA conditional on abnormality was
trained under differential privacy, and an ablation study was performed on its
differential privacy impositions.
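The abstract does not publish the exact form of SpineGAN's clinical loss term, but its role can be sketched: the generator minimizes the usual StyleGAN2 non-saturating adversarial loss plus a weighted domain-relevant penalty. The `clinical_penalty` value and the weight `lam` below are placeholders, not the paper's actual formulation.

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus, log(1 + exp(x))."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def generator_loss(fake_logits, clinical_penalty, lam=0.1):
    """StyleGAN2-style non-saturating generator loss plus a weighted
    domain-relevant ("clinical") term. The clinical term is a stand-in
    here; the paper does not specify its exact form."""
    adv = softplus(-fake_logits).mean()  # non-saturating GAN loss
    return adv + lam * clinical_penalty

# Example: discriminator logits for a batch of generated radiographs.
logits = np.array([0.5, -1.2, 2.0])
loss = generator_loss(logits, clinical_penalty=0.3, lam=0.1)
```

The weighting `lam` trades off adversarial realism against the domain term; the abstract reports that adding such a term increased generation recall and sped up training.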
Key Results: We accomplish GAN generation of synthetic spine radiographs
without meaningful input; a literature review suggests this is the first such
result. We further demonstrate the success of synthetic learning for the spine
domain with a downstream clinical classification task (AUC of 0.830 using
synthetic data compared to AUC of 0.886 using the real data). Importantly, the
introduction of a new clinical loss term for the generator was found to
increase generation recall and to accelerate model training. Lastly, we
demonstrate that, on a limited-size medical dataset, differential privacy
impositions severely impede GAN training, and that this is specifically due to
the requirement for gradient perturbation with noise.
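The gradient perturbation the abstract identifies as the bottleneck is the standard DP-SGD privatization step: clip each example's gradient to a fixed norm, sum, and add Gaussian noise. A minimal sketch of that step (the clip norm and noise multiplier are illustrative defaults, not the paper's settings):

```python
import numpy as np

def dp_perturb_gradients(per_example_grads, clip_norm=1.0,
                         noise_mult=1.1, rng=None):
    """DP-SGD-style gradient privatization: clip each per-example
    gradient to clip_norm, sum, add Gaussian noise scaled by
    noise_mult * clip_norm, and average over the batch."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

# Two per-example gradients with norms 5.0 and 0.5: only the first is clipped.
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
priv = dp_perturb_gradients(grads)
```

On a small dataset the batch average divides the noise by only a few examples, so the perturbation dominates the signal, which is consistent with the ablation finding that the noise requirement, rather than clipping alone, impedes GAN training.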
Related papers
- Local Lesion Generation is Effective for Capsule Endoscopy Image Data Augmentation in a Limited Data Setting [0.0]
We propose and evaluate two local lesion generation approaches to address the challenge of augmenting small medical image datasets.
The first approach employs the Poisson Image Editing algorithm, a classical image processing technique, to create realistic image composites.
The second approach introduces a novel generative method, leveraging a fine-tuned Image Inpainting GAN to synthesize realistic lesions.
arXiv Detail & Related papers (2024-11-05T13:44:25Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Generative Adversarial Networks for Data Augmentation [0.0]
GANs have been utilized in medical image analysis for various tasks, including data augmentation, image creation, and domain adaptation.
GANs can generate synthetic samples that can be used to increase the available dataset.
It is essential to note that the use of GANs in medical imaging is still an active area of research to ensure that the produced images are of high quality and suitable for use in clinical settings.
arXiv Detail & Related papers (2023-06-03T06:33:33Z)
- Improved Techniques for the Conditional Generative Augmentation of Clinical Audio Data [36.45569352490318]
We propose a conditional generative adversarial neural network-based augmentation method which is able to synthesize mel spectrograms from a learned data distribution.
We show that our method outperforms all classical audio augmentation techniques and previously published generative methods in terms of generated sample quality.
The proposed model advances the state-of-the-art in the augmentation of clinical audio data and improves the data bottleneck for the design of clinical acoustic sensing systems.
arXiv Detail & Related papers (2022-11-05T10:58:04Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Categorical EHR Imputation with Generative Adversarial Nets [11.171712535005357]
We propose a simple yet effective approach based on previous work on GANs for data imputation.
We show that our imputation approach largely improves the prediction accuracy, compared to more traditional data imputation approaches.
arXiv Detail & Related papers (2021-08-03T18:50:26Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build the domain irrelevant latent space image representation and demonstrate this method to outperform existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, which is a simple yet effective meta-learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- On Leveraging Pretrained GANs for Generation with Limited Data [83.32972353800633]
Generative adversarial networks (GANs) can generate highly realistic images that are often indistinguishable (by humans) from real images.
Most images so generated are not contained in a training dataset, suggesting potential for augmenting training sets with GAN-generated data.
We leverage existing GAN models pretrained on large-scale datasets to introduce additional knowledge, following the concept of transfer learning.
An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
arXiv Detail & Related papers (2020-02-26T21:53:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.