Brain tumor segmentation using synthetic MR images -- A comparison of
GANs and diffusion models
- URL: http://arxiv.org/abs/2306.02986v2
- Date: Fri, 5 Jan 2024 12:48:31 GMT
- Title: Brain tumor segmentation using synthetic MR images -- A comparison of
GANs and diffusion models
- Authors: Muhammad Usman Akbar, Måns Larsson, Anders Eklund
- Abstract summary: Generative AI models, such as generative adversarial networks (GANs) and diffusion models, can today produce very realistic synthetic images.
We show that segmentation networks trained on synthetic images reach Dice scores that are 80% - 90% of the Dice scores obtained when training with real images.
Our conclusion is that sharing synthetic medical images is a viable alternative to sharing real images, but that further work is required.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large annotated datasets are required for training deep learning models, but
in medical imaging data sharing is often complicated due to ethics,
anonymization and data protection legislation. Generative AI models, such as
generative adversarial networks (GANs) and diffusion models, can today produce
very realistic synthetic images, and can potentially facilitate data sharing.
However, in order to share synthetic medical images it must first be
demonstrated that they can be used for training different networks with
acceptable performance. Here, we therefore comprehensively evaluate four GANs
(progressive GAN, StyleGAN 1-3) and a diffusion model for the task of brain
tumor segmentation (using two segmentation networks, U-Net and a Swin
transformer). Our results show that segmentation networks trained on synthetic
images reach Dice scores that are 80% - 90% of the Dice scores obtained when
training with real images, but that memorization of the training images can be a
problem for diffusion models if the original dataset is too small. Our conclusion
is that sharing synthetic medical images is a viable alternative to sharing real
images, but that further work is required. The trained generative models and the
generated synthetic images are shared on the AIDA data hub.
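To make the evaluation protocol concrete, below is a minimal, hypothetical sketch (not code from the paper) of the two quantities discussed in the abstract: the ratio between the Dice score of a segmentation model trained on synthetic images and one trained on real images, and a naive memorization check that flags synthetic images that are near copies of training images. All function and variable names are illustrative assumptions.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary tumor masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def relative_dice(preds_synthetic, preds_real, targets):
    """Mean Dice of the synthetic-trained model divided by that of the real-trained model."""
    dice_syn = np.mean([dice_score(p, t) for p, t in zip(preds_synthetic, targets)])
    dice_real = np.mean([dice_score(p, t) for p, t in zip(preds_real, targets)])
    return dice_syn / dice_real  # the abstract reports this ratio at roughly 0.8 - 0.9

def max_training_similarity(synthetic_img, training_imgs, eps=1e-8):
    """Naive memorization check: highest Pearson correlation between a synthetic
    image and any training image; values close to 1.0 suggest a near copy."""
    s = np.asarray(synthetic_img, dtype=float)
    s = (s - s.mean()) / (s.std() + eps)
    best = -1.0
    for img in training_imgs:
        r = np.asarray(img, dtype=float)
        r = (r - r.mean()) / (r.std() + eps)
        best = max(best, float(np.mean(s * r)))
    return best
```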
Related papers
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned on medical images, fitting them for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Diffusion-Based Approaches in Medical Image Generation and Analysis [0.7834170106487724]
Data scarcity in medical imaging poses significant challenges due to privacy concerns.
Questions remain about the performance of convolutional neural network (CNN) models on original and synthetic datasets.
In this study, we investigated the effectiveness of diffusion models for generating synthetic medical images to train CNNs in three domains.
arXiv Detail & Related papers (2024-12-22T05:02:05Z)
- Evaluating Utility of Memory Efficient Medical Image Generation: A Study on Lung Nodule Segmentation [0.0]
This work proposes a memory-efficient patch-wise denoising diffusion probabilistic model (DDPM) for generating synthetic medical images.
Our approach generates high-utility synthetic images with nodule segmentation while efficiently managing memory constraints.
We evaluate the method in two scenarios: training a segmentation model exclusively on synthetic data, and augmenting real-world training data with synthetic images.
arXiv Detail & Related papers (2024-10-16T13:20:57Z)
- MediSyn: A Generalist Text-Guided Latent Diffusion Model For Diverse Medical Image Synthesis [4.541407789437896]
MediSyn is a text-guided latent diffusion model capable of generating synthetic images from 6 medical specialties and 10 image types.
A direct comparison of the synthetic images against the real images confirms that our model synthesizes novel images and, crucially, may preserve patient privacy.
Our findings highlight the immense potential for generalist image generative models to accelerate algorithmic research and development in medicine.
arXiv Detail & Related papers (2024-05-16T04:28:44Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
arXiv Detail & Related papers (2024-02-16T16:47:21Z)
- Beware of diffusion models for synthesizing medical images -- A comparison with GANs in terms of memorizing brain MRI and chest x-ray images [0.0]
We train StyleGAN and a diffusion model, using BRATS20, BRATS21 and a chest x-ray pneumonia dataset, to synthesize brain MRI and chest x-ray images.
Our results show that diffusion models are more likely to memorize the training images, compared to StyleGAN, especially for small datasets.
arXiv Detail & Related papers (2023-05-12T17:55:40Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the power and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train classification models using real images with classic data augmentation methods, and classification models using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- Image Translation for Medical Image Generation -- Ischemic Stroke Lesions [0.0]
Synthetic databases with annotated pathologies could provide the required amounts of training data.
We train different image-to-image translation models to synthesize magnetic resonance images of brain volumes with and without stroke lesions.
We show that for a small database of only 10 or 50 clinical cases, synthetic data augmentation yields significant improvement.
arXiv Detail & Related papers (2020-10-05T09:12:28Z)
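Several of the entries above (for example, the lung nodule and stroke lesion studies) evaluate the same two scenarios as the main paper: training exclusively on synthetic data, or augmenting a small set of real cases with synthetic ones. Below is a minimal, hypothetical sketch of how such a training set could be assembled; all names are placeholders and not taken from any of the papers.

```python
import numpy as np

def build_training_set(real_images, real_masks, synthetic_images, synthetic_masks,
                       mode="augment", seed=0):
    """Assemble (images, masks) arrays for one of the two evaluation scenarios."""
    if mode == "synthetic_only":
        # Scenario 1: train purely on synthetic images and their masks.
        return np.asarray(synthetic_images), np.asarray(synthetic_masks)
    if mode == "augment":
        # Scenario 2: augment a small real dataset with synthetic cases.
        images = np.concatenate([real_images, synthetic_images], axis=0)
        masks = np.concatenate([real_masks, synthetic_masks], axis=0)
        # Shuffle so real and synthetic cases are interleaved during training.
        order = np.random.default_rng(seed).permutation(len(images))
        return images[order], masks[order]
    raise ValueError(f"unknown mode: {mode!r}")
```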