DiffuseExpand: Expanding dataset for 2D medical image segmentation using
diffusion models
- URL: http://arxiv.org/abs/2304.13416v2
- Date: Tue, 6 Jun 2023 09:44:19 GMT
- Title: DiffuseExpand: Expanding dataset for 2D medical image segmentation using
diffusion models
- Authors: Shitong Shao, Xiaohan Yuan, Zhen Huang, Ziming Qiu, Shuai Wang and
Kevin Zhou
- Abstract summary: We propose DiffuseExpand for expanding datasets for 2D medical image segmentation using DPM.
DPMs have shown powerful image synthesis performance, even better than Generative Adversarial Networks.
Our comparison and ablation experiments on COVID-19 and CGMH Pelvis datasets demonstrate the effectiveness of DiffuseExpand.
- Score: 5.822451422344051
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dataset expansion can effectively alleviate the data scarcity that
medical image segmentation suffers from due to privacy concerns and labeling
difficulties. However, existing expansion algorithms still face great challenges
due to their inability to guarantee the diversity of synthesized images with paired
segmentation masks. In recent years, Diffusion Probabilistic Models (DPMs) have
shown powerful image synthesis performance, even better than Generative
Adversarial Networks. Based on this insight, we propose an approach called
DiffuseExpand for expanding datasets for 2D medical image segmentation using
DPM, which first samples a variety of masks from Gaussian noise to ensure the
diversity, and then synthesizes images to ensure the alignment of images and
masks. After that, DiffuseExpand chooses high-quality samples to further
enhance the effectiveness of data expansion. Our comparison and ablation
experiments on COVID-19 and CGMH Pelvis datasets demonstrate the effectiveness
of DiffuseExpand. Our code is released at
https://github.com/shaoshitong/DiffuseExpand.
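The three-stage pipeline described in the abstract (sample diverse masks from Gaussian noise, synthesize images conditioned on those masks, then keep only high-quality pairs) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the two DPM samplers are replaced by simple NumPy stand-ins, and `quality_score` is a hypothetical placeholder criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mask(shape=(64, 64)):
    # Stage 1 (stand-in): DiffuseExpand denoises Gaussian noise into a
    # segmentation mask with a DPM; here we simply threshold noise.
    noise = rng.standard_normal(shape)
    return (noise > 1.0).astype(np.uint8)

def synthesize_image(mask):
    # Stage 2 (stand-in): a mask-conditioned DPM would synthesize an
    # aligned image; here the "image" is noise brightened inside the mask.
    return rng.random(mask.shape) + 0.5 * mask

def quality_score(image, mask):
    # Stage 3 (stand-in): rank image-mask pairs; a real criterion would be
    # learned or task-driven. Here: mean foreground intensity.
    return image[mask == 1].mean() if mask.any() else 0.0

def diffuse_expand(n_candidates=20, keep=5):
    # Generate candidates, then retain the highest-scoring pairs.
    pairs = []
    for _ in range(n_candidates):
        mask = sample_mask()
        image = synthesize_image(mask)
        pairs.append((quality_score(image, mask), image, mask))
    pairs.sort(key=lambda p: p[0], reverse=True)
    return [(img, m) for _, img, m in pairs[:keep]]

expanded = diffuse_expand()
```

A real system would swap `sample_mask` and `synthesize_image` for trained diffusion samplers; the overall generate-then-filter structure is the point of the sketch.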
Related papers
- Diffusion Model-based Data Augmentation Method for Fetal Head Ultrasound Segmentation [1.188383832081829]
Generative AI (GenAI) has proven effective at producing realistic synthetic images.
This study proposes a novel mask-guided GenAI approach to generate synthetic fetal head ultrasound images.
Our results show that the synthetic data captures real image features effectively.
arXiv Detail & Related papers (2025-06-30T09:40:12Z)
- Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation [9.795456238314825]
We introduce Siamese-Diffusion, a novel dual-component model comprising Mask-Diffusion and Image-Diffusion.
During training, a Noise Consistency Loss is introduced between these components to enhance the morphological fidelity of Mask-Diffusion.
arXiv Detail & Related papers (2025-05-09T14:07:27Z)
- MRGen: Segmentation Data Engine for Underrepresented MRI Modalities [59.61465292965639]
Training medical image segmentation models for rare yet clinically important imaging modalities is challenging due to the scarcity of annotated data.
This paper investigates leveraging generative models to synthesize data for training segmentation models for underrepresented modalities.
We present MRGen, a data engine for controllable medical image synthesis conditioned on text prompts and segmentation masks.
arXiv Detail & Related papers (2024-12-04T16:34:22Z)
- HiDiff: Hybrid Diffusion Framework for Medical Image Segmentation [16.906987804797975]
HiDiff is a hybrid diffusion framework for medical image segmentation.
It can synergize the strengths of existing discriminative segmentation models and new generative diffusion models.
It excels at segmenting small objects and generalizing to new datasets.
arXiv Detail & Related papers (2024-07-03T23:59:09Z)
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- DreamDA: Generative Data Augmentation with Diffusion Models [68.22440150419003]
This paper proposes a new classification-oriented framework DreamDA.
DreamDA generates diverse samples that adhere to the original data distribution by considering training images in the original data as seeds.
In addition, since the labels of the generated data may not align with the labels of their corresponding seed images, we introduce a self-training paradigm for generating pseudo labels.
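The pseudo-labeling step described above is a standard self-training recipe: label the synthetic samples with a model fit on the real seeds, then refit on the union. A minimal sketch with a toy nearest-centroid classifier (the data and every function name are illustrative, not from DreamDA):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labeled "seed" data: two well-separated 2-D Gaussian classes.
seeds = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

def fit_centroids(X, y):
    # A nearest-centroid classifier is just the per-class mean.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    # Assign each point to the closest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None], axis=-1)
    return d.argmin(axis=1)

# "Generate" synthetic samples by perturbing seeds (stand-in for diffusion).
synthetic = seeds + rng.normal(0, 0.5, seeds.shape)

# Self-training: pseudo-label the synthetic data with the seed-trained
# model, then refit on real + pseudo-labeled data combined.
centroids = fit_centroids(seeds, labels)
pseudo = predict(centroids, synthetic)
centroids = fit_centroids(np.vstack([seeds, synthetic]),
                          np.concatenate([labels, pseudo]))
```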
arXiv Detail & Related papers (2024-03-19T15:04:35Z)
- EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
arXiv Detail & Related papers (2023-10-19T16:18:02Z)
- DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion Models [61.906934570771256]
We present a generic dataset generation model that can produce diverse synthetic images and perception annotations.
Our method builds upon the pre-trained diffusion model and extends text-guided image synthesis to perception data generation.
We show that the rich latent code of the diffusion model can be effectively decoded as accurate perception annotations using a decoder module.
arXiv Detail & Related papers (2023-08-11T14:38:11Z)
- DFormer: Diffusion-guided Transformer for Universal Image Segmentation [86.73405604947459]
The proposed DFormer views universal image segmentation task as a denoising process using a diffusion model.
At inference, our DFormer directly predicts the masks and corresponding categories from a set of randomly-generated masks.
Our DFormer outperforms the recent diffusion-based panoptic segmentation method Pix2Seq-D with a gain of 3.6% on MS COCO val 2017 set.
arXiv Detail & Related papers (2023-06-06T06:33:32Z)
- Mask-conditioned latent diffusion for generating gastrointestinal polyp images [2.027538200191349]
This study proposes a conditional DPM framework to generate synthetic GI polyp images conditioned on given segmentation masks.
Our system can generate an unlimited number of high-fidelity synthetic polyp images with the corresponding ground truth masks of polyps.
Results show that the best micro-imagewise IOU of 0.7751 was achieved from DeepLabv3+ when the training data consists of both real data and synthetic data.
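For reference, "micro" IoU pools intersection and union counts over all pixels of all images before dividing (per-image averaging would instead divide per image and then average). A small self-contained sketch of the pooled variant:

```python
import numpy as np

def micro_iou(pred, gt):
    # Micro IoU over a batch of binary masks: sum intersection and union
    # across every pixel of every image, then take one ratio.
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Two tiny 1x4 "images": intersection = 3, union = 5, so IoU = 0.6.
pred = np.array([[1, 1, 0, 0], [0, 1, 1, 0]])
gt   = np.array([[1, 0, 0, 0], [0, 1, 1, 1]])
```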
arXiv Detail & Related papers (2023-04-11T14:11:17Z)
- BerDiff: Conditional Bernoulli Diffusion Model for Medical Image Segmentation [19.036821997968552]
We propose a conditional Bernoulli Diffusion model for medical image segmentation (BerDiff).
Our results show that our BerDiff outperforms other recently published state-of-the-art methods.
arXiv Detail & Related papers (2023-04-10T07:21:38Z)
- Dataset Distillation via Factorization [58.8114016318593]
We introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing dataset distillation (DD) baseline.
HaBa explores decomposing a dataset into two components: data Hallucination networks and Bases.
Our method can yield significant improvement on downstream classification tasks compared with the previous state of the art, while reducing the total number of compressed parameters by up to 65%.
arXiv Detail & Related papers (2022-10-30T08:36:19Z)
- Multitask Brain Tumor Inpainting with Diffusion Models: A Methodological Report [0.0]
Inpainting algorithms are a subset of DL generative models that can alter one or more regions of an input image.
The performance of these algorithms is frequently suboptimal due to their limited output variety.
Denoising diffusion probabilistic models (DDPMs) are a recently introduced family of generative networks that can generate results of comparable quality to GANs.
arXiv Detail & Related papers (2022-10-21T17:13:14Z)
- DVG-Face: Dual Variational Generation for Heterogeneous Face Recognition [85.94331736287765]
We formulate HFR as a dual generation problem, and tackle it via a novel Dual Variational Generation (DVG-Face) framework.
We integrate abundant identity information of large-scale visible data into the joint distribution.
Massive new diverse paired heterogeneous images with the same identity can be generated from noise.
arXiv Detail & Related papers (2020-09-20T09:48:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.