Diffusion-based Data Augmentation for Nuclei Image Segmentation
- URL: http://arxiv.org/abs/2310.14197v2
- Date: Fri, 19 Jan 2024 02:46:00 GMT
- Title: Diffusion-based Data Augmentation for Nuclei Image Segmentation
- Authors: Xinyi Yu and Guanbin Li and Wei Lou and Siqi Liu and Xiang Wan and Yan Chen and Haofeng Li
- Abstract summary: We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that augmenting a subset of only 10% of the labeled real images with synthetic samples achieves segmentation results comparable to the fully-supervised baseline.
- Score: 68.28350341833526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nuclei segmentation is a fundamental but challenging task in the quantitative analysis of histopathology images. Although fully-supervised deep learning-based methods have made significant progress, a large number of labeled images are required to achieve strong segmentation performance. Manually labeling every nucleus instance in a dataset is inefficient, and obtaining a large-scale human-annotated dataset is time-consuming and labor-intensive. Therefore, augmenting a dataset with only a few labeled images to improve segmentation performance is of significant research and application value. In this paper, we introduce the first diffusion-based augmentation method for nuclei segmentation. The idea is to synthesize a large number of labeled images to facilitate training the segmentation model. To achieve this, we propose a two-step strategy. In the first step, we train an unconditional diffusion model to synthesize the nuclei structure, defined as the joint representation of a pixel-level semantic map and a distance transform. Each synthetic nuclei structure serves as a constraint on histopathology image synthesis and is further post-processed into an instance map. In the second step, we train a conditional diffusion model to synthesize histopathology images conditioned on nuclei structures. The synthetic histopathology images, paired with their synthetic instance maps, are added to the real dataset for training the segmentation model. The experimental results show that by augmenting a subset of only 10% of the labeled real images with synthetic samples, one can achieve segmentation results comparable to the fully-supervised baseline. The code is released at: https://github.com/lhaof/Nudiff
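The abstract defines the nuclei structure as a pixel-level semantic map paired with a distance transform, and says each synthetic structure is post-processed into an instance map. The exact post-processing is not spelled out in this summary; a common choice is marker-based watershed, and the sketch below illustrates that idea with scikit-image. The function names, the per-instance normalization, and the thresholds are assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def nuclei_structure(instance_map: np.ndarray) -> np.ndarray:
    """Build a (2, H, W) 'nuclei structure': binary semantic map + normalized distance transform.
    Mirrors the representation described in the abstract; the details are assumptions."""
    semantic = (instance_map > 0).astype(np.float32)
    dist = np.zeros_like(semantic)
    for inst_id in np.unique(instance_map):
        if inst_id == 0:
            continue
        mask = instance_map == inst_id
        d = ndi.distance_transform_edt(mask)
        if d.max() > 0:
            dist[mask] = d[mask] / d.max()  # per-instance normalization (assumed)
    return np.stack([semantic, dist], axis=0)

def structure_to_instances(structure: np.ndarray, marker_thresh: float = 0.5) -> np.ndarray:
    """Post-process a synthetic nuclei structure into an instance map via marker-based watershed.
    The threshold and the watershed choice are illustrative, not the paper's exact recipe."""
    semantic, dist = structure
    markers, _ = ndi.label(dist > marker_thresh)            # seeds near distance-map peaks
    return watershed(-dist, markers, mask=semantic > 0.5)   # flood from seeds inside foreground
```

In the described two-step pipeline, the first (unconditional) diffusion model would produce such a structure, and the second (conditional) model would synthesize a histopathology image constrained by it.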
Related papers
- Dataset Diffusion: Diffusion-based Synthetic Dataset Generation for Pixel-Level Semantic Segmentation [6.82236459614491]
We propose a novel method for generating pixel-level semantic segmentation labels using the text-to-image generative model Stable Diffusion.
By utilizing the text prompts, cross-attention, and self-attention of SD, we introduce three new techniques: class-prompt appending, class-prompt cross-attention, and self-attention exponentiation.
These techniques enable us to generate segmentation maps corresponding to synthetic images.
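As a rough illustration of the "self-attention exponentiation" idea named above, the numpy sketch below propagates per-pixel cross-attention class scores through repeated applications of the self-attention matrix; the shapes, the exponent, and the argmax readout are assumptions, not the Dataset Diffusion implementation.

```python
import numpy as np

def refine_with_self_attention(cross_attn: np.ndarray, self_attn: np.ndarray, power: int = 4) -> np.ndarray:
    """cross_attn: (N, C) per-pixel class scores from cross-attention (N = H*W pixels, C classes).
    self_attn: (N, N) row-stochastic self-attention matrix.
    Multiplying by powers of the self-attention matrix spreads class evidence
    to visually similar pixels before the per-pixel argmax."""
    refined = cross_attn
    for _ in range(power):
        refined = self_attn @ refined  # one propagation step
    return refined.argmax(axis=1)      # (N,) per-pixel class index
```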
arXiv Detail & Related papers (2023-09-25T17:19:26Z) - Microscopy Image Segmentation via Point and Shape Regularized Data Synthesis [9.47802391546853]
We develop a unified pipeline for microscopy image segmentation using synthetically generated training data.
Our framework achieves comparable results to models trained on authentic microscopy images with dense labels.
arXiv Detail & Related papers (2023-08-18T22:00:53Z) - DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion Models [61.906934570771256]
We present a generic dataset generation model that can produce diverse synthetic images and perception annotations.
Our method builds upon the pre-trained diffusion model and extends text-guided image synthesis to perception data generation.
We show that the rich latent code of the diffusion model can be effectively decoded as accurate perception annotations using a decoder module.
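The key mechanism in this summary, decoding the diffusion model's latent features into dense annotations with a small decoder module, can be sketched as below; the feature channel sizes and the fusion scheme are placeholders rather than DatasetDM's actual perception decoder.

```python
import torch
import torch.nn as nn

class PerceptionDecoder(nn.Module):
    """Toy decoder that maps multi-scale diffusion (U-Net) features to per-pixel class logits.
    Channel sizes and the upsample-then-concatenate scheme are illustrative assumptions."""
    def __init__(self, in_channels=(1280, 640, 320), num_classes=2):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(c, 128, kernel_size=1) for c in in_channels])
        self.head = nn.Conv2d(128 * len(in_channels), num_classes, kernel_size=1)

    def forward(self, feats, out_size):
        # feats: list of feature maps at different resolutions taken from the diffusion U-Net
        upsampled = [
            nn.functional.interpolate(p(f), size=out_size, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, feats)
        ]
        return self.head(torch.cat(upsampled, dim=1))  # (B, num_classes, H, W) logits
```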
arXiv Detail & Related papers (2023-08-11T14:38:11Z) - Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework [70.18084425770091]
Deep neural networks have been widely applied in nuclei instance segmentation of H&E stained pathology images.
It is inefficient and unnecessary to label every pixel in a dataset of nuclei images, which usually contain similar and redundant patterns.
We propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner.
arXiv Detail & Related papers (2022-12-20T14:53:26Z) - InsMix: Towards Realistic Generative Data Augmentation for Nuclei Instance Segmentation [29.78647170035808]
We propose a realistic data augmentation method for nuclei segmentation, named InsMix, that follows a Copy-Paste-Smooth principle.
Specifically, we propose morphology constraints that enable the augmented images to acquire rich information about nuclei.
To fully exploit the pixel redundancy of the background, we propose a background perturbation method, which randomly shuffles the background patches.
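The background perturbation mentioned above, shuffling background patches to exploit their pixel redundancy, is straightforward to illustrate; the patch size and the rule for protecting foreground nuclei in this sketch are assumptions rather than InsMix's exact implementation.

```python
import numpy as np

def shuffle_background_patches(image, foreground, patch=32, rng=None):
    """Randomly permute non-overlapping patches that contain no foreground (nuclei) pixels.
    image: (H, W, 3) array, foreground: (H, W) boolean nuclei mask (assumed inputs)."""
    rng = rng if rng is not None else np.random.default_rng()
    out = image.copy()
    h, w = foreground.shape
    # keep only patches made purely of background pixels
    coords = [(y, x) for y in range(0, h - patch + 1, patch)
                     for x in range(0, w - patch + 1, patch)
              if not foreground[y:y + patch, x:x + patch].any()]
    order = rng.permutation(len(coords))
    for (y, x), k in zip(coords, order):
        sy, sx = coords[k]
        out[y:y + patch, x:x + patch] = image[sy:sy + patch, sx:sx + patch]
    return out
```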
arXiv Detail & Related papers (2022-06-30T08:58:05Z) - Condensing Graphs via One-Step Gradient Matching [50.07587238142548]
We propose a one-step gradient matching scheme, which performs gradient matching for only one single step without training the network weights.
Our theoretical analysis shows this strategy can generate synthetic graphs that lead to lower classification loss on real graphs.
In particular, we are able to reduce the dataset size by 90% while approximating up to 98% of the original performance.
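A minimal sketch of the one-step gradient matching objective described above, written for a toy one-layer graph model; the model, the cosine-distance objective, and the tensor shapes are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def one_step_gradient_matching_loss(theta, A_real, X_real, y_real, A_syn, X_syn, y_syn):
    """Match gradients at a single randomly initialized theta (requires_grad=True), without
    training theta. A_*: (N, N) adjacency, X_*: (N, F) features, y_*: (N,) labels, theta: (F, C)."""
    def grad_at(A, X, y):
        logits = A @ X @ theta                 # toy one-layer graph model
        loss = F.cross_entropy(logits, y)
        (g,) = torch.autograd.grad(loss, theta, create_graph=True)
        return g
    g_real = grad_at(A_real, X_real, y_real).detach()  # target gradient from the real graph
    g_syn = grad_at(A_syn, X_syn, y_syn)               # stays differentiable w.r.t. X_syn / A_syn
    # cosine distance between the two flattened gradients
    return 1 - F.cosine_similarity(g_real.flatten(), g_syn.flatten(), dim=0)
```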
arXiv Detail & Related papers (2022-06-15T18:20:01Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
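A minimal sketch of the joint image-label generation idea mentioned above: a single latent code is decoded by a shared trunk into both an image and an aligned label map. The architecture and layer sizes are illustrative assumptions, not the paper's GAN.

```python
import torch
import torch.nn as nn

class JointImageLabelGenerator(nn.Module):
    """Toy generator with a shared trunk and two heads, producing an image and an aligned
    label map from the same latent code (shapes and layer sizes are placeholders)."""
    def __init__(self, z_dim=128, num_classes=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4), nn.ReLU(),                      # 1x1 -> 4x4
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),   # 4x4 -> 8x8
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),    # 8x8 -> 16x16
        )
        self.image_head = nn.Sequential(nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
        self.label_head = nn.Conv2d(64, num_classes, 3, padding=1)  # per-pixel class logits

    def forward(self, z):
        h = self.trunk(z.view(z.size(0), -1, 1, 1))
        return self.image_head(h), self.label_head(h)  # shared features keep them aligned
```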
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images [51.893494939675314]
We propose a novel weakly supervised segmentation framework based on partial points annotation.
We show that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods.
arXiv Detail & Related papers (2020-07-10T15:41:29Z)