Microscopy Image Segmentation via Point and Shape Regularized Data Synthesis
- URL: http://arxiv.org/abs/2308.09835v3
- Date: Fri, 8 Dec 2023 16:38:13 GMT
- Title: Microscopy Image Segmentation via Point and Shape Regularized Data Synthesis
- Authors: Shijie Li, Mengwei Ren, Thomas Ach, Guido Gerig
- Abstract summary: We develop a unified pipeline for microscopy image segmentation using synthetically generated training data.
Our framework achieves comparable results to models trained on authentic microscopy images with dense labels.
- Score: 9.47802391546853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current deep learning-based approaches for the segmentation of microscopy
images heavily rely on large amounts of training data with dense annotation,
which is highly costly and laborious in practice. Compared to full annotation
where the complete contour of objects is depicted, point annotations,
specifically object centroids, are much easier to acquire and still provide
crucial information about the objects for subsequent segmentation. In this
paper, we assume access to point annotations only during training and develop a
unified pipeline for microscopy image segmentation using synthetically
generated training data. Our framework includes three stages: (1) it takes
point annotations and samples a pseudo dense segmentation mask constrained with
shape priors; (2) with an image generative model trained in an unpaired manner,
it translates the mask into a realistic microscopy image regularized by
object-level consistency; (3) the pseudo masks, together with the synthetic
images, constitute a paired dataset for training an ad-hoc segmentation model. On the
public MoNuSeg dataset, our synthesis pipeline produces more diverse and
realistic images than baseline models while maintaining high coherence between
input masks and generated images. When using identical segmentation
backbones, the models trained on our synthetic dataset significantly outperform
those trained with pseudo-labels or baseline-generated images. Moreover, our
framework achieves comparable results to models trained on authentic microscopy
images with dense labels, demonstrating its potential as a reliable and highly
efficient alternative to labor-intensive manual pixel-wise annotations in
microscopy image segmentation. The code is available.
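To make the three-stage pipeline more concrete, below is a minimal sketch assuming an elliptical shape prior for stage (1) and a placeholder stub for the unpaired mask-to-image generator of stage (2); the function names, parameter ranges, and toy centroid annotations are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of the synthesis pipeline: pseudo-mask sampling from point
# annotations (stage 1) and assembling the paired synthetic dataset (stage 3).
# The elliptical shape prior and the generator stub are assumptions made for
# illustration only.
import numpy as np


def sample_pseudo_mask(points, image_size=(256, 256), axis_range=(4.0, 10.0), rng=None):
    """Stage (1) sketch: draw one random ellipse per annotated centroid,
    using the ellipse as a crude shape prior for the pseudo dense mask."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image_size
    mask = np.zeros((h, w), dtype=np.uint16)
    yy, xx = np.mgrid[0:h, 0:w]
    for label, (cy, cx) in enumerate(points, start=1):
        a, b = rng.uniform(*axis_range, size=2)        # semi-axes in pixels
        theta = rng.uniform(0.0, np.pi)                # random orientation
        dy, dx = yy - cy, xx - cx
        u = dx * np.cos(theta) + dy * np.sin(theta)    # rotate into ellipse frame
        v = -dx * np.sin(theta) + dy * np.cos(theta)
        mask[(u / a) ** 2 + (v / b) ** 2 <= 1.0] = label
    return mask


def mask_to_image(mask):
    """Stage (2) placeholder: an unpaired generative model (trained without
    mask/image pairs) that renders the pseudo mask as a realistic image."""
    raise NotImplementedError("plug in a trained mask-to-image generator here")


# Stage (3) sketch: pair pseudo masks with synthetic images to form the
# training set for a segmentation model of choice (e.g. a U-Net).
point_annotations = [[(40, 60), (120, 200), (180, 90)]]  # toy centroids, one list per image
synthetic_pairs = []
for points in point_annotations:
    pseudo_mask = sample_pseudo_mask(points)
    # synthetic_image = mask_to_image(pseudo_mask)   # once a generator is trained
    # synthetic_pairs.append((synthetic_image, pseudo_mask))
```

In the paper itself, the unpaired translation is further regularized by the object-level consistency mentioned in the abstract; the stub above only marks where that generative model would slot into the pipeline.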
Related papers
- Synthetic dual image generation for reduction of labeling efforts in semantic segmentation of micrographs with a customized metric function [0.0]
Training semantic segmentation models for material analysis requires micrographs and their corresponding masks.
We demonstrate a workflow for the improvement of semantic segmentation models through the generation of synthetic microstructural images in conjunction with masks.
The approach can be generalized to various types of image data and serves as a user-friendly solution for training models with a small number of real images.
arXiv Detail & Related papers (2024-08-01T16:54:11Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - Outline-Guided Object Inpainting with Diffusion Models [11.391452115311798]
Instance segmentation datasets play a crucial role in training accurate and robust computer vision models.
We show how this issue can be mitigated by starting with small annotated instance segmentation datasets and augmenting them to obtain a sizeable annotated dataset.
We generate new images with a diffusion-based inpainting model that fills in the masked area with a desired object class, guiding the diffusion process with the object outline.
arXiv Detail & Related papers (2024-02-26T09:21:17Z) - Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that by augmenting a 10%-labeled real dataset with synthetic samples, one can achieve comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z) - DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion
Models [61.906934570771256]
We present a generic dataset generation model that can produce diverse synthetic images and perception annotations.
Our method builds upon the pre-trained diffusion model and extends text-guided image synthesis to perception data generation.
We show that the rich latent code of the diffusion model can be effectively decoded as accurate perception annotations using a decoder module.
arXiv Detail & Related papers (2023-08-11T14:38:11Z) - Foreground-Background Separation through Concept Distillation from
Generative Image Foundation Models [6.408114351192012]
We present a novel method that enables the generation of general foreground-background segmentation models from simple textual descriptions.
We show results on the task of segmenting four different objects (humans, dogs, cars, birds) and a use case scenario in medical image analysis.
arXiv Detail & Related papers (2022-12-29T13:51:54Z) - Segmenter: Transformer for Semantic Segmentation [79.9887988699159]
We introduce Segmenter, a transformer model for semantic segmentation.
We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation.
It outperforms the state of the art on the challenging ADE20K dataset and performs on-par on Pascal Context and Cityscapes.
arXiv Detail & Related papers (2021-05-12T13:01:44Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Point-supervised Segmentation of Microscopy Images and Volumes via
Objectness Regularization [2.243486411968779]
This work enables the training of semantic segmentation networks on images with only a single annotated point per instance.
We achieve competitive results against the state-of-the-art in point-supervised semantic segmentation on challenging datasets in digital pathology.
arXiv Detail & Related papers (2021-03-09T18:40:00Z) - Three Ways to Improve Semantic Segmentation with Self-Supervised Depth
Estimation [90.87105131054419]
We present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains.
arXiv Detail & Related papers (2020-12-19T21:18:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.