NuInsSeg: A Fully Annotated Dataset for Nuclei Instance Segmentation in
H&E-Stained Histological Images
- URL: http://arxiv.org/abs/2308.01760v1
- Date: Thu, 3 Aug 2023 13:45:07 GMT
- Title: NuInsSeg: A Fully Annotated Dataset for Nuclei Instance Segmentation in
H&E-Stained Histological Images
- Authors: Amirreza Mahbod, Christine Polak, Katharina Feldmann, Rumsha Khan,
Katharina Gelles, Georg Dorffner, Ramona Woitek, Sepideh Hatamikia, Isabella
Ellinger
- Abstract summary: We release one of the biggest fully manually annotated datasets of nuclei in Hematoxylin and Eosin (H&E)-stained histological images, called NuInsSeg.
This dataset contains 665 image patches with more than 30,000 manually segmented nuclei from 31 human and mouse organs.
For the first time, we provide additional ambiguous area masks for the entire dataset.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In computational pathology, automatic nuclei instance segmentation plays an
essential role in whole slide image analysis. While many computerized
approaches have been proposed for this task, supervised deep learning (DL)
methods have shown superior segmentation performances compared to classical
machine learning and image processing techniques. However, these models need
fully annotated datasets for training, which are challenging to acquire,
especially in the medical domain. In this work, we release one of the biggest
fully manually annotated datasets of nuclei in Hematoxylin and Eosin
(H&E)-stained histological images, called NuInsSeg. This dataset contains 665
image patches with more than 30,000 manually segmented nuclei from 31 human and
mouse organs. Moreover, for the first time, we provide additional ambiguous
area masks for the entire dataset. These vague areas represent the parts of the
images where precise and deterministic manual annotations are impossible, even
for human experts. The dataset and detailed step-by-step instructions to
generate related segmentation masks are publicly available at
https://www.kaggle.com/datasets/ipateam/nuinsseg and
https://github.com/masih4/NuInsSeg, respectively.
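The ambiguous-area masks are meant to flag regions that should not count against a model during evaluation. A minimal sketch of that idea with synthetic arrays (the mask conventions here — one unique positive integer per nucleus, a binary ambiguous mask — are assumptions for illustration, not taken from the dataset documentation):

```python
import numpy as np

# Synthetic instance mask: each nucleus gets a unique positive integer
# label, 0 is background. Values are illustrative, not from NuInsSeg.
instance_mask = np.array([
    [0, 1, 1, 0],
    [0, 1, 0, 2],
    [3, 0, 0, 2],
    [3, 3, 0, 0],
])

# Synthetic ambiguous-area mask: 1 marks pixels where precise manual
# annotation is impossible even for experts.
ambiguous_mask = np.array([
    [0, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

# Count annotated nuclei, then zero out ambiguous pixels so uncertain
# regions do not contribute to a segmentation metric.
nucleus_ids = np.unique(instance_mask[instance_mask > 0])
evaluable = instance_mask * (ambiguous_mask == 0)

print(len(nucleus_ids))       # 3 annotated nuclei
print(np.unique(evaluable))   # nucleus 2 lies entirely in the ambiguous area
```

Here nucleus 2 disappears from `evaluable` because all of its pixels fall inside the ambiguous area, which is exactly the behavior one would want when excluding vague regions from scoring.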
Related papers
- NuSegDG: Integration of Heterogeneous Space and Gaussian Kernel for Domain-Generalized Nuclei Segmentation [9.332333405703732]
We propose a domain-generalizable framework for nuclei image segmentation, abbreviated to NuSegDG.
HS-Adapter learns multi-dimensional feature representations of different nuclei domains by injecting a small number of trainable parameters into the image encoder of SAM.
GKP-Encoder generates density maps driven by a single point, which guides segmentation predictions by mixing position prompts and semantic prompts.
arXiv Detail & Related papers (2024-08-21T17:19:23Z)
- SOHES: Self-supervised Open-world Hierarchical Entity Segmentation [82.45303116125021]
This work presents Self-supervised Open-world Hierarchical Entities (SOHES), a novel approach that eliminates the need for human annotations.
We produce abundant high-quality pseudo-labels through visual feature clustering and rectify noise in the pseudo-labels via a teacher-student mutual-learning procedure.
Using raw images as the sole training data, our method achieves unprecedented performance in self-supervised open-world segmentation.
arXiv Detail & Related papers (2024-04-18T17:59:46Z)
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that by augmenting a 10% labeled real dataset with synthetic samples, one can achieve comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z)
- Microscopy Image Segmentation via Point and Shape Regularized Data Synthesis [9.47802391546853]
We develop a unified pipeline for microscopy image segmentation using synthetically generated training data.
Our framework achieves comparable results to models trained on authentic microscopy images with dense labels.
arXiv Detail & Related papers (2023-08-18T22:00:53Z)
- DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion Models [61.906934570771256]
We present a generic dataset generation model that can produce diverse synthetic images and perception annotations.
Our method builds upon the pre-trained diffusion model and extends text-guided image synthesis to perception data generation.
We show that the rich latent code of the diffusion model can be effectively decoded as accurate perception annotations using a decoder module.
arXiv Detail & Related papers (2023-08-11T14:38:11Z)
- HAISTA-NET: Human Assisted Instance Segmentation Through Attention [3.073046540587735]
We propose a novel approach to enable more precise predictions and generate higher-quality segmentation masks.
Our human-assisted segmentation model, HAISTA-NET, augments the existing Strong Mask R-CNN network to incorporate human-specified partial boundaries.
We show that HAISTA-NET outperforms state-of-the-art methods such as Mask R-CNN, Strong Mask R-CNN, and Mask2Former.
arXiv Detail & Related papers (2023-05-04T18:39:14Z)
- Improving CT Image Segmentation Accuracy Using StyleGAN Driven Data Augmentation [42.034896915716374]
This paper presents a StyleGAN-driven approach for segmenting publicly available large medical datasets.
Style transfer is used to augment the training dataset and generate new anatomically sound images.
The augmented dataset is then used to train a U-Net segmentation network, which shows a significant improvement in segmentation accuracy.
arXiv Detail & Related papers (2023-02-07T06:34:10Z)
- High-Quality Entity Segmentation [110.55724145851725]
CropFormer is designed to tackle the intractability of instance-level segmentation on high-resolution images.
It improves mask prediction by fusing high-res image crops that provide more fine-grained image details and the full image.
With CropFormer, we achieve a significant AP gain of $1.9$ on the challenging entity segmentation task.
arXiv Detail & Related papers (2022-11-10T18:58:22Z)
- DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort [117.41383937100751]
Current deep networks are extremely data-hungry, benefiting from training on large-scale datasets.
We show how the GAN latent code can be decoded to produce a semantic segmentation of the image.
These generated datasets can then be used for training any computer vision architecture just as real datasets are.
arXiv Detail & Related papers (2021-04-13T20:08:29Z)
- RGB-based Semantic Segmentation Using Self-Supervised Depth Pre-Training [77.62171090230986]
We propose an easily scalable and self-supervised technique that can be used to pre-train any semantic RGB segmentation method.
In particular, our pre-training approach makes use of automatically generated labels that can be obtained using depth sensors.
We show how our proposed self-supervised pre-training with HN-labels can be used to replace ImageNet pre-training.
arXiv Detail & Related papers (2020-02-06T11:16:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.