An Auxiliary Task for Learning Nuclei Segmentation in 3D Microscopy Images
- URL: http://arxiv.org/abs/2002.02857v1
- Date: Fri, 7 Feb 2020 15:47:55 GMT
- Title: An Auxiliary Task for Learning Nuclei Segmentation in 3D Microscopy Images
- Authors: Peter Hirsch, Dagmar Kainmueller
- Abstract summary: We compare nuclei segmentation algorithms on a database of manually segmented 3d light microscopy volumes.
We propose a novel learning strategy that boosts segmentation accuracy by means of a simple auxiliary task.
We show that one of our baselines, the popular three-label model, when trained with our proposed auxiliary task, outperforms the recent StarDist-3D.
- Score: 6.700873164609009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation of cell nuclei in microscopy images is a prevalent necessity in
cell biology. Especially for three-dimensional datasets, manual segmentation is
prohibitively time-consuming, motivating the need for automated methods.
Learning-based methods trained on pixel-wise ground-truth segmentations have
been shown to yield state-of-the-art results on 2d benchmark image data of
nuclei, yet a respective benchmark is missing for 3d image data. In this work,
we perform a comparative evaluation of nuclei segmentation algorithms on a
database of manually segmented 3d light microscopy volumes. We propose a novel
learning strategy that boosts segmentation accuracy by means of a simple
auxiliary task, thereby robustly outperforming each of our baselines.
Furthermore, we show that one of our baselines, the popular three-label model,
when trained with our proposed auxiliary task, outperforms the recent
StarDist-3D. As an additional, practical contribution, we benchmark nuclei
segmentation against nuclei detection, i.e. the task of merely pinpointing
individual nuclei without generating respective pixel-accurate segmentations.
For learning nuclei detection, large 3d training datasets of manually annotated
nuclei center points are available. However, the impact on detection accuracy
caused by training on such sparse ground truth as opposed to dense pixel-wise
ground truth has not yet been quantified. To this end, we compare nuclei
detection accuracy yielded by training on dense vs. sparse ground truth. Our
results suggest that training on sparse ground truth yields competitive nuclei
detection rates.
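The abstract does not spell out the auxiliary task itself, so the following is only a minimal sketch of the general strategy: a shared 3D backbone (here a placeholder module) with the standard three-label head (background / nucleus interior / nucleus boundary) plus an assumed dense auxiliary head, trained jointly with a weighted sum of losses.

```python
# Minimal sketch of the general strategy, not the authors' implementation:
# a three-label head (background / interior / boundary) plus an auxiliary
# head on a shared 3D backbone, trained with a weighted sum of losses.
# The backbone and the auxiliary target are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThreeLabelWithAux(nn.Module):
    def __init__(self, backbone: nn.Module, feat_channels: int, aux_channels: int):
        super().__init__()
        self.backbone = backbone                                      # e.g. a 3D U-Net
        self.seg_head = nn.Conv3d(feat_channels, 3, kernel_size=1)    # three-label output
        self.aux_head = nn.Conv3d(feat_channels, aux_channels, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)
        return self.seg_head(feats), self.aux_head(feats)

def joint_loss(seg_logits, aux_pred, seg_target, aux_target, aux_weight=1.0):
    seg_loss = F.cross_entropy(seg_logits, seg_target)   # three-label segmentation loss
    aux_loss = F.mse_loss(aux_pred, aux_target)          # assumed regression-type auxiliary task
    return seg_loss + aux_weight * aux_loss
```

In this sketch, the backbone, the auxiliary target, and the MSE auxiliary loss are placeholders for illustration; only the three-label head and the idea of joint training with an auxiliary task come from the abstract.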
Related papers
- Bayesian Self-Training for Semi-Supervised 3D Segmentation [59.544558398992386]
3D segmentation is a core problem in computer vision.
However, densely labeling 3D point clouds for fully-supervised training remains too labor-intensive and expensive.
Semi-supervised training provides a more practical alternative, where only a small set of labeled data is given, accompanied by a larger unlabeled set.
arXiv Detail & Related papers (2024-09-12T14:54:31Z)
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that augmenting a 10% subset of the labeled real dataset with synthetic samples achieves comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z)
- Unsupervised Learning of Object-Centric Embeddings for Cell Instance Segmentation in Microscopy Images [3.039768384237206]
We introduce object-centric embeddings (OCEs).
OCEs embed image patches such that the offsets between patches cropped from the same object are preserved.
We show theoretically that OCEs can be learnt through a self-supervised task that predicts the spatial offset between image patches.
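As a rough illustration of such a pretext task (not the authors' code), the loss can be sketched as predicting the known spatial offset between two patches from the difference of their embeddings; the embedding dimension is assumed to equal the spatial dimension.

```python
# Rough sketch of an offset-prediction pretext loss: an embedding network
# `embed` maps each patch to a vector whose dimension is assumed to equal the
# spatial dimension, so embedding differences can be compared directly to the
# known spatial offset between the two patches.
import torch.nn.functional as F

def offset_prediction_loss(embed, patch_a, patch_b, true_offset):
    """true_offset: (B, D) offset of patch_b relative to patch_a, in pixels/voxels."""
    pred_offset = embed(patch_b) - embed(patch_a)   # offsets preserved in embedding space
    return F.mse_loss(pred_offset, true_offset)
```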
arXiv Detail & Related papers (2023-10-12T16:59:50Z)
- Nuclei Segmentation with Point Annotations from Pathology Images via Self-Supervised Learning and Co-Training [44.13451004973818]
We propose a weakly-supervised learning method for nuclei segmentation.
Coarse pixel-level labels are derived from the point annotations based on the Voronoi diagram.
A self-supervised visual representation learning method is tailored for nuclei segmentation of pathology images.
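A minimal sketch of the Voronoi step, assuming 2D pathology images and SciPy: each pixel is assigned to its nearest annotated nucleus center, yielding one coarse region per point annotation. The paper's full pipeline adds self-supervised pretraining and co-training on top of such labels.

```python
# Sketch: coarse pixel-level labels from nuclei point annotations via the
# Voronoi partition (each pixel is assigned to its nearest annotated point).
# Illustration of the general idea only, not the paper's exact procedure.
import numpy as np
from scipy.spatial import cKDTree

def voronoi_labels(points, shape):
    """points: (N, 2) array of annotated nucleus centers; shape: image (H, W)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    pixels = np.stack([yy.ravel(), xx.ravel()], axis=1)
    _, nearest = cKDTree(points).query(pixels)      # index of nearest point per pixel
    return nearest.reshape(shape) + 1               # labels 1..N, one Voronoi region per nucleus
```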
arXiv Detail & Related papers (2022-02-16T17:08:44Z)
- Bend-Net: Bending Loss Regularized Multitask Learning Network for Nuclei Segmentation in Histopathology Images [65.47507533905188]
We propose a novel multitask learning network with a bending loss regularizer to separate overlapped nuclei accurately.
The newly proposed multitask learning architecture enhances the generalization by learning shared representation from three tasks.
The proposed bending loss assigns high penalties to concave contour points with large curvature and small penalties to convex contour points with small curvature.
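A schematic version of such a curvature-based penalty on a sampled, counter-clockwise 2D contour might look as follows; the sign convention and the concrete weights are assumptions, not the paper's bending-loss definition.

```python
# Schematic curvature-based penalty on a closed, counter-clockwise 2D contour
# of shape (N, 2): concave points with large curvature are penalised heavily,
# convex points lightly. The paper's exact bending-loss formula differs.
import numpy as np

def bending_penalty(contour, concave_weight=10.0, convex_weight=0.1):
    prev_seg = contour - np.roll(contour, 1, axis=0)       # incoming segments
    next_seg = np.roll(contour, -1, axis=0) - contour      # outgoing segments
    cross = prev_seg[:, 0] * next_seg[:, 1] - prev_seg[:, 1] * next_seg[:, 0]
    norm = np.linalg.norm(prev_seg, axis=1) * np.linalg.norm(next_seg, axis=1) + 1e-8
    curvature = cross / norm                               # signed turning measure per point
    concave = curvature < 0                                # reflex vertices under CCW convention
    weights = np.where(concave, concave_weight, convex_weight)
    return float(np.sum(weights * np.abs(curvature)))
```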
arXiv Detail & Related papers (2021-09-30T17:29:44Z)
- Semi supervised segmentation and graph-based tracking of 3D nuclei in time-lapse microscopy [10.398295735266212]
Current state-of-the-art deep learning methods do not result in accurate boundaries when the training data is weakly annotated.
A 3D U-Net is trained to predict the centroids of the nuclei and is integrated with a simple linear iterative clustering (SLIC) supervoxel algorithm.
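A simplified sketch of that integration, assuming scikit-image: detect seed points in the predicted centroid map, oversegment the volume into SLIC supervoxels, and keep the supervoxels that contain a seed. The authors' grouping and tracking steps are more involved.

```python
# Simplified sketch of combining predicted nucleus centroids with SLIC
# supervoxels: supervoxels that contain a detected centroid are kept as
# (partial) nucleus segments. Not the authors' pipeline.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import slic

def centroid_slic_segments(volume, centroid_map, n_segments=5000):
    """volume, centroid_map: 3D arrays; centroid_map is the U-Net output."""
    seeds = peak_local_max(centroid_map, min_distance=3)              # (K, 3) seed voxels
    supervoxels = slic(volume, n_segments=n_segments,
                       channel_axis=None, start_label=1)              # 3D oversegmentation
    segmentation = np.zeros_like(supervoxels)
    for label, (z, y, x) in enumerate(seeds, start=1):
        segmentation[supervoxels == supervoxels[z, y, x]] = label     # keep seeded supervoxel
    return segmentation
```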
arXiv Detail & Related papers (2020-10-26T05:09:44Z)
- Instance-aware Self-supervised Learning for Nuclei Segmentation [47.07869311690419]
We propose a novel self-supervised learning framework to exploit the capacity of convolutional neural networks (CNNs) on the nuclei instance segmentation task.
The proposed approach involves two sub-tasks, which enable neural networks to implicitly leverage the prior-knowledge of nuclei size and quantity.
Experimental results on the publicly available MoNuSeg dataset show that the proposed self-supervised learning approach can remarkably boost nuclei instance segmentation accuracy.
arXiv Detail & Related papers (2020-07-22T03:37:14Z)
- Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images [51.893494939675314]
We propose a novel weakly supervised segmentation framework based on partial points annotation.
We show that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods.
arXiv Detail & Related papers (2020-07-10T15:41:29Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
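A minimal sketch of the two-stage idea: first train a 3D convolutional autoencoder on unlabeled volumes with a reconstruction loss, then reuse its encoder as a feature extractor for the supervised segmentation CNN. The architecture below is a placeholder, not the paper's.

```python
# Minimal sketch of the two-stage idea: (1) train a 3D convolutional
# autoencoder on unlabeled volumes, (2) reuse its encoder as a feature
# extractor for the supervised segmentation CNN. Architecture details are
# placeholders, not the paper's.
import torch
import torch.nn as nn

class ConvAutoencoder3D(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: reconstruction loss on unlabeled volumes, e.g.
#   loss = nn.functional.mse_loss(autoencoder(unlabeled_batch), unlabeled_batch)
# Stage 2: feed autoencoder.encoder(x) (frozen or fine-tuned) as additional
#   features to the segmentation network alongside the raw volume.
```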
arXiv Detail & Related papers (2020-03-17T20:20:43Z)