Unsupervised Learning of Object-Centric Embeddings for Cell Instance
Segmentation in Microscopy Images
- URL: http://arxiv.org/abs/2310.08501v1
- Date: Thu, 12 Oct 2023 16:59:50 GMT
- Title: Unsupervised Learning of Object-Centric Embeddings for Cell Instance
Segmentation in Microscopy Images
- Authors: Steffen Wolf, Manan Lalit, Henry Westmacott, Katie McDole, Jan Funke
- Abstract summary: We introduce object-centric embeddings (OCEs).
OCEs embed image patches such that the offsets between patches cropped from the same object are preserved.
We show theoretically that OCEs can be learnt through a self-supervised task that predicts the spatial offset between image patches.
- Score: 3.039768384237206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmentation of objects in microscopy images is required for many biomedical
applications. We introduce object-centric embeddings (OCEs), which embed image
patches such that the spatial offsets between patches cropped from the same
object are preserved. Those learnt embeddings can be used to delineate
individual objects and thus obtain instance segmentations. Here, we show
theoretically that, under assumptions commonly found in microscopy images, OCEs
can be learnt through a self-supervised task that predicts the spatial offset
between image patches. Together, this forms an unsupervised cell instance
segmentation method which we evaluate on nine diverse large-scale microscopy
datasets. Segmentations obtained with our method lead to substantially improved
results, compared to state-of-the-art baselines on six out of nine datasets,
and perform on par on the remaining three datasets. If ground-truth annotations
are available, our method serves as an excellent starting point for supervised
training, reducing the amount of ground-truth annotation needed by one order of
magnitude, thus substantially increasing the practical applicability of our
method. Source code is available at https://github.com/funkelab/cellulus.
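The self-supervised task described in the abstract can be sketched in a few lines. The following is a hypothetical toy illustration (plain numpy, not the cellulus implementation): a stand-in embedding function maps patch centers to vectors, and the training signal penalizes the difference between the embedding offset and the known spatial offset between two patches.

```python
# Toy sketch of the OCE training signal (illustrative only, not the
# authors' code). A learned network f would embed image patches; here a
# noisy identity stands in for it, so embedding offsets roughly track
# spatial offsets, as a trained OCE network is meant to achieve.
import numpy as np

rng = np.random.default_rng(0)

def embed(patch_center):
    # Stand-in for a learned patch-embedding network f(patch).
    return np.asarray(patch_center, dtype=float) + rng.normal(scale=0.1, size=2)

def oce_loss(center_i, center_j):
    # Self-supervised objective: the difference between the two patch
    # embeddings should equal the known spatial offset of their centers.
    predicted_offset = embed(center_i) - embed(center_j)
    true_offset = np.asarray(center_i, dtype=float) - np.asarray(center_j, dtype=float)
    return float(np.sum((predicted_offset - true_offset) ** 2))

print(oce_loss((10.0, 4.0), (7.0, 9.0)))  # small, since noise is small
```

Because the true offset is known from how the patches were cropped, no object-level annotations are needed, which is what makes the segmentation method unsupervised.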
Related papers
- Microscopy Image Segmentation via Point and Shape Regularized Data
Synthesis [9.47802391546853]
We develop a unified pipeline for microscopy image segmentation using synthetically generated training data.
Our framework achieves comparable results to models trained on authentic microscopy images with dense labels.
arXiv Detail & Related papers (2023-08-18T22:00:53Z)
- Self-supervised dense representation learning for live-cell microscopy
with time arrow prediction [0.0]
We present a self-supervised method that learns dense image representations from raw, unlabeled live-cell microscopy videos.
We show that the resulting dense representations capture inherently time-asymmetric biological processes such as cell divisions on a pixel-level.
Our method outperforms supervised methods, particularly when only limited ground truth annotations are available.
arXiv Detail & Related papers (2023-05-09T14:58:13Z)
- De-coupling and De-positioning Dense Self-supervised Learning [65.56679416475943]
Dense Self-Supervised Learning (SSL) methods address the limitations of using image-level feature representations when handling images with multiple objects.
We show that they suffer from coupling and positional bias, which arise from the receptive field increasing with layer depth and zero-padding.
We demonstrate the benefits of our method on COCO and on a new challenging benchmark, OpenImage-MINI, for object classification, semantic segmentation, and object detection.
arXiv Detail & Related papers (2023-03-29T18:07:25Z)
- Fusing Local Similarities for Retrieval-based 3D Orientation Estimation
of Unseen Objects [70.49392581592089]
We tackle the task of estimating the 3D orientation of previously-unseen objects from monocular images.
We follow a retrieval-based strategy and prevent the network from learning object-specific features.
Our experiments on the LineMOD, LineMOD-Occluded, and T-LESS datasets show that our method yields a significantly better generalization to unseen objects than previous works.
arXiv Detail & Related papers (2022-03-16T08:53:00Z)
- Unsupervised Part Discovery from Contrastive Reconstruction [90.88501867321573]
The goal of self-supervised visual representation learning is to learn strong, transferable image representations.
We propose an unsupervised approach to object part discovery and segmentation.
Our method yields semantic parts consistent across fine-grained but visually distinct categories.
arXiv Detail & Related papers (2021-11-11T17:59:42Z)
- Object-Guided Instance Segmentation With Auxiliary Feature Refinement
for Biological Images [58.914034295184685]
Instance segmentation is of great importance for many biological applications, such as study of neural cell interactions, plant phenotyping, and quantitatively measuring how cells react to drug treatment.
Box-based instance segmentation methods capture objects via bounding boxes and then perform individual segmentation within each bounding box region.
Our method first detects the center points of the objects, from which the bounding box parameters are then predicted.
The segmentation branch reuses the object features as guidance to separate target object from the neighboring ones within the same bounding box region.
arXiv Detail & Related papers (2021-06-14T04:35:36Z)
- Robust 3D Cell Segmentation: Extending the View of Cellpose [0.1384477926572109]
We extend the Cellpose approach to improve segmentation accuracy on 3D image data.
We show how the formulation of the gradient maps can be simplified while still being robust and reaching similar segmentation accuracy.
arXiv Detail & Related papers (2021-05-03T12:47:41Z)
- Sparse Object-level Supervision for Instance Segmentation with Pixel
Embeddings [4.038011160363972]
Most state-of-the-art instance segmentation methods have to be trained on densely annotated images.
We propose a proposal-free segmentation approach based on non-spatial embeddings.
We evaluate the proposed method on challenging 2D and 3D segmentation problems in different microscopy modalities.
arXiv Detail & Related papers (2021-03-26T16:36:56Z)
- Embedding-based Instance Segmentation of Microscopy Images [8.516639438995785]
We introduce EmbedSeg, an end-to-end trainable deep learning method based on the work by Neven et al.
While their approach embeds each pixel to the centroid of any given instance, in EmbedSeg, motivated by the complex shapes of biological objects, we propose to use the medoid instead.
We demonstrate that embedding-based instance segmentation achieves competitive results in comparison to state-of-the-art methods on diverse microscopy datasets.
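The medoid-versus-centroid distinction above can be illustrated with a toy example (coordinates are made up, not from the paper): the medoid is the instance pixel minimizing total distance to all other instance pixels, so it always lies on the object, whereas the centroid (mean) of a curved shape can fall outside it.

```python
# Illustrative only: medoid vs. centroid for a non-convex toy instance.
import numpy as np

# A crescent-like set of pixel coordinates (hypothetical toy instance).
pixels = np.array([(0, 0), (1, 2), (2, 3), (3, 3), (4, 2), (5, 0)], dtype=float)

# Centroid: the mean position, which need not be an instance pixel.
centroid = pixels.mean(axis=0)

# Medoid: the member pixel with minimal summed distance to all others.
dists = np.linalg.norm(pixels[:, None, :] - pixels[None, :, :], axis=-1)
medoid = pixels[dists.sum(axis=1).argmin()]

print(centroid)  # lies between the crescent's tips, off the shape
print(medoid)    # guaranteed to be one of the instance pixels
```

For convex objects the two nearly coincide; the medoid's advantage appears exactly for the curved, elongated shapes common in biology.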
arXiv Detail & Related papers (2021-01-25T12:06:44Z)
- Self-supervised Segmentation via Background Inpainting [96.10971980098196]
We introduce a self-supervised detection and segmentation approach that can work with single images captured by a potentially moving camera.
We exploit a self-supervised loss function to train a proposal-based segmentation network.
We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks and outperform existing self-supervised methods.
arXiv Detail & Related papers (2020-11-11T08:34:40Z)
- Learning RGB-D Feature Embeddings for Unseen Object Instance
Segmentation [67.88276573341734]
We propose a new method for unseen object instance segmentation by learning RGB-D feature embeddings from synthetic data.
A metric learning loss function is utilized to learn to produce pixel-wise feature embeddings.
We further improve the segmentation accuracy with a new two-stage clustering algorithm.
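The general embed-then-cluster recipe mentioned above can be sketched as follows. This is a hypothetical minimal version (a greedy threshold clustering, not the paper's two-stage algorithm): pixels are mapped to an embedding space where same-instance pixels land close together, and instances are then recovered by clustering the embeddings.

```python
# Toy sketch of clustering pixel embeddings into instances (illustrative
# only; real methods use learned embeddings and more robust clustering).
import numpy as np

def cluster_embeddings(emb, threshold=1.0):
    """Greedy clustering: assign each embedding to the first cluster whose
    seed is within `threshold`, otherwise start a new cluster."""
    labels = np.full(len(emb), -1, dtype=int)
    seeds = []
    for i, e in enumerate(emb):
        for k, s in enumerate(seeds):
            if np.linalg.norm(e - s) < threshold:
                labels[i] = k
                break
        else:
            labels[i] = len(seeds)
            seeds.append(e.copy())
    return labels

# Two well-separated groups of pixel embeddings (toy data).
emb = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
print(cluster_embeddings(emb))  # → [0 0 1 1]
```

The metric-learning loss is what makes such simple clustering viable: it pulls same-instance embeddings together and pushes different instances apart.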
arXiv Detail & Related papers (2020-07-30T00:23:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.