Robust 3D Cell Segmentation: Extending the View of Cellpose
- URL: http://arxiv.org/abs/2105.00794v1
- Date: Mon, 3 May 2021 12:47:41 GMT
- Title: Robust 3D Cell Segmentation: Extending the View of Cellpose
- Authors: Dennis Eschweiler and Johannes Stegmaier
- Abstract summary: We extend the Cellpose approach to improve segmentation accuracy on 3D image data.
We show how the formulation of the gradient maps can be simplified while still being robust and reaching similar segmentation accuracy.
- Score: 0.1384477926572109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasing data set sizes of digital microscopy imaging experiments demand automated segmentation processes to extract meaningful biomedical information. Due to the shortage of annotated 3D image data that can be used for machine learning-based approaches, 3D segmentation approaches are required to be robust and to generalize well to unseen data. Reformulating the problem of instance segmentation as a collection of diffusion gradient maps proved to be such a generalist approach for cell segmentation tasks. In this paper, we extend the Cellpose approach to improve segmentation accuracy on 3D image data, and we further show how the formulation of the gradient maps can be simplified while remaining robust and reaching similar segmentation accuracy. We quantitatively compared different experimental setups and validated on two different data sets of 3D confocal microscopy images of A. thaliana.
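To make the gradient-map formulation more concrete, here is a minimal NumPy sketch of the general Cellpose-style idea: per-instance gradient (flow) maps are obtained by diffusing heat from a point inside each cell and taking normalized spatial gradients, so that every pixel of a cell points toward a common sink. This is an illustration only, not the authors' implementation or their simplified formulation; `diffusion_gradient_maps`, its parameters, and the 2D setting are assumptions made for brevity.

```python
import numpy as np
from scipy.ndimage import center_of_mass

def diffusion_gradient_maps(labels, n_iter=200):
    """Build 2D flow maps from an instance label image by simulated heat
    diffusion from each cell centroid (illustrative Cellpose-style sketch;
    assumes the centroid lies inside the cell)."""
    heat = np.zeros(labels.shape, dtype=np.float64)
    for lab in np.unique(labels):
        if lab == 0:          # background label
            continue
        mask = labels == lab
        cy, cx = center_of_mass(mask)
        src = (int(round(cy)), int(round(cx)))
        h = np.zeros(labels.shape, dtype=np.float64)
        for _ in range(n_iter):
            h[src] += 1.0     # keep injecting heat at the cell centre
            # 4-neighbour averaging, restricted to the cell mask
            h_pad = np.pad(h, 1)
            h_new = 0.25 * (h_pad[:-2, 1:-1] + h_pad[2:, 1:-1]
                            + h_pad[1:-1, :-2] + h_pad[1:-1, 2:])
            h = np.where(mask, h_new, 0.0)
        heat = np.where(mask, h, heat)
    gy, gx = np.gradient(np.log1p(heat))
    norm = np.sqrt(gy ** 2 + gx ** 2) + 1e-12
    return gy / norm, gx / norm   # unit flow field pointing toward the sinks
```

At inference time, a network trained to predict such flow fields lets one recover instances by following the flows and grouping pixels whose trajectories converge to the same sink.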
Related papers
- Cross-domain and Cross-dimension Learning for Image-to-Graph Transformers [50.576354045312115]
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
We introduce a set of methods enabling cross-domain and cross-dimension transfer learning for image-to-graph transformers.
We demonstrate our method's utility in cross-domain and cross-dimension experiments, where we pretrain our models on 2D satellite images before applying them to vastly different target domains in 2D and 3D.
arXiv Detail & Related papers (2024-03-11T10:48:56Z)
- Unsupervised Learning of Object-Centric Embeddings for Cell Instance Segmentation in Microscopy Images [3.039768384237206]
We introduce object-centric embeddings (OCEs), which embed image patches such that the offsets between patches cropped from the same object are preserved.
We show theoretically that OCEs can be learnt through a self-supervised task that predicts the spatial offset between image patches.
arXiv Detail & Related papers (2023-10-12T16:59:50Z)
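As a toy rendition of the object-centric-embedding idea in the entry above (not the authors' training code; `oce_offset_loss` and the 2D shapes are illustrative assumptions), the self-supervised objective can be read as: the difference between the embeddings of two patches from the same object should equal the known spatial offset between those patches.

```python
import torch

def oce_offset_loss(embed_a, embed_b, offset_ab):
    """Toy offset-preserving objective: for patch pairs cropped from the same
    object, the embedding difference should match the true spatial offset.

    embed_a, embed_b: (N, 2) predicted embeddings for patch centres a and b
    offset_ab:        (N, 2) ground-truth pixel offset from a to b
    """
    return torch.mean(torch.sum((embed_b - embed_a - offset_ab) ** 2, dim=1))

# toy usage with random tensors
a, b, off = torch.randn(8, 2), torch.randn(8, 2), torch.randn(8, 2)
print(oce_offset_loss(a, b, off))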
- 3D Adversarial Augmentations for Robust Out-of-Domain Predictions [115.74319739738571]
We focus on improving the generalization to out-of-domain data.
We learn a set of vectors that deform the objects in an adversarial fashion.
We perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model.
arXiv Detail & Related papers (2023-08-29T17:58:55Z)
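The adversarial-augmentation entry above can be illustrated with a generic sketch, assuming PyTorch, point-cloud inputs, and a single shared deformation field; `adversarial_deform_step`, `model`, and `loss_fn` are hypothetical names, and this is not the paper's exact procedure. The shared, sample-independent deformation is nudged in the direction that increases the task loss and then applied to the training objects.

```python
import torch

def adversarial_deform_step(points, labels, model, loss_fn, deform, step=0.01):
    """One generic adversarial-augmentation step (illustrative only): nudge a
    sample-independent deformation field so that it increases the task loss,
    then return the deformed objects for ordinary training.

    points: (B, N, 3) batch of object point clouds
    deform: (N, 3) per-point displacement shared across all samples
    """
    deform = deform.detach().requires_grad_(True)
    loss = loss_fn(model(points + deform), labels)   # same deformation for every sample
    grad, = torch.autograd.grad(loss, deform)
    with torch.no_grad():
        deform = deform + step * grad.sign()         # ascend the loss (adversarial move)
    return (points + deform).detach(), deform
```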
- Large-Scale Multi-Hypotheses Cell Tracking Using Ultrametric Contours Maps [1.015920567871904]
We describe a method for large-scale 3D cell-tracking through a segmentation selection approach.
We show that this method achieves state-of-the-art results on 3D images from the Cell Tracking Challenge.
Our framework is flexible and supports segmentations from off-the-shelf cell segmentation models.
arXiv Detail & Related papers (2023-08-08T18:41:38Z)
- Semi-Weakly Supervised Object Kinematic Motion Prediction [56.282759127180306]
Given a 3D object, kinematic motion prediction aims to identify the mobile parts as well as the corresponding motion parameters.
We propose a graph neural network to learn the mapping between hierarchical part-level segmentation and mobile part parameters.
The network predictions yield a large-scale set of 3D objects with pseudo-labeled mobility information.
arXiv Detail & Related papers (2023-03-31T02:37:36Z)
- YOLO2U-Net: Detection-Guided 3D Instance Segmentation for Microscopy [0.0]
We introduce a comprehensive method for accurate 3D instance segmentation of cells in the brain tissue.
The proposed method combines the 2D YOLO detection method with a multi-view fusion algorithm to construct a 3D localization of the cells.
The promising performance of the proposed method is shown in comparison with some current deep learning-based 3D instance segmentation methods.
arXiv Detail & Related papers (2022-07-13T14:17:52Z)
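To give a flavour of how per-view 2D detections can be reconciled into a 3D localization, as in the YOLO2U-Net entry above, the toy function below intersects axis-aligned boxes detected on three orthogonal projections of a volume. The paper's multi-view fusion algorithm is more involved; the function name and the box-coordinate conventions here are assumptions.

```python
def fuse_orthogonal_boxes(box_xy, box_xz, box_yz):
    """Toy multi-view fusion: combine 2D boxes detected on the XY, XZ and YZ
    projections of a volume into one axis-aligned 3D box.

    Each 2D box is (min_a, min_b, max_a, max_b) in its projection's axes.
    Returns (x0, y0, z0, x1, y1, z1); every coordinate range is the overlap
    of the two projections that observe that axis.
    """
    x0 = max(box_xy[0], box_xz[0]); x1 = min(box_xy[2], box_xz[2])
    y0 = max(box_xy[1], box_yz[0]); y1 = min(box_xy[3], box_yz[2])
    z0 = max(box_xz[1], box_yz[1]); z1 = min(box_xz[3], box_yz[3])
    return x0, y0, z0, x1, y1, z1
```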
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
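The STN-based homogenization described in the entry above can be sketched with a minimal 2D spatial transformer. The paper works on 3D CT volumes and trains three modules jointly; `TinySTN` below is a generic PyTorch illustration, not the authors' architecture: a small localisation network predicts an affine transform that is applied to the input before segmentation, normalising pose and scale.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySTN(nn.Module):
    """Minimal 2D spatial transformer: a localisation network predicts an
    affine transform that re-poses and re-scales the input image."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 6),
        )
        # initialise the predicted transform to the identity
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                   # per-sample affine matrix
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)   # spatially normalised image
```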
- Bidirectional RNN-based Few Shot Learning for 3D Medical Image Segmentation [11.873435088539459]
We propose a 3D few-shot segmentation framework for accurate organ segmentation using limited annotated training samples of the target organ.
A U-Net like network is designed to predict segmentation by learning the relationship between 2D slices of support data and a query image.
We evaluate our proposed model using three 3D CT datasets with annotations of different organs.
arXiv Detail & Related papers (2020-11-19T01:44:55Z)
- Spherical Harmonics for Shape-Constrained 3D Cell Segmentation [0.7525061684310219]
We show how spherical harmonics can be used as an alternative way to inherently constrain the predictions of neural networks for the segmentation of cells in 3D microscopy image data.
arXiv Detail & Related papers (2020-10-23T12:58:26Z)
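The spherical-harmonics entry above constrains cell shapes by construction: instead of a free-form mask, a network outputs a coefficient vector, and the cell surface follows from evaluating the harmonic series. The sketch below uses SciPy; `sh_radius` and the coefficient ordering are assumptions, not the paper's exact parameterisation.

```python
import numpy as np
from scipy.special import sph_harm

def sh_radius(coeffs, azimuth, polar, l_max=4):
    """Evaluate a star-convex cell radius r(azimuth, polar) from spherical
    harmonic coefficients; predicting `coeffs` instead of a voxel mask keeps
    the predicted 3D shape smooth and closed by construction.

    coeffs:  flat array of (l_max + 1)**2 real coefficients, ordered by (l, m)
    azimuth: array of azimuthal angles in [0, 2*pi] (SciPy's `theta`)
    polar:   array of polar angles in [0, pi] (SciPy's `phi`)
    """
    r = np.zeros_like(azimuth, dtype=float)
    i = 0
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # the real part of the complex SH suffices for this illustration
            r += coeffs[i] * np.real(sph_harm(m, l, azimuth, polar))
            i += 1
    return r
```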
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.