Large-Scale Multi-Hypotheses Cell Tracking Using Ultrametric Contours Maps
- URL: http://arxiv.org/abs/2308.04526v2
- Date: Thu, 11 Apr 2024 23:50:32 GMT
- Title: Large-Scale Multi-Hypotheses Cell Tracking Using Ultrametric Contours Maps
- Authors: Jordão Bragantini, Merlin Lange, Loïc Royer
- Abstract summary: We describe a method for large-scale 3D cell-tracking through a segmentation selection approach.
We show that this method achieves state-of-the-art results in 3D images from the cell tracking challenge.
Our framework is flexible and supports segmentations from off-the-shelf cell segmentation models.
- Score: 1.015920567871904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we describe a method for large-scale 3D cell-tracking through a segmentation selection approach. The proposed method is effective at tracking cells across large microscopy datasets on two fronts: (i) It can solve problems containing millions of segmentation instances in terabyte-scale 3D+t datasets; (ii) It achieves competitive results with or without deep learning, which requires 3D annotated data that is scarce in the fluorescence microscopy field. The proposed method computes cell tracks and segments using a hierarchy of segmentation hypotheses and selects disjoint segments by maximizing the overlap between adjacent frames. We show that this method achieves state-of-the-art results in 3D images from the cell tracking challenge and has a faster integer linear programming formulation. Moreover, our framework is flexible and supports segmentations from off-the-shelf cell segmentation models and can combine them into an ensemble that improves tracking. The code is available at https://github.com/royerlab/ultrack.
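As a rough illustration of the selection step described in the abstract: choose disjoint segments from a pool of overlapping hypotheses so that overlap with the previous frame's segments is maximized. The greedy heuristic and all names below are illustrative simplifications, not the authors' implementation, which formulates the joint problem as an integer linear program.

```python
# Toy sketch of segmentation-hypothesis selection by overlap maximization.
# Segments are modeled as sets of pixel indices; a hierarchy of hypotheses
# yields nested, mutually overlapping candidate sets per frame.

def iou(a, b):
    """Intersection-over-union of two pixel-index sets."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter) if inter else 0.0

def select_segments(prev_selected, hypotheses):
    """Pick disjoint hypotheses for the current frame, preferring those
    that best overlap the segments chosen in the previous frame.
    Greedy stand-in for the paper's ILP (a simplification)."""
    scored = sorted(
        hypotheses,
        key=lambda h: max((iou(h, p) for p in prev_selected), default=0.0),
        reverse=True,
    )
    selected, used = [], set()
    for h in scored:
        if not (h & used):  # enforce disjointness of selected segments
            selected.append(h)
            used |= h
    return selected

# Frame t-1: two accepted segments.
prev = [{1, 2, 3}, {10, 11}]
# Frame t: the hierarchy offers nested, overlapping hypotheses.
hyps = [{1, 2}, {1, 2, 3, 4}, {3, 4}, {10, 11, 12}]
print(select_segments(prev, hyps))  # → [{1, 2, 3, 4}, {10, 11, 12}]
```

The greedy pass picks the hypothesis with the highest overlap score first and then discards any candidate that touches already-claimed pixels; the actual ILP instead optimizes the whole selection jointly across frames.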
Related papers
- ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining [104.34751911174196]
We build a large-scale dataset of 3DGS using ShapeNet and ModelNet datasets.
Our dataset ShapeSplat consists of 65K objects from 87 unique categories.
We introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters.
arXiv Detail & Related papers (2024-08-20T14:49:14Z) - View-Consistent Hierarchical 3D Segmentation Using Ultrametric Feature Fields [52.08335264414515]
We learn a novel feature field within a Neural Radiance Field (NeRF) representing a 3D scene.
Our method takes view-inconsistent multi-granularity 2D segmentations as input and produces a hierarchy of 3D-consistent segmentations as output.
We evaluate our method and several baselines on synthetic datasets with multi-view images and multi-granular segmentation, showcasing improved accuracy and viewpoint-consistency.
arXiv Detail & Related papers (2024-05-30T04:14:58Z) - PointOcc: Cylindrical Tri-Perspective View for Point-based 3D Semantic Occupancy Prediction [72.75478398447396]
We propose a cylindrical tri-perspective view to represent point clouds effectively and comprehensively.
Considering the distance distribution of LiDAR point clouds, we construct the tri-perspective view in the cylindrical coordinate system.
We employ spatial group pooling to maintain structural details during projection and adopt 2D backbones to efficiently process each TPV plane.
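The cylindrical construction above starts from the standard Cartesian-to-cylindrical coordinate change; a minimal sketch (the function name is illustrative, not PointOcc's API):

```python
import math

def to_cylindrical(points):
    """Map Cartesian LiDAR points (x, y, z) to cylindrical coordinates
    (radius, azimuth, z), the space in which the tri-perspective view
    planes are constructed."""
    return [
        (math.hypot(x, y), math.atan2(y, x), z)
        for x, y, z in points
    ]

r, theta, z = to_cylindrical([(3.0, 4.0, 1.0)])[0]
print(r, theta, z)  # radius 5.0
```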
arXiv Detail & Related papers (2023-08-31T17:57:17Z) - Contrastive Lift: 3D Object Instance Segmentation by Slow-Fast Contrastive Fusion [110.84357383258818]
We propose a novel approach to lift 2D segments to 3D and fuse them by means of a neural field representation.
The core of our approach is a slow-fast clustering objective function, which is scalable and well-suited for scenes with a large number of objects.
Our approach outperforms the state-of-the-art on challenging scenes from the ScanNet, Hypersim, and Replica datasets.
arXiv Detail & Related papers (2023-06-07T17:57:45Z) - YOLO2U-Net: Detection-Guided 3D Instance Segmentation for Microscopy [0.0]
We introduce a comprehensive method for accurate 3D instance segmentation of cells in the brain tissue.
The proposed method combines the 2D YOLO detection method with a multi-view fusion algorithm to construct a 3D localization of the cells.
The promising performance of the proposed method is shown in comparison with some current deep learning-based 3D instance segmentation methods.
arXiv Detail & Related papers (2022-07-13T14:17:52Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - EfficientCellSeg: Efficient Volumetric Cell Segmentation Using Context Aware Pseudocoloring [4.555723508665994]
We introduce a small convolutional neural network (CNN) for volumetric cell segmentation.
Our model is efficient and has an asymmetric encoder-decoder structure with very few parameters in the decoder.
Our method achieves top-ranking results, while our CNN model has up to 25x fewer parameters than other top-ranking methods.
arXiv Detail & Related papers (2022-04-06T18:02:15Z) - VoxelEmbed: 3D Instance Segmentation and Tracking with Voxel Embedding based Deep Learning [5.434831972326107]
We propose a novel spatial-temporal voxel-embedding (VoxelEmbed) based learning method to perform simultaneous cell instance segmentation and tracking on 3D volumetric video sequences.
We evaluate our VoxelEmbed method on four 3D datasets (with different cell types) from the ISBI Cell Tracking Challenge.
arXiv Detail & Related papers (2021-06-22T02:03:26Z) - Robust 3D Cell Segmentation: Extending the View of Cellpose [0.1384477926572109]
We extend the Cellpose approach to improve segmentation accuracy on 3D image data.
We show how the formulation of the gradient maps can be simplified while still being robust and reaching similar segmentation accuracy.
arXiv Detail & Related papers (2021-05-03T12:47:41Z) - Segment as Points for Efficient Online Multi-Object Tracking and Segmentation [66.03023110058464]
We propose a highly effective method for learning segment-based instance embeddings by converting the compact image representation into an unordered 2D point cloud representation.
Our method generates a new tracking-by-points paradigm where discriminative instance embeddings are learned from randomly selected points rather than images.
The resulting online MOTS framework, named PointTrack, surpasses all the state-of-the-art methods by large margins.
arXiv Detail & Related papers (2020-07-03T08:29:35Z) - Cell Segmentation and Tracking using CNN-Based Distance Predictions and a Graph-Based Matching Strategy [0.20999222360659608]
We present a method for the segmentation of touching cells in microscopy images.
By using a novel representation of cell borders, inspired by distance maps, our method can utilize not only touching cells but also close cells in the training process.
This representation is notably robust to annotation errors and shows promising results for segmenting microscopy images that contain cell types underrepresented in, or absent from, the training data.
arXiv Detail & Related papers (2020-04-03T11:55:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.