YOLO2U-Net: Detection-Guided 3D Instance Segmentation for Microscopy
- URL: http://arxiv.org/abs/2207.06215v1
- Date: Wed, 13 Jul 2022 14:17:52 GMT
- Title: YOLO2U-Net: Detection-Guided 3D Instance Segmentation for Microscopy
- Authors: Amirkoushyar Ziabari, Derek C. Rose, Abbas Shirinifard, David Solecki
- Abstract summary: We introduce a comprehensive method for accurate 3D instance segmentation of cells in brain tissue.
The proposed method combines the 2D YOLO detection method with a multi-view fusion algorithm to construct a 3D localization of the cells.
The promising performance of the proposed method is shown in comparison with current deep learning-based 3D instance segmentation methods.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Microscopy imaging techniques are instrumental for characterization and
analysis of biological structures. As these techniques typically render 3D
visualizations of cells by stacking 2D projections, issues such as out-of-plane
excitation and low resolution along the $z$-axis can make it challenging (even
for human experts) to detect individual cells in 3D volumes, since
non-overlapping cells may appear to overlap. In this work, we introduce a
comprehensive method for accurate 3D instance segmentation of cells in brain
tissue. The proposed method combines the 2D YOLO detection method with a
multi-view fusion algorithm to construct a 3D localization of the cells. Next,
the 3D bounding boxes, along with the data volume, are passed to a 3D U-Net
that segments the primary cell in each bounding box and, in turn, carries out
instance segmentation of cells in the entire volume. The promising performance
of the proposed method is demonstrated in comparison with current deep
learning-based 3D instance segmentation methods.
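The abstract does not spell out the fusion step; as one hedged reading, the sketch below pairs hypothetical YOLO boxes from two orthogonal maximum-intensity projections by x-interval overlap and intersects each pair into a 3D box. The pairing heuristic, function names, and box formats are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def x_overlap(a, b):
    """Length of the overlap between the x-intervals of two boxes."""
    return max(0.0, min(a[2], b[2]) - max(a[0], b[0]))

def fuse_views(xy_boxes, xz_boxes, min_overlap=1.0):
    """Pair each XY-view box (x0, y0, x1, y1) with the XZ-view box
    (x0, z0, x1, z1) sharing the largest x-interval overlap, and
    intersect the pair into a 3D box (x0, y0, z0, x1, y1, z1)."""
    boxes3d = []
    for xy in xy_boxes:
        best = max(xz_boxes, key=lambda xz: x_overlap(xy, xz), default=None)
        if best is None or x_overlap(xy, best) < min_overlap:
            continue
        x0, x1 = max(xy[0], best[0]), min(xy[2], best[2])
        boxes3d.append((x0, xy[1], best[1], x1, xy[3], best[3]))
    return np.array(boxes3d)

# Toy example: one cell detected in both projections.
xy = [(10, 20, 30, 40)]    # hypothetical YOLO box on the XY projection
xz = [(12, 5, 28, 15)]     # hypothetical YOLO box on the XZ projection
print(fuse_views(xy, xz))  # -> [[12. 20.  5. 28. 40. 15.]]
```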
Related papers
- Large-Scale Multi-Hypotheses Cell Tracking Using Ultrametric Contours Maps [1.015920567871904]
We describe a method for large-scale 3D cell-tracking through a segmentation selection approach.
We show that this method achieves state-of-the-art results in 3D images from the cell tracking challenge.
Our framework is flexible and supports segmentations from off-the-shelf cell segmentation models.
arXiv Detail & Related papers (2023-08-08T18:41:38Z)
- ONeRF: Unsupervised 3D Object Segmentation from Multiple Views [59.445957699136564]
ONeRF is a method that automatically segments and reconstructs object instances in 3D from multi-view RGB images without any additional manual annotations.
The segmented 3D objects are represented using separate Neural Radiance Fields (NeRFs) which allow for various 3D scene editing and novel view rendering.
arXiv Detail & Related papers (2022-11-22T06:19:37Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize 3D voxelization and 3D convolution networks.
We propose a new framework for outdoor LiDAR segmentation, in which cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
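As a rough illustration of the cylindrical partition described in the entry above, the NumPy sketch below bins Cartesian LiDAR points into a (radius, azimuth, height) grid; the bin counts and coordinate ranges are made-up assumptions, not the paper's configuration.

```python
import numpy as np

def cylindrical_voxel_indices(points, rho_bins=480, phi_bins=360, z_bins=32,
                              rho_max=50.0, z_min=-4.0, z_max=2.0):
    """Map (N, 3) Cartesian LiDAR points to (rho, phi, z) voxel indices.

    Unlike a uniform Cartesian grid, a cylindrical grid keeps the angular
    resolution constant, so distant (sparse) regions fall into larger voxels.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2)  # radial distance from the sensor
    phi = np.arctan2(y, x)      # azimuth in [-pi, pi)
    i = np.clip((rho / rho_max * rho_bins).astype(int), 0, rho_bins - 1)
    j = np.clip(((phi + np.pi) / (2 * np.pi) * phi_bins).astype(int), 0, phi_bins - 1)
    k = np.clip(((z - z_min) / (z_max - z_min) * z_bins).astype(int), 0, z_bins - 1)
    return np.stack([i, j, k], axis=1)

pts = np.random.uniform(-40, 40, size=(1000, 3))
pts[:, 2] = np.random.uniform(-3, 1, size=1000)  # plausible LiDAR heights
print(cylindrical_voxel_indices(pts)[:3])
```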
- VoxelEmbed: 3D Instance Segmentation and Tracking with Voxel Embedding based Deep Learning [5.434831972326107]
We propose a novel spatial-temporal voxel-embedding (VoxelEmbed) based learning method to perform simultaneous cell instance segmentation and tracking on 3D volumetric video sequences.
We evaluate our VoxelEmbed method on four 3D datasets (with different cell types) from the ISBI Cell Tracking Challenge.
arXiv Detail & Related papers (2021-06-22T02:03:26Z)
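The VoxelEmbed entry above only hints at how a voxel embedding yields instances; one common realization is to cluster the learned per-voxel embeddings, sketched below with random stand-in embeddings and DBSCAN (both assumptions, not the authors' pipeline).

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Stand-in for a network output: an (N_voxels, D) embedding in which
# voxels of the same cell are trained to lie close together.
rng = np.random.default_rng(0)
cell_a = rng.normal(loc=0.0, scale=0.05, size=(200, 8))
cell_b = rng.normal(loc=1.0, scale=0.05, size=(150, 8))
embeddings = np.vstack([cell_a, cell_b])

# Group embeddings into instances; each label is one cell, -1 is noise.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(embeddings)
print(np.unique(labels))  # -> [0 1] for this toy example
```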
- FGR: Frustum-Aware Geometric Reasoning for Weakly Supervised 3D Vehicle Detection [81.79171905308827]
We propose frustum-aware geometric reasoning (FGR) to detect vehicles in point clouds without any 3D annotations.
Our method consists of two stages: coarse 3D segmentation and 3D bounding box estimation.
It is able to accurately detect objects in 3D space with only 2D bounding boxes and sparse point clouds.
arXiv Detail & Related papers (2021-05-17T07:29:55Z)
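A minimal sketch of the frustum idea from the FGR entry above: lift a 2D box through assumed pinhole intrinsics and keep only the points projecting inside it. The intrinsics, box, and point cloud here are illustrative, not from the paper.

```python
import numpy as np

def points_in_frustum(points, box2d, K):
    """Keep the (N, 3) camera-frame points whose image projection
    falls inside the 2D box (u0, v0, u1, v1) given intrinsics K."""
    z = points[:, 2]
    front = z > 0               # only points in front of the camera
    uvw = (K @ points.T).T      # project with the pinhole model
    u, v = uvw[:, 0] / z, uvw[:, 1] / z
    u0, v0, u1, v1 = box2d
    inside = (u >= u0) & (u <= u1) & (v >= v0) & (v <= v1)
    return points[front & inside]

K = np.array([[700.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.random.uniform([-5, -2, 1], [5, 2, 40], size=(500, 3))
print(points_in_frustum(pts, (300, 200, 360, 280), K).shape)
```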
- Robust 3D Cell Segmentation: Extending the View of Cellpose [0.1384477926572109]
We extend the Cellpose approach to improve segmentation accuracy on 3D image data.
We show how the formulation of the gradient maps can be simplified while still being robust and reaching similar segmentation accuracy.
arXiv Detail & Related papers (2021-05-03T12:47:41Z)
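Cellpose-style gradient (flow) maps, as in the entry above, group pixels by letting them flow to a common fixed point; the toy sketch below builds such a field for two synthetic cell centres and follows it. Everything here (grid size, centres, step size) is an assumption for illustration, not the paper's simplified formulation.

```python
import numpy as np

def follow_flows(coords, flow, n_iter=200, step=0.5):
    """Advect pixel coordinates along a flow field; pixels belonging to
    the same cell converge to the same fixed point (the cell centre)."""
    pos = coords.astype(float).copy()
    for _ in range(n_iter):
        ij = np.round(pos).astype(int)
        ij[:, 0] = np.clip(ij[:, 0], 0, flow.shape[0] - 1)
        ij[:, 1] = np.clip(ij[:, 1], 0, flow.shape[1] - 1)
        pos += step * flow[ij[:, 0], ij[:, 1]]
    return pos

# Toy flow field on a 64x64 image pointing at two cell centres.
centres = np.array([[16.0, 16.0], [48.0, 48.0]])
yy, xx = np.mgrid[0:64, 0:64]
grid = np.stack([yy, xx], axis=-1).astype(float)
nearest = np.argmin(((grid[:, :, None, :] - centres) ** 2).sum(-1), axis=-1)
vec = centres[nearest] - grid
flow = vec / (np.linalg.norm(vec, axis=-1, keepdims=True) + 1e-8)

# Pixels that land on the same fixed point get the same instance label.
final = follow_flows(grid.reshape(-1, 2), flow)
labels = np.argmin(((final[:, None, :] - centres) ** 2).sum(-1), axis=-1)
print(np.bincount(labels))  # pixel counts of the two recovered cells
```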
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
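A minimal sketch of the two-stream idea from the TSGCNet entry above, assuming a toy chain-graph mesh: each geometric attribute (coordinates, normals) is propagated with its own graph-convolution weights, and the streams are concatenated. None of this reflects TSGCNet's actual architecture details.

```python
import numpy as np

def graph_conv(adj, feats, weight):
    """One propagation step: average neighbour features, then project."""
    deg = adj.sum(axis=1, keepdims=True)
    return np.maximum((adj / deg) @ feats @ weight, 0.0)  # mean-aggregate + ReLU

rng = np.random.default_rng(0)
n = 6  # mesh cells (e.g. tooth-surface triangles) on a toy chain graph
adj = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

coords = rng.normal(size=(n, 3))   # coordinate-stream input
normals = rng.normal(size=(n, 3))  # normal-vector-stream input

# Each stream learns from one geometric attribute with its own weights...
h_c = graph_conv(adj, coords, rng.normal(size=(3, 16)))
h_n = graph_conv(adj, normals, rng.normal(size=(3, 16)))

# ...and the streams are fused for the final per-cell prediction.
fused = np.concatenate([h_c, h_n], axis=1)
print(fused.shape)  # -> (6, 32)
```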
- Spatial Context-Aware Self-Attention Model For Multi-Organ Segmentation [18.76436457395804]
Multi-organ segmentation is one of the most successful applications of deep learning in medical image analysis.
Deep convolutional neural nets (CNNs) have shown great promise in achieving clinically applicable image segmentation performance on CT or MRI images.
We propose a new framework for combining 3D and 2D models, in which the segmentation is realized through high-resolution 2D convolutions.
arXiv Detail & Related papers (2020-12-16T21:39:53Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a 3D cylinder partition and 3D cylinder convolution based framework, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method outperforms the state of the art by 5% on object detection in ScanNet scenes and by 3.4% on the Waymo Open dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.