Advanced Deep Networks for 3D Mitochondria Instance Segmentation
- URL: http://arxiv.org/abs/2104.07961v1
- Date: Fri, 16 Apr 2021 08:27:44 GMT
- Title: Advanced Deep Networks for 3D Mitochondria Instance Segmentation
- Authors: Mingxing Li, Chang Chen, Xiaoyu Liu, Wei Huang, Yueyi Zhang, Zhiwei
Xiong
- Abstract summary: We propose two advanced deep networks, named Res-UNet-R and Res-UNet-H, for 3D mitochondria instance segmentation from Rat and Human samples.
Specifically, we design a simple yet effective anisotropic convolution block and deploy a multi-scale training strategy, which together boost the segmentation performance.
- Score: 46.295601731565725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mitochondria instance segmentation from electron microscopy (EM) images has
seen notable progress since the introduction of deep learning methods. In this
paper, we propose two advanced deep networks, named Res-UNet-R and Res-UNet-H,
for 3D mitochondria instance segmentation from Rat and Human samples.
Specifically, we design a simple yet effective anisotropic convolution block
and deploy a multi-scale training strategy, which together boost the
segmentation performance. Moreover, we enhance the generalizability of the
trained models on the test set by adding a denoising operation as
pre-processing. In the Large-scale 3D Mitochondria Instance Segmentation
Challenge, our team ranked 1st on the leaderboard at the end of the testing
phase. Code is available at
https://github.com/Limingxing00/MitoEM2021-Challenge.
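The abstract describes the anisotropic convolution block only at a high level. A minimal numpy sketch of the general idea (a cheap in-plane convolution followed by a through-plane one, matching the coarse z-resolution of serial-section EM) might look like the following; the kernel shapes and block composition here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def conv3d_valid(vol, kernel):
    """Naive 'valid' 3D cross-correlation for a single-channel volume."""
    kd, kh, kw = kernel.shape
    d, h, w = vol.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(vol[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

def anisotropic_block(vol, k_plane, k_depth):
    """In-plane (1,3,3) conv followed by a through-plane (3,1,1) conv.

    Approximates a 3x3x3 receptive field with two cheaper convolutions
    that respect anisotropic voxel spacing (coarse in z, fine in x/y).
    Hypothetical sketch; the paper's block may be composed differently.
    """
    out = conv3d_valid(vol, k_plane)   # mixes information within each slice
    return conv3d_valid(out, k_depth)  # then mixes information across slices

vol = np.random.rand(8, 16, 16)
k_plane = np.ones((1, 3, 3)) / 9.0     # 2D averaging kernel per slice
k_depth = np.ones((3, 1, 1)) / 3.0     # 1D averaging kernel along z
out = anisotropic_block(vol, k_plane, k_depth)
print(out.shape)  # (6, 14, 14): 3x3x3 receptive field from two cheap convs
```

A learned version would replace the fixed averaging kernels with trainable filters per channel.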
Related papers
- Automated 3D Tumor Segmentation using Temporal Cubic PatchGAN (TCuP-GAN) [0.276240219662896]
Temporal Cubic PatchGAN (TCuP-GAN) is a volume-to-volume translational model that marries a generative feature-learning framework with Convolutional Long Short-Term Memory Networks (LSTMs).
We demonstrate the capabilities of TCuP-GAN on data from four segmentation challenges (Adult Glioma, Meningioma, Pediatric Tumors, and Sub-Saharan Africa).
We show that our framework learns to predict robust multi-class segmentation masks across all four challenges.
arXiv Detail & Related papers (2023-11-23T18:37:26Z)
- 3D Mitochondria Instance Segmentation with Spatio-Temporal Transformers [101.44668514239959]
We propose a hybrid encoder-decoder framework that efficiently computes spatial and temporal attentions in parallel.
We also introduce a semantic clutter-background adversarial loss during training that aids in delineating mitochondria instances from the background.
arXiv Detail & Related papers (2023-03-21T17:58:49Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- Memory-efficient Segmentation of High-resolution Volumetric MicroCT Images [11.723370840090453]
We propose a memory-efficient network architecture for 3D high-resolution image segmentation.
The network incorporates both global and local features via a two-stage U-net-based cascaded framework.
Experiments show that it outperforms state-of-the-art 3D segmentation methods in terms of both segmentation accuracy and memory efficiency.
arXiv Detail & Related papers (2022-05-31T16:42:48Z)
- HIVE-Net: Centerline-Aware HIerarchical View-Ensemble Convolutional Network for Mitochondria Segmentation in EM Images [3.1498833540989413]
We introduce a novel hierarchical view-ensemble convolution (HVEC) to learn 3D spatial contexts using more efficient 2D convolutions.
The proposed method performs favorably against the state-of-the-art methods in accuracy and visual quality but with a greatly reduced model size.
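The idea of recovering 3D spatial context from cheaper 2D convolutions can be sketched with a fixed-kernel toy version: run the same 2D convolution over xy, xz, and yz slices of the volume and average the results. The real HVEC block is a learned, hierarchical ensemble; this is only a numpy illustration of the view-ensemble principle.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same' 2D cross-correlation with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y+kh, x:x+kw] * kernel)
    return out

def view_ensemble(vol, kernel):
    """Apply one 2D conv over xy, xz, and yz slices, then average.

    Gathers 3D context from 2D operations over orthogonal views;
    a simplified stand-in for the learned HVEC block.
    """
    xy = np.stack([conv2d_same(vol[z], kernel) for z in range(vol.shape[0])], axis=0)
    xz = np.stack([conv2d_same(vol[:, y], kernel) for y in range(vol.shape[1])], axis=1)
    yz = np.stack([conv2d_same(vol[:, :, x], kernel) for x in range(vol.shape[2])], axis=2)
    return (xy + xz + yz) / 3.0

vol = np.random.rand(6, 6, 6)
kernel = np.ones((3, 3)) / 9.0  # fixed averaging kernel for illustration
out = view_ensemble(vol, kernel)
print(out.shape)  # (6, 6, 6)
```

Three 2D passes cost roughly 27/9 fewer multiply-adds per voxel than one dense 3x3x3 convolution, which is the source of the model-size savings the abstract mentions.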
arXiv Detail & Related papers (2021-01-08T06:56:40Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- RDCNet: Instance segmentation with a minimalist recurrent residual network [0.14999444543328289]
We propose a minimalist recurrent network called the recurrent dilated convolutional network (RDCNet).
RDCNet consists of a shared stacked dilated convolution (sSDC) layer that iteratively refines its output and thereby generates interpretable intermediate predictions.
We demonstrate its versatility on three tasks with different imaging modalities: nuclear segmentation of H&E slides, nuclear segmentation of 3D anisotropic stacks from light-sheet fluorescence microscopy, and leaf segmentation of top-view images of plants.
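The iterative-refinement idea behind RDCNet can be illustrated with a fixed-weight toy: apply one shared dilated convolution repeatedly and keep each intermediate output as an inspectable prediction. The real sSDC layer is a learned, multi-channel stack; everything below is a hypothetical numpy sketch.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation):
    """Naive 'same' 2D cross-correlation with a dilated 3x3 kernel."""
    r = dilation                      # reach of the dilated 3x3 kernel
    padded = np.pad(img, r)
    out = np.zeros_like(img)
    offsets = [-r, 0, r]
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            acc = 0.0
            for ky, dy in enumerate(offsets):
                for kx, dx in enumerate(offsets):
                    acc += padded[y + r + dy, x + r + dx] * kernel[ky, kx]
            out[y, x] = acc
    return out

def recurrent_refine(img, kernel, steps=3, dilation=2):
    """Apply one SHARED dilated conv repeatedly; each intermediate
    output is kept as an interpretable prediction, mirroring the
    iterative refinement described for RDCNet."""
    preds = []
    out = img
    for _ in range(steps):
        out = np.tanh(dilated_conv2d(out, kernel, dilation))  # same weights every step
        preds.append(out)
    return preds

img = np.random.rand(8, 8)
kernel = np.ones((3, 3)) / 9.0  # fixed averaging kernel for illustration
preds = recurrent_refine(img, kernel)
print(len(preds), preds[-1].shape)  # 3 (8, 8)
```

Sharing one set of weights across steps is what keeps the parameter count minimal while still growing the effective receptive field each iteration.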
arXiv Detail & Related papers (2020-10-02T13:36:45Z)
- MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z)
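The two-stage localize-then-segment pattern used by MetricUNet is common enough to sketch generically: a cheap first stage produces a coarse mask, its bounding box crops the image, and a second stage segments only the crop. Both `coarse_fn` and `fine_fn` below are hypothetical placeholders for the two networks, not APIs from the paper.

```python
import numpy as np

def bbox_from_mask(mask, margin=2):
    """Bounding box of the foreground in a coarse binary mask."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, mask.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, mask.shape[1])
    return y0, y1, x0, x1

def two_stage_segment(image, coarse_fn, fine_fn):
    """Stage 1 localizes the target region; stage 2 segments only the crop.

    Restricting the precise model to the crop is what makes the
    second stage cheap and lets it focus on boundary detail.
    """
    coarse = coarse_fn(image)                  # cheap, low-precision mask
    y0, y1, x0, x1 = bbox_from_mask(coarse)
    fine = np.zeros(image.shape, dtype=float)
    fine[y0:y1, x0:x1] = fine_fn(image[y0:y1, x0:x1])  # precise mask inside crop
    return fine

np.random.seed(0)
image = np.random.rand(32, 32)
coarse_fn = lambda im: (im > 0.9).astype(int)       # toy "localizer" network
fine_fn = lambda crop: (crop > 0.5).astype(float)   # toy "segmenter" network
out = two_stage_segment(image, coarse_fn, fine_fn)
print(out.shape)  # (32, 32)
```

The paper's contribution, the online metric-learning module with voxel-wise sampling, would live inside the second stage and is not represented in this sketch.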
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.