Memory-efficient Segmentation of High-resolution Volumetric MicroCT Images
- URL: http://arxiv.org/abs/2205.15941v1
- Date: Tue, 31 May 2022 16:42:48 GMT
- Title: Memory-efficient Segmentation of High-resolution Volumetric MicroCT Images
- Authors: Yuan Wang, Laura Blackie, Irene Miguel-Aliaga, Wenjia Bai
- Abstract summary: We propose a memory-efficient network architecture for 3D high-resolution image segmentation.
The network incorporates both global and local features via a two-stage U-net-based cascaded framework.
Experiments show that it outperforms state-of-the-art 3D segmentation methods in terms of both segmentation accuracy and memory efficiency.
- Score: 11.723370840090453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, 3D convolutional neural networks have become the dominant
approach for volumetric medical image segmentation. However, compared to their
2D counterparts, 3D networks introduce substantially more training parameters
and place higher demands on GPU memory. This has become a major limiting
factor in designing and training 3D networks for high-resolution volumetric
images. In this work, we propose a novel memory-efficient network architecture
for 3D high-resolution image segmentation. The network incorporates both global
and local features via a two-stage U-net-based cascaded framework; at the
first stage, a memory-efficient U-net (meU-net) is developed. The features
learnt at the two stages are connected via post-concatenation, which further
improves the information flow. The proposed segmentation method is evaluated on
an ultra-high-resolution microCT dataset with typically 250 million voxels per
volume. Experiments show that it outperforms state-of-the-art 3D segmentation
methods in terms of both segmentation accuracy and memory efficiency.
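The abstract's two-stage cascade can be illustrated with a minimal sketch. This is not the authors' code: the stage networks below are hypothetical placeholders standing in for the meU-net and the refinement U-net, and plain Python lists stand in for tensors. It only shows the data flow the abstract describes: stage 1 operates on a downsampled volume for global context, its output is upsampled and concatenated channel-wise with the full-resolution input ("post-concatenation"), and stage 2 refines from that stack.

```python
def avg_pool2(vol):
    """Downsample a DxHxW volume by 2 via 2x2x2 average pooling (global context)."""
    d, h, w = len(vol), len(vol[0]), len(vol[0][0])
    return [[[sum(vol[2*z + dz][2*y + dy][2*x + dx]
                  for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)) / 8.0
              for x in range(w // 2)]
             for y in range(h // 2)]
            for z in range(d // 2)]

def upsample2(vol):
    """Nearest-neighbour upsampling by 2 along each axis, back to full resolution."""
    return [[[vol[z // 2][y // 2][x // 2]
              for x in range(2 * len(vol[0][0]))]
             for y in range(2 * len(vol[0]))]
            for z in range(2 * len(vol))]

def stage1_net(vol):
    # Placeholder for the memory-efficient U-net (meU-net): a toy threshold
    # stands in for the learned coarse segmentation.
    return [[[1.0 if v > 0.5 else 0.0 for v in row] for row in sl] for sl in vol]

def stage2_net(channels):
    # Placeholder refinement network: voxel-wise average of the stacked channels.
    d, h, w = len(channels[0]), len(channels[0][0]), len(channels[0][0][0])
    return [[[sum(c[z][y][x] for c in channels) / len(channels)
              for x in range(w)] for y in range(h)] for z in range(d)]

def cascade(vol):
    coarse = stage1_net(avg_pool2(vol))  # stage 1: coarse, low-resolution pass
    global_feat = upsample2(coarse)      # bring global features to full resolution
    # Post-concatenation: stack global features with the raw input as channels,
    # then refine at full resolution.
    return stage2_net([vol, global_feat])
```

The memory saving in the real method comes from the meU-net design at stage 1; here the point is only that stage 2 never sees the whole problem alone, since the concatenated global channel carries context from the downsampled pass.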
Related papers
- Tri-Plane Mamba: Efficiently Adapting Segment Anything Model for 3D Medical Images [16.55283939924806]
General networks for 3D medical image segmentation have recently undergone extensive exploration.
The Segment Anything Model (SAM) has achieved superior performance in 2D medical image segmentation tasks.
We present two major innovations: 1) multi-scale 3D convolutional adapters, optimized for efficiently processing local depth-level information, and 2) a tri-plane mamba module, engineered to capture long-range depth-level representations.
arXiv Detail & Related papers (2024-09-13T02:37:13Z) - E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation [36.367368163120794]
We propose a 3D medical image segmentation model, named Efficient to Efficient Network (E2ENet).
It incorporates two parametrically and computationally efficient designs.
It consistently achieves a superior trade-off between accuracy and efficiency across various resource constraints.
arXiv Detail & Related papers (2023-12-07T22:13:37Z) - SeMLaPS: Real-time Semantic Mapping with Latent Prior Networks and Quasi-Planar Segmentation [53.83313235792596]
We present a new methodology for real-time semantic mapping from RGB-D sequences.
It combines a 2D neural network and a 3D network based on a SLAM system with 3D occupancy mapping.
Our system achieves state-of-the-art semantic mapping quality among 2D-3D network-based systems.
arXiv Detail & Related papers (2023-06-28T22:36:44Z) - GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z) - Memory transformers for full context and high-resolution 3D Medical Segmentation [76.93387214103863]
This paper introduces the Full resolutIoN mEmory (FINE) transformer to address the memory cost of full-resolution 3D segmentation.
The core idea behind FINE is to learn memory tokens to indirectly model full range interactions.
Experiments on the BCV image segmentation dataset show better performance than state-of-the-art CNN and transformer baselines.
arXiv Detail & Related papers (2022-10-11T10:11:05Z) - GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - HIVE-Net: Centerline-Aware HIerarchical View-Ensemble Convolutional Network for Mitochondria Segmentation in EM Images [3.1498833540989413]
We introduce a novel hierarchical view-ensemble convolution (HVEC) to learn 3D spatial contexts using more efficient 2D convolutions.
The proposed method performs favorably against state-of-the-art methods in accuracy and visual quality, while greatly reducing model size.
arXiv Detail & Related papers (2021-01-08T06:56:40Z) - TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z) - Efficient embedding network for 3D brain tumor segmentation [0.33727511459109777]
In this paper, we investigate how to transfer the performance of a two-dimensional classification network to three-dimensional semantic segmentation of brain tumors.
As the input data is in 3D, the first layers of the encoder are devoted to the reduction of the third dimension in order to fit the input of the EfficientNet network.
Experimental results on validation and test data from the BraTS 2020 challenge demonstrate that the proposed method achieves promising performance.
arXiv Detail & Related papers (2020-11-22T16:17:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.