VolumeNet: A Lightweight Parallel Network for Super-Resolution of Medical Volumetric Data
- URL: http://arxiv.org/abs/2010.08357v2
- Date: Sat, 24 Oct 2020 06:00:55 GMT
- Title: VolumeNet: A Lightweight Parallel Network for Super-Resolution of Medical Volumetric Data
- Authors: Yinhao Li, Yutaro Iwamoto, Lanfen Lin, Rui Xu, Yen-Wei Chen
- Abstract summary: We propose a 3D convolutional neural network (CNN) for SR of medical volumetric data called ParallelNet using parallel connections.
We show that the proposed VolumeNet significantly reduces the number of model parameters and achieves high-precision results.
- Score: 20.34783243852236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based super-resolution (SR) techniques have generally achieved
excellent performance in the computer vision field. Recently, it has been
proven that three-dimensional (3D) SR for medical volumetric data delivers
better visual results than conventional two-dimensional (2D) processing.
However, deepening and widening 3D networks increases training difficulty
significantly due to the large number of parameters and small number of
training samples. Thus, we propose a 3D convolutional neural network (CNN) for
SR of medical volumetric data called ParallelNet using parallel connections. We
construct a parallel connection structure based on the group convolution and
feature aggregation to build a 3D CNN that is as wide as possible with few
parameters. As a result, the model thoroughly learns more feature maps with
larger receptive fields. In addition, to further improve accuracy, we present
an efficient version of ParallelNet (called VolumeNet), which reduces the
number of parameters and deepens ParallelNet using a proposed lightweight
building block module called the Queue module. Unlike most lightweight CNNs
based on depthwise convolutions, the Queue module is primarily constructed
using separable 2D cross-channel convolutions. As a result, the number of
network parameters and computational complexity can be reduced significantly
while maintaining accuracy due to full channel fusion. Experimental results
demonstrate that the proposed VolumeNet significantly reduces the number of
model parameters and achieves high-precision results compared to
state-of-the-art methods.
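The abstract names two structural ideas: a parallel connection built from group convolution and feature aggregation (ParallelNet), and a Queue module built from separable 2D cross-channel convolutions (VolumeNet). As a rough illustration only, the PyTorch sketch below shows one plausible reading of those ideas; the kernel shapes, channel counts, group count, activations, and residual fusion are assumptions of this sketch, not details taken from the paper or its code.

```python
# A minimal PyTorch sketch of the two ideas named in the abstract. This is NOT
# the authors' implementation: kernel shapes, channel counts, activations, and
# the residual fusion below are illustrative assumptions.
import torch
import torch.nn as nn


class QueueModule(nn.Module):
    """A queue of separable 2D cross-channel convolutions standing in for a
    dense 3D convolution. Two plane-wise 3x3 kernels use 2 * 9 * C^2 weights,
    versus 27 * C^2 for a full 3x3x3 kernel, while still mixing all channels."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            # 2D convolution in the height-width plane (all channels fused).
            nn.Conv3d(channels, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
            # 2D convolution in the depth-height plane (all channels fused).
            nn.Conv3d(channels, channels, kernel_size=(3, 3, 1), padding=(1, 1, 0)),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


class ParallelBlock(nn.Module):
    """Parallel connection: a grouped 3D convolution splits the channels into
    parallel paths cheaply, and a 1x1x1 convolution aggregates (fuses) them."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.grouped = nn.Conv3d(channels, channels, kernel_size=3, padding=1,
                                 groups=groups)
        self.aggregate = nn.Conv3d(channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection is an assumption, not taken from the paper.
        return x + self.act(self.aggregate(self.act(self.grouped(x))))


if __name__ == "__main__":
    volume = torch.randn(1, 16, 24, 48, 48)  # (batch, channels, depth, height, width)
    out = ParallelBlock(16)(QueueModule(16)(volume))
    print(out.shape)  # torch.Size([1, 16, 24, 48, 48])
```

In this reading, the two plane-wise kernels need 18*C^2 weights instead of the 27*C^2 of a dense 3x3x3 kernel while every convolution still mixes all channels (the "full channel fusion" the abstract contrasts with depthwise designs); the authors' exact decomposition may differ.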
Related papers
- Spatiotemporal Modeling Encounters 3D Medical Image Analysis: Slice-Shift UNet with Multi-View Fusion [0.0]
We propose a new 2D-based model dubbed Slice SHift UNet, which encodes three-dimensional features at the computational complexity of a 2D CNN.
More precisely, multi-view features are collaboratively learned by performing 2D convolutions along the three planes of a volume.
The effectiveness of our approach is validated on the Multi-Modality Abdominal Multi-Organ Segmentation (AMOS) and Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) datasets.
arXiv Detail & Related papers (2023-07-24T14:53:23Z)
- SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles the challenge of combining SO(3) equivariance with network binarization by designing a general framework for constructing 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a favorable trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art performance in accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction (GLEAM), an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- MNet: Rethinking 2D/3D Networks for Anisotropic Medical Image Segmentation [13.432274819028505]
A novel mesh network (MNet) is proposed to balance the spatial representation across axes via learning.
Comprehensive experiments are performed on four public datasets (CT and MR).
arXiv Detail & Related papers (2022-05-10T12:39:08Z)
- 3DVNet: Multi-View Depth Prediction and Volumetric Refinement [68.68537312256144]
3DVNet is a novel multi-view stereo (MVS) depth-prediction method.
Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions.
We show that our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics.
arXiv Detail & Related papers (2021-12-01T00:52:42Z)
- DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers [105.74546828182834]
We propose a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs with diverse difficulty levels.
We present the dynamic slimmable network (DS-Net) and the dynamic slice-able network (DS-Net++), which adjust the filter numbers of CNNs and multiple dimensions in both CNNs and Transformers in an input-dependent manner (a minimal sketch of the slicing idea appears after this list).
arXiv Detail & Related papers (2021-09-21T09:57:21Z)
- EDNet: Efficient Disparity Estimation with Cost Volume Combination and Attention-based Spatial Residual [17.638034176859932]
Existing disparity estimation works mostly leverage the 4D concatenation volume and construct a very deep 3D convolutional neural network (CNN) for disparity regression.
In this paper, we propose a network named EDNet for efficient disparity estimation.
Experiments on the Scene Flow and KITTI datasets show that EDNet outperforms previous 3D CNN-based works.
arXiv Detail & Related papers (2020-10-26T04:49:44Z)
- The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism [3.4377970608678314]
We present scalable hybrid-parallel algorithms for training large-scale 3D convolutional neural networks.
We evaluate our proposed training algorithms with two challenging 3D CNNs, CosmoFlow and 3D U-Net.
arXiv Detail & Related papers (2020-07-25T05:06:06Z)
- Dense Hybrid Recurrent Multi-view Stereo Net with Dynamic Consistency Checking [54.58791377183574]
Our novel hybrid recurrent multi-view stereo net consists of two core modules: 1) a light DRENet (Dense Reception Expanded) module that extracts dense feature maps of the original size with multi-scale context information, and 2) an HU-LSTM (Hybrid U-LSTM) that regularizes the 3D matching volume into a predicted depth map.
Our method achieves performance competitive with the state-of-the-art method while dramatically reducing memory consumption, requiring only 19.4% of R-MVSNet's memory.
arXiv Detail & Related papers (2020-07-21T14:59:59Z)
- LAMP: Large Deep Nets with Automated Model Parallelism for Image Segmentation [13.933491086186809]
We introduce Large deep 3D ConvNets with Automated Model Parallelism (LAMP).
It is feasible to train large deep 3D ConvNets with a large input patch, even the whole image.
Experiments demonstrate that, facilitated by the automated model parallelism, the segmentation accuracy can be improved.
arXiv Detail & Related papers (2020-06-22T19:20:35Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces a significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
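As noted in the DS-Net++ entry above, the following is a minimal, hypothetical sketch of the dynamic weight slicing idea: for an easy input only the first k filters of a convolution are sliced out and used, while a hard input gets the full width. The difficulty-aware router that chooses k and the paper's training scheme are omitted, and the class and parameter names here are invented for illustration.

```python
# Hypothetical sketch of dynamic weight slicing (see the DS-Net++ entry above).
# Not the authors' code: this only shows how a contiguous slice of the filters
# can be used at inference time; the router that picks the width is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SliceableConv2d(nn.Module):
    """A 2D convolution whose output width can be sliced per input."""

    def __init__(self, in_channels: int, max_out_channels: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out_channels, in_channels, 3, 3) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out_channels))

    def forward(self, x: torch.Tensor, out_channels: int) -> torch.Tensor:
        # Use only the first `out_channels` filters (a contiguous weight slice).
        return F.conv2d(x, self.weight[:out_channels], self.bias[:out_channels], padding=1)


if __name__ == "__main__":
    conv = SliceableConv2d(in_channels=3, max_out_channels=64)
    x = torch.randn(1, 3, 32, 32)
    easy = conv(x, out_channels=16)  # "easy" input: a quarter of the width
    hard = conv(x, out_channels=64)  # "hard" input: full width
    print(easy.shape, hard.shape)    # (1, 16, 32, 32) and (1, 64, 32, 32)
```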