3D U-NetR: Low Dose Computed Tomography Reconstruction via Deep Learning
and 3 Dimensional Convolutions
- URL: http://arxiv.org/abs/2105.14130v1
- Date: Fri, 28 May 2021 22:37:50 GMT
- Title: 3D U-NetR: Low Dose Computed Tomography Reconstruction via Deep Learning
and 3 Dimensional Convolutions
- Authors: Doga Gunduzalp, Batuhan Cengiz, Mehmet Ozan Unal, Isa Yildirim
- Abstract summary: 3D U-NetR captures medically critical visual details that cannot be visualized by a 2D network.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a novel deep learning-based reconstruction
technique that exploits the correlations among all three dimensions by taking
into account the correlation between 2-dimensional low-dose CT slices. Sparse
or noisy sinograms are back-projected to the image domain with the FBP
operation, and a denoising step is then applied with a U-Net-like
3-dimensional network called 3D U-NetR. The proposed network is trained on
synthetic and real chest CT images, and a 2D U-Net is trained on the same
dataset to demonstrate the importance of the third dimension. The proposed
network shows better quantitative performance in terms of SSIM and PSNR. More
importantly, 3D U-NetR captures medically critical visual details that cannot
be visualized by the 2D network.
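The pipeline described in the abstract (FBP back-projection followed by 3D denoising) hinges on 3D convolutions mixing information across neighbouring slices. Below is a minimal NumPy sketch of one such 3D convolution; `conv3d` is a hypothetical helper, and the averaging kernel merely stands in for the learned filters of 3D U-NetR:

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive 'valid' 3D convolution, illustrating how a 3D kernel
    mixes information across neighbouring CT slices."""
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

# A 3x3x3 averaging kernel: each output voxel depends on the slices
# above and below it, which a slice-by-slice 2D kernel cannot see.
volume = np.random.rand(8, 16, 16)   # toy FBP-reconstructed slice stack
kernel = np.ones((3, 3, 3)) / 27.0
denoised = conv3d(volume, kernel)
print(denoised.shape)                # (6, 14, 14)
```

A 2D U-Net would apply a `(3, 3)` kernel to each slice independently; the third kernel dimension above is what lets the 3D network exploit inter-slice correlation.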
Related papers
- Swap-Net: A Memory-Efficient 2.5D Network for Sparse-View 3D Cone Beam CT Reconstruction [13.891441371598546]
Reconstructing 3D cone beam computed tomography (CBCT) images from a limited set of projections is an inverse problem in many imaging applications.
This paper proposes Swap-Net, a memory-efficient 2.5D network for sparse-view 3D CBCT image reconstruction.
arXiv Detail & Related papers (2024-09-29T08:36:34Z)
- SeMLaPS: Real-time Semantic Mapping with Latent Prior Networks and Quasi-Planar Segmentation [53.83313235792596]
We present a new methodology for real-time semantic mapping from RGB-D sequences.
It combines a 2D neural network and a 3D network based on a SLAM system with 3D occupancy mapping.
Our system achieves state-of-the-art semantic mapping quality among 2D-3D network-based systems.
arXiv Detail & Related papers (2023-06-28T22:36:44Z)
- Decomposing 3D Neuroimaging into 2+1D Processing for Schizophrenia Recognition [25.80846093248797]
We propose to process the 3D data with a 2+1D framework so that we can exploit powerful 2D Convolutional Neural Networks (CNNs) pre-trained on the huge ImageNet dataset for 3D neuroimaging recognition.
Specifically, 3D volumes of Magnetic Resonance Imaging (MRI) metrics are decomposed to 2D slices according to neighboring voxel positions.
Global pooling is applied to remove redundant information as the activation patterns are sparsely distributed over feature maps.
Channel-wise and slice-wise convolutions are proposed to aggregate the contextual information in the third dimension unprocessed by the 2D CNN model.
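The 2+1D idea above can be sketched in NumPy: a per-slice feature extractor (here just global average pooling, standing in for a pretrained 2D CNN) followed by a slice-wise 1D convolution along the third dimension. `slice_features` and `slicewise_conv` are hypothetical names, not the paper's code:

```python
import numpy as np

def slice_features(vol):
    """Stand-in for a pretrained 2D CNN: one scalar feature per slice.
    Here we just global-average-pool each 2D slice."""
    return vol.mean(axis=(1, 2))              # shape: (num_slices,)

def slicewise_conv(feats, kernel):
    """1D convolution along the slice axis: aggregates the third
    dimension that the per-slice 2D model never saw."""
    k = len(kernel)
    return np.array([np.dot(feats[i:i+k], kernel)
                     for i in range(len(feats) - k + 1)])

vol = np.random.rand(10, 32, 32)              # toy MRI volume, 10 slices
feats = slice_features(vol)                   # 2D stage, per slice
agg = slicewise_conv(feats, np.array([0.25, 0.5, 0.25]))  # +1D stage
print(agg.shape)                              # (8,)
```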
arXiv Detail & Related papers (2022-11-21T15:22:59Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolution neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when only a few propagation steps are used.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis [0.0]
We present a simple yet effective 2D method to handle 3D data while efficiently embedding the 3D knowledge during training.
Our method generates a super image by stitching the slices of the 3D image side by side.
While attaining results equal, if not superior, to those of 3D networks using only their 2D counterparts, model complexity is reduced by around threefold.
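The slice-stitching step can be illustrated in NumPy; `to_super_image` is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def to_super_image(volume, grid_cols):
    """Stitch the slices of a 3D volume side by side into one 2D image,
    so an ordinary 2D network can see all slices at once."""
    d, h, w = volume.shape
    grid_rows = -(-d // grid_cols)            # ceiling division
    padded = np.zeros((grid_rows * grid_cols, h, w))
    padded[:d] = volume                       # zero-pad missing slices
    rows = [np.hstack(padded[r * grid_cols:(r + 1) * grid_cols])
            for r in range(grid_rows)]
    return np.vstack(rows)

vol = np.random.rand(8, 16, 16)               # 8 slices of 16x16
super_img = to_super_image(vol, grid_cols=4)  # 2x4 grid of slices
print(super_img.shape)                        # (32, 64)
```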
arXiv Detail & Related papers (2022-05-05T09:59:03Z)
- 3DVNet: Multi-View Depth Prediction and Volumetric Refinement [68.68537312256144]
3DVNet is a novel multi-view stereo (MVS) depth-prediction method.
Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions.
We show that our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics.
arXiv Detail & Related papers (2021-12-01T00:52:42Z)
- VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction [71.83308989022635]
In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results.
Our network operates in two steps: 1) local computation of depth maps with a deep MVS technique, and 2) fusion of the depth maps and image features to build a single TSDF volume.
In order to improve the matching performance between images acquired from very different viewpoints, we introduce a rotation-invariant 3D convolution kernel called PosedConv.
arXiv Detail & Related papers (2021-08-19T11:33:58Z)
- R2U3D: Recurrent Residual 3D U-Net for Lung Segmentation [17.343802171952195]
We propose a novel model, namely, Recurrent Residual 3D U-Net (R2U3D), for the 3D lung segmentation task.
In particular, the proposed model integrates 3D convolution into the Recurrent Residual Neural Network based on U-Net.
The proposed R2U3D network is trained on the publicly available dataset LUNA16 and it achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-05-05T19:17:14Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that leverages 3D features extracted from a large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
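The distillation step can be sketched as a simple feature-matching loss; the per-channel `normalize` below is only a rough stand-in for the paper's two-stage dimension normalization, and both function names are hypothetical:

```python
import numpy as np

def normalize(feat):
    """Per-channel standardisation of an H x W x C feature map, a
    simple stand-in for the two-stage dimension normalization."""
    mu = feat.mean(axis=(0, 1), keepdims=True)
    sigma = feat.std(axis=(0, 1), keepdims=True) + 1e-8
    return (feat - mu) / sigma

def distillation_loss(feat2d, feat3d_proj):
    """MSE between the 2D student's features and 3D teacher features
    projected onto the same pixels (a common distillation choice; the
    paper's exact loss may differ)."""
    return np.mean((feat2d - feat3d_proj) ** 2)

feat2d = np.random.rand(16, 16, 8)   # H x W x C student features
feat3d = np.random.rand(16, 16, 8)   # teacher features at same pixels
loss = distillation_loss(normalize(feat2d), normalize(feat3d))
```

Minimising such a loss pushes the 2D network's features toward the geometry-aware features of the 3D teacher, which is the core of the 3D-to-2D distillation idea.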
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- Learning Joint 2D-3D Representations for Depth Completion [90.62843376586216]
We design a simple yet effective neural network block that learns to extract joint 2D and 3D features.
Specifically, the block consists of two domain-specific sub-networks that apply 2D convolution on image pixels and continuous convolution on 3D points.
arXiv Detail & Related papers (2020-12-22T22:58:29Z)
- Efficient embedding network for 3D brain tumor segmentation [0.33727511459109777]
In this paper, we investigate a way to transfer the performance of a two-dimensional classification network to three-dimensional semantic segmentation of brain tumors.
As the input data is in 3D, the first layers of the encoder are devoted to the reduction of the third dimension in order to fit the input of the EfficientNet network.
Experimental results on validation and test data from the BraTS 2020 challenge demonstrate that the proposed method achieves promising performance.
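The depth-reduction idea can be sketched in NumPy, with repeated pairwise slice averaging standing in for the strided 3D encoder layers that squeeze the third dimension before the 2D backbone (EfficientNet in the paper) takes over; `reduce_depth` is a hypothetical helper:

```python
import numpy as np

def reduce_depth(volume, steps):
    """Collapse the third dimension by repeated pairwise pooling, a
    rough stand-in for the strided 3D layers that shrink the depth
    axis until the volume fits a 2D network's input."""
    v = volume
    for _ in range(steps):
        d = v.shape[0] // 2 * 2               # drop a trailing odd slice
        v = 0.5 * (v[0:d:2] + v[1:d:2])       # average adjacent slices
    return v

vol = np.random.rand(8, 64, 64)               # 3D input, depth 8
flat = reduce_depth(vol, steps=3)             # depth 8 -> 4 -> 2 -> 1
print(flat.shape)                             # (1, 64, 64)
```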
arXiv Detail & Related papers (2020-11-22T16:17:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.