A three-dimensional dual-domain deep network for high-pitch and sparse
helical CT reconstruction
- URL: http://arxiv.org/abs/2201.02309v1
- Date: Fri, 7 Jan 2022 03:26:15 GMT
- Title: A three-dimensional dual-domain deep network for high-pitch and sparse
helical CT reconstruction
- Authors: Wei Wang, Xiang-Gen Xia, Chuanjiang He, Zemin Ren and Jian Lu
- Abstract summary: We propose a new GPU implementation of the Katsevich algorithm for helical CT reconstruction.
Our implementation divides the sinograms and reconstructs the CT images pitch by pitch.
By embedding our implementation into the network, we propose an end-to-end deep network for high-pitch helical CT reconstruction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a new GPU implementation of the Katsevich algorithm
for helical CT reconstruction. Our implementation divides the sinograms and
reconstructs the CT images pitch by pitch. By utilizing the periodic properties
of the parameters of the Katsevich algorithm, our method needs to calculate
these parameters only once for all the pitches, which lowers the GPU-memory
burden and makes the method well suited for deep learning. By embedding our implementation
into the network, we propose an end-to-end deep network for high-pitch
helical CT reconstruction with sparse detectors. Since our network utilizes the
features extracted from both sinograms and CT images, it can simultaneously
reduce the streak artifacts caused by the sparsity of sinograms and preserve
fine details in the CT images. Experiments show that our network outperforms
related methods in both subjective and objective evaluations.
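The pitch-by-pitch scheme in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual GPU implementation: the `precompute_pitch_params` and `reconstruct_pitch` helpers are hypothetical stand-ins for the Katsevich filtering/backprojection steps, and the angular weights are placeholder values. The point it shows is the memory-saving structure: because the algorithm's parameters are periodic in the rotation angle, they are computed once and reused for every pitch segment.

```python
import numpy as np

def precompute_pitch_params(views_per_pitch):
    # Hypothetical stand-ins for the Katsevich filtering/backprojection
    # parameters, which are periodic in the rotation angle and therefore
    # identical for every pitch segment.
    angles = np.linspace(0.0, 2 * np.pi, views_per_pitch, endpoint=False)
    return {"angles": angles, "weights": np.cos(angles) ** 2 + 0.5}

def reconstruct_pitch(sino_segment, params):
    # Placeholder for one pitch reconstruction: apply the cached angular
    # weights and integrate over the views of this segment.
    weighted = sino_segment * params["weights"][:, None, None]
    return weighted.sum(axis=0)

def reconstruct_helical(sinogram, views_per_pitch):
    # Divide the full helical sinogram into pitch-sized segments and
    # reconstruct them one by one with the same cached parameters, so
    # memory use is bounded by a single segment rather than the whole scan.
    params = precompute_pitch_params(views_per_pitch)  # computed once
    n_pitches = sinogram.shape[0] // views_per_pitch
    slabs = []
    for p in range(n_pitches):
        seg = sinogram[p * views_per_pitch:(p + 1) * views_per_pitch]
        slabs.append(reconstruct_pitch(seg, params))
    return np.stack(slabs)  # one image slab per pitch

sino = np.random.default_rng(0).standard_normal((4 * 64, 16, 32))
vol = reconstruct_helical(sino, views_per_pitch=64)
print(vol.shape)  # (4, 16, 32): four pitch slabs
```

A differentiable version of the same loop is what allows the reconstruction operator to be embedded as a layer inside an end-to-end network.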
Related papers
- UnWave-Net: Unrolled Wavelet Network for Compton Tomography Image Reconstruction
Compton scatter tomography (CST) presents an interesting alternative to conventional CT.
Deep unrolling networks have demonstrated potential in CT image reconstruction.
UnWave-Net is a novel unrolled wavelet-based reconstruction network.
arXiv Detail & Related papers (2024-06-05T16:10:29Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
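The "TC" stream's use of the CWT to lift a 1D signal into a 2D tensor can be illustrated with a minimal NumPy sketch. The Morlet wavelet and its normalization here are simplified illustrations, not the paper's exact transform:

```python
import numpy as np

def morlet(t, scale, w0=5.0):
    # Real part of a Morlet wavelet at the given scale
    # (normalization simplified for illustration).
    x = t / scale
    return np.cos(w0 * x) * np.exp(-0.5 * x ** 2) / np.sqrt(scale)

def cwt_tensor(signal, scales):
    # Continuous Wavelet Transform: convolve the 1D signal with a bank of
    # scaled wavelets and stack the rows into a 2D (scale x time) tensor,
    # the kind of temporal-frequency representation a "TC" stream consumes.
    n = len(signal)
    t = np.arange(-n // 2, n // 2)
    rows = [np.convolve(signal, morlet(t, s), mode="same") for s in scales]
    return np.stack(rows)

sig = np.sin(2 * np.pi * 0.05 * np.arange(256))
tensor = cwt_tensor(sig, scales=np.arange(1, 33))
print(tensor.shape)  # (32, 256)
```

The resulting tensor can then be processed with ordinary 2D convolutions, which is the appeal of this representation.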
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- Generative Modeling in Sinogram Domain for Sparse-view CT Reconstruction
Radiation dose in computed tomography (CT) examinations can be significantly reduced by decreasing the number of projection views.
Previous deep learning techniques for sparse-view data require sparse-view/full-view CT image pairs to train the network in a supervised manner.
We present a fully unsupervised score-based generative model in sinogram domain for sparse-view CT reconstruction.
arXiv Detail & Related papers (2022-11-25T06:49:18Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder with hash coding is adopted to help the network capture high-frequency details.
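A hash-coded input encoding of this kind can be sketched minimally as follows. This is an illustrative single-level encoder, not NAF's actual implementation: the table size, feature dimension, and hash constants are assumptions (the primes follow Instant-NGP-style spatial hashing), and in practice the table entries are learned parameters.

```python
import numpy as np

TABLE_SIZE = 2 ** 14   # hash-table entries (illustrative)
FEAT_DIM = 2           # learned features per entry (illustrative)

rng = np.random.default_rng(0)
table = rng.standard_normal((TABLE_SIZE, FEAT_DIM)).astype(np.float32)

def hash_corner(ix, iy, iz):
    # Spatial hash of an integer grid corner; the prime constants are
    # illustrative, in the style of Instant-NGP encoders.
    return (ix ^ (iy * 2654435761) ^ (iz * 805459861)) % TABLE_SIZE

def encode(xyz, resolution=64):
    # Look up the 8 corners of the enclosing voxel and trilinearly
    # interpolate their features, giving the MLP a high-frequency-capable
    # embedding for each 3D coordinate in [0, 1)^3.
    g = xyz * resolution
    i0 = np.floor(g).astype(int)
    f = g - i0
    feat = np.zeros(FEAT_DIM, dtype=np.float32)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0])
                     * (f[1] if dy else 1 - f[1])
                     * (f[2] if dz else 1 - f[2]))
                feat += w * table[hash_corner(i0[0] + dx,
                                              i0[1] + dy,
                                              i0[2] + dz)]
    return feat

print(encode(np.array([0.3, 0.7, 0.1])).shape)  # (2,)
```

A small fully-connected network then maps this embedding to an attenuation coefficient at the queried point.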
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- A Lightweight Dual-Domain Attention Framework for Sparse-View CT Reconstruction
We design a novel lightweight network called CAGAN, and propose a dual-domain reconstruction pipeline for parallel beam sparse-view CT.
The application of Shuffle Blocks reduces the parameters by a quarter without sacrificing performance.
Experiments indicate that the CAGAN strikes an excellent balance between model complexity and performance, and our pipeline outperforms the DD-Net and the DuDoNet.
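The core operation behind ShuffleNet-style blocks of this kind is the channel shuffle, which can be written in a few lines. This is a generic sketch of the operation, not CAGAN's specific block design:

```python
import numpy as np

def channel_shuffle(x, groups):
    # Channel shuffle: after grouped convolutions, interleave channels
    # across groups so information can mix between groups without the
    # parameter cost of a full (dense) convolution.
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

x = np.arange(8).reshape(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).ravel().tolist())
# [0, 4, 1, 5, 2, 6, 3, 7]
```

Grouped convolutions alone would keep each group's channels isolated; the shuffle restores cross-group information flow, which is how such blocks cut parameters without hurting accuracy.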
arXiv Detail & Related papers (2022-02-19T14:04:59Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images
We construct a novel interpretable dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction
In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results.
Our network operates in two steps: 1) local computation of depth maps with a deep MVS technique, and 2) fusion of the depth maps and image features to build a single TSDF volume.
In order to improve the matching performance between images acquired from very different viewpoints, we introduce a rotation-invariant 3D convolution kernel called PosedConv.
arXiv Detail & Related papers (2021-08-19T11:33:58Z)
- Sparse-View Spectral CT Reconstruction Using Deep Learning
We propose an approach for fast reconstruction of sparse-view spectral CT data using a U-Net convolutional neural network architecture with multi-channel input and output.
Our method is fast at run-time, and because the internal convolutions are shared between the channels, the computational load increases only at the first and last layers.
arXiv Detail & Related papers (2020-11-30T14:36:23Z)
- Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks
We present a novel method for multi-view depth estimation from a single video.
Our method achieves temporally coherent depth estimation results by using a novel Epipolar Spatio-Temporal (EST) transformer.
To reduce the computational cost, inspired by recent Mixture-of-Experts models, we design a compact hybrid network.
arXiv Detail & Related papers (2020-11-26T04:04:21Z)
- A model-guided deep network for limited-angle computed tomography
We first propose a variational model for the limited-angle computed tomography (CT) image reconstruction and then convert the model into an end-to-end deep network.
Our network tackles both the sinograms and the CT images, and can simultaneously suppress the artifacts caused by the incomplete data.
arXiv Detail & Related papers (2020-08-10T09:42:32Z)
- Depth Completion Using a View-constrained Deep Prior
Recent work has shown that the structure of convolutional neural networks (CNNs) induces a strong prior that favors natural images.
This prior, known as a deep image prior (DIP), is an effective regularizer in inverse problems such as image denoising and inpainting.
We extend the concept of the DIP to depth images. Given color images and noisy, incomplete target depth maps, we reconstruct a restored depth map by using the CNN structure itself as a prior.
arXiv Detail & Related papers (2020-01-21T21:56:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.