A Lightweight Dual-Domain Attention Framework for Sparse-View CT
Reconstruction
- URL: http://arxiv.org/abs/2202.09609v1
- Date: Sat, 19 Feb 2022 14:04:59 GMT
- Title: A Lightweight Dual-Domain Attention Framework for Sparse-View CT
Reconstruction
- Authors: Chang Sun, Ken Deng, Yitong Liu, Hongwen Yang
- Abstract summary: We design a novel lightweight network called CAGAN, and propose a dual-domain reconstruction pipeline for parallel beam sparse-view CT.
The application of Shuffle Blocks reduces the parameters by a quarter without sacrificing performance.
Experiments indicate that CAGAN strikes an excellent balance between model complexity and performance, and our pipeline outperforms DD-Net and DuDoNet.
- Score: 6.553233856627479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computed Tomography (CT) plays an essential role in clinical diagnosis. Due
to the adverse effects of radiation on patients, the radiation dose should be kept as
low as possible. Sparse sampling is an effective way to reduce the dose, but it leads
to severe artifacts in the reconstructed CT image, so sparse-view CT image
reconstruction has been a prevailing and challenging research area. With the
popularity of mobile devices, the demand for lightweight, real-time networks is
growing rapidly. In this paper, we design a novel lightweight network called CAGAN
and propose a dual-domain reconstruction pipeline for parallel-beam sparse-view CT.
CAGAN is an adversarial auto-encoder that incorporates Coordinate Attention units to
preserve the spatial information of features. In addition, the use of Shuffle Blocks
reduces the number of parameters by a quarter without sacrificing performance. In the
Radon domain, the first CAGAN learns the mapping from interpolated sparse-view data to
fringe-free projection data. After the restored Radon data are reconstructed into an
image, the image is fed into a second CAGAN trained to recover fine details, yielding
a high-quality result. Experiments indicate that CAGAN strikes an excellent balance
between model complexity and performance, and that our pipeline outperforms DD-Net and
DuDoNet.
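The abstract above describes a two-stage, dual-domain pipeline: restore the sparse-view sinogram in the Radon domain, reconstruct with filtered back projection (FBP), then refine the image in the image domain. The following is a minimal sketch of that data flow using scikit-image's parallel-beam radon/iradon operators; `restore_sinogram` and `refine_image` are identity placeholders standing in for the two trained CAGANs, and the angular linear interpolation step is an assumption about how the sparse sinogram is upsampled before Radon-domain restoration.

```python
# Minimal sketch of the dual-domain sparse-view CT pipeline (not the paper's code).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize


def interpolate_sinogram(sparse_sino, sparse_angles, dense_angles):
    """Linearly interpolate missing view angles, one detector row at a time."""
    dense = np.empty((sparse_sino.shape[0], dense_angles.size))
    for row in range(sparse_sino.shape[0]):
        dense[row] = np.interp(dense_angles, sparse_angles, sparse_sino[row])
    return dense


def restore_sinogram(sino):
    """Placeholder for the Radon-domain CAGAN (interpolated -> fringe-free data)."""
    return sino


def refine_image(img):
    """Placeholder for the image-domain CAGAN that recovers fine details."""
    return img


phantom = resize(shepp_logan_phantom(), (256, 256))
dense_angles = np.linspace(0.0, 180.0, 720, endpoint=False)
sparse_angles = dense_angles[::12]                        # 60 of 720 views

sparse_sino = radon(phantom, theta=sparse_angles)         # sparse-view acquisition
sino = interpolate_sinogram(sparse_sino, sparse_angles, dense_angles)
sino = restore_sinogram(sino)                             # Radon-domain restoration
recon = iradon(sino, theta=dense_angles)                  # FBP reconstruction
recon = refine_image(recon)                               # image-domain refinement
```

In the paper, the two placeholder functions are learned adversarial auto-encoders with Coordinate Attention and Shuffle Blocks; the sketch only illustrates where each network sits in the pipeline.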
Related papers
- CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
arXiv Detail & Related papers (2024-06-21T08:38:30Z) - UnWave-Net: Unrolled Wavelet Network for Compton Tomography Image Reconstruction [0.0]
Compton scatter tomography (CST) presents an interesting alternative to conventional CT.
Deep unrolling networks have demonstrated potential in CT image reconstruction.
UnWave-Net is a novel unrolled wavelet-based reconstruction network.
arXiv Detail & Related papers (2024-06-05T16:10:29Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography
Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - Leveraging Neural Radiance Fields for Uncertainty-Aware Visual
Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, many of the rendered data are polluted by artifacts or only contain minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z) - REGAS: REspiratory-GAted Synthesis of Views for Multi-Phase CBCT
Reconstruction from a single 3D CBCT Acquisition [75.64791080418162]
REGAS proposes a self-supervised method to synthesize the undersampled tomographic views and mitigate aliasing artifacts in reconstructed images.
To address the large memory cost of deep neural networks on high resolution 4D data, REGAS introduces a novel Ray Path Transformation (RPT) that allows for distributed, differentiable forward projections.
arXiv Detail & Related papers (2022-08-17T03:42:19Z) - A three-dimensional dual-domain deep network for high-pitch and sparse
helical CT reconstruction [13.470588027095264]
We propose a new GPU implementation of the Katsevich algorithm for helical CT reconstruction.
Our implementation divides the sinograms and reconstructs the CT images pitch by pitch.
By embedding our implementation into the network, we propose an end-to-end deep network for high-pitch helical CT reconstruction.
arXiv Detail & Related papers (2022-01-07T03:26:15Z) - Self-Attention Generative Adversarial Network for Iterative
Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal
Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z) - DuDoTrans: Dual-Domain Transformer Provides More Attention for Sinogram
Restoration in Sparse-View CT Reconstruction [13.358197688568463]
Ionizing radiation in the imaging process induces irreversible injury.
Iterative models have been proposed to alleviate the artifacts that appear in sparse-view CT images, but their cost is too expensive.
We propose the Dual-Domain Transformer (DuDoTrans) to reconstruct CT images with both the enhanced and raw sinograms.
arXiv Detail & Related papers (2021-11-21T10:41:07Z) - High-quality Low-dose CT Reconstruction Using Convolutional Neural
Networks with Spatial and Channel Squeeze and Excitation [15.05273611411106]
We present a High-Quality Imaging network (HQINet) for CT image reconstruction from low-dose computed tomography (CT) acquisitions.
HQINet is a convolutional encoder-decoder architecture, where the encoder is used to extract spatial and temporal information from three contiguous slices.
arXiv Detail & Related papers (2021-04-01T08:15:53Z)
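The "Spatial and Channel Squeeze and Excitation" recalibration named in the HQINet title is commonly implemented as a concurrent scSE block: per-channel gates computed from global pooling plus per-pixel gates computed by a 1x1 convolution. The following PyTorch sketch shows a generic block of that kind; it is not HQINet's exact layer, and the channel count and reduction ratio are illustrative.

```python
import torch
import torch.nn as nn


class SCSEBlock(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (generic sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel SE: global average pool -> bottleneck MLP -> per-channel gates
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial SE: 1x1 convolution -> per-pixel gates
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum of the channel-recalibrated and spatially-recalibrated features
        return x * self.cse(x) + x * self.sse(x)


# Example: recalibrate a batch of 64-channel feature maps
features = torch.randn(2, 64, 128, 128)
out = SCSEBlock(64)(features)
print(out.shape)  # torch.Size([2, 64, 128, 128])
```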