MVMS-RCN: A Dual-Domain Unfolding CT Reconstruction with Multi-sparse-view and Multi-scale Refinement-correction
- URL: http://arxiv.org/abs/2405.17141v1
- Date: Mon, 27 May 2024 13:01:25 GMT
- Title: MVMS-RCN: A Dual-Domain Unfolding CT Reconstruction with Multi-sparse-view and Multi-scale Refinement-correction
- Authors: Xiaohong Fan, Ke Chen, Huaming Yi, Yin Yang, Jianping Zhang
- Abstract summary: Sparse-view CT imaging reduces the number of projection views to lower the radiation dose.
Most existing deep learning (DL) and deep unfolding sparse-view CT reconstruction methods do not fully use the projection data.
This paper aims to use mathematical ideas to design optimal DL imaging algorithms for sparse-view tomography reconstruction.
- Score: 9.54126979075279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: X-ray Computed Tomography (CT) is one of the most important diagnostic imaging techniques in clinical applications. Sparse-view CT imaging reduces the number of projection views to lower the radiation dose and alleviate the potential risk of radiation exposure. Most existing deep learning (DL) and deep unfolding sparse-view CT reconstruction methods: 1) do not fully use the projection data; 2) do not always link their architecture designs to a mathematical theory; 3) do not flexibly deal with multi-sparse-view reconstruction assignments. This paper aims to use mathematical ideas to design optimal DL imaging algorithms for sparse-view tomography reconstruction. We propose a novel dual-domain deep unfolding unified framework that offers a great deal of flexibility for multi-sparse-view CT reconstruction with different sampling views through a single model. This framework combines the theoretical advantages of model-based methods with the superior reconstruction performance of DL-based methods, resulting in the expected generalizability of DL. We propose a refinement module that utilizes the unfolding projection domain to refine full-sparse-view projection errors, as well as an image-domain correction module that distills multi-scale geometric error corrections to reconstruct sparse-view CT. This provides us with a new way to explore the potential of projection information and a new perspective on designing network architectures. All parameters of our proposed framework are learnable end to end, and our method has the potential to be applied to plug-and-play reconstruction. Extensive experiments demonstrate that our framework is superior to other existing state-of-the-art methods. Our source code is available at https://github.com/fanxiaohong/MVMS-RCN.
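The refinement-correction loop described in the abstract can be illustrated with a deliberately tiny sketch, not the paper's actual architecture: a small matrix stands in for the CT projector, and `refine_projection` / `correct_image` are hypothetical placeholders (simple damping here) for the learned projection-domain refinement and multi-scale image-domain correction modules.

```python
# Toy sketch of one unrolled dual-domain iteration. The projector A,
# the damping factors, and the module names are illustrative assumptions.

def matvec(A, x):
    # Forward projection: sinogram = A @ x
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_T(A, y):
    # Back-projection: image = A^T @ y
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def refine_projection(residual):
    # Hypothetical learned projection-domain refinement; here a damped residual.
    return [0.5 * r for r in residual]

def correct_image(x):
    # Hypothetical multi-scale image-domain correction; here a light shrinkage.
    return [0.9 * xi for xi in x]

def unfolding_step(A, y, x, step=0.1):
    # 1) projection-domain residual y - A x, passed through the refinement module
    residual = [yi - axi for yi, axi in zip(y, matvec(A, x))]
    refined = refine_projection(residual)
    # 2) gradient-style update back-projected into the image domain
    x_new = [xi + step * g for xi, g in zip(x, matvec_T(A, refined))]
    # 3) image-domain correction module
    return correct_image(x_new)

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy 3-view projector
y = matvec(A, [1.0, 2.0])                  # simulated sparse-view sinogram
x = [0.0, 0.0]
for _ in range(200):
    x = unfolding_step(A, y, x)
```

In the real framework both modules are deep networks trained end to end; the fixed damping above only mimics their role of shrinking the projection residual while regularizing the image.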
Related papers
- CT-SDM: A Sampling Diffusion Model for Sparse-View CT Reconstruction across All Sampling Rates [16.985836345715963]
Sparse-view X-ray computed tomography has emerged as a contemporary technique to reduce radiation dose.
Recent studies utilizing deep learning methods have made promising progress in removing artifacts for Sparse-View Computed Tomography (SVCT).
Our study proposes an adaptive reconstruction method to achieve high-performance SVCT reconstruction at any sampling rate.
arXiv Detail & Related papers (2024-09-03T03:06:15Z) - CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
arXiv Detail & Related papers (2024-06-21T08:38:30Z) - APRF: Anti-Aliasing Projection Representation Field for Inverse Problem
in Imaging [74.9262846410559]
Sparse-view Computed Tomography (SVCT) reconstruction is an ill-posed inverse problem in imaging.
Recent works use Implicit Neural Representations (INRs) to build the coordinate-based mapping between sinograms and CT images.
We propose a self-supervised SVCT reconstruction method -- Anti-Aliasing Projection Representation Field (APRF)
APRF can build the continuous representation between adjacent projection views via the spatial constraints.
arXiv Detail & Related papers (2023-07-11T14:04:12Z) - MEPNet: A Model-Driven Equivariant Proximal Network for Joint
Sparse-View Reconstruction and Metal Artifact Reduction in CT Images [29.458632068296854]
We propose a model-driven equivariant proximal network, called MEPNet.
MEPNet is optimization-inspired and has a clear working mechanism.
We will release the code at https://github.com/hongwang01/MEPNet.
arXiv Detail & Related papers (2023-06-25T15:50:11Z) - Learned Alternating Minimization Algorithm for Dual-domain Sparse-View
CT Reconstruction [6.353014736326698]
We propose a novel Learned Alternating Minimization Algorithm (LAMA) for dual-domain sparse-view CT image reconstruction.
LAMA is provably convergent for reliable reconstructions.
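The dual-domain idea behind a learned alternating minimization scheme can be sketched on a toy quadratic problem: variables in the sinogram domain (z) and the image domain (x) are updated in turn. The objective, step size, and closed-form z-update below are illustrative assumptions, not LAMA's learned scheme.

```python
# Toy alternating minimization over image x and sinogram z for
# min_x,z ||A x - z||^2 + ||z - y||^2, with A a stand-in projector.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_T(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy 3-view projector
y = matvec(A, [1.0, 2.0])                  # consistent toy sinogram
x = [0.0, 0.0]
for _ in range(300):
    # sinogram-domain step: minimizer of ||A x - z||^2 + ||z - y||^2 in z
    z = [(axi + yi) / 2.0 for axi, yi in zip(matvec(A, x), y)]
    # image-domain step: one gradient step on ||A x - z||^2
    grad = matvec_T(A, [axi - zi for axi, zi in zip(matvec(A, x), z)])
    x = [xi - 0.2 * g for xi, g in zip(x, grad)]
```

In a learned variant the hand-written steps would be replaced by trained operators; the convergence guarantee is what distinguishes such schemes from generic black-box networks.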
arXiv Detail & Related papers (2023-06-05T07:29:18Z) - Generative Modeling in Sinogram Domain for Sparse-view CT Reconstruction [12.932897771104825]
Radiation dose in computed tomography (CT) examinations can be significantly reduced by decreasing the number of projection views.
Previous deep learning techniques with sparse-view data require sparse-view/full-view CT image pairs to train the network in a supervised manner.
We present a fully unsupervised score-based generative model in sinogram domain for sparse-view CT reconstruction.
arXiv Detail & Related papers (2022-11-25T06:49:18Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI
Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy
CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
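A mixed-norm regularizer of the kind mentioned above is commonly handled via its proximal operator; the l2,1 case shrinks each feature's norm across energy channels so that the low- and high-energy images share a sparsity pattern. The sketch below shows that standard proximal step on toy data; it is a generic illustration, not MCAOL's specific solver.

```python
import math

def prox_l21(rows, tau):
    # Proximal operator of the l2,1 mixed norm: each row holds one feature's
    # (low-energy, high-energy) pair; shrinking the row's l2 norm couples the
    # two channels so they become zero (or stay active) together.
    out = []
    for row in rows:
        norm = math.sqrt(sum(v * v for v in row))
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        out.append([scale * v for v in row])
    return out

pairs = [[3.0, 4.0], [0.1, 0.1], [0.0, 2.0]]  # toy (low, high) energy features
shrunk = prox_l21(pairs, 1.0)
```

Rows with small joint energy are zeroed entirely, while strong shared features are only attenuated, which is the spatial-feature coupling the mixed norm is meant to exploit.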
arXiv Detail & Related papers (2022-03-10T14:22:54Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal
Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues, and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - LEARN++: Recurrent Dual-Domain Reconstruction Network for Compressed
Sensing CT [17.168584459606272]
The LEARN++ model integrates two parallel and interactive subnetworks to perform image restoration and sinogram inpainting operations on both the image and projection domains simultaneously.
Results show that the proposed LEARN++ model achieves competitive qualitative and quantitative results compared to several state-of-the-art methods in terms of both artifact reduction and detail preservation.
arXiv Detail & Related papers (2020-12-13T07:00:50Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.