DuDoTrans: Dual-Domain Transformer Provides More Attention for Sinogram
Restoration in Sparse-View CT Reconstruction
- URL: http://arxiv.org/abs/2111.10790v1
- Date: Sun, 21 Nov 2021 10:41:07 GMT
- Title: DuDoTrans: Dual-Domain Transformer Provides More Attention for Sinogram
Restoration in Sparse-View CT Reconstruction
- Authors: Ce Wang, Kun Shang, Haimiao Zhang, Qian Li, Yuan Hui, and S. Kevin
Zhou
- Abstract summary: Ionizing radiation in the imaging process induces irreversible injury.
Iterative models have been proposed to alleviate the artifacts that appear in sparse-view CT images, but their computational cost is too high.
We propose the Dual-Domain Transformer (DuDoTrans) to reconstruct CT images from both the enhanced and raw sinograms.
- Score: 13.358197688568463
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Computed Tomography (CT) reconstruction from X-ray sinograms is
necessary for clinical diagnosis, ionizing radiation in the imaging process
induces irreversible injury, driving researchers to study sparse-view CT
reconstruction, that is, recovering a high-quality CT image from a sparse set of
sinogram views. Iterative models have been proposed to alleviate the artifacts
that appear in sparse-view CT images, but their computational cost is too high.
Deep-learning-based methods have since gained prevalence due to their excellent
performance and lower computational cost. However, these methods ignore the
mismatch between the CNN's \textbf{local} feature extraction capability and the
sinogram's \textbf{global} characteristics. To overcome this problem, we propose
the \textbf{Du}al-\textbf{Do}main \textbf{Trans}former (\textbf{DuDoTrans}),
which simultaneously restores informative sinograms via the long-range
dependency modeling capability of the Transformer and reconstructs the CT image
from both the enhanced and raw sinograms. With this design, reconstruction
results on the NIH-AAPM and COVID-19 datasets experimentally confirm the
effectiveness and generalizability of DuDoTrans with fewer parameters. Extensive
experiments also demonstrate its robustness under different noise levels in
sparse-view CT reconstruction. The code and models are publicly available at
https://github.com/DuDoTrans/CODE
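
For readers skimming the abstract, the dual-domain idea can be pictured as: a Transformer restores the sparse-view sinogram, a differentiable back-projection maps both the raw and the enhanced sinogram into the image domain, and a small image-domain network fuses the two. The sketch below is a minimal, hypothetical PyTorch outline of that data flow, not the released DuDoTrans code; the module names, the use of `nn.TransformerEncoder` as the sinogram restorer, and the external `fbp` operator are all assumptions.

```python
import torch
import torch.nn as nn

class DuDoStyleNet(nn.Module):
    """Minimal sketch of a dual-domain pipeline: sinogram restoration with a
    Transformer, back-projection of raw + enhanced sinograms, image fusion.
    Hypothetical layout; not the released DuDoTrans implementation."""

    def __init__(self, n_views, n_dets, d_model=256, fbp=None):
        super().__init__()
        self.embed = nn.Linear(n_dets, d_model)            # one token per projection view
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.sino_restorer = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.unembed = nn.Linear(d_model, n_dets)
        self.fbp = fbp                                      # differentiable FBP / back-projection operator (assumed given)
        self.img_refiner = nn.Sequential(                   # fuses the two reconstructions
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, sino):                                # sino: (B, n_views, n_dets)
        tokens = self.embed(sino)
        enhanced = self.unembed(self.sino_restorer(tokens)) # restored sinogram
        img_raw = self.fbp(sino)                            # assumed to return (B, 1, H, W)
        img_enh = self.fbp(enhanced)
        fused = self.img_refiner(torch.cat([img_raw, img_enh], dim=1))
        return fused, enhanced
```

In practice the back-projection operator would come from a differentiable tomography library, and both the restored sinogram and the fused image would typically receive supervision.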
Related papers
- Differentiable Gaussian Representation for Incomplete CT Reconstruction [20.390232991700977]
We propose a novel Gaussian Representation for Incomplete CT Reconstruction (GRCT) without using any neural networks or full-dose CT data.
Our method can be applied to multiple views and angles without changing the architecture.
Experiments on multiple datasets and settings demonstrate significant improvements in reconstruction quality metrics and high efficiency.
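
As a toy illustration of what a Gaussian representation fitted directly to projections (with no neural network) can look like, the snippet below renders a 2D image from a set of isotropic Gaussians and optimizes their parameters against two orthogonal axis-sum "projections". The phantom, the projector, and all parameter choices are stand-ins; this is not GRCT's formulation.

```python
import torch

# Toy example: represent a 2D image as K isotropic Gaussians and fit them to two
# orthogonal parallel-beam "projections" (axis sums stand in for real CT geometry).
H = W = 64
K = 50
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")

mu = torch.rand(K, 2, requires_grad=True)               # Gaussian centers
log_sigma = torch.full((K,), -2.0, requires_grad=True)  # log of Gaussian widths
amp = torch.rand(K, requires_grad=True)                 # Gaussian amplitudes

def render():
    d2 = (xs[None] - mu[:, 0, None, None]) ** 2 + (ys[None] - mu[:, 1, None, None]) ** 2
    return (amp[:, None, None] * torch.exp(-d2 / (2 * torch.exp(log_sigma)[:, None, None] ** 2))).sum(0)

target_img = torch.zeros(H, W); target_img[20:40, 25:45] = 1.0   # toy phantom
proj_target = (target_img.sum(0), target_img.sum(1))             # two "views"

opt = torch.optim.Adam([mu, log_sigma, amp], lr=1e-2)
for _ in range(500):
    img = render()
    loss = ((img.sum(0) - proj_target[0]) ** 2).mean() + ((img.sum(1) - proj_target[1]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```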
arXiv Detail & Related papers (2024-11-07T16:32:29Z)
- CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
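
As a generic illustration of a coordinate-based projection field, the snippet below fits a small MLP that maps a (view angle, detector position) coordinate to a sinogram value and can then be queried at unmeasured angles; the data and hyperparameters are placeholders, and CoCPF's specific hole-free construction is not reproduced here.

```python
import torch
import torch.nn as nn

# Generic coordinate-based projection field: (angle, detector position) -> sinogram value.
field = nn.Sequential(
    nn.Linear(2, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

# Measured sparse-view sinogram (placeholder values standing in for real data).
angles = torch.linspace(0, torch.pi, 60)          # 60 sparse views
dets = torch.linspace(-1, 1, 128)                 # detector coordinates
measured = torch.rand(60, 128)

coords = torch.stack(torch.meshgrid(angles, dets, indexing="ij"), dim=-1).reshape(-1, 2)
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for _ in range(200):
    pred = field(coords).reshape(60, 128)
    loss = ((pred - measured) ** 2).mean()        # fit the field to the measured views
    opt.zero_grad(); loss.backward(); opt.step()

# After fitting, the field can be queried at unmeasured angles to densify the sinogram.
dense_angles = torch.linspace(0, torch.pi, 720)
```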
arXiv Detail & Related papers (2024-06-21T08:38:30Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
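
Noise2Inverse-style self-supervision trains a CNN to map a reconstruction from one subset of views onto the reconstruction from the complementary subset, and rotational augmentation adds random 90-degree rotations of both. The sketch below shows only such a training step, with the two sub-reconstructions assumed to be precomputed; it is not the RAN2I implementation.

```python
import torch
import torch.nn as nn

# Hypothetical training step in the Noise2Inverse spirit: recon_a and recon_b are
# reconstructions from two disjoint subsets of projection views (assumed precomputed).
net = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def train_step(recon_a, recon_b):             # each (B, 1, H, W)
    k = int(torch.randint(0, 4, (1,)))        # rotational augmentation: random multiple of 90 degrees
    a = torch.rot90(recon_a, k, dims=(-2, -1))
    b = torch.rot90(recon_b, k, dims=(-2, -1))
    loss = ((net(a) - b) ** 2).mean()         # predict one split from the other (no ground truth)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```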
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- APRF: Anti-Aliasing Projection Representation Field for Inverse Problem in Imaging [74.9262846410559]
Sparse-view Computed Tomography (SVCT) reconstruction is an ill-posed inverse problem in imaging.
Recent works use Implicit Neural Representations (INRs) to build the coordinate-based mapping between sinograms and CT images.
We propose a self-supervised SVCT reconstruction method, the Anti-Aliasing Projection Representation Field (APRF).
APRF builds a continuous representation between adjacent projection views via spatial constraints.
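
One way to read "spatial constraints between adjacent projection views" is a smoothness penalty that couples field predictions at neighbouring angles. The snippet below shows such a penalty for a generic coordinate field; the field architecture and the form of the constraint are assumptions, not APRF's actual loss.

```python
import torch
import torch.nn as nn

# Illustrative "adjacent-view" smoothness penalty for a generic projection field.
field = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))

def adjacent_view_penalty(angles, dets, delta=0.01):
    a, d = torch.meshgrid(angles, dets, indexing="ij")
    c0 = torch.stack([a, d], dim=-1).reshape(-1, 2)
    c1 = c0.clone(); c1[:, 0] += delta                  # shift to a neighbouring view angle
    return ((field(c0) - field(c1)) ** 2).mean()        # predictions should vary smoothly across views
```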
arXiv Detail & Related papers (2023-07-11T14:04:12Z)
- Generative Modeling in Sinogram Domain for Sparse-view CT Reconstruction [12.932897771104825]
The radiation dose in computed tomography (CT) examinations can be significantly reduced by intuitively decreasing the number of projection views.
Previous deep learning techniques with sparse-view data require sparse-view/full-view CT image pairs to train the network in a supervised manner.
We present a fully unsupervised score-based generative model in the sinogram domain for sparse-view CT reconstruction.
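
Score-based generative models sample by following a learned score (the gradient of the log-density) with Langevin-type updates, and in the sinogram domain the sampling can be constrained to agree with the measured sparse views. The loop below is a heavily simplified, generic sketch with a placeholder `score_net` and a naive data-consistency step; it is not the paper's sampler.

```python
import torch

def langevin_sample(score_net, measured, view_mask, shape, steps=200, step_size=1e-4):
    """Generic Langevin-style sampling of a full sinogram with a naive
    data-consistency step that re-imposes the measured sparse views.
    measured: full-size sinogram tensor holding the sparse measurements.
    view_mask: boolean mask of measured entries.
    score_net(x) is assumed to return an estimate of grad log p(x)."""
    x = torch.randn(shape)
    for _ in range(steps):
        noise = torch.randn_like(x)
        x = x + step_size * score_net(x) + (2 * step_size) ** 0.5 * noise   # Langevin update
        x[view_mask] = measured[view_mask]                                   # keep measured views fixed
    return x
```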
arXiv Detail & Related papers (2022-11-25T06:49:18Z)
- Anatomically constrained CT image translation for heterogeneous blood vessel segmentation [3.88838725116957]
Anatomical structures in contrast-enhanced CT (ceCT) images can be challenging to segment due to variability in contrast medium diffusion.
To limit the radiation dose, generative models could be used to synthesize one modality, instead of acquiring it.
CycleGAN has attracted particular attention because it alleviates the need for paired data.
We present an extension of CycleGAN that generates high-fidelity images with good structural consistency.
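
The reason CycleGAN needs no paired data is its cycle-consistency term: translating to the other modality and back should reproduce the input. A minimal version of that term is sketched below; the generator names and the plain L1 weighting are placeholders.

```python
import torch

def cycle_consistency_loss(G_ab, G_ba, ct_a, ct_b):
    """L1 cycle loss between two unpaired domains (e.g., ceCT and non-contrast CT).
    G_ab / G_ba are the two generators; names and weighting are illustrative."""
    return (torch.abs(G_ba(G_ab(ct_a)) - ct_a).mean()
            + torch.abs(G_ab(G_ba(ct_b)) - ct_b).mean())
```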
arXiv Detail & Related papers (2022-10-04T16:14:49Z)
- A Lightweight Dual-Domain Attention Framework for Sparse-View CT Reconstruction [6.553233856627479]
We design a novel lightweight network called CAGAN, and propose a dual-domain reconstruction pipeline for parallel beam sparse-view CT.
The application of Shuffle Blocks reduces the parameters by a quarter without sacrificing performance.
Experiments indicate that CAGAN strikes an excellent balance between model complexity and performance, and our pipeline outperforms DD-Net and DuDoNet.
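
Shuffle blocks typically pair grouped convolutions with a channel-shuffle step so the groups can exchange information while keeping the parameter count low. The helper below shows the standard channel shuffle (reshape, transpose, reshape); it illustrates the general mechanism, not CAGAN's exact block.

```python
import torch

def channel_shuffle(x, groups):
    """Standard channel shuffle: interleave channels across groups so grouped
    convolutions can mix information (as in ShuffleNet-style blocks)."""
    b, c, h, w = x.shape
    return x.reshape(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(2, 64, 32, 32)
y = channel_shuffle(x, groups=4)          # same shape, channels permuted across groups
```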
arXiv Detail & Related papers (2022-02-19T14:04:59Z)
- Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
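
A self-attention block over image features lets every spatial position attend to every other, which is how long-range dependencies are modeled. The layer below is a generic non-local-style block given for illustration; it is not claimed to match the study's architecture.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Generic self-attention over the spatial positions of a feature map."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))              # learned residual weight

    def forward(self, x):                                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.q(x).reshape(b, -1, h * w).transpose(1, 2)     # (B, HW, C/8)
        k = self.k(x).reshape(b, -1, h * w)                     # (B, C/8, HW)
        attn = torch.softmax(q @ k, dim=-1)                     # (B, HW, HW): each position attends to all others
        v = self.v(x).reshape(b, c, h * w)                      # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return self.gamma * out + x                             # residual connection
```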
arXiv Detail & Related papers (2021-12-23T19:20:38Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, CyTran for short.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data, and sparse-view data.
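
The core of data-consistent reconstruction is to keep whatever projection data were actually measured and let the learned prior fill in only the missing part. The helper below shows that replacement step in its simplest projection-domain form, as an illustration of the principle rather than the paper's DCR algorithm.

```python
import numpy as np

def enforce_data_consistency(prior_sino, measured_sino, measured_mask):
    """Keep measured projection data where available; use the learned prior's
    re-projected sinogram only where data are missing. Simplest possible form."""
    return np.where(measured_mask, measured_sino, prior_sino)

# Example: 720 full views, every 12th view actually measured (sparse-view case).
full = np.random.rand(720, 512)                     # sinogram re-projected from a prior image
measured = np.zeros_like(full)
mask = np.zeros_like(full, dtype=bool)
mask[::12] = True
measured[mask] = np.random.rand(int(mask.sum()))    # stand-in measured values
consistent = enforce_data_consistency(full, measured, mask)
```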
arXiv Detail & Related papers (2020-05-20T13:30:49Z)