LEARN++: Recurrent Dual-Domain Reconstruction Network for Compressed
Sensing CT
- URL: http://arxiv.org/abs/2012.06983v1
- Date: Sun, 13 Dec 2020 07:00:50 GMT
- Title: LEARN++: Recurrent Dual-Domain Reconstruction Network for Compressed
Sensing CT
- Authors: Yi Zhang, Hu Chen, Wenjun Xia, Yang Chen, Baodong Liu, Yan Liu,
Huaiqiang Sun, and Jiliu Zhou
- Abstract summary: The LEARN++ model integrates two parallel and interactive subnetworks to perform image restoration and sinogram inpainting operations in the image and projection domains simultaneously.
Results show that the proposed LEARN++ model achieves competitive qualitative and quantitative results compared to several state-of-the-art methods in terms of both artifact reduction and detail preservation.
- Score: 17.168584459606272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compressed sensing (CS) computed tomography has been proven to be important
for several clinical applications, such as sparse-view computed tomography
(CT), digital tomosynthesis and interior tomography. Traditional compressed
sensing focuses on the design of handcrafted prior regularizers, which are
usually image-dependent and time-consuming. Inspired by recently proposed deep
learning-based CT reconstruction models, we extend the state-of-the-art LEARN
model to a dual-domain version, dubbed LEARN++. Different from existing
iteration unrolling methods, which only involve projection data in the data
consistency layer, the proposed LEARN++ model integrates two parallel and
interactive subnetworks to perform image restoration and sinogram inpainting
operations on both the image and projection domains simultaneously, which can
fully explore the latent relations between projection data and reconstructed
images. The experimental results demonstrate that the proposed LEARN++ model
achieves competitive qualitative and quantitative results compared to several
state-of-the-art methods in terms of both artifact reduction and detail
preservation.
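The dual-domain idea above — an image-domain update and a projection-domain (sinogram) update that exchange information at every unrolled iteration — can be illustrated with a toy sketch. This is not the LEARN++ architecture: the learned subnetworks are replaced by a plain gradient step and a consistency-preserving inpainting rule, and a random matrix stands in for the Radon transform. All sizes and names are illustrative assumptions.

```python
import numpy as np

# Toy dual-domain unrolling: a random sensing matrix stands in for the
# CT forward projector, and hand-written rules stand in for the two
# learned subnetworks (image restoration, sinogram inpainting).
rng = np.random.default_rng(0)
n_pix, n_views = 32, 64                # image pixels, full-view sinogram size
A = rng.standard_normal((n_views, n_pix)) / np.sqrt(n_views)  # toy projector

x_true = rng.standard_normal(n_pix)    # toy "ground-truth image"
full_sino = A @ x_true
measured = np.zeros(n_views, dtype=bool)
measured[:48] = True                   # only 48 of 64 views are acquired
y = full_sino[measured]                # sparse-view measurements

x = np.zeros(n_pix)                    # image-domain estimate
for _ in range(2000):                  # unrolled iterations
    # Projection-domain step: "inpaint" the missing views by reprojecting
    # the current image, while keeping the measured views fixed
    # (a learned inpainting subnetwork would refine this step).
    s = A @ x
    s[measured] = y
    # Image-domain step: gradient descent on the data-consistency term
    # against the completed sinogram (a learned restoration subnetwork
    # would be applied here as well).
    x = x - 0.5 * (A.T @ (A @ x - s))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.4f}")
```

The point of the sketch is the information flow: each iteration the sinogram estimate is completed from the image, and the image is then corrected against the completed sinogram, so the two domains constrain each other rather than the projection data appearing only in a terminal data-consistency layer.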
Related papers
- CT-SDM: A Sampling Diffusion Model for Sparse-View CT Reconstruction across All Sampling Rates [16.985836345715963]
Sparse-view X-ray computed tomography has emerged as a contemporary technique to reduce the radiation dose.
Recent studies utilizing deep learning methods have made promising progress in removing artifacts for sparse-view computed tomography (SVCT).
Our study proposes an adaptive reconstruction method to achieve high-performance SVCT reconstruction at any sampling rate.
arXiv Detail & Related papers (2024-09-03T03:06:15Z) - CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
arXiv Detail & Related papers (2024-06-21T08:38:30Z) - MVMS-RCN: A Dual-Domain Unfolding CT Reconstruction with Multi-sparse-view and Multi-scale Refinement-correction [9.54126979075279]
Sparse-view CT imaging reduces the number of projection views to lower the radiation dose.
Most existing deep learning (DL) and deep unfolding sparse-view CT reconstruction methods do not fully use the projection data.
This paper aims to use mathematical ideas to design optimal DL imaging algorithms for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-27T13:01:25Z) - Multi-Branch Generative Models for Multichannel Imaging with an Application to PET/CT Synergistic Reconstruction [42.95604565673447]
This paper presents a novel approach for learned synergistic reconstruction of medical images using multi-branch generative models.
We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/ computed tomography (CT) datasets.
arXiv Detail & Related papers (2024-04-12T18:21:08Z) - Enhancing Low-dose CT Image Reconstruction by Integrating Supervised and
Unsupervised Learning [13.17680480211064]
We propose a hybrid supervised-unsupervised learning framework for X-ray computed tomography (CT) image reconstruction.
Each proposed trained block consists of a deterministic MBIR solver and a neural network.
We demonstrate the efficacy of this learned hybrid model for low-dose CT image reconstruction with limited training data.
arXiv Detail & Related papers (2023-11-19T20:23:59Z) - APRF: Anti-Aliasing Projection Representation Field for Inverse Problem
in Imaging [74.9262846410559]
Sparse-view Computed Tomography (SVCT) reconstruction is an ill-posed inverse problem in imaging.
Recent works use Implicit Neural Representations (INRs) to build the coordinate-based mapping between sinograms and CT images.
We propose a self-supervised SVCT reconstruction method -- Anti-Aliasing Projection Representation Field (APRF)
APRF can build the continuous representation between adjacent projection views via the spatial constraints.
arXiv Detail & Related papers (2023-07-11T14:04:12Z) - Self-Supervised Coordinate Projection Network for Sparse-View Computed
Tomography [31.774432128324385]
We propose a Self-supervised COordinate Projection nEtwork (SCOPE) to reconstruct the artifacts-free CT image from a single SV sinogram.
Compared with recent related works that solve similar problems using implicit neural representation network (INR), our essential contribution is an effective and simple re-projection strategy.
arXiv Detail & Related papers (2022-09-12T06:14:04Z) - OADAT: Experimental and Synthetic Clinical Optoacoustic Data for
Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-up and associated processing methods are available to facilitate advances in broader applications of OA in clinical settings.
arXiv Detail & Related papers (2022-06-17T08:11:26Z) - Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy
CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
arXiv Detail & Related papers (2022-03-10T14:22:54Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.