Multi-layer Residual Sparsifying Transform (MARS) Model for Low-dose CT
Image Reconstruction
- URL: http://arxiv.org/abs/2010.06144v3
- Date: Fri, 28 May 2021 10:12:44 GMT
- Title: Multi-layer Residual Sparsifying Transform (MARS) Model for Low-dose CT
Image Reconstruction
- Authors: Xikai Yang, Yong Long, Saiprasad Ravishankar
- Abstract summary: We develop a new image reconstruction approach based on a novel multi-layer model learned in an unsupervised manner.
The proposed framework extends the classical sparsifying transform model for images to a Multi-lAyer Residual Sparsifying transform (MARS) model.
We derive an efficient block coordinate descent algorithm to learn the transforms across layers, in an unsupervised manner from limited regular-dose images.
- Score: 12.37556184089774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Signal models based on sparse representations have received considerable
attention in recent years. On the other hand, deep models consisting of a
cascade of functional layers, commonly known as deep neural networks, have been
highly successful for object classification and have recently been introduced
to image reconstruction. In this work, we develop a new image
reconstruction approach based on a novel multi-layer model learned in an
unsupervised manner by combining both sparse representations and deep models.
The proposed framework extends the classical sparsifying transform model for
images to a Multi-lAyer Residual Sparsifying transform (MARS) model, wherein
the transform domain data are jointly sparsified over layers. We investigate
the application of MARS models learned from limited regular-dose images for
low-dose CT reconstruction using Penalized Weighted Least Squares (PWLS)
optimization. We propose new formulations for multi-layer transform learning
and image reconstruction. We derive an efficient block coordinate descent
algorithm to learn the transforms across layers, in an unsupervised manner from
limited regular-dose images. The learned model is then incorporated into the
low-dose image reconstruction phase. Low-dose CT experimental results with both
the XCAT phantom and Mayo Clinic data show that the MARS model outperforms
conventional methods such as FBP and PWLS with the edge-preserving (EP)
regularizer in terms of two numerical metrics (RMSE and SSIM) and noise
suppression. Compared with the single-layer learned sparsifying transform (ST)
model, the MARS model better preserves subtle image details.
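For orientation, the abstract's description can be written schematically as follows; the notation, the layer-wise weights, and the unitarity constraint below are our illustrative assumptions, not the paper's verbatim formulation. Given a matrix Y of vectorized training patches from regular-dose images, the MARS transforms are learned by jointly sparsifying transform-domain residuals across L layers:
```latex
% Schematic MARS learning problem (illustrative reconstruction from the abstract).
\min_{\{\mathbf{\Omega}_l\},\{\mathbf{Z}_l\}}
  \sum_{l=1}^{L} \Big\{ \|\mathbf{\Omega}_l \mathbf{R}_l - \mathbf{Z}_l\|_F^2
  + \gamma_l^2 \|\mathbf{Z}_l\|_0 \Big\},
\qquad
\mathbf{R}_1 = \mathbf{Y}, \quad
\mathbf{R}_{l+1} = \mathbf{\Omega}_l \mathbf{R}_l - \mathbf{Z}_l, \ \ 1 \le l \le L-1,
```
where \Omega_l is the layer-l sparsifying transform, Z_l its sparse code, and R_l the residual passed down from the previous layer. The learned transforms then enter the low-dose reconstruction phase through a PWLS objective of the form
```latex
% Schematic PWLS-MARS reconstruction (A: CT system matrix, W: statistical weights).
\hat{\mathbf{x}} = \arg\min_{\mathbf{x} \ge \mathbf{0}}
  \tfrac{1}{2}\,\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_{\mathbf{W}}^2
  + \beta\, \mathsf{S}(\mathbf{x}),
```
where S(x) applies the learned MARS sparsification cost to patches of the image x and \beta balances data fidelity against the learned prior. A minimal sketch of the block coordinate descent learning loop is given below; the hard-thresholding sparse-coding step, the orthogonal-Procrustes transform update, and the greedy layer-by-layer sweep are simplifying assumptions and need not match the paper's exact update rules.
```python
# Minimal, illustrative sketch of block coordinate descent for multi-layer
# residual sparsifying transform learning. Assumptions (not from the paper):
# unitary transforms, l0 sparsity via hard thresholding, greedy layer sweep.
import numpy as np

def hard_threshold(x, gamma):
    """Keep only entries with magnitude >= gamma (l0-type sparse coding)."""
    return x * (np.abs(x) >= gamma)

def learn_mars_transforms(Y, gammas=(0.1, 0.05), num_iters=50, seed=0):
    """Y: (patch_dim, num_patches) matrix of vectorized training patches."""
    rng = np.random.default_rng(seed)
    d = Y.shape[0]
    # Start each layer from a random orthonormal transform.
    transforms = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in gammas]
    for _ in range(num_iters):
        R = Y  # the layer-1 "residual" is the training data itself
        for l, gamma in enumerate(gammas):
            # Sparse coding step: hard-threshold the transformed residual.
            Z = hard_threshold(transforms[l] @ R, gamma)
            # Transform update step: argmin_W ||W R - Z||_F^2 s.t. W unitary
            # is an orthogonal Procrustes problem with closed-form solution
            # W = V U^T, where U S V^T is the SVD of R Z^T.
            U, _, Vt = np.linalg.svd(R @ Z.T)
            transforms[l] = Vt.T @ U.T
            # Recompute the sparse code and pass the residual to the next layer.
            Z = hard_threshold(transforms[l] @ R, gamma)
            R = transforms[l] @ R - Z
    return transforms
```
In the reconstruction phase, one would analogously alternate an image update (solving the weighted least squares data term plus the quadratic part of S(x)) with re-estimation of the sparse codes Z_l by hard thresholding, mirroring the unsupervised learning loop sketched above.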
Related papers
- Multi-Branch Generative Models for Multichannel Imaging with an Application to PET/CT Synergistic Reconstruction [42.95604565673447]
This paper presents a novel approach for learned synergistic reconstruction of medical images using multi-branch generative models.
We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/computed tomography (CT) datasets.
arXiv Detail & Related papers (2024-04-12T18:21:08Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to Stable Diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Improving Pixel-based MIM by Reducing Wasted Modeling Capability [77.99468514275185]
We propose a new method that explicitly utilizes low-level features from shallow layers to aid pixel reconstruction.
To the best of our knowledge, we are the first to systematically investigate multi-level feature fusion for isotropic architectures.
Our method yields significant performance gains, such as 1.2% on fine-tuning, 2.8% on linear probing, and 2.6% on semantic segmentation.
arXiv Detail & Related papers (2023-08-01T03:44:56Z) - Masked Image Modeling with Local Multi-Scale Reconstruction [54.91442074100597]
Masked Image Modeling (MIM) achieves outstanding success in self-supervised representation learning.
Existing MIM models conduct reconstruction task only at the top layer of encoder.
We design local multi-scale reconstruction, where the lower and upper layers reconstruct fine-scale and coarse-scale supervision signals, respectively.
arXiv Detail & Related papers (2023-03-09T13:42:04Z) - DOLCE: A Model-Based Probabilistic Diffusion Framework for Limited-Angle
CT Reconstruction [42.028139152832466]
Limited-Angle Computed Tomography (LACT) is a non-destructive evaluation technique used in a variety of applications ranging from security to medicine.
We present DOLCE, a new deep model-based framework for LACT that uses a conditional diffusion model as an image prior.
arXiv Detail & Related papers (2022-11-22T15:30:38Z) - FewGAN: Generating from the Joint Distribution of a Few Images [95.6635227371479]
We introduce FewGAN, a generative model for generating novel, high-quality and diverse images.
FewGAN is a hierarchical patch-GAN that applies quantization at the first coarse scale, followed by a pyramid of residual fully convolutional GANs at finer scales.
In an extensive set of experiments, it is shown that FewGAN outperforms baselines both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-07-18T07:11:28Z) - Multi-layer Clustering-based Residual Sparsifying Transform for Low-dose
CT Image Reconstruction [11.011268090482575]
We propose a network-structured sparsifying transform learning approach for X-ray computed tomography (CT) reconstruction.
We apply the MCST model to low-dose CT reconstruction by incorporating the learned MCST model into the regularizer of penalized weighted least squares (PWLS) reconstruction.
Our simulation results demonstrate that PWLS-MCST achieves better image reconstruction quality than the conventional FBP method and PWLS with edge-preserving (EP) regularizer.
arXiv Detail & Related papers (2022-03-22T09:38:41Z) - A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
Deep Convolutional Gaussian Mixture Models (DCGMMs) can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z) - LEARN++: Recurrent Dual-Domain Reconstruction Network for Compressed
Sensing CT [17.168584459606272]
The LEARN++ model integrates two parallel and interactive subnetworks to perform image restoration and sinogram inpainting operations on both the image and projection domains simultaneously.
Results show that the proposed LEARN++ model achieves competitive qualitative and quantitative results compared to several state-of-the-art methods in terms of both artifact reduction and detail preservation.
arXiv Detail & Related papers (2020-12-13T07:00:50Z) - Learned Multi-layer Residual Sparsifying Transform Model for Low-dose CT
Reconstruction [11.470070927586017]
Sparsifying transform learning involves highly efficient sparse coding and operator update steps.
We propose a Multi-layer Residual Sparsifying Transform (MRST) learning model wherein the transform domain residuals are jointly sparsified over layers.
arXiv Detail & Related papers (2020-05-08T02:36:50Z) - Normalizing Flows with Multi-Scale Autoregressive Priors [131.895570212956]
We introduce channel-wise dependencies in the latent space of normalizing flows through multi-scale autoregressive priors (mAR).
Our mAR prior for models with split coupling flow layers (mAR-SCF) can better capture dependencies in complex multimodal data.
We show that mAR-SCF allows for improved image generation quality, with gains in FID and Inception scores compared to state-of-the-art flow-based models.
arXiv Detail & Related papers (2020-04-08T09:07:11Z)