Multi-layer Clustering-based Residual Sparsifying Transform for Low-dose
CT Image Reconstruction
- URL: http://arxiv.org/abs/2203.11565v1
- Date: Tue, 22 Mar 2022 09:38:41 GMT
- Authors: Xikai Yang, Zhishen Huang, Yong Long, Saiprasad Ravishankar
- Abstract summary: We propose a network-structured sparsifying transform learning approach for X-ray computed tomography (CT) reconstruction.
We apply the MCST model to low-dose CT reconstruction by deploying the learned MCST model into the regularizer in penalized weighted least squares (PWLS) reconstruction.
Our simulation results demonstrate that PWLS-MCST achieves better image reconstruction quality than the conventional FBP method and PWLS with edge-preserving (EP) regularizer.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recently proposed sparsifying transform models incur low computational
cost and have been applied to medical imaging. Meanwhile, deep models with
nested network structure reveal great potential for learning features in
different layers. In this study, we propose a network-structured sparsifying
transform learning approach for X-ray computed tomography (CT), which we refer
to as multi-layer clustering-based residual sparsifying transform (MCST)
learning. The proposed MCST scheme learns multiple different unitary transforms
in each layer by dividing each layer's input into several classes. We apply the
MCST model to low-dose CT (LDCT) reconstruction by deploying the learned MCST
model into the regularizer in penalized weighted least squares (PWLS)
reconstruction. We conducted LDCT reconstruction experiments on XCAT phantom
data and Mayo Clinic data and trained the MCST model with 2 (or 3) layers and
with 5 clusters in each layer. The learned transforms in the same layer show
rich features, while additional information is extracted from representation
residuals. Our simulation results demonstrate that PWLS-MCST achieves better
image reconstruction quality than the conventional FBP method and PWLS with
edge-preserving (EP) regularizer. It also outperformed recent advanced methods
like PWLS with a learned multi-layer residual sparsifying transform prior
(MARS) and PWLS with a union of learned transforms (ULTRA), especially for
displaying clear edges and preserving subtle details.
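To make the construction above concrete, the core MCST operation can be sketched as follows: each layer partitions its input (the previous layer's representation residual) into clusters, sparsifies each cluster under its own unitary transform, and passes the residual on to the next layer. Below is a minimal NumPy sketch under stated assumptions (hard thresholding as the sparsifier, precomputed cluster assignments, random transforms); it illustrates the model structure only and is not the authors' implementation.

```python
import numpy as np

def hard_threshold(z, thresh):
    """Keep coefficients with magnitude at least thresh; zero the rest."""
    return z * (np.abs(z) >= thresh)

def mcst_sparsify(patches, transforms, assignments, thresh=0.1):
    """
    Multi-layer clustering-based residual sparsifying (illustrative sketch).

    patches     : (n, d) array of vectorized image patches
    transforms  : list of layers; each layer is a list of (d, d) unitary matrices
    assignments : list of layers; each layer is an (n,) int array mapping each
                  patch to a cluster/transform index in that layer
    Returns the sparse codes per layer and the final residual.
    """
    residual = patches.copy()
    codes = []
    for layer_transforms, layer_assign in zip(transforms, assignments):
        z = np.zeros_like(residual)
        recon = np.zeros_like(residual)
        for k, W in enumerate(layer_transforms):
            idx = layer_assign == k
            if not np.any(idx):
                continue
            coeffs = residual[idx] @ W.T          # transform-domain coefficients
            sparse = hard_threshold(coeffs, thresh)
            z[idx] = sparse
            recon[idx] = sparse @ W               # W unitary, so W^{-1} = W^T
        codes.append(z)
        residual = residual - recon               # next layer sparsifies this residual
    return codes, residual
```

Because each transform is unitary, the per-layer residual norm never exceeds the input norm (it equals the norm of the thresholded-away coefficients), which is what lets deeper layers extract progressively finer detail.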
Related papers
- CMFDFormer: Transformer-based Copy-Move Forgery Detection with Continual
Learning [52.72888626663642]
Copy-move forgery detection aims at detecting duplicated regions in a suspected forged image.
Deep learning based copy-move forgery detection methods are in the ascendant.
We propose a Transformer-style copy-move forgery network named as CMFDFormer.
We also provide a novel PCSD continual learning framework to help CMFDFormer handle new tasks.
arXiv Detail & Related papers (2023-11-22T09:27:46Z)
- Learned Alternating Minimization Algorithm for Dual-domain Sparse-View
CT Reconstruction [6.353014736326698]
We propose a novel Learned Alternating Minimization Algorithm (LAMA) for dual-domain sparse-view CT image reconstruction.
LAMA is provably convergent for reliable reconstructions.
arXiv Detail & Related papers (2023-06-05T07:29:18Z)
- ConvBLS: An Effective and Efficient Incremental Convolutional Broad
Learning System for Image Classification [63.49762079000726]
We propose a convolutional broad learning system (ConvBLS) based on the spherical K-means (SKM) algorithm and two-stage multi-scale (TSMS) feature fusion.
Our proposed ConvBLS method is unprecedentedly efficient and effective.
arXiv Detail & Related papers (2023-04-01T04:16:12Z)
- Learning with Multigraph Convolutional Filters [153.20329791008095]
We introduce multigraph convolutional neural networks (MGNNs) as stacked and layered structures where information is processed according to an MSP model.
We also develop a procedure for tractable computation of filter coefficients in the MGNNs and a low cost method to reduce the dimensionality of the information transferred between layers.
arXiv Detail & Related papers (2022-10-28T17:00:50Z)
- Spatiotemporal Feature Learning Based on Two-Step LSTM and Transformer
for CT Scans [2.3682456328966115]
We propose a novel, effective, two-step approach to tackle this issue for COVID-19 symptom classification.
First, the semantic feature embedding of each slice of a CT scan is extracted by conventional backbone networks.
Then, we propose a long short-term memory (LSTM) and Transformer-based sub-network to handle temporal feature learning.
arXiv Detail & Related papers (2022-07-04T16:59:05Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
DCGMMs can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Two-layer clustering-based sparsifying transform learning for low-dose
CT reconstruction [12.37556184089774]
We propose an approach to learn a rich two-layer clustering-based sparsifying transform model (MCST2).
Experimental results show the superior performance of the proposed PWLS-MCST2 approach compared to other related recent schemes.
arXiv Detail & Related papers (2020-11-01T05:15:37Z)
- Multi-layer Residual Sparsifying Transform (MARS) Model for Low-dose CT
Image Reconstruction [12.37556184089774]
We develop a new image reconstruction approach based on a novel multi-layer model learned in an unsupervised manner.
The proposed framework extends the classical sparsifying transform model for images to a Multi-lAyer Residual Sparsifying transform (MARS) model.
We derive an efficient block coordinate descent algorithm to learn the transforms across layers, in an unsupervised manner from limited regular-dose images.
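Block coordinate descent for sparsifying transform learning typically alternates a closed-form sparse coding step (hard thresholding of transform coefficients) with a closed-form unitary transform update (an orthogonal Procrustes solution via the SVD). The following single-transform NumPy sketch illustrates that alternation under those assumptions; it is an illustration, not the paper's actual multi-layer algorithm.

```python
import numpy as np

def learn_unitary_transform(R, thresh=0.3, iters=10, seed=0):
    """
    Alternating (block coordinate descent) updates for one unitary
    sparsifying transform, as used per layer in MARS-style models.
    R : (d, n) matrix of (residual) training signals as columns.
    Returns the learned transform and the objective value per iteration.
    """
    rng = np.random.default_rng(seed)
    d = R.shape[0]
    W, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random unitary init
    obj = []
    for _ in range(iters):
        # Sparse coding step: hard-threshold the transform coefficients,
        # which solves min_Z ||W R - Z||_F^2 + thresh^2 * ||Z||_0.
        Z = W @ R
        Z = Z * (np.abs(Z) >= thresh)
        # Transform update step: orthogonal Procrustes. With R Z^T = U S V^T,
        # W = V U^T minimizes ||W R - Z||_F^2 over unitary W.
        U, _, Vt = np.linalg.svd(R @ Z.T)
        W = Vt.T @ U.T
        # Record the full surrogate objective (fit + l0 penalty), which is
        # monotone non-increasing under this alternation.
        obj.append(np.linalg.norm(W @ R - Z) ** 2
                   + thresh ** 2 * np.count_nonzero(Z))
    return W, obj
```

Tracking the full objective (fit term plus the weighted sparsity count) rather than the fit term alone is what makes the recorded sequence provably non-increasing, since each of the two steps minimizes it exactly over its own block of variables.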
arXiv Detail & Related papers (2020-10-10T09:04:43Z)
- Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with
Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- Learned Multi-layer Residual Sparsifying Transform Model for Low-dose CT
Reconstruction [11.470070927586017]
Sparsifying transform learning involves highly efficient sparse coding and operator update steps.
We propose a Multi-layer Residual Sparsifying Transform (MRST) learning model wherein the transform domain residuals are jointly sparsified over layers.
arXiv Detail & Related papers (2020-05-08T02:36:50Z)