Learned Multi-layer Residual Sparsifying Transform Model for Low-dose CT
Reconstruction
- URL: http://arxiv.org/abs/2005.03825v1
- Date: Fri, 8 May 2020 02:36:50 GMT
- Title: Learned Multi-layer Residual Sparsifying Transform Model for Low-dose CT
Reconstruction
- Authors: Xikai Yang, Xuehang Zheng, Yong Long, Saiprasad Ravishankar
- Abstract summary: Sparsifying transform learning involves highly efficient sparse coding and operator update steps.
We propose a Multi-layer Residual Sparsifying Transform (MRST) learning model wherein the transform domain residuals are jointly sparsified over layers.
- Score: 11.470070927586017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Signal models based on sparse representation have received considerable
attention in recent years. Compared to synthesis dictionary learning,
sparsifying transform learning involves highly efficient sparse coding and
operator update steps. In this work, we propose a Multi-layer Residual
Sparsifying Transform (MRST) learning model wherein the transform domain
residuals are jointly sparsified over layers. In particular, the transforms for
the deeper layers exploit the more intricate properties of the residual maps.
We investigate the application of the learned MRST model for low-dose CT
reconstruction using Penalized Weighted Least Squares (PWLS) optimization.
Experimental results on Mayo Clinic data show that the MRST model outperforms
conventional methods such as FBP and PWLS methods based on edge-preserving (EP)
regularizer and single-layer transform (ST) model, especially for maintaining
some subtle details.
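For concreteness, the PWLS objective with an L-layer MRST regularizer can be written schematically as below. The notation is illustrative and simplified from the paper's setup (x: image, y: sinogram data, A: system matrix, W: statistical weighting, P_j: patch-extraction operators, Omega_l: layer-l transform, Z_l: layer-l sparse codes, R_l: layer-l residual); see the paper for the exact constraints and parameter choices.

  \min_{x \ge 0} \; \frac{1}{2}\,\|y - A x\|_W^2 \;+\; \beta\, \mathsf{S}(x),
  \qquad
  \mathsf{S}(x) \;=\; \min_{\{Z_l\}} \; \sum_{l=1}^{L} \Big( \|\Omega_l R_l - Z_l\|_F^2 \;+\; \eta_l^2\, \|Z_l\|_0 \Big),

  R_1 = [\,P_1 x \;\cdots\; P_N x\,], \qquad R_l = \Omega_{l-1} R_{l-1} - Z_{l-1}, \quad 2 \le l \le L.

Each deeper layer thus sparsifies the residual that the previous layer could not represent, which is the sense in which the transform-domain residuals are jointly sparsified over layers.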
Related papers
- Language Models as Zero-shot Lossless Gradient Compressors: Towards
General Neural Parameter Prior Models [66.1595537904019]
Large language models (LLMs) can act as gradient priors in a zero-shot setting.
We introduce LM-GC, a novel method that integrates LLMs with arithmetic coding.
arXiv Detail & Related papers (2024-09-26T13:38:33Z)
- Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms such as low-rank computation achieve impressive performance for learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving the efficiency of adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on the testing performance.
arXiv Detail & Related papers (2024-06-24T23:00:58Z)
- Learning Explicitly Conditioned Sparsifying Transforms [7.335712499936904]
We consider a new sparsifying transform model that enforces explicit control over the data representation quality and the condition number of the learned transforms.
We confirm through numerical experiments that our model presents better numerical behavior than the state-of-the-art.
arXiv Detail & Related papers (2024-03-05T18:03:51Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Efficient GPT Model Pre-training using Tensor Train Matrix Representation [65.96485282393361]
Large-scale transformer models feature billions of parameters, leading to difficulties in deployment and prohibitive costs of training from scratch.
To reduce the number of parameters in the GPT-2 architecture, we replace the matrices of the fully-connected layers with the corresponding Tensor Train Matrix (TTM) structure (a minimal sketch of the TTM representation appears after this list).
The resulting GPT-based model stores up to 40% fewer parameters, with perplexity comparable to the original model.
arXiv Detail & Related papers (2023-06-05T08:38:25Z)
- Masked Pre-Training of Transformers for Histology Image Analysis [4.710921988115685]
In digital pathology, whole slide images (WSIs) are widely used for applications such as cancer diagnosis and prognosis prediction.
Visual transformer models have emerged as a promising method for encoding large regions of WSIs while preserving spatial relationships among patches.
To address the scarcity of labeled data, we propose a pretext task for training the transformer model without labels.
Our model, MaskHIT, uses the transformer output to reconstruct masked patches and learn representative histological features based on their positions and visual features.
arXiv Detail & Related papers (2023-04-14T23:56:49Z)
- Multi-layer Clustering-based Residual Sparsifying Transform for Low-dose CT Image Reconstruction [11.011268090482575]
We propose a network-structured sparsifying transform learning approach for X-ray computed tomography (CT) reconstruction.
We apply the MCST model to low-dose CT reconstruction by deploying the learned MCST model into the regularizer in penalized weighted least squares (PWLS) reconstruction.
Our simulation results demonstrate that PWLS-MCST achieves better image reconstruction quality than the conventional FBP method and PWLS with edge-preserving (EP) regularizer.
arXiv Detail & Related papers (2022-03-22T09:38:41Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
The resulting Deep Convolutional Gaussian Mixture Models (DCGMMs) can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Two-layer clustering-based sparsifying transform learning for low-dose CT reconstruction [12.37556184089774]
We propose an approach to learn a rich two-layer clustering-based sparsifying transform model (MCST2).
Experimental results show the superior performance of the proposed PWLS-MCST2 approach compared to other related recent schemes.
arXiv Detail & Related papers (2020-11-01T05:15:37Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Multi-layer Residual Sparsifying Transform (MARS) Model for Low-dose CT Image Reconstruction [12.37556184089774]
We develop a new image reconstruction approach based on a novel multi-layer model learned in an unsupervised manner.
The proposed framework extends the classical sparsifying transform model for images to a Multi-lAyer Residual Sparsifying transform (MARS) model.
We derive an efficient block coordinate descent algorithm to learn the transforms across layers, in an unsupervised manner from limited regular-dose images.
arXiv Detail & Related papers (2020-10-10T09:04:43Z)
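The MRST/MARS entries above (and the abstract at the top) learn the layer transforms by block coordinate descent. The following is a minimal NumPy sketch of one such sweep: exact hard thresholding for the l0-penalized sparse coding step and a closed-form orthogonal Procrustes update for a unitary transform. The function names, the unitary-transform assumption, and the update ordering are illustrative choices for this sketch, not the papers' released code.

import numpy as np

def hard_threshold(B, eta):
    # Exact solution of min_Z ||B - Z||_F^2 + eta^2 ||Z||_0:
    # keep entries whose magnitude is at least eta, zero the rest.
    return B * (np.abs(B) >= eta)

def mrst_bcd_sweep(Y, transforms, etas):
    # One block coordinate descent sweep for a multi-layer residual
    # sparsifying transform model (schematic).
    #   Y          : (n, N) matrix of vectorized training patches
    #   transforms : list of L unitary (n, n) transforms, updated in place
    #   etas       : list of L per-layer thresholds
    # Returns the per-layer sparse code matrices.
    codes = []
    R = Y  # the layer-1 "residual" is the patch matrix itself
    for l, eta in enumerate(etas):
        W = transforms[l]
        # Sparse coding step for this layer
        Z = hard_threshold(W @ R, eta)
        # Transform update: argmin_W ||W R - Z||_F^2 s.t. W^T W = I
        # is the orthogonal Procrustes solution V U^T, with R Z^T = U S V^T
        U, _, Vt = np.linalg.svd(R @ Z.T)
        W = Vt.T @ U.T
        transforms[l] = W
        # Refresh the codes under the updated transform, then form the
        # residual that the next (deeper) layer will sparsify
        Z = hard_threshold(W @ R, eta)
        codes.append(Z)
        R = W @ R - Z
    return codes

In a PWLS-MRST/MARS reconstruction, updates of this kind are performed offline on regular-dose training patches; the learned transforms are then held fixed inside the regularizer while the image update fits the low-dose measurements.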
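As referenced in the "Efficient GPT Model Pre-training using Tensor Train Matrix Representation" entry, here is a minimal (deliberately unoptimized) sketch of what a TTM representation of a fully-connected weight matrix looks like. The core shapes, ranks, and the materialization routine are illustrative assumptions, not that paper's implementation.

import itertools
import numpy as np

def ttm_to_matrix(cores):
    # Materialize the full matrix encoded by TT-Matrix cores.
    # cores[k] has shape (r_{k-1}, m_k, n_k, r_k) with r_0 = r_d = 1;
    # the result has shape (prod m_k, prod n_k).
    row_dims = [c.shape[1] for c in cores]
    col_dims = [c.shape[2] for c in cores]
    W = np.empty((int(np.prod(row_dims)), int(np.prod(col_dims))))
    for row, idx in enumerate(itertools.product(*map(range, row_dims))):
        for col, jdx in enumerate(itertools.product(*map(range, col_dims))):
            acc = np.eye(1)  # chain of small r_{k-1} x r_k core slices
            for k, (i, j) in enumerate(zip(idx, jdx)):
                acc = acc @ cores[k][:, i, j, :]
            W[row, col] = acc[0, 0]
    return W

# Example: a 64 x 64 layer factored as (4*4*4) x (4*4*4) with TT-ranks (1, 3, 3, 1);
# about 240 core parameters encode a 4096-entry matrix.
rng = np.random.default_rng(0)
ranks, m, n = (1, 3, 3, 1), (4, 4, 4), (4, 4, 4)
cores = [rng.standard_normal((ranks[k], m[k], n[k], ranks[k + 1])) for k in range(3)]
full = ttm_to_matrix(cores)
print(full.shape, sum(c.size for c in cores), full.size)

In practice the full matrix is never materialized; the layer's forward pass is computed by contracting the input with the cores directly, which is where the parameter savings come from.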