Accelerated MRI With Deep Linear Convolutional Transform Learning
- URL: http://arxiv.org/abs/2204.07923v1
- Date: Sun, 17 Apr 2022 04:47:32 GMT
- Title: Accelerated MRI With Deep Linear Convolutional Transform Learning
- Authors: Hongyi Gu, Burhaneddin Yaman, Steen Moeller, Il Yong Chun, Mehmet Akçakaya
- Abstract summary: Recent studies show that deep learning based MRI reconstruction outperforms conventional methods in multiple applications.
In this work, we combine ideas from CS, TL and DL reconstructions to learn deep linear convolutional transforms.
Our results show that the proposed technique can reconstruct MR images to a level comparable to DL methods, while supporting uniform undersampling patterns.
- Score: 7.927206441149002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies show that deep learning (DL) based MRI reconstruction
outperforms conventional methods, such as parallel imaging and compressed
sensing (CS), in multiple applications. Unlike CS that is typically implemented
with pre-determined linear representations for regularization, DL inherently
uses a non-linear representation learned from a large database. Another line of
work uses transform learning (TL) to bridge the gap between these two
approaches by learning linear representations from data. In this work, we
combine ideas from CS, TL and DL reconstructions to learn deep linear
convolutional transforms as part of an algorithm unrolling approach. Using
end-to-end training, our results show that the proposed technique can
reconstruct MR images to a level comparable to DL methods, while supporting
uniform undersampling patterns unlike conventional CS methods. Our proposed
method relies on convex sparse image reconstruction with linear representation
at inference time, which may be beneficial for characterizing robustness,
stability and generalizability.
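As a rough illustration of the ideas in the abstract, the sketch below alternates a gradient step on the data-fidelity term with a sparsity-promoting update in a learned linear convolutional transform domain, in the spirit of an unrolled transform-learning reconstruction. The forward operator `A`, its adjoint `At`, the filters, and the per-iteration thresholds are placeholders that would be learned end-to-end in the paper's setting; this is not the authors' actual architecture, and real-valued images are assumed for simplicity.

```python
import numpy as np
from scipy.signal import convolve2d

def soft_threshold(z, tau):
    """Element-wise soft-thresholding (the proximal operator of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def unrolled_tl_recon(y, A, At, filters, thresholds, step=1.0, beta=1.0, n_iters=10):
    """One possible unrolling with a linear convolutional transform regularizer.

    y          : undersampled k-space data
    A, At      : forward (coils + FFT + sampling mask) and adjoint operators
    filters    : list of real 2D kernels forming the learned analysis transform
    thresholds : thresholds[k][i] is the threshold for filter i at iteration k
    """
    x = At(y)                                          # zero-filled start
    for k in range(n_iters):
        grad = At(A(x) - y)                            # data-fidelity gradient
        for w, tau in zip(filters, thresholds[k]):
            z = convolve2d(x, w, mode="same")          # analysis transform W x
            s = soft_threshold(z, tau)                 # sparse surrogate of W x
            # gradient of (beta/2) * ||W x - s||^2; the adjoint of a real filter
            # is convolution with its flipped kernel
            grad += beta * convolve2d(z - s, w[::-1, ::-1], mode="same")
        x = x - step * grad                            # one unrolled iteration
    return x
```

Because the filters are fixed after training, inference remains a convex sparse reconstruction with a linear representation, which is the property the abstract highlights for robustness and stability analysis.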
Related papers
- RLE: A Unified Perspective of Data Augmentation for Cross-Spectral Re-identification [59.5042031913258]
Non-linear modality discrepancy mainly comes from diverse linear transformations acting on the surface of different materials.
We propose a Random Linear Enhancement (RLE) strategy, which includes Moderate Random Linear Enhancement (MRLE) and Radical Random Linear Enhancement (RRLE).
The experimental results not only demonstrate the superiority and effectiveness of RLE but also confirm its great potential as a general-purpose data augmentation for cross-spectral re-identification.
arXiv Detail & Related papers (2024-11-02T12:13:37Z)
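The entry above motivates augmentation with linear intensity transformations; below is a minimal, hypothetical sketch of a random per-channel linear (gain-and-offset) enhancement in that spirit. The parameter ranges and the per-channel form are assumptions for illustration; the paper's MRLE and RRLE variants are defined differently in detail.

```python
import numpy as np

def random_linear_enhance(img, gain_range=(0.7, 1.3), offset_range=(-0.1, 0.1), rng=None):
    """Apply a randomly drawn linear map a*x + b to each channel of an image
    in [0, 1]. Ranges are illustrative placeholders, not the paper's settings."""
    rng = rng if rng is not None else np.random.default_rng()
    out = np.empty_like(img, dtype=np.float32)
    for c in range(img.shape[-1]):
        a = rng.uniform(*gain_range)      # random gain
        b = rng.uniform(*offset_range)    # random offset
        out[..., c] = np.clip(a * img[..., c] + b, 0.0, 1.0)
    return out
```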
- vSHARP: variable Splitting Half-quadratic ADMM algorithm for Reconstruction of inverse-Problems [7.043932618116216]
vSHARP (variable Splitting Half-quadratic ADMM algorithm for Reconstruction of inverse Problems) is a novel Deep Learning (DL)-based method for solving ill-posed inverse problems arising in Medical Imaging (MI).
For data consistency, vSHARP unrolls a differentiable gradient descent process in the image domain, while a DL-based denoiser, such as a U-Net architecture, is applied to enhance image quality.
Our comparative analysis with state-of-the-art methods demonstrates the superior performance of vSHARP in these applications.
arXiv Detail & Related papers (2023-09-18T17:26:22Z)
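As a hedged sketch of the mechanism the vSHARP entry describes, one unrolled block can pair a few gradient-descent steps on the data-fidelity term with a learned denoiser. `A`, `At`, and `denoiser` are placeholder callables; the paper's actual scheme is built around an ADMM-style variable splitting with a U-Net denoiser, which this simplification omits.

```python
def unrolled_block(x, y, A, At, denoiser, step=0.5, n_grad=3):
    """One data-consistency-plus-refinement block of an unrolled reconstruction."""
    for _ in range(n_grad):
        x = x - step * At(A(x) - y)   # gradient descent on ||A x - y||^2 / 2
    return denoiser(x)                # learned image-domain refinement
```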
- Generative Diffusion Prior for Unified Image Restoration and Enhancement [62.76390152617949]
Existing image restoration methods mostly leverage the posterior distribution of natural images.
We propose the Generative Diffusion Prior (GDP) to effectively model the posterior distributions in an unsupervised sampling manner.
GDP utilizes a pre-trained denoising diffusion generative model (DDPM) for solving linear inverse, non-linear, or blind problems.
arXiv Detail & Related papers (2023-04-03T16:52:43Z)
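A heavily simplified, hypothetical sketch of diffusion-prior-guided restoration in the spirit of the GDP entry above: a pre-trained denoiser predicts the clean image at each reverse step, and a data-fidelity correction on that prediction nudges sampling toward the degraded observation. The toy schedule, the deterministic DDIM-style update, and the `degrade`/`degrade_adj`/`denoiser` callables are all assumptions, not the paper's sampler.

```python
import numpy as np

def guided_restore(y, degrade, degrade_adj, denoiser, steps=50, guidance=0.1, seed=0):
    """Reverse diffusion with a data-consistency nudge on the predicted clean
    image (same-sized degradations such as denoising/deblurring assumed)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 2e-2, steps)            # toy noise schedule
    alpha_bars = np.cumprod(1.0 - betas)
    x = rng.standard_normal(y.shape)                  # start from pure noise
    for t in reversed(range(steps)):
        eps_hat = denoiser(x, t)                      # predicted noise at step t
        x0_hat = (x - np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alpha_bars[t])
        # steer the clean-image estimate toward consistency with the observation
        x0_hat -= guidance * degrade_adj(degrade(x0_hat) - y)
        if t == 0:
            return x0_hat
        # deterministic DDIM-style move to the previous noise level
        x = np.sqrt(alpha_bars[t - 1]) * x0_hat + np.sqrt(1.0 - alpha_bars[t - 1]) * eps_hat
    return x
```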
- Curvature regularization for Non-line-of-sight Imaging from Under-sampled Data [5.591221518341613]
Non-line-of-sight (NLOS) imaging aims to reconstruct three-dimensional hidden scenes from data measured in the line of sight.
We propose novel NLOS reconstruction models based on curvature regularization.
We evaluate the proposed algorithms on both synthetic and real datasets.
arXiv Detail & Related papers (2023-01-01T14:10:43Z)
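As a rough, hedged formulation of the kind of model the NLOS entry above refers to, a curvature-regularized least-squares reconstruction can be written as below; the light-transport operator $\mathcal{A}$, transient data $b$, curvature functional $\kappa$, and weight $\lambda$ are generic placeholders rather than the paper's exact model.

```latex
\min_{u}\; \frac{1}{2}\,\big\| \mathcal{A}\,u - b \big\|_2^2 \;+\; \lambda \int_{\Omega} \big|\kappa(u)\big| \,\mathrm{d}x
```

Here $u$ is the hidden-scene reflectance and the curvature term penalizes oscillatory level sets, favoring smooth surfaces when the measurements are heavily under-sampled.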
- Convergent Data-driven Regularizations for CT Reconstruction [41.791026380947685]
In this work, we investigate simple, but still provably convergent, approaches to learning linear regularization methods from data.
We prove that such approaches yield convergent regularization methods and that the reconstructions they provide are typically much smoother than the training data.
arXiv Detail & Related papers (2022-12-14T17:34:03Z)
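For the entry above, a generic learned linear (Tikhonov-type) regularization can be sketched as follows; the learned operator $B_\theta$ and weight $\alpha$ are illustrative placeholders, not necessarily the paper's parametrization.

```latex
\hat{x}_\theta(y) \;=\; \arg\min_{x}\; \frac{1}{2}\,\|A x - y\|_2^2 + \frac{\alpha}{2}\,\|B_\theta x\|_2^2
\;=\; \left(A^{*}A + \alpha\,B_\theta^{*}B_\theta\right)^{-1} A^{*} y
```

The closed form assumes the regularized normal operator is invertible; training then amounts to choosing $B_\theta$ from data, and "convergent regularization" refers to the behavior of $\hat{x}_\theta(y)$ as the noise level vanishes.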
- A Unifying Multi-sampling-ratio CS-MRI Framework With Two-grid-cycle Correction and Geometric Prior Distillation [7.643154460109723]
We propose a unifying deep unfolding multi-sampling-ratio CS-MRI framework that merges the advantages of model-based and deep learning-based methods.
Inspired by the multigrid algorithm, we first embed the CS-MRI-based optimization algorithm into a correction-distillation scheme.
We employ a condition module to adaptively learn the step length and noise level from the compressive sampling ratio at every stage.
arXiv Detail & Related papers (2022-05-14T13:36:27Z)
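As a hedged illustration of the condition module mentioned in the entry above, a small network can map the compressive sampling ratio to a step length and noise level for every unrolled stage; the two-layer form, sizes, and output squashing below are invented for the sketch.

```python
import numpy as np

class ConditionModule:
    """Hypothetical condition module: a tiny MLP mapping the sampling ratio to
    per-stage step lengths and noise levels (weights would be trained)."""
    def __init__(self, n_stages, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((hidden, 1)) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((2 * n_stages, hidden)) * 0.1
        self.b2 = np.zeros(2 * n_stages)
        self.n_stages = n_stages

    def __call__(self, sampling_ratio):
        h = np.tanh(self.W1 @ np.array([sampling_ratio]) + self.b1)
        out = self.W2 @ h + self.b2
        steps = np.exp(out[: self.n_stages])   # positive step lengths
        noise = np.exp(out[self.n_stages:])    # positive noise levels
        return steps, noise
```

For example, `ConditionModule(n_stages=8)(0.25)` would supply the per-stage parameters of an unrolled solver at a 25% sampling ratio.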
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach that learns discriminative shrinkage functions to implicitly model the data and regularization terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
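A minimal, hypothetical sketch of the kind of learnable shrinkage the entry above mentions: soft-thresholding with per-channel thresholds that would be trained end-to-end rather than hand-designed. The parametrization is an assumption for illustration, not the paper's network.

```python
import numpy as np

def learned_shrinkage(responses, thresholds):
    """responses: (C, H, W) filter responses; thresholds: (C,) learned values."""
    t = np.maximum(thresholds, 0.0)[:, None, None]   # keep thresholds non-negative
    return np.sign(responses) * np.maximum(np.abs(responses) - t, 0.0)
```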
- Tensor Component Analysis for Interpreting the Latent Space of GANs [41.020230946351816]
This paper addresses the problem of finding interpretable directions in the latent space of pre-trained Generative Adversarial Networks (GANs).
Our scheme allows for both linear edits corresponding to the individual modes of the tensor, and non-linear ones that model the multiplicative interactions between them.
We show experimentally that we can utilise the former to better separate style- from geometry-based transformations, and the latter to generate an extended set of possible transformations.
arXiv Detail & Related papers (2021-11-23T09:14:39Z)
- Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning framework based on the Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable carries a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-06-20T04:28:20Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over the current state of the art.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs [53.710405006523274]
This work focuses on the representation learning question: how can we learn such features?
Under the assumption that the underlying (unknown) dynamics correspond to a low rank transition matrix, we show how the representation learning question is related to a particular non-linear matrix decomposition problem.
We develop FLAMBE, which engages in exploration and representation learning for provably efficient RL in low rank transition models.
arXiv Detail & Related papers (2020-06-18T19:11:18Z)
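For context on the FLAMBE entry above, the low-rank MDP assumption it builds on factorizes the transition kernel through $d$-dimensional feature maps, both of which are unknown and must be learned:

```latex
P\big(s' \mid s, a\big) \;=\; \big\langle \phi(s,a),\, \mu(s') \big\rangle \;=\; \sum_{i=1}^{d} \phi_i(s,a)\,\mu_i(s')
```

Recovering $\phi$ from sampled transitions is the representation learning step, which is why the summary relates it to a non-linear matrix decomposition problem.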
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.