Neural Network-based Reconstruction in Compressed Sensing MRI Without
Fully-sampled Training Data
- URL: http://arxiv.org/abs/2007.14979v1
- Date: Wed, 29 Jul 2020 17:46:55 GMT
- Title: Neural Network-based Reconstruction in Compressed Sensing MRI Without
Fully-sampled Training Data
- Authors: Alan Q. Wang, Adrian V. Dalca, and Mert R. Sabuncu
- Abstract summary: CS-MRI has shown promise in reconstructing under-sampled MR images.
Deep learning models have been developed that model the iterative nature of classical techniques by unrolling iterations in a neural network.
In this paper, we explore a novel strategy to train an unrolled reconstruction network in an unsupervised fashion by adopting a loss function widely-used in classical optimization schemes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compressed Sensing MRI (CS-MRI) has shown promise in reconstructing
under-sampled MR images, offering the potential to reduce scan times. Classical
techniques minimize a regularized least-squares cost function using an
expensive iterative optimization procedure. Recently, deep learning models have
been developed that model the iterative nature of classical techniques by
unrolling iterations in a neural network. While exhibiting superior
performance, these methods require large quantities of ground-truth images and
have been shown to be non-robust to unseen data. In this paper, we explore a novel
strategy to train an unrolled reconstruction network in an unsupervised fashion
by adopting a loss function widely used in classical optimization schemes. We
demonstrate that this strategy achieves lower loss and is computationally cheap
compared to classical optimization solvers while also exhibiting superior
robustness compared to supervised models. Code is available at
https://github.com/alanqrwang/HQSNet.
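The key idea above is that the classical regularized least-squares cost can itself serve as the training loss, so no fully-sampled ground-truth images are needed. The following is a minimal numpy sketch of that loss under assumed toy names (`A`, `y`, `lam`, a TV-style regularizer); it is not the paper's code, and plain gradient steps stand in for the unrolled network iterations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressed-sensing setup (illustrative, not the paper's setup):
# A under-samples a signal x_true into measurements y.
n, m = 32, 16
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true

def unsupervised_loss(x, A, y, lam=0.01):
    """Regularized least-squares cost: depends only on the measurements
    y, never on a fully-sampled ground-truth image."""
    data_fit = 0.5 * np.sum((A @ x - y) ** 2)
    regularizer = np.sum(np.abs(np.diff(x)))  # anisotropic TV surrogate
    return data_fit + lam * regularizer

# A few gradient steps on the data-fit term stand in for the unrolled
# iterations the reconstruction network would perform.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(200):
    x -= step * (A.T @ (A @ x - y))

# The iterated estimate attains a much lower loss than the zero image.
final_loss = unsupervised_loss(x, A, y)
```

Because the loss is computable from undersampled data alone, the same quantity can supervise a network during training and score classical solvers at test time.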
Related papers
- Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for ground-truth training data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights by a small amount proportional to their magnitude on-the-fly.
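The gradual shrink-by-magnitude idea can be sketched as follows; this is a hedged toy illustration of the general soft-shrinkage principle in numpy, with made-up names and parameters (`prune_pct`, `shrink`), not the actual ISS-P algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # stand-in for one layer's weights

def soft_shrink_step(W, prune_pct=0.5, shrink=0.1):
    """One toy iteration in the spirit of iterative soft shrinkage:
    rather than hard-zeroing the smallest weights at once, shrink them
    by a small fraction of their magnitude, keeping the step gentle."""
    thresh = np.quantile(np.abs(W), prune_pct)  # magnitude cutoff
    out = W.copy()
    small = np.abs(out) < thresh
    out[small] *= 1.0 - shrink
    return out

# Repeated soft steps drive small weights toward zero while leaving
# the largest-magnitude weights untouched.
W_pruned = W
for _ in range(50):
    W_pruned = soft_shrink_step(W_pruned)
```

The soft step is the design point: unlike one-shot magnitude pruning, a weight shrunk early can still grow back if later training finds it useful.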
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z) - Loop Unrolled Shallow Equilibrium Regularizer (LUSER) -- A
Memory-Efficient Inverse Problem Solver [26.87738024952936]
In inverse problems we aim to reconstruct some underlying signal of interest from potentially corrupted and often ill-posed measurements.
We propose a Loop Unrolled (LU) algorithm with shallow equilibrium regularizers (LUSER).
These implicit models are as expressive as deeper convolutional networks, but far more memory efficient during training.
arXiv Detail & Related papers (2022-10-10T19:50:37Z) - Learning Optimal K-space Acquisition and Reconstruction using
Physics-Informed Neural Networks [46.751292014516025]
Deep neural networks have been applied to reconstruct undersampled k-space data and have shown improved reconstruction performance.
This work proposes a novel framework that learns k-space sampling trajectories by formulating trajectory optimization as an Ordinary Differential Equation (ODE) problem.
Experiments were conducted on different in-vivo datasets (e.g., brain and knee images) acquired with different sequences.
arXiv Detail & Related papers (2022-04-05T20:28:42Z) - An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy
Image Compression Systems [73.48927855855219]
Recent advances in deep learning have resulted in image compression algorithms that outperform JPEG and JPEG 2000 on the standard Kodak benchmark.
In this paper, we perform the first large-scale comparison of recent state-of-the-art hybrid neural compression algorithms.
arXiv Detail & Related papers (2022-01-27T19:47:51Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
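Algorithm unfolding, as used by REST, turns each iteration of a classical sparse solver into a network layer. A minimal numpy sketch of one such iteration (plain ISTA with a soft-threshold nonlinearity) is below; the problem sizes and threshold are illustrative assumptions, not REST's architecture or parameters.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: the nonlinearity that
    unfolding turns into a network activation."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy sparse-recovery problem (illustrative setup).
rng = np.random.default_rng(0)
m, n = 16, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 10]] = [1.0, -2.0]
y = A @ x_true

# Each ISTA iteration below is what unfolding maps to one layer, with
# the step size and threshold becoming learnable per-layer parameters.
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(300):
    x = soft_threshold(x - A.T @ (A @ x - y) / L, 0.01 / L)
```

Truncating the loop to a fixed, small number of layers and learning their parameters end-to-end is what gives unrolled networks their speed over classical solvers.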
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - DRO: Deep Recurrent Optimizer for Structure-from-Motion [46.34708595941016]
This paper presents a novel optimization method based on recurrent neural networks for structure-from-motion (SfM).
Our network alternately updates the depth and camera poses through iterations to minimize a feature-metric cost.
Experiments demonstrate that our recurrent computation effectively reduces the feature-metric cost while refining the depth and poses.
arXiv Detail & Related papers (2021-03-24T13:59:40Z) - Regularization-Agnostic Compressed Sensing MRI Reconstruction with
Hypernetworks [21.349071909858218]
We present a novel strategy of using a hypernetwork to generate the parameters of a separate reconstruction network as a function of the regularization weight(s).
At test time, for a given under-sampled image, our model can rapidly compute reconstructions with different amounts of regularization.
We analyze the variability of these reconstructions, especially in situations when the overall quality is similar.
arXiv Detail & Related papers (2021-01-06T18:55:37Z)
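The hypernetwork idea above amounts to a map from the regularization weight to reconstruction parameters, so regularization can be swept at test time without retraining. As a hedged numpy stand-in, the sketch below uses closed-form Tikhonov regression as the "hypernetwork output" for each weight; all names and sizes are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy under-sampled linear measurement model (illustrative).
m, n = 16, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ rng.standard_normal(n)

def recon_matrix(lam):
    """Stand-in 'hypernetwork': maps a regularization weight lam to the
    parameters of a linear reconstruction (closed-form Tikhonov)."""
    return A.T @ np.linalg.inv(A @ A.T + lam * np.eye(m))

# At test time, rapidly sweep regularization strengths on the same
# under-sampled measurements and compare the reconstructions.
recons = {lam: recon_matrix(lam) @ y for lam in (0.01, 0.1, 1.0)}
```

As expected, stronger regularization yields a more damped reconstruction; the hypernetwork approach learns this map once instead of training one network per regularization setting.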
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.