Accelerated parallel MRI using memory efficient and robust monotone
operator learning (MOL)
- URL: http://arxiv.org/abs/2304.01351v1
- Date: Mon, 3 Apr 2023 20:26:59 GMT
- Title: Accelerated parallel MRI using memory efficient and robust monotone
operator learning (MOL)
- Authors: Aniket Pramanik, Mathews Jacob
- Abstract summary: The main focus of this paper is to determine the utility of the monotone operator learning framework in the parallel MRI setting.
The benefits of this approach include similar guarantees as compressive sensing algorithms including uniqueness, convergence, and stability.
We validate the proposed scheme by comparing it with different unrolled algorithms in the context of accelerated parallel MRI for static and dynamic settings.
- Score: 24.975981795360845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-based deep learning methods that combine imaging physics with learned
regularization priors have been emerging as powerful tools for parallel MRI
acceleration. The main focus of this paper is to determine the utility of the
monotone operator learning (MOL) framework in the parallel MRI setting. The MOL
algorithm alternates between a gradient descent step using a monotone
convolutional neural network (CNN) and a conjugate gradient algorithm to
encourage data consistency. The benefits of this approach include similar
guarantees as compressive sensing algorithms including uniqueness, convergence,
and stability, while being significantly more memory efficient than unrolled
methods. We validate the proposed scheme by comparing it with different
unrolled algorithms in the context of accelerated parallel MRI for static and
dynamic settings.
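The alternation the abstract describes, a learned monotone step followed by a conjugate-gradient data-consistency solve, can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the acquisition operator `E` is a small dense matrix rather than a coil-weighted Fourier operator, and the monotone CNN is replaced by a caller-supplied `denoise` placeholder.

```python
import numpy as np

def conjugate_gradient(A, b, x0, n_iter=50, tol=1e-12):
    """Solve A x = b for symmetric positive-definite A."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def mol_reconstruct(x0, E, y, denoise, lam=1.0, n_outer=40):
    """Alternate a denoising step (stand-in for the monotone CNN) with a
    CG data-consistency solve of (E^T E + lam I) x = E^T y + lam z."""
    H = E.T @ E + lam * np.eye(E.shape[1])
    Ety = E.T @ y
    x = x0.copy()
    for _ in range(n_outer):
        z = denoise(x)                 # learned monotone step in the real MOL
        x = conjugate_gradient(H, Ety + lam * z, x)
    return x
```

With an identity denoiser the fixed point reduces to the least-squares solution, which makes the scheme easy to sanity-check on a well-conditioned toy operator.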
Related papers
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
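For readers unfamiliar with sequential Monte Carlo, the sketch below is a minimal bootstrap particle filter for a scalar linear-Gaussian state-space model: propagate, weight, resample. It is a generic textbook baseline, not the paper's VSMC algorithm, and the model parameters `a`, `q`, `r` are illustrative.

```python
import numpy as np

def bootstrap_filter(ys, n_particles=500, a=0.9, q=0.1, r=0.1, rng=None):
    """Minimal SMC: x_t = a x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
    rng = np.random.default_rng(0) if rng is None else rng
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in ys:
        # Propagate particles through the transition model.
        particles = a * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # Weight by the observation likelihood (log-domain for stability).
        logw = -0.5 * (y - particles) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ particles)
        # Multinomial resampling.
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(means)
```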
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
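The covariance-avoidance idea generalizes beyond this specific model: in a Gaussian linear model, the posterior mean can be obtained from one linear solve without ever forming or inverting the posterior covariance. The sketch below illustrates that generic trick; it is not the paper's inference algorithm.

```python
import numpy as np

def posterior_mean_explicit(A, y, noise_var, prior_prec):
    # Naive: forms and inverts the full posterior covariance matrix.
    cov = np.linalg.inv(A.T @ A / noise_var + prior_prec)
    return cov @ (A.T @ y / noise_var)

def posterior_mean_solve(A, y, noise_var, prior_prec):
    # Same mean via a single linear solve; no covariance matrix materialized.
    return np.linalg.solve(A.T @ A / noise_var + prior_prec,
                           A.T @ y / noise_var)
```

Both routines return the same vector, but the second scales much better in high dimensions because it avoids the explicit inverse.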
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - vSHARP: variable Splitting Half-quadratic Admm algorithm for Reconstruction of inverse-Problems [7.043932618116216]
vSHARP (variable Splitting Half-quadratic ADMM algorithm for Reconstruction of inverse Problems) is a novel Deep Learning (DL)-based method for solving ill-posed inverse problems arising in Medical Imaging (MI).
For data consistency, vSHARP unrolls a differentiable gradient descent process in the image domain, while a DL-based denoiser, such as a U-Net architecture, is applied to enhance image quality.
Our comparative analysis with state-of-the-art methods demonstrates the superior performance of vSHARP in these applications.
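A minimal half-quadratic variable-splitting loop of the kind vSHARP unrolls can be sketched as follows. This is a hand-written toy, not vSHARP itself: the learned U-Net denoiser is replaced by a soft-threshold, the Lagrange multipliers of full ADMM are omitted, and `A` is a small dense matrix.

```python
import numpy as np

def soft_threshold(v, t):
    # Toy proximal denoiser standing in for the learned U-Net.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def half_quadratic_splitting(y, A, mu=1.0, t=0.05, n_outer=20, n_gd=10, step=0.1):
    """Minimize ||A x - y||^2 + prior(z) subject to x = z, relaxed with a
    quadratic penalty (mu/2)||x - z||^2, by alternating x- and z-updates."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    for _ in range(n_outer):
        # x-update: a few unrolled gradient-descent steps on the data term.
        for _ in range(n_gd):
            grad = A.T @ (A @ x - y) + mu * (x - z)
            x = x - step * grad
        # z-update: denoising step.
        z = soft_threshold(x, t / mu)
    return x
```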
arXiv Detail & Related papers (2023-09-18T17:26:22Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled architectures and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - NL-CS Net: Deep Learning with Non-Local Prior for Image Compressive
Sensing [7.600617428107161]
Deep learning has been applied to compressive sensing (CS) of images successfully in recent years.
This paper proposes a novel CS method using a non-local prior, called NL-CS Net, which combines the interpretability of traditional optimization methods with the speed of network-based methods.
arXiv Detail & Related papers (2023-05-06T02:34:28Z) - A scan-specific unsupervised method for parallel MRI reconstruction via
implicit neural representation [9.388253054229155]
Implicit neural representation (INR) has emerged as a new deep learning paradigm for learning the internal continuity of an object.
The proposed method outperforms existing methods by suppressing the aliasing artifacts and noise.
The high-quality results and scanning specificity make the proposed method hold the potential for further accelerating the data acquisition of parallel MRI.
arXiv Detail & Related papers (2022-10-19T10:16:03Z) - Loop Unrolled Shallow Equilibrium Regularizer (LUSER) -- A
Memory-Efficient Inverse Problem Solver [26.87738024952936]
In inverse problems we aim to reconstruct some underlying signal of interest from potentially corrupted and often ill-posed measurements.
We propose a loop unrolled (LU) algorithm with shallow equilibrium regularizers (LUSER).
These implicit models are as expressive as deeper convolutional networks, but far more memory efficient during training.
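A shallow equilibrium (implicit) layer evaluates a fixed point rather than a stack of distinct layers, so forward-pass memory does not grow with effective depth. The sketch below is a minimal fixed-point forward pass with a hand-picked contractive weight matrix; it is illustrative only, not the LUSER architecture.

```python
import numpy as np

def deq_forward(x_in, W, b, n_iter=100, tol=1e-8):
    """Iterate z <- tanh(W z + x_in + b) to a fixed point.
    Requires ||W|| < 1 so the map is a contraction."""
    z = np.zeros_like(x_in)
    for _ in range(n_iter):
        z_new = np.tanh(W @ z + x_in + b)
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z
```

The returned `z` satisfies the equilibrium condition z = tanh(W z + x_in + b) up to the tolerance, which is what makes constant-memory implicit differentiation possible in the full method.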
arXiv Detail & Related papers (2022-10-10T19:50:37Z) - GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
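The memory saving in greedy learning comes from fitting each unrolled step locally, without backpropagating through the full unroll. The toy below makes that concrete for a one-parameter physics step (a learned Landweber step size per iteration); it is a simplified stand-in for GLEAM, with grid search in place of gradient-based training.

```python
import numpy as np

def unroll_step(x, A, y, alpha):
    # One physics-based consistency step of an unrolled network
    # (the neural regularization module is omitted for brevity).
    return x - alpha * (A.T @ (A @ x - y))

def greedy_train(A, y, x_true, n_steps=5, grid=np.linspace(0.01, 1.0, 100)):
    """Greedily fit each step's parameter against the target, keeping
    only the current step's output in memory."""
    x = np.zeros(A.shape[1])
    alphas = []
    for _ in range(n_steps):
        # Pick the alpha that minimizes the loss after this single step.
        losses = [np.linalg.norm(unroll_step(x, A, y, a) - x_true) for a in grid]
        best = grid[int(np.argmin(losses))]
        alphas.append(best)
        x = unroll_step(x, A, y, best)   # earlier steps are never revisited
    return alphas, x
```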
arXiv Detail & Related papers (2022-07-18T06:01:29Z) - Stabilizing Q-learning with Linear Architectures for Provably Efficient
Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z) - Improved Model based Deep Learning using Monotone Operator Learning
(MOL) [25.077510176642807]
MoDL algorithms that rely on unrolling are emerging as powerful tools for image recovery.
We introduce a novel monotone operator learning framework to overcome some of the challenges associated with current unrolled frameworks.
We demonstrate the utility of the proposed scheme in the context of parallel MRI.
arXiv Detail & Related papers (2021-11-22T17:42:27Z) - Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent developments in approaches based on deep learning have achieved sub-second runtimes for DiffIR.
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z)
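Functional composition of fields can be illustrated in one dimension: composing two displacement fields requires evaluating the second field at the positions warped by the first, done here with linear interpolation. This is a 1-D toy with stationary fields, not the paper's non-stationary diffeomorphic solver.

```python
import numpy as np

def compose_displacements(u1, u2, grid):
    """(phi2 o phi1)(x) = x + u1(x) + u2(x + u1(x)); u2 is evaluated
    off-grid by linear interpolation on the sample points `grid`."""
    warped = grid + u1
    u2_at_warped = np.interp(warped, grid, u2)
    return u1 + u2_at_warped
```

For constant (pure translation) fields the composition is simply additive, which gives a quick correctness check.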
This list is automatically generated from the titles and abstracts of the papers in this site.