Improved Model based Deep Learning using Monotone Operator Learning (MOL)
- URL: http://arxiv.org/abs/2111.11380v1
- Date: Mon, 22 Nov 2021 17:42:27 GMT
- Title: Improved Model based Deep Learning using Monotone Operator Learning (MOL)
- Authors: Aniket Pramanik, Mathews Jacob
- Abstract summary: MoDL algorithms that rely on unrolling are emerging as powerful tools for image recovery.
We introduce a novel monotone operator learning framework to overcome some of the challenges associated with current unrolled frameworks.
We demonstrate the utility of the proposed scheme in the context of parallel MRI.
- Score: 25.077510176642807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-based deep learning (MoDL) algorithms that rely on unrolling are
emerging as powerful tools for image recovery. In this work, we introduce a
novel monotone operator learning framework to overcome some of the challenges
associated with current unrolled frameworks, including high memory cost, lack
of guarantees on robustness to perturbations, and low interpretability. Unlike
current unrolled architectures that use a finite number of iterations, we use the
deep equilibrium (DEQ) framework to iterate the algorithm to convergence and to
evaluate the gradient of the convolutional neural network blocks using Jacobian
iterations. This approach significantly reduces the memory demand, facilitating
the extension of MoDL algorithms to high dimensional problems. We constrain the
CNN to be a monotone operator, which allows us to introduce algorithms with
guaranteed convergence properties and robustness guarantees. We demonstrate the
utility of the proposed scheme in the context of parallel MRI.
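The forward/backward structure the abstract describes can be sketched in a few lines. The following is a hedged toy illustration, not the paper's code: `f` stands in for the CNN block, and rescaling `W` to spectral norm below one is a crude stand-in for the monotone/contraction constraint that makes both fixed-point loops provably convergent.

```python
import numpy as np

# Toy DEQ-style layer: f(x) = tanh(W x + b) stands in for the CNN block.
# W is rescaled so its spectral norm is 0.5, a crude stand-in for the
# monotone/contraction constraint that guarantees convergence.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.5 / np.linalg.norm(W, 2)        # spectral norm of W is now 0.5
b = rng.standard_normal(4)

def f(x):
    return np.tanh(W @ x + b)

# Forward pass: iterate to the fixed point x* = f(x*) instead of storing
# a finite unrolled stack of layers -- this is what cuts the memory cost.
x = np.zeros(4)
for _ in range(500):
    x = f(x)

# Backward pass via the implicit function theorem: solve
#   v = g + J^T v,  where J = df/dx at x* and g = dL/dx*,
# by plain fixed-point (Jacobian) iterations, so no unrolled graph is kept.
D = np.diag(1.0 - f(x) ** 2)           # tanh derivative at W x* + b
J = D @ W                              # Jacobian of f at the fixed point
g = np.ones(4)                         # pretend dL/dx* for L = sum(x*)
v = g.copy()
for _ in range(500):
    v = g + J.T @ v

grad_b = D @ v                         # dL/db = (df/db)^T v, with df/db = D
```

Because the iteration map is a 0.5-contraction, both loops converge geometrically, and the memory cost of the backward pass is independent of the number of forward iterations.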
Related papers
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
Convergence guarantees and generalizability of the unrolled networks, however, remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- Local monotone operator learning using non-monotone operators: MnM-MOL [13.037647287689442]
Recovery of magnetic resonance (MR) images from undersampled measurements is a key problem that has seen extensive research in recent years.
Unrolled approaches rely on end-to-end training of convolutional neural network (CNN) blocks.
We introduce the MOL approach, which eliminates the need for unrolling, thus reducing the memory demand during training.
arXiv Detail & Related papers (2023-12-01T07:15:51Z)
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the explicit computation of the covariance matrices involved.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Accelerated parallel MRI using memory efficient and robust monotone operator learning (MOL) [24.975981795360845]
The main focus of this paper is to determine the utility of the monotone operator learning framework in the parallel MRI setting.
The benefits of this approach include similar guarantees as compressive sensing algorithms including uniqueness, convergence, and stability.
We validate the proposed scheme by comparing it with different unrolled algorithms in the context of accelerated parallel MRI for static and dynamic settings.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Improving Gradient Flow with Unrolled Highway Expectation Maximization [0.9539495585692008]
We propose Highway Expectation Maximization Networks (HEMNet), which comprises unrolled iterations of the generalized EM (GEM) algorithm.
HEMNet features scaled skip connections, or highways, along the depths of the unrolled architecture, resulting in improved gradient flow during backpropagation.
We achieve significant improvement on several semantic segmentation benchmarks and empirically show that HEMNet effectively alleviates gradient decay.
arXiv Detail & Related papers (2020-12-09T09:11:45Z)
- Learnable Descent Algorithm for Nonsmooth Nonconvex Image Reconstruction [4.2476585678737395]
We propose a general learning-based framework for solving nonsmooth, nonconvex image reconstruction problems.
We show that the proposed framework converges efficiently and is competitive with state-of-the-art methods on image reconstruction problems.
arXiv Detail & Related papers (2020-07-22T07:59:07Z)
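Several of the papers above (the learnable descent algorithm, SURF, HEMNet) build on the same unrolling skeleton: a fixed number K of classical iterations is treated as K network layers, with per-iteration parameters made trainable. A hedged numpy illustration under assumed names, where the step sizes `alphas` are fixed constants standing in for what would be learned end-to-end:

```python
import numpy as np

# Toy algorithm unrolling: K gradient-descent steps on 0.5*||A x - y||^2
# become K "layers", each with its own step size alpha_k. In a trained
# unrolled network, alphas (and any regularizer weights) are learned;
# here they are fixed constants for illustration only.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))        # stand-in forward model
y = rng.standard_normal(6)             # stand-in measurements

K = 10                                 # truncation depth = number of layers
alphas = np.full(K, 0.05)              # would be trainable parameters

x = np.zeros(3)
for k in range(K):                     # one loop iteration = one layer
    x = x - alphas[k] * A.T @ (A @ x - y)

# After K truncated steps, x approximates the least-squares solution;
# the gradient norm should have shrunk from its initial value ||A^T y||.
grad_norm = np.linalg.norm(A.T @ (A @ x - y))
```

The memory issue the MOL/DEQ papers above target is visible here: backpropagating through a trained version of this loop stores all K intermediate iterates, which is exactly what iterating to a fixed point avoids.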
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.