Local monotone operator learning using non-monotone operators: MnM-MOL
- URL: http://arxiv.org/abs/2312.00386v1
- Date: Fri, 1 Dec 2023 07:15:51 GMT
- Title: Local monotone operator learning using non-monotone operators: MnM-MOL
- Authors: Maneesh John, Jyothi Rikhab Chand, Mathews Jacob
- Abstract summary: Recovery of magnetic resonance (MR) images from undersampled measurements is a key problem that has seen extensive research in recent years.
Unrolled approaches rely on end-to-end training of convolutional neural network (CNN) blocks within iterative reconstruction algorithms.
The MOL approach eliminates the need for unrolling, thus reducing the memory demand during training; this work relaxes its monotone constraint on the CNN block.
- Score: 13.037647287689442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recovery of magnetic resonance (MR) images from undersampled measurements
is a key problem that has seen extensive research in recent years. Unrolled
approaches, which rely on end-to-end training of convolutional neural network
(CNN) blocks within iterative reconstruction algorithms, offer state-of-the-art
performance. These algorithms require a large amount of memory during training,
making them difficult to employ in high-dimensional applications. Deep
equilibrium (DEQ) models and the recent monotone operator learning (MOL)
approach were introduced to eliminate the need for unrolling, thus reducing the
memory demand during training. Both approaches require a Lipschitz constraint
on the network to ensure that the forward and backpropagation iterations
converge. Unfortunately, the constraint often results in reduced performance
compared to unrolled methods. The main focus of this work is to relax the
constraint on the CNN block in two different ways. Inspired by
convex-non-convex regularization strategies, we now impose the monotone
constraint on the sum of the gradient of the data term and the CNN block,
rather than constrain the CNN itself to be a monotone operator. This approach
enables the CNN to learn possibly non-monotone score functions, which can
translate to improved performance. In addition, we only restrict the operator
to be monotone in a local neighborhood around the image manifold. Our
theoretical results show that the proposed algorithm is guaranteed to converge
to the fixed point and that the solution is robust to input perturbations,
provided that it is initialized close to the true solution. Our empirical
results show that the relaxed constraints translate to improved performance and
that the approach enjoys robustness to input perturbations similar to MOL.
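The abstract describes a fixed-point reconstruction in which only the sum of the data-term gradient and the CNN block, rather than the CNN block itself, is constrained to be (locally) monotone. The NumPy sketch below illustrates that structure on a toy inverse problem; the forward model, the stand-in "CNN" H_theta, the step size, and all dimensions are illustrative assumptions chosen so the toy converges, not values or code from the paper.

```python
# Toy illustration of the MnM-MOL idea (not the authors' implementation):
# H_theta alone need not be monotone; only F(x) = grad_data(x) + H_theta(x)
# is required to be monotone, and the reconstruction is the fixed point of
# x <- x - alpha * F(x).
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 256                                    # toy sizes (well-conditioned on purpose)
A = rng.standard_normal((m, n)) / np.sqrt(m)      # stand-in for the MR measurement operator
x_true = rng.standard_normal(n)
b = A @ x_true                                    # noiseless measurements

W = rng.standard_normal((n, n))
W = 0.03 * W / np.linalg.norm(W, 2)               # small random "CNN" weights (spectral norm 0.03)

def grad_data(x):
    # Gradient of the data-consistency term 0.5 * ||A x - b||^2
    return A.T @ (A @ x - b)

def H_theta(x):
    # Toy learned block; the -0.02*x term makes it clearly non-monotone on its own
    return np.tanh(W @ x) - 0.02 * x

def F(x):
    # Combined operator: MnM-MOL constrains this SUM to be (locally) monotone
    return grad_data(x) + H_theta(x)

def min_eig_sym_jac(op, x, eps=1e-5):
    # Smallest eigenvalue of the symmetrized numerical Jacobian of `op` at x;
    # a negative value certifies that `op` is not monotone near x.
    J = np.column_stack([(op(x + eps * e) - op(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
    return np.linalg.eigvalsh(0.5 * (J + J.T)).min()

x0 = x_true + 0.1 * rng.standard_normal(n)        # initialize near the solution
print("min eig sym Jacobian of H_theta:", min_eig_sym_jac(H_theta, x0))  # expected < 0
print("min eig sym Jacobian of F      :", min_eig_sym_jac(F, x0))        # expected > 0

alpha, x = 0.5, x0                                # forward fixed-point iteration
for _ in range(300):
    x = x - alpha * F(x)
print("fixed-point residual ||F(x)||  :", np.linalg.norm(F(x)))
```

In this toy the combined operator happens to be globally strongly monotone, so the iteration converges from any initialization; in the paper, monotonicity is only imposed in a local neighborhood of the image manifold, which is why initialization close to the true solution matters for the convergence and robustness guarantees.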
Related papers
- Learning truly monotone operators with applications to nonlinear inverse problems [15.736235440441478]
This article introduces a novel approach to learning monotone neural networks through a newly defined penalization loss.
The Forward-Backward-Forward (FBF) algorithm is employed to address monotone inclusion problems.
We then show simulation examples where the non-linear inverse problem is successfully solved.
arXiv Detail & Related papers (2024-03-30T15:03:52Z) - Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
However, convergence guarantees and generalizability of the unrolled networks remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
arXiv Detail & Related papers (2023-12-25T18:51:23Z) - Improving the Robustness of Neural Multiplication Units with Reversible
Stochasticity [2.4278445972594525]
Multilayer Perceptrons struggle to learn certain simple arithmetic tasks.
The sNMU is proposed to apply reversible stochasticity, encouraging the network to avoid the poor local optima behind these failures.
arXiv Detail & Related papers (2022-11-10T14:56:37Z) - An alternative approach to train neural networks using monotone
variational inequality [22.320632565424745]
We propose an alternative approach to neural network training using the monotone vector field.
Our approach can be used for more efficient fine-tuning of a pre-trained neural network.
arXiv Detail & Related papers (2022-02-17T19:24:20Z) - Improved Model based Deep Learning using Monotone Operator Learning
(MOL) [25.077510176642807]
MoDL algorithms that rely on unrolling are emerging as powerful tools for image recovery.
We introduce a novel monotone operator learning framework to overcome some of the challenges associated with current unrolled frameworks.
We demonstrate the utility of the proposed scheme in the context of parallel MRI.
arXiv Detail & Related papers (2021-11-22T17:42:27Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - DeepSplit: Scalable Verification of Deep Neural Networks via Operator
Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
arXiv Detail & Related papers (2021-06-16T20:43:49Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - Monotone operator equilibrium networks [97.86610752856987]
We develop a new class of implicit-depth model based on the theory of monotone operators, the Monotone Operator Equilibrium Network (monDEQ).
We show the close connection between finding the equilibrium point of an implicit network and solving a form of monotone operator splitting problem.
We then develop a parameterization of the network which ensures that all operators remain monotone, which guarantees the existence of a unique equilibrium point.
arXiv Detail & Related papers (2020-06-15T17:57:31Z)
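For contrast with the relaxed, local constraint of MnM-MOL, the sketch below illustrates the kind of global parameterization the monDEQ entry refers to: writing W = (1 - m)I - A^T A + B - B^T keeps I - W strongly monotone by construction, which is what guarantees a unique equilibrium point. The matrices and the margin m below are illustrative assumptions, not values from that paper.

```python
# Minimal NumPy check of a monotone parameterization in the style of monDEQ.
import numpy as np

rng = np.random.default_rng(1)
d, margin = 32, 0.1
A = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))

# W = (1 - m) I - A^T A + (B - B^T): the skew part B - B^T does not affect
# monotonicity, while -A^T A and the (1 - m) scaling enforce I - W >= m I.
W = (1.0 - margin) * np.eye(d) - A.T @ A + (B - B.T)

# Symmetric part of I - W equals margin * I + A^T A, so its smallest
# eigenvalue is at least `margin`, certifying strong monotonicity.
sym = 0.5 * ((np.eye(d) - W) + (np.eye(d) - W).T)
print("min eig of sym(I - W):", np.linalg.eigvalsh(sym).min())  # >= margin
```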