Learning truly monotone operators with applications to nonlinear inverse problems
- URL: http://arxiv.org/abs/2404.00390v1
- Date: Sat, 30 Mar 2024 15:03:52 GMT
- Title: Learning truly monotone operators with applications to nonlinear inverse problems
- Authors: Younes Belkouchi, Jean-Christophe Pesquet, Audrey Repetti, Hugues Talbot
- Abstract summary: This article introduces a novel approach to learning monotone neural networks through a newly defined penalization loss.
The Forward-Backward-Forward (FBF) algorithm is employed to address monotone inclusion problems.
We then show simulation examples where the non-linear inverse problem is successfully solved.
- Score: 15.736235440441478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article introduces a novel approach to learning monotone neural networks through a newly defined penalization loss. The proposed method is particularly effective in solving classes of variational problems, specifically monotone inclusion problems, commonly encountered in image processing tasks. The Forward-Backward-Forward (FBF) algorithm is employed to address these problems, offering a solution even when the Lipschitz constant of the neural network is unknown. Notably, the FBF algorithm provides convergence guarantees under the condition that the learned operator is monotone. Building on plug-and-play methodologies, our objective is to apply these newly learned operators to solving non-linear inverse problems. To achieve this, we initially formulate the problem as a variational inclusion problem. Subsequently, we train a monotone neural network to approximate an operator that may not inherently be monotone. Leveraging the FBF algorithm, we then show simulation examples where the non-linear inverse problem is successfully solved.
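The penalization idea can be illustrated concretely. Monotonicity of an operator F means that <F(x) - F(y), x - y> >= 0 for all x, y; a training loss can therefore penalize violations of this inequality on sampled pairs of points. The sketch below is a minimal illustration of such a pairwise penalty, not the authors' exact loss; the names F, eps, and the pair-sampling scheme are assumptions made here for the example.

```python
import torch

def monotonicity_penalty(F, x, y, eps=0.0):
    """Hinge penalty that is zero when <F(x) - F(y), x - y> >= eps * ||x - y||^2
    holds for every pair in the batch, and positive otherwise.
    F: any torch.nn.Module mapping R^n -> R^n; x, y: (B, n) batches of points."""
    diff_in = x - y                              # (B, n) input differences
    diff_out = F(x) - F(y)                       # (B, n) output differences
    inner = (diff_in * diff_out).sum(dim=1)      # <F(x) - F(y), x - y> per pair
    margin = eps * diff_in.pow(2).sum(dim=1)     # optional strong-monotonicity margin
    return torch.relu(margin - inner).mean()     # hinge on violated pairs only
```

In training, such a term would be added (with a weight) to the data-fitting loss, so the network both approximates the possibly non-monotone target operator and stays monotone on the sampled pairs.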
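The abstract also notes that FBF converges even when the Lipschitz constant of the learned operator is unknown; this is typically handled with a backtracking step-size rule in Tseng's forward-backward-forward splitting for the inclusion 0 in A(x) + B(x), with A maximally monotone and B monotone and Lipschitz. Below is a minimal NumPy sketch of that standard scheme; the function names (resolvent_A, B) and the parameters sigma, beta are illustrative assumptions, not the paper's API.

```python
import numpy as np

def fbf(resolvent_A, B, x0, gamma=1.0, beta=0.7, sigma=0.99, n_iter=200):
    """Tseng's forward-backward-forward iteration for 0 in A(x) + B(x),
    with backtracking on gamma so no Lipschitz constant of B is needed.
    resolvent_A(z, gamma): computes J_{gamma A}(z) = (I + gamma A)^{-1}(z)
    (e.g. a proximity operator when A is a subdifferential).
    B: a (learned) monotone, Lipschitz operator."""
    x = x0
    for _ in range(n_iter):
        Bx = B(x)
        while True:
            p = resolvent_A(x - gamma * Bx, gamma)   # backward (resolvent) step
            Bp = B(p)
            # accept gamma once gamma * ||Bp - Bx|| <= sigma * ||p - x||
            if gamma * np.linalg.norm(Bp - Bx) <= sigma * np.linalg.norm(p - x):
                break
            gamma *= beta                            # shrink the step and retry
        x = p + gamma * (Bx - Bp)                    # second forward (correction) step
    return x
```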
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - Local monotone operator learning using non-monotone operators: MnM-MOL [13.037647287689442]
Recovery of magnetic resonance (MR) images from undersampled measurements is a key problem that has seen extensive research in recent years.
The memory demand of unrolled approaches restricts end-to-end training of convolutional neural network (CNN) blocks.
We introduce the MOL approach, which eliminates the need for unrolling, thus reducing the memory demand during training.
arXiv Detail & Related papers (2023-12-01T07:15:51Z) - Deep neural networks can stably solve high-dimensional, noisy, non-linear inverse problems [2.6651200086513107]
We study the problem of reconstructing solutions of inverse problems when only noisy measurements are available.
For the inverse operator, we demonstrate that there exists a neural network which is a robust-to-noise approximation of the operator.
arXiv Detail & Related papers (2022-06-02T08:51:46Z) - NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalizes to the space of supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z) - Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Due to the global approximation, physics informed neural networks have difficulties in displaying localized effects and strong non-linear solutions by optimization.
It is shown that the domain decomposition approach is able to accurately resolve nonlinear stress, displacement and energy fields in heterogeneous microstructures obtained from real-world µCT scans.
arXiv Detail & Related papers (2021-10-14T14:05:19Z) - Inverse Problem of Nonlinear Schrödinger Equation as Learning of Convolutional Neural Network [5.676923179244324]
It is shown that one can obtain a relatively accurate estimate of the considered parameters using the proposed method.
It provides a natural framework for inverse problems of partial differential equations with deep learning.
arXiv Detail & Related papers (2021-07-19T02:54:37Z) - DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
arXiv Detail & Related papers (2021-06-16T20:43:49Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z) - Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z) - Regularization of Inverse Problems by Neural Networks [0.0]
Inverse problems arise in a variety of imaging applications including computed tomography, non-destructive testing, and remote sensing.
The characteristic features of inverse problems are the non-uniqueness and instability of their solutions.
Deep learning techniques and neural networks have been demonstrated to significantly outperform classical solution methods for inverse problems.
arXiv Detail & Related papers (2020-06-06T20:49:12Z) - A Novel Learnable Gradient Descent Type Algorithm for Non-convex Non-smooth Inverse Problems [3.888272676868008]
We propose a novel learnable gradient descent type algorithm for solving inverse problems, consisting of a general network architecture with neural networks imitating the gradient descent steps.
Results show that the proposed network outperforms state-of-the-art reconstruction methods on different image reconstruction problems in terms of efficiency and reconstruction quality.
arXiv Detail & Related papers (2020-03-15T03:44:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.