DRIP: Deep Regularizers for Inverse Problems
- URL: http://arxiv.org/abs/2304.00015v2
- Date: Fri, 25 Aug 2023 09:06:36 GMT
- Title: DRIP: Deep Regularizers for Inverse Problems
- Authors: Moshe Eliasof, Eldad Haber, Eran Treister
- Abstract summary: We introduce a new family of neural regularizers for the solution of inverse problems.
These regularizers are based on a variational formulation and are guaranteed to fit the data.
We demonstrate their use on a number of highly ill-posed problems, from image deblurring to limited angle tomography.
- Score: 15.919986945096182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we consider inverse problems that are mathematically ill-posed.
That is, given some (noisy) data, there is more than one solution that
approximately fits the data. In recent years, deep neural techniques that find
the most appropriate solution, in the sense that it contains a-priori
information, were developed. However, they suffer from several shortcomings.
First, most techniques cannot guarantee that the solution fits the data at
inference. Second, while the derivation of the techniques is inspired by the
existence of a valid scalar regularization function, such techniques do not in
practice rely on such a function, and therefore veer away from classical
variational techniques. In this work we introduce a new family of neural
regularizers for the solution of inverse problems. These regularizers are based
on a variational formulation and are guaranteed to fit the data. We demonstrate
their use on a number of highly ill-posed problems, from image deblurring to
limited angle tomography.
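The "guaranteed to fit the data" claim comes from the constrained variational view: minimize a regularizer R(x) subject to a data-fit constraint. The sketch below is not the paper's method; it substitutes a simple quadratic R(x) = ||x||^2/2 for the learned neural regularizer so that the penalized problem min_x ||Ax - b||^2/(2*mu) + R(x) has a closed-form solution, and shows that driving mu toward zero recovers the R-minimal solution that fits the data exactly.

```python
import numpy as np

# Toy underdetermined problem: 8 measurements, 20 unknowns, so Ax = b
# has infinitely many solutions and a regularizer must pick one.
rng = np.random.default_rng(0)
n, m = 8, 20
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[::4] = 1.0
b = A @ x_true

# Penalized variational problem with a quadratic stand-in regularizer
# R(x) = ||x||^2 / 2 (DRIP learns R as a network; this choice is
# purely illustrative):
#     min_x  ||A x - b||^2 / (2 * mu)  +  R(x)
# Setting the gradient to zero gives (A^T A + mu I) x = A^T b.
for mu in [1.0, 1e-2, 1e-4, 1e-6]:
    x = np.linalg.solve(A.T @ A + mu * np.eye(m), A.T @ b)
    print(f"mu={mu:8.0e}  data misfit ||Ax-b|| = {np.linalg.norm(A @ x - b):.2e}")

# As mu -> 0 the iterate converges to the minimum-R point that fits the
# data exactly (for this quadratic R, the minimum-norm solution).
x_min_norm = np.linalg.pinv(A) @ b
```

For a learned, non-quadratic R there is no closed form and one would instead run gradient-based minimization, but the mechanism enforcing data fit (shrinking mu) is the same.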
Related papers
- Weak neural variational inference for solving Bayesian inverse problems without forward models: applications in elastography [1.6385815610837167]
We introduce a novel, data-driven approach for solving high-dimensional Bayesian inverse problems based on partial differential equations (PDEs)
The Weak Neural Variational Inference (WNVI) method complements real measurements with virtual observations derived from the physical model.
We demonstrate that WNVI is as accurate as, and more efficient than, traditional methods that rely on repeatedly solving the (non-linear) forward problem as a black box.
arXiv Detail & Related papers (2024-07-30T09:46:03Z)
- ASPIRE: Iterative Amortized Posterior Inference for Bayesian Inverse Problems [0.974963895316339]
New advances in machine learning and variational inference (VI) have lowered the computational barrier by learning from examples.
Two VI paradigms have emerged that represent different tradeoffs: amortized and non-amortized.
We present a solution that enables iterative improvement of amortized posteriors using the same network architectures and training data.
arXiv Detail & Related papers (2024-05-08T20:03:12Z)
- ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference [69.24516189971929]
In this paper, we introduce a new type of solution in the longitudinal setting: a closed-form ordinary differential equation (ODE)
While we still rely on continuous optimization to learn an ODE, the resulting inference machine is no longer a neural network.
arXiv Detail & Related papers (2024-03-16T02:07:45Z)
- Reverse em-problem based on Bregman divergence and its application to classical and quantum information theory [53.64687146666141]
A recent paper introduced an analytical method for calculating the channel capacity without the need for iteration.
We turn our attention to the reverse em-problem, proposed by Toyota.
We derive a non-iterative formula for the reverse em-problem.
arXiv Detail & Related papers (2024-03-14T10:20:28Z)
- Single-Shot Plug-and-Play Methods for Inverse Problems [24.48841512811108]
Plug-and-Play priors in inverse problems have become increasingly prominent in recent years.
Existing models predominantly rely on pre-trained denoisers using large datasets.
In this work, we introduce Single-Shot perturbative methods, shifting the focus to solving inverse problems with minimal data.
arXiv Detail & Related papers (2023-11-22T20:31:33Z)
- Deep Learning and Bayesian inference for Inverse Problems [8.315530799440554]
We focus on NNs and DL, and more specifically on Bayesian DL, which is particularly well adapted to inverse problems.
We consider two cases: first, where the forward operator is known and used as a physics constraint; second, more general data-driven DL methods.
arXiv Detail & Related papers (2023-08-28T04:27:45Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Learning a Single Neuron with Bias Using Gradient Descent [53.15475693468925]
We study the fundamental problem of learning a single neuron with a bias term.
We show that this is a significantly different and more challenging problem than the bias-less case.
arXiv Detail & Related papers (2021-06-02T12:09:55Z)
- Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
- Regularization of Inverse Problems by Neural Networks [0.0]
Inverse problems arise in a variety of imaging applications including computed tomography, non-destructive testing, and remote sensing.
The characteristic features of inverse problems are the non-uniqueness and instability of their solutions.
Deep learning techniques and neural networks have been demonstrated to significantly outperform classical solution methods for inverse problems.
arXiv Detail & Related papers (2020-06-06T20:49:12Z)
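The instability that recurs throughout these papers is easy to reproduce: with an ill-conditioned forward operator, data noise far below any visible level is massively amplified by a naive solve, while even the simplest classical regularizer (Tikhonov) restores stability. The example below is generic and not tied to any specific paper above; the matrix, noise level, and regularization weight are illustrative choices.

```python
import numpy as np

# Hilbert matrices are a textbook ill-conditioned forward operator:
# cond(hilbert(10)) is on the order of 1e13, so a ~1e-8 perturbation
# of the data can change the naive solution by orders of magnitude.
def hilbert(n):
    i = np.arange(n)
    return 1.0 / (i[:, None] + i[None, :] + 1.0)

n = 10
A = hilbert(n)
x_true = np.ones(n)
b = A @ x_true

rng = np.random.default_rng(1)
b_noisy = b + 1e-8 * rng.standard_normal(n)   # practically invisible noise

# Naive solve: the noise is amplified by the tiny singular values of A.
x_naive = np.linalg.solve(A, b_noisy)

# Tikhonov regularization:  min_x ||A x - b||^2 + lam * ||x||^2
lam = 1e-6
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b_noisy)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
print(f"naive error    : {err_naive:.2e}")
print(f"Tikhonov error : {err_tik:.2e}")
```

The neural regularizers surveyed above play the same stabilizing role as the `lam * ||x||^2` term here, but replace it with a learned, image-adapted penalty.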
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.