Deep neural networks can stably solve high-dimensional, noisy,
non-linear inverse problems
- URL: http://arxiv.org/abs/2206.00934v5
- Date: Fri, 20 Oct 2023 14:06:30 GMT
- Title: Deep neural networks can stably solve high-dimensional, noisy,
non-linear inverse problems
- Authors: Andrés Felipe Lerma Pineda and Philipp Christian Petersen
- Abstract summary: We study the problem of reconstructing solutions of inverse problems when only noisy measurements are available.
For the inverse operator, we demonstrate that there exists a neural network which is a robust-to-noise approximation of the operator.
- Score: 2.6651200086513107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of reconstructing solutions of inverse problems when
only noisy measurements are available. We assume that the problem can be
modeled with an infinite-dimensional forward operator that is not continuously
invertible. Then, we restrict this forward operator to finite-dimensional
spaces so that the inverse is Lipschitz continuous. For the inverse operator,
we demonstrate that there exists a neural network which is a robust-to-noise
approximation of the operator. In addition, we show that these neural networks
can be learned from appropriately perturbed training data. We demonstrate the
admissibility of this approach to a wide range of inverse problems of practical
interest. Numerical examples are given that support the theoretical findings.
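As a concrete illustration of the learning scheme described in the abstract, the sketch below trains a small network to invert a toy non-linear forward operator from noisy measurements. Everything here (the operator forward_op, the noise level sigma, the architecture) is an illustrative assumption, not the authors' construction:
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim_x, dim_y, sigma = 8, 8, 0.05                 # assumed dimensions and noise level
A = torch.randn(dim_x, dim_y) / dim_x ** 0.5

def forward_op(x):
    # Toy smooth non-linear forward operator on a finite-dimensional space.
    return torch.tanh(x @ A)

# Network trained to approximate the inverse from perturbed measurements.
net = nn.Sequential(nn.Linear(dim_y, 64), nn.ReLU(), nn.Linear(64, dim_x))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.randn(256, dim_x)                          # ground-truth samples
    y = forward_op(x) + sigma * torch.randn(256, dim_y)  # noisy measurements
    loss = ((net(y) - x) ** 2).mean()                    # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()
```
Training on perturbed pairs (y, x) rather than clean ones is what the abstract credits with making the learned inverse robust to measurement noise.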
Related papers
- Learning truly monotone operators with applications to nonlinear inverse problems [15.736235440441478]
This article introduces a novel approach to learning monotone neural networks through a newly defined penalization loss.
The Forward-Backward-Forward (FBF) algorithm is employed to address monotone inclusion problems.
We then show simulation examples where the non-linear inverse problem is successfully solved.
arXiv Detail & Related papers (2024-03-30T15:03:52Z)
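The summary above does not give the penalization loss; a minimal sketch of one plausible form, penalizing violations of the monotonicity condition <F(x1) - F(x2), x1 - x2> >= 0 on sampled pairs, is:
```python
import torch
import torch.nn as nn

# Assumed stand-in for the learned operator network.
F = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 4))

def monotonicity_penalty(F, x1, x2):
    # Penalize pairs where <F(x1) - F(x2), x1 - x2> is negative.
    inner = ((F(x1) - F(x2)) * (x1 - x2)).sum(dim=1)
    return torch.relu(-inner).mean()

x1, x2 = torch.randn(128, 4), torch.randn(128, 4)
penalty = monotonicity_penalty(F, x1, x2)  # added to the training loss with some weight
```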
- Convergence Guarantees of Overparametrized Wide Deep Inverse Prior [1.5362025549031046]
The Deep Inverse Prior is an unsupervised approach that transforms a random input into an object whose image under the forward model matches the observation.
We provide overparametrization bounds under which such a network trained via continuous-time gradient descent converges exponentially fast with high probability.
This work is thus a first step towards a theoretical understanding of overparametrized DIP networks, and more broadly it contributes to the theoretical understanding of neural networks in inverse problem settings.
arXiv Detail & Related papers (2023-03-20T16:49:40Z)
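A minimal deep-inverse-prior loop in the spirit of this summary, assuming a known linear forward model A and a fixed random input z; only the generator's weights are optimized so that the generated object matches the observation through A:
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
A = torch.randn(16, 32)                      # assumed linear forward model
x_true = torch.randn(32)
y_obs = A @ x_true                           # observation to be matched

z = torch.randn(8)                           # fixed random input
G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

for step in range(3000):                     # gradient descent on the network only
    loss = ((A @ G(z) - y_obs) ** 2).sum()   # match the observation through A
    opt.zero_grad(); loss.backward(); opt.step()
```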
- Energy Regularized RNNs for Solving Non-Stationary Bandit Problems [97.72614340294547]
We present an energy term that prevents the neural network from becoming too confident in support of a certain action.
We demonstrate that our method is at least as effective as methods suggested to solve the sub-problem of Rotting Bandits.
arXiv Detail & Related papers (2023-03-12T03:32:43Z)
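The summary does not specify the energy term; as a purely hypothetical stand-in, the sketch below uses a negative-entropy penalty, a standard way to keep a network from placing almost all probability on a single action:
```python
import torch

def confidence_penalty(logits):
    # Negative entropy of the action distribution: large when the model
    # is overconfident in one action, small when probability is spread out.
    log_p = torch.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)
    return -entropy.mean()

logits = torch.randn(32, 5)           # e.g. RNN outputs over 5 arms
penalty = confidence_penalty(logits)  # add to the bandit loss with a small weight
```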
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z)
- GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration [64.8770356696056]
We propose GibbsDDRM, an extension of Denoising Diffusion Restoration Models (DDRM) to a blind setting in which the linear measurement operator is unknown.
The proposed method is problem-agnostic, meaning that a pre-trained diffusion model can be applied to various inverse problems without fine-tuning.
arXiv Detail & Related papers (2023-01-30T06:27:48Z)
- Semi-supervised Invertible DeepONets for Bayesian Inverse Problems [8.594140167290098]
DeepONets offer a powerful, data-driven tool for solving parametric PDEs by learning operators.
In this work, we employ physics-informed DeepONets in the context of high-dimensional, Bayesian inverse problems.
arXiv Detail & Related papers (2022-09-06T18:55:06Z)
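For context, the standard DeepONet construction learns an operator as a dot product between a branch net (encoding the input function sampled at fixed sensor locations) and a trunk net (encoding the query point); the sizes below are illustrative assumptions:
```python
import torch
import torch.nn as nn

n_sensors, p = 50, 32                 # assumed sensor count and latent width
branch = nn.Sequential(nn.Linear(n_sensors, 64), nn.ReLU(), nn.Linear(64, p))
trunk = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, p))

def deeponet(u_sensors, y):
    # G(u)(y) ~ sum_k b_k(u) * t_k(y)
    return (branch(u_sensors) * trunk(y)).sum(dim=-1)

u = torch.randn(8, n_sensors)         # batch of input functions at the sensors
y = torch.rand(8, 1)                  # query locations
out = deeponet(u, y)                  # predicted G(u)(y), shape (8,)
```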
- Limitations of Deep Learning for Inverse Problems on Digital Hardware [65.26723285209853]
We analyze what can actually be computed on current hardware platforms modeled as Turing machines.
We prove that finite-dimensional inverse problems are not Banach-Mazur computable for small relaxation parameters.
arXiv Detail & Related papers (2022-02-28T00:20:12Z)
- Verifying Inverse Model Neural Networks [39.4062479625023]
Inverse problems exist in a wide variety of physical domains from aerospace engineering to medical imaging.
We introduce a method for verifying the correctness of inverse model neural networks.
arXiv Detail & Related papers (2022-02-04T23:13:22Z)
- DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
arXiv Detail & Related papers (2021-06-16T20:43:49Z)
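The splitting idea, decomposing a convex problem into subproblems with analytical solutions, can be seen on a toy lasso solved by ADMM, where both subproblems are closed-form; this is a generic illustration, not the paper's verification-specific formulation:
```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
lam, rho = 0.1, 1.0                          # assumed penalty and ADMM parameter

# Precompute the factor for the quadratic subproblem (analytical solution).
M = np.linalg.inv(A.T @ A + rho * np.eye(10))
x = z = u = np.zeros(10)

for _ in range(200):
    x = M @ (A.T @ b + rho * (z - u))        # quadratic subproblem: closed form
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-thresholding
    u = u + x - z                            # dual update
```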
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Revisiting the Continuity of Rotation Representations in Neural Networks [14.63787408331962]
We analyze certain pathological behavior of Euler angles and unit quaternions encountered in previous works related to rotation representation in neural networks.
We show that this behavior is inherent in the topological property of the problem itself and is not caused by unsuitable network architectures or training procedures.
arXiv Detail & Related papers (2020-06-11T07:28:15Z)
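For context, one widely used workaround for the topological obstruction discussed above is the continuous 6D rotation representation of Zhou et al. (2019), which maps two 3-vectors to a rotation matrix by Gram-Schmidt orthonormalization; a small illustrative sketch:
```python
import numpy as np

def rotation_from_6d(a, b):
    # Gram-Schmidt two 3-vectors into an orthonormal, right-handed frame.
    x = a / np.linalg.norm(a)
    y = b - (x @ b) * x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)
    return np.stack([x, y, z], axis=1)   # columns form the rotation matrix

R = rotation_from_6d(np.array([1.0, 0.2, 0.0]), np.array([0.0, 1.0, 0.3]))
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```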