Paired Autoencoders for Inverse Problems
- URL: http://arxiv.org/abs/2405.13220v1
- Date: Tue, 21 May 2024 22:00:34 GMT
- Title: Paired Autoencoders for Inverse Problems
- Authors: Matthias Chung, Emma Hart, Julianne Chung, Bas Peters, Eldad Haber
- Abstract summary: We consider the solution of nonlinear inverse problems where the forward problem is a discretization of a partial differential equation.
The main computational bottleneck of typical algorithms is the direct estimation of the data misfit.
We use a paired autoencoder framework as a likelihood-free estimator for inverse problems.
- Score: 3.355436702348694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the solution of nonlinear inverse problems where the forward problem is a discretization of a partial differential equation. Such problems are notoriously difficult to solve in practice and require minimizing a combination of a data-fit term and a regularization term. The main computational bottleneck of typical algorithms is the direct estimation of the data misfit. Therefore, likelihood-free approaches have become appealing alternatives. Nonetheless, difficulties in generalization and limitations in accuracy have hindered their broader utility and applicability. In this work, we use a paired autoencoder framework as a likelihood-free estimator for inverse problems. We show that the use of such an architecture allows us to construct a solution efficiently and to overcome some known open problems when using likelihood-free estimators. In particular, our framework can assess the quality of the solution and improve on it if needed. We demonstrate the viability of our approach using examples from full waveform inversion and inverse electromagnetic imaging.
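The abstract's construction can be made concrete with a small sketch. The following is a minimal, illustrative reading of the paired-autoencoder idea, assuming PyTorch: one autoencoder for the model parameters, one for the observed data, and a learned map between their latent spaces. Layer sizes, the linear latent map, and the loss weighting are placeholder choices, not the authors' exact architecture.

```python
# A minimal sketch of a paired-autoencoder surrogate for an inverse problem,
# assuming PyTorch. Layer sizes, the linear latent map, and the training loss
# are illustrative placeholders, not the paper's exact architecture.
import torch
import torch.nn as nn

def mlp(din, dhid, dout):
    return nn.Sequential(nn.Linear(din, dhid), nn.ReLU(), nn.Linear(dhid, dout))

class PairedAutoencoder(nn.Module):
    def __init__(self, dim_x, dim_y, dim_latent=32, dim_hidden=128):
        super().__init__()
        # One autoencoder for model parameters x, one for observations y.
        self.enc_x = mlp(dim_x, dim_hidden, dim_latent)
        self.dec_x = mlp(dim_latent, dim_hidden, dim_x)
        self.enc_y = mlp(dim_y, dim_hidden, dim_latent)
        self.dec_y = mlp(dim_latent, dim_hidden, dim_y)
        # Map between the two latent spaces (linear here for simplicity).
        self.y_to_x = nn.Linear(dim_latent, dim_latent)

    def invert(self, y):
        # Likelihood-free inversion: encode data, hop latent spaces, decode model.
        return self.dec_x(self.y_to_x(self.enc_y(y)))

def training_loss(model, x, y):
    # Joint loss: both reconstructions plus latent-space consistency.
    zx, zy = model.enc_x(x), model.enc_y(y)
    loss = ((model.dec_x(zx) - x) ** 2).mean()      # x autoencoder
    loss += ((model.dec_y(zy) - y) ** 2).mean()     # y autoencoder
    loss += ((model.y_to_x(zy) - zx) ** 2).mean()   # pair the latents
    return loss
```

With such a pairing, inversion never evaluates the PDE forward operator, which matches the abstract's likelihood-free framing. One plausible quality check, in the spirit of the abstract's claim that the framework can assess its own solutions, is to compare y against its reconstruction dec_y(enc_y(y)) and distrust inversions for data the autoencoder reconstructs poorly.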
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
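To make the mechanism concrete, here is a toy, single-worker sketch of error feedback with a normalized step, assuming NumPy; the top-k compressor, quadratic objective, and step-size schedule are illustrative, not the paper's algorithm.

```python
# A toy, single-worker sketch of normalized error feedback, assuming NumPy.
# The top-k compressor, step schedule, and quadratic objective are illustrative.
import numpy as np

def top_k(v, k):
    # Keep the k largest-magnitude entries; a standard contractive compressor.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def normalized_error_feedback(grad, x0, steps=200, lr=0.1, k=2):
    x, e = x0.copy(), np.zeros_like(x0)
    for t in range(steps):
        g = grad(x)
        m = top_k(e + g, k)          # transmit only a compressed message
        e = e + g - m                # error feedback: remember what was lost
        n = np.linalg.norm(m)
        if n > 0:
            # Normalized update: only the direction of m is used, with a
            # diminishing step so the iterates settle near the optimum.
            x = x - (lr / np.sqrt(t + 1)) * m / n
    return x

# Example: minimize a simple quadratic with gradient A @ x.
A = np.diag([1.0, 5.0, 10.0])
print(normalized_error_feedback(lambda x: A @ x, np.ones(3)))
```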
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- A Guide to Stochastic Optimisation for Large-Scale Inverse Problems [4.926711494319977]
Stochastic optimisation algorithms are the de facto standard for machine learning with large amounts of data.
We provide a comprehensive account of the state-of-the-art in optimisation from the viewpoint of inverse problems.
We focus on the challenges for optimisation that are unique and are not commonly encountered in machine learning.
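As a concrete instance of the stochastic viewpoint, the following sketch applies SGD to a linear inverse problem, subsampling the data-fit term at each step; the problem sizes and conservative step size are illustrative, assuming NumPy.

```python
# A minimal sketch of stochastic gradient descent on a linear inverse problem
# min_x 0.5 * ||A x - b||^2, sampling a random subset of measurements per step.
# Problem sizes and the (conservative) step size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m, n = 500, 100
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x = np.zeros(n)
batch, lr = 20, 1e-4
for it in range(5000):
    rows = rng.choice(m, size=batch, replace=False)
    # Unbiased estimate of the full data-misfit gradient from a subset.
    g = (m / batch) * A[rows].T @ (A[rows] @ x - b[rows])
    x -= lr * g

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```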
arXiv Detail & Related papers (2024-06-10T15:02:30Z)
- Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces [47.907236421762626]
This work studies discrete-time discounted Markov decision processes with continuous state and action spaces.
We first consider the case in which we have access to the entire expert policy and characterize the set of solutions to the inverse problem.
arXiv Detail & Related papers (2024-05-24T12:53:07Z)
- Instance-Dependent Confidence and Early Stopping for Reinforcement Learning [99.57168572237421]
Various algorithms for reinforcement learning (RL) exhibit dramatic variation in their convergence rates as a function of problem structure.
This research provides guarantees that explain, ex post, the performance differences observed.
A natural next step is to convert these theoretical guarantees into guidelines that are useful in practice.
arXiv Detail & Related papers (2022-01-21T04:25:35Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
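One way a "sound" perturbation can work, sketched here for SAT as an illustrative choice (not necessarily the paper's benchmark): appending a clause that a known satisfying assignment already satisfies provably preserves the label, so no solver call is needed to certify the perturbed instance.

```python
# A hedged illustration of a "sound" perturbation for a combinatorial problem:
# for SAT, appending a clause that a known satisfying assignment already
# satisfies cannot flip the label. Instance encoding and clause choice are
# illustrative.
import random

def sound_sat_perturbation(clauses, assignment, rng=random.Random(0)):
    """clauses: list of tuples of nonzero ints (DIMACS-style literals).
    assignment: dict var -> bool, satisfying every clause."""
    variables = sorted(assignment)
    v = rng.choice(variables)
    # Build a new clause containing a literal made true by the assignment,
    # padded with arbitrary literals; the label (SAT) provably survives.
    true_lit = v if assignment[v] else -v
    filler = [rng.choice([u, -u]) for u in rng.sample(variables, 2)]
    return clauses + [tuple([true_lit] + filler)]

clauses = [(1, -2), (2, 3)]
assignment = {1: True, 2: True, 3: False}
print(sound_sat_perturbation(clauses, assignment))
```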
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Learning Hard Optimization Problems: A Data Generation Perspective [44.4747903763245]
This paper connects the volatility of the training data to the ability of a model to approximate it.
It proposes a method for producing (exact or approximate) solutions to optimization problems that are amenable to supervised tasks.
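A minimal sketch of that pipeline, with 0/1 knapsack as an illustrative stand-in for the paper's problem classes: sample instances, solve them exactly offline, and keep (instance, solution) pairs as supervised training data.

```python
# Generate supervised data for a hard optimization problem by solving sampled
# instances exactly offline. The knapsack problem and DP solver here are
# illustrative stand-ins, not the paper's specific problem classes.
import random

def solve_knapsack(values, weights, capacity):
    # Exact dynamic program over capacities; returns the optimal value.
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

def make_dataset(n_instances=100, n_items=10, rng=random.Random(0)):
    data = []
    for _ in range(n_instances):
        values = [rng.randint(1, 20) for _ in range(n_items)]
        weights = [rng.randint(1, 10) for _ in range(n_items)]
        capacity = sum(weights) // 2
        label = solve_knapsack(values, weights, capacity)
        data.append(((values, weights, capacity), label))
    return data

print(make_dataset(n_instances=2))
```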
arXiv Detail & Related papers (2021-06-04T17:03:44Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
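The conditioned-NML idea behind this can be sketched at toy scale: refit the model once per candidate label for the query point, with that label appended to the dataset, then normalize the resulting likelihoods over candidates. Ridge regression, the Gaussian likelihood, and the discretized label grid below are illustrative simplifications of the paper's neural-network setting.

```python
# A toy sketch of a conditioned-NML distribution: one refit per candidate
# label, then normalization. Ridge regression and the label grid are
# illustrative; the paper's tractable approximation targets neural networks.
import numpy as np

def cnml_distribution(X, y, x_query, label_grid, lam=1e-2, sigma=0.5):
    probs = []
    Xq = np.vstack([X, x_query])
    for y_cand in label_grid:
        yq = np.append(y, y_cand)
        # Closed-form ridge fit on the augmented dataset.
        w = np.linalg.solve(Xq.T @ Xq + lam * np.eye(X.shape[1]), Xq.T @ yq)
        pred = x_query @ w
        probs.append(np.exp(-0.5 * ((y_cand - pred) / sigma) ** 2))
    probs = np.asarray(probs)
    return probs / probs.sum()       # the normalization step of NML

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(30)
grid = np.linspace(-5, 5, 41)
p = cnml_distribution(X, y, rng.standard_normal(3), grid)
print(grid[np.argmax(p)])
```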
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Consistency analysis of bilevel data-driven learning in inverse problems [1.0705399532413618]
We consider the adaptive learning of the regularization parameter from data by means of bilevel optimization.
We demonstrate how to implement our framework on linear inverse problems.
Online numerical schemes are derived using the gradient descent method.
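A minimal sketch of such a scheme for Tikhonov regularization, assuming NumPy: the lower-level solution is available in closed form, so the hypergradient follows from implicit differentiation; the problem sizes and step size are illustrative.

```python
# Learn a Tikhonov regularization parameter by (bilevel) gradient descent.
# The lower-level problem min_x 0.5||Ax - y||^2 + 0.5*lam*||x||^2 has a
# closed-form solution, so the hypergradient is analytic. Sizes are
# illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((60, n))
x_true = rng.standard_normal(n)
y = A @ x_true + 0.1 * rng.standard_normal(60)

AtA, Aty = A.T @ A, A.T @ y
theta = 0.0                                    # lam = exp(theta) stays positive
for _ in range(200):
    lam = np.exp(theta)
    H = AtA + lam * np.eye(n)
    x = np.linalg.solve(H, Aty)                # lower level: Tikhonov solution
    dx_dlam = -np.linalg.solve(H, x)           # implicit differentiation
    grad_theta = (x - x_true) @ dx_dlam * lam  # chain rule through exp
    theta -= 0.5 * grad_theta                  # upper level: gradient descent
print("learned lambda:", np.exp(theta))
```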
arXiv Detail & Related papers (2020-07-06T12:23:29Z)
- Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
- Inexact Derivative-Free Optimization for Bilevel Learning [0.27074235008521236]
Variational regularization techniques are dominant in the field of mathematical imaging, but they depend on parameters whose values must be chosen in advance.
A now-common strategy to resolve this issue is to learn these parameters from data.
It is common when solving the upper-level problem to assume access to exact solutions of the lower-level problem, which is practically infeasible.
We propose to solve these problems using inexact derivative-free optimization algorithms which never require exact lower-level problem solutions.
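To illustrate, here is a hedged sketch in which a derivative-free pattern search tunes a smoothing parameter while the lower-level denoising problem is solved only inexactly (a few gradient steps), with accuracy tightened as the search narrows; the denoising problem and accuracy schedule are illustrative choices, not the paper's algorithm.

```python
# Derivative-free upper-level search with inexact lower-level solves: early
# iterations use cheap, rough solves; accuracy grows as the search narrows.
# The denoising problem and schedule are illustrative, assuming NumPy.
import numpy as np

rng = np.random.default_rng(2)
x_clean = np.sin(np.linspace(0, 3 * np.pi, 100))
y_noisy = x_clean + 0.3 * rng.standard_normal(100)

def lower_level(lam, n_iters):
    # Inexact solve of min_x 0.5||x - y||^2 + 0.5*lam*||D x||^2 by gradient
    # descent, where D is a first-difference operator (a smoothing prior).
    D = np.diff(np.eye(100), axis=0)
    x = y_noisy.copy()
    L = 1.0 + lam * 4.0                       # rough Lipschitz bound for step
    for _ in range(n_iters):
        x -= (1.0 / L) * ((x - y_noisy) + lam * D.T @ (D @ x))
    return x

def upper_loss(lam, n_iters):
    return np.sum((lower_level(lam, n_iters) - x_clean) ** 2)

# Derivative-free pattern search over log10(lam).
t, step, n_iters = 0.0, 1.0, 10
while step > 1e-2:
    trials = [t - step, t, t + step]
    best = min(trials, key=lambda s: upper_loss(10.0 ** s, n_iters))
    if best == t:
        step /= 2.0
        n_iters *= 2                          # tighten the inexact solves
    else:
        t = best
print("learned lambda ~", 10.0 ** t)
```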
arXiv Detail & Related papers (2020-06-23T00:17:32Z)