Benchmarking deep inverse models over time, and the neural-adjoint method
- URL: http://arxiv.org/abs/2009.12919v4
- Date: Mon, 11 Oct 2021 19:04:33 GMT
- Title: Benchmarking deep inverse models over time, and the neural-adjoint method
- Authors: Simiao Ren, Willie Padilla, Jordan Malof
- Abstract summary: We consider the task of solving generic inverse problems, where one wishes to determine the hidden parameters of a natural system.
We conceptualize these models as different schemes for efficiently, but randomly, exploring the space of possible inverse solutions.
We compare several state-of-the-art inverse modeling approaches on four benchmark tasks.
- Score: 3.4376560669160394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the task of solving generic inverse problems, where one wishes to
determine the hidden parameters of a natural system that will give rise to a
particular set of measurements. Recently many new approaches based upon deep
learning have arisen generating impressive results. We conceptualize these
models as different schemes for efficiently, but randomly, exploring the space
of possible inverse solutions. As a result, the accuracy of each approach
should be evaluated as a function of time rather than a single estimated
solution, as is often done now. Using this metric, we compare several
state-of-the-art inverse modeling approaches on four benchmark tasks: two
existing tasks, one simple task for visualization and one new task from
metamaterial design. Finally, inspired by our conception of the inverse
problem, we explore a solution that uses a deep learning model to approximate
the forward model, and then uses backpropagation to search for good inverse
solutions. This approach, termed the neural-adjoint, achieves the best
performance in many scenarios.
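The neural-adjoint idea described above can be sketched in a few lines: fit a differentiable surrogate of the forward model, freeze its weights, and backpropagate a measurement-matching loss to the input itself, restarting from several random initializations. Everything below (the network architecture, target, and hyperparameters) is an illustrative assumption, not the paper's actual setup; in particular, the surrogate here is an untrained toy network standing in for one trained on real (x, y) pairs.

```python
import torch

torch.manual_seed(0)

# Hypothetical surrogate of the forward model: a small MLP mapping
# hidden parameters x (2-d) to a scalar measurement y. In practice the
# surrogate is first trained on (x, y) pairs from the true forward
# process; here an untrained network stands in as a toy forward model.
forward_model = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)
for p in forward_model.parameters():
    p.requires_grad_(False)  # freeze the surrogate during the search

# A reachable target measurement (produced by a known input).
y_target = forward_model(torch.tensor([[0.5, -0.3]]))

# Neural-adjoint search: backpropagate through the frozen surrogate to
# refine randomly initialized candidates; keep the best of several
# restarts, matching the paper's view of inverse solvers as schemes for
# randomly exploring the space of possible solutions.
best_x, best_loss = None, float("inf")
for _ in range(8):
    x = torch.randn(1, 2, requires_grad=True)
    opt = torch.optim.Adam([x], lr=0.05)
    for _ in range(500):
        opt.zero_grad()
        loss = ((forward_model(x) - y_target) ** 2).mean()
        loss.backward()  # gradient flows to x only; weights stay fixed
        opt.step()
    if loss.item() < best_loss:
        best_loss, best_x = loss.item(), x.detach()
```

Because accuracy is evaluated as a function of time, each additional restart trades computation for a better chance of a low-error solution.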
Related papers
- Inverse Problems with Diffusion Models: A MAP Estimation Perspective [5.002087490888723]
In computer vision, several image restoration tasks such as inpainting, deblurring, and super-resolution can be formally modeled as inverse problems.
We propose a MAP estimation framework to model the reverse conditional generation process of a continuous time diffusion model.
We use our proposed framework to develop effective algorithms for image restoration.
arXiv Detail & Related papers (2024-07-27T15:41:13Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Accelerating Inverse Learning via Intelligent Localization with Exploratory Sampling [1.5976506570992293]
Solving inverse problems is a longstanding challenge in materials and drug discovery.
Deep generative models have recently been proposed to solve inverse problems.
We propose a novel approach (called iPage) to accelerate the inverse learning process.
arXiv Detail & Related papers (2022-12-02T08:00:04Z)
- Mixture Manifold Networks: A Computationally Efficient Baseline for Inverse Modeling [7.891408798179181]
We propose and show the efficacy of a new method to address generic inverse problems.
Recent work has shown impressive results using deep learning, but we note that there is a trade-off between model performance and computational time.
arXiv Detail & Related papers (2022-11-25T20:18:07Z)
- Pareto Set Learning for Neural Multi-objective Combinatorial Optimization [6.091096843566857]
Multiobjective combinatorial optimization (MOCO) problems can be found in many real-world applications.
We develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure.
Our proposed method significantly outperforms other methods on the multiobjective traveling salesman problem, multiobjective vehicle routing problem, and multiobjective knapsack problem in terms of solution quality, speed, and model efficiency.
arXiv Detail & Related papers (2022-03-29T09:26:22Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
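The proxy-model search this entry describes can be illustrated with a toy sketch. RoMA's actual robust model adaptation is not reproduced here; instead, a much simpler anchored gradient ascent stands in for it, showing the basic loop (fit a proxy on the static dataset, then ascend the proxy) and one crude guard against drifting into regions where the proxy extrapolates adversarially. The dataset, quadratic proxy, and anchor penalty are all illustrative assumptions.

```python
import numpy as np

# Toy static dataset of input-output queries from a hidden objective
# with its maximum at x = 1.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = -(X[:, 0] - 1.0) ** 2

# Proxy model: a quadratic fit standing in for a neural surrogate.
coeffs = np.polyfit(X[:, 0], y, deg=2)
proxy = np.poly1d(coeffs)
proxy_grad = proxy.deriv()

# Gradient ascent on the proxy, anchored to the best observed input
# so the search cannot wander far from the data the proxy was fit on.
x = float(X[np.argmax(y), 0])
anchor = x
for _ in range(100):
    x += 0.05 * (proxy_grad(x) - 0.1 * (x - anchor))
```

Without the anchor term, ascending an imperfect proxy can produce inputs that score well under the proxy but poorly under the true objective, which is the adversarial-input failure mode the entry refers to.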
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Meta-learning One-class Classifiers with Eigenvalue Solvers for Supervised Anomaly Detection [55.888835686183995]
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
arXiv Detail & Related papers (2021-03-01T01:43:04Z)
- Regularization via deep generative models: an analysis point of view [8.818465117061205]
This paper proposes a new way of regularizing an inverse problem in imaging (e.g., deblurring or inpainting) by means of a deep generative neural network.
In many cases our technique achieves a clear performance improvement and appears to be more robust.
arXiv Detail & Related papers (2021-01-21T15:04:57Z)
- Deep Feedback Inverse Problem Solver [141.26041463617963]
We present an efficient, effective, and generic approach towards solving inverse problems.
We leverage the feedback signal provided by the forward process and learn an iterative update model.
Our approach does not have any restrictions on the forward process; it does not require any prior knowledge either.
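The iterative scheme this entry describes, updating a candidate solution from the feedback signal of the forward process, can be sketched on a toy problem. The paper learns the update model; here a fixed gain matrix stands in for that learned model, and a hand-picked linear map stands in for the forward process. Both are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy forward process: a fixed linear map from parameters to measurements.
A = np.array([[2.0, 0.5],
              [0.0, 1.5]])

def forward(x):
    return A @ x

x_true = np.array([1.0, -2.0])
y_target = forward(x_true)

# The paper trains a neural update model on feedback residuals; a fixed
# gain matrix G approximates that role in this sketch.
G = 0.3 * np.eye(2)

x = np.zeros(2)
for _ in range(200):
    residual = y_target - forward(x)  # feedback from the forward process
    x = x + G @ residual              # iterative update toward the target
```

Each iteration queries the forward process and corrects the estimate from the residual, so no prior knowledge of (or restriction on) the forward process's internals is needed beyond the ability to evaluate it.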
arXiv Detail & Related papers (2021-01-19T16:49:06Z)
- Sequential Transfer in Reinforcement Learning with a Generative Model [48.40219742217783]
We show how to reduce the sample complexity for learning new tasks by transferring knowledge from previously-solved ones.
We derive PAC bounds on its sample complexity which clearly demonstrate the benefits of using this kind of prior knowledge.
We empirically verify our theoretical findings in simple simulated domains.
arXiv Detail & Related papers (2020-07-01T19:53:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.