Accelerating Inverse Learning via Intelligent Localization with
Exploratory Sampling
- URL: http://arxiv.org/abs/2212.01016v1
- Date: Fri, 2 Dec 2022 08:00:04 GMT
- Title: Accelerating Inverse Learning via Intelligent Localization with
Exploratory Sampling
- Authors: Jiaxin Zhang, Sirui Bi, Victor Fung
- Abstract summary: Solving inverse problems is a longstanding challenge in materials and drug discovery.
Deep generative models have recently been proposed to solve inverse problems.
We propose a novel approach (called iPage) to accelerate the inverse learning process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the scope of "AI for Science", solving inverse problems is a longstanding
challenge in materials and drug discovery, where the goal is to determine the
hidden structures given a set of desirable properties. Deep generative models
have recently been proposed to solve inverse problems, but current approaches
rely on expensive forward operators and struggle to precisely localize the
exact solutions and to fully explore the parameter space without missing solutions.
In this work, we propose a novel approach (called iPage) to accelerate the
inverse learning process by leveraging probabilistic inference from deep
invertible models and deterministic optimization via fast gradient descent.
Given a target property, the learned invertible model provides a posterior over
the parameter space; we identify these posterior samples as an intelligent
prior initialization which enables us to narrow down the search space. We then
perform gradient descent to calibrate the inverse solutions within a local
region. Meanwhile, a space-filling sampling is imposed on the latent space to
better explore and capture all possible solutions. We evaluate our approach on
three benchmark tasks and two created datasets with real-world applications
from quantum chemistry and additive manufacturing, and find our method achieves
superior performance compared to several state-of-the-art baseline methods. The
iPage code is available at https://github.com/jxzhangjhu/MatDesINNe.
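The pipeline described in the abstract can be sketched end to end on a toy problem. The snippet below is a minimal illustration, not the iPage implementation: a 1-D forward model f(x) = x² stands in for the expensive forward operator, the learned invertible model's posterior is mocked by an analytic inverse plus noise, stratified latent samples play the role of the space-filling design, and gradient descent refines each initialization into a local region. All function names here (`toy_posterior_samples`, `refine_by_gradient_descent`) are illustrative assumptions.

```python
import numpy as np

def forward(x):
    """Toy forward operator mapping structure x to property y."""
    return x ** 2

def toy_posterior_samples(y_target, n, rng):
    """Stand-in for the invertible model's posterior: stratified
    (space-filling) latent samples mapped near both solution branches,
    so that no mode of the inverse problem is missed."""
    z = (np.arange(n) + rng.random(n)) / n      # stratified samples in [0, 1)
    sign = np.where(z < 0.5, -1.0, 1.0)         # cover both solution modes
    return sign * (np.sqrt(y_target) + 0.1 * rng.standard_normal(n))

def refine_by_gradient_descent(x0, y_target, lr=0.05, steps=200):
    """Localize exact solutions: minimize (f(x) - y_target)^2 starting
    from the intelligent posterior initialization x0."""
    x = x0.copy()
    for _ in range(steps):
        grad = 2.0 * (forward(x) - y_target) * 2.0 * x  # d/dx (x^2 - y)^2
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
y_target = 4.0                                   # desired property value
x_init = toy_posterior_samples(y_target, 8, rng) # narrow down the search space
x_sol = refine_by_gradient_descent(x_init, y_target)
print(np.round(x_sol, 3))                        # both branches near -2 and +2
```

Because the posterior-style initializations already sit near the true solutions, only a short local descent is needed, and the stratified signs ensure both solution branches are recovered rather than collapsing to one mode.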
Related papers
- Optimization by Parallel Quasi-Quantum Annealing with Gradient-Based Sampling [0.0]
This study proposes a different approach that integrates gradient-based updates through continuous relaxation, combined with Quasi-Quantum Annealing (QQA).
Numerical experiments demonstrate that our method is a competitive general-purpose solver, achieving performance comparable to iSCO and learning-based solvers.
arXiv Detail & Related papers (2024-09-02T12:55:27Z)
- Reverse Engineering Deep ReLU Networks: An Optimization-based Algorithm [0.0]
We present a novel method for reconstructing deep ReLU networks by leveraging convex optimization techniques and a sampling-based approach.
Our research contributes to the growing body of work on reverse engineering deep ReLU networks and paves the way for new advancements in neural network interpretability and security.
arXiv Detail & Related papers (2023-12-07T20:15:06Z)
- Enhanced Exploration in Neural Feature Selection for Deep Click-Through Rate Prediction Models via Ensemble of Gating Layers [7.381829794276824]
The goal of neural feature selection (NFS) is to choose a relatively small subset of features with the best explanatory power.
The gating approach inserts a set of differentiable binary gates to drop less informative features.
To improve the exploration capacity of gradient-based solutions, we propose a simple but effective ensemble learning approach.
arXiv Detail & Related papers (2021-12-07T04:37:05Z)
- MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning [65.52675802289775]
We show that an uncertainty-aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
arXiv Detail & Related papers (2021-07-15T08:19:57Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Regressive Domain Adaptation for Unsupervised Keypoint Detection [67.2950306888855]
Domain adaptation (DA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain.
We present a method of regressive domain adaptation (RegDA) for unsupervised keypoint detection.
Our method brings a large improvement of 8% to 11% in terms of PCK on different datasets.
arXiv Detail & Related papers (2021-03-10T16:45:22Z)
- Intermediate Layer Optimization for Inverse Problems using Deep Generative Models [86.29330440222199]
ILO is a novel optimization algorithm for solving inverse problems with deep generative models.
We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN-2 and PULSE for a wide range of inverse problems.
arXiv Detail & Related papers (2021-02-15T06:52:22Z)
- Benchmarking deep inverse models over time, and the neural-adjoint method [3.4376560669160394]
We consider the task of solving generic inverse problems, where one wishes to determine the hidden parameters of a natural system.
We conceptualize these models as different schemes for efficiently, but randomly, exploring the space of possible inverse solutions.
We compare several state-of-the-art inverse modeling approaches on four benchmark tasks.
arXiv Detail & Related papers (2020-09-27T18:32:06Z)
- Sequential Transfer in Reinforcement Learning with a Generative Model [48.40219742217783]
We show how to reduce the sample complexity for learning new tasks by transferring knowledge from previously-solved ones.
We derive PAC bounds on its sample complexity which clearly demonstrate the benefits of using this kind of prior knowledge.
We empirically verify our theoretical findings in simple simulated domains.
arXiv Detail & Related papers (2020-07-01T19:53:35Z)
- Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.