Data-informed Deep Optimization
- URL: http://arxiv.org/abs/2107.08166v1
- Date: Sat, 17 Jul 2021 02:53:54 GMT
- Title: Data-informed Deep Optimization
- Authors: Lulu Zhang, Zhi-Qin John Xu, Yaoyu Zhang
- Abstract summary: We propose a data-informed deep optimization (DiDo) approach to solve high-dimensional design problems.
We use a deep neural network (DNN) to learn the feasible region and to sample feasible points for fitting the objective function.
Our results indicate that the DiDo approach empowered by DNN is flexible and promising for solving general high-dimensional design problems in practice.
- Score: 3.331457049134526
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Complex design problems are common in the scientific and industrial fields.
In practice, objective functions or constraints of these problems often do not
have explicit formulas, and can be estimated only at a set of sampling points
through experiments or simulations. Such optimization problems are especially
challenging when design parameters are high-dimensional due to the curse of
dimensionality. In this work, we propose a data-informed deep optimization
(DiDo) approach as follows: first, we use a deep neural network (DNN)
classifier to learn the feasible region; second, we sample feasible points
based on the DNN classifier for fitting the objective function; finally, we
find optimal points of the DNN-surrogate optimization problem by gradient
descent. To demonstrate the effectiveness of our DiDo approach, we consider a
practical design case in industry, in which our approach yields good solutions
using a limited amount of training data. We further use a 100-dimensional toy
example to show the effectiveness of our model for higher-dimensional problems. Our
results indicate that the DiDo approach empowered by DNN is flexible and
promising for solving general high-dimensional design problems in practice.
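To make the three-stage pipeline above concrete, here is a minimal, hypothetical sketch in PyTorch. Everything in it (network sizes, the toy feasibility labels, the stand-in objective, the rejection-sampling threshold, and the infeasibility penalty in the final stage) is an illustrative assumption, not the paper's actual construction.

```python
# Hypothetical sketch of the three DiDo stages described in the abstract.
# All names, hyperparameters, and the toy data are illustrative assumptions.
import torch
import torch.nn as nn

dim = 10  # design-parameter dimension (toy value)

def mlp(out_dim):
    return nn.Sequential(nn.Linear(dim, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, out_dim))

# Stage 1: learn the feasible region with a DNN classifier.
clf = mlp(1)
x_lab = torch.randn(512, dim)                  # sampled design points
y_lab = (x_lab.norm(dim=1) < 3.0).float()      # stand-in feasibility labels
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(
        clf(x_lab).squeeze(1), y_lab)
    loss.backward()
    opt.step()

# Stage 2: rejection-sample points the classifier deems feasible,
# then fit a DNN surrogate of the objective on them.
with torch.no_grad():
    cand = torch.randn(4096, dim)
    feas = cand[torch.sigmoid(clf(cand)).squeeze(1) > 0.5]
f_vals = feas.pow(2).sum(dim=1, keepdim=True)  # stand-in objective samples
srg = mlp(1)
opt = torch.optim.Adam(srg.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(srg(feas), f_vals)
    loss.backward()
    opt.step()

# Stage 3: minimize the surrogate over the design variables by gradient
# descent, penalizing infeasibility as flagged by the classifier.
x = feas[:1].clone().requires_grad_(True)
opt = torch.optim.SGD([x], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    obj = srg(x) + 10.0 * torch.relu(0.5 - torch.sigmoid(clf(x)))
    obj.sum().backward()
    opt.step()
print("candidate optimum:", x.detach())
```

The penalty term in stage 3 is one simple way to keep gradient descent inside the learned feasible region; the paper may handle feasibility differently.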
Related papers
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z) - Diffusion Generative Inverse Design [28.04683283070957]
Inverse design refers to the problem of optimizing the input of an objective function in order to enact a target outcome.
Recent developments in learned graph neural networks (GNNs) can be used for accurate, efficient, differentiable estimation of simulator dynamics.
We show how denoising diffusion models can be used to solve inverse design problems efficiently, and we propose a particle sampling algorithm to further improve their efficiency.
arXiv Detail & Related papers (2023-09-05T08:32:07Z) - Differentiable Projection for Constrained Deep Learning [17.228410662469994]
In some applications, prior knowledge is readily available, such as constraints that the ground-truth observations satisfy.
In this paper, we propose to use a differentiable projection layer in the DNN instead of directly solving time-consuming KKT conditions (a toy sketch of this idea appears after the list below).
The proposed projection method is differentiable, and no heavy computation is required.
arXiv Detail & Related papers (2021-11-21T10:32:43Z) - RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching for an input that maximizes a black-box objective function, given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood
Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - Train Like a (Var)Pro: Efficient Training of Neural Networks with
Variable Projection [2.7561479348365734]
Deep neural networks (DNNs) have achieved state-of-the-art performance across a variety of traditional machine learning tasks.
In this paper, we consider the training of DNNs, a problem that arises in many state-of-the-art applications.
arXiv Detail & Related papers (2020-07-26T16:29:39Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2-5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z) - Optimizing Wireless Systems Using Unsupervised and
Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
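As a toy illustration of the differentiable-projection idea in the "Differentiable Projection for Constrained Deep Learning" entry above, the sketch below assumes linear equality constraints A x = b, whose Euclidean projection has the closed form x - A^T (A A^T)^{-1} (A x - b). This constraint class and all names are assumptions for illustration; the cited paper may target more general constraint sets.

```python
# Hypothetical differentiable projection layer, assuming linear equality
# constraints A x = b (illustrative choice, not the cited paper's setting).
# The Euclidean projection onto {x : A x = b} has a closed form, so
# gradients flow through it without solving KKT conditions.
import torch
import torch.nn as nn

class LinearEqualityProjection(nn.Module):
    def __init__(self, A: torch.Tensor, b: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)
        self.register_buffer("b", b)
        self.register_buffer("AAt_inv", torch.linalg.inv(A @ A.T))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n); compute the constraint residual, then subtract
        # the closed-form correction that lands back on A x = b.
        resid = x @ self.A.T - self.b             # (batch, m)
        return x - resid @ self.AAt_inv @ self.A  # projected output

# Usage: append the layer so network outputs always satisfy A x = b.
A = torch.tensor([[1.0, 1.0, 1.0]])  # toy constraint: x1 + x2 + x3 = 1
b = torch.tensor([1.0])
net = nn.Sequential(nn.Linear(5, 3), LinearEqualityProjection(A, b))
out = net(torch.randn(4, 5))
print(out.sum(dim=1))  # each row sums to ~1
```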
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.