Enhanced data efficiency using deep neural networks and Gaussian
processes for aerodynamic design optimization
- URL: http://arxiv.org/abs/2008.06731v1
- Date: Sat, 15 Aug 2020 15:09:21 GMT
- Title: Enhanced data efficiency using deep neural networks and Gaussian
processes for aerodynamic design optimization
- Authors: S. Ashwin Renganathan, Romit Maulik, and Jai Ahuja
- Abstract summary: Adjoint-based optimization methods are attractive for aerodynamic shape design.
They can become prohibitively expensive when multiple optimization problems are being solved.
We propose a machine learning enabled, surrogate-based framework that replaces the expensive adjoint solver.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adjoint-based optimization methods are attractive for aerodynamic shape
design primarily due to their computational costs being independent of the
dimensionality of the input space and their ability to generate high-fidelity
gradients that can then be used in a gradient-based optimizer. This makes them
very well suited for high-fidelity simulation-based aerodynamic shape
optimization of highly parametrized geometries such as aircraft wings. However,
the development of adjoint-based solvers involves careful mathematical treatment,
and their implementation requires detailed software development. Furthermore,
they can become prohibitively expensive when multiple optimization problems are
being solved, each requiring multiple restarts to circumvent local optima. In
this work, we propose a machine learning enabled, surrogate-based framework
that replaces the expensive adjoint solver, without compromising predictive
accuracy. Specifically, we first train a deep neural network (DNN)
on training data generated by evaluating the high-fidelity simulation model
on a model-agnostic design of experiments over the geometry shape parameters.
The optimum shape may then be computed by using a gradient-based optimizer
coupled with the trained DNN. Subsequently, we also perform a gradient-free
Bayesian optimization, where the trained DNN is used as the prior mean. We
observe that the latter framework (DNN-BO) improves upon the DNN-only based
optimization strategy for the same computational cost. Overall, this framework
predicts the true optimum with very high accuracy, while requiring far fewer
high-fidelity function calls compared to the adjoint-based method. Furthermore,
we show that multiple optimization problems can be solved with the same machine
learning model with high accuracy, to amortize the offline costs associated
with constructing our models.
Related papers
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z) - Sample-Efficient and Surrogate-Based Design Optimization of Underwater Vehicle Hulls [0.4543820534430522]
We show that the BO-LCB algorithm is the most sample-efficient optimization framework and has the best convergence behavior of those considered.
We also show that our DNN-based surrogate model predicts drag force on test data in tight agreement with CFD simulations, with a mean absolute percentage error (MAPE) of 1.85%.
We demonstrate a two-orders-of-magnitude speedup for the design optimization process when the surrogate model is used.
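For reference, the MAPE metric cited in the entry above is a simple relative-error average; the drag values below are made-up numbers, not data from the paper.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

cfd = np.array([10.0, 20.0, 40.0])        # e.g. CFD drag values (hypothetical)
surrogate = np.array([10.2, 19.5, 40.8])  # surrogate predictions (hypothetical)
print(mape(cfd, surrogate))
```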
arXiv Detail & Related papers (2023-04-24T19:52:42Z) - Physics Informed Piecewise Linear Neural Networks for Process
Optimization [0.0]
It is proposed to augment piecewise-linear neural network models with physics-informed knowledge for optimization problems that have embedded neural network models.
For all cases, physics-informed trained neural network based optimal results are closer to global optimality.
arXiv Detail & Related papers (2023-02-02T10:14:54Z) - Deep neural operators can serve as accurate surrogates for shape
optimization: A case study for airfoils [3.2996060586026354]
We investigate the use of DeepONets to infer flow fields around unseen airfoils with the aim of shape optimization.
We present results which display little to no degradation in prediction accuracy, while reducing the online optimization cost by orders of magnitude.
arXiv Detail & Related papers (2023-02-02T00:19:09Z) - Data-driven evolutionary algorithm for oil reservoir well-placement and
control optimization [3.012067935276772]
Generalized data-driven evolutionary algorithm (GDDE) is proposed to reduce the number of simulation runs on well-placement and control optimization problems.
Probabilistic neural network (PNN) is adopted as the classifier to select informative and promising candidates.
arXiv Detail & Related papers (2022-06-07T09:07:49Z) - Deep convolutional neural network for shape optimization using level-set
approach [0.0]
This article presents a reduced-order modeling methodology for shape optimization applications via deep convolutional neural networks (CNNs).
A CNN-based reduced-order model (ROM) is constructed in a completely data-driven manner, and suited for non-intrusive applications.
arXiv Detail & Related papers (2022-01-17T04:41:51Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
A deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
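The fixed-point idea in the entry above can be illustrated in a few lines; this is a toy contraction map, not the paper's model or training procedure.

```python
import numpy as np

# Minimal deep-equilibrium-style layer: the "output" z* solves
# z = tanh(W z + U x + b), found by plain fixed-point iteration.
rng = np.random.default_rng(1)
n = 8
W = rng.normal(0, 0.1, (n, n))   # small weights keep the map a contraction
U = rng.normal(0, 0.5, (n, n))
b = np.zeros(n)
x = rng.normal(size=n)           # layer input

z = np.zeros(n)
for _ in range(200):
    z_next = np.tanh(W @ z + U @ x + b)
    if np.linalg.norm(z_next - z) < 1e-10:
        break                    # converged to the equilibrium
    z = z_next

# z now (approximately) satisfies the equilibrium equation
residual = np.linalg.norm(z - np.tanh(W @ z + U @ x + b))
print(residual)
```

In practice such models solve the fixed point with faster root-finders and differentiate through it implicitly, which is what enables the joint inference-and-input-optimization setting the paper studies.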
arXiv Detail & Related papers (2021-11-25T19:59:33Z) - DEBOSH: Deep Bayesian Shape Optimization [48.80431740983095]
We propose a novel uncertainty-based method tailored to shape optimization.
It enables effective BO and increases the quality of the resulting shapes beyond that of state-of-the-art approaches.
arXiv Detail & Related papers (2021-09-28T11:01:42Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model-training pipeline yields predictions of the unobserved parameters that lead to better decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - A Primer on Zeroth-Order Optimization in Signal Processing and Machine
Learning [95.85269649177336]
ZO optimization iteratively performs three major steps: gradient estimation, descent direction, and solution update.
We demonstrate promising applications of ZO optimization, such as evaluating and generating explanations from black-box deep learning models, and efficient online sensor management.
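The three ZO steps named in the entry above (gradient estimation, descent direction, solution update) can be sketched on a toy black-box objective; the two-point random-direction estimator and step sizes here are one common choice, not necessarily the primer's exact variant.

```python
import numpy as np

# Black box: only function values are available, no gradients.
def f(x):
    return np.sum((x - 1.0) ** 2)

rng = np.random.default_rng(0)
x = np.zeros(4)
mu, lr = 1e-3, 0.1
for _ in range(300):
    # 1) gradient estimation: two-point random-direction finite difference
    u = rng.normal(size=x.shape)
    g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    # 2) descent direction: negative of the estimated gradient
    d = -g
    # 3) solution update
    x = x + lr * d
print(x)  # should drift toward the minimizer at all-ones
```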
arXiv Detail & Related papers (2020-06-11T06:50:35Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using conventional methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.