Deep neural operators can serve as accurate surrogates for shape
optimization: A case study for airfoils
- URL: http://arxiv.org/abs/2302.00807v1
- Date: Thu, 2 Feb 2023 00:19:09 GMT
- Title: Deep neural operators can serve as accurate surrogates for shape
optimization: A case study for airfoils
- Authors: Khemraj Shukla, Vivek Oommen, Ahmad Peyvan, Michael Penwarden, Luis
Bravo, Anindya Ghoshal, Robert M. Kirby and George Em Karniadakis
- Abstract summary: We investigate the use of DeepONets to infer flow fields around unseen airfoils with the aim of shape optimization.
We present results which display little to no degradation in prediction accuracy, while reducing the online optimization cost by orders of magnitude.
- Score: 3.2996060586026354
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural operators, such as DeepONets, have changed the paradigm in
high-dimensional nonlinear regression from function regression to
(differential) operator regression, paving the way for significant changes in
computational engineering applications. Here, we investigate the use of
DeepONets to infer flow fields around unseen airfoils with the aim of shape
optimization, an important design problem in aerodynamics that typically taxes
computational resources heavily. We present results which display little to no
degradation in prediction accuracy, while reducing the online optimization cost
by orders of magnitude. We consider NACA airfoils as a test case for our
proposed approach, as their shape can be easily defined by the four-digit
parametrization. We successfully optimize the constrained NACA four-digit
problem with respect to maximizing the lift-to-drag ratio and validate all
results by comparing them to a high-order CFD solver. We find that DeepONets
have low generalization error, making them ideal for generating solutions of
unseen shapes. Specifically, pressure, density, and velocity fields are
accurately inferred at a fraction of a second, hence enabling the use of
general objective functions beyond the maximization of the lift-to-drag ratio
considered in the current work.
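As a concrete illustration of the workflow described above, the minimal Python sketch below parametrizes an airfoil with the standard NACA four-digit formulas and runs a bound-constrained optimizer against a surrogate. Only the geometry formulas are standard; `surrogate_lift_to_drag` is a hypothetical stand-in (a toy analytic expression) for the trained DeepONet, and the optimizer settings are illustrative only.
```python
# Minimal sketch of surrogate-based shape optimization over the NACA
# four-digit parametrization. Only the geometry formulas are standard;
# `surrogate_lift_to_drag` is a hypothetical stand-in for a trained DeepONet.
import numpy as np
from scipy.optimize import minimize

def naca4_coordinates(m, p, t, n=101):
    """Upper/lower surfaces for max camber m, camber location p, thickness t."""
    x = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, n)))      # cosine spacing
    yt = 5.0 * t * (0.2969*np.sqrt(x) - 0.1260*x - 0.3516*x**2
                    + 0.2843*x**3 - 0.1015*x**4)              # thickness law
    if m > 0:
        yc = np.where(x < p, m/p**2 * (2*p*x - x**2),
                      m/(1-p)**2 * ((1 - 2*p) + 2*p*x - x**2))
        dyc = np.where(x < p, 2*m/p**2 * (p - x), 2*m/(1-p)**2 * (p - x))
    else:
        yc = dyc = np.zeros_like(x)
    th = np.arctan(dyc)
    return (x - yt*np.sin(th), yc + yt*np.cos(th)), (x + yt*np.sin(th), yc - yt*np.cos(th))

def surrogate_lift_to_drag(params):
    """Hypothetical placeholder for the DeepONet's fraction-of-a-second field
    inference followed by an L/D evaluation; a toy analytic expression here."""
    m, p, t = params
    return 40.0*m/(0.01 + t) - 5.0*(p - 0.4)**2

bounds = [(0.0, 0.09), (0.1, 0.9), (0.06, 0.24)]              # four-digit ranges
res = minimize(lambda q: -surrogate_lift_to_drag(q),          # maximize L/D
               x0=[0.02, 0.4, 0.12], bounds=bounds, method="L-BFGS-B")
(xu, yu), (xl, yl) = naca4_coordinates(*res.x)                # optimized shape
print("optimal (m, p, t):", res.x, "surrogate L/D:", -res.fun)
```
In the actual pipeline the surrogate returns full pressure, density, and velocity fields, so the objective could be any functional of those fields, which is the generality the abstract points to.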
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
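For readers unfamiliar with the zeroth-order setting, the following sketch shows the classic two-point gradient estimator that gradient-free SGD methods of this kind build on; the paper's actual smoothing parameter, acceleration, and step-size schedule are not reproduced here.
```python
# Two-point zeroth-order gradient estimate: a generic building block for
# gradient-free SGD (not the paper's exact accelerated algorithm).
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, mu=1e-4):
    """Estimate grad f(x) from two evaluations along a random unit direction."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return x.size * (f(x + mu*u) - f(x - mu*u)) / (2.0*mu) * u

f = lambda x: 0.5 * np.sum(x**2)        # toy smooth convex objective
x = np.ones(10)
for _ in range(500):
    x -= 0.05 * zo_gradient(f, x)       # ZO-SGD step
print("f(x) after ZO-SGD:", f(x))       # approaches the minimum at 0
```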
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Machine-learning-based multipoint optimization of fluidic injection parameters for improving nozzle performance [2.5864426808687893]
This paper uses a pretrained neural network model to replace computational fluid dynamics (CFD) simulations.
Considering the physical characteristics of the nozzle flow field, a prior-based prediction strategy is adopted to enhance the model's transferability.
An improvement of 1.14% in the thrust coefficient is achieved, and the time cost is greatly reduced compared with traditional optimization methods.
arXiv Detail & Related papers (2024-09-19T12:32:54Z) - BO4IO: A Bayesian optimization approach to inverse optimization with uncertainty quantification [5.031974232392534]
This work addresses data-driven inverse optimization (IO), where the goal is to estimate unknown parameters in an optimization model from observed decisions that can be assumed to be optimal or near-optimal.
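A toy version of that inverse-optimization setup is sketched below, with a coarse grid search standing in for the Bayesian optimization outer loop; the forward problem and noise model are invented for illustration.
```python
# Toy data-driven inverse optimization: recover an unknown objective
# parameter theta from decisions assumed near-optimal. Grid search stands in
# for the Bayesian optimization loop; all problem details are placeholders.
import numpy as np

def forward_decision(theta):
    # Forward problem: argmin_x (x - theta)^2 has the closed form x* = theta.
    return theta

rng = np.random.default_rng(1)
theta_true = 2.5
x_obs = forward_decision(theta_true) + 0.05 * rng.standard_normal(20)

def io_loss(theta):
    return np.mean((forward_decision(theta) - x_obs)**2)   # decision mismatch

grid = np.linspace(0.0, 5.0, 501)
theta_hat = grid[np.argmin([io_loss(t) for t in grid])]
print("estimated theta:", theta_hat)                        # close to 2.5
```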
arXiv Detail & Related papers (2024-05-28T06:52:17Z) - Gradient-free neural topology optimization [0.0]
Gradient-free algorithms require many more iterations to converge than gradient-based algorithms.
This has made them unviable for topology optimization due to the high computational cost per iteration and high dimensionality of these problems.
We propose a pre-trained neural reparameterization strategy that leads to at least one order of magnitude decrease in iteration count when optimizing the designs in latent space.
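The core reparameterization idea can be sketched in a few lines: search a low-dimensional latent space with a gradient-free method and let a decoder expand each candidate to the full design. The frozen random decoder and quadratic objective below are placeholders, not the paper's pretrained model or physics objective.
```python
# Neural reparameterization sketch: gradient-free search in an 8-dim latent
# space instead of the 4096-dim design space. Decoder and objective are
# placeholders (frozen random weights, toy compliance).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 8)) / np.sqrt(8)   # frozen "decoder": 8 -> 4096

def decode(z):
    return np.tanh(W @ z)                         # full-resolution design field

def compliance(design):
    return np.sum((design - 0.5)**2)              # toy objective to minimize

z, best, sigma = np.zeros(8), np.inf, 0.5
for _ in range(200):                              # simple (1+1) evolution strategy
    cand = z + sigma * rng.standard_normal(8)
    val = compliance(decode(cand))
    if val < best:
        z, best = cand, val                       # keep improving candidates
print("best toy compliance:", best)
```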
arXiv Detail & Related papers (2024-03-07T23:00:49Z) - Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
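The fixed-point formulation is easy to state in code: rather than unrolling a fixed number of recurrent updates, iterate z = f(z, x) to (approximate) convergence. The contraction below is a toy layer, not an optical-flow model, and DEQ training additionally differentiates through the fixed point via the implicit function theorem, which this sketch omits.
```python
# Forward pass of a deep equilibrium layer: solve z* = f(z*, x) by iteration.
# f is a toy contraction, not an optical-flow update operator.
import numpy as np

rng = np.random.default_rng(0)
A = 0.4 * rng.standard_normal((32, 32)) / np.sqrt(32)   # small spectral norm
b = rng.standard_normal(32)

def f(z, x):
    return np.tanh(A @ z + x + b)

x = rng.standard_normal(32)
z = np.zeros(32)
for i in range(100):
    z_next = f(z, x)
    if np.linalg.norm(z_next - z) < 1e-8:               # reached the fixed point
        break
    z = z_next
print("fixed point found after", i + 1, "iterations")
```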
arXiv Detail & Related papers (2022-04-18T17:53:44Z) - DEBOSH: Deep Bayesian Shape Optimization [48.80431740983095]
We propose a novel uncertainty-based method tailored to shape optimization.
It enables effective Bayesian optimization (BO) and increases the quality of the resulting shapes beyond that of state-of-the-art approaches.
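One way to picture an uncertainty-driven loop of this kind: let the disagreement of an ensemble of cheap regressors act as the uncertainty that selects the next shape to evaluate. Everything below (scalar "shapes", the hidden quality function, random-feature regressors) is invented for illustration and is not the DEBOSH model.
```python
# Toy uncertainty-guided search: ensemble disagreement steers which "shape"
# (a scalar here) to evaluate next. All components are illustrative.
import numpy as np

f = lambda s: np.sin(3*s) + 0.5*s                 # hidden shape-quality function
S = np.random.default_rng(0).uniform(-2, 2, 6)    # shapes evaluated so far
Y = f(S)
cand = np.linspace(-2, 2, 401)

for _ in range(15):
    preds = []
    for k in range(8):                            # random-feature ensemble
        r = np.random.default_rng(k)
        W = r.standard_normal(16); b = r.uniform(0, 2*np.pi, 16)
        Phi = np.cos(S[:, None]*W + b)
        w = np.linalg.lstsq(Phi, Y, rcond=None)[0]
        preds.append(np.cos(cand[:, None]*W + b) @ w)
    preds = np.array(preds)
    lcb = preds.mean(0) - preds.std(0)            # optimistic lower bound
    s_next = cand[np.argmin(lcb)]                 # query most promising shape
    S, Y = np.append(S, s_next), np.append(Y, f(s_next))
print("best shape value found:", Y.min())
```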
arXiv Detail & Related papers (2021-09-28T11:01:42Z) - Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z) - Physics-aware deep neural networks for surrogate modeling of turbulent
natural convection [0.0]
We investigate the use of PINN surrogate modeling for turbulent Rayleigh-Bénard convection flows.
We show how the physics-aware term comes into play as a regularization close to the training boundaries, which are zones of poor accuracy for standard PINNs.
The predictive accuracy of the surrogate over the entire half-billion DNS coordinates yields errors for all flow variables ranging between 0.3% and 4% in the relative L2 norm.
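The regularization mechanism the summary mentions is the PDE-residual term in the PINN loss, evaluated by automatic differentiation. The sketch below shows that ingredient on a 1-D toy problem u''(x) = -sin(x) with u(0) = u(pi) = 0 (exact solution sin(x)), not on the Rayleigh-Bénard system.
```python
# PINN loss skeleton: boundary loss plus a PDE-residual penalty computed by
# autograd. Toy 1-D Poisson problem, not turbulent convection.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = (torch.pi * torch.rand(64, 1)).requires_grad_(True)   # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.sin(x)                              # u'' = -sin(x)
    bc = net(torch.tensor([[0.0], [torch.pi]]))                # u(0) = u(pi) = 0
    loss = residual.pow(2).mean() + bc.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# The residual term acts as the regularizer described above: it constrains
# the network where boundary/observation data are scarce.
```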
arXiv Detail & Related papers (2021-03-05T09:48:57Z) - Enhanced data efficiency using deep neural networks and Gaussian
processes for aerodynamic design optimization [0.0]
Adjoint-based optimization methods are attractive for aerodynamic shape design, but they can become prohibitively expensive when multiple optimization problems are being solved.
We propose a machine learning enabled, surrogate-based framework that replaces the expensive adjoint solver.
arXiv Detail & Related papers (2020-08-15T15:09:21Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a deep neural network (DNN) with finite element method (FEM) calculations.
Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
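The online loop the summary describes alternates between fitting a cheap surrogate on the designs evaluated so far and querying the expensive solver at the surrogate's current minimizer. A compact sketch follows, with ridge regression as the surrogate and a placeholder `fem_evaluate` in place of a real FEM call.
```python
# Self-directed online learning loop (schematic): fit surrogate -> query the
# expensive solver at its minimizer -> refit. `fem_evaluate` and the ridge
# surrogate are placeholders for the paper's FEM and DNN components.
import numpy as np

rng = np.random.default_rng(0)

def fem_evaluate(x):                        # stand-in for an FEM objective solve
    return np.sum((x - 0.3)**2) + 0.01 * rng.standard_normal()

def features(X):                            # quadratic features for the surrogate
    return np.hstack([np.ones((len(X), 1)), X, X**2])

X = rng.uniform(-1, 1, size=(5, 3))         # a few initial designs
y = np.array([fem_evaluate(x) for x in X])

for _ in range(20):
    Phi = features(X)
    w = np.linalg.solve(Phi.T @ Phi + 1e-6*np.eye(Phi.shape[1]), Phi.T @ y)
    cand = rng.uniform(-1, 1, size=(256, 3))             # cheap surrogate search
    x_next = cand[np.argmin(features(cand) @ w)]
    X = np.vstack([X, x_next])                           # self-directed query
    y = np.append(y, fem_evaluate(x_next))
print("best design found:", X[np.argmin(y)])
```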
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.