Automatic Parameterization for Aerodynamic Shape Optimization via Deep
Geometric Learning
- URL: http://arxiv.org/abs/2305.02116v1
- Date: Wed, 3 May 2023 13:45:40 GMT
- Title: Automatic Parameterization for Aerodynamic Shape Optimization via Deep
Geometric Learning
- Authors: Zhen Wei, Pascal Fua, and Michaël Bauerheim
- Abstract summary: We propose two deep learning models that fully automate shape parameterization for aerodynamic shape optimization.
Both models parameterize shapes via deep geometric learning, embedding human prior knowledge into learned geometric patterns.
We perform shape optimization experiments on 2D airfoils and discuss the applicable scenarios for the two models.
- Score: 60.69217130006758
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose two deep learning models that fully automate shape
parameterization for aerodynamic shape optimization. Both models parameterize
shapes via deep geometric learning, embedding human prior knowledge into
learned geometric patterns and eliminating the need for further handcrafting. The
Latent Space Model (LSM) learns a low-dimensional latent representation of an
object from a dataset of various geometries, while the Direct Mapping Model
(DMM) builds parameterization on the fly using only one geometry of interest.
We also devise a novel regularization loss that efficiently integrates
volumetric mesh deformation into the parameterization model. The models
directly manipulate the high-dimensional mesh data by moving vertices. LSM and
DMM are fully differentiable, enabling gradient-based, end-to-end pipeline
design and plug-and-play deployment of surrogate models or adjoint solvers. We
perform shape optimization experiments on 2D airfoils and discuss the
applicable scenarios for the two models.
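
To make the mechanism concrete, below is a minimal, hypothetical sketch (not the authors' code) of the pattern the abstract describes: a latent code is decoded into per-vertex displacements of a discretized airfoil, and a differentiable stand-in objective drives gradient-based optimization of the design variables end to end. The decoder, the toy objective, and all names are assumptions for illustration; a real pipeline would plug in the surrogate model or adjoint solver the abstract mentions.

```python
import torch
import torch.nn as nn

# Toy stand-in for the latent-parameterization idea: a latent code is
# decoded into per-vertex (dx, dy) displacements of a discretized airfoil,
# and gradients flow from an aerodynamic objective back to the code.

class LatentShapeDecoder(nn.Module):
    """Hypothetical decoder: latent code -> per-vertex displacements."""
    def __init__(self, latent_dim: int = 8, n_vertices: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_vertices * 2),
        )
        self.n_vertices = n_vertices

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(self.n_vertices, 2)

def surrogate_drag(vertices: torch.Tensor) -> torch.Tensor:
    """Stand-in for a differentiable CFD surrogate or adjoint solver.
    Here: penalize thickness deviation from a target, purely for illustration."""
    thickness = vertices[:, 1].max() - vertices[:, 1].min()
    return (thickness - 0.12) ** 2

# Baseline points (a flattened circle as a placeholder geometry).
theta = torch.linspace(0, 2 * torch.pi, 128)
base = torch.stack([torch.cos(theta), 0.1 * torch.sin(theta)], dim=1)

decoder = LatentShapeDecoder()                    # weights frozen; a stand-in
z = torch.zeros(8, requires_grad=True)            # the design variables
opt = torch.optim.Adam([z], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    shape = base + decoder(z)                     # move vertices directly
    loss = surrogate_drag(shape)
    loss.backward()                               # end-to-end gradients to z
    opt.step()
```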
Related papers
- Optimization-Driven Statistical Models of Anatomies using Radial Basis Function Shape Representation [3.743399165184124]
Particle-based shape modeling is a popular approach to quantify shape variability in populations of anatomies.
We propose an adaptation of this method using a traditional optimization approach that allows more precise control over the desired characteristics of models.
We demonstrate the efficacy of the proposed approach against state-of-the-art methods on two real datasets and justify our choice of losses empirically.
arXiv Detail & Related papers (2024-11-24T15:43:01Z)
- Generative Aerodynamic Design with Diffusion Probabilistic Models [0.7373617024876725]
We show that generative models have the potential to provide new geometries by generalizing over a large dataset of simulations.
In particular, we leverage diffusion probabilistic models trained on XFOIL simulations to synthesize two-dimensional airfoil geometries conditioned on given aerodynamic features and constraints.
We show that the models are able to generate diverse candidate designs for identical requirements and constraints, effectively exploring the design space to provide multiple starting points for optimization procedures.
arXiv Detail & Related papers (2024-09-20T08:38:36Z)
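
The conditioning this entry describes can be sketched with a generic DDPM-style training step: a denoiser receives noisy airfoil coordinates, a timestep, and target aerodynamic features, and learns to predict the injected noise. Everything below (shapes, the flat coordinate encoding, the crude time embedding) is an assumption for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Generic DDPM-style training step for a denoiser conditioned on
# aerodynamic features (e.g., a target lift coefficient).
N_POINTS, COND_DIM, T = 64, 1, 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                 # predicts the noise added to x0
    nn.Linear(N_POINTS * 2 + 1 + COND_DIM, 256), nn.SiLU(),
    nn.Linear(256, N_POINTS * 2),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def training_step(x0: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
    """x0: (B, N_POINTS*2) flattened airfoil coords; cond: (B, COND_DIM)."""
    B = x0.shape[0]
    t = torch.randint(0, T, (B,))
    noise = torch.randn_like(x0)
    a = alphas_bar[t].unsqueeze(1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise        # forward noising
    t_emb = (t.float() / T).unsqueeze(1)                # crude time embedding
    pred = denoiser(torch.cat([x_t, t_emb, cond], dim=1))
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss

# One step on random stand-in data (real training would use XFOIL-labeled airfoils).
loss = training_step(torch.randn(16, N_POINTS * 2), torch.randn(16, COND_DIM))
```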
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
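
One plausible reading of the zero-shot construction is that each "expert" is a low-rank factorization of the difference between fine-tuned and pre-trained weights; the sketch below shows that core step via truncated SVD. This is an illustrative guess at the mechanism, not SMILE's verified code.

```python
import torch

def extract_low_rank_expert(w_base: torch.Tensor,
                            w_finetuned: torch.Tensor,
                            rank: int = 8):
    """Truncated SVD of the task-specific weight delta.

    Returns factors (A, B) such that w_base + A @ B approximates
    w_finetuned, i.e. a low-rank 'expert' usable inside an MoE layer.
    """
    delta = w_finetuned - w_base                    # task-specific update
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    A = U[:, :rank] * S[:rank]                      # (out, rank)
    B = Vh[:rank, :]                                # (rank, in)
    return A, B

# Toy check: a rank-8 expert captures most of a low-rank fine-tuning delta.
w0 = torch.randn(256, 256)
w1 = w0 + torch.randn(256, 32) @ torch.randn(32, 256) * 0.01
A, B = extract_low_rank_expert(w0, w1, rank=8)
err = torch.linalg.norm(w0 + A @ B - w1) / torch.linalg.norm(w1 - w0)
print(f"relative residual of the low-rank expert: {err:.3f}")
```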
- Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully automated, requires no pre-cutting, and can deal with highly complex topologies.
arXiv Detail & Related papers (2024-05-23T14:39:52Z)
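
The connectivity-free recipe the entry describes, mapping surface points to a 2D (UV) domain point by point, can be sketched with two point-wise MLPs trained under a cycle-consistency loss. This illustrates the general idea only; FAM's actual architecture and losses differ.

```python
import torch
import torch.nn as nn

# Point-wise flattening: map 3D surface samples to 2D UV coordinates and
# back, with a cycle-consistency loss. No mesh connectivity is used.
flatten = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))
unflatten = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam([*flatten.parameters(), *unflatten.parameters()], lr=1e-3)

points = torch.randn(2048, 3)                         # stand-in surface samples
points = points / points.norm(dim=1, keepdim=True)    # points on a sphere

for step in range(500):
    opt.zero_grad()
    uv = flatten(points)                  # 3D -> 2D parameterization
    recon = unflatten(uv)                 # 2D -> 3D inverse mapping
    loss = ((recon - points) ** 2).mean() # cycle consistency
    loss.backward()
    opt.step()
```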
- Flexible Isosurface Extraction for Gradient-Based Mesh Optimization [65.76362454554754]
This work considers gradient-based mesh optimization, where we iteratively optimize for a 3D surface mesh by representing it as the isosurface of a scalar field.
We introduce FlexiCubes, an isosurface representation specifically designed for optimizing an unknown mesh with respect to geometric, visual, or even physical objectives.
arXiv Detail & Related papers (2023-08-10T06:40:19Z)
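
A stripped-down version of the underlying setup, optimizing a scalar field so that its zero level set matches a target shape by gradient descent, looks like the following. FlexiCubes' contribution, a flexible differentiable isosurface (mesh) extraction, sits on top of this and is not reproduced here.

```python
import torch

# Optimize a scalar field phi on a grid so that its zero level set
# matches a target signed distance field (here: a circle). A real
# pipeline would extract a mesh differentiably from phi instead.
n = 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, n),
                        torch.linspace(-1, 1, n), indexing="ij")
target_sdf = (xs ** 2 + ys ** 2).sqrt() - 0.5        # circle of radius 0.5

phi = torch.zeros(n, n, requires_grad=True)          # the unknown field
opt = torch.optim.Adam([phi], lr=1e-2)

for step in range(300):
    opt.zero_grad()
    # Match the soft sign/level structure of phi to the target;
    # gradients flow through the field values directly.
    loss = ((torch.tanh(phi * 10) - torch.tanh(target_sdf * 10)) ** 2).mean()
    loss.backward()
    opt.step()
```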
- Generalizable data-driven turbulence closure modeling on unstructured grids with differentiable physics [1.8749305679160366]
We introduce a framework for embedding deep learning models within a generic finite element solver to solve the Navier-Stokes equations.
We validate our method for flow over a backwards-facing step and test its performance on novel geometries.
We show that our GNN-based closure model may be learned in a data-limited scenario by interpreting closure modeling as a solver-constrained optimization.
arXiv Detail & Related papers (2023-07-25T14:27:49Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that latent-space geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers for reducing the model size.
arXiv Detail & Related papers (2023-03-27T02:34:09Z)
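
The "factorize a parameter matrix into two parts" step can be illustrated with a two-core tensor-train (MPO-style) factorization built from a reshape and a truncated SVD; the paper's actual decomposition into central and auxiliary tensors is more elaborate than this sketch.

```python
import torch

def two_core_mpo(W: torch.Tensor, m=(16, 16), n=(16, 16), rank: int = 32):
    """Factor a (m1*m2) x (n1*n2) matrix into two local tensors.

    Reshape W into (m1, m2, n1, n2), group (m1, n1) vs (m2, n2), and
    split with a truncated SVD -- a 2-site MPO / tensor-train layout.
    """
    m1, m2 = m
    n1, n2 = n
    T = W.reshape(m1, m2, n1, n2).permute(0, 2, 1, 3)   # (m1, n1, m2, n2)
    M = T.reshape(m1 * n1, m2 * n2)
    U, S, Vh = torch.linalg.svd(M, full_matrices=False)
    r = min(rank, S.numel())
    core1 = (U[:, :r] * S[:r].sqrt()).reshape(m1, n1, r)              # part 1
    core2 = (S[:r].sqrt().unsqueeze(1) * Vh[:r]).reshape(r, m2, n2)   # part 2
    return core1, core2

W = torch.randn(256, 256)
c1, c2 = two_core_mpo(W)
# Reconstruct and measure the truncation error of the factorization.
M_hat = c1.reshape(-1, c1.shape[-1]) @ c2.reshape(c2.shape[0], -1)
T_hat = M_hat.reshape(16, 16, 16, 16).permute(0, 2, 1, 3).reshape(256, 256)
print((T_hat - W).norm() / W.norm())
```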
- On the Influence of Enforcing Model Identifiability on Learning dynamics of Gaussian Mixture Models [14.759688428864159]
We propose a technique for extracting submodels from singular models.
Our method enforces model identifiability during training.
We show how the method can be applied to more complex models like deep neural networks.
arXiv Detail & Related papers (2022-06-17T07:50:22Z)
- Combining data assimilation and machine learning to infer unresolved scale parametrisation [0.0]
In recent years, machine learning has been proposed to devise data-driven parametrisations of unresolved processes in dynamical numerical models.
Our goal is to go beyond the use of high-resolution simulations and train an ML-based parametrisation using direct data.
We show that in both cases the hybrid model yields forecasts with better skill than the truncated model.
arXiv Detail & Related papers (2020-09-09T14:12:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.