Towards aerodynamic surrogate modeling based on $\beta$-variational autoencoders
- URL: http://arxiv.org/abs/2408.04969v2
- Date: Wed, 16 Oct 2024 14:57:33 GMT
- Title: Towards aerodynamic surrogate modeling based on $\beta$-variational autoencoders
- Authors: Víctor Francés-Belda, Alberto Solera-Rico, Javier Nieto-Centenero, Esther Andrés, Carlos Sanmiguel Vila, Rodrigo Castellanos
- Abstract summary: Surrogate models that combine dimensionality reduction and regression techniques are essential to reduce the need for costly high-fidelity computational fluid dynamics data.
We propose a surrogate model based on latent space regression to predict pressure distributions on a transonic wing given the flight conditions: Mach number and angle of attack.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Surrogate models that combine dimensionality reduction and regression techniques are essential to reduce the need for costly high-fidelity computational fluid dynamics data. New approaches using $\beta$-Variational Autoencoder ($\beta$-VAE) architectures have shown promise in obtaining high-quality low-dimensional representations of high-dimensional flow data while enabling physical interpretation of their latent spaces. We propose a surrogate model based on latent space regression to predict pressure distributions on a transonic wing given the flight conditions: Mach number and angle of attack. The $\beta$-VAE model, enhanced with Principal Component Analysis (PCA), maps high-dimensional data to a low-dimensional latent space, showing a direct correlation with flight conditions. Regularization through $\beta$ requires careful tuning to improve overall performance, while PCA preprocessing helps to construct an effective latent space, improving autoencoder training and performance. Gaussian Process Regression is used to predict latent space variables from flight conditions, showing robust behavior independent of $\beta$, and the decoder reconstructs the high-dimensional pressure field data. This pipeline provides insight into unexplored flight conditions. Furthermore, a fine-tuning process of the decoder further refines the model, reducing the dependence on $\beta$ and enhancing accuracy. Structured latent space, robust regression performance, and significant improvements in fine-tuning collectively create a highly accurate and efficient surrogate model. Our methodology demonstrates the effectiveness of $\beta$-VAEs for aerodynamic surrogate modeling, offering a rapid, cost-effective, and reliable alternative for aerodynamic data prediction.
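For context, the $\beta$-VAE objective referred to in the abstract combines, in its standard formulation (not reproduced from this paper), a reconstruction term with a KL-divergence penalty weighted by $\beta$:

$$\mathcal{L}_{\beta\text{-VAE}}(\theta,\phi;\mathbf{x}) = \mathbb{E}_{q_\phi(\mathbf{z}\mid\mathbf{x})}\!\left[\log p_\theta(\mathbf{x}\mid\mathbf{z})\right] - \beta\, D_{\mathrm{KL}}\!\left(q_\phi(\mathbf{z}\mid\mathbf{x})\,\|\,p(\mathbf{z})\right)$$

Larger $\beta$ enforces a more regularized, and often more interpretable, latent space at the cost of reconstruction accuracy, which is why the abstract notes that $\beta$ requires careful tuning.

The following is a minimal sketch of the pipeline described in the abstract: PCA preprocessing, encoding to a low-dimensional latent space, Gaussian Process Regression from flight conditions (Mach number, angle of attack) to latent variables, and decoding back to a pressure field. All data shapes are assumed for illustration, and the `encode`/`decode` functions are placeholders standing in for a trained $\beta$-VAE; this is not the authors' implementation.

```python
# Minimal sketch of the surrogate pipeline described in the abstract, written
# against assumed data shapes. The encode/decode functions below are random
# linear placeholders standing in for a trained beta-VAE; they are NOT the
# authors' implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical training data:
#   X_cond -- flight conditions (Mach number, angle of attack), shape (n, 2)
#   P      -- surface pressure distributions, shape (n, n_points)
X_cond = rng.uniform([0.60, 0.0], [0.90, 6.0], size=(200, 2))
P = rng.normal(size=(200, 5000))

# 1) PCA preprocessing: project the pressure fields onto their leading modes.
pca = PCA(n_components=50)
P_pca = pca.fit_transform(P)

# 2) beta-VAE stage (placeholder): map PCA coefficients to a low-dimensional
#    latent space z. A random linear map stands in for the trained encoder.
W = rng.normal(size=(50, 5)) / np.sqrt(50)

def encode(a):          # placeholder for the trained encoder mean
    return a @ W

def decode(z):          # placeholder for the trained decoder
    return z @ W.T

Z = encode(P_pca)

# 3) Gaussian Process Regression: flight conditions -> latent variables.
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr.fit(X_cond, Z)

# 4) Prediction at an unseen flight condition: regress the latents, decode,
#    and invert the PCA projection to recover the full pressure field.
x_new = np.array([[0.75, 2.5]])          # Mach, angle of attack (illustrative)
z_new = gpr.predict(x_new)
p_new = pca.inverse_transform(decode(z_new))
print(p_new.shape)                        # (1, 5000)
```

In practice the encoder and decoder would be a trained $\beta$-VAE rather than the linear placeholders above, and the fine-tuning step mentioned in the abstract would continue training the decoder, with the encoder and regressor fixed, on reconstructions obtained through the regressed latents.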
Related papers
- Open-Source High-Speed Flight Surrogate Modeling Framework [0.0]
High-speed flight vehicles, which travel much faster than the speed of sound, are crucial for national defense and space exploration.
Accurately predicting their behavior across numerous, varied flight conditions is challenging and often expensive.
The proposed approach involves creating smarter, more efficient machine learning models.
arXiv Detail & Related papers (2024-11-06T01:34:06Z)
- SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [52.6922833948127]
In this work, we investigate the importance of parameters in pre-trained diffusion models.
We propose a novel model fine-tuning method to make full use of these ineffective parameters.
Our method enhances the generative capabilities of pre-trained models in downstream applications.
arXiv Detail & Related papers (2024-09-10T16:44:47Z)
- Generative Modeling with Phase Stochastic Bridges [49.4474628881673]
Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs.
We introduce a novel generative modeling framework grounded in phase space dynamics.
Our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation.
arXiv Detail & Related papers (2023-10-11T18:38:28Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Machine learning enhanced real-time aerodynamic forces prediction based on sparse pressure sensor inputs [7.112725255953468]
This paper presents a data-driven aerodynamic force prediction model based on a small number of pressure sensors.
The model is tested on numerical and experimental dynamic stall data of a 2D NACA0015 airfoil, and numerical simulation data of dynamic stall of a 3D drone.
arXiv Detail & Related papers (2023-05-16T06:15:13Z)
- $\beta$-Variational autoencoders and transformers for reduced-order modelling of fluid flows [0.3644907558168858]
Variational autoencoder (VAE) architectures have the potential to develop reduced-order models (ROMs) for chaotic fluid flows.
We propose a method for learning compact and near-orthogonal ROMs using a combination of a $\beta$-VAE and a transformer (an illustrative sketch of this kind of latent-dynamics pipeline appears after this list).
arXiv Detail & Related papers (2023-04-07T10:11:32Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- $\textit{FastSVD-ML-ROM}$: A Reduced-Order Modeling Framework based on Machine Learning for Real-Time Applications [0.0]
High-fidelity numerical simulations constitute the backbone of engineering design.
Reduced order models (ROMs) are employed to approximate the high-fidelity solutions.
The present work proposes a new machine learning (ML) platform for the development of ROMs.
arXiv Detail & Related papers (2022-07-24T23:11:07Z)
- Generalised Latent Assimilation in Heterogeneous Reduced Spaces with Machine Learning Surrogate Models [10.410970649045943]
We develop a system which combines reduced-order surrogate models with a novel data assimilation technique.
Generalised Latent Assimilation can benefit both the efficiency provided by the reduced-order modelling and the accuracy of data assimilation.
arXiv Detail & Related papers (2022-04-07T15:13:12Z)
- Online Convolutional Re-parameterization [51.97831675242173]
We present online convolutional re-parameterization (OREPA), a two-stage pipeline that aims to reduce the large training overhead by squeezing the complex training-time block into a single convolution.
Compared with the state-of-the-art re-param models, OREPA is able to save the training-time memory cost by about 70% and accelerate the training speed by around 2x.
We also conduct experiments on object detection and semantic segmentation and show consistent improvements on the downstream tasks.
arXiv Detail & Related papers (2022-04-02T09:50:19Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
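As referenced in the $\beta$-Variational autoencoders and transformers entry above, one common way to combine the two components is to compress flow snapshots into $\beta$-VAE latent vectors and let a transformer predict the temporal evolution of those latents. The sketch below is a minimal, hypothetical illustration of that idea; module names, dimensions, and the learned positional encoding are assumptions, not taken from that paper.

```python
# Hypothetical sketch: a small transformer that predicts the next beta-VAE
# latent state from a window of past latent states. Not the authors' code;
# all names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class LatentDynamicsTransformer(nn.Module):
    def __init__(self, latent_dim=10, d_model=64, nhead=4, num_layers=2, window=32):
        super().__init__()
        self.embed = nn.Linear(latent_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(window, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, latent_dim)

    def forward(self, z_window):
        # z_window: (batch, window, latent_dim) -- past latent states from the beta-VAE encoder
        h = self.embed(z_window) + self.pos      # add positional information
        h = self.encoder(h)                      # self-attention over the time window
        return self.head(h[:, -1])               # predicted next latent state

# Usage sketch with dummy latents in place of encoded flow snapshots:
model = LatentDynamicsTransformer()
z_window = torch.randn(8, 32, 10)                # (batch, window, latent_dim)
z_next = model(z_window)                         # (8, 10)
```

The predicted latents would then be passed through the $\beta$-VAE decoder to reconstruct the corresponding flow field.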
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.